\section{Introduction} \noindent The purpose of this paper is to generalize the main results of \cite{St} on the stability of travelling waves in the Nagumo equation with multiplicative noise to general bistable reaction-diffusion equations with noise. To this end let us first consider the deterministic reaction-diffusion equation \begin{equation} \label{RDE} \partial_t v (t,x) = \nu v_{xx}(t,x) + b f(v(t,x)) \, , \quad v(0,x) = v_0 (x) \end{equation} for $(t,x)\in\mathbb{R}_+\times\mathbb{R}$. Here, $f: \mathbb{R}\to\mathbb{R}$ is a continuously differentiable function satisfying \begin{equation} \tag{\bf A1} \begin{aligned} & f (0) = f(a) = f (1) = 0 \quad \text{ for some } a \in (0,1)\, , \\ & f(x) < 0 \quad \text{ for } x \in (0,a) \, , \quad f(x) > 0 \quad \text{ for } x \in (a,1)\, , \\ & f^\prime (0) < 0\, , \quad f^\prime (a) > 0\, , \quad f^\prime (1) < 0\, . \end{aligned} \end{equation} Theorem 12 in \cite{HR} implies for $\nu , b > 0$ the existence of a travelling wave connecting the stable fixed points $0$ and $1$ of the reaction term, i.e., a monotone increasing $C^2$-function $\hat{v}$ satisfying $$ c \hat{v}_x = \nu \hat{v}_{xx} + bf (\hat{v}) $$ for some wave speed $c\in\mathbb{R}$ and boundary conditions $\hat{v} (- \infty) = 0$, $\hat{v} (+ \infty) = 1$. It follows that $\hat{v}(t) := \hat{v} (\cdot + ct)$ and all its spatial translates $\hat{v} (\cdot + x_0 + ct)$ are solutions of \eqref{RDE}. A particular example is the Nagumo equation with $f(v) = v (1-v)(v -a)$, where the travelling wave is explicitly given by $\hat{v}(x) = \left( 1 + e^{- \sqrt{\frac{b}{2\nu}} x}\right)^{-1}$. \medskip \noindent It is known that the wave speed $c$ and the integral $\int_0^1 f(v)\, dv$ have the same sign, and that in particular $c=0$ if and only if $\int_0^1 f(v)\, dv = 0$. To simplify the presentation of our results we will therefore assume from now on that \begin{equation} \tag{\bf A2} \int_0^1 f(v)\, dv \ge 0 \end{equation} hence that the wave speed $c$ is nonnegative.
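For the Nagumo nonlinearity the explicit profile can be checked directly. The following numerical sketch (the parameter values $\nu , b, a$ are illustrative assumptions, and the wave speed formula $c = \sqrt{2\nu b}\,(\frac 12 - a)$ is the one derived in Example \ref{ExampleNagumo} below) verifies that the profile solves the travelling wave equation $c\hat{v}_x = \nu\hat{v}_{xx} + b f(\hat{v})$:

```python
import math

# Illustrative parameters (assumption, chosen only for this check)
nu, b, a = 0.7, 1.3, 0.3
k = math.sqrt(b / (2 * nu))                  # decay rate of the logistic profile
c = math.sqrt(2 * nu * b) * (0.5 - a)        # wave speed for the Nagumo cubic

f = lambda v: v * (1 - v) * (v - a)
vhat = lambda x: 1.0 / (1.0 + math.exp(-k * x))

def residual(x):
    # c vhat_x - nu vhat_xx - b f(vhat), using the exact derivatives of the profile
    v = vhat(x)
    vx = k * v * (1 - v)
    vxx = k * vx * (1 - 2 * v)
    return c * vx - nu * vxx - b * f(v)

max_res = max(abs(residual(-10 + 0.5 * i)) for i in range(41))
print(max_res < 1e-12)
```

Since the derivatives of the logistic profile are available in closed form, the residual vanishes up to floating-point rounding.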
\medskip \noindent So far the assumptions on the reaction term $f$ are classical. The existing results in the literature on the stability of the travelling wave can be divided into results based on maximum principle and comparison techniques, see in particular \cite{FMcL} for a stability result w.r.t. initial conditions $v_0$ satisfying $0\le v_0\le 1$, $\liminf_{x\to-\infty} v_0 (x) < a$ and $\limsup_{x\to\infty} v_0 (x) > a$, and results w.r.t. $L^2$- or $H^{1,2}$-norms, based on spectral information on the linearization of \eqref{RDE} along the travelling wave $\hat{v}$ (see, e.g., \cite{Henry, OR}). Whereas the first approach is not appropriate for stochastic perturbations, unless the noise terms satisfy unnatural monotonicity conditions, the second approach can in principle be generalized to the stochastic case. However, in order to do this, the existing spectral information on the linearization of \eqref{RDE} has to be considerably refined: abstract perturbation results on the spectral gap below the eigenvalue corresponding to the travelling wave cannot easily be generalized to the stochastic case. We will therefore use functional inequalities to derive Lyapunov stability of the travelling wave in the space $L^2 (\mathbb{R})$. To be more precise, we will show in Theorem \ref{th1}, under the following additional assumption on the reaction term \begin{equation} \tag{\bf A3} \begin{aligned} & \exists v_\ast\in (a, 1) \mbox{ such that } f'' (v) > 0 \, (\mbox{resp. } < 0) \\ & \mbox{ on } [0 , v_\ast) \ (\mbox{resp. } (v_\ast , 1]) \end{aligned} \end{equation} saying that $f$ is strictly convex on $[0, v_\ast )$ and strictly concave on $(v_\ast , 1]$, that the $L^2$-norm is a Lyapunov function when restricted to the orthogonal complement of $\hat{v}_x$. As a consequence of this phase-space stability, the stochastic case will become much easier to investigate.
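For orientation, assumption {\bf (A3)} can be verified explicitly in the Nagumo case $f(v) = v(1-v)(v-a)$:

```latex
f(v) = -v^3 + (1+a)v^2 - av \, , \qquad f''(v) = -6v + 2(1+a)\, ,
```

so that $f'' > 0$ on $[0, v_\ast)$ and $f'' < 0$ on $(v_\ast , 1]$ with inflection point $v_\ast = \frac{1+a}{3}$, which lies in $(a,1)$ whenever $a < \frac 12$, consistent with {\bf (A2)}.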
Our assumptions are satisfied in the case of the Nagumo equation (for all $a\in (0,1)$) and do not require any estimates on the unknown wave speed $c$. \medskip \noindent Our interest in the above reaction-diffusion equation is motivated by the fact that \eqref{RDE} can be seen as a singular limit $\varepsilon\downarrow 0$ of FitzHugh--Nagumo systems $$ \begin{aligned} \partial_t v (t,x) & = \nu v_{xx}(t,x) + b f(v(t,x)) - w(t,x) + I \\ \partial_t w (t,x) & = \varepsilon ( v (t,x) -\gamma w(t,x)) \qquad (t,x)\in\mathbb{R}_+\times\mathbb{R} \end{aligned} $$ when the adaptation variable $w$ is held constant at the value of the input current $I$ (see the monograph \cite{ET}). The FitzHugh--Nagumo system, a mathematical idealization of the Hodgkin--Huxley model, admits, under appropriate assumptions on the coefficients, pulse solutions that serve as a mathematical model for the action potential travelling along the nerve axon. When noise, e.g. channel noise, is added to this system, the resulting dynamical system exhibits many interesting features such as propagation failure of the pulse solution, backpropagation, annihilation and spontaneous pulse solutions. Recent computational studies can be found in \cite{T2010, T2011}. \medskip \noindent We are therefore interested in a rigorous mathematical analysis of stochastic reaction-diffusion systems with bistable reaction terms. With a view towards the above-mentioned features of the noisy system, we are in particular interested in establishing a multiscale analysis of the whole dynamics, which requires, as a first step, a robust stability result for the travelling pulse solution. As already mentioned for the scalar-valued case, the existing stability results (e.g. \cite{Ev, Jones} for systems) cannot be carried over to the stochastic case. In order to reduce the mathematical difficulty of the problem, we therefore consider the scalar-valued case in the present paper as a starting point.
\medskip \noindent Before we proceed let us first draw a couple of conclusions on the travelling wave resulting from our assumptions. \begin{lemma} \label{lem1_1} Assume that {\bf (A1)} and {\bf (A2)} hold. Then: \begin{itemize} \item[(i)] $\hat{v}^2_x (x) \le \frac{2b}{\nu} \int^1_{\hat{v}(x)} f (v)\, dv$ for all $x$. In particular, $$ \lim_{x\to + \infty} e^{- \alpha \frac{c}{\nu}x} \hat{v}^2_x = 0 \qquad\mbox{ for } \alpha \ge 0\, . $$ \item[(ii)] $e^{- 2 \frac{c}{\nu}x} \hat{v}^2_x$ is increasing (resp. decreasing) for $x\le\hat{v}^{-1} (a)$ (resp. $x\ge \hat{v}^{-1} (a)$). In particular, $$ \lim_{x\to\pm\infty} e^{- \alpha\frac{c}{\nu} x } \hat{v}_x^2 = 0 \qquad\mbox{ for } \alpha \in\, [0, 2[ \, . $$ \end{itemize} \end{lemma} \medskip \noindent The proof of Lemma \ref{lem1_1} is given in Section \ref{Section0} below. The next Proposition summarizes the main conclusions implied by the additional assumption {\bf (A3)}. \begin{proposition} \label{prop0} Assume that {\bf (A1)} - {\bf (A3)} hold. Then: \begin{itemize} \item[(i)] $\frac{f(\hat{v})}{\hat{v}_x}$ is strictly monotone increasing. In particular, $$ -\frac{d^2}{dx^2} \log{\hat{v}_x} = -\frac{d}{dx} \frac{\hat{v}_{xx}}{\hat{v}_x} = \frac b\nu \frac{d}{dx}\frac{f(\hat{v})}{\hat{v}_x} > 0 \, , $$ i.e., $\hat{v}_x$ is strictly log-concave (but not uniformly). \item[(ii)] $$ \begin{aligned} \gamma_- & := \inf \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} = \frac{c}{2\nu} - \sqrt{\left(\frac{c}{2\nu}\right)^2 - \frac b\nu f^\prime(0)} \\ \gamma_+ & := \sup \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} = \frac{c}{2\nu} + \sqrt{\left(\frac{c}{2\nu}\right)^2 - \frac b\nu f^\prime(1)}\, . 
\end{aligned} $$ \item[(iii)] $$ \begin{aligned} \int_{-\infty}^0 e^{-2\alpha\frac c\nu x}\left( \hat{v}_x^2 + \hat{v}_{xx}^2\right)\, dx < \infty \qquad\mbox{ for all } \alpha \frac c\nu < \frac c\nu - \gamma_- \\ \int_0^\infty e^{-2\alpha\frac c\nu x}\left( \hat{v}_x^2 + \hat{v}_{xx}^2\right)\, dx < \infty \qquad\mbox{ for all } \alpha \frac c\nu > \frac c\nu - \gamma_+ \\ \end{aligned} $$ In particular, $$ \int e^{-\frac c\nu x}\left( \hat{v}_x^2 + \hat{v}_{xx}^2\right)\, dx < \infty\, . $$ \end{itemize} \end{proposition} \medskip \noindent The proof of Proposition \ref{prop0} is given in Section \ref{Section0} below. \bigskip \noindent The next theorem contains the essential functional inequality that is implied by {\bf (A3)}. \begin{theorem} \label{th0} Assume that {\bf (A1)} - {\bf (A3)} hold. Then there exists some $\kappa > 0$ such that \begin{equation} \label{FunctIneq} - \frac{d^2}{dx^2} \log{\hat{v}_x} + \left( \frac d{dx} \log\hat{v}_x \right)^2 - \frac c\nu \frac d{dx}\log \hat{v}_x \ge \kappa\, . \end{equation} \end{theorem} \medskip \noindent The proof of Theorem \ref{th0} is given in Section \ref{Section0} below. We will assume from now on for all subsequent results that {\bf (A1)} - {\bf (A3)} hold. \begin{example} \label{ExampleNagumo} In the particular case of the Nagumo equation, i.e., $f(v) = v(1-v)(v-a)$ for $a\in (0,1)$, the travelling wave is explicitely given as $\hat{v}(x) = (1+ e^{-kx})^{-1}$ (resp. its spatial translates) with $k = \sqrt{\frac{b}{2\nu}}$. The corresponding wave speed $c$ can be calculated as $c = \sqrt{2\nu b} \left( \frac 12 - a\right)$. The logarithmic derivative $\rho := \frac d{dx} \log\hat{v}_x = \frac{\hat{v}_{xx}}{\hat{v}_x}$ is given as $\rho = \frac c\nu - \frac b\nu \frac{f}{\hat{v}_x} = \sqrt{\frac{2b}{\nu}} \left( \frac 12 - \hat{v}\right)$. Thus $$ - \rho ' + \rho^2 - \frac c\nu \rho = \frac b\nu \left( (\hat{v}-a)^2 + a(1-a) \right) \ge \frac b\nu a(1-a) > 0 \, . 
$$ \end{example} \bigskip \noindent With the functional inequality \eqref{FunctIneq} of Theorem \ref{th0} we can now state the aforementioned result on the Lyapunov stability of the linearization of \eqref{RDE} along the travelling wave $\hat{v}$ in the deterministic case. To state our result precisely, let us introduce the Hilbert space $H = L^2 (\mathbb{R})$ and the Sobolev space $V = H^{1,2} (\mathbb{R})$, defined as the closure of $C_c^1 (\mathbb{R})$ w.r.t. the norm $$ \|u\|_V^2 = \int_{\mathbb{R}} u^2 + u^2_x \, dx $$ in $H$. Identifying $H$ with its dual $H^\prime$ we obtain dense and continuous embeddings $V\hookrightarrow H \equiv H^\prime \hookrightarrow V^\prime$. Note that w.r.t. this embedding the dualization between $V^\prime$ and $V$ reduces for $f\in H$ to the inner product in $H$, i.e., $_{V^\prime}\langle f, g\rangle_V = \langle f,g\rangle_H = \int fg \, dx$. The elementary estimate $u^2 (y) = 2\int_{-\infty}^y u_x (x) u(x) \, dx \le \int u_x^2 + u^2\, dx \le\|u\|_V^2$ for $u\in C_c^1 (\mathbb{R} )$ can be extended to the estimate $\|u\|_\infty \le \|u\|_V$ for all $u\in V$, which turns out to be crucial in the following. \smallskip \noindent The unbounded linear operator $u \mapsto \nu u_{xx}$ induces a continuous mapping $ A: V\rightarrow V'$, because for $u , v\in C_c^2 (\mathbb{R})$ $$ \begin{aligned} _{V^\prime}\langle A u, v \rangle_{V} & = \int \nu u_{xx} \, v\, dx = - \nu\int u_x v_x \, dx \le \nu \| u \|_V \|v\|_V\, . \end{aligned} $$ \begin{theorem} \label{th1} Let $u\in V$.
Then $$ _{V^\prime}\langle Au + bf^\prime (\hat{v}) u, u\rangle_V \le - \kappa_\ast \|u\|_V^2 + C_\ast \langle u, \hat{v}_x \rangle^2 $$ where $$ \kappa_\ast := \frac{\kappa}{\kappa + \left( \frac c{2\nu} \right)^2} \frac \nu{q_1} $$ and $$ C_\ast = \left( \kappa_\ast q_2 + \frac{\nu}{\kappa} \left(\frac c{2\nu}\right)^2 \left( \kappa + \left(\frac c{2\nu}\right)^2\right) \frac{\int e^{-\frac c\nu x }\hat{v}_x^2\, dx}{\left(\int e^{-\frac c{2\nu}x }\hat{v}_x^2\, dx \right)^2} \right)\, . $$ Here, $\kappa$ is the lower bound obtained in Theorem \ref{th0} and $q_1$ and $q_2$ are defined in Lemma \ref{lem2_1} below. \end{theorem} \noindent The proof of Theorem \ref{th1} is given in Section \ref{Proofth1} below. \medskip \noindent The previous theorem states that the flow generated by the semilinear diffusion equation is contracting in the direction orthogonal to $\hat{v}_x$ (and its spatial translates). To properly quantify this contraction we will need to model the equation \eqref{RDE} as an evolution equation in the appropriate function space. \subsection{Realization of \eqref{RDE} as evolution equation} \medskip \noindent In the next step we want to realize the reaction-diffusion equation \eqref{RDE} as an evolution equation on a suitable function space. To this end we need to impose additional assumptions on the reaction term, now concerning only its global behaviour at infinity and not affecting its behaviour on $[0,1]$, hence also not the travelling wave $\hat{v}$.
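Before doing so, let us make the contraction rate concrete in the Nagumo case: by Example \ref{ExampleNagumo}, the functional inequality \eqref{FunctIneq} of Theorem \ref{th0} holds there with the explicit constant $\kappa = \frac b\nu a(1-a)$. A short numerical sketch (illustrative parameter values, assumed only for this check) confirms the identity behind this bound:

```python
import math

nu, b, a = 0.7, 1.3, 0.3                     # illustrative parameters (assumption)
k = math.sqrt(b / (2 * nu))
c = math.sqrt(2 * nu * b) * (0.5 - a)
vhat = lambda x: 1.0 / (1.0 + math.exp(-k * x))

def lhs(x):
    # -rho' + rho^2 - (c/nu) rho  for  rho = d/dx log vhat_x = sqrt(2b/nu)(1/2 - vhat)
    v = vhat(x)
    rho = math.sqrt(2 * b / nu) * (0.5 - v)
    rho_prime = -math.sqrt(2 * b / nu) * k * v * (1 - v)   # exact derivative
    return -rho_prime + rho ** 2 - (c / nu) * rho

kappa = (b / nu) * a * (1 - a)               # explicit lower bound in the Nagumo case
ok = all(
    abs(lhs(x) - (b / nu) * ((vhat(x) - a) ** 2 + a * (1 - a))) < 1e-12
    and lhs(x) >= kappa - 1e-12
    for x in (-10 + 0.5 * i for i in range(41))
)
print(ok)
```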
We assume that the derivative $f^\prime$ of the reaction term is bounded from above \begin{equation} \tag{\bf B1} \eta_1 := \sup_{x\in\mathbb{R}} f^\prime (x) < \infty \, , \end{equation} that there exists a finite positive constant $L$ such that \begin{equation} \tag{\bf B2} \left| f(x_1 ) - f(x_2)\right| \le L |x_1 - x_2| \left( 1 + x_1^2 + x_2^2 \right) \qquad \forall x_1 , x_2 \in \mathbb{R}\, , \end{equation} which is typically satisfied for polynomials of third degree with negative leading coefficient, and that there exists a finite constant $\eta_2$ such that \begin{equation} \tag{\bf B3} \left| f(u+v) - f(v) - f^\prime (v) u \right| \le \eta_2 (1 + |u|) |u|^2 \qquad \forall v\in [0,1]\, , u\in \mathbb{R}\, . \end{equation} \smallskip \noindent Since we are interested in the asymptotic stability of the travelling wave also w.r.t. stochastic perturbations, it is now natural to consider the following decomposition $v(t,x) = u(t,x) + \hat{v} (x + ct)$ of the solution $v$ of \eqref{RDE}, where $u$ now satisfies the equation \begin{equation} \label{WREL} u_t (t,x) = \nu u_{xx} (t,x) + b\left( f(u(t,x) + \hat{v}(x + ct)) - f(\hat{v}(x + ct))\right) \end{equation} on $\mathbb{R}_+\times \mathbb{R}$, which can best be analysed in a variational framework. \section{The deterministic case} \smallskip \noindent The nonlinear term \begin{equation} \label{Nonlinear} G (t,u) : = f (u + \hat{v}(t)) - f (\hat{v}(t)) \end{equation} can be realized as a continuous mapping $$ G : [0, \infty )\times V \rightarrow V^\prime $$ that is Lipschitz w.r.t. the second variable $u$ on bounded subsets of $V$.
Indeed, condition {\bf (B2)} on $f$ implies that $$ \begin{aligned} _{V^\prime}\langle G (t,u), w \rangle_{V} & = \int_{\mathbb{R}} G (t,u) w\, dx = \int_{\mathbb{R}} \left(f(u + \hat{v}(t)) - f(\hat{v}(t))\right) w\, dx \\ & \le L \int_{\mathbb{R}} |u| (4 + 2 u^2)|w|\, dx \le 2 L \|u\|_H \left( 2 + \|u\|^2_V\right) \|w\|_H \end{aligned} $$ hence \begin{equation} \label{Est1} \begin{aligned} \| G (t,u) \|_{V^\prime} \leq 2 L \|u\|_H \left( 2 + \|u\|^2_V \right) \end{aligned} \end{equation} and similarly $$ \begin{aligned} _{V^\prime} \langle G (t,u_1) - G(t,u_2), w \rangle_{V} & = \int_{\mathbb{R}} \left( f (u_1 + \hat{v} (t)) - f (u_2 + \hat{v}(t) )\right) w \, dx \\ & \le 2 L \|u_1 - u_2\|_H \left( 3 + \| u_1\|^2_V + \| u_2 \|^2_V \right) \|w\|_H \end{aligned} $$ which implies \begin{equation} \label{Est2} \| G (t,u_1) - G (t,u_2) \|_{V^\prime} \le 2L \left( 3 + \|u_1\|_V^2 + \|u_2\|^2_V\right) \| u_1 - u_2\|_H \, . \end{equation} The sum $Au + bG (t,u)$ of both operators now satisfies the global monotonicity condition \begin{equation} \label{Est3} \begin{aligned} \langle A u_1 & + b G (t,u_1) - A u_2 - b G (t,u_2), u_1 - u_2 \rangle \\ & = \int A(u_1 - u_2)(u_1 - u_2)\, dx + b\int (G(t,u_1) - G(t,u_2))(u_1 - u_2)\, dx \\ & = - \nu \int (u_1 - u_2)_x^2\, dx + b\int (G(t,u_1) - G(t,u_2))(u_1 - u_2)\, dx \\ & \le b\eta_1 \| u_1 - u_2 \|^2_H \end{aligned} \end{equation} using {\bf (B1)} and similarly the coercivity condition \begin{equation} \label{Est4} \langle Au + bG(t,u), u \rangle \le - \nu \|u\|^2_V + (\nu + b \eta_1 ) \|u\|^2_H \end{equation} since $\left( f(s_1) - f(s_2)\right)(s_1 - s_2) \le\eta_1 (s_1 - s_2)^2$ for all $s_1 , s_2 \in\mathbb{R}$ by the mean value theorem and {\bf (B1)}.
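Conditions {\bf (B1)} and {\bf (B2)} are easy to check for the Nagumo cubic. The following sketch verifies them on a grid; the constant $L = 5$ is an ad hoc, deliberately generous choice made here for illustration only:

```python
# Check (B1) and (B2) for f(v) = v(1-v)(v-a) with a = 0.3 (illustrative assumption)
a = 0.3
f = lambda v: v * (1 - v) * (v - a)
fprime = lambda v: -3 * v ** 2 + 2 * (1 + a) * v - a

# (B1): f' is a downward parabola, maximized at v = (1+a)/3
eta1 = fprime((1 + a) / 3)
grid = [-10 + 0.05 * i for i in range(401)]
assert all(fprime(v) <= eta1 + 1e-12 for v in grid)

# (B2): |f(x1)-f(x2)| <= L |x1-x2| (1 + x1^2 + x2^2) on a coarse grid of pairs
L = 5.0
assert all(
    abs(f(x1) - f(x2)) <= L * abs(x1 - x2) * (1 + x1 ** 2 + x2 ** 2) + 1e-9
    for x1 in grid[::8] for x2 in grid[::8]
)
print(round(eta1, 4))
```

Here the mean value theorem gives $|f(x_1)-f(x_2)| \le \sup |f'| \, |x_1 - x_2|$ on the segment, and $|f'(\xi)| \le 4.3\,(1+x_1^2+x_2^2)$ for this cubic, so $L=5$ is indeed sufficient.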
\smallskip \noindent Theorem 1.1 in \cite{LR} now implies for all initial conditions $u_0\in H$ and all finite times $T$ existence and uniqueness of a variational solution $u\in L^\infty ([0,T];H)\cap L^2 ([0,T]; V)$ satisfying the integral equation \begin{equation} \label{VSDet} u (t) = u_0 + \int^t_0 \left( A u (s) + b (f (u(s) + \hat{v}(s)) - f (\hat{v}(s)))\right)\, ds \end{equation} and we may extend the solution to the whole time axis $\mathbb{R}_+$. \smallskip \noindent The integral on the right hand side of \eqref{VSDet} is well-defined as a Bochner integral in $L^2 ([0,T]; V^\prime )$ using \eqref{Est1}, which implies in particular that the mapping $t \mapsto u (t)$, $\mathbb{R}_+ \rightarrow V^\prime$, is differentiable with derivative \begin{equation} \label{RDDet} \frac{du}{dt} = Au (t) + b \left( f (u (t) + \hat{v}(t)) - f ( \hat{v}(t))\right) \quad \in V^\prime \, , \end{equation} hence continuous. \smallskip \noindent We are now ready to state precisely the notion of stability that we are going to prove in the following. \begin{definition} \label{Defi1} The travelling wave solution $\hat{v}$ is called locally asymptotically stable w.r.t. the $H$-norm if there exists $\delta > 0$ such that for all initial conditions $v_0$ with $v_0 - \hat{v} \in H$ and $\|v_0 - \hat{v}\|_H\le \delta$ the unique variational solution $u(t) = v(t) - \hat{v}(t)$ of \eqref{WREL} satisfies $$ \lim_{t\to\infty} \| v(t) - \hat{v}(\cdot + ct + x_0)\|_H = 0 $$ for some (phase) $x_0\in\mathbb{R}$. \end{definition} \medskip \noindent In order to apply Theorem \ref{th1} we need to control the tangential component $\langle v(t) - \hat{v}(\cdot + x_0),\hat{v}_x (\cdot + x_0)\rangle^2$ of the given solution $v(t) = u(t) + \hat{v}(t)$ w.r.t. the appropriate phase-shift $x_0$, i.e., the phase-shift $x_0$ that minimizes the $L^2$-distance between the solution $v(t)$ and the orbit consisting of all phase-shifted travelling waves $\hat{v} (\cdot + x_0 )$.
This can be achieved asymptotically by introducing the phase shift dynamically via the following ordinary differential equation \begin{equation} \label{ODE} \begin{aligned} \dot{C}(t) & = c + m\langle v(t) - \hat{v} \left( \cdot + C(t)\right), \hat{v}_x \left( \cdot + C(t)\right) \rangle \, , \\ C(0) & = 0 \end{aligned} \end{equation} for $m \ge 0$. To simplify notation, let $$ \tilde{v} (t) := \hat{v} (\cdot + C(t)) $$ so that we can rewrite equation \eqref{ODE} as \begin{equation} \label{ODE1} \begin{aligned} \dot{C}(t) & = c + m \langle v(t) - \tilde{v}(t) , \tilde{v}_x (t)\rangle\, , \\ C(0) & = 0\, . \end{aligned} \end{equation} The next Proposition first shows that \eqref{ODE} is well-posed. \begin{proposition} \label{prop3_4} Let $v = u + \hat{v} (t)$, where $u\in L^\infty ([0, T ]; H) \cap L^2 ([0, T ]; V )$ is a solution of \eqref{RDDet}. Then $$ B(t, C) = \langle v (t)- \hat{v}(\cdot + C), \hat{v}_x (\cdot + C)\rangle_H $$ is continuous in $(t, C) \in [0,T]\times\mathbb{R}$ and Lipschitz continuous w.r.t. $C$ with Lipschitz constant independent of $t$.
\end{proposition} \begin{proof} First note that $$ \begin{aligned} B(t, C_1) - B (t, C_2) & = \langle \hat{v}_x ( \cdot + C_1) - \hat{v}_x ( \cdot + C_2), u (t)\rangle_H \\ & \qquad - \langle \hat{v}_x (\cdot + C_1), \hat{v} ( \cdot + C_1) - \hat{v} ( \cdot )\rangle_H \\ & \qquad + \langle \hat{v}_x (\cdot + C_2), \hat{v} ( \cdot + C_2) - \hat{v} ( \cdot )\rangle_H \, . \end{aligned} $$ Using $$ \left| \hat{v}_x (x + C_1) - \hat{v}_x (x + C_2) \right| = \left| \int_{C_1}^{C_2} \hat{v}_{xx} (x + y)\, dy \right| \le \left| \int_{C_1}^{C_2} |\hat{v}_{xx}| (x + y) \, dy \right| $$ we conclude that the first term on the right hand side can be estimated from above by $$ \begin{aligned} \|\hat{v}_x ( \cdot + C_1) & - \hat{v}_x ( \cdot + C_2)\|_H \|u(t)\|_H \\ & \le \left( |C_1 - C_2| \left| \int_{\mathbb{R}} \int_{C_1}^{C_2} \hat{v}_{xx}^2 (x + y) \, dy\, dx \right| \right)^{\frac 12} \|u(t)\| _H \\ & = |C_1 - C_2 | \|\hat{v}_{xx}\|_H \|u(t)\|_H \end{aligned} $$ which implies that this term is Lipschitz continuous with Lipschitz constant independent of $t \in [0,T]$. \medskip \noindent The second and the third term can be rewritten as follows: $$ \begin{aligned} & \Big| \langle \hat{v}_x ( \cdot + C_1), \hat{v} (\cdot + C_1) - \hat{v} ( \cdot )\rangle_H \\ & \qquad\qquad - \langle \hat{v}_x ( \cdot + C_2), \hat{v} ( \cdot + C_2) - \hat{v} (\cdot )\rangle_H \Big| \\ & \qquad = \Big| \langle \hat{v}_x , \hat{v}(\cdot - C_2) - \hat{v} ( \cdot - C_1 )\rangle_H \Big| \\ & \qquad \le \|\hat{v}_x\|_H \left( \int_{\mathbb{R}} \left( \int_{C_1}^{C_2} \hat{v}_x ( x - y )\, dy \right)^2 \, dx \right)^{\frac 12} \\ & \qquad \le \|\hat{v}_x\|_H |C_1 - C_2| \|\hat{v}_x\|_H \end{aligned} $$ so that also these two terms are Lipschitz continuous with Lipschitz constant independent of $t$. \end{proof} \medskip \noindent In the following let \begin{equation} \label{tildeU} \tilde{u} (t) := u(t) + \hat{v}(t) - \tilde{v}(t) = v(t) - \tilde{v} (t)\, .
\end{equation} \begin{proposition} \label{prop1_3} Let $u = v - \hat{v}(t) \in L^\infty \left( [0,T]; H \right) \cap L^2 \left( [0,T]; V\right)$ be a solution of \eqref{RDDet} and $\tilde{u}$ be given by \eqref{tildeU}. Then $\tilde{u}\in L^\infty \left( [ 0,T]; H \right) \cap L^2 \left( [0,T ]; V\right)$ again and $\tilde{u}$ satisfies the evolution equation \begin{equation} \label{RDDet1} \begin{aligned} \frac{d\tilde{u}}{dt} (t) & = \nu \Delta \tilde{u}(t) + b \tilde{G} \left( t, \tilde{u}(t)\right) - (\dot{C} (t) -c) \tilde{v}_x (t) \\ & = \nu \Delta \tilde{u} (t) + b f' \left( \tilde{v} (t) \right) \tilde{u} (t) + b \tilde{R} \left( t, \tilde{u} (t)\right) - (\dot{C} (t) -c)\tilde{v}_x (t) \end{aligned} \end{equation} with $$ \begin{aligned} \tilde{G}(t,u) & = f \left( u + \tilde{v} (t)\right) - f \left( \tilde{v}(t)\right) \, , \\ \tilde{R} (t,u) & = \tilde{G} (t,u) - f' \left( \tilde{v} (t)\right) u \, . \end{aligned} $$ \end{proposition} \noindent The proof of the Proposition is an immediate consequence of \eqref{RDDet} and \eqref{ODE} (resp. \eqref{ODE1}). {\bf (B3)} implies for the remainder $\tilde{R}$ the following estimate \begin{equation} \label{remainder} \begin{aligned} \langle \tilde{R} (t,u),u\rangle & \le \eta_2 \int (1 + |u|)|u|^3\, dx \le \eta_2 \left( \|u\|_\infty + \|u\|_\infty^2 \right) \|u\|_H^2 \\ & \le \eta_2 \left( \|u\|_H + \|u\|_H^2 \right) \|u\|_V^2 \, . \end{aligned} \end{equation} \medskip \noindent We are now ready to state our main result in the deterministic case: \begin{theorem} \label{th2} Recall the definition of $\kappa_\ast$ and $C_\ast$ in Theorem \ref{th1}. Let $m\ge C_\ast$. 
If the initial condition $v_0 = u_0 + \hat{v}$ is close to $\hat{v}$ in the sense that $$ \|u_0\|_H < \left(\delta \frac{\kappa_\ast}{2b\eta_2}\right)\wedge 1 $$ for some $\delta \in (0,1)$ and $v(t) = u(t) + \hat{v} (t)$, where $u(t)$ is the unique solution of \eqref{RDDet}, then $$ \|v(t) - \hat{v} \left( \cdot + C(t)\right)\|_H \le e^{-(1-\delta)\kappa_\ast t} \|v_0 -\hat{v}\|_H\, . $$ \end{theorem} \begin{proof} Let $\tilde{u} (t) := v(t) - \tilde{v} (t)$ be as in \eqref{tildeU}. Then Proposition \ref{prop1_3} and equation \eqref{remainder} imply that \begin{equation} \label{EstTh2_1} \begin{aligned} \frac 12 \frac{d}{dt} \|\tilde{u}(t)\|^2_H & = \langle \nu \Delta\tilde{u}(t) + bf^\prime (\tilde{v}(t)) \tilde{u} (t), \tilde{u}(t)\rangle + b\langle \tilde{R} (t, \tilde{u} (t)) , \tilde{u} (t) \rangle \\ & \quad - m\langle \tilde{v}_x(t) , \tilde{u} (t)\rangle^2 \\ & \le \langle \nu \Delta\tilde{u}(t) + bf^\prime (\tilde{v}(t)) \tilde{u} (t), \tilde{u}(t)\rangle \\ & \quad + b\eta_2 \left( \|\tilde{u} (t)\|_H + \|\tilde{u}(t)\|_H^2 \right) \|\tilde{u}(t)\|_V^2 - m \langle \tilde{v}_x (t) , \tilde{u} (t)\rangle^2\, . \end{aligned} \end{equation} Using translation invariance of $\nu \Delta$ and $\int u_x^2\, dx$, Theorem \ref{th1} yields the estimate \begin{equation} \label{EstTh2_2} \begin{aligned} \langle \nu \Delta\tilde{u}(t) & + bf^\prime (\tilde{v}(t)) \tilde{u} (t), \tilde{u}(t)\rangle \\ & \le - \kappa_\ast \|\tilde{u} (t)\|_V^2 + C_\ast \langle \tilde{u}(t), \tilde{v}_x (t)\rangle^2 \, . \end{aligned} \end{equation} Inserting \eqref{EstTh2_2} into \eqref{EstTh2_1} and using $m \ge C_\ast$ yields that $$ \begin{aligned} \frac 12 \frac d{dt} \|\tilde{u} (t) \|_H^2 & \le - \kappa_\ast \|\tilde{u} (t) \|^2_V + b\eta_2 \left( \|\tilde{u} (t)\|_H + \|\tilde{u}(t)\|_H^2 \right) \|\tilde{u} (t) \|_V^2 \, .
\end{aligned} $$ \medskip \noindent In the next step we define the stopping time $$ T := \inf \left\{ t\ge 0 \mid \|\tilde{u}(t)\|_H \ge \left( \delta \frac{\kappa_\ast}{2b\eta_2}\right) \wedge 1\right\} $$ with the usual convention $\inf \emptyset = \infty$. Continuity of $t\mapsto \|\tilde{u}(t)\|_H$ implies that $T > 0$ since $\|u_0\|_H < \left( \delta \frac{\kappa_\ast}{2b\eta_2}\right)\wedge 1$. For $t < T$ note that $$ \frac 12 \frac d{dt} \|\tilde{u} (t) \|_H^2 \le - (1- \delta )\kappa_\ast \|\tilde{u} (t) \|^2_V \le - (1- \delta )\kappa_\ast \|\tilde{u} (t) \|^2_H $$ which implies that $$ \|\tilde{u}(t)\|^2_H \le e^{-2(1-\delta )\kappa_\ast t} \|u_0\|^2_H $$ for $t < T$. Suppose now that $T < \infty$. Then continuity of $t\mapsto \|\tilde{u}(t)\|_H$ implies on the one hand that $\|\tilde{u}(T)\|_H = \left( \delta \frac{\kappa_\ast}{2b\eta_2} \right)\wedge 1$ and on the other hand, using the last inequality, $$ \|\tilde{u}(T)\|_H = \lim_{t\uparrow T} \|\tilde{u} (t)\|_H \le e^{-(1-\delta )\kappa_\ast T} \|u_0\|_H < \left( \delta \frac{\kappa_\ast}{2b\eta_2}\right)\wedge 1 $$ which is a contradiction. Consequently, $T = \infty$ and thus $$ \|\tilde{u} (t)\|_H \le e^{-(1-\delta )\kappa_\ast t} \|u_0\|_H \qquad\forall t\ge 0 $$ which implies the assertion. \end{proof} \section{The reaction-diffusion equation with noise} \label{sec2} \noindent In this section we will generalize the stability result for the reaction-diffusion equation \eqref{RDE} to the stochastic case. 
To this end we consider the following equation \begin{equation} \label{StochRDE} \begin{aligned} dv (t) & = \left[ \nu\partial^2_{xx} v (t) + bf(v(t))\right]\, dt + \Sigma_0 (v(t))\, dW (t) \end{aligned} \end{equation} where $W = (W (t))_{t\ge 0}$ is a cylindrical Wiener process with values in some separable real Hilbert space $U$ defined on some underlying filtered probability space $(\Omega , \mathcal{F} , (\mathcal{F} (t))_{t\ge 0}, P)$ and $$ \Sigma_0 : \hat{v} + H \to L_2 (U, H) $$ is a measurable map with values in the linear space of all Hilbert-Schmidt operators from $U$ to $H$ such that there exists some constant $L_{\Sigma_0}$ with \begin{equation} \label{dispersion} \| \Sigma_0 ( \hat{v} + u_1 ) - \Sigma_0 (\hat{v} + u_2)\|_{L_2 (U,H)} \le L_{\Sigma_0} \|u_1 - u_2\|_H \quad\forall u_1 \, , u_2 \in H\, . \end{equation} For the theory of cylindrical Wiener processes see \cite{PR}. To simplify the presentation of the results we also assume the following translation invariance \begin{equation} \label{TransInv} \|\Sigma_0 (\hat{v}(\cdot - C))\|_{L_2 (U, H)} = \|\Sigma_0 (\hat{v})\|_{L_2 (U, H)} \qquad \forall C\in\mathbb{R}\, . \end{equation} \medskip \noindent A typical example covered by the assumptions is $$ dv (t) = \left[ \nu\partial^2_{xx} v (t) + bf(v(t))\right]\, dt + \sigma(v(t))\, dW^Q (t) $$ where $\sigma : \mathbb{R} \to\mathbb{R}$ is Lipschitz, $\sigma (0) = \sigma (1) = 0$, and $W^Q$ is a $Q$-Wiener process with covariance operator $Q$ whose square root $\sqrt{Q}$ admits a kernel $k_{\sqrt{Q}} (x,y)\in L^2 (\mathbb{R}^2)$ satisfying $$ \sup_{x\in\mathbb{R}} \int k^2_{\sqrt{Q}}(x,y)\, dy < \infty $$ (see \cite{St}). \medskip \noindent Similar to the deterministic case we can give the equation a rigorous formulation as a stochastic evolution equation with values in the Hilbert space $H = L^2 (\mathbb{R})$ by decomposing $v(t) = u(t) + \hat{v} (t)$ w.r.t.
the travelling wave to obtain the following stochastic evolution equation \begin{equation} \label{StochRelRDE} \begin{aligned} du(t) & = \left[ \nu\Delta u (t) + bG(t, u(t))\right]\, dt + \Sigma (t , u (t))\, dW(t) \end{aligned} \end{equation} where the nonlinear term $G$ is as in \eqref{Nonlinear} and \begin{equation} \label{RealizationDispersion} \Sigma (t,u)h := \Sigma_0 \left( \hat{v}(t) + u\right) h \, , \quad u\in H\, , h\in U\, , \end{equation} is a continuous mapping $$ \Sigma (\cdot , \cdot ) : [ 0, \infty ) \times H\to L_2 (U,H) \, . $$ The assumptions \eqref{dispersion} and \eqref{TransInv} on the dispersion operator imply \begin{equation} \label{Dispersion1} \| \Sigma (t, u_1 ) - \Sigma (t, u_2)\|_{L_2 (U,H)} \le L_{\Sigma_0} \|u_1 - u_2\|_H \end{equation} and \begin{equation} \label{Dispersion2} \| \Sigma (t, u)\|_{L_2 (U,H)} \le \| \Sigma_0 (\hat{v})\|_{L_2 (U,H)} + L_{\Sigma_0} \|u\|_H \, . \end{equation} \medskip \noindent We now consider the equation \eqref{StochRelRDE} w.r.t. the same triple $V\hookrightarrow H \equiv H^\prime\hookrightarrow V^\prime$ as in the deterministic case. Due to the properties \eqref{Est1}, \eqref{Est2}, \eqref{Est3} and \eqref{Est4}, we can deduce from Theorem 1.1 in \cite{LR} for all finite $T$ and all (deterministic) initial conditions $u_0\in H$ the existence and uniqueness of a solution $(u(t))_{t\in [0, T]}$ of \eqref{StochRelRDE} satisfying the moment estimate $$ E\left( \sup_{t\in[0,T]} \|u(t)\|^2_H + \int_0^T \|u(t)\|_V^2\, dt \right) < \infty \, . $$ In particular, for any $m\in\mathbb{R}$, we can apply Proposition \ref{prop3_4} to a typical trajectory $u(\cdot )(\omega )$ to obtain a unique solution $C(\cdot )(\omega )$ of the ordinary differential equation \eqref{ODE1}. It is also clear that the resulting stochastic process $(C(t))_{t\ge 0}$ is $(\mathcal{F}_t )_{t\ge 0}$-adapted, since $(u(t))_{t\ge 0}$ is.
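As a toy illustration of the phase dynamics, consider the Nagumo case with $a = \frac 12$ (so that $c = 0$) and freeze the solution at a shifted wave $v \equiv \hat{v}(\cdot + x_0)$; a crude Euler discretization of \eqref{ODE1} (all numerical parameters below are illustrative assumptions) then drives $C(t)$ to the true phase $x_0$:

```python
import math

# Nagumo with a = 1/2, hence c = 0 (simplifying assumption for this toy example)
nu = b = 1.0
k = math.sqrt(b / (2 * nu))
vhat = lambda x: 1.0 / (1.0 + math.exp(-k * x))
vhat_x = lambda x: k * vhat(x) * (1 - vhat(x))

h = 0.1
xs = [-20 + h * i for i in range(401)]       # quadrature grid for the inner product

def B(C, x0=2.0):
    # Riemann sum for  < vhat(. + x0) - vhat(. + C), vhat_x(. + C) >_H
    return h * sum((vhat(x + x0) - vhat(x + C)) * vhat_x(x + C) for x in xs)

C, m, dt = 0.0, 5.0, 0.01
for _ in range(3000):                        # explicit Euler for dC/dt = c + m B(C)
    C += dt * m * B(C)
print(round(C, 3))
```

Since the integrand vanishes identically at $C = x_0$, the discretized phase equation has $x_0$ as an exact fixed point, and the tangential component is driven to zero.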
\medskip \noindent In the next step let us consider the stochastic process $$ \tilde{u} (t) = u(t) + \hat{v}(t) - \hat{v}(\cdot + C(t)) = v(t) -\tilde{v}(t) $$ which is $(\mathcal{F}_t)_{t\ge 0}$-adapted too and satisfies the stochastic evolution equation $$ d\tilde{u} (t) = \left[ \nu\Delta \tilde{u} (t) + b \tilde{G} (t, \tilde{u}(t)) - (\dot{C} (t) -c)\tilde{v}_x (t)\right] \, dt + \tilde{\Sigma} (t, \tilde{u} (t))\, dW(t) \, , $$ where $$ \tilde{G} (t,u) = f(u + \tilde{v} (t) ) - f(\tilde{v}(t)) \, , \quad \tilde{\Sigma} (t,u) = \Sigma_0 ( \tilde{v} (t) + u ) \, , $$ and the moment estimates $$ E\left( \sup_{t\in [0,T]} \|\tilde{u} (t)\|_H^2 + \int_0^T \|\tilde{u} (t)\|^2_V\, dt \right) < \infty\, . $$ Theorem 4.2.5 in \cite{PR} now implies that the real-valued stochastic process $(\|\tilde{u}(t)\|^2_H)_{t\ge 0}$ is a continuous local semimartingale, so that we have in particular the following time-dependent Ito formula \begin{equation} \label{ItoFormula} \begin{aligned} \varphi (t,\|\tilde{u}(t)\|^2_H ) & = \varphi (0,\|\tilde{u}(0)\|^2_H ) + \int_0^t \varphi_t (s, \|\tilde{u} (s)\|^2_H) + 2\varphi_x (s,\|\tilde{u} (s)\|^2_H) \langle \nu\Delta \tilde{u} (s) \\ & \quad + b\tilde{G}(s, \tilde{u} (s)) - (\dot{C} (s) - c) \tilde{v}_x (s), \tilde{u} (s) \rangle \\ & \quad + \varphi_x (s,\|\tilde{u} (s)\|^2_H)\|\tilde{\Sigma} (s,\tilde{u} (s))\|^2_{L_2(U,H)} \\ & \quad + 2 \varphi_{xx} (s,\|\tilde{u} (s)\|_H^2) \|\tilde{\Sigma}^\ast (s,\tilde{u} (s))\tilde{u} (s)\|_H^2\, ds \\ & \quad + \int_0^t \varphi_x (s, \|\tilde{u} (s)\|^2_H )\, d\tilde{M}_s \end{aligned} \end{equation} for any $\varphi\in C^{1,2}([0,T]\times \mathbb{R}_+)$. Here, $\tilde{M}$ denotes the local martingale part of $(\|\tilde{u}(t)\|^2_H)_{t\ge 0}$ and $\tilde{\Sigma}^\ast (s,u)$ denotes the adjoint operator of $\tilde{\Sigma} (s,u)$. \begin{theorem} \label{th3} Recall the definition of $\kappa_\ast$ and $C_\ast$ in Theorem \ref{th1} and assume that $L^2_{\Sigma_0} \le \frac{\kappa_\ast} 4$.
Let $m \ge C_\ast$, $v_0 = u_0 + \hat{v}$ and $v(t) = u(t) + \hat{v} (t)$, where $u(t)$ is the unique solution of the stochastic evolution equation \eqref{StochRelRDE} and $\tilde{u}(t) = u(t) + \hat{v} (t) - \tilde{v}(t)$. Then $$ P\left( T < \infty \right) \le \frac 1{c^2_\ast} \left( \|\tilde{u} (0)\|^2_H + \frac 4{\kappa_\ast} \|\Sigma_0 (\hat{v})\|^2_{L_2 (U,H)} \right) $$ where $T$ denotes the first exit time \begin{equation} \label{ExitTime} T := \inf\{t\ge 0\mid \|\tilde{u} (t) \|_H > c_\ast \} \, , \qquad c_\ast = \left( \frac{\kappa_\ast}{4b\eta_2}\right)\wedge 1\, , \end{equation} with the usual convention $\inf\emptyset = \infty$. \end{theorem} \begin{proof} Similar to the proof of Theorem \ref{th2} we have the following inequality $$ \begin{aligned} & \langle \nu\Delta \tilde{u} (t) + b\tilde{G}(t, \tilde{u} (t)) - (\dot{C} (t) - c) \tilde{v}_x (t), \tilde{u} (t) \rangle \\ & \qquad \le -\kappa_\ast \|\tilde{u} (t)\|^2_V + b\eta_2 \left(\|\tilde{u}(t)\|_H + \|\tilde{u}(t)\|_H^2 \right) \|\tilde{u}(t)\|^2_V\, . \end{aligned} $$ In particular, $$ \langle \nu\Delta \tilde{u} (t) + b\tilde{G}(t, \tilde{u} (t)) - (\dot{C} (t)-c) \tilde{v}_x (t), \tilde{u} (t) \rangle \le -\frac{\kappa_\ast}2 \|\tilde{u} (t)\|^2_V $$ for $t\le T$, where $T$ is as in \eqref{ExitTime}. \eqref{Dispersion2} and \eqref{TransInv} imply $$ \|\tilde{\Sigma} (t, \tilde{u} (t))\|_{L_2 (U,H)}^2 \le 2 \left( L_{\Sigma_0}^2 \|\tilde{u}(t)\|^2_H + \|\Sigma_0 (\hat{v})\|^2_{L_2 (U,H)} \right) $$ and therefore $$ \begin{aligned} & 2\langle \nu\Delta \tilde{u} (t) + b\tilde{G}(t, \tilde{u} (t)) - (\dot{C} (t) - c) \tilde{v}_x (t), \tilde{u} (t) \rangle + \|\tilde{\Sigma} (t,\tilde{u} (t))\|^2_{L_2 (U,H)} \\ \qquad & \le -\frac{\kappa_\ast}2 \|\tilde{u} (t)\|^2_V + 2 \|\Sigma_0 (\hat{v})\|^2_{L_2 (U,H)}\, .
\end{aligned} $$ Applying Ito's formula \eqref{ItoFormula} to $\varphi (t,x) = e^{\frac {\kappa_\ast} 2 t}x$ then yields for $t < T$ that $$ \begin{aligned} e^{\frac{\kappa_\ast} 2 t}\|\tilde{u} (t)\|^2_H & \le \|\tilde{u}(0)\|^2_H + \frac 4{\kappa_\ast} \left( e^{\frac{\kappa_\ast} 2 t} - 1\right)\|\Sigma_0 (\hat{v})\|^2_{L_2 (U,H)} \\ & \qquad + \int_0^t e^{\frac{\kappa_\ast} 2 s}\, d\tilde{M}_s\, . \end{aligned} $$ Taking expectations we obtain $$ E\left( \|\tilde{u} (t\wedge T )\|^2_H \right) \le \|\tilde{u}(0)\|^2_H + \frac 4{\kappa_\ast}\|\Sigma_0 (\hat{v})\|^2_{L_2 (U,H)} $$ and thus in the limit $t\uparrow\infty$ $$ \begin{aligned} c_\ast^2 P\left( T < \infty \right) & = E\left( \|\tilde{u} (T)1_{T < \infty} \|^2_H \right) \le \liminf_{t\uparrow\infty} E\left( \|\tilde{u}(t\wedge T) \|^2_H \right) \\ & \le \|\tilde{u}(0)\|^2_H + \frac 4{\kappa_\ast} \|\Sigma_0 (\hat{v})\|^2_{L_2 (U,H)} \end{aligned} $$ which implies the assertion. \end{proof} \section {Proof of Lemma \ref{lem1_1}, Proposition \ref{prop0} and Theorem \ref{th0}} \label{Section0} \subsection{Proof of Lemma \ref{lem1_1} and Proposition \ref{prop0}} \begin{proof} {\bf (of Lemma \ref{lem1_1})} For the proof of (i) note that $\hat{v}_x \ge 0$ and $\int^{\infty}_{-\infty} \hat{v}_x \, dx = \lim_{x\to \infty } \left( \hat{v} (x) - \hat{v} (-x) \right) = 1$. In particular, $\hat{v}_x \in L^1 (\mathbb{R})$ which implies that $\lim_{n\to\infty} \hat{v}_x (x_n) = 0$ for some sequence $x_n\uparrow\infty$. It follows for all $x$ that $$ \begin{aligned} \hat{v}^2_x (x) & = \hat{v}_x^2 (x_n) - 2 \int^{x_n}_{x} \hat{v}_{xx} \hat{v}_x \, dx \\ & = \hat{v}^2_x (x_n) - 2 \frac{c}{\nu} \int^{x_n}_{x}\hat{v}^2_x \, dx + 2 \frac{b}{\nu} \int^{x_n}_x f (\hat{v}) \hat{v}_x \, dx \\ & \le \hat{v}_x^2 (x_n) + 2 \frac{b}{\nu} \int^{\hat{v} (x_n)}_{\hat{v} (x)} f(v) \, dv \qquad \forall n\, .
\end{aligned} $$ Consequently, $$ \hat{v}_x^2 (x) \le \lim_{n\to\infty} \left( \hat{v}^2_x (x_n) + 2 \frac{b}{\nu} \int^{\hat{v} (x_n)}_{\hat{v} (x)} f (v)\, dv \right) = \frac{2b}{\nu} \int^1_{\hat{v} (x)} f(v)\, dv \, . $$ In particular, $$ \limsup_{x\to\infty} \hat{v}_x^2 (x) \le \limsup_{x\to\infty} \frac{2b}{\nu} \int^1_{\hat{v} (x)} f (v)\, dv = 0 $$ and thus also $\lim_{x\to\infty} e^{ - \alpha\frac{c}{\nu}x} \hat{v}^2_x (x) = 0$ for all $\alpha\ge 0$. \medskip \noindent For the proof of (ii) note that for all $\alpha \in \mathbb{R}$ \begin{equation} \label{eq_lem1_1_1} \begin{aligned} \frac{d}{dx} ( e ^{- \alpha \frac{c}{\nu} x } \hat{v}^2_x ) & = \left( - \alpha \frac{c}{\nu}\hat{v}_x + 2 \hat{v}_{xx}\right) e^{- \alpha \frac{c}{\nu} x} \hat{v}_x \\ & = (2 - \alpha) \frac{c}{\nu} e ^{- \alpha\frac{c}{\nu} x} \hat{v}^2_x - 2\frac{b}{\nu} e^{- \alpha \frac{c}{\nu} x } f (\hat{v}) \hat{v}_x\, . \end{aligned} \end{equation} Taking $\alpha = 2 $ we conclude in particular that $\frac{d}{dx} \left( e^{- \alpha \frac{c}{\nu} x } \hat{v}^2_x \right) \ge 0$ (resp. $\le 0$) for $x \le \hat{v}^{-1} (a)$ (resp. $x\ge \hat{v}^{-1} (a)$), since $\hat{v}_x \ge 0$ and $f(\hat{v} (x)) \le 0 $ (resp. $\ge 0$) for $x\le \hat{v}^{-1} (a)$ (resp. $x\ge \hat{v}^{-1} (a)$). Consequently, for $c\ge 0$, $$ \lim_{x\to-\infty} e^{- 2\frac{c}{\nu} x } \hat{v}^2_x (x) = \inf_{x \le \hat{v}^{-1} (a)} e^{- 2\frac{c}{\nu} x } \hat{v}^2_x (x) =: \gamma < \infty $$ and thus for $\alpha < 2$ $$ \lim_{x\to -\infty} e^{- \alpha\frac{c}{\nu} x} \hat{v}^2_x (x) \le \limsup_{x\to -\infty} e^{(2 - \alpha ) \frac{c}{\nu} x } \gamma = 0\, . $$ Similarly in the case $c\le 0$ $$ \lim_{x\to\infty} e^{- 2\frac{c}{\nu} x } \hat{v}^2_x (x) = \inf_{x \ge \hat{v}^{-1} (a)} e^{- 2\frac{c}{\nu} x } \hat{v}^2_x (x) =: \gamma < \infty $$ and thus for $\alpha < 2$ $$ \lim_{x\to\infty} e^{- \alpha\frac{c}{\nu} x} \hat{v}^2_x (x) \le \limsup_{x\to\infty} e^{(2 - \alpha ) \frac{c}{\nu} x } \gamma = 0\, . $$ Combining with (i) we obtain the assertion.
\end{proof} \bigskip \noindent Let us now turn to the proof of Proposition \ref{prop0}. Let $x_\ast = \hat{v}^{-1} (v_\ast )$ and $w(x) := e^{-\frac c{2\nu} x} \hat{v}_x (x)$. Then $$ w_{xx} = \left( \left( \frac c{2\nu}\right)^2 - \frac b\nu f'(\hat{v})\right) w\, , $$ since differentiating $c\hat{v}_x = \nu \hat{v}_{xx} + bf(\hat{v})$ implies $c\hat{v}_{xx} = \nu \hat{v}_{xxx} + bf' (\hat{v})\hat{v}_x$. \medskip \noindent {\bf Proof of Proposition \ref{prop0} (i)} Note that $$ \frac d{dx} \left( w_x^2 + \left( \frac b\nu f' (\hat{v}) - \left( \frac c{2\nu}\right)^2 \right) w^2\right) = \frac b\nu f'' \left( \hat{v}\right) \hat{v}_x w^2\, , $$ so that $w_x^2 + \left( \frac b\nu f' (\hat{v}) - \left( \frac c{2\nu}\right)^2 \right) w^2$ is strictly increasing for $x < x_\ast $ and strictly decreasing for $x > x_\ast $. According to Lemma \ref{lem1_1} $$ \lim_{|x| \to\infty} \left( w_x^2 + \left( \frac b\nu f' (\hat{v}) - \left( \frac c{2\nu}\right)^2 \right) w^2\right) = 0 $$ so that $$ w_x^2 + \left( \frac b\nu f' (\hat{v}) - \left( \frac c{2\nu}\right)^2 \right) w^2 > 0 \qquad \forall x\, . $$ Using $w_x = \left( \frac{c}{2 \nu} - \frac{b}{\nu} \frac{f(\hat{v})}{\hat{v}_x} \right) w$, we conclude that $$ \left( \frac{c}{2 \nu} - \frac{b}{\nu} \frac{f (\hat{v})}{\hat{v} _x } \right)^2 + \frac{b}{\nu} f^{\prime} (\hat{v}) - \left( \frac{c}{2 \nu} \right)^2 > 0 $$ or equivalently \begin{equation} \label{th0:eq1} \frac{b}{\nu} f^{\prime} (\hat{v}) - \frac{b}{\nu} \frac{f (\hat{v})}{\hat{v} _x}\left( \frac c\nu - \frac{b}{\nu} \frac{ f(\hat{v})}{\hat{v}_x}\right) > 0 \, .
\end{equation} In particular, $$ \begin{aligned} \frac b\nu \frac{d}{dx} \frac{f(\hat{v})}{\hat{v}_x} & = \frac b\nu f^{\prime} (\hat{v}) - \frac b\nu \frac{f (\hat{v})}{\hat{v}_x} \frac{\hat{v}_{xx}}{\hat{v}_x} > 0 \end{aligned} $$ so that $\frac{f(\hat{v})}{\hat{v}_x}$ is strictly increasing which implies that $\hat{v}_x$ is log-concave, because $$ - \frac{d^2}{dx^2} \log\hat{v}_x = -\frac{d}{dx} \frac{\hat{v}_{xx}}{\hat{v}_x} = -\frac{d}{dx} \left( \frac c\nu - \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) > 0\, . $$ \medskip \noindent For the proof of part (ii) of Proposition \ref{prop0} we will first need the following \begin{lemma} \label{lem0_01} Let $K_+ := \frac {1-\hat{v}(x_0)}{\hat{v}_x (x_0)}$ and $K_- := \frac {\hat{v}(x_0)}{\hat{v}_x(x_0)}$. Then \begin{itemize} \item[(i)] $\frac {1-\hat{v}(x)}{\hat{v}_x (x)} \le K_+$ for $x\ge x_0$, \item[(ii)] $\frac {\hat{v}(x)}{\hat{v}_x (x)} \le K_-$ for $x\le x_0$. \end{itemize} \end{lemma} \begin{proof} (i) Consider the function $h := \frac{ 1-\hat{v}}{\hat{v}_x}$. Clearly, $\dot{h} = - 1 - \frac{\hat{v}_{xx}}{\hat{v}_x} h$ is negative, hence $h$ decreasing, in a neighborhood of $x_0$. Since $\hat{v}_x$ is log-concave it follows that $-\frac{\hat{v}_{xx}}{\hat{v}_x}$ is increasing on $[x_0 , \infty)$. We may assume in the following that there exists some $x_+ > x_0$ with $$ -\frac{\hat{v}_{xx}}{\hat{v}_x} (x_+) = \frac{\hat{v}_x}{1-\hat{v}} (x_+) \, . $$ In fact, if this is not the case, then $\dot{h} \le 0$ for all $x\ge x_0$, hence $h$ decreasing on $[x_0 , \infty )$ which already implies the assertion. \smallskip \noindent So let us assume that $h$ is decreasing on $[x_0 , x_+]$ only. In particular, $\frac {1-\hat{v}(x)}{\hat{v}_x (x)} \le K_+$ for all $x\in [x_0 , x_+]$.
For $x\ge x_+$ it follows that $-\frac{\hat{v}_{xx}}{\hat{v}_x} (x) \ge - \frac{\hat{v}_{xx}}{\hat{v}_x} (x_+) = \frac{\hat{v}_x}{1-\hat{v}} (x_+)$, hence $\frac{d}{dx} \left( e^{\frac{\hat{v}_x}{1-\hat{v}} (x_+) x} \hat{v}_x \right) \le 0$, and consequently, $$ \begin{aligned} 1-\hat{v}(x) & = \int_x^\infty \hat{v}_x(s)\, ds = \int_x^\infty e^{-\frac{\hat{v}_x}{1-\hat{v}} (x_+) s} \left( e^{\frac{\hat{v}_x}{1-\hat{v}}(x_+) s} \hat{v}_x(s) \right) \, ds \\ & \le \int_x^\infty e^{-\frac{\hat{v}_x}{1-\hat{v}} (x_+)s} \, ds \left( e^{\frac{\hat{v}_x}{1-\hat{v}}(x_+) x} \hat{v}_x(x) \right) \\ & = \frac{1-\hat{v}}{\hat{v}_x}(x_+)\, \hat{v}_x (x) \le K_+ \hat{v}_x (x)\, . \end{aligned} $$ \smallskip \noindent (ii) is shown similarly. \end{proof} \medskip \noindent {\bf Proof of Proposition \ref{prop0} (ii)} Since $f(0) = 0$ it follows that $\lim_{v\to 0} \frac{|f(v)|}{v} < \infty$ and thus $$ \limsup_{x\to -\infty} \frac{|f(\hat{v})|}{\hat{v}_x} (x) = \limsup_{x\to -\infty} \frac{|f(\hat{v})|}{\hat{v}} \frac{\hat{v}}{\hat{v}_x} (x) < \infty $$ due to the previous Lemma \ref{lem0_01}. Similarly, $f(1) = 0$ implies that $\lim_{v\to 1} \frac{f(v)}{1-v} < \infty$ and thus $$ \limsup_{x\to\infty} \frac{f(\hat{v})}{\hat{v}_x} (x) = \limsup_{x\to\infty} \frac{f(\hat{v})}{1-\hat{v}} \frac{1-\hat{v}}{\hat{v}_x} (x) < \infty\, . $$ To compute $\gamma_-$ note that $\frac b\nu \frac{f(\hat{v})}{\hat{v}_x}$ is increasing in $x$, hence $\gamma_{-} = \lim_{x\to -\infty} \frac b\nu\frac{f(\hat{v})}{\hat{v}_x} (x) = \inf_{x\in\mathbb{R}} \frac b\nu \frac{f(\hat{v})}{\hat{v}_x}(x)$ exists, must be strictly negative and is finite.
Applying l'Hospital's rule we obtain that $$ \gamma_- = \lim_{x\to -\infty} \frac b\nu \frac{f(\hat{v})}{\hat{v}_x}(x) = \lim_{x\to -\infty} \frac b\nu f^\prime (\hat{v})(x) \frac{\hat{v}_x}{\hat{v}_{xx}} (x) = \frac b\nu f^{\prime} (0) \frac 1{\frac c\nu - \gamma_{-}} $$ or equivalently, $\gamma_{-} \left(\frac c\nu - \gamma_{-}\right) = \frac b\nu f^\prime (0)$. Since $\gamma_{-} < 0$ we obtain the assertion. $\gamma_+$ can be computed similarly. \medskip \noindent {\bf Proof of Proposition \ref{prop0} (iii)} The previous part implies for the logarithmic derivative of $\hat{v}_x$ that $$ \lim_{x\to -\infty}\frac{\hat{v}_{xx}}{\hat{v}_x} = \frac c\nu - \gamma_- > \frac c\nu $$ and $$ \lim_{x\to\infty}\frac{\hat{v}_{xx}}{\hat{v}_x} = \frac c\nu - \gamma_+ < \frac c\nu $$ so that for every $\alpha$ satisfying $\alpha \frac c\nu < \frac c\nu - \gamma_-$ (resp. $\alpha \frac c\nu > \frac c\nu - \gamma_+$) it follows that $e^{-\alpha\frac c\nu x}\hat{v}_x$ is increasing near $-\infty$ (resp. decreasing near $+\infty$). Hence $\int_{-\infty}^0 e^{-\alpha\frac c\nu x}\hat{v}_x^2\, dx < \infty$ (resp. $\int_0^\infty e^{-\alpha\frac c\nu x}\hat{v}_x^2\, dx < \infty$) in both cases. \medskip \noindent We can now also estimate $$ \int_{-\infty}^0 e^{-\alpha\frac c\nu x} \hat{v}_{xx}^2\, dx \le \sup_{x\in\mathbb{R}} \left( \frac{|\hat{v}_{xx}|}{\hat{v}_x} \right)^2 \int_{-\infty}^0 e^{-\alpha\frac c\nu x} \hat{v}_x^2\, dx < \infty $$ for $\alpha\frac c\nu < \frac c\nu -\gamma_-$ and $$ \int_0^\infty e^{-\alpha\frac c\nu x} \hat{v}_{xx}^2\, dx \le \sup_{x\in\mathbb{R}} \left( \frac{|\hat{v}_{xx}|}{\hat{v}_x} \right)^2 \int_0^\infty e^{-\alpha\frac c\nu x} \hat{v}_x^2\, dx < \infty $$ for $\alpha\frac c\nu > \frac c\nu -\gamma_+$, since $$ \sup_{x\in\mathbb{R}} \frac{|\hat{v}_{xx}|}{\hat{v}_x} \le \frac {|c|}\nu + \frac b\nu \sup_{x\in\mathbb{R}} \frac {|f(\hat{v})|}{\hat{v}_x} < \infty $$ again due to the previous part (ii).
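\medskip \noindent As an illustration of part (ii) (the example is not needed in the sequel), consider the Nagumo nonlinearity $f(v) = v(1-v)(v-a)$ with its explicit travelling wave $\hat{v}(x) = \left( 1 + e^{-\beta x}\right)^{-1}$, $\beta = \sqrt{\frac{b}{2\nu}}$, and wave speed $c = (1-2a)\sqrt{\frac{\nu b}2}$. In this case $\hat{v}_x = \beta\, \hat{v} (1-\hat{v})$, hence $$ \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} = \frac b{\nu\beta}\left( \hat{v} - a\right) = 2\beta \left( \hat{v} - a\right)\, , $$ so that $\gamma_- = -2a\beta$ and $\gamma_+ = 2(1-a)\beta$. Since $\frac c\nu = (1-2a)\beta$, a direct computation confirms $\gamma_- \left( \frac c\nu - \gamma_-\right) = -2a\beta^2 = \frac b\nu f^\prime (0)$ and $\gamma_+ \left( \frac c\nu - \gamma_+\right) = -2(1-a)\beta^2 = \frac b\nu f^\prime (1)$.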
\subsection{Proof of Theorem \ref{th0}} \bigskip \noindent Inequality \eqref{FunctIneq} is equivalent to \begin{equation} \label{Dissipativity1} \frac b\nu f^\prime (\hat{v}) + 2\frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} \left( \frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} - \frac c\nu\right) \ge \kappa\, . \end{equation} Since $$ \frac b\nu f^\prime (\hat{v}) + 2\frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} \left( \frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} - \frac c\nu\right) > \frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} \left( \frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} - \frac c\nu\right) $$ and $\lim_{x\to\pm\infty} \frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} \left( \frac b\nu \frac{ f(\hat{v})}{\hat{v}_x} - \frac c\nu\right) > 0$, it remains to prove that \begin{equation} \label{th0:eq2} g_2 := \frac 12 f^\prime (\hat{v})\hat{v}_x^2 - f(\hat{v}) \hat{v}_{xx} > 0 \end{equation} for $x$ with $0\le \frac b\nu \frac{f(\hat{v})}{\hat{v}_x}(x)\le \frac c\nu$ in order to be able to find $\kappa > 0$ satisfying \eqref{Dissipativity1}. In the particular case $c = 0$ this is obvious. \medskip \noindent We therefore assume from now on that $c > 0$. Since $\frac{f(\hat{v})}{\hat{v}_x}$ is strictly increasing, it follows that for all $\alpha\in ]\inf \frac bc \frac{f(\hat{v})}{\hat{v}_x} , \sup \frac bc \frac{f(\hat{v})}{\hat{v}_x} [$ there exists a unique $x_\alpha\in\mathbb{R}$ with $$ \frac{b}{\nu} \frac{f (\hat{v})}{\hat{v} _x} (x_\alpha ) = \alpha \frac c\nu\, . $$ In particular, $\hat{v}(x_0) = a$ and $x_1$ is the unique root of $\hat{v}_{xx}$, that is, the location of the maximum of $\hat{v}_x$. Moreover, $x_0 \le x_1$ and both $f(\hat{v}) \ge 0$ and $f^\prime(\hat{v}) \ge 0$ on $[x_0, x_1]$. \medskip \noindent We will subdivide the proof of \eqref{th0:eq2} into the three cases $x\in[x_0 , x_{0.5}\wedge x_\ast ]$, $x\in [x_{0.5}\vee x_\ast , x_1]$ and $x\in [x_{0.5}\wedge x_\ast , x_{0.5}\vee x_\ast ]$.
\begin{lemma} \label{th0:lem1} $g_2 (x) > 0$ for $x\in [x_0 , x_{0.5}\wedge x_\ast ]$. \end{lemma} \begin{proof} We may suppose that $x_\ast \ge x_0$, because otherwise, the interval is empty. Let $$ \bar{x} := \inf \{ x\ge x_0 \mid g_2 (x) = 0\} \, . $$ We will show that $\bar{x} > x_{0.5}\wedge x_\ast$. Since $g_2 (x_0) = \frac 12 f^\prime (a)\hat{v}_x^2 (x_0) > 0$ we certainly have that $\bar{x} > x_0$. Suppose now that $\bar{x}\le x_{0.5}\wedge x_\ast$. Then for all $m\in\mathbb{N}$ $$ \begin{aligned} \frac d{dx} f(\hat{v})\hat{v}_{xx} \hat{v}_x^m & = f^\prime (\hat{v}) \hat{v}_{xx} \hat{v}_x^{1+m} + f (\hat{v}) \left( \hat{v}_{xxx} + m\frac{\hat{v}_{xx}^2}{\hat{v}_x} \right) \hat{v}_x^{m} \\ & = \frac c{\nu} f(\hat{v}) \hat{v}_{xx} \hat{v}_x^{m} + f^\prime (\hat{v}) \left( \frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) \hat{v}_x^{m+2} \\ & \qquad + mf (\hat{v})\hat{v}^2_{xx} \hat{v}_x^{m-1} \end{aligned} $$ which implies \begin{equation} \label{th0:lem1-1} \begin{aligned} f(\hat{v})\hat{v}_{xx}\hat{v}_x^{m} (\bar{x}) & = \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} f^\prime (\hat{v}) \left( \frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) \hat{v}_x^{m+2} \, ds \\ & \qquad + m\int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} f(\hat{v}) \hat{v}^2_{xx} \hat{v}_x^{m-1} \, ds \\ & =: I + II\, , \quad\mbox{say.} \end{aligned} \end{equation} Now $f^{(2)}(\hat{v})\ge 0$, hence $f^\prime (\hat{v})$ increasing, and $\frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \ge 0$ due to $x\le x_{0.5}\wedge x_\ast$, implies that $$ \begin{aligned} I & \le f^\prime(\hat{v}) (\bar{x}) \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} \left( \frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) \hat{v}_x^{m+2} \, ds \\ & = \frac 12 f^\prime (\hat{v}) (\bar{x}) \left( \hat{v}_x^{m+2} (\bar{x}) - e^{\frac c\nu (\bar{x}-x_0)}\hat{v}_x^{m+2} (x_0) - m \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} \hat{v}_{xx}\hat{v}_x^{m+1} \, ds \right) \\ & 
\quad + f^\prime (\hat{v}) (\bar{x}) \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} \left( \frac c{2\nu} - \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) \hat{v}_x^{m+2} \, ds \end{aligned} $$ thereby using $e^{\frac c\nu (\bar{x}-s)}\left( \frac c\nu \hat{v}_x - 2\frac b\nu f(\hat{v})\right) \hat{v}_x = \frac d{ds} e^{\frac c\nu (\bar{x}-s)}\hat{v}_x^2$. Inserting the last estimate into \eqref{th0:lem1-1} and using $g_2 (s) \ge 0$ for $s\le \bar{x}$, hence $$ II \le \frac m2 \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} f^\prime(\hat{v}) \hat{v}_{xx} \hat{v}_x^{m+1}\, ds\, , $$ we arrive at $$ \begin{aligned} f(\hat{v})\hat{v}_{xx}\hat{v}_x^{m} (\bar{x}) & < \frac 12 f^\prime (\hat{v})\hat{v}_x^{m+2} (\bar{x}) \\ & \qquad - \frac m2 \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} \left( f^\prime(\hat{v}) (\bar{x}) - f^\prime (\hat{v}) (s)\right) \hat{v}_{xx}\hat{v}_x^{m+1} \, ds \\ & \qquad + f^\prime (\hat{v}) (\bar{x}) \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} \left( \frac c{2\nu} - \frac b\nu \frac{f(\hat{v})}{\hat{v}_x}\right)\hat{v}_x^{m+2} \, ds \, . \end{aligned} $$ We can now choose $m$ sufficiently large such that $$ \begin{aligned} f^\prime (\hat{v}) & (\bar{x}) \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} \left( \frac c{2\nu} - \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right)\hat{v}_x^{m+2} \, ds \\ & < \frac m2 \int_{x_0}^{\bar{x}} e^{\frac c\nu (\bar{x}-s)} \left( f^\prime(\hat{v}) (\bar{x}) - f^\prime (\hat{v}) (s)\right) \hat{v}_{xx}\hat{v}_x^{m+1} \, ds \end{aligned} $$ since $f^\prime (\hat{v}) (\bar{x}) - f^\prime (\hat{v}) (s) > 0$ for $s < \bar{x}$. It follows that $f(\hat{v})\hat{v}_{xx}\hat{v}_x^{m} (\bar{x}) < \frac 12 f^\prime(\hat{v})\hat{v}_x^{m+2}(\bar{x})$, which is a contradiction to the definition of $\bar{x}$. Consequently, $\bar{x} > x_{0.5}\wedge x_\ast$ and thus $g_2 (x) > 0$ on $[x_0 , x_{0.5}\wedge x_\ast]$.
\end{proof} \medskip \noindent We now turn to the second subinterval $[x_{0.5}\vee x_\ast , x_1]$ where $f^\prime (\hat{v})$ decreases. \begin{lemma} \label{th0:lem2} $g_2 (x) > 0$ for $x\in [x_{0.5} \vee x_\ast , x_1]$. \end{lemma} \begin{proof} We may assume that $x_\ast \le x_1$. Otherwise the interval $[x_{0.5} \vee x_\ast , x_1]$ is empty. Let $$ \bar{x} := \sup \{ x\le x_1 \mid g_2 (x) = 0\} \, . $$ We will show that $\bar{x} < x_{0.5}\vee x_\ast$. Since $g_2 (x_1) = \frac 12 f^\prime (\hat{v})\hat{v}_x^2 (x_1) > 0$ we certainly have that $\bar{x} < x_1$. Suppose now that $\bar{x}\ge x_{0.5}\vee x_\ast$. Then for all $m\in\mathbb{N}$ we have that $$ \begin{aligned} \frac d{dx} f(\hat{v})\hat{v}_{xx} \hat{v}_x^{-m} & = f^\prime (\hat{v}) \hat{v}_{xx} \hat{v}_x^{1-m} + f (\hat{v}) \left( \hat{v}_{xxx} - m\frac{\hat{v}_{xx}^2}{\hat{v}_x} \right) \hat{v}_x^{-m} \\ & = \frac c{\nu} f(\hat{v}) \hat{v}_{xx} \hat{v}_x^{-m} - f^\prime (\hat{v}) \left( 2\frac b\nu f(\hat{v}) - \frac c\nu \hat{v}_x \right) \hat{v}_x^{1-m} \\ & \qquad - mf (\hat{v})\hat{v}^2_{xx} \hat{v}_x^{-(m+1)} \end{aligned} $$ which implies \begin{equation} \label{th0:lem2-1} \begin{aligned} f(\hat{v})\hat{v}_{xx}\hat{v}_x^{-m} (\bar{x}) & = \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} f^\prime (\hat{v}) \left( 2\frac b\nu f(\hat{v}) - \frac c\nu \hat{v}_x \right) \hat{v}_x^{1-m} \, ds \\ & \qquad + m\int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} f(\hat{v}) \hat{v}^2_{xx} \hat{v}_x^{-(m+1)} \, ds \\ & =: I + II\, , \quad\mbox{say.} \end{aligned} \end{equation} Now $f^{(2)}(\hat{v})\le 0$, hence $f^\prime (\hat{v})$ decreasing, and $2\frac b\nu f(\hat{v}) - \frac c\nu \hat{v}_x \ge 0$ due to $x\ge x_{0.5}\vee x_\ast$, implies that $$ \begin{aligned} I & \le f^\prime(\hat{v}) (\bar{x}) \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} \left( 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} - \frac c\nu \right) \hat{v}_x^{2-m} \, ds \\ & = \frac 12 f^\prime (\hat{v}) (\bar{x})
\left( \hat{v}_x^{2-m} (\bar{x}) - e^{\frac c\nu (\bar{x}-x_1)}\hat{v}_x^{2-m} (x_1) - m \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} \hat{v}_{xx}\hat{v}_x^{1-m} \, ds \right) \\ & \quad + f^\prime (\hat{v}) (\bar{x}) \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} \left( \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} - \frac c{2\nu} \right) \hat{v}_x^{2-m} \, ds \end{aligned} $$ thereby using $e^{\frac c\nu (\bar{x}-s)}\left( 2\frac b\nu f(\hat{v}) - \frac c\nu \hat{v}_x \right) \hat{v}_x = - \frac d{ds} e^{\frac c\nu (\bar{x}-s)}\hat{v}_x^2$. Inserting the last estimate into \eqref{th0:lem2-1} and using $g_2 (s) \ge 0$ for $s\ge \bar{x}$, hence $$ II \le \frac m2 \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} f^\prime(\hat{v}) \hat{v}_{xx} \hat{v}_x^{1-m}\, ds\, , $$ we arrive at $$ \begin{aligned} f(\hat{v})\hat{v}_{xx}\hat{v}_x^{-m} (\bar{x}) & < \frac 12 f^\prime (\hat{v})\hat{v}_x^{2-m} (\bar{x}) \\ & \qquad - \frac m2 \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} \left( f^\prime(\hat{v}) (\bar{x}) - f^\prime (\hat{v}) (s)\right) \hat{v}_{xx}\hat{v}_x^{1-m} \, ds \\ & \qquad + f^\prime (\hat{v}) (\bar{x}) \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} \left( \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} - \frac c{2\nu}\right)\hat{v}_x^{2-m} \, ds \, . \end{aligned} $$ We can now choose $m$ sufficiently large such that $$ \begin{aligned} f^\prime (\hat{v}) & (\bar{x}) \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} \left( \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} - \frac c{2\nu}\right)\hat{v}_x^{2-m} \, ds \\ & < \frac m2 \int_{\bar{x}}^{x_1} e^{\frac c\nu (\bar{x}-s)} \left( f^\prime(\hat{v}) (\bar{x}) - f^\prime (\hat{v}) (s)\right) \hat{v}_{xx}\hat{v}_x^{1-m} \, ds \end{aligned} $$ since $f^\prime (\hat{v}) (\bar{x}) - f^\prime (\hat{v}) (s) > 0$ for $s > \bar{x}$. It follows that $f(\hat{v})\hat{v}_{xx}\hat{v}_x^{-m} (\bar{x}) < \frac 12 f^\prime(\hat{v})\hat{v}_x^{2-m}(\bar{x})$, which is a contradiction to the definition of $\bar{x}$.
It follows that $\bar{x} < x_{0.5}\vee x_\ast$ and thus $g_2 (x) > 0$ on $[x_{0.5}\vee x_\ast , x_1]$. \end{proof} \noindent Finally we consider the third subinterval $[x_{0.5}\wedge x_\ast , x_{0.5}\vee x_\ast]$. \begin{lemma} \label{th0:lem3} $g_2 (x) > 0$ for $x\in [x_{0.5}\wedge x_\ast , x_{0.5}\vee x_\ast]$. \end{lemma} \begin{proof} We consider the two cases $x_\ast \le x_{0.5}$ and $x_\ast > x_{0.5}$ separately. \medskip \noindent {\bf Case 1:} $x_\ast \le x_{0.5}$, hence $[x_{0.5}\wedge x_\ast , x_{0.5}\vee x_\ast] = [x_\ast , x_{0.5}]$. \medskip \noindent In this case $f^\prime (\hat{v})$ is decreasing and $\frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x}$ increases, since $$ \frac d{dx} \frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x} = \left( \frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) \frac d{dx} \frac{f(\hat{v})}{\hat{v}_x} \ge 0\, . $$ Hence $$ \begin{aligned} g_2 (x) & = \hat{v}_x^2 (x) \left( \frac 12 f^\prime (\hat{v}) - \frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x}\right) (x) \\ & \ge \hat{v}_x^2 (x) \left( \frac 12 f^\prime (\hat{v}) - \frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x}\right) (x_{0.5}) \\ & = \frac{\hat{v}_x^2 (x)}{\hat{v}_x^2 (x_{0.5})} g_2 (x_{0.5}) > 0 \end{aligned} $$ according to Lemma \ref{th0:lem2}. \medskip \noindent {\bf Case 2:} $x_{0.5} < x_\ast$, hence $[x_{0.5}\wedge x_\ast , x_{0.5}\vee x_\ast] = [x_{0.5} , x_\ast]$. \medskip \noindent In this case $f^\prime (\hat{v})$ is increasing and $\frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x}$ decreases, since $$ \frac d{dx} \frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x} = \left( \frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) \frac d{dx} \frac{f(\hat{v})}{\hat{v}_x} \le 0\, .
$$ Hence $$ \begin{aligned} g_2 (x) & = \hat{v}_x^2 (x) \left( \frac 12 f^\prime (\hat{v}) - \frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x}\right) (x) \\ & \ge \hat{v}_x^2 (x) \left( \frac 12 f^\prime (\hat{v}) - \frac{f(\hat{v})}{\hat{v}_x}\frac{\hat{v}_{xx}}{\hat{v}_x}\right) (x_{0.5}) \\ & = \frac{\hat{v}_x^2 (x)}{\hat{v}_x^2 (x_{0.5})} g_2 (x_{0.5} ) > 0 \end{aligned} $$ according to Lemma \ref{th0:lem1}. \end{proof} \section {Proof of Theorem \ref{th1}} \label{Proofth1} \noindent Recall that the travelling wave satisfies the equation $c\hat{v}_x = \nu\hat{v}_{xx} + bf (\hat{v})$, hence $c\hat{v}_{xx} = \nu\hat{v}_{xxx} + b f ^{\prime} (\hat{v}) \hat{v}_x$. Given a function $u\in C^1_c (\mathbb{R})$ and writing $u = h \hat{v}_x$ it follows that $$ \nu\Delta u + b f^\prime (\hat{v}) u = \nu h_{xx} \hat{v}_x + 2\nu\hat{v}_{xx} h_x + c\hat{v}_{xx}h $$ which implies \begin{equation} \label{eq2_1} \begin{aligned} & -\langle \nu\Delta u + b f^\prime (\hat{v}) u, u \rangle = -\int \left( \nu h_{xx} + 2\nu\frac{\hat{v}_{xx}}{\hat{v}_x} h_x \right) h \, \hat{v}_x^2\, dx - c\int \hat{v}_{xx} h^2 \hat{v}_x\, dx \\ & \qquad = \nu\int h^2_x \hat{v}^2_x \, dx + c\int h_x h \hat{v}^2_x dx \\ & \qquad = \nu \int \left( h e^{\frac c{2\nu}x}\right)^2_x e^{-\frac c\nu x} \hat{v}^2_x\, dx - \nu\left(\frac{c}{2\nu} \right)^2 \int h^2 \hat{v}^2_x dx \\ & \qquad =: \mathcal{E} (h) \, . \end{aligned} \end{equation} In the following, consider the two functions $h_0 (x) = 1$ and $h_1 (x) = e^{-\frac c{2\nu} x}$. Notice that $\mathcal{E} (h_0) = 0$ and $\mathcal{E} (h_1) = -\nu\left(\frac{c}{2\nu} \right)^2 \int e^{-\frac c\nu x} \hat{v}^2_x\, dx < 0$. Consequently, the Schr\"odinger operator $\nu \Delta u + bf^\prime (\hat{v})u$ is not negative definite on the subspace $\mathcal{N} := \mbox{ span} \{ \hat{v}_x , e^{-\frac c{2\nu} x}\hat{v}_x \}$.
$\hat{v}_x$ can be interpreted as the vector pointing in the tangential direction of the orbit of the travelling wave solutions, since $\frac{d}{dt} \hat{v} (\cdot + ct) = c \hat{v}_x (\cdot + ct)$, whereas the second function $h_1 (x) = e^{-\frac c{2\nu} x}$ measures the infinitesimal variation of the linearization of $\nu \Delta u + b \left( f(u+ \hat{v}) - f(\hat{v})\right)$ w.r.t. time. Notice that in the case $c=0$ of a stationary wave both functions coincide, since the linearization is then independent of time. \medskip \noindent Using the representation \eqref{eq2_1} we will now first consider the gradient form $\int h_x^2 w^2 \, dx$, where $w = e^{-\frac c{2\nu} x}\hat{v}_x$. The logarithmic derivative $$ \theta (x) := \frac{w_x (x)}{w(x)} = \frac c{2\nu} - \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} (x) = \frac{\hat{v}_{xx}}{\hat{v}_x} - \frac c{2\nu} $$ of $w$ satisfies the inequality $$ \begin{aligned} - \theta ' + \theta^2 & = - \frac d{dx} \frac{\hat{v}_{xx}}{\hat{v}_x} + \left( \frac{\hat{v}_{xx}}{\hat{v}_x} - \frac c{2\nu}\right)^2 \\ & = - \frac d{dx} \frac{\hat{v}_{xx}}{\hat{v}_x} + \left( \frac{\hat{v}_{xx}}{\hat{v}_x}\right)^2 -\frac c\nu \frac{\hat{v}_{xx}}{\hat{v}_x} + \left( \frac c{2\nu}\right)^2 \ge \kappa + \left( \frac c{2\nu}\right)^2 \end{aligned} $$ for some $\kappa > 0$ according to Theorem \ref{th0}. Proposition \ref{PropHardy} below now implies the weighted Hardy type inequality \begin{equation} \label{WeightedHardy} \int h^2 w^2 \, dx \le \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2\, w^2\, dx \end{equation} for any $h\in C_b^1 (\mathbb{R})$ with $h(x_{0.5}) = 0$, where $x_{0.5}$ is the unique solution of $\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} (x) = \frac c{2\nu}$ (recall that $\frac{f(\hat{v})}{\hat{v}_x}$ is strictly monotone increasing).
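\medskip \noindent For illustration, in the Nagumo case $f(v) = v(1-v)(v-a)$ with the explicit travelling wave $\hat{v}(x) = \left( 1 + e^{-\beta x}\right)^{-1}$, $\beta = \sqrt{\frac b{2\nu}}$, and wave speed $c = (1-2a)\sqrt{\frac{\nu b}2}$, one has $\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} = 2\beta \left( \hat{v} - a\right)$ and $\frac c{2\nu} = \frac{(1-2a)\beta}2$, so that $$ \hat{v} (x_{0.5}) = a + \frac{1-2a}4 = \frac{1+2a}4 \qquad \mbox{and hence} \qquad x_{0.5} = \frac 1\beta \log \frac{1+2a}{3-2a}\, . $$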
Clearly, the last inequality implies the Poincar\'e inequality \begin{equation} \label{Poincare} \int h^2 w^2\, dx \le \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2 w^2\, dx + Z^{-1} \left( \int hw^2\, dx \right)^2 \end{equation} for the normalizing constant $Z = \int e^{-\frac c\nu x} \hat{v}_x^2\, dx$ and for any $h\in C_b^1 (\mathbb{R})$. Unfortunately, this is not yet enough, since for $u = he^{-\frac c{2\nu} x}\hat{v}_x = hw$ we cannot control the tangential direction $\int h w^2\, dx = \int u e^{-\frac c{2\nu} x}\hat{v}_x\, dx$ but only the tangential direction $\int h e^{\frac c{2\nu} x} w^2\, dx = \int u \hat{v}_x\, dx$. This is done in the following \begin{proposition} \label{prop2_1a} For $h\in C_b^1 (\mathbb{R})$ the following inequality holds: \begin{equation} \label{Step1-eq1} \int h^2\, w^2\, dx \le \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2\, w^2\, dx + C_{\ref{prop2_1a}} \left( \int he^{\frac c{2\nu}x}w^2\, dx \right)^2 \end{equation} with $$ C_{\ref{prop2_1a}} = \frac {\kappa + \left( \frac c{2\nu}\right)^2}\kappa \frac{\int e^{-\frac c\nu x}\hat{v}_x^2\, dx} {\left( \int e^{-\frac c{2\nu}x} \hat{v}_x^2\, dx\right)^2} \, . $$ \end{proposition} \medskip \noindent The proof of Proposition \ref{prop2_1a} requires the following lemma. \begin{lemma} \label{lem1a} There exists a function $g\in C^2 (\mathbb{R})\cap L^2 (\mathbb{R}, w^2\, dx)$, $g\ge 0$, satisfying the equation \begin{equation} \label{lem1a_eq1} \left( \kappa + \left( \frac c{2\nu}\right)^2\right) g - \left( g_{xx} + \left(\frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right)g_x\right) = \left( \kappa + \left( \frac c{2\nu}\right)^2\right) e^{\frac c{2\nu} x} \, . \end{equation} Moreover, $|g_x (x)|\le \frac c{2\nu} g(x)$ for all $x\in\mathbb{R}$ and we have the lower bound $\int g^2 w^2\, dx \ge\frac{\left( \int e^{-\frac c{2\nu}x} \hat{v}_x^2\, dx\right)^2}{\int e^{-\frac c\nu x}\hat{v}_x^2\, dx}$.
\end{lemma} \begin{proof} Fix a 1D-Brownian motion $(W_t)_{t\ge 0}$ defined on some underlying probability space $(\Omega , \mathcal{A} , P)$. For all initial conditions $x\in \mathbb{R}$ let $X_t (x)$ be the unique strong solution of the stochastic differential equation \begin{equation} \label{lem2_1a_eq2} dX_t (x) = \left(\frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x}(X_t (x))\right)\, dt + \sqrt{2}\, dW_t \, , \qquad X_0 (x) = x\, . \end{equation} The family of solutions is a Markov process on $\mathbb{R}$ having invariant measure $w^2\, dx$, i.e., $$ \int_{\mathbb{R}} E\left( h(X_t (x)) \right) w^2\, dx = \int_{\mathbb{R}} h\, w^2\, dx\, , \qquad t\ge 0\, . $$ It follows that the associated semigroup of transition operators $p_t h(x) := E\left( h(X_t (x))\right)$ induces a contraction semigroup of Markovian integral operators on $L^p (\mathbb{R} , w^2\, dx)$ for all $p\in [1, \infty ]$. \medskip \noindent Theorem V.7.4 in \cite{Kr95} yields that the function $$ g(x) := \left( \kappa + \left( \frac c{2\nu}\right)^2\right) \int_0^\infty e^{-\left( \kappa + \left( \frac c{2\nu}\right)^2\right) t } E\left( e^{\frac c{2\nu} X_t (x)} \right) \, dt $$ is twice continuously differentiable and solves equation \eqref{lem1a_eq1}. Since $e^{\frac c{2\nu}x}\in L^2 (\mathbb{R}, w^2\, dx )$ we also have that $g\in L^2 (\mathbb{R}, w^2\, dx)$. \medskip \noindent We will show next the pointwise estimate of the derivative $g_x$. The solution $X_t (x)$ of the stochastic differential equation \eqref{lem2_1a_eq2} is differentiable w.r.t. its initial condition $x$.
Its differential $DX_t (x)$ is the solution of the linear differential equation $$ dDX_t (x) = - 2\frac b\nu \frac d{dx} \frac{f(\hat{v})}{\hat{v}_x} \left( X_t (x) \right) DX_t (x)\, dt \, , \qquad DX_0 (x) = 1\, , $$ with explicit solution $$ DX_t (x) = \exp \left( - 2\frac b\nu \int_0^t \frac d{dx} \frac{f(\hat{v})}{\hat{v}_x} \left( X_s (x) \right)\, ds \right) < 1 $$ for all $t > 0$, since $\frac d{dx} \frac b\nu \frac{f(\hat{v})}{\hat{v}_x} > 0$ according to Proposition \ref{prop0}. Consequently, $$ \begin{aligned} g_x (x) & = \frac c{2\nu}\left( \kappa + \left( \frac c{2\nu}\right)^2\right) \int_0^\infty e^{- \left( \kappa + \left( \frac c{2\nu}\right)^2\right) t} E\left( e^{\frac c{2\nu} X_t (x)} DX_t (x) \right) \, dt \\ & = \frac c{2\nu}\left( \kappa + \left( \frac c{2\nu}\right)^2\right) \int_0^\infty e^{- \left( \kappa + \left( \frac c{2\nu}\right)^2\right) t} E\left( e^{\frac c{2\nu} X_t (x) - 2\frac b\nu \int_0^t \frac d{dx} \frac{f(\hat{v})}{\hat{v}_x} \left( X_s (x) \right)\, ds}\right) \, dt \end{aligned} $$ which implies that $$ |g_x (x)| < \frac c{2\nu} \left( \kappa + \left( \frac c{2\nu}\right)^2\right) \int_0^\infty e^{- \left( \kappa + \left( \frac c{2\nu}\right)^2\right)t } E\left( e^{\frac c{2\nu} X_t (x)} \right)\, dt = \frac c{2\nu} g(x) \, . $$ \medskip \noindent It remains to prove the lower bound.
To this end note that the Cauchy--Schwarz inequality together with the invariance of the measure $w^2\, dx$ implies $$ \begin{aligned} \int g^2 w^2\, dx & \ge \left( \int w^2\, dx \right)^{-1} \left( \int g w^2\, dx \right)^2 \\ & = \left( \int w^2\, dx \right)^{-1} \left( \left( \kappa + \left( \frac c{2\nu}\right)^2\right) \int_0^\infty e^{-\left( \kappa + \left( \frac c{2\nu}\right)^2\right) t} \int p_t \left( e^{\frac c{2\nu} x} \right) w^2\, dx\, dt\, \right)^2 \\ & = \left( \int w^2\, dx \right)^{-1} \left(\left( \kappa + \left( \frac c{2\nu}\right)^2\right) \int_0^\infty e^{-\left( \kappa + \left( \frac c{2\nu}\right)^2\right) t} \, dt \int e^{\frac c{2\nu} x} w^2\, dx\, \right)^2 \\ & = \left( \int w^2\, dx \right)^{-1}\left( \int e^{\frac c{2\nu} x} w^2\, dx\, \right)^2 \, . \end{aligned} $$ \end{proof} \bigskip \begin{proof} (of Proposition \ref{prop2_1a}). Let $\tilde{h} := h - \frac{h(x_{0.5})}{g(x_{0.5})} g$, hence $\tilde{h}(x_{0.5}) = 0$. Then Proposition \ref{PropHardy} implies that $$ \int \tilde{h}^2\, w^2\, dx \le \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \int \tilde{h}_x^2\, w^2\, dx $$ or equivalently, $$ \int h^2 w^2 \, dx \le \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2\, w^2\, dx + T (h) $$ with the remainder $$ \begin{aligned} T(h) & := \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \left( -2 \int h_x \frac{h(x_{0.5})}{g(x_{0.5})} g_x w^2\, dx + \left( \frac{h(x_{0.5})}{g(x_{0.5})} \right)^2 \int g_x^2 w^2\, dx \right) \\ & \quad + 2\int h\frac{h(x_{0.5})}{g(x_{0.5})} g w^2\, dx - \left( \frac{h(x_{0.5})}{g(x_{0.5})} \right)^2 \int g^2 w^2\, dx\, . \end{aligned} $$ Using Lemma \ref{lem1a} we obtain that $$ \begin{aligned} T(h) & = 2 \frac{h(x_{0.5})}{g(x_{0.5})} \int \left( g -\frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \left( g_{xx} + \left(\frac c\nu - 2\frac b\nu \frac{f(\hat{v})}{\hat{v}_x} \right) g_x \right) \right) h\, w^2\, dx \\ & \qquad + \left( \frac{h(x_{0.5})}{g(x_{0.5})} \right)^2 \left( \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \int g_x^2\, w^2\, dx - \int g^2\, w^2\, dx \right) \\ & \le 2\frac{h(x_{0.5})}{g(x_{0.5})} \int e^{\frac c{2\nu} x} h\, w^2\, dx - \left( \frac{h(x_{0.5})}{g(x_{0.5})} \right)^2 \frac{\kappa}{\kappa + \left( \frac c{2\nu}\right)^2} \int g^2 \, w^2\, dx \, . \end{aligned} $$ In the last inequality we have used the pointwise estimate $|g_x(x)|\le \frac c{2\nu} g(x)$. Using the lower bound $\int g^2 w^2\, dx \ge \frac{\left( \int e^{-\frac c{2\nu}x} \hat{v}_x^2\, dx\right)^2}{\int e^{-\frac c\nu x}\hat{v}_x^2\, dx}$ obtained in the previous Lemma we conclude that $$ \begin{aligned} T(h) & \le \frac{\kappa + \left( \frac c{2\nu}\right)^2}{\kappa} \left( \int g^2 w^2\, dx \right)^{-1} \left( \int h e^{\frac c{2\nu}x}w^2\, dx\right)^2 \\ & \le C_{\ref{prop2_1a}} \left( \int h e^{\frac c{2\nu}x}w^2\, dx\right)^2 \end{aligned} $$ with $$ C_{\ref{prop2_1a}} = \frac {\kappa + \left( \frac c{2\nu}\right)^2}\kappa \frac{\int e^{-\frac c\nu x}\hat{v}_x^2\, dx}{\left( \int e^{-\frac c{2\nu}x} \hat{v}_x^2\, dx\right)^2} $$ which implies the assertion. \end{proof} \medskip \noindent Having Proposition \ref{prop2_1a} we can now state the following \begin{proposition} \label{prop2_1} Let $u\in C_c^1 (\mathbb{R})$ and write $u = hw$ for $h\in C_c^1 (\mathbb{R})$. Then $$ \begin{aligned} \langle \nu \Delta u + b f^\prime (\hat{v} ) u,u\rangle & \le -\nu \frac{\kappa}{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2 w^2\, dx \\ & \qquad + \nu\left(\frac c{2\nu}\right)^2 C_{\ref{prop2_1a}} \left( \int u \hat{v}_x\, dx\right)^2 \, .
\end{aligned} $$ \end{proposition} \begin{proof} First note that $h\in C_c^1 (\mathbb{R})$, and thus equation \eqref{eq2_1} and Proposition \ref{prop2_1a} imply that $$ \begin{aligned} & \langle \nu \Delta u + b f^\prime (\hat{v} ) u,u\rangle = - \nu \int h^2_x w^2 \, dx + \nu\left(\frac c{2\nu} \right)^2 \int h^2 w^2 dx \\ & \qquad \le - \nu \frac{\kappa}{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2\, w^2\, dx + \nu\left(\frac c{2\nu}\right)^2 C_{\ref{prop2_1a}} \left( \int h e^{\frac c{2\nu}x} w^2\, dx\right)^2 \\ & \qquad = - \nu \frac{\kappa}{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2\, w^2\, dx + \nu\left(\frac c{2\nu}\right)^2 C_{\ref{prop2_1a}} \left( \int u \hat{v}_x \, dx\right)^2 \, . \end{aligned} $$ \end{proof} \noindent In the next step we will show that for $u\in C_c^1 (\mathbb{R})$, written as $u = h\hat{v}_x$, the $V$-norm $\|u\|_V$ can be controlled by $\int \left( he^{\frac c{2\nu} x}\right)^2_x w^2\, dx$. \begin{lemma} \label{lem2_1} Let $u\in C_c^1 (\mathbb{R} )$ and write $ u = hw$. Then $$ \|u\|_V^2 \le q_1 \int h^2_x w^2\, dx + q_2 \langle u , \hat{v}_x\rangle^2 $$ where $$ q_1 := \left( 1 + \left( \frac {b\eta}{\nu} + 1\right) \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \right)\, , \qquad q_2 := \left( \frac{b\eta}{\nu} + 1\right) C_{\ref{prop2_1a}} \, , $$ and $$ \eta := \max_{v\in [0,1]} f^\prime (v)\, . $$ \end{lemma} \begin{proof} Using \eqref{eq2_1} we have that $$ \begin{aligned} \nu \int u_x^2\, dx & = - \langle \nu\Delta u + b f^\prime (\hat{v} ) u , u\rangle + b\langle f^\prime (\hat{v}) u , u \rangle \\ & \le \nu \int h^2_x w^2\, dx + b\eta \|u\|_H^2 \, . \end{aligned} $$ Proposition \ref{prop2_1a} now implies $$ \begin{aligned} \|u\|_V^2 & \le \left( 1 + \left( \frac {b\eta}{\nu} + 1\right) \frac 1{\kappa + \left( \frac c{2\nu}\right)^2} \right) \int h^2_x w^2\, dx \\ & \qquad + \left( \frac{b\eta}{\nu} + 1\right) C_{\ref{prop2_1a}} \langle u,\hat{v}_x\rangle^2 \, , \end{aligned} $$ which implies the assertion. 
\end{proof} \begin{proof} (of Theorem \ref{th1}) First let $u\in C_c^1 (\mathbb{R} )$. Then Proposition \ref{prop2_1} implies the estimate $$ \begin{aligned} \langle \nu \Delta u + bf^\prime (\hat{v}) u , u \rangle & \le - \nu \frac{\kappa}{\kappa + \left( \frac c{2\nu}\right)^2} \int h_x^2\, w^2\, dx \\ & \qquad + \nu \left(\frac c{2\nu}\right)^2 C_{\ref{prop2_1a}}\langle u , \hat{v}_x\rangle^2 \, . \end{aligned} $$ Combining the last estimate with the previous Lemma \ref{lem2_1}, we obtain that $$ \begin{aligned} \langle \nu \Delta u + b f^\prime (\hat{v})u,u\rangle & \le - \frac{\kappa}{\kappa + \left( \frac c{2\nu}\right)^2} \frac {\nu}{q_1} \|u\|_V^2 \\ & \qquad + \left(\frac\kappa{\kappa + \left( \frac c{2\nu}\right)^2} \frac{\nu q_2}{q_1} + \nu \left(\frac c{2\nu}\right)^2 C_{\ref{prop2_1a}} \right) \langle u,\hat{v}_x\rangle^2\, \end{aligned} $$ which implies Theorem \ref{th1} with $$ \kappa_\ast = \frac{\kappa}{\kappa + \left( \frac c{2\nu}\right)^2} \frac {\nu}{q_1} $$ and $$ C_\ast = \left( \kappa_\ast q_2 + \frac{\nu}{\kappa} \left(\frac c{2\nu}\right)^2 \left( \kappa + \left(\frac c{2\nu}\right)^2\right) \frac{\int e^{-\frac c\nu x }\hat{v}_x^2\, dx}{\left(\int e^{-\frac c{2\nu}x }\hat{v}_x^2\, dx \right)^2} \right)\, . $$ \end{proof} \medskip \noindent It remains to prove the weighted Hardy-type inequality \eqref{WeightedHardy}, which is of independent interest. \begin{proposition} \label{PropHardy} Let $w\in C_b^2 (\mathbb{R})$ and $\theta = \frac{w_x}{w}$. Suppose that $$ \inf_{x\in\mathbb{R}} \left( - \theta ' (x) + \theta^2 (x) \right) \ge \kappa > 0 $$ and that there exists $\hat{x}$ such that $\theta (\hat{x}) = 0$. Then $$ \int h^2\, w^2\, dx \le \frac 1\kappa \int h_x^2\, w^2\, dx $$ for any $h\in C_b^1 (\mathbb{R})$ with $h(\hat{x}) = 0$. 
\end{proposition} \begin{proof} Define the function $g(x) := \left(-\theta ' (x) + \theta^2 (x)\right)\exp \left( - \int_{\hat{x}}^x \theta (s)\, ds \right)$ and notice that $$ \exp \left( - \int_{\hat{x}}^x \theta (s)\, ds \right) = \exp \left( - \log w (x) + \log w (\hat{x})\right) = \frac{w(\hat{x})}{w(x)} $$ and thus $$ g(x) = \left( - \theta ' (x) + \theta ^2 (x)\right) \frac{w(\hat{x})}{w(x)} \ge \kappa \frac{w(\hat{x})}{w(x)}\, . $$ Then for $x\ge \hat{x}$ we have that $$ \begin{aligned} \left( h(x) - h(\hat{x} )\right)^2 & = \left( \int_{\hat{x}}^x h_x (s)\, ds \right)^2 \le \int_{\hat{x}}^x \frac 1{g(s)} h_x^2 (s)\, ds \int_{\hat{x}}^x g(s)\, ds \\ & = \int_{\hat{x}}^x \frac 1{g(s)} h_x^2 (s) \, ds \left( - \theta (x)\exp\left( -\int_{\hat{x}}^x \theta (s)\, ds\right) \right) \\ & = \int_{\hat{x}}^x \frac 1{g(s)} h_x^2 (s) \, ds \left( - \frac{w_x (x)}{w(x)} \frac{w (\hat{x} )}{w(x)}\right) \\ & \le \frac 1\kappa \int_{\hat{x}}^x \frac{w(s)}{w(\hat{x})} h_x^2 (s) \, ds \left( - \frac{w_x (x)}{w(x)} \frac{w (\hat{x} )}{w(x)}\right) \, . \end{aligned} $$ Here the first inequality follows from the Cauchy--Schwarz inequality. Integrating against $w^2\, dx$ for $x\ge \hat{x}$ and applying Fubini's theorem now yields the following estimate \begin{equation} \label{eq_1_2_1} \begin{aligned} \int_{\hat{x}}^\infty \left( h - h(\hat{x} )\right)^2 w^2 \, dx & \le \frac 1\kappa \int_{\hat{x}}^\infty \frac{w(s)}{w(\hat{x})} h_x^2 (s) \int_s^\infty - w_x (x) w(\hat{x})\, dx \, ds \\ & \le \frac 1\kappa \int_{\hat{x}}^\infty h_x^2 (s) w^2 (s) \, ds \, . 
\end{aligned} \end{equation} Similarly, for $x\le \hat{x}$ we have that $$ \begin{aligned} \left( h(\hat{x}) - h(x)\right)^2 & = \left( \int_x^{\hat{x}} h_x (s)\, ds \right)^2 \le \int_x^{\hat{x}} \frac 1{g(s)} h_x^2 (s)\, ds \int_x^{\hat{x}} g(s)\, ds \\ & = \int_x^{\hat{x}} \frac 1{g(s)} h_x^2 (s) \, ds \left( \theta (x) \exp \left(- \int_{\hat{x}}^x \theta (s)\, ds\right) \right) \\ & = \int_x^{\hat{x}} \frac 1{g(s)} h_x^2 (s) \, ds \frac{w_x (x)}{w(x)} \frac{w (\hat{x} )}{w(x)} \\ & \le \frac 1\kappa \int_x^{\hat{x}} \frac{w(s)}{w(\hat{x})} h_x^2 (s) \, ds \frac{w_x (x)}{w(x)} \frac{w (\hat{x} )}{w(x)} \, . \end{aligned} $$ Integrating against $w^2\, dx$ now for $x\le \hat{x}$ and applying Fubini's theorem yields \begin{equation} \label{eq_1_2_2} \begin{aligned} \int_{-\infty}^{\hat{x}} \left( h-h(\hat{x} )\right)^2 & w^2\, dx \\ & \le \frac 1\kappa \int_{-\infty}^{\hat{x}} \frac{w(s)}{w(\hat{x})} h_x^2 (s) \int_{-\infty}^{s} w_x (x) w(\hat{x})\, dx \, ds \\ & \le \frac 1\kappa \int_{-\infty}^{\hat{x}} h_x^2 (s) w^2 (s) \, ds \, . \end{aligned} \end{equation} The assertion now follows from estimates \eqref{eq_1_2_1} and \eqref{eq_1_2_2}. \end{proof} \bigskip \noindent {\bf Acknowledgement} This work is supported by the BMBF, FKZ 01GQ1001B.
\submitted{The Astrophysical Journal, {\it in press}} \title{X-ray Observations of Gravitationally Lensed Quasars: Evidence for a Hidden Quasar Population} \author{G. Chartas\altaffilmark{1}} \altaffiltext{1}{Astronomy and Astrophysics Department, The Pennsylvania State University, University Park, PA 16802.} \begin{abstract} X-ray observations are presented of gravitationally lensed quasars with redshifts ranging between 1 and 4. The large magnification factors of gravitationally lensed (GL) systems allow us to investigate the properties of quasars with X-ray luminosities that are substantially lower than those of unlensed ones and also provide an independent means of estimating the contribution of faint quasars to the hard X-ray component of the cosmic X-ray background. Spectral indices have been estimated in the rest frame energy bands 0.5 - 1 keV (soft), 1 - 4 keV (mid) and 4 - 20 keV (hard). Our spectral analysis indicates a flattening of the spectral index in the hard band for 2 radio-loud quasars in the GL quasar sample for which the data have moderate signal-to-noise ratio. These results are consistent with the reported spectral properties of non-lensed radio-loud quasars; however, there are no indications of spectral hardening towards fainter X-ray fluxes. We have identified a large fraction of Broad Absorption Line (BAL) quasars amongst the GL quasar population. We find that approximately 35$\%$ of radio-quiet GL quasars contain BAL features, which is significantly larger than the 10$\%$ fraction of BAL quasars presently found in optically selected flux limited quasar samples. We present a simple model that estimates the effects of attenuation and lens magnification on the luminosity function of quasars and that explains the observed fraction of GL BAL quasars. These observations suggest that a large fraction of BAL quasars are missed from flux limited optical surveys. 
Modeling of several X-ray observations of the GL BAL quasar PG1115+080 suggests that the observed large X-ray variability may be caused in part by a variable intrinsic absorber, consistent with previously observed variability of the BAL troughs in the UV band. The observed large X-ray flux variations in PG1115+080 offer the prospect of considerably reducing errors in determining the time delay with future X-ray monitoring of this system and hence constraining the Hubble constant H$_{0}$. \end{abstract} \keywords{gravitational lensing --- quasars: ---X-rays: galaxies} \section{INTRODUCTION} Several attempts have been made to characterize the properties of distant and faint quasars and compare them to those of relatively nearby and bright ones (Bechtold et al. 1994; Elvis et al. 1994; Vikhlinin et al. 1995; Cappi et al. 1997; Schartel et al. 1996; Laor et al. 1997; Yuan et al. 1998; Brinkmann et al. 1997; Reeves et al. 1997; Fiore et al. 1998). The evolution of quasar properties is in part studied by identifying changes in spectral properties with redshift. Information obtained from estimating the X-ray properties of quasars as a function of X-ray luminosity and redshift may be useful in constraining physical accretion disk models that explain the observed AGN continuum emission. The study of X-ray properties of quasars with relatively low luminosity may also provide clues to the nature of the remaining unresolved portion of the hard component of the cosmic X-ray background (XRB) (see, for example, Inoue et al. 1996; Ueda 1996; Ueda et al. 1998). The Gravitational Lensing (GL) effect has been widely employed as an analysis tool in a variety of astrophysical situations. The study of GL systems (GLS) in the radio and optical community has proven to be extremely rewarding by providing constraints on cosmological parameters $\Lambda$ and H$_{0}$ (Kochanek 1996; Falco et al. 1997), by probing the evolution of mass-to-light ratios of high redshift galaxies (Keeton et al. 
1998), by probing the evolution of the interstellar medium of distant galaxies (Nadeau et al. 1991), and by determining the total mass, spatial extent and ionization conditions of intervening absorption systems. The study of GL quasars in the X-ray band has been limited until now, the main limiting factor being the collecting area and spectral resolution of current X-ray telescopes combined with the small angular separations of lensed quasar images. One of the main objectives of this paper is to make use of the magnification effect of GL systems to investigate the X-ray properties of faint radio-loud and radio-quiet quasars. In many cases the available X-ray spectra of distant quasars have relatively low signal-to-noise ratio (S/N). This has led to the development of various techniques to aid the study of faint quasars with 0.2 - 2 keV X-ray fluxes below 1 $\times$ 10$^{-14}$ erg s$^{-1}$ cm$^{-2}$. Most of the observational and analysis techniques employed to date to study the evolution and spectral emission mechanism of faint quasars are in general slight variations of two distinct approaches. On the one hand we find techniques that are based on summing the individual spectra of many faint X-ray sources taken from a large and complete sample (see, for example, Schartel et al. 1996; Vikhlinin et al. 1995). The goal of stacking is to obtain a single, high signal-to-noise ratio spectrum that contains enough counts to allow spectral fitting with quasar emission models. In some cases, where the initial sample is large enough, the spectra can be summed into bins of X-ray flux, redshift and radio luminosity class. Schartel et al. (1996) applied the stacking technique to a complete quasar sample and found that the mean spectral index for the stacked radio-quiet quasar spectrum was significantly ($\sim2\sigma$) steeper than that of the stacked radio-loud quasar spectrum. 
Several of the assumptions made in this analysis, related to the general properties of the quasars, may however strongly influence the further interpretation of the results. It was assumed, for example, that quasar spectra follow single power-laws over rest frame energies ranging between (1+z)$E_{min}$ and (1+z)$E_{max}$, where $E_{min}$ and $E_{max}$ are the minimum and maximum energy bounds, respectively, of the bandpass of the X-ray observatory and z is the redshift of the quasar. Analyzed in the observer's frame, quasar spectra with concave spectral shapes would appear flatter with increasing redshift as a consequence of the cosmological redshift. Any implications of evolutionary change within the quasar spectrum derived from the analysis of stacked spectra in observed frames would therefore need to include the effect of cosmological redshift. Detailed spectra of quasars, however, indicate in general the presence of several components, each associated with a different physical process. For example, several known processes contributing to the observed X-ray spectra are Compton reflection by the disk of photons originating in the disk coronae, which becomes significant at rest energies above $\sim$ 10keV; inverse Compton scattering of UV photons originating from the disk by electrons in the disk coronae, resulting in a boost of photon energies from the UV range into the X-ray range (this is the mechanism that produces the observed power-law spectrum in the 2-10keV range); accretion disk emission; absorption by highly ionized gas (warm absorbers); beamed X-ray emission from jets, which may be a large contributor for distant radio-loud quasars; absorption by accretion disk winds; and intervening absorption by damped Lyman alpha systems. The stacked spectrum therefore contains contributions from quasars of different redshifts and possibly different spectral shapes, which makes the interpretation of the results difficult. Vikhlinin et al. 
1995, using the ROSAT EMSS sample of 2678 sources, produced stacked spectra within several flux bins. They find a significant continuous flattening of the fitted spectral slopes from higher towards lower X-ray fluxes. One interesting result of their study is that the spectral slope at the very faint end is approximately equal to the slope of the hard (2 - 10keV) X-ray background. The unknown nature of many of the point sources included in the Vikhlinin sample, the inclusion of sources with different redshifts and the calculation of observed-frame spectral indices complicate the interpretation of the results. A second technique used to study the general properties of quasars is based on obtaining deep X-ray observations of a few quasars. One advantage of such an approach is that the properties of individual quasars are not smeared out as with stacking methods. The faint fluxes however require extremely long observing times to achieve usable S/N. When total counts are low the quasar X-ray spectra are commonly characterized by a hardness ratio defined as $R = (H - S)/(H + S)$, where H and S are the number of counts within some defined hard and soft energy band in the observer's frame respectively. In this paper we outline an alternative approach to investigating the emission mechanism of radio-loud and radio-quiet quasars at high redshift. The gravitational lensing magnification of distant quasars allows us to investigate the X-ray properties of quasars with luminosities relatively lower than those of unlensed quasars of similar redshifts. The amplification factors produced by lensing depend on the geometry of the lensing system and for our sample range between 2 and 30. The moderate-S/N spectra of our sample allow us to employ spectral models with multiple power-law slopes and perform fits in rest-frame energy bands. 
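The hardness-ratio characterization described above reduces to a one-line computation; the following sketch uses purely illustrative count numbers (the function name and values are not from the paper):

```python
# Hardness ratio R = (H - S)/(H + S) for a low-count X-ray source,
# as defined in the text; the counts below are purely illustrative.

def hardness_ratio(soft_counts, hard_counts):
    """R ranges from -1 (all counts soft) to +1 (all counts hard)."""
    total = soft_counts + hard_counts
    if total == 0:
        raise ValueError("no counts in either band")
    return (hard_counts - soft_counts) / total

# Hypothetical example: 120 soft-band and 80 hard-band counts
R = hardness_ratio(120, 80)   # -> -0.2, i.e. a somewhat soft source
```

A softer spectrum (or stronger soft response of the detector) pushes R towards -1, while intrinsic absorption, which removes soft counts preferentially, pushes R towards +1.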
Our analysis makes use of the GL amplification effect to extend the study of quasar properties to unlensed X-ray flux levels as low as a few $\times$ 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$. The limiting sensitivity of the ROSAT All-Sky Survey, for example, on which many recent studies are based, is a few $\times$ 10$^{-13}$ erg s$^{-1}$ cm$^{-2}$. For a GL system with a lens that can be modeled well with a singular isothermal sphere (SIS) model, the amplification is straightforward to derive analytically. In most observed cases, however, the deflector is a galaxy or cluster of galaxies with a gravitational potential that does not follow the SIS model, and more sophisticated potential models need to be invoked to successfully model these GL systems. To estimate the intrinsic X-ray luminosity of GL quasars in our sample we have incorporated magnification factors determined from modeling of the GL systems with a variety of lens potentials. We performed fits of spectral models to the X-ray data in three rest energy bins, soft from 0.5 - 1 keV, mid from 1 - 4 keV, and high from 4 - 20 keV. Working in the quasar rest frame as opposed to the observer's rest frame allows us to distinguish between true spectral evolution of quasars with redshift and apparent changes due to the cosmological shifting of quasar spectra through a fixed energy window in the observer's rest frame. Our search of the ROSAT and ASCA archives yielded 16 GL systems detected in X-rays out of a total of approximately 40 GL candidates. Six of the GL quasars have observed X-ray spectra of medium-S/N. Our search for X-ray counterparts to known GL quasars resulted in the identification of the relatively X-ray bright radio-quiet quasar SBS0909+532 with an estimated 0.2-2 keV flux of about 7 $\times$ 10$^{-13}$ erg s$^{-1}$ cm$^{-2}$. Another interesting result of our search was the identification of a relatively large fraction of the radio-quiet GL quasars as BAL quasars. 
In particular we find that at least 35\% of the known radio-quiet GL quasars are BAL quasars. This value is significantly larger than the $\sim$ 10\% value presently quoted from optical surveys. In section 2 we present details of X-ray observations of GL quasars and describe the analysis techniques used to extract and fit the X-ray spectra. Estimates of the flux magnification factors and unlensed luminosities for the GL systems studied in this paper are presented in section 3. A description of the properties of each GL quasar is presented in section 4. Included in this section are results from spectral modeling of several X-ray observations of the variable GL BAL quasar PG1115+080. Finally, in section 5 we summarize the spectral properties of faint quasars as implied by spectral fits to a sample of GL quasar spectra and provide a plausible explanation for the apparently large fraction of GL BAL quasars that we observe. \section{X-RAY OBSERVATIONS AND DATA ANALYSIS} The X-ray observations presented here were performed with the ROSAT and ASCA observatories. Results for the spectral analyses in the X-ray band for the GL quasars Q0957+561, HE1104-1805, PKS1830-211 and Q1413+117 have already been published (Chartas et al. 1995, 1998; Reimers et al. 1995; Mathur et al. 1997; Green \& Mathur, 1996) while results from X-ray spectral analyses for the quasars SBS0909+532, B1422+231, PG1115+080, 1208+1011 and QJ0240-343 are presented here for the first time. We have included in Table 1 several additional GL systems observed in the X-ray band. These observations however yielded either very low-S/N detections or were made with the ROSAT HRI which provides very limited spectral information. X-ray spectra of the GL quasars Q0957+561, 1422+231, HE1104-1805, SBS0909+532 and PG1115+080 with best fit models are presented in Figure 1 through Figure 5 respectively. 
\begin{figure*}[t] \plotfiddle{Chartas_tab1.ps}{5.5in}{0}{100.}{100.}{-340}{-250} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab2.ps}{3.4in}{0}{100.}{100.}{-340}{-430} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab3.ps}{4.3in}{0}{100.}{100.}{-340}{-345} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig1.ps}{4.19in}{90}{60.}{60.}{230}{-20} \protect\caption {\footnotesize ROSAT PSPC and ASCA GIS spectra of Q0957+561 with best fit models. \label{fig:fig1} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig2.ps}{4.185in}{90}{60.}{60.}{230}{-20} \protect\caption {\footnotesize ASCA SIS spectrum of 1422+231 with best fit models. \label{fig:fig2} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig3.ps}{4.185in}{90}{60.}{60.}{230}{-20} \protect\caption {\footnotesize ROSAT PSPC and ASCA SIS spectra of HE1104-1805 with best fit models. \label{fig:fig3} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig4.ps}{4.185in}{90}{60.}{60.}{230}{-20} \protect\caption {\footnotesize ROSAT PSPC spectrum of SBS0909+532 with best fit model. \label{fig:fig4} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab4.ps}{3.in}{0}{100.}{100.}{-340}{-450} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig5.ps}{4.185in}{90}{60.}{60.}{230}{-20} \protect\caption {\footnotesize Einstein IPC and ROSAT PSPC spectra of PG1115+080 observed on Dec 5, 1979 and Nov 21, 1991 respectively, accompanied by best fit models. \label{fig:fig5} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig6.ps}{4.185in}{90}{60.}{60.}{230}{-20} \protect\caption {\footnotesize Estimated 0.2-2keV flux levels of PG1115+080 for the three available X-ray observations. 
\label{fig:fig6} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab5.ps}{3.in}{0}{100.}{100.}{-340}{-440} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab6.ps}{2.3in}{0}{100.}{100.}{-340}{-495} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab7.ps}{2.2in}{0}{100.}{100.}{-340}{-490} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab8.ps}{4.in}{0}{100.}{100.}{-340}{-380} \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_tab9.ps}{3.5in}{0}{100.}{100.}{-340}{-440} \end{figure*} The wide separation quasar pairs MG 0023+171, Q1120+0195, LBQS 1429-008 and QJ0240-343 are considered problematic GL candidates that may be binary quasars and not gravitational lenses (Kochanek et al. 1999). However, recent STIS spectroscopy of Q1120+0195 (UM425) has revealed broad absorption line features in both lensed images, thus confirming the lens nature of this wide separation system (Smette et al. 1998). The origin of the detected X-ray emission for the GL systems RXJ0921+4528 and RXJ0911.4+0551 cannot be determined from the presently available poor-S/N ASCA GIS and ROSAT HRI observations respectively. The most likely origins are the lensed quasar and/or a possible lensing cluster. In Table 1 we list the ASCA and ROSAT observations of GL quasars with detected X-rays. The spatial resolution for on-axis pointing of the ROSAT HRI, ROSAT PSPC and ASCA GIS is about 5$''$, 25$''$ and 3$'$ respectively. Thus only for the ROSAT HRI observation of Q0957+561 was it possible to resolve the lensed X-ray images (Chartas et al. 1995). For the data reduction of the ASCA and ROSAT observations we used the FTOOLS package XSELECT. The ASCA SIS data used in this study were all taken in BRIGHT mode and high, medium and low bit rate data were included in the analysis. We created response matrices and ancillary files for each chip using the FTOOLS tasks {\tt sisrmg} and {\tt ascaarf} respectively. 
For the ROSAT PSPC data reduction we used the response matrix {\tt pspcb\_gain2\_256.rmf} supplied by the ROSAT GOF and created ancillary files using the FTOOLS task {\tt pcarf}. Net events shown in Table 1 are corrected for background, vignetting, exposure and point spread function effects using the source analysis tool {\it SOSTA} (part of the software package XIMAGE). For the spectral analyses we are mostly limited by the energy resolution and counting statistics to fitting simple absorbed power-law models to the data. In addition to spectral fits in the quasar rest frame we also performed spectral fits in the observer's reference frame to facilitate the comparison of our results to those of previous studies. In Table 2 we show the results from fits of absorbed power-law models within three quasar rest frame energy intervals. In most cases no data are available in the soft interval since the corresponding observed energy interval is redshifted below the low energy quantum efficiency cutoff of the ROSAT XRT/PSPC. To estimate any significant difference between the spectral properties of our GL sample and those of unlensed quasar samples we computed the merit function, ${{\chi^{2}}\over{N}}$, defined by the expression, \begin{equation} {{\chi^{2}}\over{N}} = \sum_{i=1}^{N} {{[\alpha_{\nu,GL}(i) - \alpha_{\nu,UL}(i)]^{2}}\over{{\sigma_{UL}}^{2}(i)}} \end{equation} \noindent where $\alpha_{\nu,GL}$(i) and $\alpha_{\nu,UL}$(i) are the spectral indices of the GL and unlensed quasar samples respectively, $\sigma_{UL}(i)$ are the errors of the spectral indices of the unlensed samples and N is the number of spectral indices compared. We computed the merit function between our data set and that of Fiore et al. (1998). For the comparison we computed the spectral indices $\alpha_{S}$(0.1 - 0.8 keV) and $\alpha_{H}$(0.4 - 2.4 keV) in the quasar observed frames. 
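Equation (1) can be evaluated directly once the two sets of spectral indices are in hand; in the sketch below the index values, errors and sample size are placeholders, not the actual values of either sample:

```python
# chi^2/N merit function between lensed (GL) and unlensed (UL) spectral
# indices, following equation (1); all numbers are hypothetical.

def merit(alpha_gl, alpha_ul, sigma_ul):
    """Mean weighted squared deviation between two index sets."""
    assert len(alpha_gl) == len(alpha_ul) == len(sigma_ul)
    n = len(alpha_gl)
    chi2 = sum((g - u) ** 2 / s ** 2
               for g, u, s in zip(alpha_gl, alpha_ul, sigma_ul))
    return chi2 / n

# Hypothetical spectral indices and errors for N = 4 objects
alpha_gl = [1.1, 1.4, 0.9, 1.2]
alpha_ul = [1.0, 1.5, 1.0, 1.1]
sigma_ul = [0.2, 0.2, 0.3, 0.2]
score = merit(alpha_gl, alpha_ul, sigma_ul)
```

Values of the merit function near or below 1 indicate that the two index sets agree to within the quoted errors, which is the sense in which the $\sim 0.5$ value quoted below implies no significant difference.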
Incorporating the 1$\sigma$ uncertainties in the fitted values for the GL spectral indices we obtain a distribution of values for the merit function with a most likely value of ${{\chi^{2}}\over{N}} \sim 0.5$ with N = 7. We therefore conclude that there is no significant difference at the $\Delta\alpha_\nu$ = 0.3 level between our lensed sample and the Fiore et al. 1998 sample. Spectral modeling of the radio-loud and (non-BAL) radio-quiet quasars of our GL sample that have unlensed X-ray fluxes ranging between 3 $\times$ 10$^{-16}$ and 1 $\times$ 10$^{-12}$ erg s$^{-1}$ cm$^{-2}$ yields indices that are consistent with those of brighter quasars (see Figure 7). In spite of the fact that most of the quasars in our sample have intervening absorption systems (see comments on individual systems), we find that the estimated photon indices for the very faint non-BAL quasars of our sample do not approach the level of 1.4 of the hard X-ray background. In Figure 8 we also show that the photon indices of our GL quasar sample do not show any signs of hardening over a range of three orders of magnitude in unlensed 2-10keV luminosity. The three apparently harder spectra of Figure 7 and Figure 8 correspond to two BAL quasars and one absorbed blazar of our sample. The X-ray spectra of BAL quasars that are modeled as power-laws with Galactic absorption and no intrinsic absorption will erroneously imply relatively low photon indices. For example, spectral fits of a simple power-law plus Galactic absorption model to the X-ray spectrum of PG1115+080 (Fit 1, Table 8) yield a relatively low photon index of 1.4. The presently available spectra of the two BAL quasars of our sample have poor S/N and cannot provide significant constraints on the intrinsic absorber column densities. 
\section{QUASAR UNLENSED LUMINOSITY} The apparent surface brightness of gravitationally lensed images is a conserved quantity; however, the observed X-ray flux is amplified due to the geometric distortion of the GL images. The lensed quasar images of our sample are not spatially resolved, so we only observe the total magnification of the X-ray flux and not the spatial distortion of the image. Gravitational lensing is in general an achromatic effect; however, possible differential absorption in the multiple images, microlensing from stars in the lens galaxy and source spectral variability combined with the expected time delay between photon arrival for each image may produce distinct features in the multiply lensed spectrum of the quasar. We estimated the unlensed X-ray luminosity of the quasars in our sample by scaling the lensed luminosity determined from the spectral fits by the GL magnification factors. The magnification parameters were derived from fits of singular isothermal ellipsoid (SIE) lens models (Keeton, Kochanek, \& Falco 1997) to optical and radio observables (e.g. image and lens positions and flux ratios), incorporating the best fit parameters to derive the convergence, $\kappa$, of the lens. For a SIE lens the convergence $\kappa({\bf x})$ is given by, \begin{equation} \kappa({\bf x}) = {{b}\over{x\sqrt{1 + e\;\cos{(2(\theta - \theta_{0}))}}}} \end{equation} where $b$ is the best-fit critical radius, $x$ is the distance from the lens galaxy center, $\theta$ is the position angle of point ${\bf x}$ with respect to the lens galaxy, $e$ is the ellipticity parameter of the lens and $\theta_{0}$ is the major axis position angle. The magnification $\mu(x)$ of each lensed image for an SIE lens is given by (Kormann, Schneider, \& Bartelmann 1994), \begin{equation} \mu(x) = {1 \over{ (1 - 2\kappa)}} \end{equation} In Table 3 we provide GL model parameters and magnification factors for several GL systems. 
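Equations (2) and (3) can be combined to go from fitted lens parameters to a magnification estimate; in the sketch below the critical radius, ellipticity, position angles and image distance are purely illustrative and do not correspond to any entry of Table 3:

```python
import math

# Convergence kappa and image magnification mu for a singular isothermal
# ellipsoid (SIE) lens, following equations (2) and (3); all lens
# parameters used here are hypothetical.

def kappa_sie(b, x, theta, e, theta0):
    """Convergence at distance x from the lens center (same units as b)
    and position angle theta (radians)."""
    return b / (x * math.sqrt(1.0 + e * math.cos(2.0 * (theta - theta0))))

def magnification(kappa):
    """Image magnification mu = 1/(1 - 2*kappa); for kappa < 1/2 the
    image has positive parity and mu > 1 when kappa > 0."""
    return 1.0 / (1.0 - 2.0 * kappa)

# Hypothetical lens: critical radius 1.0", image at 2.5", e = 0.2,
# image and major axis both at position angle 0
k = kappa_sie(b=1.0, x=2.5, theta=0.0, e=0.2, theta0=0.0)
mu = magnification(k)

# The unlensed luminosity is then the fitted (lensed) value divided
# by the total magnification summed over all images.
```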
\section{COMMENTS ON INDIVIDUAL SOURCES} \subsection{The Newly Identified GL X-Ray Source SBS 0909+532} The radio-quiet quasar SBS0909+532 was recently identified as a candidate gravitational lens system with a source redshift of 1.377, and an image separation of 1.107$''$. The lens has not yet been clearly identified, however, GL statistics place the most likely redshift for the lens galaxy at z$_{l}$ $\simeq$ 0.5 with 90$\%$ confidence bounds of 0.18 $<$ z$_{l}$ $<$ 0.83 (Kochanek et al. 1997). Optical spectroscopy (Kochanek et al. 1997; Oscoz et al. 1997) has identified heavy element absorption lines of CIII, FeII and Mg II at z = 0.83. The optical data at this point cannot clearly discern whether the heavy-element absorber is associated with the lensing galaxy. We searched the HEASARC archive and found a bright X-ray source within 7$''$ of the optical location of SBS0909+532, well within the error bars of the ROSAT pointing accuracy of $\sim$ 30$''$. The position of the X-ray counterpart as determined using the {\it detect} routine, which is part of the XIMAGE software package, is 09h 13m 1.7s, 52$^{\circ}$ 59$'$ 39.5$''$ (J2000), whereas the optical source coordinates of SBS0909+532 are 09h 13m 2.4s, 52$^{\circ}$ 59$'$ 36.4$''$ (J2000). This X-ray counterpart was observed serendipitously with the ROSAT PSPC on April 17 1991, April 28 1992 and October 27 1992 with detected count rates of 0.057 $\pm$ 0.006, 0.064 $\pm$ 0.005 and 0.09 $\pm$ 0.01 cnts s$^{-1}$ respectively. We performed simultaneous spectral fits in the observer frame to the three ROSAT PSPC observations. The results are summarized in Table 4. We considered two types of spectral models. In fit 1 of Table 4 we incorporated a redshifted power-law plus Galactic absorption and in fit 2 we included additional absorption at a redshift of 0.83 (possible lens redshift). Fits 1 and 2 yield acceptable reduced $\chi^{2}$(dof) of 1.00(32) and 1.02(31) respectively. 
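The absorbed power-law models used in fits of this kind have the schematic form $N(E) = K E^{-\Gamma} e^{-N_H \sigma(E)}$. The sketch below uses a crude $\sigma(E)\propto E^{-3}$ approximation to the photoelectric cross-section and an illustrative normalization; real fits use tabulated cross-sections, and only the Galactic column value is taken from the text:

```python
import math

# Schematic absorbed power-law photon spectrum,
#   N(E) = K * E^(-Gamma) * exp(-N_H * sigma(E)),
# with a rough sigma(E) ~ E^-3 scaling for the photoelectric
# cross-section (assumption; real analyses use tabulated values).

def sigma_photo(E_keV):
    # Assumed ~2e-22 cm^2 per H atom at 1 keV, falling as E^-3
    return 2e-22 * E_keV ** -3

def absorbed_powerlaw(E_keV, K=1.0, Gamma=2.0, N_H=1.72e20):
    """Photon flux density in arbitrary units; N_H in cm^-2.
    Default N_H is the Galactic column quoted for SBS0909+532."""
    return K * E_keV ** -Gamma * math.exp(-N_H * sigma_photo(E_keV))

# Absorption suppresses the soft end far more than the hard end:
soft = absorbed_powerlaw(0.3) / 0.3 ** -2.0   # transmitted fraction at 0.3 keV
hard = absorbed_powerlaw(2.0) / 2.0 ** -2.0   # transmitted fraction at 2 keV
```

This soft-end suppression is why a fit that omits an intrinsic absorber can return a misleadingly flat photon index, as discussed for the BAL quasars above.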
We can rule out absorption columns at z = 0.83 of more than 7.5 $\times$ 10$^{20}$ cm$^{-2}$ at the 68.3\% confidence level. We also performed spectral fits in the quasar rest frame bands 0.5-1 keV (soft band) and 1-4 keV (mid band). Simple spectral fits with an absorbed power-law assuming Galactic absorption of 1.72 $\times$ 10$^{20}$ cm$^{-2}$ result in spectral indices of $1.62^{+1.0}_{-0.64}$ and $2.25^{+0.74}_{-0.78}$ for the soft and mid bands, respectively. All errors quoted in this paper are at the 68.3$\%$ confidence level unless mentioned otherwise. No X-ray data are presently available for the high energy band 4-20 keV. The ROSAT PSPC can only detect photons with energies up to about 3 keV in the rest frame of SBS0909+532. In Figure 4 we show the ROSAT PSPC spectrum of SBS0909+532 together with the best fit absorbed power-law model. \subsection{B1422+231} B1422+231 is a well studied quadruple GL system with the lensed source being a radio-loud quasar at a redshift of 3.62 (Patnaik et al. 1992) and the lens consisting of a group of galaxies at a redshift of about 0.34 (Tonry, 1998). C IV doublets were found at redshifts of 3.091, 3.382, 3.536 and 3.538 (Bechtold et al. 1995). Strong Mg II and Mg I absorption lines at z = 0.647 have been identified in the quasar spectrum (Angonin-Willaime et al. 1993). X-ray observations of B1422+231 were made on Jan 14, 1995 for about 21.5 ks and July 17, 1995 for about 13 ks with the ASCA satellite. The spectra were extracted from circular regions of 2.5$'$ in radius centered on B1422+231 and the backgrounds were estimated from similar sized circular regions located in a source-free region on the second CCD. We first modeled the spectra of the two observations separately. Spectral fits in the observer's frame, incorporating power-law models and absorption due to Galactic cold material, yield photon indices of 1.55$_{-0.08}^{+0.08}$ and 1.46$_{-0.1}^{+0.1}$ for the Jan 1995 and July 1995 observations, respectively.
We searched for possible departures from single power-law models by considering broken power-law models with a break energy fixed at 4 keV (rest frame). The Jan 14, 1995 data are suggestive of spectral flattening at higher energies, while the poor S/N of the July 1995 spectrum cannot significantly constrain the spectral slopes. The 2-10 keV X-ray fluxes for the Jan 14, 1995 and July 17, 1995 observations of B1422+231 are estimated to be 1.70$_{-0.37}^{+0.46}$ and 1.93$_{-0.35}^{+0.40}$ $\times$ 10$^{-12}$ erg s$^{-1}$ cm$^{-2}$, respectively (fits 1 and 4 in Table 5). Spectral fits in the quasar mid and high rest-frame bands for the Jan 14, 1995 observation with absorbed power-law models and assuming Galactic absorption of 2.52 $\times$ 10$^{20}$ cm$^{-2}$ yielded spectral indices of $2.02^{+0.46}_{-0.53}$ and $1.66^{+0.13}_{-0.12}$, respectively. \subsection{HE1104-1805} HE1104-1805 is a GL radio-quiet high redshift (z = 2.316) quasar with an intervening damped Ly$\alpha$ system and a metal absorption system at z = 1.66, and a Mg II absorption system at z = 1.32. Recent deep near-IR imaging of HE1104-1805 (Courbin, Lidman, \& Magain, 1998) detects the lensing galaxy at a redshift of 1.66, thus confirming the lens nature of this system. HE1104-1805 was observed with the ROSAT satellite on June 15 1993 for 13100 sec and with the ASCA satellite on May 31 1996 for 35989 sec with SIS0 and 35597 sec with SIS1. Reimers et al. (1995) have fit the ROSAT spectrum of HE1104-1805 in the 0.2-2 keV range with an absorbed power law model and find a photon index of 2.24 $\pm$ 0.16, consistent with our fitted value of 2.05 $\pm$ 0.2. The main difference between the Reimers et al. (1995) and Chartas (1999) models, used for the fits to the ROSAT spectrum of HE1104-1805, is that the former model allows the column density to be a free parameter in the spectral fit while in the latter model the column density is frozen to the Galactic value of 0.045 $\times$ 10$^{22}$ cm$^{-2}$.
For the data reduction of the ASCA SIS0 and SIS1 observations we extracted grade 0, 2, 3, and 4 events within circular regions centered on HE1104-1805 and with radii of 3.2$'$. The background was estimated by extracting events within circular regions in source-free areas. High, medium and low bit rate data were combined and only Bright mode data were used in the analysis. We performed several spectral fits to the extracted ASCA spectrum with results summarized in Table 6. A simple spectral fit in the observer's frame with an absorbed power-law model yields an acceptable fit with a photon index of 1.91$_{-0.06}^{+0.06}$ and a 2-10 keV flux of about 9.4$_{-1.4}^{+1.5}$ $\times$ 10$^{-13}$ erg s$^{-1}$ cm$^{-2}$. In Figure 3 we show the fit of this model to the ASCA data. Spectral fits to the ASCA and ROSAT X-ray spectra with absorbed power-law models in the mid and high energy bands result in spectral indices of $1.93^{+0.27}_{-0.28}$ and $2.01^{+0.1}_{-0.1}$, respectively. For the spectral model we assumed Galactic absorption with N$_{H}$ = 4.47 $\times$ 10$^{20}$ cm$^{-2}$. \subsection{The Variable GL BAL Quasar PG1115+080} Recent observations of PG1115+080 in the far-UV with IUE (Michalitsianos et al. 1996) suggest the presence of a variable BAL region. In particular, O VI $\lambda$1033 emission and BAL absorption with peak outflow velocities of $\sim$ 6,000 km s$^{-1}$ were observed to vary over timescales of weeks down to about 1 day. Variations in the BAL absorption features may be due to changes in the ionization state of the BAL material that could lead to changes in the column density. A model proposed by Barlow et al. (1992) to explain the 1 day fluctuations considers the propagation of an ionization front in the BAL flow. We expect variations in the BAL column densities to also manifest themselves as large variations in the observed X-ray flux.
We searched the HEASARC archives and found that PG1115+080 was observed with the Einstein IPC on Dec 5 1979, the ROSAT PSPC on Nov 21, 1991 and with the ROSAT HRI on May 27 1994. Using the XIMAGE tool {\it detect} on the ROSAT HRI and PSPC images of the PG1115+080 observations and searching the NED database we found several X-ray sources within a 15 arcmin radius of PG1115+080. Most of these sources were detected in the ROSAT International X-ray Optical Survey (RIXOS). A list of their coordinates and NED identifications is shown in Table 7. The source extraction regions used in the analysis of the PG1115+080 event files were circles centered on PG1115+080 with radii of 1.5 arcmin and 4 arcmin for the PSPC and Einstein observations of PG1115+080, respectively. We excluded regions containing the nearby RIXOS sources. The background regions were circles in the near vicinity of PG1115+080. We performed various spectral fits to the PG1115+080 data with results summarized in Table 8. The observed Einstein IPC and ROSAT PSPC spectra of PG1115+080 accompanied by best fit models are shown in Figure 5. The X-ray observations of PG1115+080 with the ROSAT HRI show that all the detected X-ray emission is localized within a few arcseconds. We therefore do not expect any contamination from possible extended lenses. The HRI image of the field near PG1115+080 is shown in Figure 9. We modeled the observed spectra as power-laws with Galactic and intrinsic absorption. Our spectral fits to the Nov 21, 1991 observation imply absorption in excess of the Galactic value, with a modeled intrinsic absorption of 1.43$_{-1.3}^{+1.3}$ $\times$ 10$^{22}$ cm$^{-2}$ assuming a power-law photon index of 2.3 appropriate for high redshift radio-quiet quasars (Fiore et al. 1998; Yuan et al. 1998). For photon indices ranging between 2 and 2.6 the best fit values for the intrinsic absorption range between 0.2 and 3.5 $\times$ 10$^{22}$ cm$^{-2}$.
To evaluate the statistical significance of the intrinsic absorption we calculated the F statistic, formed by taking the ratio of the difference in $\chi^{2}$ between a fit with only Galactic absorption (fit 3 in Table 8) and a fit that in addition to Galactic assumes intrinsic absorption (fit 4 in Table 8) to the reduced $\chi^{2}$ of the latter fit. We find an F value of 21 between fits 3 and 4 (see Table 8), implying that the addition of an intrinsic absorption component improves the fit to the Nov 21 1991 observation of PG1115+080, with a probability of exceeding F by chance of about 0.005. Our spectral fits to the Dec 5 1979 observations do not indicate absorption in excess of the Galactic value. In contrast to the Nov 21 1991 observation of PG1115+080, the inclusion of intrinsic absorption into our model for spectral fits to the Dec 5 1979 observation produces a significantly larger reduced $\chi^2$. The best fit value for the intrinsic absorber column for the Nov 21 1991 observation (fit 2, Table 8) is poorly constrained to be 1.2$^{+1.1}_{-1.1}$ $\times$ 10$^{23}$ cm$^{-2}$. Notice, however, from Table 8 that this is a very model dependent result. Our spectral model fits to the presently available X-ray observations of PG1115+080 indicate a decrease of about a factor of 13 in the 0.2-2 keV flux between Dec 5 1979 and Nov 21 1991 and an increase by a factor of about 5 between the Nov 21 1991 and May 27 1994 observations. Figure 6 shows the estimated 0.2-2 keV flux levels of PG1115+080 for the three X-ray observations. The poor S/N of the available spectra makes it difficult to discern the cause of the X-ray flux variability. Possible origins include a change in the column density of the BAL absorber, intrinsic variability of the quasar, or a combination of both effects. \subsection{Q1208+1011, Q1413+117, QJ0240-343} Q1208+1011 was observed with the ROSAT PSPC on Dec 16 1991 and June 3 1992, for 2,786 sec and 2,999 sec, respectively.
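The F statistic described above can be formed as follows. This is a minimal sketch of the ratio defined in the text; the $\chi^{2}$ values in the example are made up, not the Table 8 fits, and the chance probability would then be read off the F distribution for the appropriate degrees of freedom.

```python
def f_statistic(chi2_simple, chi2_complex, dof_complex):
    """F statistic for adding one model component: the drop in
    chi^2 divided by the reduced chi^2 of the more complex fit."""
    return (chi2_simple - chi2_complex) / (chi2_complex / dof_complex)

# Illustrative (made-up) chi^2 values:
f_value = f_statistic(chi2_simple=31.0, chi2_complex=10.0, dof_complex=10)  # -> 21.0
```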
These short observations provide a weak constraint of 2.66$^{+2.1}_{-0.91}$ on the mid band 1-4 keV rest-frame photon index. The magnification factor of this lens system is estimated to be approximately 4, assuming a singular isothermal sphere lens potential and a lens redshift of z = 1.1349 (Siemiginowska et al. 1998). A recent application of the proximity effect, however, measured in the Lyman absorption spectrum of Q1208+1011 (Giallongo et al. 1998) implies an amplification factor as large as 22. Q1413+117 is a BAL GL quasar observed with the ROSAT PSPC on July 20, 1991 for 27,863 sec. Using the standard detect and spectral fitting software tools XIMAGE-SOSTA, XIMAGE-detect and XSELECT-XSPEC we detect Q1413+117 at the 3$\sigma$ level in the ROSAT PSPC observation. The ROSAT PSPC and optical (HST) source coordinates of Q1413+117 are 14h 15m 46.4s, +11$^{\circ}$ 29$'$ 56.3$''$ (J2000), and 14h 15m 45s, +11$^{\circ}$ 29$'$ 42$''$ (J2000), respectively. The ROSAT and HST positions are well within the uncertainty of the ROSAT PSPC pointing accuracy. The improvements made in the processing of the ROSAT raw data by the U.S. ROSAT Science Data Center from the revision 0 product (rp700122) to the revision 2 product (rp700122n00), used in this analysis, may explain the non-detection of Q1413+117 in the Green \& Mathur (1995) paper. We fitted the poor S/N PSPC spectrum of Q1413+117 with a power-law model that included Galactic and intrinsic absorption due to cold gas at solar abundances. For photon indices ranging between 2.0 and 2.6 our spectral fits imply intrinsic column densities ranging between 2 and 14 $\times$ 10$^{22}$ cm$^{-2}$. Recently a pair of bright UV-excess objects, QJ0240-343 A and B, with a separation of 6.1$''$ were discovered by Tinney (1997). The redshift of both objects was found to be 1.4, while no lens has been detected. Monitoring of this system in the optical indicates that it is variable on timescales of a few years.
Spectra taken with the 3.9m Anglo-Australian telescope show a metal-line absorption system at z = 0.543 and a possible system at z = 0.337. QJ0240-343 was observed with the ROSAT PSPC in January 1992 with a detected count rate of 2.9$\pm$0.7 $\times$ 10$^{-3}$ cnts s$^{-1}$. GL theory predicts that the lens for this system lies at about z = 0.5. The geometry of this system is very similar to that of the double lens Q0957+561. The large angular image separation of the proposed GL system QJ0240-343 suggests the presence of a lens consisting of a galaxy cluster. The lens, however, has yet to be detected and it has been suggested that this may be a binary quasar system. \section{DISCUSSION} \subsection{X-ray Properties of Faint Quasars} Our present sample of moderate to high S/N ASCA and ROSAT X-ray spectra of GL quasars contains two radio-loud quasars, Q0957+561 and B1422+231 (see Figures 1 and 2), and three radio-quiet quasars, HE1104-1805 (see Figure 3), SBS0909+532 (see Figure 4) and Q1208+1011. Derived photon indices in the soft, mid and hard bands for these objects are presented in Table 2. For the two radio-loud quasars Q0957+561 and B1422+231 we observe a flattening of the spectra between mid and hard bands, while for the radio-quiet quasar HE1104-1805 we do not observe any significant change in spectral slope between mid and hard bands. The spectral flattening of radio-loud quasars between mid and hard energy bands has been reported for non-lensed quasars (e.g. Wilkes \& Elvis 1987; Fiore et al. 1998; Laor et al. 1997). The present findings for GL quasars are consistent with those for non-lensed quasars and imply that the underlying mechanism responsible for the spectral hardening in the hard band persists for the relatively high redshift GL quasars of our sample, with X-ray luminosities that are lower (by magnification factors ranging between 2 and 30) than those of previously observed objects at similar redshifts.
\begin{figure*}[t] \plotfiddle{Chartas_fig7.ps}{3.3in}{90}{40.}{40.}{160}{-20} \protect\caption {\footnotesize Photon indices for radio-loud and radio-quiet GL quasars of our sample as a function of the unlensed 2-10 keV flux. \label{fig:fig7} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig8.ps}{3.3in}{90}{40.}{40.}{160}{-20} \protect\caption {\footnotesize Photon indices for radio-loud and radio-quiet GL quasars of our sample as a function of the 2-10 keV luminosity. X-ray luminosities have been corrected for the magnification effect. \label{fig:fig8} } \end{figure*} Our analysis makes use of the GL amplification effect to extend the study of quasar properties to X-ray flux levels as low as a few $\times$ 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$. The limiting sensitivity of the ROSAT All-Sky Survey, for example, on which many recent studies are based, is a few $\times$ 10$^{-13}$ erg s$^{-1}$ cm$^{-2}$. We find that the spectral slopes of the radio-loud and non-BAL radio-quiet quasars of our sample are consistent with those found in quasars of higher flux levels and do not appear to approach the observed spectral index of $\sim$ 1.4 of the hard X-ray background. Absorption due to known intervening systems in Q0957+561, B1422+231, SBS0909+532, Q1208+1011 and HE1104-1805 apparently does not lead to the spectral hardening observed in the Vikhlinin et al. (1995) sample at flux levels below $\sim$ 10$^{-13}$ erg s$^{-1}$ cm$^{-2}$. However, we do find that modeling the two radio-quiet BAL quasars PG1115+080 and Q1413+117 and the radio-loud quasar PKS1830-211, which shows strong X-ray absorption (Mathur et al. 1997), with simple power-law models with Galactic absorption results in very low spectral indices (see Table 2). Similar unlensed sources will therefore contribute to the remaining unresolved portion of the XRB.
The presently available sample size of X-ray detected BAL radio-quiet quasars, however, will have to be significantly increased before we can make a statistically significant quantitative assessment of the BAL quasar contribution to the hard XRB. \subsection{X-ray Properties of Gravitationally Lensed BAL Quasars} Approximately 10\% of optically selected quasars have optical/UV spectra that show deep, high-velocity Broad Absorption Lines (BAL) due mostly to highly ionized species such as C IV, Si IV, N V and O VI. However, a small fraction of BAL quasars show low ionization transitions of Mg II, Al III, Fe II and Fe III as well (e.g. Wampler et al. 1995). The observed absorption troughs are found bluewards of the associated resonance lines and are attributed (see Turnshek et al. 1988) to highly ionized gas flowing away from the central source at speeds ranging between 5,000 and 30,000 km s$^{-1}$. Recent polarization observations (Goodrich 1997) indicate that the true fraction of BALs and BAL covering factors may be substantially larger ($>$ 30\%) than the presently quoted value of 10\%. Only a very small number of BAL quasars and AGN have been reported in the literature with detections in the X-ray band (PHL5200, Mrk231, SBS1542+541, 1246-057, and Q1120+0195 (UM425)). With this work we also add to the list of X-ray detected BAL quasars the GL quasars PG1115+080, Q1413+117 and possibly RXJ0911.4+0551. We consider PG1115+080 and RXJ0911.4+0551 intermediate BAL quasars because of the relatively low peak velocities of the outflowing absorbers (see Table 9 for a list of outflow velocities of GL BAL quasars). The X-ray spectra obtained from X-ray observations of BAL quasars have modest (PHL5200 \& Mrk231) to poor S/N and cannot accurately constrain the BAL column densities. \begin{figure*}[t] \plotfiddle{Chartas_fig9.ps}{3.2in}{-90}{43.}{43.}{-160}{240} \protect\caption {\footnotesize HRI image of the field near PG1115+080.
\label{fig:fig9} } \end{figure*} \begin{figure*}[t] \plotfiddle{Chartas_fig10.ps}{3.2in}{90}{40.}{40.}{160}{-20} \protect\caption {\footnotesize Plot of the estimated observed fraction of GL BAL quasars as a function of attenuation value A and magnification factor $<M>$. \label{fig:fig10} } \end{figure*} Several of the GL quasars of our sample are known to contain intervening and intrinsic absorption. In particular, PG1115+080 is known to contain a variable BAL system (Michalitsianos et al. 1996). The available X-ray observations indicate that PG1115+080 is a highly variable X-ray source. The large X-ray flux variations (a factor of about 13 decrease in X-ray flux between December 5 1979 and November 21 1991 and about a factor of 5 increase between November 21 1991 and May 27 1994) may be used to substantially reduce the errors in the determination of the time delay of this GL system. Such a monitoring program will have to await the launch of the Chandra X-ray Observatory (CXO), a.k.a. AXAF, which has the spatial resolution to resolve the lensed images. The GL quasars Q1413+117, Q1120+0195 and RXJ0911.4+0551 have also been detected in X-rays and are known to contain BAL features. Unfortunately the available X-ray data have poor S/N and we have only provided estimates of their X-ray flux and luminosity. Recent high resolution optical and NIR imaging of RXJ0911.4+0551 has resolved the object into four lensed images and a lensing galaxy (Burud et al. 1998). They also detect a candidate galaxy cluster 38$''$ away from image A1 with an estimated redshift of 0.7. It is possible that a large fraction of the detected X-ray emission in the ROSAT HRI observations of RXJ0911.4+0551 originates from the cluster of galaxies. An interesting finding made by searching through the literature is the apparently large fraction of optically detected GL BAL quasars.
In particular we found seven GL BAL quasars out of a total of about 20 radio-quiet GL quasar candidates known to date. The probability of finding 7 or more BAL quasars out of a sample of 20 GL radio-quiet quasars assuming a true BAL fraction (amongst radio-quiet quasars only) of 0.11 is about 4 $\times$ 10$^{-3}$. In Table 9 we list several properties of these GL BAL quasars. Thus we find that at least 35$\%$ of radio-quiet gravitationally lensed quasars contain BAL features which is significantly larger than the 10$\%$ fraction of BAL quasars found in optically selected quasar samples (almost all BAL's are radio-quiet and about 90$\%$ of optically selected quasars are radio-quiet). Recently, BAL's have also been identified in a few radio-loud quasars (Brotherton et al. 1998). These observations suggest that a large fraction of BAL quasars are missed from flux limited optical surveys, a view that has also been proposed by Goodrich (1997) based on polarization measurements of BAL quasars. One plausible explanation for the over-abundance of BAL quasars amongst radio-quiet GL quasars is based on the GL magnification effect which causes the luminosity distributions of BAL quasars and GL BAL quasars to differ considerably such that presently available flux limited surveys of BAL quasars detect more GL BAL quasars. We have created a simple model that can explain the difference between the observed GL BAL fraction of $\sim$ 35\% and the observed non-lensed BAL quasar fraction of $\sim$ 10\%.\\ Our model makes use of the quasar luminosity function as parameterized by Pei (1995), assumes that only 20\% of BALs observed in optical surveys of unlensed quasars are attenuated by a factor A (see Goodrich, 1997) and it uses the Warren et al. (1994) optical limits for non-lensed quasars and the CASTLES survey optical limits for lensed quasars (Kochanek et al. 1998). 
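The quoted chance probability follows from a binomial tail. The quick check below assumes the 20 GL radio-quiet quasars are independent draws with BAL probability 0.11:

```python
from math import comb

def binom_tail(n, k_min, p):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k)
               for k in range(k_min, n + 1))

# Chance of finding >= 7 BAL quasars among 20 radio-quiet GL quasars
# if the true BAL fraction were 0.11:
p_chance = binom_tail(n=20, k_min=7, p=0.11)  # ~ 4e-3
```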
To simplify the analysis we assume an average magnification factor of $<$M$>$ for the GL quasars rather than incorporate each magnification factor separately. A survey of lensed quasars, with luminosity limits between L$_{1}$ and L$_{2}$, will have a {\it true} luminosity range, assuming an average lens magnification factor of $<$M$>$, that lies between ${{L_{1}}\over{<M>}}$ and ${{L_{2}}\over{<M>}}$ for unattenuated lensed BAL quasars and that lies between ${{L_{1}A}\over{<M>}}$ and ${{L_{2}A}\over{<M>}}$ for attenuated lensed BAL quasars. Following the arguments of Goodrich (1997) we assume that only an observed fraction of 20\% of BAL quasars are attenuated (this is approximately the observed fraction of BAL quasars with significant polarization). Based on what we have just discussed, the observed fraction, $f_{ogb}(L_{1},L_{2})$, of GL BAL quasars in the luminosity range of L$_{1}$ to L$_{2}$ can be approximated with the observed fraction, $f_{ob}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}})$, of non-GL BAL quasars in the {\it observed} luminosity range of ${{L_{1}}\over{<M>}}$ to ${{L_{2}}\over{<M>}}$.
\begin{equation} f_{ogb}(L_{1},L_{2}) = f_{ob}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}}) \end{equation} \noindent We separate the observed BAL quasar fraction, $f_{ob}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}})$, into the fraction that is attenuated, $f_{oba}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}})$, and the fraction $f_{obna}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}})$ that is not attenuated, \begin{eqnarray} f_{ob}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}}) = \nonumber \\ f_{oba}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}}) + \nonumber \\ f_{obna}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}}) \end{eqnarray} \noindent If we assume that the luminosity distribution of non-attenuated BAL quasars is similar to that of non-BAL quasars, we expect the fraction $f_{obna}({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}})$ to be independent of the luminosity range and therefore approximately equal to the observed value $f_{obna}({L_{3}},{L_{4}})$ $\sim$ 8$\%$, where L$_{3}$ and L$_{4}$ are the Warren et al. (1994) optical luminosity limits. As pointed out by Goodrich (1997), the attenuation expected to be present in about 20$\%$ of all BAL quasars causes the observed luminosity function for BAL quasars to be considerably different from the true luminosity function for BAL quasars. The ratio of observed, $f_{oba}$, to true, $f_{tba}$, fraction of attenuated BAL quasars can be determined if one incorporates the effect of attenuation in the quasar luminosity function.
In particular, if we define $N({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}},z_{1},z_{2})$ as the integral of the quasar luminosity function as parametrized by Pei (1995) over the luminosity range between ${{L_{1}}\over{<M>}}$ and ${{L_{2}}\over{<M>}}$ and the redshift range between $z_{1}$ and $z_{2}$, then we may write the ratio of observed to true fraction of attenuated BAL quasars within this luminosity range as, \begin{equation} {{f_{oba}( {{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}} ) }\over{f_{tba}}}= {{N({{L_{1}A}\over{<M>}},{{L_{2}A}\over{<M>}},z_{1},z_{2})}\over{N({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}},z_{1},z_{2})}} \end{equation} The observed fraction of about 2\% of attenuated non-lensed BAL quasars, however, is measured within the Warren et al. (1994) optical limits of L$_{3}$ = 2.7 $\times$ 10$^{46}$ erg s$^{-1}$ and L$_{4}$ = 3.8 $\times$ 10$^{47}$ erg s$^{-1}$ and redshift range of z$_{3}$ = 2 and z$_{4}$ = 4.5. We therefore write the ratio of observed to true fraction of attenuated BAL quasars within the L$_{3}$ and L$_{4}$ range as, \begin{equation} {{f_{oba}( {{L_{3}}},{{L_{4}}} ) }\over{f_{tba}}}= {{N({{L_{3}A}},{{L_{4}A}},z_{3},z_{4})}\over{N({{L_{3}}},{{L_{4}}},z_{3},z_{4})}} \end{equation} Combining equations 4, 5, 6 and 7 we obtain the following expression for the observed fraction of GL BAL quasars as a function of average GL magnification $<M>$ and BAL attenuation factor A, \begin{eqnarray} f_{ogb}(L_{1},L_{2})= f_{obna}(L_{3},L_{4}) + {f_{oba}(L_{3},L_{4})} \nonumber \\ {\times}{{N({{L_{3}}},{{L_{4}}},z_{3},z_{4})}\over{N({{L_{3}A}},{{L_{4}A}},z_{3},z_{4})}} {{N({{L_{1}A}\over{<M>}},{{L_{2}A}\over{<M>}},z_{1},z_{2})}\over{N({{L_{1}}\over{<M>}},{{L_{2}}\over{<M>}},z_{1},z_{2})}} \end{eqnarray} In Figure 10 we plot the expected observed fraction of GL BAL quasars as a function of attenuation values A and magnification factors $<M>$. The magnification effect of GL quasars alone cannot explain the observed enhanced GL BAL quasar fraction of $\sim$ 35$\%$.
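The combined expression for $f_{ogb}$ can be evaluated numerically once a luminosity function is supplied. The sketch below substitutes an illustrative broken power law for the Pei (1995) parameterization and drops the redshift dependence, so it reproduces only the structure of the calculation, not our quantitative results; the break, slopes and luminosity windows are all assumptions. Note that with a pure (unbroken) power law the attenuation factors would cancel exactly, so the curvature of the luminosity function is essential to the effect.

```python
def lf(l, l_star=1.0, a=1.6, b=3.5):
    """Stand-in broken power-law luminosity function with break l_star
    and faint/bright slopes a, b (illustrative assumptions)."""
    return 1.0 / ((l / l_star)**a + (l / l_star)**b)

def n_quasars(l1, l2, steps=4000):
    """Trapezoidal integral of lf over [l1, l2], standing in for the
    full N(L1, L2, z1, z2) of the text."""
    total = 0.0
    for i in range(steps):
        la = l1 + (l2 - l1) * i / steps
        lb = l1 + (l2 - l1) * (i + 1) / steps
        total += 0.5 * (lf(la) + lf(lb)) * (lb - la)
    return total

def f_ogb(l1, l2, l3, l4, mag, att, f_obna=0.08, f_oba=0.02):
    """Observed GL BAL fraction for an average lens magnification
    <M> = mag and BAL attenuation factor A = att."""
    return f_obna + f_oba \
        * (n_quasars(l3, l4) / n_quasars(l3 * att, l4 * att)) \
        * (n_quasars(l1 * att / mag, l2 * att / mag)
           / n_quasars(l1 / mag, l2 / mag))

# With no attenuation (A = 1) the model reduces to the non-lensed
# BAL fraction of ~10%, whatever the magnification:
baseline = f_ogb(1.0, 10.0, 1.0, 10.0, mag=5.0, att=1.0)
```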
By combining, however, the magnification effect with the presence of an attenuation of the continuum in a fraction of BAL quasars, as suggested by the polarization observations of Goodrich (1997), our simple model can reproduce the observed GL BAL quasar fraction of $\sim$ 35$\%$. For a range of average magnification factors $<M>$ between 5 and 15 we obtain attenuation values A ranging between 5 and 4.5. The range of attenuation values of 4.5 to 5, suggested by the observed fraction of GL BALQSO's, is close to the range of 3 to 4 implied by the observed polarization distributions of BALQSO's and non-BAL radio-quiet quasars (Goodrich 1997), especially considering the uncertainties in both analyses. A value of $<M>$ $\sim$ 10 is consistent with typical estimated values for GL quasars (see, for example, our GL model estimates in Table 3). \section{CONCLUSIONS} We have introduced a new approach to studying the X-ray properties of faint quasars. Our analysis makes use of the GL amplification effect to extend the study of quasar properties to X-ray flux levels as low as a few $\times$ 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$. For the two radio-loud GL quasars Q0957+561 and B1422+231 we observe a flattening of the spectra between mid and hard bands (rest-frame), while for the radio-quiet quasar HE1104-1805 we do not observe any significant change in spectral slope between mid and hard bands. The present findings in GL quasars are consistent with those of non-lensed quasars and imply that the underlying mechanism responsible for the spectral hardening from mid to hard bands persists for the relatively high redshift GL radio-loud quasars of our sample, with X-ray luminosities that are lower (by the magnification factors indicated in Table 3) than those of previously observed objects at similar redshifts.
Our results suggest that radio-loud and non-BAL radio-quiet quasars with unlensed fluxes as low as a few $\times$ 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$ do not have spectral slopes that are any different from brighter quasars. Modeling the spectra of the two GL BAL quasars and the radio-loud quasar PKS1830-211 in our sample with simple power-laws and including only Galactic absorption leads to spectral indices that are considerably flatter than the average values for quasars. These results therefore imply that BAL quasars and quasars with associated absorption will contribute to the unresolved portion of the hard XRB. We must emphasize, however, that our present sample of GL quasars will need to be enlarged to assess the significance of the contribution of BAL quasars to the XRB. X-ray observations in the near future with the X-ray missions CXO, XMM and ASTRO-E will significantly aid in adding many more GL quasars to this sample. Our analysis of several X-ray observations of the GL BAL quasar PG1115+080 show that it is an extremely variable source. Fits of various models to the spectra obtained during these observations suggest that the X-ray variability is partly due to a variable BAL absorber. The X-ray flux variability in this source can be used to improve present measurements of the time delay. The large variability in the X-ray compared to optical band offers the prospect of substantially reducing the errors in deriving a time delay from cross-correlating image light curves. A precise measurement of the time delay combined with an accurate model for the mass distribution of the lens can be used to derive a Hubble constant that does not depend on the reliability of a ``standard candle''. The scheduled monitoring of PG1115+080 with the CXO will provide spatially resolved spectra and light curves for the individual lensed images. One of the significant findings of this work was a surprisingly large fraction of BAL quasars that are gravitationally lensed. 
In particular we find 7 BAL quasars out of a sample of 20 GL radio-quiet quasars. We have successfully modeled this effect and find that an attenuation factor A $\sim$ 5 of the BAL continuum of only 20$\%$ of all BAL quasars is consistent with the observed GL fraction of 35$\%$. We emphasize that the magnification effect alone cannot explain the observed difference between BAL fractions for lensed and non-lensed quasars. One needs to incorporate in addition an attenuation mechanism to produce the observed results. These observations therefore are suggestive of the existence of a hidden population of absorbed high redshift quasars which have eluded detection by present flux limited surveys. As X-ray and optical surveys approach lower flux limits we expect the fraction of BAL quasars found to increase. I would like to thank N. Brandt, M. Eracleous, G. Garmire, and J. Nousek for helpful discussions and comments. This work was supported by NASA grant NAS 8-38252. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
\section{\label{sec:level1}Introduction} Water has been studied extensively because it is a fundamental liquid with unique properties. For example, its boiling point under atmospheric pressure, 100$^\circ$C, is unusually high for its molecular weight, 18. In addition, its volume decreases as it melts, unlike other liquids. Moreover, it has an anomalous density maximum at 4$^\circ$C. These characteristics have been studied in detail using various methods. Consequently, many models have been suggested. Nevertheless, no definite mechanism has been established to explain these anomalous behaviors of water, even though each model can explain some specific phenomena. One sensitive method to investigate molecular states in solids and liquids is the positron annihilation lifetime (PALT) method. When positrons are injected into a liquid, some of them form positronium with the surrounding electrons. The positronium lifetime is determined by the rate of annihilation between the positron in the positronium and surrounding electrons, a process known as pick-off. As described later, according to the Ps-bubble model \cite{Brandt60,Brandt66,Tao72}, the lifetime should be longer at higher temperatures in liquid water. However, V\'{e}rtes $et\ al.$ \cite{Vertes} showed that the positronium lifetime has a general decreasing trend at higher temperatures, although it oscillates between 38$^\circ$C\ and 60$^\circ$C. We measured PALT in liquid water between 0$^\circ$C\ and 50$^\circ$C. This paper shows that the positronium lifetime decreases smoothly as the temperature rises. This behavior is explainable by combining the Ps-bubble model and a two-state mixture model of water. \section{Positron annihilation lifetime method and the Ps-bubble model} This section explains PALT and the Ps-bubble model. Positrons injected into material can undergo three different processes, each with a different lifetime.
The shortest lifetime ($\tau_1$) results from the formation and annihilation of $para$-positronium ($p$-Ps), in which the spins are anti-parallel. In that case, the electron forming $p$-Ps with the positron is captured from the surrounding molecules. The lifetime of $p$-Ps in vacuum, $\sim$ 0.1 ns, is too short to be affected by surrounding matter. The second lifetime ($\tau_2$) is caused by positron annihilation without formation of a bound state. The third lifetime ($\tau_3$) results from the formation of an $ortho$-positronium ($o$-Ps) with parallel spins. The $o$-Ps has an intrinsic lifetime of 142 ns in vacuum. In material, however, the lifetime is typically several nanoseconds because it annihilates with an electron in the surrounding molecules. As a result of this pick-off process, $\tau_3$ is sensitive to the electron state of the surrounding substance. In liquid, $o$-Ps is generally considered to push out the surrounding molecules and form a Ps-bubble. The wave function of $o$-Ps exudes beyond the surface of the Ps-bubble into the electrons of the surrounding substance. The overlap of the $o$-Ps wave function with an electron wave function of the surroundings determines the pick-off rate. In the Ps-bubble model, the wave function of $o$-Ps is trapped inside a spherical infinite-depth well potential whose radius is the sum of the radius of the Ps-bubble and the exuding depth. Therefore, $\tau_3$\ is a function of the Ps-bubble radius \cite{Tao72,Eldrup82,MogensenBook}. Nakanishi $et\ al.$ introduced a semi-empirical correlation between $\tau_3$ and the Ps-bubble radius \cite{Nakanishi88} as: \begin{equation}\label{N-J} \tau_3 = \biggl[ 2\biggl\{ 1- \frac{R}{R+\Delta R} + \frac{1}{2 \pi} \sin \biggl( \frac{2 \pi R}{R+\Delta R} \biggr) \biggr\} \biggr]^{-1}, \end{equation} where $\tau_3$\ is measured in nanoseconds, $R$ is the Ps-bubble radius, and $\Delta R$ is the exuding depth of the $o$-Ps wave function into the surrounding electron wave functions.
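Eq. (\ref{N-J}) is straightforward to evaluate numerically. The following sketch is ours, not the authors' code; the sample radius of 0.3 nm and the commonly quoted depth $\Delta R = 0.166$ nm are illustrative values. It shows the qualitative behavior: the pick-off rate grows, and $\tau_3$ shrinks, as the bubble gets smaller or the exuding depth gets larger.

```python
import math

def tau3_ns(R, dR):
    """Eq. (1): o-Ps pick-off lifetime tau_3 (ns) as a function of the
    Ps-bubble radius R and the exuding depth dR (same length units)."""
    x = R / (R + dR)
    pickoff_rate = 2.0 * (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))
    return 1.0 / pickoff_rate

# A bubble radius near 0.3 nm with the commonly quoted dR = 0.166 nm
# gives a lifetime of order 2 ns, the nanosecond scale observed in water.
print(tau3_ns(0.30, 0.166))
```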
In this model, $R$ is determined by the balance between the zero-point energy of $o$-Ps and the surface tension of the surrounding substance. The balance is represented as \begin{equation}\label{0} \frac{\partial}{\partial R}(E+4 \pi R^2 \gamma )=0, \end{equation} where $E$ is the zero-point energy of $o$-Ps and $\gamma$ is the surface tension of bulk water. $E$ is expressed as \begin{equation}\label{E} E \sim \frac{\hbar^2 k^2}{4 m_e} = \frac{\hbar^2 \pi^2}{4 m_e (R+\Delta R)^2}, \end{equation} where $m_e$ is the electron mass and $k$ is the wavenumber of $o$-Ps. In the case of water, $\gamma$ decreases as the temperature rises; thereby the bubble becomes larger. By this argument alone, $\tau_3$ should be longer at higher temperatures. \section{Experiment} \subsection{Measurement of positron lifetime} We employed the positron annihilation lifetime technique using $ ^{22}$Na as a positron source. The $ ^{22}$Na is placed at the center of the sample. Positrons emitted into the sample are annihilated through the processes mentioned in the previous section. When $ ^{22}$Na emits a positron, it simultaneously emits a 1275 keV photon, and when a positron is annihilated with an electron, two 511 keV photons are emitted. We therefore measured the time difference between the emission of the 1275 keV photon and the emission of the two 511 keV photons. We used the apparatus shown in Fig.~\ref{fig:apparatus}. \begin{figure} \begin{center}\includegraphics[width=8cm]{./apparatus.eps} \caption{\label{fig:apparatus}A schematic of the apparatus. a. $\mbox{BaF}_2$ scintillators with photomultiplier tubes, b. Sample vial, c. Satellite water bath (60 mm $\times$ 60 mm $\times$ 100 mm (height)), d. Main water bath (300 mm $\times$ 500 mm $\times$ 160 mm (height)), e. Needle for pressure equilibrium, f. K-thermocouple, g. Positron source. } \end{center} \end{figure} Powder of 1.6 MBq $^{22}$NaCl wrapped between 8 $\mu$m thick polyimide films was used as the positron source.
The polyimide film is known to have no $\tau_3$\ component \cite{polyimide}. The source was held inside a 20 mm diameter 25 ml glass vial. The vial was then filled with degassed distilled water. We estimate that 4.4\% of the positrons were absorbed in the polyimide film; more than 95.5\% of the positrons were absorbed within 0.5 mm of water \footnote { Using the linear absorption coefficient ($\alpha$), $\alpha = 4 d/E^{1.6}_{\mbox{max}}(\mbox{cm}^{-1})$, where $d$ is the density of the absorber and $E_{\mbox{max}}$ is the maximum energy of a positron in MeV. }. The vial was submerged in a water bath to maintain the sample water temperature within $\pm0.1$$^\circ$C. The sample water pressure was maintained at 1 atm by inserting a 0.9 mm inner diameter needle into the vial. The 1275 keV photon (start signal) was detected using a barium fluoride ($\mbox{BaF}_2$) scintillator (38.1 mm diameter, 25.4 mm thickness) with a photomultiplier tube (H3378-51; Hamamatsu Photonics, K.K.), and one 511 keV photon (stop signal) was detected by another $\mbox{BaF}_2$ detector. The two $\mbox{BaF}_2$ detectors viewed the vial at a 90$^\circ$ opening angle; they were held 27 mm away from the center of the sample water. The time difference between the start and stop signals was measured with a time-to-amplitude converter and a multi-channel analyzer. The time resolution of this system is $\sim280$ ps. We measured the positron lifetime at 14 temperatures between 0$^\circ$C\ and 50$^\circ$C. In most cases, we performed four runs at each temperature point. We used a new water sample for each run. Each run collected one million events in 1500 s. \subsection{Analysis} A typical lifetime spectrum acquired in our experiment is shown in Fig.~\ref{fig:spectrum}. \begin{figure} \begin{center} \includegraphics[width=8cm]{./spect_fit_1011.eps} \caption{\label{fig:spectrum} A typical positron lifetime spectrum in liquid water (bottom) and the data-to-fit ratios (top). Data are shown in dots.
A fitting result by MINUIT is shown as a solid line in the spectrum.} \end{center} \end{figure} To extract lifetimes, the spectrum was fitted with a model function, \begin{equation}\label{fit} S(t) = \sum_{i=1}^3 \frac{I_i}{\tau_i} \int\mbox{e}^{-t'/\tau_i} g(t-t') \theta(t'-t_0)dt' + k \sum_{i=1}^3 \frac{I_i}{\tau_i} \int\mbox{e}^{t'/\tau_i} g(t-t') \theta(t_0 -t')dt' + C, \end{equation} where $i$ is the index for the different lifetimes, $I_i$ is the intensity, $\tau_i$ is the lifetime, and $g(t-t')$ is the Gaussian describing the time resolution. Also, $\theta(t)$ represents a step function in which $\theta(t)=1\ (t\ge 0)$ and $0\ (t < 0)$, $t_0$ is the delayed timing of the start signal, and $C$ is a constant background rate. The second term is added to accommodate events in which a 511 keV photon gave the start signal and a 1275 keV photon gave the stop signal. Ten parameters, $I_{1,2,3}, \tau_{1,2,3}, t_0, k, C$, and the sigma of the Gaussian, were fitted using the MINUIT \cite{MINUIT} fitting code, which is commonly used in analyses of high-energy physics. A fitting result achieved using MINUIT is shown as a solid line in Fig. \ref{fig:spectrum}. We used POSITRONFIT \cite{posfit}, which is a popular code in positron lifetime analysis, only to crosscheck our results because POSITRONFIT requires the time resolution as an input parameter. The results are also sensitive to the lower boundary of the fitting range. Both codes gave mutually consistent results. The mean value of the difference between the two codes taken over all measurements was 0.008 ns; their standard deviation was 0.016 ns. \section{Results} The temperature dependence of $\tau_3$ is shown in Fig. \ref{fig:tau3_fit}. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{./t3paw_01_4727.eps} \caption{\label{fig:tau3_fit}$\tau_3$ in water as a function of temperature. Dots show data. Vertical lines show errors.
The solid line shows a fitting result by the Ps-bubble model combined with the two-state model.} \end{center} \end{figure} The $\tau_3$ decreases smoothly by 10\% as the temperature is raised from 0$^\circ$C\ to 50$^\circ$C. Our $\tau_3$\ at 20$^\circ$C, $1.839\pm0.015$ ns, agrees with the measurement of Mogensen, 1.85 ns \cite{Mogensen_water}. Behaviors of the other lifetime parameters are shown in Fig. \ref{fig:tau_all}. \begin{figure} \begin{center} \includegraphics[width=5.0cm] {./t1.eps} \includegraphics[width=5.0cm] {./t2.eps} \includegraphics[width=5.0cm] {./Iall.eps} \caption{\label{fig:tau_all} Temperature dependencies of lifetimes, $\tau_1$ and $\tau_2$ in water and intensities of three lifetime components. Dots show data and vertical lines indicate the fitting error. Solid lines are results of linear regression. a, b: lifetime component $\tau_1, \tau_2$, respectively, c: intensities of three components.} \end{center} \end{figure} We confirmed that the behavior of $\tau_3$\ is not caused by changes in the other lifetime parameters. To do so, we produced artificial spectra that represent a sum of three exponentials, $\sum_{i=1}^3I_i\exp (-t/\tau_i)/\tau_i$, convoluted by a Gaussian time resolution. The number of events in each 0.0495 ns time bin was made to follow a Poisson distribution. We produced such artificial data samples with different $\tau_i$ and $I_i$, and fitted them. Figure \ref{influences} shows the fitted $\tau_3$ as a function of the varied parameters. Solid lines show the linear fit to the points. First, the fitted $\tau_3$ agrees with the input $\tau_3$ within $0.01\pm0.01$ ns, as shown in Fig. \ref{influences}(c). Second, the fitted $\tau_3$ does not depend on $\tau_1$, $\tau_2$, $I_2$, and $I_3$. Horizontal bars show the variations and errors of parameters in the measured temperature range.
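The artificial-spectrum procedure just described can be sketched as follows. This is our illustration, not the authors' code: the closed-form exponential-Gaussian convolution (the "exponentially modified Gaussian") stands in for the resolution smearing of Eq. (\ref{fit}); the component values $I_i$, $\tau_i$, the 0.12 ns Gaussian width (roughly the quoted $\sim280$ ps resolution) and the start-time offset are assumed; and the Poisson fluctuations use a normal approximation, adequate for the large per-bin counts here.

```python
import math, random

def emg(t, tau, t0, sigma):
    """Closed form of an exponential decay exp(-(t-t0)/tau)/tau (t > t0)
    convolved with a Gaussian resolution of width sigma."""
    lam = 1.0 / tau
    arg = (t0 + lam * sigma**2 - t) / (math.sqrt(2.0) * sigma)
    return 0.5 * lam * math.exp(lam * (t0 - t) + 0.5 * (lam * sigma) ** 2) * math.erfc(arg)

def artificial_spectrum(I, tau, t0=1.0, sigma=0.12,
                        n_events=1_000_000, bin_w=0.0495, n_bins=200):
    """Expected counts per 0.0495 ns bin for a three-component spectrum
    smeared by the time resolution, then Poisson-like fluctuations
    (normal approximation to the Poisson distribution)."""
    random.seed(0)
    spec = []
    for b in range(n_bins):
        t = (b + 0.5) * bin_w
        mu = n_events * bin_w * sum(Ii * emg(t, ti, t0, sigma)
                                    for Ii, ti in zip(I, tau))
        spec.append(max(0, round(random.gauss(mu, math.sqrt(mu)))))
    return spec

# Illustrative intensities and lifetimes (not the paper's fitted values).
spec = artificial_spectrum(I=(0.25, 0.55, 0.20), tau=(0.17, 0.40, 1.84))
```

The generated `spec` rises at the (assumed) start time, peaks, and then decays with the familiar multi-exponential tail dominated by $\tau_3$ at late times.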
The maximum deviation of $\tau_3$\ is calculated by multiplying the slope of the linear fit by the quadratic sum of the variation and the error of the other parameter. Deviations resulting from $\tau_1$, $\tau_2$, $I_2$, and $I_3$ are $0.000\pm0.004$ ns, $0.0014\pm0.0008$ ns, $0.0015\pm0.0029$ ns, and $0.001\pm0.003$ ns, respectively. The only apparent dependence is on $I_1$, as shown in Fig. \ref{influences}(d); its effect on $\tau_3$ is $0.009\pm0.002$ ns. The total effect on $\tau_3$ attributable to the errors and variations of the other parameters in the temperature range is 0.011 ns. Consequently, we can conclude that the temperature dependence of $\tau_3$\ is not caused by changes in the other lifetime parameters. \begin{figure} \begin{center} \includegraphics[width=3.5cm] {./devT1.eps} \includegraphics[width=3.5cm] {./devT2.eps} \includegraphics[width=3.5cm] {./devT3.eps} \includegraphics[width=3.5cm] {./devI1.eps} \includegraphics[width=3.5cm] {./devI2.eps} \includegraphics[width=3.5cm] {./devI3.eps} \caption{\label{influences}Fitted $\tau_3$ as a function of other parameters studied with artificial data samples: (a) $\tau_1$, (b) $\tau_2$, (c) $\tau_3$, (d) $I_1$, (e) $I_2$, and (f) $I_3$. Horizontal solid bars show averages of the standard deviations of parameters at each temperature point; dashed bars show the maximum changes of parameters between 0 and 50$^\circ$C.} \end{center} \end{figure} \section{Discussion} The behavior of our $\tau_3$ cannot be explained by the original Ps-bubble model. For that reason, we apply the two-state mixture model to the Ps-bubble model. Vedamuthu $et\ al.$ showed that the two-state mixture model can reproduce the density of $\mbox H_2 \mbox O$ in the temperature range between -30 and +70$^\circ$C\ with very high precision \cite{Vedamuthu94}.
In this model, liquid water comprises two dynamically inter-converting mixed micro-domains whose bonding characteristics are similar to those of ice I$h$ (lower density) and ice II (higher density) \cite{Urquidi PRL 99}. The total specific volume of liquid water is given as \begin{equation} \label{Va} V(T,P) = [1-f_{\mbox {II}}(T,P)]V_{\mbox I}(T,P) + f_{\mbox {II}}(T,P)V_{\mbox {II}}(T,P), \end{equation} where I (II) indicates lower (higher) density bonding, $f_{\mbox {II}}$ is the mass fraction of the higher density bonding type, and $V_{\mbox{I(II)}}$ is the specific volume of the lower (higher) density bond type. We modified the Ps-bubble model by introducing the two-state mixture model with the following assumptions. \begin{enumerate} \item Positronium forms a bubble (Ps-bubble) with a radius $R$ in water, as in the original Ps-bubble model. \item Water consists of two different molecular bond types, I and II; the ratio $f_{\mbox {II}}$ is a function of the temperature as given by Vedamuthu $et\ al$. \cite{Vedamuthu94}. \item The pick-off rate is determined by an overlap between the $o$-Ps wave function and the electron wave function of the water; it can be parameterized by exuding depths, $\Delta R_{\mbox {I}}$ and $\Delta R_{\mbox {II}}$, for each bonding type. \item The bubble radius, $R$, is given by the macroscopic surface tension $\gamma$, which is determined by the temperature \cite{gamma}, together with $\Delta R_{\mbox {I}}$ and $\Delta R_{\mbox {II}}$. \end{enumerate} The modified equation (\ref{N-J}) is: \begin{eqnarray}\label{VinNJ} \tau_3 = \biggl[ 2\biggl\{ 1- \frac{R}{R+(1-f_{\mbox {II}})\Delta R_{\mbox I} + f_{\mbox {II}} \Delta R_{\mbox {II}}} + \frac{1}{2 \pi} \sin \biggl( \frac{2 \pi R}{R+(1-f_{\mbox {II}}) \Delta R_{\mbox I}+f_{\mbox {II}} \Delta R_{\mbox {II}}} \biggr) \biggr\} \biggr]^{-1}. \end{eqnarray} Figure \ref{fig:tau3_fit} shows the fitted $\tau_3$ using Eq. (\ref{VinNJ}) as a function of temperature.
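As a numerical cross-check (ours, not the authors' code), the bubble radius can be recovered from the balance of Eq. (\ref{0}) with the zero-point energy of Eq. (\ref{E}), using the mass-fraction-weighted effective exuding depth of Eq. (\ref{VinNJ}), and then fed back into Eq. (\ref{VinNJ}). The surface tension value (0.0728 N/m at 20$^\circ$C), the depths 0.130 nm and 0.218 nm, and the fraction $f_{\mbox{II}}=0.7$ are representative, assumed inputs.

```python
import math

HBAR = 1.054571817e-34   # J s
M_E  = 9.1093837015e-31  # kg (electron mass)

def bubble_radius(gamma, dR):
    """Bisection solve of d/dR [E + 4 pi R^2 gamma] = 0 (SI units), i.e.
    8 pi gamma R = hbar^2 pi^2 / (2 m_e (R + dR)^3), with E from Eq. (3)."""
    C = HBAR ** 2 * math.pi ** 2 / (4.0 * M_E)
    lo, hi = 1e-11, 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 8.0 * math.pi * gamma * mid < 2.0 * C / (mid + dR) ** 3:
            lo = mid   # zero-point term dominates: root lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def tau3_two_state(R, f2, dR1, dR2):
    """Eq. (6): Nakanishi's relation with the effective exuding depth
    (1 - f_II) dR_I + f_II dR_II (lengths in nm, tau_3 in ns)."""
    dR = (1.0 - f2) * dR1 + f2 * dR2
    x = R / (R + dR)
    return 1.0 / (2.0 * (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi)))

# Representative values: gamma(20 C) ~ 0.0728 N/m, depths 0.130/0.218 nm,
# and an assumed higher-density fraction f_II = 0.7.
f2 = 0.7
dR_eff = (1.0 - f2) * 0.130 + f2 * 0.218            # nm
R_nm = bubble_radius(0.0728, dR_eff * 1e-9) * 1e9   # nm
print(R_nm, tau3_two_state(R_nm, f2, 0.130, 0.218))
```

Under these assumptions the recovered radius is close to 0.3 nm and the lifetime lands in the measured nanosecond range, consistent with the fit described next.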
The fitting parameters are $\Delta R_{\mbox {I}}$ and $\Delta R_{\mbox {II}}$. The reduced $\chi^2$ is $1.7$. The modified Ps-bubble model represents the temperature dependence of $\tau_3$ well. The fitted exuding depths are $\Delta R_{\mbox {I}} = 0.130\pm0.005$ nm and $\Delta R_{\mbox {II}} = 0.218\pm0.003$ nm. In the Ps-bubble model, $\Delta R$ should correlate with the van der Waals radius of the surrounding atoms. In fact, the $\Delta R$ determined by fitting data of well-characterized small-pore materials such as zeolites to Eq. (\ref{N-J}) is 0.166 nm \cite{ujihira,Nakanishi88}, which agrees with the 0.166 nm average of the van der Waals radii of the atoms of SiO$_4$, the main component of zeolite. In our result, $\Delta R_{\mbox {I}}$ = 0.130 nm is close to the van der Waals radii of oxygen (0.155 nm) and hydrogen (0.120 nm). In contrast, $\Delta R_{\mbox {II}}$ is too large for a van der Waals radius. This indicates that the higher density state contributes more to the pick-off process than expected from the depth of its electron wave function alone. This constitutes evidence that the higher density state, which has bent hydrogen bonds \cite{Urquidi PRL 99}, is more active than the lower density state. Next, the fitted $R$ supports the Ps-bubble idea. As shown in Fig. \ref{fig:radius}, the radius of the Ps-bubble, $R$, based on our fit, is about 0.3 nm. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{./radius.eps} \caption{\label{fig:radius}Temperature dependence of the Ps-bubble radius, as calculated with $\gamma$ and the fitted $\Delta R_{\mbox I}$ and $\Delta R_{\mbox {II}}$.} \end{center} \end{figure} Liquid water has intrinsic vacancies in its structure, even while that structure changes rapidly. These vacancies are bounded by hexagonal rings with an oxygen atom at each vertex, connected by hydrogen bonds.
Considering the distance between nearest O$\cdots$O pairs, 0.28 nm \cite{Bosio83}, and the van der Waals radius of water, such a vacancy is smaller than a Ps-bubble. Therefore, $o$-Ps must push water molecules apart to exist in water. This supports the need for a balance between the zero-point energy of $o$-Ps and the surface tension of water. In addition, our model shows that $R$ has a minimum at 8$^\circ$C. The radius of the Ps-bubble at 50$^\circ$C\ is 1.009 times the radius at 8$^\circ$C. This size relationship is consistent with the fact that the distance between nearest-neighbor O$\cdots$O pairs at 50$^\circ$C\ is 1.011 times the distance at 4$^\circ$C\ \cite{Narten67}. \section{Conclusion} We precisely measured the temperature dependence of the long lifetime component of positron annihilation ($\tau_3$) in water at temperatures between 0 and 50$^\circ$C. The $\tau_3$\ decreases smoothly as the temperature rises; it is $1.839\pm0.015$ ns at 20$^\circ$C. This behavior is explained by combining two models, in which water consists of two types of molecular bonds and positronium pushes those bonds apart to form Ps-bubbles. We also found that $\tau_3$\ is sensitive to the electron states of the two types of molecular bonds.
\section{Introduction} Let $S$ be a connected orientable surface without boundary, with finitely many punctures and negative Euler characteristic. The \textit{Teichm{\"u}ller space} $\Teich(S)$ of $S$ is the space of isotopy classes of complete, finite-area hyperbolic structures on $S$. For a pair of points $g_1,g_2\in \Teich(S)$, Thurston \cite{ThurstonStretch} introduces the function $$d_{\textnormal{Th}}(g_1,g_2):=\log\sup_{c} \bigg(\frac{L_{g_2}(c)}{L_{g_1}(c)}\bigg),$$ \noindent where the supremum is taken over all free isotopy classes $c$ of closed curves in $S$ and, for $g\in \Teich(S)$, the number $L_g(c)$ denotes the length of the unique geodesic in the class $c$, with respect to the metric $g$. In \cite[Theorem 3.1]{ThurstonStretch} Thurston shows that $d_{\textnormal{Th}}(\cdot,\cdot)$ defines an asymmetric distance on $\Teich(S)$, and investigates many properties of this metric. For instance, he shows (see \cite[Theorem 8.5]{ThurstonStretch}) that $d_{\textnormal{Th}}(g_1,g_2)$ coincides with the least possible Lipschitz constant of homeomorphisms from $(S,g_1)$ to $(S,g_2)$ isotopic to $\textnormal{id}_S$, and constructs families of geodesic rays for this metric, called \textit{stretch lines}. Thurston also constructs a Finsler norm $\Vert\cdot\Vert_{\textnormal{Th}}$ on the tangent bundle of Teichm\"uller space: For $v\in T_g\Teich(S)$, he sets \begin{equation}\label{eq: finsler teich introd} \Vert v\Vert_{\textnormal{Th}}:=\displaystyle\sup_{c} \frac{ \mathrm{d}_g(L_\cdot (c))(v)}{ L_g(c)}. \end{equation} \noindent This is indeed a non-symmetric Finsler norm, namely it is non-negative, non-degenerate, $(\mathbb{R}_{\geq 0})$-homogeneous and satisfies the triangle inequality. Moreover, Thurston shows that the path metric on $\Teich(S)$ induced by this Finsler norm coincides with $d_{\textnormal{Th}}(\cdot,\cdot)$. Assume now that $S$ is closed.
Then $\Teich(S)$ identifies with a connected component $\mathfrak{T}(S)$ of the character variety $$\mathfrak{X}(\pi_1(S),\mathsf{PSL}(2, \mathbb{R})):=\textnormal{Hom}(\pi_1(S), \mathsf{PSL}(2, \mathbb{R}))/\!\!/\mathsf{PSL}(2, \mathbb{R}).$$ \noindent For a conjugacy class $[\gamma]$ in $\pi_1(S)$ and a point $\rho\in\mathfrak{T}(S)$, we set $$L_\rho^{2\lambda_1}([\gamma]):=2\lambda_1(\rho(\gamma)),$$ \noindent where $\lambda_1(\rho(\gamma))$ denotes the logarithm of the spectral radius of $\rho(\gamma)$. Identifying isotopy classes of closed curves in $S$ with conjugacy classes in $\pi_1(S)$, one deduces from Thurston's result that \begin{equation}\label{e.Th}d^{2\lambda_1}_{\textnormal{Th}}(\rho_1,\rho_2):=\sup_{[\gamma]\in[\pi_1(S)]} \log \bigg(\frac{L^{2\lambda_1}_{\rho_2}([\gamma])}{L^{2\lambda_1}_{\rho_1}([\gamma])}\bigg) \end{equation} \noindent defines an asymmetric distance on $\mathfrak{T}(S)$. Similarly, one gets an expression for the associated Finsler norm. The main goal of this note is to generalize this viewpoint, constructing asymmetric metrics and Finsler norms on other representation spaces that share many features with $\mathfrak{T}(S)$, namely, spaces of \textit{Anosov} representations, with particular attention to \textit{Hitchin}, \textit{Benoist} and \textit{positive} representations. \subsection{Results} For a finitely generated group $\Gamma$ and a semisimple Lie group $\sf G$ of non-compact type, we denote by $\cha$ the character variety $$\cha:= \textnormal{Hom}(\Gamma, \sf G)/\!\!/\sf G.$$ We furthermore denote by $\mathfrak{a}^+$ a chosen Weyl chamber of $\sf G$, and by $\lambda:\sf G\to \mathfrak{a}^+$ the Jordan projection. A functional $\varphi\in\mathfrak{a}^*$ is \emph{positive on the limit cone} of a representation $\rho\in\cha$ if for all $\gamma\in\Gamma$ of infinite order one has $\varphi(\lambda(\rho(\gamma)))\geq c\|\lambda(\rho(\gamma))\|$ for some $c>0$ and some norm on $\mathfrak{a}$.
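As a toy illustration (ours, with made-up spectra), Formula \eqref{e.Th} can be evaluated on finite marked length spectra. Uniformly rescaling one spectrum shows the asymmetry of the two ratios, and how an unnormalized supremum of log-ratios can even be negative in a general setting.

```python
import math

def d_thurston(L1, L2):
    """Naive evaluation of the (unnormalized) Thurston-type quantity
    log sup_gamma L2(gamma)/L1(gamma), for two finite marked length
    spectra given as dicts over the same set of conjugacy classes."""
    return math.log(max(L2[g] / L1[g] for g in L1))

# Toy spectra: stretching every length by a factor e gives distance 1,
# while the reversed ratio gives -1, illustrating the asymmetry.
L1 = {"a": 1.0, "b": 2.5, "ab": 3.2}
L2 = {g: math.e * v for g, v in L1.items()}
print(d_thurston(L1, L2), d_thurston(L2, L1))
```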
With this at hand, for any functional $\varphi\in\mathfrak{a}^*$ positive on the limit cone of $\rho\in\cha$, we can consider its \emph{$\varphi$-marked length spectrum} $$ L^\varphi_{\rho}(\gamma):= \varphi(\lambda(\rho(\gamma))), $$ and its $\varphi$-\emph{entropy} $$h_{\rho}^\varphi:=\displaystyle\limsup_{t\to\infty}\frac{1}{t}\log\#\{[\gamma]\in[\Gamma]:L_\rho^\varphi(\gamma)\leq t\}\in[0,\infty]. $$ Given a subset $\frakX\subset \cha$, let $\varphi\in\mathfrak{a}^*$ be a functional positive on the limit cone of each representation $\rho\in\frakX$. Naively, one would like to define $d_{\textnormal{Th}}^{\varphi}: \frakX \times \frakX \to \mathbb{R}\cup\{\infty\}$ by \begin{equation}\label{e.ThurINTRO} d_{\textnormal{Th}}^\varphi(\rho_1,\rho_2):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]}\frac{ L_{\rho_2}^\varphi(\gamma) }{L_{\rho_1}^\varphi(\gamma)}\right) \end{equation} \noindent and prove that it defines an asymmetric metric for some specific choices of $\frakX$. However, in this general setting, there could exist pairs of representations such that the $\varphi$-length spectrum of $\rho_1$ is uniformly larger than the $\varphi$-length spectrum of $\rho_2$: with the above definition, we would then have $d_{\textnormal{Th}}^\varphi(\rho_1,\rho_2)<0$ (see Remark \ref{rem: AvoidDomination} and references therein). To resolve this issue, we normalize the length ratio by the entropy: $$d_{\textnormal{Th}}^\varphi(\rho_1,\rho_2):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]}\frac{h_{\rho_2}^\varphi}{h_{\rho_1}^\varphi}\frac{ L_{\rho_2}^\varphi(\gamma) }{L_{\rho_1}^\varphi(\gamma)}\right)$$ \noindent (see Definition \ref{def: asymmetric distance anosov reps} for more details in the case when $\Gamma$ has torsion). Observe that in the case when $\frakX$ is the Teichm\"uller space, $h_{\rho}^{2\lambda_1}=1$, and thus this definition is compatible with the one given in Equation \eqref{e.Th}.
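The interplay between entropy and length rescaling can be seen in a toy computation (ours): multiplying every length by a constant $c>0$ divides the counting entropy by $c$, so the products $h^\varphi_\rho L^\varphi_\rho(\gamma)$ appearing in the normalized distance are insensitive to a global rescaling of the spectrum.

```python
import math

def entropy_estimate(lengths, t):
    """Naive finite-sample estimate of h = limsup (1/t) log #{L <= t},
    evaluated at a single cutoff t."""
    n = sum(1 for L in lengths if L <= t)
    return math.log(n) / t if n > 0 else 0.0

# Toy spectrum with exponential growth rate 1: the n-th length is log(n),
# so #{L <= t} = floor(e^t) and the estimate approaches 1.
lengths = [math.log(n) for n in range(1, 200_000)]
print(entropy_estimate(lengths, 11.0))

# Doubling every length halves the entropy estimate (evaluated at 2t),
# so h * L is invariant under the rescaling.
doubled = [2.0 * L for L in lengths]
print(entropy_estimate(doubled, 22.0))
```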
By construction $d_{\textnormal{Th}}^\varphi$ satisfies the triangle inequality. Our first result determines a setting in which this function is furthermore positive and separates points. For this we consider the space of $\Theta$-\textit{Anosov} representations, an open subset of the character variety $\cha$ depending on a subset $\Theta$ of the set of simple roots $\Pi$ of $\sf G$ (we refer the reader to Section \ref{sec, AnosovReps} for the precise definition). For any such set $\Theta$ we denote by $$\mathfrak{a}_\Theta:=\bigcap_{\alpha\in\Pi\setminus \Theta}\ker \alpha$$ and by $\mathfrak{a}_\Theta^*<\mathfrak{a}^*$ the set of functionals invariant under $p_\Theta$, the unique projection $\mathfrak{a}\to\mathfrak{a}_\Theta$ invariant under the subgroup $\mathsf{W}_\Theta$ of the Weyl group of $\mathsf{G}$ fixing $\mathfrak{a}_\Theta$ pointwise. \begin{teo}[See Theorems \ref{thm: dth for anosov} and \ref{thm:rigidity}]\label{thm:INTROZdense} Assume that $\mathsf{G}$ is connected, real algebraic, simple and center free. Assume furthermore that $\frakX\subset \cha$ consists only of Zariski dense $\Theta$-Anosov representations. Let $\varphi\in\mathfrak{a}_\Theta^*$ be positive on the limit cone of each representation in $\frakX$, and suppose that an automorphism $\tau:\sf G\to\sf G$ leaving $\varphi$ invariant is necessarily inner. Then $d_{\textnormal{Th}}^\varphi(\cdot,\cdot)$ defines a (possibly asymmetric) metric on $\frakX$. \end{teo} The Thurston distance on the Teichm\"uller space of a closed surface is complete; in general, however, the distance $d_{\textnormal{Th}}^\varphi$ might be incomplete, in part due to the entropy renormalization. This is for example the case for the Teichm\"uller space of surfaces with boundary of variable length. It would be interesting to investigate the relation between suitable metric completions and subsets of the length spectrum compactification, as introduced in \cite{Parreau}.
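For completeness, here is a one-line verification (ours) that the normalized quantity satisfies the triangle inequality: factor the ratio through a third representation and bound the supremum of a product by the product of the suprema,

```latex
d_{\textnormal{Th}}^\varphi(\rho_1,\rho_3)
 = \log\sup_{[\gamma]\in[\Gamma]}
   \frac{h_{\rho_3}^\varphi L_{\rho_3}^\varphi(\gamma)}
        {h_{\rho_1}^\varphi L_{\rho_1}^\varphi(\gamma)}
 \le \log\Biggl(
   \sup_{[\gamma]\in[\Gamma]}
   \frac{h_{\rho_2}^\varphi L_{\rho_2}^\varphi(\gamma)}
        {h_{\rho_1}^\varphi L_{\rho_1}^\varphi(\gamma)}
   \cdot
   \sup_{[\gamma]\in[\Gamma]}
   \frac{h_{\rho_3}^\varphi L_{\rho_3}^\varphi(\gamma)}
        {h_{\rho_2}^\varphi L_{\rho_2}^\varphi(\gamma)}
   \Biggr)
 = d_{\textnormal{Th}}^\varphi(\rho_1,\rho_2)
 + d_{\textnormal{Th}}^\varphi(\rho_2,\rho_3).
```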
Provided we have a good understanding of all possible Zariski closures in a given subset $\frakX\subset\cha$, we can weaken the Zariski density assumption. This is for instance the case for the set of \emph{Benoist representations}. A Benoist representation is a representation $\rho:\Gamma\to\mathsf{PGL}(d+1,\mathbb{R})$ that preserves and acts cocompactly on a strictly convex domain $\Omega_\rho\subset\mathbb{P}(\mathbb{R}^{d+1})$. We let $\textnormal{Ben}_d(\Gamma)$ be the space of conjugacy classes of Benoist representations, which by work of Koszul \cite{Koszul} and Benoist \cite{BenoistDivIII} is a union of connected components of the character variety $\frak X(\Gamma,\mathsf{PGL}(d+1,\mathbb{R}))$. Benoist representations are $\Theta$-Anosov for $\Theta=\{\alpha_1,\alpha_d\}$, see \cite{BenoistDivI} and \cite[Proposition 6.1]{GW}. In particular, the logarithm of the spectral radius $\lambda_1$ and the \textit{Hilbert length function} $\textnormal{H}:=\lambda_1-\lambda_{d+1}$ belong to $\mathfrak{a}_\Theta^*$. Here we recall that $\lambda_{d+1}(g)$ denotes the logarithm of the smallest eigenvalue of $g$. Since Benoist computed the possible Zariski closures of a Benoist representation \cite{BenoistAutomorphismes}, the argument of Theorem \ref{thm:INTROZdense} can be pushed further to show the following. \begin{teo}[See Corollary \ref{cor: asymm for benoist hilbert} and Remark \ref{rem: other functionals in benoist components}]\label{thm:INTROBenoist} The following holds: \begin{enumerate} \item The function $d_{\textnormal{Th}}^{\lambda_1}: \textnormal{Ben}_d(\Gamma) \times \textnormal{Ben}_d(\Gamma) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{\lambda_1}(\rho,\widehat{\rho}):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{h_{\widehat{\rho}}^{\lambda_1}}{h_{\rho}^{\lambda_1}} \frac{ L_{\widehat{\rho}}^{\lambda_1}(\gamma) }{L_{\rho}^{\lambda_1}(\gamma)}\right)$$ \noindent defines a (possibly asymmetric) distance on $\textnormal{Ben}_d(\Gamma)$.
\item The function $d_{\textnormal{Th}}^{\textnormal{H}}: \textnormal{Ben}_d(\Gamma) \times \textnormal{Ben}_d(\Gamma) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{\textnormal{H}}(\rho,\widehat{\rho}):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{h_{\widehat{\rho}}^{\textnormal{H}}}{h_{\rho}^{\textnormal{H}}} \frac{ L_{\widehat{\rho}}^{\textnormal{H}}(\gamma) }{L_{\rho}^{\textnormal{H}}(\gamma)}\right)$$ \noindent is non-negative, and one has $$d_{\textnormal{Th}}^{\textnormal{H}}(\rho,\widehat{\rho})=0 \Leftrightarrow \rho=\widehat{\rho} \textnormal{ or } \widehat{\rho}=\rho^\star,$$ \noindent where $\rho^\star$ is the \textnormal{contragredient} of $\rho$. \end{enumerate} \end{teo} A similar result holds for a class of representations of fundamental groups of closed real hyperbolic manifolds into $\mathsf{PO}_0(2,q)$ called \textit{AdS-quasi-Fuchsian}. These were introduced by Mess \cite{Mess} and Barbot-M\'erigot \cite{Barbot,BM}. See Corollary \ref{cor: asymm for AdSQF hilbert}. The renormalization by the entropy in Equation \eqref{e.ThurINTRO}, while necessary to ensure positivity, might seem inconvenient: it may be difficult to obtain concrete control on the entropy, and thus the relation between such a distance and the best Lipschitz constant of associated equivariant maps is lost. There are, however, natural classes of representations on which the entropy of some explicit functionals in the Levi-Anosov subspace $\mathfrak{a}_\Theta^*$ is constant. For instance, this is the case for the \textit{unstable Jacobian} $\textnormal{J}_{d-1}:=d\lambda_1+\lambda_{d+1}$ on Benoist components, thanks to work of Potrie-Sambarino \cite[Corollary 1.7]{PS}. In Corollary \ref{cor: unst Jac for benoist hilbert} we define the corresponding metric.
Another important example is the case of \emph{Hitchin representations}, the representations in the connected component $\textnormal{Hit}(S,\mathsf{G})$ of $\mathfrak X(\pi_1(S),\sf G)$, for a split real Lie group $\sf G$ and the fundamental group of a closed surface $S$, containing the composition of a lattice embedding $\pi_1(S)\to\sf{PSL}(2,\mathbb{R})$ and the principal embedding $\sf{PSL}(2,\mathbb{R})\to\sf G$ \cite{Lab, FG}. Hitchin representations are Anosov with respect to the minimal parabolic \cite{FG,GLW}, so that $\mathfrak{a}_\Theta^*=\mathfrak{a}^*$ and the entropy with respect to all simple roots is constant on $\textnormal{Hit}(S,\mathsf{G})$ and equal to one, when $\mathsf{G}$ is classical \cite{PS,PSW1}. All possible Zariski closures of $\mathsf{PSL}(d,\mathbb{R})$-Hitchin representations have been determined by Guichard \cite{Guichard}, and recently a written proof appeared in \cite{sambarino2020infinitesimal}. This result also covers $\mathsf{PSp}(2r,\mathbb{R})$ and $\mathsf{PSO}(p,p+1)$-Hitchin representations, but not the Hitchin component of $\mathsf{PSO}_0(p,p)$ (see Subsection \ref{subsec: rigidity hitchin} for details). As we explain in Subsection \ref{subsec: rigidity hitchin}, Sambarino's approach also works in that case. We deduce the following. \begin{teo}[See Corollary \ref{cor: asymm for hitchin roots}]\label{thm:INTROHitchin} Let $\mathsf{G}$ be an adjoint, simple, real-split Lie group of classical type. Let $\alpha$ be any simple root of $\sf G$, with the exception of the roots listed in Table \ref{table:1}. Then the function $d_{\textnormal{Th}}^{\alpha}: \textnormal{Hit}(S,\mathsf{G}) \times \textnormal{Hit}(S,\mathsf{G}) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{\alpha}(\rho,\widehat{\rho}):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{ L_{\widehat{\rho}}^{\alpha}(\gamma) }{L_{\rho}^{\alpha}(\gamma)}\right)$$ \noindent defines an asymmetric distance on $\textnormal{Hit}(S,\sf G)$. \end{teo} \begin{table}[h!] 
\begin{center} \begin{tabular}{c|c|c|c} Type & Group & Diagram &Bad roots \\ \hline ${\sf A}_{2n-1}$ & $\sf{PSL}_{2n}(\mathbb{R})$ & $\begin{dynkinDiagram}A{oo.*.oo}\draw[thick] (root 1) to [out=-45, in=-135] (root 5);\draw[thick] (root 2) to [out=-45, in=-135] (root 4);\end{dynkinDiagram}$&$\{\alpha_n\}$ \\\hline \multirow{2}{*}{${\sf D}_n$} & ${\sf PO}(n,n)$ $\forall n\geq5$ & $\begin{dynkinDiagram}D{**.**oo}\draw[thick] (root 5) to [out=-45, in=45] (root 6);\end{dynkinDiagram}$ &$\{\alpha_1,\ldots, \alpha_{n-2}\}$\\ & ${\sf PO}(4,4)$ & $\begin{dynkinDiagram}D{****} \end{dynkinDiagram}$&$\{\alpha_1,\ldots, \alpha_{4}\}$\\ \end{tabular} \caption{The roots marked in black are fixed by a non-trivial automorphism, and are therefore not covered by Theorem \ref{thm:INTROHitchin}.}\label{table:1} \end{center} \end{table} Also in this case, even for the bad roots we can understand precisely when two representations have distance zero. See Subsection \ref{s.Zdense} for further families of representations for which we can generalize Theorem \ref{thm:INTROHitchin}; this is notably the case for some connected components of \emph{$\Theta$-positive representations} of fundamental groups of surfaces in ${\sf PO}(p,p+1)$ \cite{GWTheta}, which are smooth and conjectured to only consist of Zariski dense representations \cite[Conjecture 1.7]{Collier}. \medskip As a second theme in the paper we give an explicit formula for the Finsler norm associated to the distance on the set $\ha$ of $\Theta$-Anosov representations. More specifically, we introduce a function $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi:T\ha\to \mathbb{R}\cup\{\pm\infty\}$ which is defined as follows. For a given tangent vector $v\in T_\rho\ha$, we set $$\Vert v\Vert_{\textnormal{Th}}^\varphi:= \displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)L_\rho^\varphi(\gamma)+h_\rho^\varphi \mathrm{d}_\rho(L_\cdot^\varphi(\gamma))(v)}{h_\rho^\varphi L_\rho^\varphi(\gamma)}. 
$$ \noindent If $\rho\mapsto h_{\rho}^\varphi$ is constant, then this expression naturally generalizes Thurston's Finsler norm (\ref{eq: finsler teich introd}). We prove \begin{prop}[See Corollary \ref{cor: link finsler and asymm for reps}] Let $\{\rho_s\}_{s\in (-1,1)}\subset \ha$ be a real analytic family and set $\rho:=\rho_0$ and $ v:=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}\rho_s$. Then $s\mapsto d_{\textnormal{Th}}^\varphi(\rho,\rho_s)$ is differentiable at $s=0$ and $$\Vert v\Vert_{\textnormal{Th}}^\varphi=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}d_{\textnormal{Th}}^\varphi(\rho,\rho_s).$$ \end{prop} It is natural to ask whether $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ defines a Finsler norm. In this direction we show: \begin{teo}[See Corollary \ref{cor: finsler for reps}]\label{thm: finsler INTRO} Let $\rho\in\ha$ be a point admitting an analytic neighbourhood in $\ha$. Then the function $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi:T_\rho\ha\to\mathbb{R}\cup\{\pm\infty\}$ is real-valued and non-negative. Furthermore, it is $(\mathbb{R}_{>0})$-homogeneous, satisfies the triangle inequality and one has $\Vert v\Vert_{\textnormal{Th}}^\varphi=0$ if and only if \begin{equation}\label{eq: non deg finsler INTRO} \mathrm{d}_\rho (L_\cdot^\varphi(\gamma))(v)=-\frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)}{h_\rho^\varphi}L_\rho^\varphi(\gamma) \end{equation} \noindent for all $\gamma\in\Gamma$. In particular, if the function $\widehat{\rho}\mapsto h_{\widehat{\rho}}^\varphi$ is constant, then $$\Vert v\Vert_{\textnormal{Th}}^\varphi=0\Leftrightarrow \mathrm{d}_\rho (L_\cdot^\varphi(\gamma))(v)=0$$ \noindent for all $\gamma\in\Gamma$. \end{teo} Condition (\ref{eq: non deg finsler INTRO}) has been studied by Bridgeman-Canary-Labourie-Sambarino \cite{BCLS,BCLSSIMPLEROOTS} in some situations.
By applying their results we obtain: \begin{cor}[See Corollaries \ref{cor: finsler norm hitchin first root} and \ref{cor: finsler norm hitchin spectral}] The functions $\Vert\cdot\Vert^{\alpha_1}_{\textnormal{Th}}$ and $\Vert\cdot\Vert^{\lambda_1}_{\textnormal{Th}}$ define Finsler norms on $\textnormal{Hit}_d(S):=\textnormal{Hit}(S,\mathsf{PSL}(d,\mathbb{R}))$. \end{cor} We do not know, in this general setting, whether the length metric induced by the Finsler norm $\Vert\cdot\Vert^{\varphi}_{\textnormal{Th}}$ agrees with the distance $d_{\textnormal{Th}}^{\varphi}$: indeed, it is not clear whether the latter distance is geodesic. Our final result is an application of Labourie-Wentworth's computation of the derivative of some length functions on $\textnormal{Hit}_d(S)$ along some special directions \cite{VariationAlongFuchsian}. By the work of Hitchin \cite{Hitchin_HitchinComponent}, fixing a Riemann surface structure $X_0$ on $S$, we can parametrize $\textnormal{Hit}_d(S)$ by a vector space of holomorphic differentials (of different degrees) over $X_0$. Given a holomorphic differential $q$ of degree $k$, we associate to the ray $t\mapsto tq$ for $t\geq 0$ a family $\{\rho_t\}_{t\geq 0}$ of Hitchin representations via the aforementioned parametrization of Hitchin. We denote by $v(q)\in T_{X_0}\textnormal{Hit}_d(S)$ its tangent direction at $t=0$. The holomorphic differential $q$ also defines a function $\textnormal{Re}(q):T^1X_0\to\mathbb{R}$. Details of this construction will be given in Subsection \ref{subsec: finsler for hitchin}.
\begin{teo}[See Proposition \ref{prop: finsler fuchsian locus hitchin}]\label{thm:INTROFinslerhitchin} There exist constants $C_1$ and $C_2$, depending only on $d$ and $k$, such that for every vector $v=v(q)\in T_{X_0} \textnormal{Hit}_d(S)$ as above, one has $$\Vert v(q)\Vert_{\textnormal{Th}}^{\lambda_1}=C_1\displaystyle\sup_{[\gamma]\in[\Gamma]} \int \textnormal{Re}(q) \mathrm{d} \delta_\phi(a_{[\gamma]}) $$\noindent and $$\Vert v(q)\Vert_{\textnormal{Th}}^{\alpha_1}=C_2\displaystyle\sup_{[\gamma]\in[\Gamma]} \int \textnormal{Re}(q) \mathrm{d}\delta_\phi(a_{[\gamma]}), $$ \noindent where $\phi$ denotes the geodesic flow of $X_0$, $a_{[\gamma]}\subset T^1X_0$ denotes the $\phi$-periodic orbit corresponding to $[\gamma]$, and $\delta_\phi(a_{[\gamma]})$ denotes the $\phi$-invariant Dirac probability measure supported on $a_{[\gamma]}$. \end{teo} \subsection{Outline of the proofs}\label{subsec: outlineINTRO} The proofs of our main results follow closely the approach by Guillarmou-Knieper-Lefeuvre \cite{GeodesicStretchPressureMetric}, which is based on work of Knieper \cite{KnieperVolumeGrowth} and Bridgeman-Canary-Labourie-Sambarino \cite{BCLS}. In \cite{GeodesicStretchPressureMetric}, the authors work with the space $\mathfrak{M}$ of isometry classes of negatively curved, entropy one Riemannian metrics on a closed manifold $M$. For $g\in\mathfrak{M}$ and an isotopy class $c$ of closed curves in $M$, one may define $L_g(c)$ as we did when $g$ was a point in Teichm\"uller space. Guillarmou-Knieper-Lefeuvre define $$d_{\textnormal{Th}}(g_1,g_2):= \log \sup_{c} \frac{L_{g_2}(c)}{L_{g_1}(c)},$$ \noindent where the supremum is taken over all isotopy classes $c$ of closed curves in $M$.
In \cite[Proposition 5.4]{GeodesicStretchPressureMetric} the authors show \begin{equation}\label{eq: ineq GKL} d_{\textnormal{Th}}(g_1,g_2)\geq 0 \end{equation} \noindent for all $g_1,g_2\in\mathfrak{M}$, and moreover \begin{equation}\label{eq: eq GKL} d_{\textnormal{Th}}(g_1,g_2)= 0 \Leftrightarrow L_{g_1}=L_{g_2}. \end{equation} Guillarmou-Lefeuvre's Local Length Spectrum Rigidity Theorem \cite[Theorem 1]{LocalLengthRigidity} (see also \cite[Theorem 1.1]{GeodesicStretchPressureMetric}) gives that the condition $L_{g_1}=L_{g_2}$ in Equation (\ref{eq: eq GKL}) is equivalent to $g_1=g_2$, provided that these two metrics are sufficiently regular and close enough in some appropriate topology. Hence, $d_{\textnormal{Th}}(\cdot,\cdot)$ defines an asymmetric metric on a neighbourhood of the diagonal in $\mathfrak{M}'\times\mathfrak{M}'$, where $\mathfrak{M}'\subset\mathfrak{M}$ is the subset consisting of sufficiently regular metrics (see \cite{LocalLengthRigidity,GeodesicStretchPressureMetric} for details). Guillarmou-Knieper-Lefeuvre also construct an associated Finsler norm \cite[Lemma 5.6]{GeodesicStretchPressureMetric}. Even though the Local Length Spectrum Rigidity Theorem is a geometric statement, the proofs of (\ref{eq: ineq GKL}) and (\ref{eq: eq GKL}) can be abstracted to a more general dynamical framework inspired by \cite[Section 3]{BCLS}. We develop this general dynamical framework in detail in Sections \ref{sec: thermodynamics} and \ref{sec: asymmetric metric and finsler norm for flows}, as well as the specific statements needed for the construction of an asymmetric distance and a Finsler norm in that setting. As we explain, these general constructions can then be applied not only to the space $\mathfrak{M}$ as in Guillarmou-Knieper-Lefeuvre, but also to other geometric settings, such as spaces of Anosov representations. We expect these constructions to be applicable in many more geometric contexts.
The general dynamical framework in Guillarmou-Knieper-Lefeuvre's setting arises as follows: Gromov observed that the geodesic flows of any two $g_1,g_2\in \mathfrak{M}$ are \textit{orbit equivalent} \cite{GromovGeodesicFlow}. Roughly speaking, this means that the two flows have the same orbits, travelled at possibly different ``speeds'' (see Subsection \ref{subsec: orbit equiv} for details). The change of speed (or \textit{reparametrization}) is encoded by a positive H\"older continuous function $r=r_{g_1,g_2}$ on the unit tangent bundle $X:=T^1M$ of $M$. To be more precise, the function $r_{g_1,g_2}$ is only well defined up to an equivalence relation, called \textit{Liv\v{s}ic cohomology} (see Definition \ref{dfn: livsic}). Thus, we work in the general dynamical setting of studying the ``geometry'' of the space $\mathcal{L}_1(X)$ of Liv\v{s}ic cohomology classes of entropy one H\"older functions on $X$ over the geodesic flow $\phi$ of $g_1$. Since $\phi$ is an Anosov flow, one may study $\mathcal{L}_1(X)$ through the lens of \textit{Thermodynamic Formalism} (see Subsection \ref{subsec:coding and metric Anosov}). Crucial for us is the following rigidity result by Bridgeman-Canary-Labourie-Sambarino \cite[Proposition 3.8]{BCLS} (see Proposition \ref{prop: BCLS renorm int rigidity} below): there exists a distinguished $\phi$-invariant probability measure $m^{\textnormal{BM}}(\phi)$ so that \begin{equation}\label{eq: BCLS rigidity intro} \int r\mathrm{d} m^{\textnormal{BM}}(\phi) \geq 1 \end{equation} \noindent and equality holds if and only if $r$ is Liv\v{s}ic cohomologous to the constant function $1$, namely, if and only if the periods of periodic orbits of $\phi$ and of the reparametrized flow by $r$ coincide.
Thus \begin{equation}\label{eq: erg optim introd} \displaystyle\sup_{m}\int r\mathrm{d} m \geq 1, \end{equation} \noindent where the supremum is taken over all $\phi$-invariant probability measures, and equality in the above formula holds if and only if $r$ is Liv\v{s}ic cohomologous to $1$. By Proposition \ref{prop:sup of periods and measures}, the quantity in (\ref{eq: erg optim introd}) coincides with the supremum, over all periodic orbits, of the ratio between the period for the reparametrized flow by $r$ and the period for $\phi$. These general dynamical considerations, when applied specifically to reparametrizing functions associated to $g_1,g_2\in\mathfrak{M}$, readily imply (\ref{eq: ineq GKL}) and (\ref{eq: eq GKL}). Now, as in \cite{BCLS} for their construction of a \textit{pressure metric} (see Subsections \ref{subsec: other related work} and \ref{subsec: comparison pressure norm} for a detailed comparison), the above general approach can also be applied to study spaces of Anosov representations. We use Sambarino's Reparametrizing Theorem \cite{Quantitative} (see Theorem \ref{thm: reparametrizing theorem} below) to map $\ha$ to a space of Liv\v{s}ic cohomology classes of H\"older functions over \textit{Gromov's geodesic flow} $\textnormal{U}\Gamma$ of $\Gamma$. More precisely, we associate to each Anosov representation $\rho$ and each $\varphi\in\mathfrak{a}_\Theta^*$ a H\"older reparametrization of the geodesic flow $\textnormal{U}\Gamma$ encoding the $\varphi$-spectral data of $\rho$. This procedure is more involved than in the case of negatively curved metrics, not only because it depends on the additional choice of the functional $\varphi$, but also because the entropy $h_\rho^\varphi$ is, in general, non-constant in $\rho$.
While one can bypass this problem when working with the space $\mathfrak{M}$ by normalizing the metric, this is not a natural procedure in our setting; this is why the extra normalization appears in the expression for $d_{\textnormal{Th}}^\varphi(\cdot,\cdot)$ (see Remark \ref{rem: domination flows} for further comments on this point). Nevertheless, Bridgeman-Canary-Labourie-Sambarino's rigidity statement (\ref{eq: BCLS rigidity intro}) is adapted to the setting of arbitrary entropy and we deduce \begin{equation}\label{eq: ineq CDPW} d_{\textnormal{Th}}^\varphi(\rho_1,\rho_2)\geq 0 \end{equation} \noindent for all $\rho_1,\rho_2\in\ha$, and moreover \begin{equation}\label{eq: eq CDPW} d_{\textnormal{Th}}^\varphi(\rho_1,\rho_2)= 0 \Leftrightarrow h_{\rho_1}^\varphi L_{\rho_1}^\varphi=h_{\rho_2}^\varphi L_{\rho_2}^\varphi, \end{equation} \noindent which are the exact analogues of Equations (\ref{eq: ineq GKL}) and (\ref{eq: eq GKL}). To finish the proof of Theorems \ref{thm:INTROZdense}, \ref{thm:INTROBenoist} and \ref{thm:INTROHitchin} we need to understand under which conditions one can guarantee \textit{Renormalized Length Spectrum Rigidity}, that is, under which conditions the equality $h_{\rho_1}^\varphi L_{\rho_1}^\varphi=h_{\rho_2}^\varphi L_{\rho_2}^\varphi$ implies that $\rho_1$ and $\rho_2$ are conjugate. As in the case of negatively curved metrics, where length spectrum rigidity is only known to hold locally, this typically requires restricting to a subset of $\ha$. More precisely, we need to control the Zariski closure $\mathsf{G}_{\rho_i}$ of $\rho_i$, for $i=1,2$. Since central elements and compact factors are invisible to the Jordan projection, we must require that $\mathsf{G}_{\rho_i}$ is center free and without compact factors.
Once this is granted, if we assume moreover that $\mathsf{G}_{\rho_i}$ is semisimple, renormalized length spectrum rigidity follows essentially from properties of Benoist's limit cone (see Theorem \ref{thm:rigidity} and \cite[Corollary 11.6]{BCLS}). In some special cases, such as Hitchin components and some components of Benoist and positive representations, these arguments can be pushed further to guarantee global rigidity (see Theorem \ref{Thm:rigitity length hitchin} and Section \ref{s.other}). We study the Finsler norm on $\ha$ following the same approach, namely, by finding a general dynamical construction inspired by \cite{GeodesicStretchPressureMetric}, and then pulling back this construction to spaces of Anosov representations. Observe, however, that in this case we need a more complicated expression than what is available in \cite{GeodesicStretchPressureMetric}, because we cannot assume that the entropy is constant. We may summarize the above discussion by saying that the results of this paper are obtained by adapting the corresponding constructions in \cite{GeodesicStretchPressureMetric} to the context of Anosov representations: thanks to the work of Sambarino \cite{Quantitative} and Bridgeman-Canary-Labourie-Sambarino \cite{BCLS}, we can rely on the Thermodynamical Formalism, on which part of the constructions in \cite{GeodesicStretchPressureMetric} are based, while the local rigidity statement needed in \cite{GeodesicStretchPressureMetric} is replaced here by rigidity statements for Anosov representations from \cite{BCLS}. One of the strong points of our approach is the identification of a suitable general setup encompassing both contexts, which might prove useful in other geometric situations.
\subsection{Other related work}\label{subsec: other related work} In \cite{BCLS,BCLSSIMPLEROOTS} the authors construct $\mathsf{Out}(\Gamma)$-invariant analytic Riemannian metrics on $\ha$: they deduce from the aforementioned rigidity result that the Hessian of the renormalized intersection (\ref{eq: BCLS rigidity intro}) is a positive semidefinite form, called the \textit{pressure form}. This can be pulled back to spaces of Anosov representations, sometimes yielding a positive definite form \cite{BCLS,BCLSSIMPLEROOTS}. The construction of this paper is different: instead of integrating with respect to a given measure and taking a second derivative, we integrate with respect to all invariant measures (see Subsection \ref{subsec: comparison pressure norm} for more detailed comparisons). The rigidity result in Equation (\ref{eq: BCLS rigidity intro}) was previously known to hold in other settings. When restricted to geodesic flows of closed hyperbolic surfaces, it is a reinterpretation of Bonahon's Rigidity Intersection Theorem \cite[p. 156]{BonahonCurrents} (see Appendix \ref{appendix: currents} for more details). More generally, that same result was known to hold for pairs of convex co-compact, rank one representations $\rho_1$ and $\rho_2$ of a word hyperbolic group $\Gamma$: see Burger \cite[p. 219]{BurgerManhattan}. Burger's results readily imply that $$d_{\textnormal{Th}}(\rho_1,\rho_2):=\log\sup_{[\gamma]\in[\Gamma]} \frac{h_{\rho_2}}{h_{\rho_1}}\frac{ L_{\rho_2}(\gamma) }{L_{\rho_1}(\gamma)}$$ \noindent defines an asymmetric distance on a subset of the space of conjugacy classes of convex co-compact representations $\Gamma\to\mathsf{G}$, where $\mathsf{G}$ has real rank one (note that in a rank one situation the choice of a functional $\varphi$ is irrelevant).
Burger also relates the number \begin{equation}\label{eq: sup of ratios no entropy} \sup_{[\gamma]\in[\Gamma]} \frac{ L_{\rho_2}(\gamma) }{L_{\rho_1}(\gamma)} \end{equation} \noindent with one of the asymptotic slopes of the corresponding \textit{Manhattan curve}: see \cite[Theorem 1]{BurgerManhattan}. Gu\'eritaud-Kassel \cite[Proposition 1.13]{GK} extend Burger's asymmetric metric to some not necessarily convex co-compact representations into the isometry group of the real hyperbolic space. They also show that in some situations the value (\ref{eq: sup of ratios no entropy}) coincides with the best possible Lipschitz constant for maps between the two underlying real hyperbolic manifolds. Our construction of the asymmetric metric is carried out in a very general dynamical setting, and pulled back to spaces of Anosov representations through Sambarino's Reparametrizing Theorem. For reparametrizations of the geodesic flow of a closed surface, a construction of a similar flavor was introduced by Tholozan \cite[Theorem 1.31]{ThoHighest}. His construction leads to a symmetric distance, and is described in terms of the projective geometry of an appropriate Banach space (see \cite{ThoHighest} and Remark \ref{rem: tholozan symmetric} for further details). It would be intriguing to understand the relation between Tholozan's construction and the approach we carry out here. \subsection{Plan of the paper} In Section \ref{sec: thermodynamics} we discuss the dynamical setup, and in Section \ref{sec: asymmetric metric and finsler norm for flows} we construct the asymmetric metric and the corresponding Finsler norm in this general setting. In Section \ref{sec, AnosovReps} we recall the definition and main examples of interest of Anosov representations. In Section \ref{sec: anosov flows and reps} we recall Sambarino's Reparametrizing Theorem.
In Section \ref{sec, generalizedThurston} we pull back the construction of Section \ref{sec: asymmetric metric and finsler norm for flows} to spaces of Anosov representations and also discuss the renormalized length spectrum rigidity in general. In Sections \ref{sec: hitchin} and \ref{s.other} we specify the discussion to Hitchin representations, as well as some components of Benoist and positive representations. In Appendix \ref{appendix: currents} we discuss in detail the link between the rigidity statement (\ref{eq: BCLS rigidity intro}) and Bonahon's Rigidity Intersection Theorem. \subsection{Acknowledgements} We are grateful to Gerhard Knieper, Rafael Potrie and Andr\'es Sambarino for several helpful discussions and comments. \section{Thermodynamical formalism}\label{sec: thermodynamics} We begin by recalling some important terminology and results about the dynamics of topological flows on compact metric spaces. In Subsection \ref{subsec: orbit equiv} we recall the notions of H\"older orbit equivalence and Liv\v{s}ic cohomology. In Subsection \ref{subsec: measures and pressure} we recall the important concept of pressure, and fix some terminology that will be used throughout the paper. In Subsection \ref{subsec:coding and metric Anosov} we recall the notion of \textit{Markov coding} of a topological flow, and state the main consequences of admitting such a coding. We also recall the notion of \textit{metric Anosov flows}, an important class of flows that admit Markov codings. Finally, in Subsection \ref{subsec: inter and renormalized intersection} we recall the notion of \textit{renormalized intersection}, which is central in our study of the asymmetric metric. The exposition follows closely Bridgeman-Canary-Labourie-Sambarino \cite[Section 3]{BCLS}. \subsection{Topological flows, reparametrizations and (orbit) equivalence}\label{subsec: orbit equiv} Let $\phi=(\phi_t: X\to X)$ be a H\"older continuous flow on a compact metric space $X$. 
In this paper we always assume that $\phi$ is \textit{topologically transitive}, meaning that $\phi$ has a dense orbit. The choice of a continuous function $r:X\to\mathbb{R}_{>0}$ induces a ``reparametrization'' $\phi^r$ of the flow $\phi$. Informally, this is a flow with the same orbits as $\phi$, but travelled at a different ``speed''. To define this notion properly, we first let $\kappa_r:X\times\mathbb{R}\to\mathbb{R}$ be given by $$\kappa_r(x,t):=\displaystyle\int_{0}^tr(\phi_s(x))\mathrm{d} s.$$ \noindent The function $\kappa_r(x,\cdot):\mathbb{R}\to\mathbb{R}$ is an increasing homeomorphism for all $x\in X$ and therefore admits an (increasing) inverse $\alpha_r(x,\cdot):\mathbb{R}\to\mathbb{R}$. That is, we have $$\kappa_r(x,\alpha_r(x,t))=\alpha_r(x,\kappa_r(x,t))=t$$ \noindent for all $x\in X$ and $t\in\mathbb{R}$. \begin{dfn} The \textit{reparametrization} of $\phi$ by a continuous function $r:X\to\mathbb{R}_{>0}$ is the flow $\phi^r=(\phi^r_t:X\to X)$ defined by the formula $$\phi_t^r(x):=\phi_{\alpha_r(x,t)}(x)$$ \noindent for all $x\in X$ and $t\in\mathbb{R}$. We say that $\phi^r$ is a \textit{H\"older reparametrization} of $\phi$ if $r$ is H\"older continuous. We let $\textnormal{HR}(\phi)$ be the set of H\"older reparametrizations of $\phi$. \end{dfn} The reader may wonder why we choose the function $\alpha_r$ to reparametrize, instead of directly considering the function $\kappa_r$. One reason is the following. Let $\psi\in\textnormal{HR}(\phi)$, and denote by $r_{\phi,\psi}$ the corresponding reparametrizing function, i.e. $\psi=\phi^{r_{\phi,\psi}}$. Denote by $\mathcal{O}$ the set of periodic orbits of $\psi$ (note that this set is independent of the choice of $\psi$). Given $a\in\mathcal{O}$ we denote by $p_\psi(a)$ the period, according to the flow $\psi$, of the periodic orbit $a$.
Then for every $x\in a$ one has the following equality $$\displaystyle\int_{0}^{p_{\phi}(a)}r_{\phi,\psi}(\phi_t(x))\mathrm{d} t=p_{\psi}(a).$$ \noindent Hence, by choosing the function $\alpha_{r_{\phi,\psi}}$ (instead of $\kappa_{r_{\phi,\psi}}$) we avoid a cumbersome formula involving the integral of $1/r_{\phi,\psi}$ when computing the periods of the new flow. If we take another point $\widehat{\psi}\in\textnormal{HR}(\phi)$, then $\widehat{\psi}$ is a reparametrization of $\psi$, that is, one has $\widehat{\psi}=\psi^{r_{\psi,\widehat{\psi}}}$ for some positive continuous function $r_{\psi,\widehat{\psi}}$. In fact, an explicit computation shows \begin{equation}\label{eq: rep funtion fro psi to widehatpsi} r_{\psi,\widehat{\psi}}=\frac{r_{\phi,\widehat{\psi}}}{r_{\phi,\psi}}. \end{equation} \noindent As above, for every $a\in\mathcal{O}$ and every $x\in a$ one has \begin{equation}\label{eq: integral of reparametrizing over periodic orbit} \displaystyle\int_{0}^{p_{\psi}(a)}r_{\psi,\widehat{\psi}}(\psi_t(x))\mathrm{d} t=p_{\widehat{\psi}}(a). \end{equation} There are two notions of equivalence between topological flows that we now recall. A H\"older continuous flow $\phi'=(\phi_t':X'\to X')$ on a compact metric space $X'$ is said to be \textit{(H\"older) conjugate} to $\phi$ if there is a (H\"older) homeomorphism $h:X\to X'$ satisfying $$h\circ\phi_t=\phi'_t\circ h$$ \noindent for all $t\in\mathbb{R}$. A weaker notion is that of orbit equivalence: the flow $\phi'=(\phi'_t:X'\to X')$ is said to be \textit{(H\"older) orbit equivalent} to $\phi$ if it is (H\"older) conjugate to a (H\"older) reparametrization of $\phi$. One can see that every flow in the orbit equivalence class of $\phi$ is topologically transitive. To single out elements in $\textnormal{HR}(\phi)$ which are conjugate to $\phi$, one introduces \textit{Liv\v{s}ic cohomology}. 
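Before doing so, it may help to record the simplest possible example; the following remark is only an elementary consistency check for the formulas above.

```latex
\begin{rem}
Suppose $r\equiv c$ for some constant $c>0$. Then $\kappa_r(x,t)=ct$, hence
$\alpha_r(x,t)=t/c$ and $\phi^r_t=\phi_{t/c}$: the reparametrized flow travels
the same orbits at $1/c$ times the speed. In particular, for every
$a\in\mathcal{O}$ and $x\in a$ one has
$$p_{\phi^r}(a)=c\,p_{\phi}(a)=\displaystyle\int_0^{p_{\phi}(a)}r(\phi_t(x))\mathrm{d} t,$$
\noindent in accordance with the period formula above. Note that for $c\neq 1$
the flows $\phi^r$ and $\phi$ have different periods, and hence are not
conjugate: a general reparametrization genuinely changes the dynamics.
\end{rem}
```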
To motivate this notion, consider a H\"older continuous function $V:X\to\mathbb{R}$ of class C$^1$ along $\phi$, and let $$r(x):=\left(\left.\dfrac{\mathrm{d}}{\mathrm{d} t}\right\vert_{t=0}V(\phi_t(x))\right)+1.$$ \noindent If $r$ is positive, then $\phi^r$ is conjugate to $\phi$. Explicitly, if one defines $h(x):=\phi_{V(x)}(x)$, then $$h\circ \phi_t^r=\phi_t\circ h$$ \noindent for all $t\in\mathbb{R}$. \begin{dfn}\label{dfn: livsic} Two H\"older continuous functions $f,g:X\to\mathbb{R}$ are said to be \textit{Liv\v{s}ic cohomologous} (with respect to $\phi$) if there is a H\"older continuous function $V:X\to\mathbb{R}$ of class C$^1$ along the direction of $\phi$, so that for all $x\in X$ one has $$f(x)-g(x)=\left.\frac{\mathrm{d}}{\mathrm{d} t}\right\vert_{t=0}V(\phi_t(x)).$$ \noindent In that case we write $f\sim_\phi g$, and denote the Liv\v{s}ic cohomology class of $f$ with respect to $\phi$ by $[f]_\phi$. \end{dfn} \subsection{Invariant measures, entropy and pressure}\label{subsec: measures and pressure} For $\psi\in\textnormal{HR}(\phi)$ we denote by $\mathscr{P}(\psi)$ the set of $\psi$-invariant probability measures on $X$. This is a convex compact metrizable space. We also let $\mathscr{E}(\psi)\subset\mathscr{P}(\psi)$ be the subset consisting of \textit{ergodic} measures, that is, the subset of measures for which $\psi$-invariant measurable subsets have measure either equal to zero or one. The set $\mathscr{E}(\psi)$ is the set of extremal points of $\mathscr{P}(\psi)$. By the Choquet Representation Theorem (see Walters \cite[p. 153]{WalterErgodicBook}), every element $m\in\mathscr{P}(\psi)$ admits an \textit{Ergodic Decomposition}. This means that there exists a unique probability measure $\tau_m$ on $\mathscr{E}(\psi)$ such that $$\int_X f(x) \mathrm{d}m(x)= \int_{\mathscr{E}(\psi)} \bigg(\int_X f(x) \mathrm{d}\mu(x)\bigg)\mathrm{d}\tau_{m}(\mu)$$ \noindent holds for every continuous function $f$ on $X$. 
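A simple instance of the ergodic decomposition, which may help fix ideas, is the purely atomic case:

```latex
\begin{rem}
If $m=\sum_{i=1}^{k}\lambda_i\mu_i$ is a finite convex combination of pairwise
distinct ergodic measures $\mu_1,\dots,\mu_k\in\mathscr{E}(\psi)$, with
$\lambda_i\geq 0$ and $\sum_{i=1}^{k}\lambda_i=1$, then its ergodic
decomposition is $\tau_m=\sum_{i=1}^{k}\lambda_i\delta_{\mu_i}$, where
$\delta_{\mu_i}$ denotes the Dirac probability measure on $\mathscr{E}(\psi)$
concentrated at $\mu_i$.
\end{rem}
```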
The set of periodic orbits of $\psi$ embeds into $\mathscr{P}(\psi)$ as follows: for $a\in\mathcal{O}$, we denote by $\delta_{\psi}(a)\in\mathscr{P}(\psi)$ the \textit{Dirac mass} supported on $a$, that is, the push-forward of the Lebesgue probability measure on $S^1\cong [0,1]/\sim$ (where $0\sim 1$) under the map $$S^1 \to X: t\mapsto \psi_{p_{\psi}(a)t}(x),$$ \noindent where $x$ is any point in $a$. Note that $\delta_{\psi}(a)\in\mathscr{E}(\psi)$. Using Equation (\ref{eq: integral of reparametrizing over periodic orbit}), we conclude that for every $\widehat{\psi}\in\textnormal{HR}(\phi)$ one has \begin{equation}\label{eq: integral of reparametrizing over delta in periodic orbit} p_{\widehat{\psi}}(a)=p_{\psi}(a)\displaystyle\int_{X}r_{\psi,\widehat{\psi}}\mathrm{d}\delta_{\psi}(a). \end{equation} More generally, for $m\in\mathscr{P}(\psi)$, the map $m\mapsto\widehat{m}$ given by \begin{equation}\label{eq: iso between ppsi and pwpsi} \mathrm{d} \widehat{m}:=\frac{r_{\psi,\widehat{\psi}}\mathrm{d} m}{\int r_{\psi,\widehat{\psi}}\mathrm{d} m} \end{equation} \noindent defines an isomorphism $\mathscr{P}(\psi)\cong\mathscr{P}(\widehat{\psi})$. We now recall the notion of \textit{topological pressure}, which will be central for our purposes. \begin{dfn} Let $f: X\to \mathbb{R}$ be a continuous function (or \textit{potential}). The \emph{topological pressure} (or \emph{pressure}) of $f$ is defined by \begin{equation}\label{eq: def pressure} \textbf{P}(\phi,f):=\sup\limits_{m\in\mathscr{P}(\phi)}\left(h(\phi,m)+ \int_X f \mathrm{d} m\right), \end{equation} \noindent where $h(\phi,m)$ is the \textit{metric entropy} of $m$. \end{dfn} The metric entropy (or \emph{measure-theoretic entropy}) $h(\phi,m)$ is defined using $m$-measurable partitions of $X$ and is a metric isomorphism invariant (see \cite[Chapter 4]{WalterErgodicBook}). When there is no risk of confusion we will omit the flow $\phi$ in the notation and simply write $\textbf{P}(f)=\textbf{P}(\phi,f)$.
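Two elementary properties follow directly from Equation (\ref{eq: def pressure}); we record them as a sanity check for the definition.

```latex
\begin{rem}
For every continuous $f:X\to\mathbb{R}$ and every constant $c\in\mathbb{R}$ one has
$$\textbf{P}(\phi,f+c)=\textbf{P}(\phi,f)+c,$$
\noindent since $\int_X(f+c)\mathrm{d} m=\int_X f\mathrm{d} m+c$ for every
$m\in\mathscr{P}(\phi)$. Similarly, if $f\leq g$ pointwise, then
$\textbf{P}(\phi,f)\leq\textbf{P}(\phi,g)$, as the supremum in
(\ref{eq: def pressure}) is taken over the same set of measures.
\end{rem}
```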
A special and important case is the pressure of the potential $f\equiv 0$, which is called the \textit{topological entropy} of $\phi$. It is denoted by $h_{\textnormal{top}}(\phi)$, or simply by $h_\phi$. The topological entropy is a topological invariant: conjugate flows have the same topological entropy. In contrast, the topological entropy is not invariant under reparametrizations. A measure $m \in \mathscr{P}(\phi)$ realizing the supremum in Equation (\ref{eq: def pressure}) is called an \emph{equilibrium state} of $f$. An equilibrium state for $f\equiv 0$ is called a \emph{measure of maximal entropy} of $\phi$. Liv\v{s}ic cohomologous functions share several common invariants from thermodynamical formalism. \begin{rem}\label{LivsicCommon} If $f: X\to \mathbb{R}$ and $g: X\to \mathbb{R}$ are Liv\v{s}ic cohomologous functions (with respect to $\phi$), then $\textbf{P}(\phi,f)=\textbf{P}(\phi,g)$ and $m\in\mathscr{P}(\phi)$ is an equilibrium state for $f$ if and only if it is an equilibrium state for $g$. Indeed, if $f\sim_\phi g$ and $m\in\mathscr{P}(\phi)$ then $$\displaystyle\int_Xf\mathrm{d}m=\displaystyle\int_Xg\mathrm{d}m.$$ \noindent This is a consequence of the $\phi$-invariance of $m$ and the Mean Value Theorem for derivatives of real functions. \end{rem} The following is well-known and useful. \begin{prop}[Bowen-Ruelle {\cite[Proposition 3.1]{BowenRuelle}, Sambarino \cite[Lemma 2.4]{Quantitative}}]\label{prop: pressureZero} Let $\phi=(\phi_t:X\to X)$ be a H\"older continuous flow on a compact metric space $X$ and $r: X \to \mathbb{R}_{>0}$ be a H{\"o}lder continuous function. Then a real number $h$ satisfies $$\textnormal{\textbf{P}}(\phi, -hr)=0$$ \noindent if and only if $h=h_{\phi^{r}}$. \end{prop} \subsection{Symbolic coding and metric Anosov flows}\label{subsec:coding and metric Anosov} We now specify an important class of topological flows for which pressure, equilibrium states and Liv\v{s}ic cohomology behave particularly well.
The property we are interested in is the existence of a \textit{strong Markov coding} for the flow. Informally speaking, a Markov coding provides a way of modelling the flow by a suspension flow over a shift space. This allows us to obtain many properties of the dynamics of the flow by studying the corresponding properties at the symbolic level. The reader can find a general introduction to how to model flows by Markov codings and suspension flows in Bowen \cite{Bowen-Symbolic} and Parry-Pollicott \cite[Appendix III]{ZetaFunction_Pollicott}. We give a cursory introduction to suspension flows and Markov codings here. Suppose $(\Sigma, \sigma_A)$ is a two-sided shift of finite type. Given a ``roof function'' $r: \Sigma \to \mathbb{R}_{>0}$, the \emph{suspension flow} of $(\Sigma, \sigma_A)$ under $r$ is the quotient space $$\Sigma_r:=\{(x,t)\in \Sigma\times\mathbb{R}: 0\leq t\leq r(x)\}/(x,r(x))\sim (\sigma_A(x), 0)$$ \noindent equipped with the natural flow $\sigma^r_{A,s}(x,t):=(x,t+s)$. \begin{dfn} A \emph{Markov coding} for the flow $\phi=(\phi_t:X \to X)$ is a 4-tuple $(\Sigma, \sigma_A, \pi, r)$ where $(\Sigma,\sigma_A)$ is an irreducible two-sided subshift of finite type, the function $r:\Sigma \to \mathbb{R}_{>0}$ and the map $\pi:\Sigma_r \to X$ are continuous, and the following conditions hold: \begin{itemize} \item The map $\pi$ is surjective and bounded-to-one. \item The map $\pi$ is injective on a set of full measure (for any ergodic measure of full support) and on a dense residual set. \item For all $t\in\mathbb{R}$ one has $\pi\circ \sigma^r_{A,t}=\phi_t \circ \pi$. \end{itemize} If both $\pi$ and $r$ are H{\"o}lder continuous, we call the Markov coding a \emph{strong Markov coding}. \end{dfn} The proof of the following proposition can be found in Sambarino \cite[Lemma 2.9]{Quantitative}. \begin{prop}\label{prop: coding for reparametrization} Let $\phi=(\phi_t:X\to X)$ be a topological flow admitting a strong Markov coding.
Then every flow in the H\"older orbit equivalence class of $\phi$ admits a strong Markov coding. \end{prop} Thanks to the previous proposition, if $\phi$ admits a strong Markov coding, then every element $\psi\in\textnormal{HR}(\phi)$ also does. This has deep consequences for the dynamics of $\psi$ that we will discuss in this section. However, before doing so we discuss an important class of topological flows that admit Markov codings, namely, \textit{metric Anosov} flows. This class is important to us because, as proved by Bridgeman-Canary-Labourie-Sambarino \cite[Sections 4 and 5]{BCLS}, every Anosov representation induces a \textit{geodesic flow} which is topologically transitive and metric Anosov. Among flows of class C$^1$ on compact manifolds, \textit{Anosov} flows provide an important class exhibiting many interesting dynamical properties. They were introduced by Anosov \cite{Anosov} in his study of the geodesic flow of closed negatively curved manifolds. Anosov flows were generalized to \textit{Axiom A} flows by Smale \cite{SmaleDifferentiableDynamicalSystems}; we do not give full definitions here and refer the reader to Smale's original paper. An example of an Axiom A flow which is not Anosov is the geodesic flow of a non-compact convex co-compact real hyperbolic manifold: the restriction of the flow to the set of vectors tangent to geodesics in the convex hull of the limit set shares many dynamical properties with Anosov flows, even though this set is not a manifold. In some contexts (and particularly in the setting we are focusing on), C$^1$-regularity is too much to expect; \textit{metric Anosov} flows form a class that further generalizes Axiom A flows to the topological setting and still shares many desirable properties with them.
They were introduced by Pollicott \cite{PolMetricAnosov}, who also showed that these flows admit a Markov coding, generalizing the corresponding results for Axiom A flows obtained previously by Bowen \cite{Bowen-Symbolic}. Let $\phi=(\phi_t: X\to X)$ be a continuous flow on a compact metric space $X$. For $\varepsilon>0$, we define the $\varepsilon$-\textit{local stable set} of $x$ by $$W_\varepsilon^s(x):=\{y\in X: d(\phi_t x, \phi_t y)\leq \varepsilon, \forall t\geq 0 \textnormal{ and } d(\phi_t x, \phi_t y) \to 0 \textnormal{ as } t\to \infty\}$$ \noindent and the $\varepsilon$-\textit{local unstable set} of $x$ by $$W_\varepsilon^u(x):=\{y\in X: d(\phi_{-t} x, \phi_{-t} y)\leq \varepsilon, \forall t\geq 0\textnormal{ and } d(\phi_{-t} x, \phi_{-t} y) \to 0 \textnormal{ as } t\to \infty\}.$$ \begin{dfn}\label{dfn,metricAnosov} A topological flow $\phi=(\phi_t: X\to X)$ is \textit{metric Anosov} if the following conditions hold: \begin{enumerate} \item There exist positive constants $C, \lambda, \varepsilon$ such that $$d(\phi_t(x), \phi_t(y))\leq C e^{-\lambda t}d(x,y) \text{ for all $y\in W^{s}_{\varepsilon}(x)$ and $t\geq 0$},$$ \noindent and $$d(\phi_{-t}(x), \phi_{-t}(y))\leq C e^{-\lambda t}d(x,y) \text{ for all $y\in W^{u}_{\varepsilon}(x)$ and $t\geq 0$}.$$ \item There exist $\delta>0$ and a continuous function $v$ on the set $$X_\delta:=\{(x,y)\in X\times X: d(x,y)\leq \delta\}$$ such that for every $(x,y)\in X_\delta$, the number $v=v(x,y)$ is the unique value for which $W_{\varepsilon}^u(\phi_v x)\cap W_{\varepsilon}^s(y)$ is nonempty; this intersection consists of a single point, denoted by $\langle x, y \rangle$. \end{enumerate} \end{dfn} \begin{teo}[Pollicott \cite{PolMetricAnosov}]\label{teo: pollicott coding metric anosov} A topologically transitive metric Anosov flow on a compact metric space admits a Markov coding. \end{teo} For the rest of the section, we fix a topologically transitive flow $\phi=(\phi_t:X\to X)$ admitting a strong Markov coding.
In this case, the entropy of $\phi$ agrees with the exponential growth rate of periodic orbits: \begin{equation}\label{eq: entropy} h_\phi=\displaystyle\lim_{t\to\infty}\frac{1}{t}\log\#\{a\in\mathcal{O}: p_\phi(a)\leq t\}. \end{equation} \noindent Moreover, this number is positive and finite (see Bowen \cite{PeriodicOrbits-Bowen} and Pollicott \cite{PolMetricAnosov}). Another useful consequence of the existence of a Markov coding is the density of the set of periodic orbit measures $\{\delta_\phi(a): a\in\mathcal{O}\}$ in $\mathscr{E}(\phi)$. Combined with the Ergodic Decomposition (cf. Subsection \ref{subsec: measures and pressure}), it provides a nice way of relating invariant measures and periodic orbits. \begin{teo}\label{teo: periodic orbits dense} Let $\phi=(\phi_t:X\to X)$ be a topologically transitive flow admitting a strong Markov coding. Then for every measure $m\in\mathscr{E}(\phi)$ there is a sequence of periodic orbits $\{a_j\}\subset\mathcal{O}$ such that, as $j\to\infty$, $$\delta_\phi(a_j)\to m$$ \noindent in the weak-$\star$ topology. \end{teo} \begin{proof} This is well known in hyperbolic dynamics (see e.g. Sigmund \cite[Theorem 1]{Sigmund_InvariantMeasures} when $\phi$ is Axiom A). We comment briefly on the ingredients of the proof, since we have not found an explicit reference in our specific setting. By Pollicott \cite[p.195]{PolMetricAnosov} there is a $\sigma_A$-invariant ergodic measure $\mu$ on $\Sigma$ so that $m=\pi_*(\widehat{\mu})$, where $\widehat{\mu}$ is the probability measure on $\Sigma_r$ induced by the measure on $\Sigma\times\mathbb{R}$ given by $$\frac{\mu\otimes \mathrm{d} t}{\int r\mathrm{d} \mu}.$$ \noindent Hence, it suffices to prove that $\mu$ can be approximated by periodic orbits of $\sigma_A$. This is a consequence of two dynamical properties of $\sigma_A$, called \textit{expansiveness} and the \textit{pseudo-orbit tracing property} (see e.g. \cite[Definition 3.2.11]{KH} and \cite[Theorem 1]{Walters}).
Indeed, granted these properties, Sigmund's argument \cite[Theorem 1]{SigmundHomeos} can be carried out in the present framework. \end{proof} With respect to equilibrium states, we have the following theorem. \begin{teo}[Bowen-Ruelle \cite{BowenRuelle}, Pollicott \cite{PolMetricAnosov}, Parry-Pollicott {\cite[Proposition 3.6]{ZetaFunction_Pollicott}}]\label{teo: CohomologousEquilibrium} Let $\phi=(\phi_t:X\to X)$ be a topologically transitive flow admitting a strong Markov coding. For every H{\"o}lder continuous function $f:X\to\mathbb{R}$, there exists a unique equilibrium state $m_f(\phi)$ for $f$ with respect to $\phi$. Furthermore, the equilibrium state is ergodic. Finally, if $g:X\to\mathbb{R}$ is H\"older continuous and $m_f(\phi)=m_g(\phi)$, then there exists a constant function $c$ so that $f-g\sim_\phi c$. \end{teo} The equilibrium state for $f\equiv 0$ is called the \textit{Bowen-Margulis measure} of $\phi$, and denoted by $m^{\textnormal{BM}}(\phi)$. For Anosov flows, the existence of this measure was proved by Margulis in his PhD Thesis \cite{MarThesis}. Uniqueness was originally conjectured by Bowen \cite{Bowen-Symbolic}, and this justifies the name. In a more geometric context, e.g. for the geodesic flow of a convex cocompact real hyperbolic manifold, Sullivan \cite{SulDensity} gave a description of this measure using Patterson-Sullivan theory. Because of this, the measure of maximal entropy in those contexts is sometimes called the \textit{Bowen-Margulis-Sullivan measure}. If $f\sim_\phi g$ then the integrals of $f$ and $g$ over every periodic orbit coincide. In the present setting we also have a converse statement. \begin{teo} [Liv\v{s}ic {\cite{Livsic}}] \label{teo: livsic} Let $\phi=(\phi_t:X\to X)$ be a topologically transitive flow admitting a strong Markov coding.
Suppose that $f$ and $g$ are two H{\"o}lder continuous functions such that for all $a\in\mathcal{O}$ and all $x\in a$ one has $$\displaystyle\int_0^{p_\phi(a)}f(\phi_t(x))\mathrm{d}t=\displaystyle\int_0^{p_\phi(a)}g(\phi_t(x))\mathrm{d}t.$$ \noindent Then $f\sim_\phi g$. \end{teo} A proof of Liv\v{s}ic's Theorem \ref{teo: livsic} can be found in \cite[Theorem 4.3]{Walkden}: even though it is stated for C$^1$ hyperbolic flows, the proof only uses the existence of a Markov partition. The final property we will need is the convexity of the pressure function, and a characterization of its first derivative in terms of equilibrium states. Let $M$ be a C$^k$ (resp. smooth, analytic) manifold. A family of functions $\{f_s: X\to \mathbb{R}\}_{s\in M}$ is said to be a \emph{C$^k$ (resp. smooth, analytic) family} if for all $x\in X$, the function $s\mapsto f_s(x)$ is C$^k$ (resp. smooth, analytic). \begin{prop}[Parry-Pollicott {\cite[Propositions 4.10 and 4.12]{ZetaFunction_Pollicott}}]\label{prop: firstDevPressure} Let $\phi=(\phi_t:X\to X)$ be a topologically transitive flow admitting a strong Markov coding. Then: \begin{enumerate} \item For every pair of H\"older continuous functions $f,g:X\to\mathbb{R}$, the function $$s\mapsto\mathbf{P}(\phi,f+sg)$$ \noindent is convex. Furthermore, it is strictly convex if $g$ is not Liv\v{s}ic cohomologous (w.r.t. $\phi$) to a constant function. \item Let $\{f_s\}_{s\in(-1,1)}$ be a \textnormal{C}$^k$ (resp. smooth, analytic) family of $\upsilon$-H{\"o}lder continuous functions on $X$. Then $s\mapsto \mathbf{P}(\phi,f_s)$ is a \textnormal{C}$^k$ (resp. smooth, analytic) function, and $$\frac{\mathrm{d} \textnormal{\textbf{P}}(\phi,f_s)}{\mathrm{d} s}\bigg|_{s=0}= \int_X \left(\frac{\mathrm{d} f_s}{\mathrm{d} s}\bigg|_{s=0}\right) \mathrm{d}m_{f_0},$$ \noindent where $m_{f_0}=m_{f_0}(\phi)$ is the equilibrium state of $f_0$ (w.r.t. $\phi$).
\end{enumerate} \end{prop} \subsection{Intersection and renormalized intersection}\label{subsec: inter and renormalized intersection} Intersection and renormalized intersection provide a way of ``measuring the difference'' between two points in $\textnormal{HR}(\phi)$. The notion of intersection was introduced by Thurston in the context of Teichm\"uller space (see Wolpert \cite{Wolpert}), and then reinterpreted by Bonahon \cite{BonahonCurrents} (see also Appendix \ref{appendix: currents}). Burger \cite{BurgerManhattan} generalized this notion to pairs of convex cocompact representations into Lie groups of real rank equal to one, and noticed a rigid inequality for this number after renormalizing by entropy. Bridgeman-Canary-Labourie-Sambarino \cite[Section 3.4]{BCLS} further generalized this (renormalized) intersection in the abstract dynamical setting we are focusing on. We will use these notions to study the asymmetric distance and Finsler norm on $\textnormal{HR}(\phi)$ in Section \ref{sec: asymmetric metric and finsler norm for flows}. \begin{dfn} Let $\psi,\widehat{\psi}\in\textnormal{HR}(\phi)$. For $m\in\mathscr{P}(\psi)$, the $m$-\textit{intersection number} between $\psi$ and $\widehat{\psi}$ is defined by $$\textbf{I}_m(\psi,\widehat{\psi}):=\displaystyle\int_{X}r_{\psi,\widehat{\psi}}\mathrm{d} m.$$ \end{dfn} Recall that $\phi$ is a topologically transitive flow admitting a strong Markov coding. Intersection numbers and ratios of periods are linked as follows. \begin{prop} \label{prop:sup of periods and measures} For every $\psi,\widehat{\psi}\in\textnormal{HR}(\phi)$ the following equality holds $$ \displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)}=\displaystyle\sup_{m\in\mathscr{P}(\psi)} \mathbf{I}_m(\psi,\widehat{\psi}).$$ \end{prop} \begin{proof} The proof closely follows Guillarmou-Knieper-Lefeuvre \cite[Lemma 4.10]{GeodesicStretchPressureMetric}. We include it for completeness.
First of all we observe that \begin{equation}\label{eq: sup intersections on ergodic} \displaystyle\sup_{m\in\mathscr{P}(\psi)}\textbf{I}_m(\psi,\widehat{\psi})=\displaystyle\sup_{m\in\mathscr{E}(\psi)}\textbf{I}_m(\psi,\widehat{\psi}). \end{equation} \noindent Indeed, let $m_0\in\mathscr{P}(\psi)$ be such that $$\displaystyle\sup_{m\in\mathscr{P}(\psi)}\textbf{I}_m(\psi,\widehat{\psi})=\mathbf{I}_{m_0}(\psi,\widehat{\psi}).$$ \noindent By Ergodic Decomposition (c.f. Subsection \ref{subsec: measures and pressure}) we have \begin{align*} \mathbf{I}_{m_0}(\psi,\widehat{\psi}) &= \displaystyle\int_{\mathscr{E}(\psi)}\left(\displaystyle\int_X r_{\psi,\widehat{\psi}}(x)\mathrm{d}\mu(x)\right)\mathrm{d}\tau_{m_0}(\mu)\\ &\leq \displaystyle\sup_{m\in\mathscr{E}(\psi)}\textbf{I}_m(\psi,\widehat{\psi}) \times \displaystyle\int_{\mathscr{E}(\psi)}\mathrm{d}\tau_{m_0}(\mu)\\ &= \displaystyle\sup_{m\in\mathscr{E}(\psi)}\textbf{I}_m(\psi,\widehat{\psi}). \end{align*} \noindent The reverse inequality being trivial, this proves Equality (\ref{eq: sup intersections on ergodic}). We now prove $$ \displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)}\leq \displaystyle\sup_{m\in\mathscr{E}(\psi)} \textbf{I}_m(\psi,\widehat{\psi}).$$ \noindent To do that, take a sequence $a_j\in\mathcal{O}$ such that $$ \displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)}= \displaystyle\lim_{j\rightarrow\infty}\frac{p_{\widehat{\psi}}(a_j)}{p_{\psi}(a_j)}.$$ \noindent Since $\mathscr{E}(\psi)$ is compact we may assume $\delta_{\psi}(a_j) \rightarrow m$ for some $m\in\mathscr{E}(\psi)$. 
By Equation (\ref{eq: integral of reparametrizing over delta in periodic orbit}) we have $$\displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)} =\displaystyle\lim_{j\rightarrow\infty}\displaystyle\int_{X} r_{\psi,\widehat{\psi}}\mathrm{d}\delta_{\psi}(a_j)=\displaystyle\int_{X} r_{\psi,\widehat{\psi}}\mathrm{d} m\leq \displaystyle\sup_{m\in\mathscr{E}(\psi)} \textbf{I}_m(\psi,\widehat{\psi}).$$ To finish the proof, it remains to show $$ \displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)}\geq \displaystyle\sup_{m\in\mathscr{E}(\psi)} \textbf{I}_m(\psi,\widehat{\psi}).$$ \noindent By Theorem \ref{teo: periodic orbits dense}, given $m\in\mathscr{E}(\psi)$ we may find a sequence $a_j\in\mathcal{O}$ such that $\delta_{\psi}(a_j)\rightarrow m $. Proceeding as above, we have $$ \displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)} \geq \displaystyle\lim_{j\to\infty}\displaystyle\int_{X} r_{\psi,\widehat{\psi}}\mathrm{d}\delta_{\psi}(a_j)=\displaystyle\int_{X} r_{\psi,\widehat{\psi}}\mathrm{d} m =\textbf{I}_m(\psi,\widehat{\psi}).$$ \noindent The result follows by taking the supremum over all $m\in\mathscr{E}(\psi)$. \end{proof} The supremum $$\displaystyle\sup_{m\in\mathscr{P}(\psi)} \mathbf{I}_m(\psi,\widehat{\psi})=\displaystyle\sup_{m\in\mathscr{P}(\psi)} \int r_{\psi,\widehat{\psi}}\mathrm{d} m$$ \noindent is a well studied quantity in dynamics. Indeed, this number and the measure(s) attaining the $\sup$ are the subject of study of \textit{Ergodic Optimization}. A general belief in this area is that ``typically'' among sufficiently regular functions, the maximizing measure is unique and supported on a periodic orbit. See Jenkinson \cite{jenkinson} and references therein for a nice survey. However, for the geometric applications we have in mind these types of generic results are not enough.
In the specific case of reparametrizing functions arising from points in the Teichm\"uller space of a closed surface, Thurston gives a description of the measures realizing the $\sup$ above: these are always (partially) supported on a topological lamination on the surface, and this lamination is typically a simple closed geodesic (see \cite[p.4 and Section 10]{ThurstonStretch} for details). The function $m\mapsto\textbf{I}_m(\psi,\widehat{\psi})$ is continuous with respect to the weak-$\star$ topology on $\mathscr{P}(\psi)$. Since $\mathscr{P}(\psi)$ is compact, Proposition \ref{prop:sup of periods and measures} implies \begin{equation}\label{eq: sup of periods is finite} \displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)}<\infty. \end{equation} \begin{rem}\label{rem: domination flows} Thanks to this finiteness, one may try to use the $\log$ of the quantity in (\ref{eq: sup of periods is finite}) directly to produce a metric on $\textnormal{HR}(\phi)$. However, the following problem arises. For a constant function $r=c>1$, we have $$\log\left(\displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\phi}(a)}{p_{\phi^r}(a)}\right)=\log\left(\frac{1}{c}\right)<0.$$ \noindent Hence, the quantity in Equation \eqref{eq: sup of periods is finite} cannot define a distance on $\textnormal{HR}(\phi)$. This problem also arises in the geometric setting we will focus on (cf. Remark \ref{rem: AvoidDomination}). \end{rem} A way of resolving the above issue, natural from the viewpoint of dynamical systems, is to normalize by the entropy. Together with Proposition \ref{prop:sup of periods and measures}, this motivates the following definition. \begin{dfn} Let $\psi,\widehat{\psi}\in\textnormal{HR}(\phi)$ and $m\in\mathscr{P}(\psi)$.
The $m$-\textit{renormalized intersection} between $\psi$ and $\widehat{\psi}$ is $$\textbf{J}_m(\psi,\widehat{\psi}):=\frac{h_{\widehat{\psi}}}{h_{\psi}}\textbf{I}_m(\psi,\widehat{\psi}).$$ \end{dfn} Considering the renormalized intersection fixes the above issue: \begin{prop}[Bridgeman-Canary-Labourie-Sambarino {\cite[Proposition 3.8]{BCLS}}]\label{prop: BCLS renorm int rigidity} For every $\psi,\widehat{\psi}\in\textnormal{HR}(\phi)$ one has $$\mathbf{J}_{m^{\textnormal{BM}}(\psi)}(\psi,\widehat{\psi})\geq 1.$$ \noindent Moreover, equality holds if and only if $(h_{\widehat{\psi}}r_{\phi,\widehat{\psi}})\sim_\phi (h_{\psi}r_{\phi,\psi})$. \end{prop} \begin{proof} By Equation (\ref{eq: rep funtion fro psi to widehatpsi}) we have $$\mathbf{J}_{m^{\textnormal{BM}}(\psi)}(\psi,\widehat{\psi})= \frac{h_{\widehat{\psi}}}{h_\psi} \displaystyle\int \left(\frac{r_{\phi,\widehat{\psi}}}{r_{\phi,\psi}}\right)\mathrm{d} m^{\textnormal{BM}}(\psi).$$ \noindent Now the statement becomes precisely that of \cite[Proposition 3.8]{BCLS}. \end{proof} \section{Asymmetric metric and Finsler norm for flows}\label{sec: asymmetric metric and finsler norm for flows} As always, we assume that $\phi$ is a topologically transitive flow admitting a strong Markov coding. We want to use the formula $$\log\left(\displaystyle\sup_{a\in\mathcal{O}}\frac{h_{\widehat{\psi}}}{h_\psi}\frac{p_{\widehat{\psi}}(a)}{ p_\psi(a)}\right)=\log\left(\frac{h_{\widehat{\psi}}}{h_\psi}\displaystyle\sup_{a\in\mathcal{O}}\frac{p_{\widehat{\psi}}(a)}{ p_\psi(a)}\right)$$ \noindent to define a distance on a suitable quotient of $\textnormal{HR}(\phi)$. We begin by understanding which pairs are at distance zero: \begin{lema}\label{lem: equivalence rel in hr} For $\psi$ and $\widehat{\psi}$ in $\textnormal{HR}(\phi)$ the following are equivalent: \begin{enumerate} \item For every $a\in\mathcal{O}$, $h_{\widehat{\psi}}p_{\widehat{\psi}}(a)=h_\psi p_\psi(a)$.
\item $(h_{\widehat{\psi}}r_{\phi,\widehat{\psi}}) \sim_\phi (h_\psi r_{\phi,\psi})$. \item $r_{\psi,\widehat{\psi}}\sim_\psi h_\psi/h_{\widehat{\psi}}$. \item There exists a constant function $c$ so that $r_{\psi,\widehat{\psi}}\sim_\psi c$. \end{enumerate} \end{lema} \begin{proof} Since $\psi$ and $\widehat{\psi}$ are topologically transitive and admit a strong Markov coding (cf. Proposition \ref{prop: coding for reparametrization}), all results from Section \ref{sec: thermodynamics} apply. In particular, the equivalence between (3) and (4) follows from Equation (\ref{eq: entropy}). The implications (2)$\Rightarrow$(1) and (3)$\Rightarrow$(1) are straightforward. The implications (1)$\Rightarrow$(2) and (1)$\Rightarrow$(3) hold thanks to Liv\v{s}ic's Theorem \ref{teo: livsic} (applied to $\phi$ and $\psi$ respectively). \end{proof} We say that $\psi$ and $\widehat{\psi}$ in $\textnormal{HR}(\phi)$ are \textit{projectively equivalent} (and write $\psi\sim\widehat{\psi}$) if any of the equivalent conditions of Lemma \ref{lem: equivalence rel in hr} holds. We denote by $\mathbb{P}\textnormal{HR}(\phi)$ the quotient space under this relation, and by $[\psi]\in\mathbb{P}\textnormal{HR}(\phi)$ the equivalence class of $\psi$. \subsection{Asymmetric metric on $\mathbb{P}\textnormal{HR}(\phi)$}\label{subsec: asymm metric flows} Define $d_{\textnormal{Th}}: \mathbb{P}\textnormal{HR}(\phi)\times\mathbb{P}\textnormal{HR}(\phi) \to \mathbb{R}$ by $$d_{\textnormal{Th}}([\psi],[\widehat{\psi}]):=\log\left(\displaystyle\sup_{a\in\mathcal{O}}\frac{h_{\widehat{\psi}}}{h_{\psi}} \frac{p_{\widehat{\psi}}(a)}{p_{\psi}(a)}\right),$$ \noindent where $\psi$ and $\widehat{\psi}$ are representatives of $[\psi]$ and $[\widehat{\psi}]$ respectively. Lemma \ref{lem: equivalence rel in hr} guarantees that $d_{\textnormal{Th}}$ is well-defined, as it does not depend on the choice of these representatives.
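\begin{rem} As a simple sanity check, let us revisit the constant reparametrizations of Remark \ref{rem: domination flows}. For a constant function $r\equiv c>0$, the periods scale as $p_{\phi^r}(a)=c\,p_{\phi}(a)$, while by Abramov's formula for the entropy of a reparametrized flow we have $h_{\phi^r}=h_{\phi}/c$. Hence, for every $a\in\mathcal{O}$, $$h_{\phi^r}p_{\phi^r}(a)=\frac{h_{\phi}}{c}\cdot c\,p_{\phi}(a)=h_{\phi}p_{\phi}(a),$$ \noindent so $\phi^r\sim\phi$ by Lemma \ref{lem: equivalence rel in hr} and $d_{\textnormal{Th}}([\phi],[\phi^r])=0$. In other words, passing to the quotient $\mathbb{P}\textnormal{HR}(\phi)$ absorbs exactly the rescalings responsible for the negative values observed in Remark \ref{rem: domination flows}. \end{rem}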
\begin{teo}\label{teo: asymmetric distance flows} The function $d_{\textnormal{Th}}$ defines a (possibly asymmetric) distance on $\mathbb{P}\textnormal{HR}(\phi)$. \end{teo} By ``possibly asymmetric'' we mean that there is no reason to expect that the equality $d_{\textnormal{Th}}([\psi],[\widehat{\psi}])=d_{\textnormal{Th}}([\widehat{\psi}],[\psi])$ holds for all pairs $[\psi],[\widehat{\psi}]\in\mathbb{P}\textnormal{HR}(\phi)$. In fact, in some specific situations it is possible to show that $d_{\textnormal{Th}}(\cdot,\cdot)$ is indeed asymmetric (cf. Remark \ref{rem: asymmetric}). \begin{proof}[Proof of Theorem \ref{teo: asymmetric distance flows}] Let $[\psi],[\widehat{\psi}]\in\mathbb{P}\textnormal{HR}(\phi)$ and pick representatives $\psi,\widehat{\psi}\in\textnormal{HR}(\phi)$. By Proposition \ref{prop:sup of periods and measures} we have $$d_{\textnormal{Th}}([\psi],[\widehat{\psi}])=\log\left(\displaystyle\sup_{m\in \mathscr{P}(\psi)} \textbf{J}_m(\psi,\widehat{\psi})\right).$$ \noindent Proposition \ref{prop: BCLS renorm int rigidity} implies $$\displaystyle\sup_{m\in \mathscr{P}(\psi)} \textbf{J}_m(\psi,\widehat{\psi})\geq \textbf{J}_{m^{\textnormal{BM}}(\psi)}(\psi,\widehat{\psi})\geq 1,$$ \noindent and therefore $d_{\textnormal{Th}}([\psi],[\widehat{\psi}])\geq 0$. Moreover, if $d_{\textnormal{Th}}([\psi],[\widehat{\psi}])= 0$, then Proposition \ref{prop: BCLS renorm int rigidity} implies $(h_{\widehat{\psi}}r_{\phi,\widehat{\psi}}) \sim_\phi (h_{\psi}r_{\phi,\psi})$, which by Lemma \ref{lem: equivalence rel in hr} means $[\psi]=[\widehat{\psi}]$. Since the triangle inequality for $d_{\textnormal{Th}}(\cdot,\cdot)$ is easily verified, the proof is complete.
\end{proof} \begin{rem}\label{rem: tholozan symmetric} When $\phi$ is a (not necessarily H\"older) continuous parametrization of the geodesic flow of a closed orientable surface of genus $g\geq 2$, Tholozan \cite{ThoHighest} defined a symmetric distance on $\mathbb{P}\textnormal{HR}(\phi)$ which has a similar flavor to our $d_{\textnormal{Th}}(\cdot,\cdot)$. More precisely, he works in the space of (not necessarily H\"older) continuous reparametrizations of $\phi$ and considers an appropriate equivalence relation on this space, which restricts to $\sim$ in the H\"older setting. Tholozan proves that the quotient space under this equivalence relation sits as an open, weakly proper, convex domain in the projective space of some Banach space. Hence, it carries a natural \textit{Hilbert metric} (see \cite[Proposition 1.29]{ThoHighest} for details). In \cite[Theorem 1.31]{ThoHighest}, he gives an expression for this Hilbert metric which is a symmetrized version of $d_{\textnormal{Th}}(\cdot,\cdot)$. \end{rem} \subsection{Finsler norm}\label{subsection, FinslerNorm} We now define a Finsler norm $\Vert\cdot\Vert_{\textnormal{Th}}$ on the ``tangent space'' $T_{[\psi]}\mathbb{P}\textnormal{HR}(\phi)$ of every $[\psi]\in\mathbb{P}\textnormal{HR}(\phi)$, and provide a link with the asymmetric distance $d_{\textnormal{Th}}(\cdot,\cdot)$ (Proposition \ref{prop: FinslerNorm} below). Recall that a \textit{Finsler norm} on a vector space $V$ is a function $\Vert\cdot\Vert:V\to\mathbb{R}$ such that for all $v,w\in V$ and all $a\geq 0$ one has: \begin{itemize} \item $\Vert v\Vert \geq 0$, with equality if and only if $v=0$, \item $\Vert a v\Vert=a\Vert v\Vert$, and \item $\Vert v+w\Vert\leq \Vert v\Vert+\Vert w\Vert$. \end{itemize} Before starting, we need to make sense of the ``tangent space'' $T_{[\psi]}\mathbb{P}\textnormal{HR}(\phi)$ (cf. also \cite[Subsection 3.5.2]{BCLS}).
To do this, we express our space of reparametrizations as a level set of the pressure function, and apply Proposition \ref{prop: firstDevPressure} and the Implicit Function Theorem in Banach spaces \cite{ImplicitFunctiontheorem}. We need to be careful, though, because the space of H\"older continuous functions on $X$ is not closed in the topology of uniform convergence. To deal with this issue, we fix a H\"older exponent $\upsilon$ and work within the space $\mathcal{H}^\upsilon(X)$ of $\upsilon$-H\"older functions. In the geometric applications we have in mind, namely for spaces of Anosov representations, this is not a strong assumption, as discussed in \cite[Section 6]{BCLS} (see also Subsection \ref{subsec: finsler for anosov reps} below). Fix $\upsilon>0$ and endow $\mathcal{H}^\upsilon(X)$ with the Banach norm $$\Vert f\Vert_\upsilon:= \Vert f\Vert_\infty+\sup_{x\neq y}\frac{\vert f(x)-f(y)\vert}{d(x,y)^\upsilon},$$ \noindent where $\Vert\cdot\Vert_\infty$ denotes the uniform norm. Let $\mathcal{B}^\upsilon(X)\subset\mathcal{H}^\upsilon(X)$ be the space of $\phi$-Liv\v{s}ic \textit{coboundaries}, that is, the set of $\upsilon$-H\"older functions on $X$ which are $\phi$-Liv\v{s}ic cohomologous to zero. By Liv\v{s}ic's Theorem \ref{teo: livsic}, $\mathcal{B}^\upsilon(X)$ is a closed (vector) subspace of $\mathcal{H}^\upsilon(X)$. We endow the quotient space $\mathcal{L}^\upsilon(X):=\mathcal{H}^\upsilon(X)/\mathcal{B}^\upsilon(X)$ of Liv\v{s}ic cohomology classes in $\mathcal{H}^\upsilon(X)$ with the norm $$ [f]_{\phi} \mapsto\inf_{u\in [f]_{\phi}} \Vert u \Vert_{\upsilon},$$ \noindent which by abuse of notation will also be denoted by $\Vert \cdot\Vert_\upsilon$. Note that $(\mathcal{H}^\upsilon(X),\Vert\cdot\Vert_\upsilon)$ is a Banach space, and hence so is $(\mathcal{L}^\upsilon(X),\Vert\cdot\Vert_\upsilon)$, since $\mathcal{B}^\upsilon(X)$ is closed.
Let $\textnormal{HR}^\upsilon(\phi)$ be the set of reparametrizations $\psi\in\textnormal{HR}(\phi)$ so that $r_{\phi,\psi}\in\mathcal{H}^\upsilon(X)$, and $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ be its projection to $\mathbb{P}\textnormal{HR}(\phi)$. Let $[\psi]\in\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ be any point and take a representative $\psi\in\textnormal{HR}^\upsilon(\phi)$ satisfying $h_\psi=1$. By Proposition \ref{prop: pressureZero} we have $$\mathbf{P}(\phi,-r_{\phi,\psi})=0.$$ \noindent Moreover, if $\widehat{\psi}\in[\psi]$ is another representative satisfying $h_{\widehat{\psi}}=1$, Lemma \ref{lem: equivalence rel in hr} states that $r_{\phi,\widehat{\psi}}\sim_\phi r_{\phi,\psi}$. We then have an injective map from $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ to the space $$\mathcal{P}^\upsilon(X):=\left\{[r]_\phi\in\mathcal{L}^\upsilon(X): \mathbf{P}(\phi,-r)=0\right\}.$$ \noindent Hence, $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ identifies with the open subset of $\mathcal{P}^\upsilon(X)$ consisting of Liv\v{s}ic cohomology classes of pressure zero, strictly positive, $\upsilon$-H\"older continuous functions on $X$. In view of this discussion, throughout this section all representatives $\psi$ of points $[\psi]$ in $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ are assumed to satisfy $h_\psi=1$. From now on we simply denote $[r]_\phi$ by $[r]$, omitting the underlying flow $\phi$. By Proposition \ref{prop: firstDevPressure}, for any positive $g\in\mathcal{H}^\upsilon(X)$ one has $$\mathrm{d}_{[r]}\mathbf{P}(\phi,\cdot)([g])>0.$$ \noindent That same proposition and the Implicit Function Theorem in Banach spaces imply that the tangent space to $\mathcal{P}^\upsilon(X)$ at $[r]$ is given by $$T_{[r]}\mathcal{P}^\upsilon(X)=\left\{[g]\in\mathcal{L}^\upsilon(X): \int_X g \mathrm{d}m_{-r}=0 \right\},$$ \noindent where $m_{-r}=m_{-r} (\phi)$ denotes the equilibrium state of $-r$ (w.r.t. $\phi$). 
Since $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ sits as an open subset of $\mathcal{P}^\upsilon(X)$, it is natural to define the \textit{tangent space} to $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ at $[\psi]$ by $$T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi):=T_{[r_{\phi,\psi}]}\mathcal{P}^\upsilon(X).$$ We are now ready to define our Finsler norm. \begin{dfn} \label{def:Finslernorm} Let $[g]$ be a vector in $T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$. We define $$\Vert [g] \Vert_{\textnormal{Th}}:=\displaystyle\sup_{m\in\mathscr{P}(\phi)}\frac{\int g\mathrm{d} m}{\int r_{\phi,\psi}\mathrm{d} m}.$$ \end{dfn} \noindent Note that this is well-defined, i.e. it does not depend on the choice of the representatives $g$ and $r_{\phi,\psi}$ in the respective $\phi$-Liv\v{s}ic cohomology classes (cf. Remark \ref{LivsicCommon}). Furthermore, by Equation (\ref{eq: iso between ppsi and pwpsi}) we have the following more succinct expression: \begin{equation}\label{eq: finsler succint} \Vert[g]\Vert_{\textnormal{Th}}=\sup_{m\in\mathscr{P}(\psi)}\int \left(\frac{g}{r_{\phi,\psi}}\right)\mathrm{d} m. \end{equation} By definition of the tangent space, $\Vert[g]\Vert_{\textnormal{Th}}\geq 0$. Moreover, $(\mathbb{R}_{>0})$-homogeneity and the triangle inequality are easily verified. Hence, the following shows that $\Vert\cdot\Vert_{\textnormal{Th}}$ is a Finsler norm. \begin{lema}\label{lem: norm is non degenerate} Let $[g]\in T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ be such that $\Vert [g]\Vert_{\textnormal{Th}}=0$. Then $[g]=0$. \end{lema} \begin{proof} To prove the lemma, it suffices to show that $g$ is Liv\v{s}ic cohomologous (w.r.t. $\phi$) to a constant function $c$. Indeed, if this is the case, then by Remark \ref{LivsicCommon} we have $$c=\displaystyle\int c\mathrm{d} m_{-r_{\phi,\psi}}=\displaystyle\int g\mathrm{d} m_{-r_{\phi,\psi}}=0.$$ \noindent Hence $[g]=0$ as desired.
Let us assume by contradiction that $g$ is not Liv\v{s}ic cohomologous to a constant. By Proposition \ref{prop: firstDevPressure} the function $s\mapsto\mathbf{P}(\phi,-r_{\phi,\psi}+sg)$ is then strictly convex and $$\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}\mathbf{P}(\phi,-r_{\phi,\psi}+sg)=\displaystyle\int g\mathrm{d} m_{-r_{\phi,\psi}}=0.$$ \noindent Strict convexity then implies $$\mathbf{P}(\phi,-r_{\phi,\psi}+g)>\mathbf{P}(\phi,-r_{\phi,\psi})=0.$$ On the other hand, we show that $\Vert[g]\Vert_{\textnormal{Th}}=0$ implies $\mathbf{P}(\phi,-r_{\phi,\psi}+g)\leq 0$, giving the desired contradiction. Indeed, note that $$\mathbf{P}(\phi,-r_{\phi,\psi}+g)\leq \displaystyle\sup_{m\in\mathscr{P}(\phi)}\left( h(\phi,m)-\int r_{\phi,\psi}\mathrm{d} m\right)+\displaystyle\sup_{m\in\mathscr{P}(\phi)} \int g\mathrm{d} m.$$ \noindent Since $\Vert [g]\Vert_{\textnormal{Th}}=0$ and $r_{\phi,\psi}$ is positive, we have $$ \displaystyle\sup_{m\in\mathscr{P}(\phi)}\int g\mathrm{d} m\leq 0,$$ \noindent and therefore $$\mathbf{P}(\phi,-r_{\phi,\psi}+g)\leq \displaystyle\sup_{m\in\mathscr{P}(\phi)}\left( h(\phi,m)-\int r_{\phi,\psi}\mathrm{d} m\right)=\mathbf{P}(\phi,-r_{\phi,\psi})=0.$$ \end{proof} We now link the Finsler norm $\Vert\cdot\Vert_{\textnormal{Th}}$ and the asymmetric distance $d_{\textnormal{Th}}(\cdot,\cdot)$. A path $\{[\psi^s]\}_{s\in(-1,1)}\subset\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ is \textit{analytic} (resp. C$^k$, \textit{smooth}) if there is an analytic (resp. C$^k$, smooth) path $\{\widetilde{g}_s\}_{s\in(-1,1)}\subset\mathcal{H}^\upsilon(X)$ of strictly positive functions so that $\left[\phi^{\widetilde{g}_s}\right]=[\psi^s]$ for all $s\in(-1,1)$. Pick a path $\{[\psi^s]\}_{s\in(-1,1)}\subset\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ of class C$^1$ and let $\{\widetilde{g}_s\}_{s\in(-1,1)}\subset\mathcal{H}^\upsilon(X)$ be as above.
By Bridgeman-Canary-Labourie-Sambarino \cite[Proposition 3.12]{BCLS}, the function $s\mapsto h_{\phi^{\widetilde{g}_s}}$ is of class C$^1$. Hence, $s\mapsto g_s:=h_{\phi^{\widetilde{g}_s}}\widetilde{g}_s$ is also C$^1$. Furthermore, we have $$\left[\phi^{g_s}\right]=\left[ \phi^{\widetilde{g}_s} \right]=[\psi^s]$$ \noindent for all $s$, and therefore we may write $\psi^s=\phi^{g_s}$. By construction we have $h_{\psi^s}=1$, that is, $\mathbf{P}(\phi,-g_s)=0$ for all $s\in(-1,1)$ (Proposition \ref{prop: pressureZero}). If we denote $\dot{g}_0:= \left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0} g_{s}$, we have $$\left[\dot{g}_0\right]=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}\left[g_s\right],$$ \noindent and Proposition \ref{prop: firstDevPressure} gives $$0=\displaystyle\int\left(-\dot{g}_0\right)\mathrm{d} m_{-g_0},$$ \noindent where $m_{-g_0}=m_{-g_0}(\phi)$ is the equilibrium state of $-g_0$ (w.r.t. $\phi$). That is, setting $\psi:=\psi^0$ we have $[\dot{g}_0]\in T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$. \begin{prop}\label{prop: FinslerNorm} With the notations above, the function $s\mapsto d_{\textnormal{Th}}([\psi],[\psi^s])$ is differentiable at $s=0$. Furthermore, one has $$ \left\Vert \left[\dot{g}_0\right]\right\Vert_{\textnormal{Th}}=\left.\frac{\mathrm{d} }{\mathrm{d} s}\right\vert_{s=0}d_{\textnormal{Th}}([\psi],[\psi^s]).$$ \end{prop} \begin{proof} Compare Guillarmou-Knieper-Lefeuvre \cite[Lemma 5.6]{GeodesicStretchPressureMetric}. Let $$r_s:=\frac{g_s}{r_{\phi,\psi}}=\frac{g_s}{g_0},$$ \noindent which is the reparametrizing function from $\psi$ to $\psi^s$. Note that $$\dot{r}_0:=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0} r_{s}=\frac{\dot{g}_0}{r_{\phi,\psi}},$$ and by Equation (\ref{eq: finsler succint}) we have \begin{equation}\label{eq: finsler and dth} \left\Vert \left[\dot{g}_0\right]\right\Vert_{\textnormal{Th}}=\displaystyle\sup_{m\in\mathscr{P}(\psi)}\int \dot{r}_0\mathrm{d} m. 
\end{equation} On the other hand, let $u(s):=e^{d_{\textnormal{Th}}([\psi],[\psi^s])}$. Since $h_{\psi^s}\equiv 1$, we have $$u(s)=\displaystyle\sup_{m\in\mathscr{P}(\psi)}\int r_s\mathrm{d} m.$$ \noindent It suffices to show that $u$ is differentiable at $s=0$ and $u'(0)=\Vert[\dot{g}_0]\Vert_{\textnormal{Th}}$. Since $r_0\equiv 1$, we have $$\frac{u(s)-u(0)}{s}=\frac{\displaystyle\sup_{m\in\mathscr{P}(\psi)}\int r_s\mathrm{d} m-\displaystyle\sup_{m\in\mathscr{P}(\psi)}\int 1 \mathrm{d} m}{s}=\displaystyle\sup_{m\in\mathscr{P}(\psi)}\int \left(\frac{r_s-1}{s}\right)\mathrm{d} m,$$ \noindent and thanks to Equation (\ref{eq: finsler and dth}) we need to show $$\displaystyle\lim_{s\to 0}\left(\displaystyle\sup_{m\in\mathscr{P}(\psi)}\int \left(\frac{r_s-1}{s}\right)\mathrm{d} m\right)= \displaystyle\sup_{m\in\mathscr{P}(\psi)}\int \dot{r}_0\mathrm{d} m.$$ Fix some $\varepsilon>0$. The Mean Value Theorem implies that $\frac{r_s-1}{s}$ converges uniformly to $\dot{r}_0$ as $s\to 0$. Then there exists $\delta>0$ so that, for all $0<\vert s \vert<\delta$, one has $$\sup\limits_{x\in X}\left\vert\frac{r_s(x)-1}{s}- \dot{r}_0(x)\right\vert<\varepsilon.$$ \noindent Fix any $s$ so that $0<\vert s \vert<\delta$. For every $m\in\mathscr{P}(\psi)$ we have $$\left\vert\int \frac{r_s-1}{s} \mathrm{d} m - \int \dot{r}_0 \mathrm{d} m\right\vert \leq \sup\limits_{x\in X}\left\vert\frac{r_s(x)-1}{s}- \dot{r}_0(x)\right\vert <\varepsilon.$$ \noindent Therefore $$\int \dot{r}_0 \mathrm{d} m-\varepsilon <\int \frac{r_s-1}{s} \mathrm{d} m<\int \dot{r}_0 \mathrm{d} m+\varepsilon,$$ \noindent for all $m\in\mathscr{P}(\psi)$. Taking the supremum over all $m\in\mathscr{P}(\psi)$, the result follows.
\end{proof} \begin{rem}\label{rem: first derivative of rint} \begin{enumerate} \item Keeping the notations from above, Proposition \ref{prop: FinslerNorm} can be restated as $$\left\Vert \left[\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}g_s\right]\right\Vert_{\textnormal{Th}}=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}\left(\displaystyle\sup_{m\in\mathscr{P}(\psi)}\mathbf{J}_m(\psi,\psi^s)\right).$$ \noindent We will come back to this equality in Subsection \ref{subsec: comparison pressure norm}, comparing our viewpoint with previous work of Bridgeman-Canary-Labourie-Sambarino \cite{BCLS}. \item Notice that although $\Vert\cdot\Vert_{\textnormal{Th}}$ is a Finsler norm induced from the asymmetric distance $d_{\textnormal{Th}}(\cdot,\cdot)$, it is not clear whether $d_{\textnormal{Th}}(\cdot,\cdot)$ is the length distance induced from $\Vert\cdot\Vert_{\textnormal{Th}}$. In the context of Teichm\"uller space (c.f. Remark \ref{rem: asymmetric}), Thurston \cite{ThurstonStretch} shows that $d_{\textnormal{Th}}(\cdot,\cdot)$ coincides with the length distance induced by the Finsler norm. \item The Finsler norm $\Vert\cdot\Vert_{\textnormal{Th}}$ is, in general, not induced by an inner product. Indeed, in some concrete examples (c.f. Remark \ref{rem: asymmetric}) one may find tangent vectors $[g]$ for which $$\Vert [g]\Vert_{\textnormal{Th}}\neq\Vert -[g]\Vert_{\textnormal{Th}}.$$ \end{enumerate} \end{rem} \subsection{Comparison with pressure norm}\label{subsec: comparison pressure norm} Thurston also introduced a Riemannian metric on the Teichm\"uller space of a closed surface $S$, which agrees with the Weil-Petersson metric (see Wolpert \cite{Wolpert}). McMullen \cite{McMullen} reinterpreted this construction using Thermodynamical Formalism, and Bridgeman-Canary-Labourie-Sambarino \cite{BCLS} took inspiration from this to produce a Euclidean norm $\Vert\cdot\Vert_\mathbf{P}$ on $T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$. 
We now briefly recall the construction of \cite{BCLS} and point out the difference with our approach. Let $[\psi]\in\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ and $[g]\in T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ be a tangent vector. Thanks to Proposition \ref{prop: firstDevPressure}, one has $\left.\frac{\mathrm{d}^2}{\mathrm{d} s^2}\right\vert_{s=0}\mathbf{P}(-r_{\phi,\psi}+sg)\geq 0$. Hence, one may define $$\Vert[g]\Vert_\mathbf{P}:=\sqrt{\frac{\left.\frac{\mathrm{d}^2}{\mathrm{d} s^2}\right\vert_{s=0}\mathbf{P}(-r_{\phi,\psi}+sg)}{\int r_{\phi,\psi}\mathrm{d} m_{-r_{\phi,\psi}}}}.$$ \noindent Work of Ruelle and Parry-Pollicott implies that $\Vert\cdot\Vert_{\mathbf{P}}$ is a norm\footnote{In particular one has to show that $\Vert[g]\Vert_\mathbf{P}=0$ if and only if $[g]=0$.} on $T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$, called the \textit{pressure norm}. Moreover, this norm is induced from an inner product, and in fact one has $$\Vert[g]\Vert_\mathbf{P}^2=\frac{\displaystyle\lim_{T\to\infty}\frac{1}{T}\int\left(\displaystyle\int_0^T g(\phi_s(x))\mathrm{d} s\right)^2\mathrm{d} m_{-r_{\phi,\psi}}(x)}{\int r_{\phi,\psi}\mathrm{d} m_{-r_{\phi,\psi}}}.$$ \noindent See \cite[Subsection 3.5.1]{BCLS} for details. As noticed in \cite[Subsection 3.5.2]{BCLS} the pressure norm is related to the $m^{\textnormal{BM}}(\psi)$-renormalized intersection. Indeed, consider the function $\mathbf{J}_{[\psi]}(\cdot)$ on $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ given by $$\mathbf{J}_{[\psi]}([\widehat{\psi}]):=\textbf{J}_{m^{\textnormal{BM}}(\psi)}(\psi,\widehat{\psi}),$$ \noindent where $\psi$ (resp. $\widehat{\psi}$) is a representative of $[\psi]$ (resp. $[\widehat{\psi}]$). One may check that this is a well-defined function, as it does not depend on the choice of these representatives. 
Furthermore, by Proposition \ref{prop: BCLS renorm int rigidity} this function has a minimum at $[\psi]$ and therefore its Hessian at $[\psi]$ defines a non-negative symmetric bilinear form on $T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$. In fact, if we let $\{g_s\}_{s\in(-1,1)}$ be a C$^1$ path as in Proposition \ref{prop: FinslerNorm}, then one has $$\left\Vert\left[\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}g_s\right]\right\Vert_\mathbf{P}^2=\left.\frac{\mathrm{d}^2}{\mathrm{d} s^2}\right\vert_{s=0}\mathbf{J}_{[\psi]}([\psi^s]).$$ \noindent See \cite[Proposition 3.11]{BCLS} for details. Hence, the second derivative of the $m^{\textnormal{BM}}(\psi)$-renormalized intersection defines an inner product on $T_{[\psi]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$. Our viewpoint is different: rather than taking a second derivative of the renormalized intersection with respect to a given measure, we take the supremum of renormalized intersections over all measures, and then take a first derivative (c.f. Remark \ref{rem: first derivative of rint}). \section{Anosov representations} \label{sec, AnosovReps} Anosov representations were introduced by Labourie \cite{Lab} for fundamental groups of negatively curved manifolds, and then extended by Guichard-W. \cite{GW} to general word hyperbolic groups. They provide a stable class of discrete representations with finite kernel into semisimple Lie groups that share many features with holonomies of convex cocompact hyperbolic manifolds. We will briefly recall this notion in Subsection \ref{subsec: anosov reps}, after fixing some notations and terminology in Subsection \ref{subsec: structure lie groups}. In Subsection \ref{subsec: examples anosov reps} we discuss examples. For a more complete account of the state of the art of the field, see e.g. \cite{KasICM,PozBourbaki,WieICM} and references therein.
\subsection{Structure of semisimple Lie groups}\label{subsec: structure lie groups} Standard references for this part are the books of Knapp \cite{Kna} and Helgason \cite{Hel}. Let $\mathsf{G}$ be a connected real semisimple algebraic group of non compact type with Lie algebra $\mathfrak{g}$. Let $\mathsf{K}$ be a maximal compact subgroup of $\mathsf{G}$ and $\tau$ be the corresponding Cartan involution of $\mathfrak{g}$. Let $$\mathfrak{p}:=\{v\in\mathfrak{g}: \tau v=-v\}.$$ \noindent We fix a Cartan subspace $\mathfrak{a}\subset\mathfrak{p}$ and let $\mathsf{M}$ be the centralizer of $\mathfrak{a}$ in $\mathsf{K}$. A natural dynamical system one may look at when studying a discrete subgroup $\Delta<\mathsf{G}$ is the right action of $\mathfrak{a}$ on $\Delta\backslash\mathsf{G}/\mathsf{M}$. When $\mathsf{G}$ has real rank equal to one, this action is conjugate to the action of the geodesic flow of the underlying negatively curved manifold. However, in general it may be hard to study the action $\mathfrak{a}\curvearrowright \Delta\backslash\mathsf{G}/\mathsf{M}$. In many situations (including the setting we are aiming for), it proves useful to consider a ``more hyperbolic'' dynamical system, namely, the action of the center of the Levi group associated to a parallel set. We now fix the terminology needed to define this dynamical system. Denote by $\Sigma$ the set of \textit{roots} of $\mathfrak{a}$ in $\mathfrak{g}$, that is, the set of functionals $\alpha\in\mathfrak{a}^*\setminus\{0\}$ for which the \textit{root space} $$\mathfrak{g}_{\alpha}:=\{Y\in \mathfrak{g}: [X,Y] =\alpha(X)Y \textnormal{ for all } X\in\mathfrak{a}\}$$ \noindent is non zero. Fix a positive system $\Sigma^{+}\subset\Sigma$ associated to a closed Weyl chamber $\mathfrak{a}^+\subset\mathfrak{a}$. The set of simple roots for $\Sigma^{+}$ is denoted by $\Pi$. \begin{ex}\label{ex: roots} Suppose $\mathsf{G}=\mathsf{PSL}(V)$, where $V$ is a real (resp.
complex) vector space of dimension $d\geq 2$. The Lie algebra of $\mathsf{G}$ is the space of traceless linear operators on $V$. Hence every element of $\mathfrak{g}$ acts on $V$. A maximal compact subgroup is the subgroup of orthogonal (resp. unitary) matrices with respect to an inner (resp. Hermitian inner) product $o$ on $V$. A Cartan subspace $\mathfrak{a}\subset\mathfrak{p}$ is the subalgebra of matrices which are diagonal on a given projective basis $\mathcal{E}$ of $V$ orthogonal with respect to $o$. The choice of a closed Weyl chamber $\mathfrak{a}^+\subset\mathfrak{a}$ corresponds to the choice of a total order $\{\ell_1,\dots,\ell_d\}$ on $\mathcal{E}$. Explicitly, if $\lambda_j(X)$ denotes the eigenvalue of $X\in\mathfrak{a}$ on the eigenline $\ell_j$, the Weyl chamber $\mathfrak{a}^+$ is given by the set of matrices $X\in\mathfrak{a}$ for which $$\lambda_1(X)\geq\dots\geq\lambda_d(X).$$ \noindent For $i\neq j$ we let $\alpha_{i,j}(X):=\lambda_i(X)-\lambda_j(X)$. Then $$\Sigma=\{\alpha_{i,j}:i\neq j\} \textnormal{ and } \Sigma^+=\{\alpha_{i,j}:i< j\}.$$ \noindent The set of simple roots is $$\Pi=\{\alpha_{i,i+1}:i=1,\dots,d-1\}.$$ \noindent Sometimes we will denote the elements of $\Pi$ simply by $\alpha_i:=\alpha_{i,i+1}$. \end{ex} Let $\mathsf{W}$ be the \textit{Weyl group} of $\Sigma$. We realize it as $$\mathsf{W}\cong\mathsf{N}_{\mathsf{K}}(\mathfrak{a})/\mathsf{M},$$ \noindent where $\mathsf{N}_{\mathsf{K}}(\mathfrak{a})$ is the normalizer of $\mathfrak{a}$ in $\mathsf{K}$. The group $\mathsf{W}$ acts simply transitively on the set of Weyl chambers in $\mathfrak{a}$; thus there exists a unique element $w_0\in\mathsf{W}$ taking $\mathfrak{a}^{+}$ to $-\mathfrak{a}^{+}$. The \textit{opposition involution} associated to $\mathfrak{a}^+$ is $\iota:=-w_0$. We will furthermore need the structure of parabolic subgroups of $\mathsf{G}$. Fix a non empty subset $\Theta \subset \Pi$.
Consider the subalgebras $$\mathfrak{p}_{\Theta}:= \mathfrak{g}_0 \oplus\bigoplus_{\alpha \in\Sigma^{+}} \mathfrak{g}_{\alpha}\oplus\bigoplus_{\alpha\in\langle\Pi-\Theta\rangle}\mathfrak{g}_{-\alpha}$$ \noindent and $$\overline{\mathfrak{p}_{\Theta}}:= \mathfrak{g}_0 \oplus\bigoplus_{\alpha \in\Sigma^{+}} \mathfrak{g}_{-\alpha}\oplus\bigoplus_{\alpha\in\langle\Pi-\Theta\rangle}\mathfrak{g}_{\alpha},$$ \noindent where $\langle\Pi-\Theta\rangle$ denotes the set of positive roots generated by roots in $\Pi-\Theta$. We let $\mathsf{P}_\Theta$ and $\overline{\mathsf{P}}_\Theta$ be the corresponding subgroups of $\mathsf{G}$. Every parabolic subgroup of $\mathsf{G}$ is conjugate to a unique $\mathsf{P}_{\Theta}$. Note that $\overline{\mathsf{P}}_\Theta$ is conjugate to $\mathsf{P}_{\iota(\Theta)}$, where $$\iota(\Theta):=\{\alpha\circ\iota: \alpha\in\Theta\}.$$ \noindent The parabolic subgroup $\overline{\mathsf{P}}_\Theta$ is \textit{opposite} to $\mathsf{P}_\Theta$. Let $$\mathscr{F}_\Theta:=\mathsf{G}/\mathsf{P}_\Theta \textnormal{ and }\overline{\mathscr{F}}_\Theta:=\mathsf{G}/\overline{\mathsf{P}}_\Theta$$ \noindent be the corresponding \textit{flag manifolds} of $\mathsf{G}$. Two flags $\xi\in\mathscr{F}_\Theta$ and $\overline{\xi}\in\overline{\mathscr{F}}_\Theta$ are \textit{transverse} if $(\overline{\xi},\xi)$ belongs to $\mathscr{F}^{(2)}_\Theta$, the unique open orbit of the action of $\mathsf{G}$ on $\overline{\mathscr{F}}_\Theta\times\mathscr{F}_\Theta$. We also let $\mathscr{F}:=\mathscr{F}_\Pi$ and $\mathscr{F}^{(2)}:=\mathscr{F}_\Pi^{(2)}$. \begin{ex}\label{ex: flags in psl} Let $\mathsf{G}$ be as in Example \ref{ex: roots}. The choice of $\Theta$ is in this case equivalent to the choice of a subset $\{1\leq i_1<\dots<i_p\leq d-1\}$, for some $1\leq p\leq d-1$.
Then $\mathscr{F}_\Theta$ identifies with the space of \textit{partial flags} indexed by $\Theta$, that is, the space of sequences $\xi$ of the form $(\xi^{i_1}\subset\dots\subset\xi^{i_p})$, where $\xi^{i_j}$ is a linear subspace of $V$ of dimension $i_j$, for all $j=1,\dots,p$. Furthermore, one has $\iota(\Theta)=\{1\leq d-i_p<\dots<d-i_1\leq d-1\}$. A flag $\overline{\xi}\in\overline{\mathscr{F}}_\Theta$ is transverse to $\xi\in\mathscr{F}_\Theta$ if and only if for all $j=1,\dots ,p$ the sum $\overline{\xi}^{d-i_j}+\xi^{i_j}$ is direct. \end{ex} A point $(\overline{\xi},\xi)\in\mathscr{F}^{(2)}_\Theta$ determines a \textit{parallel set} of the Riemannian symmetric space $X_\mathsf{G}$ of $\mathsf{G}$. It is the union of all parametrized flat subspaces $f$ of $X_\mathsf{G}$ so that the flag associated to $f(\mathfrak{a}^+)$ (resp. $f(-\mathfrak{a}^+)$) belongs to the fiber over $\xi$ (resp. $\overline{\xi}$), for the fibration $\mathscr{F}\to\mathscr{F}_\Theta$ (resp. $\mathscr{F}\to\overline{\mathscr{F}}_\Theta$). When the real rank of $\mathsf{G}$ is equal to $1$, this is just a geodesic of $X_\mathsf{G}$. When $\Theta=\Pi$, it is a maximal flat subspace of $X_\mathsf{G}$. Any parallel set is identified with the Riemannian symmetric space of the Levi subgroup $L_\Theta = \mathsf{P}_\Theta\cap\overline{\mathsf{P}}_\Theta$, a reductive subgroup of $\mathsf{G}$. Let $$\mathfrak{a}_{\Theta} := \bigcap_{\alpha \in \Pi -\Theta} \text{ker } \alpha$$ \noindent be the Lie algebra of the center of $L_\Theta$ (in particular, $\mathfrak{a}_\Pi=\mathfrak{a}$).
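For instance, suppose that $\mathsf{G}=\mathsf{PSL}(d,\mathbb{R})$ is as in Example \ref{ex: roots} and take $\Theta=\{\alpha_1\}$. Then $$\mathfrak{a}_{\Theta}=\bigcap_{i=2}^{d-1}\text{ker } \alpha_i=\{X\in\mathfrak{a}:\lambda_2(X)=\dots=\lambda_d(X)\},$$ \noindent which, since elements of $\mathfrak{a}$ are traceless, is the line spanned by $\textnormal{diag}\left(1,-\tfrac{1}{d-1},\dots,-\tfrac{1}{d-1}\right)$.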
There is a unique projection $p_{\Theta}: \mathfrak{a} \to \mathfrak{a}_{\Theta}$ invariant under the group $$\mathsf{W}_{\Theta}:=\{w\in \mathsf{W}: w\vert_{\mathfrak{a}_{\Theta}}=\operatorname{id}_{\mathfrak{a}_{\Theta}}\}.$$ \noindent The dual space $\mathfrak{a}_\Theta^*$ identifies naturally with $\{\varphi\in\mathfrak{a}^*: \varphi\circ p_\Theta=\varphi\}$. We will use this identification throughout the paper. Consider the space $\mathscr{F}_\Theta^{(2)}\times\mathfrak{a}_\Theta$, endowed with the action of $\mathfrak{a}_\Theta$ by translations on the last coordinate. This action commutes with a natural action of $\mathsf{G}$ that we now describe, and the quotient dynamics is the ``more hyperbolic'' dynamical system we have referred to at the beginning of this subsection. Let $\mathsf{N}$ be the \textit{unipotent radical} of $\mathsf{P}=\mathsf{P}_\Pi$, i.e. the connected subgroup of $\mathsf{G}$ associated to the Lie algebra $\sum_{\alpha\in\Sigma^+}\mathfrak{g}_\alpha$. The \textit{Iwasawa Decomposition} is $$\mathsf{G}=\mathsf{K}\exp(\mathfrak{a})\mathsf{N}.$$ \noindent In particular, $\mathscr{F}\cong\mathsf{K}/\mathsf{M}$ and for $\xi\in\mathscr{F}$ we may find $k\in\mathsf{K}$ such that $k\mathsf{M}=\xi$. Quint \cite{QuintCocylce} defines a map $\sigma: \mathsf{G}\times \mathscr{F} \to \mathfrak{a}$ by the formula $$gk=l \exp(\sigma(g,k\mathsf{M}))n,$$ \noindent where $n\in\mathsf{N}$ and $l\in\mathsf{K}$. Quint \cite[Lemme 6.11]{QuintCocylce} also shows that $p_{\Theta}\circ\sigma:\mathsf{G} \times \mathscr{F} \to \mathfrak{a}_{\Theta}$ factors through a map $\sigma_{\Theta}: \mathsf{G} \times \mathscr{F}_{\Theta} \to \mathfrak{a}_{\Theta}$. For every $g, h\in \mathsf{G}$ and $\xi\in \mathscr{F}_{\Theta}$ one has $$\sigma_{\Theta}(gh,\xi)=\sigma_{\Theta}(g,h\cdot\xi)+\sigma_{\Theta}(h,\xi).$$ \noindent The map $\sigma_\Theta$ is called the $\Theta$-\textit{Busemann-Iwasawa cocycle} of $\mathsf{G}$.
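As an illustration, suppose that $\mathsf{G}=\mathsf{PSL}(d,\mathbb{R})$ is as in Example \ref{ex: roots} and $\Theta=\{\alpha_1\}$, so that $\mathscr{F}_\Theta=\mathbb{P}(\mathbb{R}^d)$ (c.f. Example \ref{ex: flags in psl}). Working with lifts to $\mathsf{SL}(d,\mathbb{R})$ and applying the Iwasawa Decomposition to a unit vector spanning a given line, one checks that $$\lambda_1\left(\sigma_{\Theta}(g,[v])\right)=\log\frac{\Vert g\cdot v\Vert}{\Vert v\Vert},$$ \noindent where $\Vert\cdot\Vert$ denotes the norm induced by $o$ (here we use that $\lambda_1$ is $\mathsf{W}_\Theta$-invariant, so that $\lambda_1\circ p_\Theta=\lambda_1$). In this form, the cocycle property above simply reflects the multiplicativity $\Vert gh\cdot v\Vert/\Vert v\Vert=\left(\Vert g\cdot (hv)\Vert/\Vert hv\Vert\right)\cdot\left(\Vert h\cdot v\Vert/\Vert v\Vert\right)$ written in logarithmic form.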
Observe that the action of $\mathfrak{a}_\Theta$ on $\mathscr{F}_\Theta^{(2)}\times\mathfrak{a}_\Theta$ commutes with the action of $\mathsf{G}$ given by $$g\cdot(\overline{\xi},\xi,X):=(g\cdot \overline{\xi},g\cdot \xi,X-\sigma_\Theta(g,\xi)).$$ \begin{rem} The Busemann-Iwasawa cocycle of $\mathsf{G}$ is a vector valued version of the \textit{Busemann function} of the Riemannian symmetric space $X_\mathsf{G}$ of $\mathsf{G}$. Indeed, when $\mathsf{G}$ has real rank equal to one, then $\mathscr{F}$ identifies with the visual boundary $\partial X_\mathsf{G}$ of $X_\mathsf{G}$. Let $o\in X_\mathsf{G}$ be the point fixed by $\mathsf{K}$. After identifying $\mathfrak{a}$ with $\mathbb{R}$ suitably, one has $$\sigma(g,\xi)=b_\xi(o,g^{-1}\cdot o),$$ \noindent where $b_\cdot(\cdot,\cdot):\partial X_\mathsf{G}\times X_\mathsf{G}\times X_\mathsf{G}\to\mathbb{R}$ is the Busemann function. A similar interpretation holds in higher rank (c.f. \cite[Lemme 6.6]{QuintCocylce}). \end{rem} In Section \ref{sec: anosov flows and reps} we will consider a flow space which is even better behaved than the action of $\mathfrak{a}_\Theta$ associated to a parallel set. It will be induced by the choice of a functional in $\mathfrak{a}_\Theta^*$. Natural generators of $\mathfrak{a}_\Theta^*$ are the \textit{fundamental weights} associated to $\Theta$, whose definition we now recall. Denote by $(\cdot,\cdot)$ the inner product on $\mathfrak{a}^*$ dual to the Killing form of $\mathfrak{g}$. For $\varphi,\psi\in\mathfrak{a}^*$ set $$\langle\varphi,\psi\rangle:=2\frac{(\varphi,\psi)}{(\psi,\psi)}.$$ \noindent Given $\alpha\in\Pi$, the corresponding \textit{fundamental weight} is the functional $\omega_\alpha\in\mathfrak{a}^*$ defined by the formulas $ \langle\omega_\alpha,\beta\rangle=\delta_{\alpha\beta}$ for $\beta\in\Pi$. One has \begin{equation}\label{eq: fund weight and ptheta} \omega_\alpha\circ p_\Theta =\omega_\alpha \end{equation}\noindent for all $\alpha\in\Theta$ (c.f. 
Quint \cite[Lemme II.2.1]{QuiDivergence}). In particular, we have $\omega_\alpha\in\mathfrak{a}_\Theta^*$. Fundamental weights are related to a special set of linear representations of $\mathsf{G}$ introduced by Tits \cite{Tits}. If $\Lambda:\mathsf{G}\to\mathsf{PGL}(V)$ is an irreducible representation, a functional $\chi\in\mathfrak{a}^*$ is a \textit{weight} of $\Lambda$ if the \textit{weight space} $$V_\chi:=\{v\in V: \Lambda(\exp(X))\cdot v=e^{\chi(X)}v, \textnormal{ for all } X\in\mathfrak{a}\}$$ \noindent is non zero. Tits \cite{Tits} shows that there exists a unique weight $\chi_\Lambda$ which is maximal with respect to the order given by $\chi\geq \chi'$ if $\chi-\chi'$ is a linear combination of simple roots with non-negative coefficients. The functional $\chi_\Lambda$ is called the \textit{highest weight} of $\Lambda$ and the representation is \textit{proximal} if the associated weight space $V_{\chi_\Lambda}$ is one dimensional. The next proposition is useful. \begin{prop}[Tits \cite{Tits}]\label{prop: tits} For every $\alpha\in\Pi$ there exists a finite dimensional real vector space $V_\alpha$ and a proximal irreducible representation $\Lambda_\alpha:\mathsf{G}\to\mathsf{PGL}(V_\alpha)$ such that the highest weight $\chi_\alpha=\chi_{\Lambda_\alpha}$ is of the form $k_\alpha\omega_\alpha$, for some integer $k_\alpha\geq 1$. \end{prop} We fix from now on a set of representations $\{\Lambda_\alpha\}_{\alpha\in\Pi}$ as in Proposition \ref{prop: tits}. Observe that for all $\alpha\in\Theta$ we have\begin{equation}\label{eq: highest weight and ptheta} \chi_\alpha\circ p_\Theta =\chi_\alpha, \end{equation} \noindent and therefore $\chi_\alpha$ belongs to $\mathfrak{a}_\Theta^*$. We conclude by recalling the definitions of the Cartan and Jordan projections of $\mathsf{G}$ for later use.
The \textit{Cartan projection} of $g\in\mathsf{G}$ is the unique element $\mu(g)\in\mathfrak{a}^+$ satisfying $$g\in\mathsf{K}\exp(\mu(g))\mathsf{K}.$$ \noindent The \textit{Jordan projection} of $g$ is defined by $$\lambda(g):=\displaystyle\lim_{n\to\infty}\frac{\mu(g^n)}{n}.$$ \noindent One may show that for all $\alpha\in\Pi$ and all $g\in\mathsf{G}$ one has \begin{equation}\label{eq: spectral radious and fund weight} \lambda_1(\Lambda_\alpha(g))=\chi_\alpha(\lambda(g))=k_\alpha\omega_\alpha(\lambda(g)). \end{equation} \noindent We denote $$\mu_{\Theta}:=p_{\Theta}\circ \mu \textnormal{ and } \lambda_{\Theta}:=p_{\Theta}\circ \lambda.$$ \subsection{Anosov representations and their length functions}\label{subsec: anosov reps} We now define Anosov representations and their corresponding length functions and entropies. The definition that we present here is not the original definition, but an equivalent one established in \cite{KLPanosovcharacterizations,GGKW,BPS}. Let $\Gamma$ be a finitely generated group and $\vert\cdot\vert$ be the word length associated to a finite generating set (that we fix from now on). \begin{dfn}\label{def: anosov rep} Let $\Theta\subset\Pi$ be a non empty set. A representation $\rho: \Gamma \to \mathsf{G}$ is $\mathsf{P}_\Theta$-\textit{Anosov} (or $\Theta$-\textit{Anosov}) if there exist positive constants $C$ and $c$ such that for all $\alpha\in\Theta$ and all $\gamma\in\Gamma$ one has $$\alpha(\mu(\rho(\gamma)))\geq C\vert\gamma\vert-c.$$ \noindent When $\Theta=\Pi$ and $\mathsf{G}$ is split, $\rho$ is sometimes called \textit{Borel-Anosov}. When $\mathsf{G}=\mathsf{PSL}(V)$ with $V$ as in Example \ref{ex: roots}, $\{\alpha_1\}$-Anosov representations are also called \textit{projective Anosov}. \end{dfn} An immediate consequence of Definition \ref{def: anosov rep} is that Anosov representations are quasi-isometric embeddings from $\Gamma$ to $\mathsf{G}$. In particular, they are discrete and have finite kernels.
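In the setting of Example \ref{ex: roots} (say with $V=\mathbb{R}^d$), both projections admit a concrete description. Fix a lift $g\in\mathsf{SL}(d,\mathbb{R})$, let $\sigma_1(g)\geq\dots\geq\sigma_d(g)$ be its singular values with respect to $o$, and let $\vert e_1(g)\vert\geq\dots\geq\vert e_d(g)\vert$ be the moduli of its complex eigenvalues. Then $$\mu(g)=\textnormal{diag}\left(\log\sigma_1(g),\dots,\log\sigma_d(g)\right) \textnormal{ and } \lambda(g)=\textnormal{diag}\left(\log\vert e_1(g)\vert,\dots,\log\vert e_d(g)\vert\right).$$ \noindent In particular, for $\Theta=\{\alpha_1\}$ Definition \ref{def: anosov rep} asks for a uniform exponential gap between the first two singular values: $$\log\frac{\sigma_1(\rho(\gamma))}{\sigma_2(\rho(\gamma))}\geq C\vert\gamma\vert-c \textnormal{ for all } \gamma\in\Gamma.$$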
A deeper consequence is a theorem by Kapovich-Leeb-Porti \cite[Theorem 1.4]{KLPmorse} (see also \cite[Section 3]{BPS}): if $\rho:\Gamma\to\mathsf{G}$ is $\Theta$-Anosov then $\Gamma$ is word hyperbolic. Throughout the paper we shall assume that $\Gamma$ is non elementary and denote by $\partial\Gamma$ its Gromov boundary. We also let $\partial^{(2)}\Gamma$ be the space of ordered pairs of distinct points in $\partial\Gamma$. Every infinite-order element $\gamma\in\Gamma$ has a unique attracting (resp. repelling) fixed point in $\partial\Gamma$, denoted by $\gamma_+$ (resp. $\gamma_-$). We let $\Gamma_{\textnormal{H}}\subset\Gamma$ be the subset consisting of infinite-order elements. The conjugacy class of $\gamma\in\Gamma$ is denoted by $[\gamma]$, and the set of conjugacy classes of elements of $\Gamma$ (resp. $\Gamma_{\textnormal{H}}$) will be denoted by $[\Gamma]$ (resp. $[\Gamma_{\textnormal{H}}]$). A central feature of $\Theta$-Anosov representations is that they admit \textit{limit maps}. By definition, these are H\"older continuous, $\rho$-equivariant, dynamics-preserving maps $$\xi_\rho: \partial \Gamma \to \mathscr{F}_\Theta \textnormal{ and } \overline{\xi}_\rho: \partial \Gamma \to \overline{\mathscr{F}}_\Theta,$$ \noindent which are moreover \textit{transverse}, that is, for every $x\neq y$ in $\partial\Gamma$ one has $$ (\overline{\xi}_\rho(x),\xi_\rho(y))\in\mathscr{F}^{(2)}_\Theta. $$ \noindent The limit maps exist and are unique (see \cite{BPS,GGKW,KLPanosovcharacterizations} for details). \begin{ex}\label{ex: limit map in grassmanian} Let $\mathsf{G}$ be as in Example \ref{ex: roots} and $\Theta=\{1\leq i_1<\dots<i_p\leq d-1\}$ for some $1\leq p\leq d-1$ (c.f. Example \ref{ex: flags in psl}). For $j=1,\dots ,p$, we let $$\xi_\rho^{i_j}:\partial\Gamma\to\mathbb{G}_{i_j}(V)$$ \noindent be the $i_j$-coordinate of $\xi_\rho$ into the Grassmannian $\mathbb{G}_{i_j}(V)$ of $i_j$-dimensional subspaces of $V$.
\end{ex} The set of $\Theta$-Anosov representations from $\Gamma$ to $\mathsf{G}$ is an open subset of the space of all representations $\Gamma\to\mathsf{G}$. This is a consequence of the original definition \cite{Lab,GW}. Indeed, the original definition requires \textit{a priori} the word hyperbolicity of $\Gamma$ and the existence of the limit maps; with these, one constructs a flow space which, by definition, satisfies a certain form of uniform hyperbolicity. General results in hyperbolic dynamics imply that this is an open condition. Projective Anosov representations are very general: \begin{prop}[Guichard-W. {\cite[Proposition 4.3]{GW}}]\label{prop: anosov and tits} Let $\rho:\Gamma\to\mathsf{G}$ be $\Theta$-Anosov. Then for every $\alpha\in\Theta$ the representation $\Lambda_\alpha\circ\rho:\Gamma\to\mathsf{PGL}(V_\alpha)$ is projective Anosov. \end{prop} We denote by $\ha$ the space of conjugacy classes of $\mathsf{P}_\Theta$-Anosov representations from $\Gamma$ to $\mathsf{G}$. Length functions and entropies are important invariants for the study of this space. By work of Sambarino that we recall in Section \ref{sec: anosov flows and reps}, they provide a way of associating to each $\rho\in\ha$ a certain flow space as in Sections \ref{sec: thermodynamics} and \ref{sec: asymmetric metric and finsler norm for flows}, and therefore one may use the Thermodynamical Formalism to study $\ha$. To define length functions and entropies properly we need to recall the definition of a fundamental object, introduced by Benoist \cite{Benoist_AsymtoticLinearGroups} for general discrete subgroups of $\mathsf{G}$. \begin{dfn} The $\Theta$-\textit{limit cone} of $\rho\in \ha$ is the smallest closed cone $\mathscr{L}_\rho^\Theta\subset\mathfrak a_\Theta ^+$ containing the set $\{\lambda_{\Theta}(\rho (\gamma)): \gamma\in \Gamma\}$. The \textit{limit cone} $\mathscr{L}_\rho$ of $\rho$ is the $\Pi$-limit cone.
\end{dfn} In the above definition we abuse notation, because $\rho$ is a conjugacy class of representations. However, it is clear that the $\Theta$-limit cone is independent of the choice of a representative in this conjugacy class. Under the assumption that $\rho$ is Zariski dense, Benoist \cite{Benoist_AsymtoticLinearGroups} showed that $\mathscr{L}_\rho$ is a convex cone with non empty interior\footnote{In fact, Benoist shows this result for any Zariski dense discrete subgroup of $\mathsf{G}$.}. Since $p_\Theta$ is a surjective linear map, the same properties hold for the $\Theta$-limit cone. Let $$(\mathscr{L}_\rho^\Theta)^*:=\{\varphi \in \mathfrak{a}_{\Theta}^{*}: \varphi|_{\mathscr{L}_\rho^\Theta}\geq 0\}$$ \noindent be the \textit{dual cone}. We denote by $\text{int} ((\mathscr{L}_\rho^\Theta)^*)$ the interior of $(\mathscr{L}_\rho^\Theta)^*$, that is, the set of functionals in $\mathfrak{a}_\Theta^*$ which are positive on $\mathscr{L}_\rho^\Theta\setminus\{0\}$. Fix a functional $$\varphi \in \bigcap_{{\rho}\in\ha} \text{int} ((\mathscr{L}_\rho^\Theta)^*).$$ \noindent The above intersection is non empty. For example, it contains $\omega_\alpha$ for all $\alpha\in\Theta$ (in particular, for $\mathsf{G}=\mathsf{PSL}(V)$ as in Example \ref{ex: roots} and $\alpha_1\in\Theta$, it contains $\lambda_1=\omega_{\alpha_1}$). \begin{dfn} The $\varphi$-\textit{marked length spectrum} (or simply $\varphi$-\textit{length spectrum}) of $\rho\in\ha$ is the function $L^\varphi_{\rho}:\Gamma \to \mathbb{R}_{\geq 0} $ given by \begin{align*} L^\varphi_{\rho}&(\gamma):= \varphi(\lambda_{\Theta}(\rho(\gamma))). \end{align*} \end{dfn} \noindent Observe that for a $\Theta$-Anosov representation $\rho$, $L_\rho^\varphi(\gamma)>0$ if and only if $\gamma\in\Gamma_{\textnormal{H}}$ (that is, if $\gamma$ has infinite order). Furthermore the $\varphi$-length spectrum is invariant under conjugation in $\Gamma$ and therefore descends to a function $[\Gamma]\to\mathbb{R}_{\geq 0}$. We will often abuse notation and denote this function by $L^\varphi_{\rho}$ as well.
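Concretely, let $\mathsf{G}=\mathsf{PSL}(d,\mathbb{R})$ be as in Example \ref{ex: roots} and, for $\gamma\in\Gamma$, let $\vert e_1(\rho(\gamma))\vert\geq\dots\geq\vert e_d(\rho(\gamma))\vert$ denote the eigenvalue moduli of a lift of $\rho(\gamma)$ to $\mathsf{SL}(d,\mathbb{R})$. Then one gets for instance $$L_\rho^{\lambda_1}(\gamma)=\log\vert e_1(\rho(\gamma))\vert \textnormal{ and, when defined, } L_\rho^{\alpha_{1,d}}(\gamma)=\log\frac{\vert e_1(\rho(\gamma))\vert}{\vert e_d(\rho(\gamma))\vert}.$$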
\begin{dfn} The $\varphi$-\textit{entropy} of $\rho$ is defined by $$h_\rho^\varphi:=\displaystyle\limsup_{t\to\infty}\frac{1}{t}\log\#\{[\gamma]\in[\Gamma]:L_\rho^\varphi(\gamma)\leq t\}\in[0,\infty].$$ \end{dfn} The $\varphi$-entropy of $\rho$ was introduced by Sambarino \cite{HyperconvexRepsExponentialGrowth,Quantitative}, who showed that this quantity is in fact given by a true limit, is positive and finite, and coincides with the topological entropy of a suitable flow associated to $\rho$ and $\varphi$. We will briefly recall these facts in Section \ref{sec: anosov flows and reps}. \begin{ex} Here is a concrete set of length spectra that will be of interest (the corresponding entropies are named accordingly). Let $\mathsf{G}=\mathsf{PSL}(V)$ with $V$ as in Example \ref{ex: roots}: \begin{itemize}\label{list, Lengths} \item If $\rho:\Gamma\to\mathsf{G}$ is $\Theta$-Anosov and $\alpha_i\in\Theta$ belongs to $\mathfrak{a}_\Theta^*$ (this is always the case if $\Theta=\Pi$), then $L_\rho^{\alpha_i}$ is called the $i^{\textnormal{th}}$-\textit{simple root length spectrum} of $\rho$. \item If $\rho:\Gamma\to\mathsf{G}$ is projective Anosov, then $L_\rho^{\alpha_{1,d}}$ is called the \textit{Hilbert length spectrum} of $\rho$. We denote it by $L_\rho^\mathrm{H}$. \item If $\rho:\Gamma\to\mathsf{G}$ is projective Anosov, then $L_\rho^{\lambda_1}$ is called the \textit{spectral radius length spectrum} of $\rho$. \end{itemize} \end{ex} \subsection{Examples of Anosov representations}\label{subsec: examples anosov reps} Schottky-type constructions as in Benoist \cite{BenPropres} provide basic examples of $\Theta$-Anosov representations of free groups. In this subsection we give a list of other examples that will be of interest to us. \begin{ex}[Teichm\"uller space] Let $S$ be a connected, closed, orientable surface of genus $\geq 2$ and $\Gamma=\pi_1(S)$ be its fundamental group (in short, $\Gamma$ is a \textit{surface group}).
The \textit{Teichm\"uller space} of $S$ is the space of isotopy classes of Riemannian metrics on $S$ of constant curvature equal to $-1$. Throughout the paper we identify this space with a connected component $\mathfrak{T}(S)$ of the space of $\mathsf{PSL}(2,\mathbb{R})$-conjugacy classes of faithful and discrete representations $\Gamma\to\mathsf{PSL}(2,\mathbb{R})$. By the \v{S}varc-Milnor Lemma (see \cite[Proposition 19 of Ch. 3]{GdlH}), representations in $\mathfrak{T}(S)$ are Anosov. \end{ex} \begin{ex}[Hitchin representations]\label{ex: hitchin and positive} An important class of Anosov representations is given by Hitchin representations. For every split real Lie group $\sf G$, we denote by $\tau:\sf{PSL}(2,\mathbb R)\to\sf G$ the \emph{principal embedding} \cite{Kos}, which is well defined up to conjugation. In the case of $\mathsf{G}=\sf{PSL}(d,\mathbb R)$, $\tau$ gives the unique irreducible linear representation of $\sf{PSL}(2,\mathbb R)$. It was proven by Labourie \cite{Lab} and Fock-Goncharov \cite{FG} that, given the holonomy $\rho_h:\Gamma\to\sf{PSL}(2,\mathbb R)$ of any chosen hyperbolization $h$ of $S$, the entire connected component of $\tau\circ\rho_h$ consists of Borel-Anosov representations. This component is usually referred to as the \emph{Hitchin component}. An element in it is called a (conjugacy class of) \emph{Hitchin representation}. We will denote by $\textnormal{Hit}_d(S)$ (resp. $\textnormal{Hit}(S,\sf G)$) the Hitchin component of $\Gamma$ in $\sf{PSL}(d,\mathbb R)$ (resp. in $\sf G$). Any Hitchin-representation is Borel-Anosov, i.e. it is Anosov with respect to any subset of $\Pi$. It was proven in \cite{PS,PSW1} that the entropy of each simple root is constant and equal to one on each Hitchin component, when $\sf G$ is not of exceptional type. 
\end{ex} \begin{ex}[$\Theta$-positive representations]\label{ex:positive} A general framework encompassing all known cases of connected components of character varieties of fundamental groups of surfaces consisting entirely of Anosov representations was proposed by Guichard-W. \cite{GWTheta}, see also \cite{GLW}. They introduce the class of $\Theta$-positive representations, which includes, apart from Hitchin components, \emph{maximal representations} in Hermitian Lie groups, as well as the connected components of representations in the ${\sf PO}_0(p,q)$-character variety and some components in the character varieties of the four exceptional Lie groups with restricted root system of type $\mathsf{F}_4$. While Hitchin representations are Borel-Anosov, the other representations are, in general, only Anosov with respect to a proper subset $\Theta\subsetneq\Pi$, which consists of a single root in the case of maximal representations, and has $p-1$ elements in the case of ${\sf PO}_0(p,q)$-positive representations. It was proven in \cite{PSW2} that for maximal and $\Theta$-positive representations in ${\sf PO}_0(p,q)$ the entropy with respect to any root in $\Theta$ is equal to one. \end{ex} \begin{ex}[Hyperconvex representations]\label{ex: hyperconvex} Another important class of Anosov representations is that of \emph{$(1,1,p)$-hyperconvex representations}, studied in \cite{PSW1}. These are representations $\rho:\Gamma\to{\sf PGL}(d,\mathbb R)$ that are $\{\alpha_1,\alpha_p\}$-Anosov, and satisfy the additional transversality property that for all triples of pairwise distinct points $x,y,z\in\partial\Gamma$, the sum $\xi_\rho^1(x)+\xi_\rho^1(y)+\xi_\rho^{d-p}(z)$ is direct. If $\Gamma$ is a cocompact lattice in ${\sf PO}(1,p)$, so that $\partial\Gamma=\mathbb S^{p-1}$, it follows from \cite{PSW1} that $\xi_\rho^1(\partial\Gamma)$ is a C$^1$-submanifold of $\mathbb{P}(\mathbb{R}^d)$.
Furthermore, it was proven in \cite{PSW2} that for these representations, which sometimes admit non-trivial deformations, the entropy for the functional $p\omega_{\alpha_1}-\omega_{\alpha_p}$ is constant and equal to 1. Important examples of this class are the groups $\Gamma$ dividing a properly convex domain in $\mathbb{P}(\mathbb{R}^d)$ studied by Benoist \cite{BenoistDivII,BenoistDivI,BenoistDivIII,BenoistDivIV}. These are $(1,1,d-1)$-hyperconvex, and were already studied by Potrie-Sambarino \cite{PS}. \end{ex} \begin{ex}[AdS-quasi-Fuchsian representations]\label{ex: AdSquasi-fuchsian} Let $q\geq 2$ and $\Gamma$ be the fundamental group of a closed $q$-dimensional manifold. A representation $\rho:\Gamma\to\mathsf{PO}(2,q)$ is said to be \textit{AdS-quasi-Fuchsian} if it is faithful, discrete and preserves an acausal topological $(q-1)$-sphere on the boundary of the anti-de Sitter space $\mathbb{A}\textnormal{d}\mathbb{S}^{1,q}$. Recall that $\mathbb{A}\textnormal{d}\mathbb{S}^{1,q}$ is defined as the set of negative lines for the underlying quadratic form $\langle\cdot,\cdot\rangle_{2,q}$, and its boundary is the space $\partial\mathbb{A}\textnormal{d}\mathbb{S}^{1,q}$ of isotropic lines. A subset of $\partial\mathbb{A}\textnormal{d}\mathbb{S}^{1,q}$ is said to be \textit{acausal} if it lifts to a cone in $\mathbb{R}^{2+q}\setminus\{0\}$ in which all $\langle\cdot,\cdot\rangle_{2,q}$-products of non-collinear vectors are negative. The fundamental example of an AdS-quasi-Fuchsian representation is given by \textit{AdS-Fuchsian} representations, i.e. representations of the form $$\Gamma\to\mathsf{PO}(1,q)\to\mathsf{PO}(2,q),$$ \noindent where the first map is the holonomy of a closed real hyperbolic manifold, and the second arrow is the standard embedding stabilizing a negative line in $\mathbb{R}^{2+q}$.
AdS-quasi-Fuchsian representations were introduced in seminal work by Mess \cite{Mess} for $q=2$, and then generalized by Barbot-M\'erigot and Barbot \cite{BM,Barbot} for $q>2$. They are $\{\alpha_1\}$-Anosov representations, where $\alpha_1$ is the simple root in $\mathsf{PO}(2,q)$ corresponding to the stabilizer of an isotropic line (see \cite{BM}). Furthermore, the space of AdS-quasi-Fuchsian representations is a union of connected components of the representation space (see \cite{Barbot}). AdS-quasi-Fuchsian representations were generalized to $\mathbb{H}^{p-1,q}$-\textit{convex-cocompact} representations by Danciger-Gu\'eritaud-Kassel \cite{DGKHpqCC}. \end{ex} \section{Flows associated to Anosov representations}\label{sec: anosov flows and reps} We now recall Sambarino's Reparametrizing Theorem \cite{HyperconvexRepsExponentialGrowth,Quantitative}. This result associates to each $\rho\in\ha$ and each $\varphi\in\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$ a topological flow on a compact space, recording the data of the $\varphi$-length spectrum of $\rho$, and admitting a strong Markov coding. Through the Thermodynamical Formalism, this provides a powerful tool to study the representation $\rho$ and the space $\ha$ of $\mathsf{P}_\Theta$-Anosov representations. Sambarino deals originally with Anosov representations of the fundamental group of a closed negatively curved manifold. In that case he uses the geodesic flow of the manifold (which is Anosov) as a ``reference" flow, and from $\rho$ and $\varphi$ builds a H\"older reparametrization of that flow encoding the periods $L_\rho^\varphi(\gamma)=\varphi(\lambda_\Theta(\rho(\gamma)))$. In the present framework, we are dealing with more general word hyperbolic groups. Nevertheless, his result is known to still hold: one may replace the reference geodesic flow of the manifold by the \textit{Gromov-Mineyev geodesic flow} of $\Gamma$. 
This is a topologically transitive H\"older continuous flow on a compact metric space $\textnormal{U}\Gamma$, well defined up to H\"older orbit equivalence. It was introduced by Gromov \cite{Gro} (see also Mineyev \cite{Min} for details). To define this flow space one considers a proper and cocompact action of $\Gamma$ on $\partial^{(2)}\Gamma\times\mathbb{R}$, extending the natural action of $\Gamma$ on $\partial^{(2)}\Gamma$. The space $\partial^{(2)}\Gamma\times\mathbb{R}$ equipped with this action will be denoted by $\widetilde{\textnormal{U}\Gamma}$, and we refer to this action as the $\Gamma$-\textit{action} on $\partial^{(2)}\Gamma\times\mathbb{R}$. In the sequel we will consider many different actions of $\Gamma$ on $\partial^{(2)}\Gamma\times\mathbb{R}$, depending on various choices, and this justifies this specific terminology and notation. The $\Gamma$-action commutes with the $\mathbb{R}$-action given by $$t:(x,y,s)\mapsto(x,y,s+t).$$\noindent We let $\phi=(\phi_t:\textnormal{U}\Gamma\to\textnormal{U}\Gamma)$ be the quotient \textit{Gromov-Mineyev geodesic} flow. Central to all that follows is a result by Bridgeman-Canary-Labourie-Sambarino \cite[Sections 4 \& 5]{BCLS}, stating that in the present setting $\phi$ is metric Anosov, and one has the following (see also \cite{CLT}). \begin{teo}[Bridgeman-Canary-Labourie-Sambarino {\cite{BCLS}}]\label{thm: gromov flow coding} Let $\Gamma$ be a word hyperbolic group admitting an Anosov representation. Then $\phi$ admits a strong Markov coding. \end{teo} \subsection{The Reparametrizing Theorem}\label{subsec: reparam thm} Thanks to Theorem \ref{thm: gromov flow coding}, Sambarino's Re\-pa\-ra\-me\-trizing Theorem carries over to this more general setting, as summarized in detail in \cite{SambarinoDichotomy}.
More precisely, Sambarino shows that to define a H\"older re\-pa\-ra\-me\-tri\-zation of $\phi$ it suffices to consider a \textit{H\"older cocycle} over $\Gamma$ with non-negative \textit{periods} and finite \textit{entropy}. We do not give full definitions here and refer the reader to \cite[Sections 3.1 and 3.2]{SambarinoDichotomy} for details, but let us now recall how this construction works specifically for the $\varphi$-\textit{Busemann-Iwasawa cocycle} of $\rho$ (also called the $\varphi$-\textit{refraction cocycle} of $\rho$ in \cite[Definition 3.5.1]{SambarinoDichotomy}). Let $\rho\in\ha$ and consider the pullback $\beta^{\rho}_{\Theta}: \Gamma \times \partial \Gamma \to \mathfrak{a}_{\Theta}$ of the Busemann-Iwasawa cocycle of $\mathsf{G}$ through the representation $\rho$, that is, $$\beta^{\rho}_{\Theta}(\gamma,x):=\sigma_{\Theta}(\rho(\gamma), \xi_\rho(x)).$$ \noindent The group $\Gamma$ acts on $\partial^{(2)}\Gamma\times\mathbb{R}$ by $$\gamma\cdot (x,y,s):= (\gamma\cdot x, \gamma\cdot y, s- \varphi\circ\beta^{\rho}_{\Theta}(\gamma,y)) .$$ \noindent The space $\partial^{(2)}\Gamma\times\mathbb{R}$ equipped with this action will be denoted by $\widetilde{\textnormal{U}\Gamma}^{\rho,\varphi}$ and we refer to this action as the $(\rho,\varphi)$-\textit{refraction action} (or simply the $(\rho,\varphi)$-\textit{action}). We let $\textnormal{U}\Gamma^{\rho,\varphi}$ be the quotient space. The $(\rho,\varphi)$-action commutes with the $\mathbb{R}$-action given by $$t:(x,y,s)\mapsto(x,y,s-t).$$ \noindent We let $\phi ^{\rho,\varphi}=(\phi ^{\rho,\varphi}_t:\textnormal{U}\Gamma^{\rho,\varphi}\to\textnormal{U}\Gamma^{\rho,\varphi})$ be the quotient flow, called the $(\rho,\varphi)$-\textit{refraction} flow. As shown by Sambarino, to prove that $\phi^{\rho,\varphi}$ is H\"older orbit equivalent to $\phi$ one needs to analyse the \textit{periods} and \textit{entropy} of the $(\rho,\varphi)$-refraction cocycle. Let us now recall these notions. 
For every $\gamma\in\Gamma_{\textnormal{H}}$ one has $\beta^{\rho}_{\Theta}(\gamma,\gamma_{+})=\lambda_{\Theta}(\rho \gamma)$ (c.f. \cite[Lemma 7.5]{Quantitative}). In particular, the \textit{period} $\varphi(\beta^{\rho}_{\Theta}(\gamma,\gamma_{+}))=L_\rho^\varphi(\gamma)$ of $\gamma\in\Gamma_{\textnormal{H}}$ is positive. In \cite[Section 3.2]{SambarinoDichotomy}, the \textit{entropy} of $\varphi\circ\beta_\Theta^\rho$ is defined by $$\displaystyle\limsup_{t\to\infty}\frac{1}{t}\log\#\{[\gamma]\in[\Gamma_{\textnormal{H}}]:\varphi(\beta^{\rho}_{\Theta}(\gamma,\gamma_{+}))\leq t\}\in[0,\infty].$$ \noindent Note that the definition of this entropy differs from the $\varphi$-entropy of $\rho$ by the fact that here we are only considering conjugacy classes of infinite order elements in $\Gamma$, while for $h_\rho^\varphi$ we also allow conjugacy classes represented by finite order elements. However, the two numbers coincide: a theorem by Bogopolskii-Gerasimov \cite{BogoGera} (see also Brady \cite{Brady}), states that there exists a positive $K_\Gamma$ such that every finite subgroup of $\Gamma$ has at most $K_\Gamma$ elements. In particular, there are only finitely many conjugacy classes of finite order elements in $\Gamma$ and therefore \begin{equation}\label{eq: entropy of rho and entropy of cocycle} h_\rho^\varphi=\displaystyle\limsup_{t\to\infty}\frac{1}{t}\log\#\{[\gamma]\in[\Gamma_{\textnormal{H}}]:\varphi(\beta^{\rho}_{\Theta}(\gamma,\gamma_{+}))\leq t\}\in[0,\infty]. \end{equation} \noindent Moreover, the $\varphi$-entropy is positive and finite. Indeed, let $\alpha\in\Theta$ and consider the function $\mathbb{P}(\mathscr{L}_\rho^\Theta)\to\mathbb{R}_{>0}$ given by $$\mathbb{R} v\mapsto \frac{\varphi(v)}{\chi_\alpha(v)},$$ \noindent where $v\neq 0$ is any vector representing the line $\mathbb{R} v$. 
Since $\mathbb{P}(\mathscr{L}_\rho^\Theta)$ is compact, we find a constant $c>1$ so that $$c^{-1}\leq \frac{L_{\rho}^\varphi(\gamma)}{\chi_\alpha(\lambda(\rho(\gamma)))}\leq c$$ \noindent for all $\gamma\in\Gamma_{\textnormal{H}}$. Applying Equation (\ref{eq: spectral radious and fund weight}) we conclude $$c^{-1}\leq \frac{L_{\rho}^\varphi(\gamma)}{\lambda_1(\Lambda_\alpha(\rho(\gamma)))}\leq c$$ \noindent for all $\gamma\in\Gamma_{\textnormal{H}}$. Thanks to Proposition \ref{prop: anosov and tits}, to show $0<h_\rho^\varphi<\infty$ it suffices to show that the spectral radius entropy of a projective Anosov representation is positive and finite. On the one hand, finiteness follows from an easy geometric argument (see \cite[Lemma 5.1.2]{SambarinoDichotomy}). On the other hand, positivity follows for dynamical reasons: the spectral radius entropy coincides with the topological entropy of the \textit{geodesic flow} of $\rho$, introduced in \cite[Section 4]{BCLS}. Since the latter flow is metric Anosov, we know by Subsection \ref{subsec:coding and metric Anosov} that its topological entropy is positive (see \cite[Theorem 5.1.3]{SambarinoDichotomy} for details). We have thus checked the hypotheses on periods and entropy needed for Sambarino's Reparametrizing Theorem. \begin{teo}[see {\cite[Corollary 5.3.3]{SambarinoDichotomy}}]\label{thm: reparametrizing theorem} Let $\rho\in\ha$ and $\varphi\in\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$. Then there exists an equivariant H\"older homeomorphism $$\tilde{\nu}^{\rho,\varphi}:\widetilde{\textnormal{U}\Gamma}\to\widetilde{\textnormal{U}\Gamma}^{\rho,\varphi},$$ \noindent such that for all $(x,y)\in\partial^{(2)}\Gamma$ there exists an increasing homeomorphism $\tilde{h}_{(x,y)}^{\rho,\varphi}:\mathbb{R}\to\mathbb{R}$ satisfying \begin{equation}\label{eq: rep thm} \tilde{\nu}^{\rho,\varphi}(x,y,s)=(x,y,\tilde{h}_{(x,y)}^{\rho,\varphi}(s)) \end{equation} \noindent for all $s\in\mathbb{R}$.
In particular, the $(\rho,\varphi)$-refraction action is proper and cocompact. Moreover, if we let $\nu^{\rho,\varphi}:\textnormal{U}\Gamma\to\textnormal{U}\Gamma^{\rho,\varphi}$ be the map induced by $\tilde{\nu}^{\rho,\varphi}$, then the flow $$(\nu^{\rho,\varphi})^{-1}\circ\phi^{\rho,\varphi}\circ\nu^{\rho,\varphi}$$ \noindent is a H\"older reparametrization of $\phi$. \end{teo} Define $\mathtt{R}_\varphi:\ha\to\mathbb{P}\textnormal{HR}(\phi)$ by $$\mathtt{R}_\varphi(\rho):=[(\nu^{\rho,\varphi})^{-1}\circ\phi^{\rho,\varphi}\circ\nu^{\rho,\varphi}].$$ \noindent The map $\mathtt{R}_\varphi$ is well defined because the map $\nu^{\rho,\varphi}$, while not canonical, is well defined up to Liv\v{s}ic equivalence. We will use $\mathtt{R}_\varphi$ together with the work in Sections \ref{sec: thermodynamics} and \ref{sec: asymmetric metric and finsler norm for flows} to define and study an asymmetric metric on a suitable quotient of $\ha$ (a quotient is needed because $\mathtt{R}_\varphi$ might not be injective). To this aim we will relate, in Section \ref{sec, generalizedThurston}, the $\varphi$-length spectrum (resp. $\varphi$-entropy) of $\rho$ with the periods of periodic orbits (resp. topological entropy) of $\phi^{\rho,\varphi}$. We conclude this section by discussing the equality: $$h_\rho^\varphi=h_{\textnormal{top}}(\phi^{\rho,\varphi}).$$ \noindent When $\Gamma$ is torsion free this follows directly from \cite[Theorem 3.2.2]{SambarinoDichotomy}; we include in the next subsection a proof allowing for finite order elements in $\Gamma$. \subsection{Strongly primitive elements, periodic orbits and entropy}\label{subsec: strongly primitive} The \textit{axis} of an element $\gamma\in\Gamma_{\textnormal{H}}$ is $A_\gamma:=(\gamma_-,\gamma_+)\times\mathbb{R}\subset\partial^{(2)}\Gamma\times\mathbb{R}$. The element $\gamma$ acts via $(\rho,\varphi)$ on $A_\gamma$ as translation by $-\varphi(\lambda_\Theta(\rho(\gamma)))=-L_\rho^\varphi(\gamma)$.
The axis $A_\gamma$ descends to a periodic orbit $a_\rho^\varphi(\gamma)=a_\rho^\varphi([\gamma])$ of $\phi ^{\rho,\varphi}$: conjugate elements in $\Gamma$ determine the same periodic orbit. We let $\mathcal{O}^{\rho,\varphi}$ be the set of periodic orbits of $\phi^{\rho,\varphi}$. The period $p_{\phi^{\rho,\varphi}}(a_\rho^\varphi(\gamma))$ of $a_\rho^\varphi(\gamma)$ divides the number $L_\rho^\varphi(\gamma)$, and we say that $\gamma$ is \textit{strongly primitive} (w.r.t.\ the pair $(\rho,\varphi)$) if this period is precisely $L_\rho^\varphi(\gamma)$. Denote by $\Gamma_{\textnormal{SP}}\subset\Gamma_{\textnormal{H}}$ the set of strongly primitive elements. A priori, this set depends on the $(\rho,\varphi)$-action. However, we will show in Lemma \ref{lem: strongly primitive for other reparametrization} that this is not the case. \begin{rem}\label{rem: primitive and s primitive} When $\Gamma$ is torsion free, strongly primitive elements coincide with \textit{primitive} elements of $\Gamma$, that is, elements that cannot be written as a power of another element. In that case, there is a one-to-one correspondence between periodic orbits of $\phi^{\rho,\varphi}$ and conjugacy classes of primitive elements in $\Gamma$. However, if $\Gamma$ contains finite order elements this correspondence no longer holds (see e.g. Blayac \cite[Section 3.4]{BlayacThesis} for a detailed discussion).
\end{rem} The discussion above yields a well defined map \begin{equation}\label{eq: projection gh to periodic orbits} [\Gamma_{\textnormal{H}}]\to\mathcal{O}^{\rho,\varphi}\times(\mathbb{Z}_{>0}): [\gamma]\mapsto (a_\rho^\varphi(\gamma),n_\rho^\varphi(\gamma)), \end{equation} \noindent where $n_\rho^\varphi(\gamma)=n_\rho^\varphi([\gamma])$ is determined by the equality $$L_{\rho}^\varphi(\gamma)=n_\rho^\varphi(\gamma)p_{\phi^{\rho,\varphi}}(a_\rho^\varphi(\gamma)).$$ To prove the equality $h_\rho^\varphi=h_{\textnormal{top}}(\phi^{\rho,\varphi})$ we first show the following technical lemma (recall that $K_\Gamma>0$ is the constant given by Bogopolskii-Gerasimov's Theorem \cite{BogoGera}). \begin{lema}\label{lem: fibers of projection gh to periodic} The fibers of the map \textnormal{(\ref{eq: projection gh to periodic orbits})} have at most $K_\Gamma$ elements. \end{lema} \begin{proof} Take $(a,n)\in \mathcal{O}^{\rho,\varphi}\times(\mathbb{Z}_{>0})$ and fix $\gamma_0\in\Gamma_{\textnormal{SP}}$ such that $a_\rho^\varphi(\gamma_0)=a$. Let $H(\gamma_0)$ be the set of elements in $\Gamma_{\textnormal{H}}$ that act trivially on $A_{\gamma_0}$. Since the $(\rho,\varphi)$-action is proper, the subgroup $H(\gamma_0)$ is finite and therefore $\# H(\gamma_0)\leq K_\Gamma$. We conclude by observing that the fiber over $(a,n)$ is contained in $$\left\{[\gamma_0^n\eta]: \eta\in H(\gamma_0)\right\}.$$ \end{proof} \begin{cor}\label{cor: varhpi entropy is entropy of traslation flow} Let $\rho\in\ha$ and $\varphi\in\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$. Then the $\varphi$-entropy of $\rho$ coincides with the topological entropy of the refraction flow $\phi^{\rho,\varphi}$. \end{cor} \begin{proof} The inequality $h_{\textnormal{top}}(\phi^{\rho,\varphi})\leq h_\rho^\varphi$ is easily seen.
To show the reverse inequality, recall from Equation (\ref{eq: entropy of rho and entropy of cocycle}) that $$h_\rho^\varphi=\displaystyle\limsup_{t\to\infty}\frac{1}{t}\log\#\{[\gamma]\in[\Gamma_{\textnormal{H}}]:L_\rho^\varphi(\gamma)\leq t\}.$$ \noindent Lemma \ref{lem: fibers of projection gh to periodic} implies then $$h_\rho^\varphi\leq\displaystyle\limsup_{t\to\infty}\frac{1}{t}\log\#\{(a,n)\in\mathcal{O}^{\rho,\varphi}\times(\mathbb{Z}_{>0}): np_{\phi^{\rho,\varphi}}(a)\leq t\}.$$ \noindent If we let $$k:=\displaystyle\min_{a\in\mathcal{O}^{\rho,\varphi}}p_{\phi^{\rho,\varphi}}(a)>0,$$ \noindent we have $$\#\{(a,n)\in\mathcal{O}^{\rho,\varphi}\times(\mathbb{Z}_{>0}): np_{\phi^{\rho,\varphi}}(a)\leq t\}\leq \frac{t}{k}\times\#\{a\in\mathcal{O}^{\rho,\varphi}: p_{\phi^{\rho,\varphi}}(a)\leq t\}.$$ \noindent Equation (\ref{eq: entropy}) implies the desired inequality. \end{proof} \section{Thurston's metric and Finsler norm for Anosov representations}\label{sec, generalizedThurston} Fix a functional $$\varphi \in \bigcap_{{\rho}\in\ha} \text{int} ((\mathscr{L}_\rho^\Theta)^*).$$ \noindent Recall from Section \ref{sec: anosov flows and reps} that this induces a map $$\mathtt{R}_\varphi:\ha\to\mathbb{P}\textnormal{HR}(\phi),$$ \noindent where $\phi$ is a H\"older parametrization of the Gromov-Mineyev geodesic flow of $\Gamma$. In view of the contents of Section \ref{sec: asymmetric metric and finsler norm for flows} (and thanks to Theorem \ref{thm: gromov flow coding}), it is natural to try to ``pull back" the asymmetric metric on $\mathbb{P}\textnormal{HR}(\phi)$ to $\ha$ under this map. This motivates the following definition. \begin{dfn}\label{def: asymmetric distance anosov reps} Define $d_{\textnormal{Th}}^{\varphi}: \ha \times \ha \to \mathbb{R}\cup\{\infty\}$ by\footnote{When $\gamma\notin\Gamma_{\textnormal{H}}$ one has $L_{\rho}^\varphi(\gamma)=0=L_{\widehat{\rho}}^\varphi(\gamma)$. 
In the above definition it is understood that in that case we set $$\frac{ L_{\widehat{\rho}}^\varphi(\gamma)}{ L_{\rho}^\varphi(\gamma)}=0.$$} $$d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho}):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]}\frac{h_{\widehat{\rho}}^\varphi}{h_{\rho}^\varphi}\frac{ L_{\widehat{\rho}}^\varphi(\gamma) }{L_{\rho}^\varphi(\gamma)}\right).$$ \end{dfn} The main theorem of this section is the following. \begin{teo}\label{thm: dth for anosov} The function $d_{\textnormal{Th}}^\varphi(\cdot,\cdot)$ is real valued, non-negative, and satisfies the triangle inequality. Furthermore $$d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho})=0 \Leftrightarrow h_{\rho}^\varphi L_\rho^\varphi=h_{\widehat{\rho}}^\varphi L_{\widehat{\rho}}^\varphi.$$ \end{teo} We deduce Theorem \ref{thm: dth for anosov} from Theorem \ref{teo: asymmetric distance flows}: in Corollary \ref{cor: dist for reps coincides with distance for flows} we show that for all $\rho,\widehat{\rho}\in\ha$, $$d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho})=d_{\textnormal{Th}}(\mathtt{R}_\varphi(\rho),\mathtt{R}_\varphi(\widehat{\rho}))$$ and in Corollary \ref{cor: kernel or rvarphi} we prove that $\mathtt{R}_\varphi(\rho)=\mathtt{R}_\varphi(\widehat{\rho})$ if and only if $h_{\rho}^\varphi L_\rho^\varphi=h_{\widehat{\rho}}^\varphi L_{\widehat{\rho}}^\varphi$. Both Corollaries \ref{cor: dist for reps coincides with distance for flows} and \ref{cor: kernel or rvarphi} are straightforward when $\Gamma$ is torsion free (see Remark \ref{rem: primitive and s primitive}). We explain the details in Subsection \ref{subsec: proof of dth for representations} allowing for finite order elements in $\Gamma$. In Subsection \ref{subsec: renorm length rigidity} we discuss general conditions that guarantee renormalized length spectrum rigidity. As a consequence, we will have an asymmetric metric defined in interesting subsets of $\ha$ (under some assumptions on $\mathsf{G}$). 
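To see the shape of the quantity in Definition \ref{def: asymmetric distance anosov reps} in action, here is a minimal numerical sketch in Python over a finite toy set of conjugacy classes; the class labels, lengths, and entropies below are entirely hypothetical and not taken from this paper. A finite truncation can only illustrate the triangle inequality (a supremum of products is at most the product of suprema) and the asymmetry of $d_{\textnormal{Th}}^\varphi$; non-negativity genuinely requires the supremum over all of $[\Gamma]$.

```python
import math

def d_th(h1, L1, h2, L2):
    # log sup over the (finite, toy) class set of (h2/h1) * (L2[g]/L1[g])
    return math.log(max((h2 * L2[g]) / (h1 * L1[g]) for g in L1))

# hypothetical length data over three toy conjugacy classes
L_a = {'g1': 1.0, 'g2': 2.0, 'g3': 3.5}
L_b = {'g1': 1.2, 'g2': 1.9, 'g3': 4.0}
L_c = {'g1': 0.9, 'g2': 2.4, 'g3': 3.2}
h_a, h_b, h_c = 1.0, 0.8, 1.1  # hypothetical entropies

d_ab, d_ba = d_th(h_a, L_a, h_b, L_b), d_th(h_b, L_b, h_a, L_a)
d_ac, d_cb = d_th(h_a, L_a, h_c, L_c), d_th(h_c, L_c, h_b, L_b)

assert d_ab <= d_ac + d_cb + 1e-12   # triangle inequality, from sup of products
assert abs(d_ab - d_ba) > 1e-9       # genuinely asymmetric in general
```

On this toy data the value is even negative in one direction, which is consistent: non-negativity in Theorem \ref{thm: dth for anosov} concerns the supremum over the full set of conjugacy classes.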
More examples will be discussed in Sections \ref{sec: hitchin} and \ref{s.other}. In Subsection \ref{subsec: finsler for anosov reps} we use the map $\mathtt{R}_\varphi$ to pull back the Finsler norm of $\mathbb{P}\textnormal{HR}(\phi)$ to $\ha$. \subsection{Proof of Theorem \ref{thm: dth for anosov}}\label{subsec: proof of dth for representations} Let $\rho\in\ha$. Recall from Subsection \ref{subsec: strongly primitive} that $\gamma\in\Gamma_{\textnormal{H}}$ is strongly primitive (w.r.t.\ $(\rho,\varphi)$) if the $(\rho,\varphi)$-action of $\gamma$ on the axis $A_\gamma$ is a translation by the period of the corresponding periodic orbit of $\phi^{\rho,\varphi}$. The following technical lemma implies in particular that this notion is independent of $\rho$ (recall the notation introduced in Equation (\ref{eq: projection gh to periodic orbits})). We note that this holds in the more general setting of H\"older reparametrizations of the Gromov-Mineyev geodesic flow (see also Remark \ref{rem: sp for geodesic flow} below). \begin{lema}\label{lem: strongly primitive for other reparametrization} Let $\rho$ and $\widehat{\rho}$ be in $\ha$. Then for every $\gamma\in\Gamma_{\textnormal{H}}$ one has $$n_\rho^\varphi(\gamma)=n_{\widehat{\rho}}^\varphi(\gamma).$$ \noindent In particular, $\gamma$ is strongly primitive for the $(\rho,\varphi)$-action if and only if it is strongly primitive for the $(\widehat{\rho},\varphi)$-action. \end{lema} \begin{proof} To ease notation we let $n:=n_\rho^\varphi(\gamma)$ and $\widehat{n}:=n_{\widehat{\rho}}^\varphi(\gamma)$. Suppose by contradiction that $n\neq \widehat{n}$, say $n<\widehat{n}$. Let $a=a_\rho^\varphi(\gamma)$ (resp. $\widehat{a}=a_{\widehat{\rho}}^\varphi(\gamma)$) be the periodic orbit of $\phi^{\rho,\varphi}$ (resp. $\phi^{\widehat{\rho},\varphi}$) associated to $[\gamma]$. Fix a strongly primitive $\gamma_0$ (resp. $\widehat{\gamma}_0$) representing $a$ (resp. $\widehat{a}$) for the $(\rho,\varphi)$-action (resp.
$(\widehat{\rho},\varphi)$-action). By definition of $n$ and $\widehat{n}$ we have \begin{equation}\label{eq: in lemma n equals widehat n} L_\rho^\varphi(\gamma)=nL_\rho^\varphi(\gamma_0) \textnormal{ and } L_{\widehat{\rho}}^\varphi(\gamma)=\widehat{n}L_{\widehat{\rho}}^\varphi(\widehat{\gamma}_0). \end{equation} \noindent We may assume furthermore that $(\gamma_0)_\pm=(\widehat{\gamma}_0)_\pm$. On the other hand, by Theorem \ref{thm: reparametrizing theorem} there exists an equivariant H\"older homeomorphism $$\nu:\widetilde{\textnormal{U}\Gamma}^{\rho,\varphi}\to\widetilde{\textnormal{U}\Gamma}^{\widehat{\rho},\varphi},$$ \noindent such that for all $(x,y)\in\partial^{(2)}\Gamma$ there exists an increasing homeomorphism $h_{(x,y)}:\mathbb{R}\to\mathbb{R}$ satisfying $$\nu(x,y,s)=(x,y,h_{(x,y)}(s)).$$ \noindent Hence, for all $\eta\in\Gamma$ and all $(x,y,s)\in\widetilde{\textnormal{U}\Gamma}^{\rho,\varphi}$ one has $$h_{(\eta\cdot x,\eta\cdot y)}(s-\varphi\circ\beta_\Theta^\rho(\eta,y))=h_{( x, y)}(s)-\varphi\circ\beta_\Theta^{\widehat{\rho}}(\eta,y).$$ \noindent In particular, Equation (\ref{eq: in lemma n equals widehat n}) gives $$h_{((\gamma_0)_-,(\gamma_0)_+)}(s-nL_{\rho}^\varphi(\gamma_0))=h_{((\gamma_0)_-,(\gamma_0)_+)}(s-L_{\rho}^\varphi(\gamma))=h_{((\gamma_0)_-,(\gamma_0)_+)}(s)-L_{\widehat{\rho}}^\varphi(\gamma),$$ \noindent and therefore $$h_{((\gamma_0)_-,(\gamma_0)_+)}(s-nL_{\rho}^\varphi(\gamma_0))=h_{((\gamma_0)_-,(\gamma_0)_+)}(s)-\widehat{n}L_{\widehat{\rho}}^\varphi(\widehat{\gamma}_0).$$ \noindent Hence $$h_{((\gamma_0)_-,(\gamma_0)_+)}(s-nL_{\rho}^\varphi(\gamma_0))=h_{((\gamma_0)_-,(\gamma_0)_+)}(s)-L_{\widehat{\rho}}^\varphi(\widehat{\gamma}_0^{\widehat{n}})=h_{((\gamma_0)_-,(\gamma_0)_+)}(s-L_{\rho}^\varphi(\widehat{\gamma}_0^{\widehat{n}})).$$ \noindent We then conclude $$h_{((\gamma_0)_-,(\gamma_0)_+)}(s-nL_{\rho}^\varphi(\gamma_0))=h_{((\gamma_0)_-,(\gamma_0)_+)}(s-\widehat{n}L_{\rho}^\varphi(\widehat{\gamma}_0)).$$ \noindent This 
implies $$nL_{\rho}^\varphi(\gamma_0)=\widehat{n}L_{\rho}^\varphi(\widehat{\gamma}_0)>nL_{\rho}^\varphi(\widehat{\gamma}_0).$$ \noindent This is a contradiction because $\gamma_0$ was assumed to be strongly primitive for the $(\rho,\varphi)$-action. \end{proof} \begin{cor}\label{cor: dist for reps coincides with distance for flows} For every $\rho$ and $\widehat{\rho}$ in $\ha$ one has $$d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho})=d_{\textnormal{Th}}(\mathtt{R}_\varphi(\rho),\mathtt{R}_\varphi(\widehat{\rho})).$$ \end{cor} \begin{proof} By Corollary \ref{cor: varhpi entropy is entropy of traslation flow} we have $$d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]}\frac{h_{\textnormal{top}}(\phi^{\widehat{\rho},\varphi})}{h_{\textnormal{top}}(\phi^{\rho,\varphi})}\frac{ L_{\widehat{\rho}}^\varphi(\gamma) }{L_{\rho}^\varphi(\gamma)}\right).$$ \noindent Equation (\ref{eq: projection gh to periodic orbits}) gives then $$d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]}\frac{h_{\textnormal{top}}(\phi^{\widehat{\rho},\varphi})}{h_{\textnormal{top}}(\phi^{\rho,\varphi})}\frac{ n_{\widehat{\rho}}^\varphi(\gamma)}{n_{\rho}^\varphi(\gamma)}\frac{p_{\phi^{\widehat{\rho},\varphi}}(a_{\widehat{\rho}}^\varphi(\gamma))}{p_{\phi^{\rho,\varphi}}(a_{\rho}^\varphi(\gamma))}\right).$$ \noindent By Lemma \ref{lem: strongly primitive for other reparametrization} we have $$d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]}\frac{h_{\textnormal{top}}(\phi^{\widehat{\rho},\varphi})}{h_{\textnormal{top}}(\phi^{\rho,\varphi})}\frac{ p_{\phi^{\widehat{\rho},\varphi}}(a_{\widehat{\rho}}^\varphi(\gamma))}{p_{\phi^{\rho,\varphi}}(a_{\rho}^\varphi(\gamma))}\right).$$ \noindent This finishes the proof. 
\end{proof} \begin{rem}\label{rem: AvoidDomination} There are geometric settings in which the renormalization by entropy in the definition of the asymmetric metric is essential (see also Section \ref{subsec: inter and renormalized intersection}). For instance, Tholozan \cite[Theorem B]{ThoEntropy} shows that there exist pairs $\rho$ and $j$ in $\textnormal{Hit}_3(S)$ for which there is a $c>1$ so that \begin{equation}\label{eq: domination} L^\mathrm{H}_{\rho}(\gamma) \geq c L^\mathrm{H}_{j}(\gamma) \end{equation} \noindent for all $\gamma\in \pi_1(S)$ (recall the notation introduced in Example \ref{list, Lengths}). Hence $$\log\left(\displaystyle\sup_{[\gamma]\in[\pi_1(S)]}\frac{L^\mathrm{H}_j(\gamma)}{L_\rho^\mathrm{H}(\gamma)}\right)\leq\log\left(\frac{1}{c}\right)<0.$$ On the other hand, some length functions on some spaces of Anosov representations have constant entropy (cf.\ Subsection \ref{subsec: examples anosov reps}). In these situations, renormalizing by entropy is not needed. \end{rem} We now compute the set of points which are identified under the map $\mathtt{R}_\varphi$, finishing the proof of Theorem \ref{thm: dth for anosov}. \begin{cor}\label{cor: kernel or rvarphi} Let $\rho$ and $\widehat{\rho}$ be two points in $\ha$. Then $$\mathtt{R}_\varphi(\rho)=\mathtt{R}_\varphi(\widehat{\rho})\Leftrightarrow h_{\rho}^\varphi L_\rho^\varphi=h_{\widehat{\rho}}^\varphi L_{\widehat{\rho}}^\varphi.$$ \end{cor} \begin{proof} By definition of $\mathbb{P}\textnormal{HR}(\phi)$ and Corollary \ref{cor: varhpi entropy is entropy of traslation flow} we have $$\mathtt{R}_\varphi(\rho)=\mathtt{R}_\varphi(\widehat{\rho})\Leftrightarrow h_\rho^\varphi p_{\phi^{\rho,\varphi}}(a_\rho^\varphi(\gamma))=h_{\widehat{\rho}}^\varphi p_{\phi^{\widehat{\rho},\varphi}}(a_{\widehat{\rho}}^\varphi(\gamma))$$ \noindent for all $\gamma\in\Gamma_{\textnormal{H}}$.
Thanks to Lemma \ref{lem: strongly primitive for other reparametrization} this is equivalent to $$h_\rho^\varphi n_{\rho}^\varphi(\gamma) p_{\phi^{\rho,\varphi}}(a_\rho^\varphi(\gamma))=h_{\widehat{\rho}}^\varphi n_{\widehat{\rho}}^\varphi(\gamma) p_{\phi^{\widehat{\rho},\varphi}}(a_{\widehat{\rho}}^\varphi(\gamma))$$ \noindent for all $\gamma\in\Gamma_{\textnormal{H}}$. Since for all $\gamma\in\Gamma_{\textnormal{H}}$ we have $$ n_{\rho}^\varphi(\gamma) p_{\phi^{\rho,\varphi}}(a_\rho^\varphi(\gamma))=L_\rho^\varphi(\gamma) \textnormal{ and } n_{\widehat{\rho}}^\varphi(\gamma) p_{\phi^{\widehat{\rho},\varphi}}(a_{\widehat{\rho}}^\varphi(\gamma))=L_{\widehat{\rho}}^\varphi(\gamma),$$ \noindent the proof is finished. \end{proof} To finish this subsection we record the following technical remark for future use. \begin{rem}\label{rem: sp for geodesic flow} One may define the notion of strongly primitive elements for the action $\Gamma \curvearrowright \widetilde{\textnormal{U}\Gamma}$, in a way analogous to the definition for the action $\Gamma \curvearrowright \widetilde{\textnormal{U}\Gamma}^{\rho,\varphi}$. As in Lemma \ref{lem: strongly primitive for other reparametrization}, one shows that $\gamma$ is strongly primitive for $\Gamma \curvearrowright \widetilde{\textnormal{U}\Gamma}$ if and only if it is strongly primitive for the $(\rho,\varphi)$-action, for some (any) $\rho\in\ha$. On the other hand, if we let $\mathcal{O}$ be the set of periodic orbits of $\phi$, we may take for each $a\in\mathcal{O}$ a strongly primitive representative $\gamma_a\in\Gamma_{\textnormal{SP}}$. We see that $$a\mapsto [A_{\gamma_a}]$$ \noindent defines a one-to-one correspondence between $\mathcal{O}$ and $\mathcal{O}^{\rho,\varphi}$ for all $\rho\in\ha$, where $[A_{\gamma_a}]$ is the image of the axis $A_{\gamma_a}$ under the quotient map $\widetilde{\textnormal{U}\Gamma}^{\rho,\varphi}\to\textnormal{U}\Gamma^{\rho,\varphi} $.
A set $\{\gamma_a\}_{a\in\mathcal{O}}$ of strongly primitive elements representing each periodic orbit will be fixed from now on. \end{rem} \subsection{Renormalized length spectrum rigidity}\label{subsec: renorm length rigidity} Recall that $\mathsf{G}$ is a connected semisimple real algebraic group of non-compact type. In this subsection we discuss necessary conditions that two $\Theta$-Anosov representations with the same renormalized length spectra must satisfy. For a Lie group $\mathsf{G}_1$ we denote by $(\mathsf{G}_1)_0$ the connected component, in the Hausdorff topology, containing the identity. If $\sigma:\mathsf{G}_1\to\mathsf{G}_2$ is a Lie group isomorphism, we denote, with a slight abuse of notation, by $\sigma:\mathfrak a_{\mathsf{G}_1}^+\to \mathfrak a_{\mathsf{G}_2}^+$ the induced linear isomorphism between Weyl chambers. Furthermore, if $\mathsf{G}_1<\mathsf{G}$ is a Lie group inclusion, we denote by $\pi_{\mathsf{G}_1}:\mathfrak a_{\mathsf{G}_1}^+\to \mathfrak a_{\mathsf{G}}^+$ the induced piecewise linear map. We will need the following fairly general classical rigidity result, which is an application of Benoist \cite[Theorem 1]{Benoist_AsymtoticLinearGroups}. See for instance \cite[Corollary 11.6]{BCLS}, Burger \cite{BurgerManhattan} and Dal'bo-Kim \cite{Criterion_Zariki}. \begin{teo}\label{thm:rigidity} Let $\rho$ and $\widehat{\rho}$ be two $\Theta$-Anosov representations into $\mathsf{G}$. Denote by $\mathsf{G}_{\rho}$ (resp. $\mathsf{G}_{\widehat{\rho}}$) the Zariski closure of $\rho(\Gamma)$ (resp. $\widehat{\rho}(\Gamma)$). Assume that $\mathsf{G}_{\rho}$ and $\mathsf{G}_{\widehat{\rho}}$ are simple, real algebraic and center-free. Assume furthermore $\rho(\Gamma)\subset (\mathsf{G}_\rho)_0$ and $\widehat{\rho}(\Gamma)\subset (\mathsf{G}_{\widehat{\rho}})_0$. 
Then, if the equality $h^\varphi_{\rho}L^\varphi_{\rho}=h^\varphi_{{\widehat{\rho}}}L^\varphi_{\widehat{\rho}}$ holds, there exists an isomorphism $\sigma:(\mathsf{G}_{\rho})_0\to(\mathsf{G}_{\widehat{\rho}})_0$ such that $\sigma\circ \rho=\widehat{\rho}$. Furthermore, the equality $\varphi\circ \pi_{\mathsf{G}_{\widehat{\rho}}}\circ \sigma=\varphi\circ \pi_{\mathsf{G}_\rho}$ holds. \end{teo} Denote by $\haz\subset\ha$ the subset consisting of Zariski dense representations. \begin{cor}\label{cor: distance in zariski dense components} Assume that $\mathsf{G}$ is simple, center-free, and for every non-inner automorphism $\sigma$ of $\mathsf{G}$ one has $\varphi\circ \sigma \neq \varphi$. Then $d_{\textnormal{Th}}^\varphi(\cdot,\cdot)$ defines a (possibly asymmetric) metric on $\haz$. \end{cor} \begin{rem} The group $\mathsf{G}$ needs to be center-free in Theorem \ref{thm:rigidity} and Corollary \ref{cor: distance in zariski dense components}: the Jordan and Cartan projections of $\mathsf{G}$ factor through the adjoint form of $\mathsf{G}$, thus any two representations differing by a central character will have the same renormalized length spectrum, and hence distance zero. \end{rem} \subsection{Finsler norm for Anosov representations}\label{subsec: finsler for anosov reps} Bridgeman-Canary-Labourie-Sambarino \cite{BCLS,BCLSSIMPLEROOTS} used the map $\mathtt{R}_\varphi$ to pull back the pressure norm on $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$ to produce a pressure metric on $\ha$ (for some choices of $\varphi$). We now imitate this procedure working with the Finsler norm defined in Subsection \ref{subsection, FinslerNorm}. A family of representations $\{\rho_z:\Gamma\to\mathsf{G}\}_{z\in D}$ parametrized by a real analytic disk $D$ is \textit{real analytic} if for all $\gamma\in\Gamma$ the map $z\mapsto\rho_z(\gamma)$ is real analytic.
We fix a real analytic neighbourhood of $\rho\in\ha$ and a real analytic family $\{\rho_z\}_{z\in D}\subset \ha$, parametrized by some real analytic disk $D$ around $0$, so that $\rho_0=\rho$ and $\cup_{z\in D}\rho_z$ coincides with this neighbourhood. By abuse of notation we will sometimes identify the neighbourhood with $D$ itself. \begin{dfn}\label{dfn: finsler reps} Given a tangent vector $v\in T_\rho\ha$ we set $$\Vert v\Vert_{\textnormal{Th}}^\varphi:= \displaystyle\sup_{[\gamma]\in[\Gamma_{\textnormal{H}}]} \frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)L_\rho^\varphi(\gamma)+h_\rho^\varphi \mathrm{d}_\rho(L_\cdot^\varphi(\gamma))(v)}{h_\rho^\varphi L_\rho^\varphi(\gamma)}, $$ \noindent where $\mathrm{d}_{\rho}(h_{\cdot}^\varphi)$ (resp. $\mathrm{d}_{\rho}(L_{\cdot}^\varphi(\gamma))$) is the derivative of $\widehat{\rho}\mapsto h_{\widehat{\rho}}^\varphi$ (resp. $\widehat{\rho}\mapsto L_{\widehat{\rho}}^\varphi(\gamma)$) at $\rho$. In particular, if $\widehat{\rho}\mapsto h_{\widehat{\rho}}^\varphi$ is constant one has \begin{equation}\label{eq: finsler reps constant entropy} \Vert v\Vert_{\textnormal{Th}}^\varphi= \displaystyle\sup_{[\gamma]\in[\Gamma_{\textnormal{H}}]} \frac{ \mathrm{d}_\rho(L_\cdot^\varphi(\gamma))(v)}{ L_\rho^\varphi(\gamma)}.\end{equation} \end{dfn} \begin{rem}\label{rem: def finsler anosov} \begin{enumerate} \item Recall that by \cite[Section 8]{BCLS}, entropy varies in an analytic way over $\ha$. In particular, $h_\cdot^\varphi$ is differentiable. \item Equation (\ref{eq: finsler reps constant entropy}) generalizes Thurston's Finsler norm on Teichm\"uller space \cite[p.20]{ThurstonStretch}. \end{enumerate} \end{rem} We want conditions guaranteeing that $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ defines a Finsler norm on $T_\rho\ha$; a priori it is not even clear that $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ is real valued and non-negative. 
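Note that, by the product and chain rules, the quantity inside the supremum in Definition \ref{dfn: finsler reps} is precisely the logarithmic derivative at $\rho$ of the renormalized length of $\gamma$: $$\frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)L_\rho^\varphi(\gamma)+h_\rho^\varphi \mathrm{d}_\rho(L_\cdot^\varphi(\gamma))(v)}{h_\rho^\varphi L_\rho^\varphi(\gamma)}=\mathrm{d}_\rho\big(\log\big(h_{\cdot}^\varphi L_{\cdot}^\varphi(\gamma)\big)\big)(v).$$ \noindent This is consistent with the definition of $d_{\textnormal{Th}}^\varphi$ as the logarithm of a supremum of ratios of renormalized lengths.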
To link $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ and the Finsler norm of Subsection \ref{subsection, FinslerNorm}, we need the following proposition. We fixed a set of strongly primitive elements $\{\gamma_a\}$ representing each periodic orbit $a\in\mathcal{O}$ in Remark \ref{rem: sp for geodesic flow}. \begin{prop}[{\cite[Proposition 6.2]{BCLS}, {\cite[Proposition 6.1]{BCLSSIMPLEROOTS}}}]\label{prop: Rvarhpi analytic} Let $\{\rho_z\}_{z\in D}$ be a real analytic family of $\Theta$-Anosov representations. Then up to restricting $D$ to a smaller disk around $0$, there exists $\upsilon>0$ and a real analytic family $\{\widetilde{g}_z:\textnormal{U}\Gamma\to \mathbb{R}_{>0}\}_{z\in D}\subset\mathcal{H}^\upsilon(\textnormal{U}\Gamma)$ so that for all $z\in D$, all $a\in\mathcal{O}$ and all $x\in a$ one has $$\displaystyle\int_0^{p_\phi(a)}\widetilde{g}_z(\phi_s(x))\mathrm{d} s=L_{\rho_z}^\varphi(\gamma_a).$$ \noindent In particular, the map $D\to \mathbb{P}\textnormal{HR}^\upsilon(\phi)$ given by $z\mapsto \mathtt{R}_\varphi(\rho_z)=\left[\phi^{\widetilde{g}_z}\right]$ is real analytic. \end{prop} \begin{proof} The argument follows \cite[Proposition 6.1]{BCLSSIMPLEROOTS}. Since $\{\omega_\alpha\}_{\alpha\in\Theta}$ span $\mathfrak{a}_\Theta^*$, there exist real numbers $a_\alpha$ so that $\varphi=\sum_{\alpha\in\Theta}a_\alpha\omega_\alpha$. \cite[Proposition 6.2]{BCLS} gives the result for projective Anosov representations and the spectral radius length function, thus the proof of \cite[Proposition 6.1]{BCLSSIMPLEROOTS} applies (c.f. Proposition \ref{prop: anosov and tits} and Equation (\ref{eq: spectral radious and fund weight})). \end{proof} Fix a real analytic family $\{\widetilde{g}_z\}$ as in Proposition \ref{prop: Rvarhpi analytic}. By \cite[Proposition 3.12]{BCLS} the function $z\mapsto h_{\phi^{\widetilde{g}_z}}$ is real analytic. 
By Corollary \ref{cor: varhpi entropy is entropy of traslation flow} we get that $z\mapsto h_{\rho_z}^\varphi$ is real analytic, as claimed in Remark \ref{rem: def finsler anosov}. Proposition \ref{prop: Rvarhpi analytic} bridges between $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ and the Finsler norm on $\mathbb{P}\textnormal{HR}^\upsilon(\phi)$, as we now explain. First, observe that in Definition \ref{dfn: finsler reps} it suffices to consider only strongly primitive elements when taking the sup, that is: $$\Vert v\Vert_{\textnormal{Th}}^\varphi= \displaystyle\sup_{[\gamma]\in[\Gamma_{\textnormal{SP}}]} \frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)L_\rho^\varphi(\gamma)+h_\rho^\varphi \mathrm{d}_\rho(L_\cdot^\varphi(\gamma))(v)}{h_\rho^\varphi L_\rho^\varphi(\gamma)}.$$ \noindent Indeed the function $\widehat{\rho}\mapsto n_{\widehat{\rho}}^\varphi(\gamma)$ is constant for all $\gamma\in\Gamma_{\textnormal{H}}$ (Lemma \ref{lem: strongly primitive for other reparametrization}), and Remark \ref{rem: sp for geodesic flow} gives \begin{equation}\label{eq: finsler anosov rep with strongly primitive} \Vert v\Vert_{\textnormal{Th}}^\varphi= \displaystyle\sup_{a\in\mathcal{O}} \frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)L_\rho^\varphi(\gamma_a)+h_\rho^\varphi \mathrm{d}_\rho(L_\cdot^\varphi(\gamma_a))(v)}{h_\rho^\varphi L_\rho^\varphi(\gamma_a)}. \end{equation} \noindent Recalling the notations from Subsection \ref{subsection, FinslerNorm} we have the following. \begin{lema}\label{lem: finsler reps and flows} Let $\{\rho_z\}_{z\in D}\subset \ha$ be a real analytic family parametrizing an open neighbourhood around $\rho=\rho_0$. Fix an analytic path $z:(-1,1)\to D$ so that $z(0)=0$ and set $\rho_s:=\rho_{z(s)}$ and $ v:=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}\rho_s$. Let also $h_s:=h_{\rho_{s}}^\varphi$ and $g_s:=h_s\widetilde{g}_{z(s)}$. 
Then $$\Vert v\Vert_{\textnormal{Th}}^\varphi=\Vert [\dot{g}_0]\Vert_{\textnormal{Th}}.$$ \noindent \end{lema} In the above statement, by construction, the Liv\v{s}ic cohomology class $[\dot{g}_0]=[\dot{g}_0]_\phi$ belongs to the tangent space $T_{[\phi^{g_0}]}\mathbb{P}\textnormal{HR}^\upsilon(\phi)$. \begin{proof}[Proof of Lemma \ref{lem: finsler reps and flows}] Combining Equations (\ref{eq: finsler anosov rep with strongly primitive}) and (\ref{eq: integral of reparametrizing over delta in periodic orbit}), and Proposition \ref{prop: Rvarhpi analytic} we have $$\Vert v\Vert_{\textnormal{Th}}^\varphi=\displaystyle\sup_{a\in\mathcal{O}} \left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0} \frac{h_sL_{\rho_s}^\varphi(\gamma_a) }{h_\rho^\varphi L_\rho^\varphi(\gamma_a)}=\displaystyle\sup_{a\in\mathcal{O}} \left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0} \frac{h_s}{h_0}\frac{ \int \widetilde{g_s}\mathrm{d} \delta_\phi(a) }{ \int \widetilde{g_0}\mathrm{d} \delta_\phi(a)}.$$ \noindent Hence $$\Vert v\Vert_{\textnormal{Th}}^\varphi=\displaystyle\sup_{a\in\mathcal{O}} \left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0} \frac{ \int g_s\mathrm{d} \delta_\phi(a) }{ \int g_0\mathrm{d} \delta_\phi(a)}=\displaystyle\sup_{a\in\mathcal{O}} \frac{ \int \dot{g}_0\mathrm{d} \delta_\phi(a) }{\int g_0\mathrm{d} \delta_\phi(a)}.$$ \noindent By Theorem \ref{teo: periodic orbits dense} we get $$\Vert v\Vert_{\textnormal{Th}}^\varphi=\displaystyle\sup_{m\in\mathscr{P}(\phi)} \frac{ \int \dot{g}_0\mathrm{d} m }{\int g_0\mathrm{d} m}.$$ \noindent This finishes the proof. \end{proof} From Propositions \ref{prop: FinslerNorm} and \ref{prop: Rvarhpi analytic}, and Corollary \ref{cor: dist for reps coincides with distance for flows} we obtain the following. \begin{cor}\label{cor: link finsler and asymm for reps} Keep the notations from Lemma \ref{lem: finsler reps and flows}. 
Then $s\mapsto d_{\textnormal{Th}}^\varphi(\rho,\rho_s)$ is differentiable at $s=0$ and $$\Vert v\Vert_{\textnormal{Th}}^\varphi=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}d_{\textnormal{Th}}^\varphi(\rho,\rho_s).$$ \end{cor} We now turn to the study of conditions guaranteeing that $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ defines a Finsler norm. \begin{cor}\label{cor: finsler for reps} Let $\rho\in\ha$ be a point admitting an analytic neighbourhood in $\ha$. Then the function $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi:T_\rho\ha\to\mathbb{R}\cup\{\pm\infty\}$ is real valued and non-negative. Furthermore, it is $(\mathbb{R}_{>0})$-homogeneous, satisfies the triangle inequality, and $\Vert v\Vert_{\textnormal{Th}}^\varphi=0$ if and only if \begin{equation}\label{eq: cor finsler resp} \mathrm{d}_\rho (L_\cdot^\varphi(\gamma))(v)=-\frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)}{h_\rho^\varphi}L_\rho^\varphi(\gamma) \end{equation} \noindent for all $\gamma\in\Gamma_{\textnormal{H}}$. In particular, if the function $\rho\mapsto h_{\rho}^\varphi$ is constant, then $$\Vert v\Vert_{\textnormal{Th}}^\varphi=0\Leftrightarrow \mathrm{d}_\rho (L_\cdot^\varphi(\gamma))(v)=0$$ \noindent for all $\gamma\in\Gamma_{\textnormal{H}}$. \end{cor} \begin{proof} By Lemma \ref{lem: norm is non degenerate} and Lemma \ref{lem: finsler reps and flows}, the function $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ is real valued, non-negative, $(\mathbb{R}_{>0})$-homogeneous and satisfies the triangle inequality. Furthermore, keeping the notation from Lemma \ref{lem: finsler reps and flows}, if $\Vert v\Vert_{\textnormal{Th}}^\varphi=0$ then $ \dot{g}_0\sim_\phi 0$, and this condition is equivalent to $$0=\displaystyle\int_0^{p_\phi(a)} \dot{g}_0(\phi_t(x))\mathrm{d} t=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0}\displaystyle\int_0^{p_\phi(a)} g_s(\phi_t(x))\mathrm{d} t$$ \noindent for all $a\in\mathcal{O}$ and $x\in a$.
Hence $$0=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0} h_s\displaystyle\int_0^{p_\phi(a)} \widetilde{g}_s(\phi_t(x))\mathrm{d} t=\left.\frac{\mathrm{d}}{\mathrm{d} s}\right\vert_{s=0} h_s L_{\rho_s}^\varphi(\gamma_a).$$ \noindent Thus $$\mathrm{d}_\rho (L_\cdot^\varphi(\gamma_a))(v)=-\frac{\mathrm{d}_{\rho}(h_{\cdot}^\varphi)(v)}{h_\rho^\varphi}L_\rho^\varphi(\gamma_a)$$ \noindent for all $a\in\mathcal{O}$. Now, by Lemma \ref{lem: strongly primitive for other reparametrization}, for every $\gamma\in\Gamma_{\textnormal{H}}$ there are some $n\geq 1$ and $a\in\mathcal{O}$ so that $L_{\rho}^\varphi(\gamma)=nL_{\rho}^\varphi(\gamma_a)$ for all $\rho\in\ha$. This finishes the proof. \end{proof} In view of Corollary \ref{cor: finsler for reps}, to show that $\Vert\cdot\Vert_{\textnormal{Th}}^\varphi$ is a Finsler norm, one needs to guarantee that condition (\ref{eq: cor finsler resp}) implies $v=0$. These types of questions have been addressed by Bridgeman-Canary-Labourie-Sambarino \cite{BCLS,BCLSSIMPLEROOTS} in some situations. Rather than discussing these results here, we will recall them in the next sections, when needed. \section{Hitchin representations}\label{sec: hitchin} In this section we focus on Hitchin representations. The Zariski closures of $\mathsf{PSL}(d,\mathbb{R})$-Hitchin representations have been classified by Guichard. Hence, the results of the previous section apply nicely in this setting, giving global rigidity results and leading to asymmetric distances in the whole component. This is explained in detail in Subsection \ref{subsec: rigidity hitchin}, where we also treat the case of $\mathsf{PSO}_0(p,p)$, the remaining classical case not covered by Guichard's classification, using recent results by Sambarino \cite{sambarino2020infinitesimal}.
In Subsection \ref{subsec: finsler for hitchin} we discuss Finsler norms associated to some special length functionals in the $\mathsf{PSL}(d,\mathbb{R})$-Hitchin component, showing that they are non-degenerate (this will be a consequence of Corollary \ref{cor: finsler for reps} and results in \cite{BCLS,BCLSSIMPLEROOTS}). Throughout this section we let $S$ be a closed oriented surface of genus $g\geq2$, and denote by $\Gamma=\pi_1(S)$ its fundamental group. We also let $\mathsf{G}$ be an adjoint, connected, simple real-split Lie group. Apart from exceptional cases, $\mathsf{G}$ is one of the following: $$\mathsf{PSL}(d,\mathbb{R}),\mathsf{PSp}(2r,\mathbb{R}), \mathsf{SO}_0(p,p+1), \textnormal{ or } \mathsf{PSO}_0(q,q),$$ \noindent for $q>2$. Hitchin representations are $\Pi$-Anosov (c.f. Example \ref{ex: hitchin and positive}). We denote by $\textnormal{Hit}(S,\mathsf{G})$ the Hitchin component into $\mathsf{G}$; when $\mathsf{G}=\mathsf{PSL}(d,\mathbb{R})$ we also use the special notation $\textnormal{Hit}_d(S)$. \subsection{Length spectrum rigidity}\label{subsec: rigidity hitchin} For $\rho\in\textnormal{Hit}(S,\mathsf{G})$ denote $\mathscr{L}_\rho^*:=(\mathscr{L}^{\Pi}_\rho)^*$ and consider $\varphi \in \bigcap_{{\rho}\in \textnormal{Hit}(S,\mathsf{G})} \text{int} (\mathscr{L}_\rho^*) \subset \mathfrak{a}_{\Pi}^*=\mathfrak{a}^*$. The main goal of this section is to prove the following. \begin{teo}\label{Thm:rigitity length hitchin} Let $\mathsf{G}$ be an adjoint, simple, real-split Lie group of classical type. In the case $\mathsf{G}=\mathsf{PSO}_0(p,p)$, assume furthermore $p\neq 4$. Let $\varphi \in \bigcap_{{\rho}\in \textnormal{Hit}(S,\mathsf{G})} \textnormal{int} (\mathscr{L}_\rho^*)$ be so that $\varphi\circ\sigma\neq\varphi$ for every non-inner automorphism of $\mathsf{G}$.
If $\rho, \widehat{\rho} \in \textnormal{Hit}(S,\mathsf{G})$ satisfy $h^\varphi_{\rho}L^{\varphi}_{\rho}=h^\varphi_{\widehat{\rho}}L^{\varphi}_{\widehat{\rho}}$, then $\rho=\widehat{\rho}$. \end{teo} Before going into the proof of Theorem \ref{Thm:rigitity length hitchin} we make a few remarks and establish the main corollaries of interest. \begin{rem} \begin{itemize} \item When $\mathsf{G}=\mathsf{PSL}(d,\mathbb{R})$, Bridgeman-Canary-Labourie-Sambarino \cite[Corollary 11.8]{BCLS} proved Theorem \ref{Thm:rigitity length hitchin} for the spectral radius length function $\varphi=\lambda_1$. The proof of Theorem \ref{Thm:rigitity length hitchin} follows the same approach. \item We aim to define a simple root asymmetric metric on $\textnormal{Hit}(S,\mathsf{G})$ (Corollary \ref{cor: asymm for hitchin roots} below). As every simple root of $\mathsf{PSO}_0(4,4)$ is fixed by a non-inner automorphism, the function $$d_{\textnormal{Th}}^{\alpha}:\textnormal{Hit}(S,\mathsf{PSO}_0(4,4))\times\textnormal{Hit}(S,\mathsf{PSO}_0(4,4))\to\mathbb{R}$$ \noindent does not separate points for any simple root $\alpha$. This is the main reason why we exclude the case $\mathsf{G}=\mathsf{PSO}_0(4,4)$ in the statement of Theorem \ref{Thm:rigitity length hitchin}. \end{itemize} \end{rem} We have the following two consequences of Theorem \ref{Thm:rigitity length hitchin}. \begin{cor} \label{cor: asymm for hitchin roots} Let $\mathsf{G}$ be an adjoint, simple, real-split Lie group of classical type. Let $\alpha$ be any simple root of $\sf G$, with the exception of the roots listed in Table \ref{table:1}.
Then the function $d_{\textnormal{Th}}^{\alpha}: \textnormal{Hit}(S,\mathsf{G}) \times \textnormal{Hit}(S,\mathsf{G}) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{\alpha}(\rho,\widehat{\rho}):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{ L_{\widehat{\rho}}^{\alpha}(\gamma) }{L_{\rho}^{\alpha}(\gamma)}\right)$$ \noindent defines an asymmetric distance on $\textnormal{Hit}(S,\sf G)$. \end{cor} \begin{proof} By Potrie-Sambarino \cite[Theorem B]{PS} and P.-Sambarino-W. \cite[Theorem 9.9]{PSW1} we have $h^{\alpha}_{\rho}=1$ for all $\rho\in \textnormal{Hit}(S,\mathsf{G})$. Since roots as in the statement are not fixed by non-inner automorphisms of $\mathsf{G}$, Theorems \ref{thm: dth for anosov} and \ref{Thm:rigitity length hitchin} imply that the function $d_{\textnormal{Th}}^\alpha$ defines a possibly asymmetric metric. It remains to show that $d_{\textnormal{Th}}^\alpha$ is indeed asymmetric. But Thurston \cite[p.5]{ThurstonStretch} exhibits examples of points $\rho,\widehat{\rho}\in \Teich(S)$ for which the distance from $\rho$ to $\widehat{\rho}$ is different from the distance from $\widehat{\rho}$ to $\rho$. Since $\textnormal{Hit}(S,\mathsf{G})$ contains a copy of $\Teich(S)$, the claim follows. \end{proof} \begin{cor} \label{cor: asymm for hitchin spectral} Let $\mathsf{G}=\mathsf{PSL}(d,\mathbb{R})$ and $\varphi=\lambda_1$ be the spectral radius length function. Then the function $d_{\textnormal{Th}}^{\lambda_1}: \textnormal{Hit}_d(S) \times \textnormal{Hit}_d(S) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{\lambda_1}(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{h_{\widehat{\rho}}^{\lambda_1}}{h_{\rho}^{\lambda_1}} \frac{ L_{\widehat{\rho}}^{\lambda_1}(\gamma) }{L_{\rho}^{\lambda_1}(\gamma)}\right)$$ \noindent defines an asymmetric distance on $\textnormal{Hit}_d(S)$.
\end{cor} \begin{proof} The action on $\mathfrak{a}$ of the unique non-inner automorphism of $\mathsf{PSL}(d,\mathbb{R})$ coincides with the opposition involution $\iota$. When $d>2$, note that $\lambda_1 \neq \lambda_1 \circ \iota$, hence in this case the result follows from Theorems \ref{thm: dth for anosov} and \ref{Thm:rigitity length hitchin}. If $d=2$, the result follows from Theorem \ref{thm: dth for anosov} and the Length Spectrum Rigidity for hyperbolic surfaces. \end{proof} We now turn to the proof of Theorem \ref{Thm:rigitity length hitchin}. In view of the natural inclusions $$\textnormal{Hit}(S,\mathsf{PSp}(2r,\mathbb{R}))\subset \textnormal{Hit}_{2r}(S) \textnormal{ and } \textnormal{Hit}(S,\mathsf{SO}_0(p,p+1))\subset \textnormal{Hit}_{2p+1}(S),$$ \noindent we may assume that $\mathsf{G}$ is either $\mathsf{PSL}(d,\mathbb{R})$ or $\mathsf{PSO}_0(p,p)$. We will focus on the case $\mathsf{G}=\mathsf{PSO}_0(p,p)$; the argument for $\mathsf{G}=\mathsf{PSL}(d,\mathbb{R})$ is similar (and further, in that case the reader can also compare with \cite[Corollary 11.8]{BCLS}). The main step in the proof is to carefully analyse the possible Zariski closures of $\mathsf{PSO}_0(p,p)$-Hitchin representations, and show that they satisfy the hypotheses of Theorem \ref{thm:rigidity}. This is achieved in Corollaries \ref{cor: Z closure hitchin opp simple and center free} and \ref{cor: Z closure0 contains rho} below, as an application of recent work by Sambarino \cite{sambarino2020infinitesimal}. From now on, let $p>2$ and consider a principal embedding $\tau:\mathsf{PSL}(2,\mathbb{R})\to\mathsf{PSO}_0(p,p)$. Then $\tau$ factors as $$\tau:\mathsf{PSL}(2,\mathbb{R})\to\mathsf{SO}_0(p,p-1)\to\mathsf{PSO}_0(p,p),$$ \noindent where the first map is the irreducible representation into $\mathsf{SL}(2p-1,\mathbb{R})$, and the second is induced by the standard embedding stabilizing a non-isotropic line $\ell_\tau\subset\mathbb{R}^{2p}$.
We let $\pi_\tau$ be the complementary $(p,p-1)$ hyperplane. Note that $\tau$ lifts to a principal embedding $\widehat{\tau}:\mathsf{PSL}(2,\mathbb{R})\to\mathsf{SO}_0(p,p)$. A \textit{Fuchsian} representation is a Hitchin representation into $\mathsf{PSO}_0(p,p)$ (resp. $\mathsf{SO}_0(p,p-1)$) whose image is contained in a conjugate of $\tau(\mathsf{PSL}(2,\mathbb{R}))$ (resp. $\widehat{\tau}(\mathsf{PSL}(2,\mathbb{R}))$). The following is well-known (see e.g. \cite[p.25]{sambarino2020infinitesimal} for a proof). \begin{lema}\label{lem: hitchin PSOpp lifts} Let $\rho\in\textnormal{Hit}(S,\mathsf{PSO}_0(p,p))$. Then there exists a representation $\widehat{\rho}:\Gamma\to\mathsf{SO}_0(p,p)$ lifting $\rho$ that may be deformed to a Fuchsian representation. \end{lema} Here is another useful lemma. \begin{lema}\label{lem: Zclosure hitchin is reductive} Let $\widehat{\rho}:\Gamma\to\mathsf{SO}_0(p,p)$ be a Hitchin representation. Then the Zariski closure $\mathsf{G}_{\widehat{\rho}}$ of $\widehat{\rho}$ is reductive. \end{lema} \begin{proof} Suppose by contradiction that $\mathsf{G}_{\widehat{\rho}}$ is not reductive. Then $\mathsf{G}_{\widehat{\rho}}$ is contained in a proper parabolic subgroup of $\mathsf{SO}_0(p,p)$ \cite{BorelTits}. That is, we may assume $\widehat{\rho}(\Gamma)\subset \mathsf{P}_\Theta\subset\mathsf{SO}_0(p,p)$, for some subset $\Theta$ of simple roots. In particular, $\widehat{\rho}(\gamma)$ is centralized by $\exp(\mathfrak{a}_\Theta)$ for all $\gamma\in\Gamma$. If $\xi_{\widehat{\rho}}:\partial\Gamma\to\mathscr{F}=\mathscr{F}_\Pi$ is the limit curve into the space of full flags of $\mathsf{SO}_0(p,p)$, this readily implies $$\exp(X)\cdot \xi_{\widehat{\rho}}(x)=\xi_{\widehat{\rho}}(x)$$ \noindent for all $X\in\mathfrak{a}_\Theta$ and $x\in\partial\Gamma$. On the other hand, $\widehat{\rho}$ is positive in the sense of Fock-Goncharov \cite{FG}. In particular, the stabilizer of a triple in the limit set is finite. 
But we just saw it contains $\exp(\mathfrak{a}_\Theta)$, a contradiction. \end{proof} \begin{rem}\label{rem: Zclosure positive is reductive} The proof of Lemma \ref{lem: Zclosure hitchin is reductive} actually shows that the Zariski closure of a $\Theta$-positive representation in the sense of Guichard-W. \cite{GWTheta} is reductive, as in that case the stabilizer of a positive triple is compact \cite{GW22}. \end{rem} For a Hitchin representation $\widehat{\rho}:\Gamma\to \mathsf{SO}_0 (p,p)$, let $\mathfrak{g}_{\widehat{\rho}}^{ss}$ be the semisimple part of the Lie algebra $\mathfrak{g}_{\widehat{\rho}}$ of $\mathsf{G}_{\widehat{\rho}}$. By Sambarino \cite[Theorem A]{sambarino2020infinitesimal}, if $p\neq 4$ then $\mathfrak{g}_{\widehat{\rho}}^{ss}$ is either $\mathfrak{so}(p,p)$, a principal $\mathfrak{sl}_2$, or the image of the standard embedding $\mathfrak{so}(p,p-1)\to\mathfrak{so}(p,p)$. In each case $\mathfrak{g}_{\widehat{\rho}}^{ss}$ contains, up to conjugation, the Lie subalgebra $\mathrm{d}\widehat{\tau}(\mathfrak{sl}_2)$. \begin{lema}\label{lem: center of Z closure hitchin opp} Let $\widehat{\rho}:\Gamma\to\mathsf{SO}_0(p,p)$ be a Hitchin representation. Suppose that $g\in\mathsf{G}_{\widehat{\rho}}$ satisfies $ghg^{-1}=\pm h$ for all $h\in\mathsf{G}_{\widehat{\rho}}$. Then $g\in \{\textnormal{id},-\textnormal{id}\}$. \end{lema} \begin{proof} Let $g\in\mathsf{G}_{\widehat{\rho}}$ be as in the statement. Since $\widehat{\tau}(\mathsf{PSL}(2,\mathbb{R}))\subset(\mathsf{G}_{\widehat{\rho}})_0$, then $g$ centralizes (up to a sign) the principal $\mathsf{PSL}(2,\mathbb{R})$, which factors through $\mathsf{SO}_0(p,p-1)$. 
Now if $h\in\mathsf{PSL}(2,\mathbb{R})$ is a hyperbolic element with eigenvalues $\lambda^{\pm 1}$ (well defined up to sign), then $\widehat{\tau}(h)$ acting on $\pi_\tau$ is diagonalizable with eigenvalues $$\lambda^{2(p-1)},\dots,\lambda^2,1,\lambda^{-2},\dots,\lambda^{-2(p-1)}.$$ \noindent Note that these are positive independently of whether we choose $\lambda$ or $-\lambda$ for the eigenvalues of $h$, hence to fix ideas we will assume $\lambda>1$. In particular, all the eigenvalues of $\widehat{\tau}(h)$ are positive. We let $\pi_h$ be the two-dimensional plane spanned by $\ell_\tau$ and the eigenline in $\pi_\tau$ of eigenvalue $1$, which we denote by $\ell^1_h$. That is, $\pi_h$ is the eigenspace of $\widehat{\tau}(h)$ associated to the eigenvalue $1$. We also let $\ell_h^{i}$ be the eigenline of eigenvalue $\lambda^{i}$, for $i=2(p-1),\dots,2,-2,\dots,-2(p-1)$. Observe that actually $g\widehat{\tau}(h)g^{-1}=\widehat{\tau}(h)$. Indeed, otherwise we would have $g\widehat{\tau}(h)g^{-1}=-\widehat{\tau}(h)$ and for $v\in\ell_h^i$ one has $$g\cdot v=\frac{1}{\lambda^i}g\widehat{\tau}(h)\cdot v=-\frac{1}{\lambda^i}\widehat{\tau}(h)g\cdot v.$$ \noindent We would then find a negative eigenvalue of $\widehat{\tau}(h)$, a contradiction. We conclude that $g\widehat{\tau}(h)g^{-1}=\widehat{\tau}(h)$ as claimed. It follows that $g$ preserves $\ell_h^i$ for all $i$, and also preserves $\pi_h$. We claim that $g$ preserves $\ell_\tau$. Indeed, note that there is some $m\in\mathsf{PSL}(2,\mathbb{R})$ so that $\widehat{\tau}(m)\cdot \ell_h^1 \neq \ell_h^1$, as the action of $\widehat{\tau}(\mathsf{PSL}(2,\mathbb{R}))$ on $\pi_\tau$ is irreducible. Furthermore, $\widehat{\tau}(m)\cdot\ell_h^1$ is different from each $\ell_h^i$, as the latter lines are isotropic, while $\widehat{\tau}(m)\cdot\ell_h^1$ is not. By what we just proved, $g$ preserves $\pi_{mhm^{-1}}$ and therefore preserves $\pi_{mhm^{-1}}\cap\pi_h=\ell_\tau$.
Hence $g\cdot\ell_\tau=\ell_\tau$ and therefore $g\cdot\ell_h^1=\ell_h^1$ for every hyperbolic $h\in\mathsf{PSL}(2,\mathbb{R})$. We conclude that for every hyperbolic $h\in\mathsf{PSL}(2,\mathbb{R})$, the element $g$ preserves the projective basis $$\mathcal{B}_h:=\{\ell^{2(p-1)}_h,\dots,\ell^2_h,\ell^1_h,\ell_\tau,\ell^{-2}_h,\dots,\ell^{-2(p-1)}_h\}.$$ \noindent Fix such an $h$. Let $m\in\mathsf{PSL}(2,\mathbb{R})$ be so that $\widehat{\tau}(m)\cdot \ell_h^1\notin\mathcal{B}_h$. Then $g$ preserves the elements of the basis $\mathcal{B}_{mhm^{-1}}$ as well, and therefore preserves $2p+1$ lines in general position in $\mathbb{R}^{2p}$. It follows that $g=\mu\textnormal{id}$ for some $\mu\in\mathbb{R}$. Since $g\in\mathsf{SO}_0(p,p)$, we conclude $\mu=\pm 1$. \end{proof} \begin{cor}\label{cor: Z closure hitchin opp simple and center free} Assume $p\neq 4$ and let $\rho\in\textnormal{Hit}(S,\mathsf{PSO}_0(p,p))$. Then the Zariski closure $\mathsf{G}_\rho$ of $\rho$ is simple and center-free, with Lie algebra $\mathfrak{so}(p,p)$, $\mathfrak{so}(p,p-1)$, or a principal $\mathfrak{sl}_2$. \end{cor} \begin{proof} Let $\widehat{\rho}$ be a lift of $\rho$. Then $\mathsf{G}_\rho=\mathsf{G}_{\widehat{\rho}}/\{\pm \textnormal{id}\}$ and by Lemmas \ref{lem: Zclosure hitchin is reductive} and \ref{lem: center of Z closure hitchin opp}, $\mathsf{G}_\rho$ is reductive and center-free. In particular, it is semisimple, and the result follows from Sambarino \cite[Theorem A]{sambarino2020infinitesimal}. \end{proof} The proof of the following well-known fact can be found in \cite[Corollary 6.2]{sambarino2020infinitesimal} for $\mathsf{PSL}(d,\mathbb{R})$-Hitchin representations, but the proof applies in our setting. \begin{cor}\label{cor: Z closure0 contains rho} Let $\rho\in\textnormal{Hit}(S,\mathsf{PSO}_0(p,p))$. Then $\rho(\Gamma)\subset(\mathsf{G}_\rho)_0$.
\end{cor} We have now completed the analysis of the possible Zariski closures of $\mathsf{PSO}_0(p,p)$-Hitchin representations, and we can prove Theorem \ref{Thm:rigitity length hitchin}. \begin{proof}[Proof of Theorem \ref{Thm:rigitity length hitchin}] By Corollaries \ref{cor: Z closure hitchin opp simple and center free} and \ref{cor: Z closure0 contains rho} and Theorem \ref{thm:rigidity} there exists an isomorphism $\sigma:(\mathsf{G}_\rho)_0\to(\mathsf{G}_{\widehat{\rho}})_0$ so that $\sigma\circ\rho=\widehat{\rho}$. In particular, $(\mathsf{G}_\rho)_0\cong(\mathsf{G}_{\widehat{\rho}})_0$ and we have three possibilities. If $(\mathsf{G}_\rho)_0$ is a principal $\mathsf{PSL}(2,\mathbb{R})$, then the result follows from Length Spectrum Rigidity in Teichm\"uller space. If $(\mathsf{G}_\rho)_0\cong \mathsf{PSO}_0(p,p-1)$, then the corresponding Dynkin diagram is of type $\mathsf{B}_{p-1}$ and therefore admits no non-trivial automorphism. Hence, in that case $\sigma$ is inner, as desired. Finally, assume $(\mathsf{G}_\rho)_0=\mathsf{PSO}_0(p,p)$ and suppose by contradiction that $\rho\neq\widehat{\rho}$. Hence $\sigma$ is a non-inner automorphism. But, on the other hand, by Theorem \ref{thm:rigidity} we have $\varphi\circ\sigma=\varphi$, contradicting our hypothesis. \end{proof} \begin{rem}\label{rem: asymmetric} A natural length function on $\textnormal{Hit}_d(S)$, especially relevant in the case $d=3$, is the Hilbert length (c.f. Example \ref{list, Lengths}). However, the Hilbert length is not rigid, as the \textit{contragredient} representation $\rho^\star(\gamma):=\leftidx{^t}\rho(\gamma)^{-1}$ of $\rho$ satisfies $h_\rho^{\textnormal{H}}L_\rho^{\textnormal{H}}=h_{\rho^\star}^{\textnormal{H}}L_{\rho^\star}^{\textnormal{H}}$, but in general one has $\rho^\star\neq\rho$. Hence, $d_{\textnormal{Th}}^{\textnormal{H}}(\cdot,\cdot)$ does not separate points of $\textnormal{Hit}_d(S)$.
It follows from the proof of Theorem \ref{Thm:rigitity length hitchin} that this is the only possible situation where two different $\mathsf{PSL}(d,\mathbb{R})$-Hitchin representations can have the same Hilbert length spectra. Similar comments apply to the simple roots listed in Table \ref{table:1}. \end{rem} \subsection{Simple root and spectral radius Finsler norms}\label{subsec: finsler for hitchin} We now restrict to $\mathsf{G}=\mathsf{PSL} (d,\mathbb{R})$. We list some useful consequences of Corollary \ref{cor: finsler for reps} and \cite{BCLS,BCLSSIMPLEROOTS}. For the first simple root we have the following. \begin{cor}\label{cor: finsler norm hitchin first root} Let $\varphi=\alpha_1\in \Pi$ be the first simple root. The function on $T \textnormal{Hit}_d(S)$ $$\Vert v\Vert_{\textnormal{Th}}^{\alpha_1}=\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{ \mathrm{d}_\rho (L_{\cdot}^{\alpha_1}(\gamma))(v) }{L_{\rho}^{\alpha_1}(\gamma)}$$ \noindent defines a Finsler norm on $\textnormal{Hit}_d(S)$. \end{cor} \begin{proof} By Potrie-Sambarino \cite[Theorem B]{PS} we have $h_\rho^{\alpha_1}=1$ for all $\rho\in\textnormal{Hit}_d(S)$. Hence, thanks to Corollary \ref{cor: finsler for reps} we only have to show that $\Vert v\Vert_{\textnormal{Th}}^{\alpha_1}=0$ implies $v=0$. But this follows from Corollary \ref{cor: finsler for reps} and \cite[Theorem 1.7]{BCLSSIMPLEROOTS}: the set $\{\mathrm{d}_\rho(L_\cdot^{\alpha_1}(\gamma))\}_{\gamma\in\Gamma}$ generates the cotangent space $T_\rho^*\textnormal{Hit}_d(S)$. \end{proof} When $d=2j>2$, it is shown in \cite[Proposition 8.1]{BCLSSIMPLEROOTS} that the middle root pressure quadratic form is degenerate along representations that factor through $\mathsf{PSp}(2j,\mathbb{R})$. The proof shows that $\Vert\cdot\Vert_{\textnormal{Th}}^{\alpha_j}$ is degenerate as well. 
With the same argument as in Corollary \ref{cor: finsler norm hitchin first root} (but applying \cite[Lemma 9.8 \& Proposition 10.1]{BCLS} instead of \cite[Theorem 1.7]{BCLSSIMPLEROOTS}), we obtain the following. \begin{cor}\label{cor: finsler norm hitchin spectral} Let $\varphi=\lambda_1$ be the spectral radius length function. Then the function $\Vert\cdot\Vert_{\textnormal{Th}}^{\lambda_1}: T \textnormal{Hit}_d(S) \to \mathbb{R}_{\geq 0}$, taking $v\in T_\rho \textnormal{Hit}_d(S)$ to $$\Vert v\Vert_{\textnormal{Th}}^{\lambda_1}=\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{\mathrm{d}_{\rho}(h_{\cdot}^{\lambda_1})(v)L_\rho^{\lambda_1}(\gamma)+h_\rho^{\lambda_1} \mathrm{d}_\rho(L_\cdot^{\lambda_1}(\gamma))(v)}{h_\rho^{\lambda_1} L_\rho^{\lambda_1}(\gamma)}$$ \noindent defines a Finsler norm on $\textnormal{Hit}_d(S)$.\end{cor} We finish this subsection with a comment on the work of Labourie-Wentworth \cite{VariationAlongFuchsian}, which explicitly computes the derivatives of the spectral radius and simple root length functions at points of the Fuchsian locus $\Teich(S)\subset \textnormal{Hit}_d(S)$, along some special directions. More explicitly, fixing a Riemann surface structure $X_0$ on $S$, the canonical line bundle $K$ associated to $X_0$ is the $(1,0)$-part of the complexified cotangent bundle $T^*X_0^{\mathbb{C}}=\mathbb{C}\otimes_{\mathbb{R}}T^*X_0$. A \textit{holomorphic $k$-differential} is a holomorphic section of the bundle $K^k$, where the $k$-th power is taken with respect to the tensor product. In local holomorphic coordinates $z=x+i y$, a holomorphic $k$-differential can be written as $$q_k=q_k(z) \underbrace{dz \otimes \cdots \otimes dz}_{k \text{ times} } = q_k(z)dz^k,$$ \noindent with $q_k(z)$ holomorphic. Hitchin's seminal work \cite{Hitchin_HitchinComponent} parametrizes $\textnormal{Hit}_d(S)$ by the space of holomorphic differentials over $X_0$.
More precisely, there exists a homeomorphism $$\textnormal{Hit}_d(S)\cong \bigoplus\limits_{k=2}^{d} H^{0}(X_0, K^k),$$ \noindent where $H^{0}(X_0, K^k)$ denotes the space of holomorphic $k$-differentials over $X_0$. Given a holomorphic $k$-differential $q_k\in H^0(X_0,K^k)$, one may consider a natural family of Hitchin representations $\{\rho_t\}_{t\geq 0}$, corresponding to $\{tq_k\}_{t\geq 0}\subset H^{0}(X_0, K^k)$ under this parametrization, with $\rho_0$ corresponding to the point $X_0$ in the Teichm\"uller space $\Teich(S)$. Infinitesimally, this gives a vector space isomorphism: $$T_{\rho_0} \textnormal{Hit}_d(S) \cong \bigoplus\limits_{k=2}^{d} H^{0}(X_0, K^k). $$ Given a family of Hitchin representations $\{\rho_t\}_{t\geq 0}$ as above, we denote by $v=v(q_k):=\frac{d}{dt}\big|_{t=0} \rho_t \in T_{X_0} \textnormal{Hit}_d(S)$ the corresponding tangent vector. The computation of the derivatives $\mathrm{d}_{\rho_0}(L_\cdot^{\lambda_j}(\gamma))(v)$, for $1\leq j \leq d$, has been carried out by Labourie-Wentworth \cite[Theorem 4.0.2]{VariationAlongFuchsian}, using the above identification and information about $H^{0}(X_0, K^k)$. To be more precise, define the function $\textnormal{Re } q_k : T^1X_0 \to \mathbb{R}$ as the real part of the holomorphic differential $q_k$ evaluated on unit tangent vectors. Explicitly, $$\textnormal{Re } q_k(x):=\textnormal{Re } \bigg( q_k|_{p}(w,w ,\cdots, w) \bigg) $$ \noindent for $x=(p,w)\in T^1X_0$. Let $\phi$ be the geodesic flow on $T^1X_0$. For $\gamma\in\Gamma$, let $l_{\rho_0}(\gamma):=\frac{2}{d-1}L^{\lambda_1}_{\rho_0}(\gamma)$ be the hyperbolic length of the closed geodesic on $X_0$ corresponding to the free homotopy class $[\gamma]$.
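For the reader's convenience, let us recall where the normalization factor $\frac{2}{d-1}$ comes from. A hyperbolic element of $\mathsf{PSL}(2,\mathbb{R})$ with translation length $\ell$ has eigenvalues $e^{\pm \ell/2}$ (up to sign), and its image under the principal embedding into $\mathsf{PSL}(d,\mathbb{R})$ has eigenvalues $e^{(d-1)\ell/2},e^{(d-3)\ell/2},\dots,e^{-(d-1)\ell/2}$. Hence for the Fuchsian point $\rho_0$ one has $$L^{\lambda_1}_{\rho_0}(\gamma)=\frac{d-1}{2}\, l_{\rho_0}(\gamma),$$ \noindent which is the relation above.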
\begin{prop}\label{prop: finsler fuchsian locus hitchin} There exist constants $C_1$ and $C_2$, depending only on $d$ and $k$, such that for any vector $v=v(q_k)\in T_{X_0} \textnormal{Hit}_d(S)$ as above, $$\Vert v(q_k)\Vert_{\textnormal{Th}}^{\lambda_1}=C_1\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{1}{l_{\rho_0}(\gamma)} \int_0^{l_{\rho_0}(\gamma)} \textnormal{Re } q_k(\phi_s(x)) \mathrm{d} s $$\noindent and $$\Vert v(q_k)\Vert_{\textnormal{Th}}^{\alpha_1}=C_2\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{1}{l_{\rho_0}(\gamma)} \int_0^{l_{\rho_0}(\gamma)} \textnormal{Re } q_k(\phi_s(x)) \mathrm{d} s, $$ \noindent where $x=x_\gamma$ is any point on $T^1X_0$ that lies in the periodic orbit corresponding to $\gamma$. \end{prop} \begin{proof} The proof is a direct combination of Definition \ref{dfn: finsler reps} with \cite[Theorem 4.0.2, Corollary 4.0.5]{VariationAlongFuchsian}. One also needs the fact that $h_{\rho}^{\lambda_1}\leq 1$ with equality precisely when $\rho$ is Fuchsian, and $h_\cdot^{\alpha_1}\equiv 1$ (by \cite[Theorem B]{PS}). \end{proof} \section{Other examples}\label{s.other} As discussed in \S\ref{subsec: outlineINTRO} of the Introduction, we need two ingredients to gain a good understanding of the asymmetric metric $d_{\textnormal{Th}}^\varphi(\cdot,\cdot)$: \begin{itemize} \item A reparametrization of the geodesic flow of $\Gamma$ with periods given by the functional $\varphi$: this is needed to show that $d_{\textnormal{Th}}^\varphi(\cdot,\cdot)$ is non-negative, degenerating if and only if the renormalized length spectra coincide. Sambarino provides such a reparametrization whenever $\varphi\in\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$ and $\Theta$ is the set of Anosov roots (see Section \ref{sec: anosov flows and reps}).
\item A good understanding of the Zariski closure and its outer automorphism group for representations belonging to a given class of interest: this is necessary to obtain renormalized length spectrum rigidity. \end{itemize} Furthermore, on subsets of representations for which the entropy of some functional is constant, one can avoid the renormalization by entropy. We discuss here further classes in which simultaneous knowledge of some of these aspects can be achieved. \subsection{Benoist representations}\label{subsec: benoist} Let $\Gamma$ be a torsion free word hyperbolic group. A \textit{Benoist representation} is a faithful and discrete representation $\rho:\Gamma\to\mathsf{PSL}(d+1,\mathbb{R})$ dividing an open, strictly convex set $\Omega_\rho\subset\mathbb{R}\mathbb{P}^{d}$ (recall Example \ref{ex: hyperconvex}). We denote by $\textnormal{Ben}_d(\Gamma)\subset\frak X(\Gamma,\mathsf{PSL}(d+1,\mathbb{R}))$ the space of conjugacy classes of Benoist representations. Koszul \cite{Koszul} showed that $\textnormal{Ben}_d(\Gamma)$ is an open subset of the character variety, and Benoist \cite{BenoistDivIII} showed it is closed. Hence, $\textnormal{Ben}_d(\Gamma)$ is a union of connected components of $\frak X(\Gamma,\mathsf{PSL}(d+1,\mathbb{R}))$. As Benoist representations are $\Theta$-Anosov for $\Theta=\{\alpha_1,\alpha_d\}$, both the unstable Jacobian $\textnormal {J}_{d-1}:=d\omega_1-\omega_{d}=d\lambda_1+\lambda_{d+1}$ and $\textnormal{H}:=\lambda_{1}-\lambda_{d+1}$ belong to $\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$ for every $\rho\in\textnormal{Ben}_d(\Gamma)$. We focus here on these two functionals since it was proven in \cite[Corollary 7.1]{PS} that $\textnormal J_{d-1}$ has constant entropy, and the Hilbert length function has particular geometric significance, as $L_\rho^\textnormal{H}(\gamma)$ coincides with the length of the unique Hilbert geodesic in $\rho(\Gamma)\backslash \Omega_\rho$ in the isotopy class corresponding to $[\gamma]$.
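All distances considered in this section are variations of the quantity $d_{\textnormal{Th}}^\varphi(\rho,\widehat{\rho})=\log\sup_{[\gamma]}\big(h_{\widehat{\rho}}^\varphi L_{\widehat{\rho}}^\varphi(\gamma)\big)/\big(h_{\rho}^\varphi L_{\rho}^\varphi(\gamma)\big)$. As a purely numerical illustration of this definition (in Python, with hypothetical entropies and length-spectrum data; the true supremum runs over the infinite set $[\Gamma]$, so a finite sample only gives a lower bound), one can evaluate:

```python
import math

def d_th(h_rho, h_hat, L_rho, L_hat):
    """log of the sup, over the sampled classes, of (h_hat*L_hat)/(h_rho*L_rho);
    a lower bound for the supremum over all of [Gamma]."""
    ratios = [(h_hat * L_hat[g]) / (h_rho * L_rho[g]) for g in L_rho]
    return math.log(max(ratios))

# Hypothetical renormalized length data on three conjugacy classes:
L_rho = {"a": 1.0, "b": 2.0, "ab": 2.5}
L_hat = {"a": 1.2, "b": 2.0, "ab": 2.4}

d1 = d_th(1.0, 1.0, L_rho, L_hat)  # log max(1.2, 1.0, 0.96) = log 1.2
d2 = d_th(1.0, 1.0, L_hat, L_rho)  # log max(5/6, 1.0, 25/24) = log(25/24)
```

Here $d_1\neq d_2$: the function is genuinely asymmetric, exactly as in Thurston's asymmetric metric on Teichm\"uller space.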
\begin{cor} \label{cor: unst Jac for benoist hilbert} The function $d_{\textnormal{Th}}^{{\textnormal J_{d-1}}}: \textnormal{Ben}_d(\Gamma) \times \textnormal{Ben}_d(\Gamma) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{{\textnormal J_{d-1}}}(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{ L_{\widehat{\rho}}^{\textnormal{J}_{d-1}}(\gamma) }{L_{\rho}^{\textnormal J_{d-1}}(\gamma)}\right)$$ \noindent defines a (possibly asymmetric) distance on $\textnormal{Ben}_d(\Gamma)$. \end{cor} \begin{proof} Benoist \cite[Th\'eor\`eme 3.6]{BenoistAutomorphismes} showed that if $\rho\in\textnormal{Ben}_d(\Gamma)$ is not Zariski dense, then $\rho(\Gamma)\subset\mathsf{PSO}(d,1)$. Hence, by Theorems \ref{thm: dth for anosov} and \ref{thm:rigidity}, if $d_{\textnormal{Th}}^{{\textnormal J_{d-1}}}(\rho,\widehat{\rho})=0$ then there exists an isomorphism $\sigma:(\mathsf{G}_\rho)_0\to(\mathsf{G}_{\widehat{\rho}})_0$ so that $\sigma\circ\rho=\widehat{\rho}$. If $(\mathsf{G}_\rho)_0\cong(\mathsf{G}_{\widehat{\rho}})_0\cong\mathsf{PSO}_0(1,d)$, then the equality $\rho=\widehat{\rho}$ follows from Length Spectrum Rigidity in Teichm\"uller space (when $d=2$), or from Mostow rigidity (when $d>2$). On the other hand, if $(\mathsf{G}_\rho)_0\cong(\mathsf{G}_{\widehat{\rho}})_0\cong\mathsf{PSL}(d+1,\mathbb{R})$ and $\sigma$ is non-inner, it acts non-trivially on the Dynkin diagram of type $\mathsf{A}_d$, hence its action on $\mathfrak{a}$ coincides with the opposition involution $\iota$. Since $\textnormal J_{d-1}$ is not $\iota$-invariant, and has constant entropy by \cite[Corollary 7.1]{PS}, Corollary \ref{cor: distance in zariski dense components} finishes the proof.
\end{proof} \begin{rem} The same applies to all $(1,1,p)$-hyperconvex representations $\rho:\Gamma\to\sf{PSL}(d,\mathbb R)$ of hyperbolic groups having as boundary a $(p-1)$-dimensional sphere (see Example \ref{ex: hyperconvex}): it follows from \cite[Proposition 7.4]{PSW1} that their projective limit set is a $\textnormal C^1$-sphere, and from \cite[Theorem A]{PSW2} that the entropy of the unstable Jacobian $\textnormal J_{p-1}:=p\omega_1-\omega_p$ is then constant and equal to 1. If we then denote by $\textnormal{Hyp}^{\textnormal Z}(\Gamma)$ the open subset of the character variety consisting of Zariski dense $(1,1,p)$-hyperconvex representations, the function $$d_{\textnormal{Th}}^{\textnormal{J}_{p-1}}(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{ L_{\widehat{\rho}}^{\textnormal{J}_{p-1}}(\gamma) }{L_{\rho}^{\textnormal J_{p-1}}(\gamma)}\right)$$ \noindent defines a (possibly asymmetric) distance on $\textnormal{Hyp}^{\textnormal Z}(\Gamma)$. \end{rem} With the same proof as in Corollary \ref{cor: unst Jac for benoist hilbert} we get the following result. \begin{cor} \label{cor: asymm for benoist hilbert} The function $d_{\textnormal{Th}}^{\textnormal{H}}: \textnormal{Ben}_d(\Gamma) \times \textnormal{Ben}_d(\Gamma) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{\textnormal{H}}(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{h_{\widehat{\rho}}^{\textnormal{H}}}{h_{\rho}^{\textnormal{H}}} \frac{ L_{\widehat{\rho}}^{\textnormal{H}}(\gamma) }{L_{\rho}^{\textnormal{H}}(\gamma)}\right)$$ \noindent is real-valued, non-negative and $d_{\textnormal{Th}}^{\textnormal{H}}(\rho,\widehat{\rho})=0$ if and only if $\rho=\widehat{\rho}$ or $\rho=\widehat{\rho}^\star$, where $\rho^\star(\gamma):=\leftidx{^t}\rho(\gamma)^{-1}$ for all $\gamma\in\Gamma$.
\end{cor} \begin{rem}\label{rem: other functionals in benoist components} The Hilbert length function $\textnormal H$ is the only element in $\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$ which is fixed by the opposition involution, and the unstable Jacobian $\textnormal J_{d-1}$ and its image $\textnormal J_{d-1}\circ \iota=-d\lambda_{d+1}-\lambda_1$ are the only elements in $\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$ that have constant entropy on the whole $\textnormal{Ben}_d(\Gamma)$. In particular for all other functionals $\varphi\in\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$, such as for example the spectral radius $\lambda_1$, $$d_{\textnormal{Th}}^{\varphi}(\rho,\widehat{\rho}):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{h_{\widehat{\rho}}^{\varphi}}{h_{\rho}^{\varphi}} \frac{ L_{\widehat{\rho}}^{\varphi}(\gamma) }{L_{\rho}^{\varphi}(\gamma)}\right)$$ \noindent defines a (possibly asymmetric) distance on $\textnormal{Ben}_d(\Gamma)$. In all these cases the renormalization by entropy is, however, necessary. \end{rem} \subsection{AdS-quasi-Fuchsian representations}\label{subsec: AdSqf} Let $q\geq 2$ and $\Gamma$ be the fundamental group of a closed real hyperbolic $q$-dimensional manifold. Denote by $\textnormal{QF}_q(\Gamma)$ the space of AdS-quasi-Fuchsian representations $\Gamma\to\mathsf{PO}_0(2,q)$, which is a union of connected components of the character variety (recall Example \ref{ex: AdSquasi-fuchsian}). Since representations in $\textnormal{QF}_q(\Gamma)$ are Anosov with respect to the space of isotropic lines, the Hilbert length functional $\textnormal{H}=\omega_1-\omega_{q+1}$ belongs to the Anosov-Levi space $\mathfrak{a}_\Theta^*$. This functional is a multiple of the spectral radius functional on $\mathsf{PO}_0(2,q)$. 
\begin{cor} \label{cor: asymm for AdSQF hilbert} If $q>2$, the function $d_{\textnormal{Th}}^{\textnormal{H}}: \textnormal{QF}_q(\Gamma) \times \textnormal{QF}_q(\Gamma) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{\textnormal{H}}(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{h_{\widehat{\rho}}^{\textnormal{H}}}{h_{\rho}^{\textnormal{H}}} \frac{ L_{\widehat{\rho}}^{\textnormal{H}}(\gamma) }{L_{\rho}^{\textnormal{H}}(\gamma)}\right)$$ \noindent defines a (possibly asymmetric) distance on $\textnormal{QF}_q(\Gamma)$. \end{cor} \begin{proof} For $q>2$ the group $\mathsf{PO}_0(2,q)$ is simple, and the associated root system is of type $\mathsf{B}_2$. In particular, it has no non-trivial automorphisms and therefore an automorphism of $\mathsf{PO}_0(2,q)$ is necessarily inner. Corollary \ref{cor: distance in zariski dense components} then proves the result when restricting to Zariski dense AdS-quasi-Fuchsian representations. Furthermore, Glorieux-Monclair \cite[Proposition 1.4]{GMRegularity} computed the possible Zariski closures of an AdS-quasi-Fuchsian representation: if $\rho$ is not Zariski dense, then it is AdS-Fuchsian. This means that $\rho$ preserves a totally geodesic copy of $\mathbb{H}^q$ inside the Anti-de Sitter space and acts co-compactly on it (cf. \cite[Remark 1.13]{DGKHpqCC}). Therefore $\rho(\Gamma)\subset\mathsf{PO}(1,q)\subset\mathsf{PO}_0(2,q)$. Hence the Length Spectrum Rigidity of closed real hyperbolic manifolds finishes the proof. \end{proof} In the special case $q=2$, the function $d_{\textnormal{Th}}^{\textnormal{H}}$ does not separate points.
Indeed $\mathsf{PSO}_0(2,2)\cong\mathsf{PSL}(2,\mathbb{R})\times\mathsf{PSL}(2,\mathbb{R})$ and every representation of the form $$\rho=(\rho^{\textnormal{L}},\rho^{\textnormal{R}}):\pi_1(S)\to\mathsf{PSL}(2,\mathbb{R})\times\mathsf{PSL}(2,\mathbb{R}),$$ \noindent where $\rho^\varepsilon$ is a point in Teichm\"uller space for $\varepsilon\in\{\textnormal{L},\textnormal{R}\}$, is AdS-quasi-Fuchsian. However, the representation $\widehat{\rho}:=(\rho^{\textnormal{R}},\rho^{\textnormal{L}})$ has the same Hilbert length spectrum as $\rho$, but $\rho\neq \widehat{\rho}$ (unless $\rho^{\textnormal{L}}=\rho^{\textnormal{R}}$). \begin{rem} Since AdS-quasi-Fuchsian representations have Lipschitz limit set, it follows again from \cite[Theorem A]{PSW2} that the entropy of the unstable Jacobian $\textnormal J_{q-1}:=q\omega_1-\omega_q$ is constant and equal to 1 on $\textnormal{QF}_q(\Gamma)$. In particular, the function $d_{\textnormal{Th}}^{{\textnormal J_{q-1}}}: \textnormal{QF}_q(\Gamma) \times \textnormal{QF}_q(\Gamma) \to \mathbb{R}$ given by $$d_{\textnormal{Th}}^{{\textnormal J_{q-1}}}(\rho,\widehat{\rho}):=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{ L_{\widehat{\rho}}^{\textnormal{J}_{q-1}}(\gamma) }{L_{\rho}^{\textnormal J_{q-1}}(\gamma)}\right)$$ \noindent is non-negative. However, in this case the unstable Jacobian does not belong to the Levi-Anosov subspace. As a result, it is not clear whether a metric Anosov flow with periods $\textnormal J_{q-1}$ exists that would allow us to apply the thermodynamical formalism at the basis of this work. Thus, we do not know whether the condition $d_{\textnormal{Th}}^{\textnormal J_{q-1}}(\rho,\widehat{\rho})=0$ leads to an equality between length spectra that would allow us to conclude that $d_{\textnormal{Th}}^{\textnormal J_{q-1}}$ separates points. \end{rem} \subsection{Zariski dense $\Theta$-positive representations in $\mathsf{PO}_0(p,p+1)$}\label{s.Zdense} Let $2\leq p\leq q$.
Let $\Gamma=\pi_1(S)$ be a surface group and $\textnormal{Pos}_{p,q}(\Gamma)$ be the space of $\Theta$-positive representations $\Gamma\to\mathsf{PO}_0(p,q)$ (cf. Example \ref{ex:positive}). \begin{cor}\label{cor: asymm for maximal sp4 root} For $2<p\leq q$ and $j=1,\dots,p-2$ let $\alpha_j$ be the corresponding simple root of ${\sf PO}_0(p,q)$. Let $\textnormal{Pos}^\textnormal{Z}_{p,q}(\Gamma)\subset\textnormal{Pos}_{p,q}(\Gamma)$ be the subset consisting of Zariski dense representations. Then the function $$d_{\textnormal{Th}}^{\alpha_j}: \textnormal{Pos}^\textnormal{Z}_{p,q}(\Gamma) \times \textnormal{Pos}^\textnormal{Z}_{p,q}(\Gamma) \to \mathbb{R}$$ \noindent given by $$d_{\textnormal{Th}}^{\alpha_j}(\rho,\widehat{\rho})=\log\left(\displaystyle\sup_{[\gamma]\in[\Gamma]} \frac{ L_{\widehat{\rho}}^{\alpha_j}(\gamma) }{L_{\rho}^{\alpha_j}(\gamma)}\right)$$ \noindent defines a (possibly asymmetric) distance on $\textnormal{Pos}^\textnormal{Z}_{p,q}(\Gamma)$. \end{cor} \begin{proof} As $\Theta$-positive representations into ${\sf PO}_0(p,q)$ are $\Theta$-Anosov for $\Theta=\{\alpha_1,\dots,\alpha_{p-1}\}$ (see \cite{GLW, BeyPSO}), we have $\alpha_j\in\textnormal{int}((\mathscr{L}_\rho^\Theta)^*)$ for every $\rho\in\textnormal{Pos}^{\textnormal{Z}}_{p,q}(\Gamma)$. Furthermore, the $\alpha_j$-entropy is constant on the space of $\Theta$-positive representations into ${\sf PO}_0(p,q)$ \cite[Corollary 1.7]{PSW2}. Thus, to finish the proof it only remains to show that $\alpha_j$-length spectrum rigidity holds on $\textnormal{Pos}^{\textnormal{Z}}_{p,q}(\Gamma)$. Since $\mathsf{PO}_0(p,q)$ is simple and center free, Theorem \ref{thm:rigidity} guarantees that two representations in $\textnormal{Pos}^{\textnormal{Z}}_{p,q}(\Gamma)$ having the same renormalized length spectra differ by an automorphism of $\mathsf{PO}_0(p,q)$.
Since the Dynkin diagram associated to the root system of $\mathsf{PO}_0(p,q)$ is of type $\mathsf{B}_p$ and admits no non-trivial automorphism, the outer automorphism group of $\mathsf{PO}_0(p,q)$ is trivial, and this finishes the proof. \end{proof} \begin{rem} The space $\textnormal{Pos}_{2,3}(\Gamma)$ contains connected components only consisting of Zariski dense representations \cite[Theorem 4.40]{AlessandriniCollier}. More generally, for all $p>2$ the space $\textnormal{Pos}_{p,p+1}(\Gamma)$ contains smooth connected components. It is conjectured that these consist only of Zariski dense representations as well (see \cite[Conjecture 1.7]{Collier}); if the conjecture holds, the functions in Corollary \ref{cor: asymm for maximal sp4 root} would define metrics on these connected components. On the other hand, it follows from the classification in \cite{ABCGGO} that for $q\geq p$ all connected components of $\textnormal{Pos}_{p,q}(\Gamma)$, with the exception of the Hitchin component when $p=q$, contain representations with compact centralizer. \end{rem}
\section{Introduction} Galaxy mergers are a key process for the formation and evolution of galaxies in the $\Lambda$CDM hierarchical structure formation paradigm. It is widely accepted that massive black holes (MBHs) ubiquitously exist in the centers of nearby galaxies \citep{2013ARA&A..51..511K, 1995ARA&A..33..581K}. During the merging of two galaxies, a massive binary black hole (BBH) system may form naturally, since the two central MBHs approach each other due to dynamical friction and viscous drag \citep{1980Natur.287..307B, 2002MNRAS.331..935Y}. In the meantime, the angular momentum of the gas may also be transferred outward due to tidal interactions during the merging process, which leads to the gas sinking into the vicinity of one or both MBHs and triggering nuclear activity \citep[e.g.,][]{1989Natur.340..687H}. If only one of the MBHs is activated, the system may appear as an offset Active Galactic Nucleus (offset AGN, or oAGN), and if both MBHs are activated, the system may appear as a dual AGN (dAGN) \citep[e.g.,][]{2012ApJ...748L...7V, 2012ApJ...753...42C, 2014ApJ...789..112C, 2015ApJ...806..219C, 2016ApJ...829...37B, 2016ApJ...830...50M, 2017ApJ...838..129B, 2017MNRAS.469.4437C, 2017ApJ...847...41C, 2018MNRAS.478.3056B}. Galaxy mergers may thus provide an efficient way to transfer the gas angular momentum outward, lead to the sinking of gas into the vicinity of MBHs, and trigger nuclear activities. However, the connection between galaxy mergers and AGN triggering is still observationally inconclusive. Some observations clearly show the disky structure of low-redshift, low-luminosity AGN host galaxies, which leads to the proposal that galaxy minor mergers or secular processes trigger AGN activity \citep{2011ApJ...726...57C, 2012ApJ...744..148K, 2017MNRAS.470..755H, 2017MNRAS.465.2895L, 2019MNRAS.483.2441V}.
Other observations find that AGN host galaxies are highly perturbed in morphology, which leads to the claim that galaxy major mergers dominate the triggering of AGNs \citep{2011MNRAS.418.2043E, 2012ApJ...758L..39T, 2014A&A...569A..37M, 2014MNRAS.441.1297S, 2015ApJ...804...34H, 2018ApJ...853...63D, 2018PASJ...70S..37G}. These two lines of observational results apparently contradict each other and hinder our understanding of the triggering of nuclear activities. Both dAGNs and oAGNs can be used as tracers of galaxy mergers. According to hydrodynamical simulations, if both progenitor galaxies are gas-rich and comparable in mass (mass ratio $> 1/3$, i.e., a major merger), both MBHs may be activated with relatively large Eddington ratios and high bolometric luminosities, and they emerge as a dAGN system with the two nuclei separated on a scale of $\sim 1-10$\,kpc \citep[e.g.,][]{2012ApJ...748L...7V, 2013MNRAS.429.2594B, 2015MNRAS.447.2123C, 2016MNRAS.458.1013S, 2017MNRAS.469.4437C}. Observations do find such dAGN systems using different techniques \citep[e.g.,][]{2003ApJ...582L..15K, 2004ApJ...604L..33Z, 2009ApJ...705L..76W, 2009ApJ...705L..20X, 2009ApJ...698..956C, 2010ApJ...708..427L, 2010ApJ...715L..30L, 2011ApJ...737L..19C, 2011ApJ...733..103F, 2011ApJ...740L..44F, 2011ApJ...735L..42K, 2012MNRAS.425.1185F, 2012ApJ...745...67F, 2012ApJS..201...31G, 2012ApJ...746L..22K, 2013MNRAS.429.2594B, 2015ApJ...813..103M, 2016MNRAS.457.3878Z, 2018ApJ...867...66C, 2019MNRAS.482.1889W}. The conditions for the formation of oAGNs may differ from those of dAGNs, as the activation of only one MBH is required.
Both major and minor mergers may be responsible for the formation of dAGNs, but it is still not clear which one dominates the contribution to oAGNs \citep[e.g.][]{2015ApJ...806..219C, 2018ApJ...869..154B}.\footnote{We also note here that in the final stage of a galaxy merger, the merged MBH may gain a recoil speed of up to several thousand $\mathrm{km\ s^{-1}}$ due to asymmetric gravitational wave emission \citep{2007PhRvL..98w1102C}. The recoiled BH can carry almost everything bound to it (within $\sim 10^{5}$ gravitational influence radius); as a consequence, the broad line region (BLR) will run away together with the MBH while the narrow line region (NLR) will be left behind \citep{2004ApJ...606L..17M, 2011MNRAS.412.2154B, 2016ApJ...829...37B, 2018MNRAS.475.5179S}. This may also contribute significantly to the census of oAGNs.} In this paper, we use high-resolution hydrodynamical simulations to study whether significant nuclear activities can be triggered by minor galaxy mergers, and to investigate whether dAGNs and oAGNs can emerge from such minor merging processes. The paper is organized as follows. In Section~\ref{sec:method}, we briefly introduce the numerical simulations we performed and their initial setups. We then summarize the main results in Section~\ref{sec:result}. Finally, conclusions and discussions are given in Section~\ref{sec:conclusion}. \section{Numerical simulations} \label{sec:method} \subsection{Initial setup} We use the smoothed particle hydrodynamics (SPH) code GADGET-2 \citep{2005MNRAS.364.1105S} for our simulations, in which we take into account physical processes including star formation, supernova feedback, BH accretion, and AGN feedback.
To simulate the star formation and supernova feedback processes, the gas density ($\rho_{\rm gas}$) is divided into a hot component $\rho_{\mathrm{h}}$ and a cold component $\rho_{\mathrm{c}}$ based on the hybrid model proposed in \cite{2003MNRAS.339..289S}, so that $\rho_{\rm gas} = \rho_{\mathrm{h}} + \rho_{\mathrm{c}}$. The star formation rate at a characteristic timescale $t_{*}$ is defined as \begin{equation} \frac{\mathrm{d}\rho_{*}}{\mathrm{d}t} = (1 - \beta)\frac{\rho_{\mathrm{c}}}{t_{*}}, \end{equation} where $\beta$ denotes the mass fraction of newly formed stars that explode instantly as supernovae. A gas particle will spawn a star particle when the gas density exceeds a given threshold $\rho_{\mathrm{th}} = 0.35\ h^{2}\ \mathrm{cm}^{-3}$ to match the observational law \citep{2003MNRAS.339..289S, 2004MNRAS.348..435N, 2014ApJ...780..145T}. With the defined gas density, the MBH accretion rate is calculated by adopting the Bondi-Hoyle-Lyttleton \citep{1939PCPS...35..405H, 1944MNRAS.104..273B, 1952MNRAS.112..195B} formalism: \begin{equation} \dot{M} = \frac{4 \pi \alpha G^{2} M_{\mathrm{BH}}^{2} \rho_{\rm gas}}{(c_{\mathrm{s}}^{2} + v^{2})^{3/2}}, \end{equation} where $\alpha$ is a dimensionless boost parameter, set as $\alpha = 8$ for the $z = 3$ cases, the same as that adopted in the literature, e.g., \citet{2009MNRAS.398...53B}, \citet{2009ApJ...690..802J}, and \citet{2019SCPMA}, and $\alpha = 100$ for the $z = 1$ cases to account for the fact that the mass resolution of the $z=1$ cases is $10$ times lower than that of the $z=3$ cases \citep[for discussions on the settings of $\alpha$, see][]{2005MNRAS.361..776S, 2014MNRAS.442.1992H, 2015MNRAS.454.1038R, 2017MNRAS.467.3475N}\footnote{We choose a higher boost factor $\alpha = 100$ to calculate the Bondi accretion rate for the three $z=1$ cases. However, we find that the accretion rate is still relatively low compared to the $z = 3$ cases.
An adaptive mesh refinement (AMR) method or a $10$ times higher mass resolution may be required to better understand the gas feeding of the very central region of the galaxy. We defer this to a future study.}, $c_{\mathrm{s}}$ is the sound speed of the gas, and $v$ is the velocity of the BH relative to the gas. These settings of $\alpha$ ensure a reasonable BH accretion rate. Here we limit $\dot{M}$ to be no larger than the Eddington accretion rate $\dot{M}_{\mathrm{Edd}}$, in order to avoid super-Eddington accretion. As for the AGN feedback, the energy injected into the surrounding gas is a fraction ($\epsilon_{\mathrm{f}}$) of the AGN bolometric luminosity, \begin{equation} \dot{E}_{\mathrm{feed}} = \epsilon_{\mathrm{f}} L_{\rm bol} = \epsilon_{\mathrm{f}} \epsilon_{\mathrm{r}} \dot{M} c^{2}, \end{equation} where $c$ is the speed of light in vacuum and $\epsilon_{\mathrm{r}}$ is the mass-to-energy conversion efficiency. Here we take the typical values $\epsilon_{\mathrm{r}} = 0.1$ and $\epsilon_{\mathrm{f}} = 0.05$ to model the AGN feedback \citep{2005Natur.433..604D}, which can regulate the evolution of the established galaxy following the observed $M_{\mathrm{BH}}-\sigma$ relation \citep[e.g.,][]{1998AJ....115.2285M, 2002ApJ...574..740T, 2013ARA&A..51..511K}. In the simulation, the BH accretion process and AGN feedback are numerically implemented by following the procedure described in \cite{2005MNRAS.361..776S}. \subsection{Construction of galaxy merger systems} To study whether and how the central MBHs can be triggered by minor mergers, we design 9 merger systems with different mass ratios and galaxy types, as listed in Table \ref{tab:ini_setup}. We denote these simulations according to their parameter settings of $(\textsf{q}, \textsf{f}, \textsf{z})$ in the following way (see Table \ref{tab:ini_setup}).
Here \textsf{q} represents the mass ratio of the two progenitor galaxies: \textsf{q5} and \textsf{q10} represent $1:5$ and $1:10$ minor mergers, respectively. \textsf{f} marks the gas fraction of each galaxy in units of $0.1$; the two consecutive digits after \textsf{f} give the gas fractions of the primary and secondary galaxies, respectively, e.g., \textsf{f13} denotes that the gas fractions of the primary and secondary galaxies are $0.1$ and $0.3$, respectively. \textsf{ss} and \textsf{es} represent the types of the primary and secondary galaxies, where \textsf{s} and \textsf{e} stand for spiral and elliptical galaxies, respectively. \textsf{z1} and \textsf{z3} represent the initial redshift of the simulation, i.e., $z = 1$ and $z = 3$, respectively. One of these nine cases has an extra symbol \textsf{p10}, which means that we specify a pericenter $r_{\rm p} = 10$\,kpc; for the other cases we adopt a pericenter of $20$\% of the virial radius of the primary galaxy (9.3 kpc for $z = 3$ and 35.7 kpc for $z = 1$). At redshift $z=3$, galaxies tend to be gas-rich and possibly have varied gas fractions. We hence start with the simulations \textsf{q5f13ssz3}, \textsf{q5f31ssz3}, and \textsf{q5f33ssz3}, which are 1:5 mergers started at $z=3$ but with different gas fractions in the primary and secondary galaxies, with which we can quantify the effect of gas content on the AGN triggering. These three mergers are set to be co-planar (with inclination angle $i=0^{\circ}$) and prograde. Considering that different inclination angles, which correspond to different orbital angular momenta of the merging systems, can affect the morphology of the merged galaxy \citep[e.g.,][]{2005ApJ...622L...9S, 2017MNRAS.470.3946S, 2019SCPMA}, we then set up the \textsf{q5f33ssz3i} system with inclination angle $i=45^{\circ}$ but keep other parameters the same as in \textsf{q5f33ssz3} to investigate the inclination angle effect.
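Since the naming scheme above is fully mechanical, a run name can be decoded programmatically. The following sketch (a reader-side convenience with hypothetical function names, not part of the simulation pipeline) parses a run name into its physical parameters:

```python
import re

# Pattern for the naming scheme described above; the 'i' (inclined by 45 deg)
# and 'p<kpc>' (explicit pericenter) suffixes are optional.
NAME_RE = re.compile(r"q(\d+)f(\d)(\d)([se])([se])z(\d)(i)?(?:p(\d+))?$")

def parse_run_name(name):
    """Decode a run name such as 'q5f03esz1p10' into its parameters."""
    m = NAME_RE.match(name)
    if m is None:
        raise ValueError(f"not a valid run name: {name}")
    q, f1, f2, t1, t2, z, incl, peri = m.groups()
    kind = {"s": "spiral", "e": "elliptical"}
    return {
        "mass_ratio": f"1:{q}",
        "f_gas": (int(f1) / 10, int(f2) / 10),      # gas fractions in units of 0.1
        "types": (kind[t1], kind[t2]),              # (primary, secondary)
        "z_init": int(z),
        "inclined_45deg": incl is not None,
        "r_peri_kpc": int(peri) if peri else None,  # None -> 20% of primary R_vir
    }
```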
We also set up galaxy mergers with mass ratio 1:10 and $i=0^{\circ}$ (\textsf{q10f33ssz3}) and $i=45^{\circ}$ (\textsf{q10f33ssz3i}) to analyze how the triggering of AGN activity is affected by different mass ratios. The above 6 galaxy mergers are put into a parabolic Keplerian orbit (eccentricity $e = 1$), with the initial separation set as the sum of the virial radii of the primary and secondary galaxies, and the pericenter set to 20\% of the virial radius of the primary galaxy. At redshift $z=1$, the fraction of elliptical galaxies increases compared to that at $z=3$ \citep[e.g.,][]{2008ApJS..175..356H, 2010ApJ...709..644I, 2014ARA&A..52..291C}. We therefore simulate one spiral-spiral (\textsf{q5f33ssz1}) and one elliptical-spiral (\textsf{q5f03esz1}) merging system started at $z=1$ to identify their differences from those started at $z=3$. These two systems have the same orbital setup as the systems started at $z=3$. In addition, we set up an elliptical-spiral minor merger (\textsf{q5f03esz1p10}) with a smaller pericenter $r_{\mathrm{p}} = 10\ \mathrm{kpc}$ to make a closer encounter at the first pericentric passage and identify whether the gas transfer can be significantly changed in the merging process. In our simulation, a spiral galaxy consists of a dark matter halo, a stellar bulge, a disk component with both stars and gas included, and a central MBH. An elliptical galaxy includes a dark matter halo, a stellar bulge, and a central MBH. For the spiral galaxy, we use a Hernquist profile \citep{1990ApJ...356..359H} to describe its dark matter halo, with the virial mass $M_{\mathrm{vir}}$ given in Table \ref{tab:ini_galaxy}. The disk mass is set as 0.04 of the virial mass, i.e., $m_{\mathrm{d}} = 0.04 M_{\mathrm{vir}}$. Inside the disk, the gas fraction $f_{\mathrm{gas}}$ varies from 0.1 to 0.3 for spiral galaxies.
The galaxy bulge, which is also assumed to follow a Hernquist profile, has a mass of 0.008 of the virial mass, which indicates an initial bulge-to-total ratio B/T=0.2. According to the $M_{\mathrm{BH}}$-$M_{\mathrm{Bulge}}$ relation \citep[e.g.,][]{2003ApJ...589L..21M}, we set the central MBH mass to a fraction $m_{\mathrm{BH}} = 1.0875\times 10^{-5}$ of the virial mass, to guarantee the establishment of a typical and reasonable spiral galaxy. The MBH particle is placed at the galactic center as a sink particle, which accretes the surrounding gas particles, absorbing both their mass and momentum. For the elliptical galaxy, the disk and gas components are excluded. Both the bulge and the dark matter halo are described by Hernquist profiles. The bulge mass fraction is $m_{\mathrm{b}} = 0.05$, and the BH mass fraction is $m_{\mathrm{BH}} = 8.0 \times 10^{-5}$, which is also set based on the $M_{\mathrm{BH}}$-$M_{\mathrm{Bulge}}$ relation. All other parameters of the elliptical and spiral galaxies are listed in Table \ref{tab:ini_galaxy}, and the corresponding mass resolutions and softening lengths of the four particle types (dark matter, bulge, disk, and gas) are listed in Table \ref{tab:ini_resolution}. \begin{table*} \centering \caption{Physical parameters for constructed galaxy mergers} \begin{tabular}{ccccccccc} \hline Simulation & Galaxy Type & $M_{\mathrm{vir}1}(M_{\odot})$ & $M_{\mathrm{vir}2}(M_{\odot})$ & q & $f_{\mathrm{gas}1}$ & $f_{\mathrm{gas}2}$ & $z$ & Notes\\ \hline \textsf{q5f13ssz3} & spiral + spiral & $2.27\times 10^{11}$ & $4.54\times 10^{10}$ & 1:5 & 0.1 & 0.3 & 3 & ... \\ \textsf{q5f31ssz3} & spiral + spiral & $2.27\times 10^{11}$ & $4.54\times 10^{10}$ & 1:5 & 0.3 & 0.1 & 3 & ... \\ \textsf{q5f33ssz3} & spiral + spiral & $2.27\times 10^{11}$ & $4.54\times 10^{10}$ & 1:5 & 0.3 & 0.3 & 3 & ...
\\ \textsf{q5f33ssz3i} & spiral + spiral & $2.27\times 10^{11}$ & $4.54\times 10^{10}$ & 1:5 & 0.3 & 0.3 & 3 & Inclined by $45^{\circ}$ \\ \hline \textsf{q10f33ssz3} & spiral + spiral & $2.27\times 10^{11}$ & $2.27\times 10^{10}$ & 1:10 & 0.3 & 0.3 & 3 & ... \\ \textsf{q10f33ssz3i} & spiral + spiral & $2.27\times 10^{11}$ & $2.27\times 10^{10}$ & 1:10 & 0.3 & 0.3 & 3 & Inclined by $45^{\circ}$ \\ \hline \textsf{q5f33ssz1} & spiral + spiral & $2.0\times 10^{12}$ & $4.0\times 10^{11}$ & 1:5 & 0.3 & 0.3 & 1 & ...\\ \textsf{q5f03esz1} & elliptical + spiral & $2.0\times 10^{12}$ & $4.0\times 10^{11}$ & 1:5 & 0 & 0.3 & 1 & ... \\ \textsf{q5f03esz1p10} & elliptical + spiral & $2.0\times 10^{12}$ & $4.0\times 10^{11}$ & 1:5 & 0 & 0.3 & 1 & $r_{p} = 10$\, kpc \\ \hline \end{tabular} \label{tab:ini_setup} \end{table*} \begin{table*} \caption{Physical parameters of individual galaxies in our simulation.} \begin{center} \begin{tabular}{c|ccc|ccc} \hline \multicolumn{1}{c}{} & \multicolumn{3}{|c|}{$z = 1$} & \multicolumn{3}{c}{$z = 3$}\\ \cline{2-7} Symbol & Primary(E) & Primary(S) & Secondary(S) & Primary & Secondary(1:5) & Secondary(1:10)\\ (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ \hline $M_{\mathrm{vir}}\ (M_{\odot})$ & $2.0 \times 10^{12}$ & $2.0 \times 10^{12}$ & $4.0 \times 10^{11}$ & $ 2.3 \times 10^{11}$ & $4.5 \times 10^{10}$ & $2.3 \times 10^{10}$\\ $m_{\mathrm{d}}\ (M_{\mathrm{vir}})$ & 0 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04\\ $m_{\mathrm{b}}\ (M_{\mathrm{vir}})$ & 0.05 & 0.008 & 0.008 & 0.008 & 0.008 & 0.008\\ $M_{\mathrm{BH}}\ (M_{\odot})$ & $1.6 \times 10^{8}$ & $2.2 \times 10^{7}$ & $4.4 \times 10^{6}$ & $3.0 \times 10^{6}$ & $6.0 \times 10^{5}$ & $3.0 \times 10^{5}$\\ $j_{\mathrm{d}}\ (j_{\mathrm{halo}})$ & 0 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04\\ $R_{200}\ (\mathrm{kpc})$ & 178.50 & 178.50 & 104.42 & 46.53 & 27.08 & 21.59\\ $R_{\mathrm{H}}\ (\mathrm{kpc})$ & 67.12 & 67.12 & 39.26 & 8.66 & 5.04 & 4.02\\ $R_{\mathrm{S}}\ (\mathrm{kpc})$ & 59.50 & 59.50 & 34.80 & 
5.17 & 3.01 & 2.40\\ $H\ (\mathrm{kpc})$ & 0 & 5.73 & 3.35 & 0.94 & 0.55 & 0.44\\ $Z_{0}\ (\mathrm{kpc})$ & 0 & 0.57 & 0.34 & 0.10 & 0.05 & 0.04\\ $A\ (\mathrm{kpc})$ & 2.78 & 1.15 & 0.67 & 0.19 & 0.11 & 0.09\\ \hline \end{tabular} \end{center} \label{tab:ini_galaxy} Note: The left column lists the symbols used to describe a galaxy, which from top to bottom represent the virial mass, disk mass fraction, bulge mass fraction, BH mass, disk spin, halo virial radius, scale radius of the Hernquist profile, halo scale radius, disk scale length, disk thickness, and bulge scale radius, respectively. Columns (2)-(7) list the corresponding values; (E) stands for the elliptical galaxy, (S) for the spiral one, and (1:5) and (1:10) denote the secondary spiral galaxies in the 1:5 and 1:10 minor mergers, respectively. \end{table*} \begin{table*} \centering \caption{Mass and spatial resolutions for different particles at $z = 1$ and $z = 3$ } \begin{tabular}{c|cc|cc} \hline % & \multicolumn{2}{|c|}{Mass Resolution ($M_{\odot}$)} & \multicolumn{2}{c}{Softening Length (pc)}\\ \cline{2-5} Particle Type & $z = 1$ & $z = 3$ & $z = 1$ & $z = 3$\\ \hline Dark Matter & $1.1 \times 10^6$ & $1.1 \times 10^5$ & 30 & 30 \\ Bulge & $3.7 \times 10^{4}$ & $3.7 \times 10^{3}$ & 10 & 10\\ Disk & $3.7 \times 10^{4}$ & $3.7 \times 10^{3}$ & 10 & 10\\ Gas & $4.6 \times 10^{4}$ & $4.6 \times 10^{3}$ & 20 & 20\\ \hline \end{tabular} \label{tab:ini_resolution} \end{table*} \subsection{Identification of AGN activities} During a galaxy merger, gas can be driven to the galaxy center and trigger nuclear activity, but whether this activity is observed depends on the capabilities of current telescopes. Therefore, in this paper, we set two thresholds in bolometric luminosity ($L_{\rm bol}=10^{43} {\rm erg~s^{-1}}$ and $L_{\rm bol}=10^{44} {\rm erg~s^{-1}}$) and two in Eddington ratio ($f_{\rm Edd}=0.01$ and $f_{\rm Edd}=0.05$) to match the varied detection capabilities of different telescopes.
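As a concrete illustration of this bookkeeping, the four cuts can be expressed as a small helper. This is a hypothetical sketch, not part of the simulation code; it assumes the standard Eddington luminosity $L_{\rm Edd} \simeq 1.26\times10^{38}\,(M_{\rm BH}/M_\odot)\,{\rm erg~s^{-1}}$:

```python
# Hypothetical helper illustrating the four detection thresholds described
# above; not part of the simulation code.

EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass, erg/s

def eddington_ratio(l_bol, m_bh_msun):
    """f_Edd = L_bol / L_Edd for a black hole of mass m_bh_msun (in Msun)."""
    return l_bol / (EDD_PER_MSUN * m_bh_msun)

def detection_flags(l_bol, m_bh_msun):
    """Which of the four adopted thresholds this nucleus passes."""
    f_edd = eddington_ratio(l_bol, m_bh_msun)
    return {
        "lbol43": l_bol >= 1e43,   # L_bol >= 1e43 erg/s
        "lbol44": l_bol >= 1e44,   # L_bol >= 1e44 erg/s
        "fedd001": f_edd >= 0.01,
        "fedd005": f_edd >= 0.05,
    }
```

For example, a $10^{5}\,M_\odot$ MBH radiating at $1.26\times10^{43}\,{\rm erg~s^{-1}}$ sits exactly at its Eddington luminosity and passes every cut except the brighter luminosity threshold.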
\section{Results} \label{sec:result} \begin{figure*} \centering % \includegraphics[scale=0.43]{RAA-2019-0120fig1.pdf} % \caption{Evolution of the nuclear bolometric luminosities (first row), Eddington ratios (second row), MBH masses (third row), total star formation rate (fourth row), and the separation of the primary and secondary MBHs (fifth row). Columns from left to right show the results obtained from the four simulations starting from $z = 3$ with mass ratio of $1:5$. The red and blue solid lines in the first to third rows represent the corresponding evolution curves for the primary and secondary MBHs, respectively. In each column, the vertical dotted lines from left to right mark the cosmic time at the first, second, third, and fourth pericentric passages, respectively. The three horizontal dotted lines in the bottom row indicate separations of the two MBHs at the first to third pericentric passages, respectively.} % \label{fig:bhacc_z3_5} \end{figure*} \begin{figure*} \centering % \includegraphics[scale=0.43]{RAA-2019-0120fig2.pdf} % \caption{Parameter evolution of the minor mergers started at $z=3$ with mass ratio 1:10 (\textsf{q10f33ssz3} in left and \textsf{q10f33ssz3i} in right). Lines and colors are the same as shown in Figure~\ref{fig:bhacc_z3_5}.} % \label{fig:bhacc_z3_10} \end{figure*} \begin{figure*} % \centering % \includegraphics[scale=0.43]{RAA-2019-0120fig3.pdf} % \caption{Parameter evolution of the minor mergers started at $z=1$ with mass ratio 1:5. Columns from left to right show the evolution curves of \textsf{q5f33ssz1}, \textsf{q5f03esz1}, and \textsf{q5f03esz1p10} simulations, respectively. 
Lines and colors are the same as shown in Figure~\ref{fig:bhacc_z3_5}.} % \label{fig:bhacc_z1} \end{figure*} Figures~\ref{fig:bhacc_z3_5}, \ref{fig:bhacc_z3_10}, and \ref{fig:bhacc_z1} show the evolution of the nuclear bolometric luminosity, Eddington ratio, BH mass, star formation rate (SFR), and BH separation for all nine minor mergers listed in Table \ref{tab:ini_setup}. These nine sets of evolution curves show how the primary and secondary galaxies play their roles in a minor merger. Based on our constructed galaxy mergers, we can see clearly how the orbital decay depends on the initial conditions. The orbital decay of the first three simulations in Figure~\ref{fig:bhacc_z3_5} and of the first two simulations in Figure \ref{fig:bhacc_z1} is quite similar before the first three pericentric passages, which indicates that the dynamical friction at the early stage of minor mergers is determined by the mass ratio rather than the gas content \citep{2002MNRAS.331..935Y}. Those minor mergers with an inclination angle (last column of Figure \ref{fig:bhacc_z3_5} and second column of Figure \ref{fig:bhacc_z3_10}) or a smaller $r_{\rm P}$ (last two columns of Figure \ref{fig:bhacc_z1}) merge faster. When the mass ratio decreases from $1:5$ (Figure \ref{fig:bhacc_z3_5}) to $1:10$ (Figure \ref{fig:bhacc_z3_10}), the merging time increases by a factor of $\sim 2$, which is easily understood since the dynamical friction timescale is inversely proportional to the mass of the secondary galaxy. At the beginning of a minor merger, the bolometric luminosities and Eddington ratios of the primary and secondary MBHs are determined by their initial gas fractions. For the \textsf{q5f13ssz3}, \textsf{q5f03esz1} and \textsf{q5f03esz1p10} simulations, the gas fraction of the primary galaxy is lower than that of the secondary one, which leads to correspondingly lower $L_{\rm bol}$ and $f_{\rm Edd}$.
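The factor-of-two scaling quoted above can be sketched with the classical Chandrasekhar dynamical-friction time, $t_{\rm df} \sim 1.17\, r^2 v_c / (\ln\Lambda\, G\, M_{\rm sat})$. The numbers below are assumed for illustration only; this is not the orbit integration actually performed by the simulations:

```python
# Illustrative Chandrasekhar dynamical-friction sinking time; the point is
# only the scaling t_df proportional to 1/M_sat, not the absolute value.

G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / Msun
KPC_PER_KMS_IN_GYR = 0.978  # 1 kpc/(km/s) is about 0.978 Gyr

def t_df_gyr(r_kpc, v_c_kms, m_sat_msun, ln_lambda=3.0):
    """Dynamical-friction decay time from radius r_kpc for a satellite of
    mass m_sat_msun orbiting at circular speed v_c_kms (km/s)."""
    t = 1.17 * r_kpc**2 * v_c_kms / (ln_lambda * G * m_sat_msun)
    return t * KPC_PER_KMS_IN_GYR
```

Halving the satellite mass (the step from the 1:5 to the 1:10 secondary, $4.54\times 10^{10} \to 2.27\times 10^{10}\,M_\odot$) doubles $t_{\rm df}$, consistent with the factor $\sim 2$ seen in the merging times.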
For the other six cases, the primary MBHs still have lower $f_{\rm Edd}$, but their $L_{\rm bol}$ are higher than those of the secondary MBHs during most of the evolution. Once the two merging galaxies go through the first pericentric passage, the primary galaxy begins to rob gas from the secondary galaxy, and the bolometric luminosity of the primary MBH becomes systematically larger than that of the secondary MBH in the gas-rich mergers. For the two elliptical-spiral minor mergers (right two columns of Figure~\ref{fig:bhacc_z1}), the evidence of gas capture is clearer: both the $L_{\rm bol}$ and $f_{\rm Edd}$ of the primary MBH increase dramatically after the first pericenter, and their $f_{\rm Edd}$ become comparable with that of the secondary MBH after the fourth pericentric passage, which means their $L_{\rm bol}$ are $\sim 5$ times higher than that of the secondary MBH. This gas capture actually weakens the nuclear activity of the secondary MBH. On the other hand, the tidal torques can also enhance the gas concentration onto the secondary MBH. In all nine cases both $L_{\rm bol}$ and $f_{\rm Edd}$ increase after the first to fourth pericentric passages, as marked by the four vertical dotted lines in each panel. The gas capture and tidal torques together produce the oscillating $L_{\rm bol}$ and $f_{\rm Edd}$ evolution curves. Due to the similar evolution processes, the MBH masses of the two galaxies grow with similar trends in the minor mergers started at $z=3$ (Figures \ref{fig:bhacc_z3_5} and \ref{fig:bhacc_z3_10}). The primary MBH grows by about $\sim 0.6$ dex, while the secondary MBH grows by at most $\sim 0.2$ dex. For the galaxy mergers started at $z=1$ (Figure~\ref{fig:bhacc_z1}), since their galaxy and MBH masses are 10 times larger than those at $z=3$, the minor merger cannot supply enough gas for accretion and the MBH masses increase by less than $0.1$ dex.
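The quoted factor of $\sim 5$ is just Eddington bookkeeping: since $L_{\rm bol} = f_{\rm Edd} L_{\rm Edd}$ and $L_{\rm Edd} \propto M_{\rm BH}$, equal Eddington ratios imply a luminosity ratio equal to the mass ratio. A generic arithmetic check, not the simulation output:

```python
# L_bol,1 / L_bol,2 = (M1 / M2) * (f_Edd,1 / f_Edd,2); generic check of the
# Eddington scaling, with hypothetical input values.

def lbol_ratio(m1, m2, f_edd1, f_edd2):
    """Bolometric-luminosity ratio of two MBHs from masses and Eddington ratios."""
    return (m1 / m2) * (f_edd1 / f_edd2)
```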
The amplitudes of the SFR evolution differ among the 9 galaxy mergers and are determined by the total amount of gas contained in the two galaxies. The SFR of the \textsf{q5f13ssz3} merger (left column of Figure~\ref{fig:bhacc_z3_5}) is the weakest because its two galaxies contain the smallest amount of gas. On the contrary, \textsf{q5f33ssz1} (left column of Figure~\ref{fig:bhacc_z1}) contains the largest amount of gas and thus shows the strongest SFR. \begin{figure*} \centering % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig4a.jpg}\\ \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig4b.jpg}\\ % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig4c.jpg}\\ % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig4d.jpg}\\ % \caption{Snapshots of the four minor mergers started at $z = 3$ with mass ratio 1:5. Every two rows from top to bottom correspond to the simulations of \textsf{q5f13ssz3}, \textsf{q5f31ssz3}, \textsf{q5f33ssz3}, and \textsf{q5f33ssz3i}. In each pair of rows, the first row shows the four snapshots viewed face-on (perpendicular to the galactic plane of the primary galaxy), and the second row shows them viewed edge-on (parallel to the galactic plane of the primary galaxy). Numbers 1-4 at the top left of each panel represent the four snapshots during the merger: (1) the first pericentric passage, (2) the first apocentric passage after the first pericentric passage, (3) the time when one of the two nuclei is active, and (4) the last output of the simulation. The separation between the two MBHs is given at the bottom left of each panel. The black circles in each panel show the positions of the two MBHs; MBHs with $L_{\rm bol}>10^{43}{\rm erg~ s^{-1}}$ are shown as filled black circles, while MBHs with $L_{\rm bol}<10^{43}{\rm erg~ s^{-1}}$ are shown as open black circles.
The radii of the circles are not scaled to the real sizes of the MBHs.} % \label{fig:galaxy1} \end{figure*} Figures~\ref{fig:galaxy1} and \ref{fig:galaxy2} show the morphology of the merging galaxies at four different snapshots for each simulation: (1) the first pericentric passage, (2) the first apocentric passage after the first pericentric passage, (3) the time when one of the two nuclei is active, and (4) the last output of the simulation. In the two figures, each pair of rows shows the four snapshots viewed from face-on (first row) and edge-on (second row) angles. After the first pericentric passage a tidal bridge appears, which is the channel for material transport between the two galaxies. In the cases where the secondary galaxy collides with an inclination angle of $i=45^{\circ}$ (\textsf{q5f33ssz3i} and \textsf{q10f33ssz3i}), a tidal tail outside the galactic plane can be clearly seen. Tidal tails and bridges are regarded as evidence of galaxy mergers \citep{1995ApJ...438L..75M, 1996ApJ...471..115B, 2003AJ....126.1227K, 2007A&A...468...61D, 2007AJ....133..791S}. \begin{figure*} \centering % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig5a.jpg}\\ % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig5b.jpg}\\ % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig5c.jpg}\\ % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig5d.jpg}\\ % \includegraphics[width=0.9\textwidth]{RAA-2019-0120fig5e.jpg}\\ % \caption{Snapshots of the simulations; legends are the same as in Figure~\ref{fig:galaxy1}, except that the simulations shown here are \textsf{q10f33ssz3}, \textsf{q10f33ssz3i}, \textsf{q5f33ssz1}, \textsf{q5f03esz1}, and \textsf{q5f03esz1p10}, respectively. } % \label{fig:galaxy2} \end{figure*} For galaxy mergers with different galaxy types and gas fractions, the MBH activities in the primary and secondary galaxies are triggered at different times, depending on the gas contained in each galaxy.
Figure~\ref{fig:time_all} shows the duration time at different separations, bolometric luminosities, and Eddington ratios for the paired galaxies. Here the duration time means the observable timescale of an active nucleus at a given separation, bolometric luminosity, and Eddington ratio in each simulation run. From this figure we can draw the following conclusions: 1) for the spiral-spiral minor mergers, the black hole activities show a dichotomous distribution: one component peaks at larger separations ($\gtrsim 50$ kpc), and the other peaks at sub-kpc scales. 2) for the primary galaxies in the spiral-spiral minor mergers, following the mass increase of the central MBH, $L_{\rm bol}$ increases at the sub-kpc activity peak, while the corresponding $f_{\rm Edd}$ remains similar. 3) for the secondary galaxies in the spiral-spiral minor mergers, their $L_{\rm bol}$ at different separations shows only small oscillations, but their $f_{\rm Edd}$ decreases toward smaller separations. 4) for the two elliptical-spiral mergers, if the two galaxies have a closer encounter at the first pericentric passage (\textsf{q5f03esz1p10}, bottom row), the primary elliptical galaxy captures gas from the secondary spiral galaxy more easily, and its central MBH accretes more gas and becomes more active than in the wider-encounter case (\textsf{q5f03esz1}, 8th row). With the duration times of the two merging galaxies, we find that dAGNs may emerge in several merging cases when using the lowest thresholds of $L_{\rm bol}=10^{43}{\rm erg~s^{-1}}$ or $f_{\rm Edd}$ as set in Section \ref{sec:method}. For the duration time of each MBH, Table \ref{tab:time_fraction} lists the results based on the two $L_{\rm bol}$ and two $f_{\rm Edd}$ thresholds, respectively. In all the minor merger cases, the secondary MBHs never reach $L_{\rm bol} \ge 10^{44}~{\rm erg~s^{-1}}$.
For each simulation, we count the time duration when both MBHs are active above the given $L_{\rm bol}$ or $f_{\rm Edd}$ threshold, and list it in the row named `dAGN'. We find that not all minor mergers can trigger observable dAGNs with a significant time duration. Comparing \textsf{q5f13ssz3} with the other $z = 3$ cases, a system in which the primary galaxy has a lower gas fraction than the secondary galaxy shows a significantly lower dAGN detection rate. The last row `Offset' of each simulation listed in Table \ref{tab:time_fraction} shows the time duration when the two nuclei reach the $L_{\rm bol}$ or $f_{\rm Edd}$ threshold and the secondary nucleus has a larger luminosity or higher Eddington ratio than the primary one, i.e., an oAGN system. From the `Offset' fractions obtained under the $f_{\rm Edd}=0.01$ threshold, the current nine merging systems only provide weak clues that oAGNs appear more frequently in gas-rich mergers whose two galaxies have different gas fractions (e.g., \textsf{q5f13ssz3} and \textsf{q5f31ssz3}) than in gas-rich mergers with similar gas fractions (e.g., the other spiral-spiral mergers) or in elliptical-spiral mergers. Figure~\ref{fig:time_frac} summarizes the AGN fractions of the primary (top row) and secondary (bottom row) MBHs detected at the different luminosity and Eddington ratio thresholds. The dichotomous distributions shown in Figure~\ref{fig:time_all} appear even more clearly in Figure~\ref{fig:time_frac}. The peaks located at larger separations are caused by the strong interaction after the first pericentric passage (see details in Figures~\ref{fig:bhacc_z3_5}, \ref{fig:bhacc_z3_10} and \ref{fig:bhacc_z1}). It is not surprising that the time fraction reaches its maximum at small separations, since the galaxy interaction induces nuclear activity there. The AGN fractions in the three $z = 1$ simulations are hard to recognize because the two nuclei never reach a separation of $\sim$ 100 pc in our simulations.
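The $t_{\rm AGN}$ and $t_{\rm AGN}/t_{\rm tot}$ entries of Table \ref{tab:time_fraction} amount to counting snapshots above a threshold. A minimal sketch with hypothetical helper names, assuming uniformly spaced simulation outputs:

```python
# Count active time from sampled light curves of the two nuclei, assuming a
# constant output spacing dt (in Gyr). `dagn_time` counts the time both
# nuclei are simultaneously above the threshold.

def active_time(lbol, dt, thresh):
    """Total time a single nucleus spends at or above the threshold."""
    return dt * sum(1 for l in lbol if l >= thresh)

def dagn_time(lbol1, lbol2, dt, thresh):
    """Total time both nuclei are simultaneously at or above the threshold."""
    return dt * sum(1 for a, b in zip(lbol1, lbol2)
                    if a >= thresh and b >= thresh)
```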
None of the secondary nuclei ever becomes more luminous than $L_{\mathrm{bol}} = 10^{44}\ \mathrm{erg\ s^{-1}}$. \begin{table*} \centering \caption{AGN fractions in different thresholds of $L_{\rm bol}$ or $f_{\rm Edd}$.} \begin{tabular}{cc|cccc|cccc} \hline % \multicolumn{2}{c|}{Run} & \multicolumn{2}{c}{$L_{\mathrm{bol}} = 10^{43}\ \mathrm{erg\ s^{-1}}$} & \multicolumn{2}{c|}{$L_{\mathrm{bol}} = 10^{44}\ \mathrm{erg\ s^{-1}}$} & \multicolumn{2}{c}{$f_{\mathrm{Edd}} = 0.01$} & \multicolumn{2}{c}{$f_{\mathrm{Edd}} = 0.05$}\\ \cline{3-10} % \multicolumn{2}{c|}{} & $t_{\mathrm{AGN}}$ & $t_{\mathrm{AGN}}/t_{\mathrm{tot}}$ & $t_{\mathrm{AGN}}$ & $t_{\mathrm{AGN}}/t_{\mathrm{tot}}$ & $t_{\mathrm{AGN}}$ & $t_{\mathrm{AGN}}/t_{\mathrm{tot}}$ & $t_{\mathrm{AGN}}$ & $t_{\mathrm{AGN}}/t_{\mathrm{tot}}$\\ \hline % \multirow{2}{*}{\textsf{q5f13ssz3}} & BH$_{1}$ & 1.31 & 0.52 & 0.06 & 0.03 & 1.92 & 0.77 & 0.42 & 0.17 \\ % & BH$_{2}$ & 0.01 & 0.004 & 0 & 0 & 1.03 & 0.41 & 0.22 & 0.09 \\ % & dAGN & 0 & 0 & 0 & 0 & 0.36 & 0.14 & 0 & 0 \\ & Offset & 0.002 & 0.0008 & 0 & 0 & 0.37 & 0.15 & 0.03 & 0.01 \\ \hline % \multirow{2}{*}{\textsf{q5f31ssz3}} & BH$_{1}$ & 1.87 & 0.71 & 0.17 & 0.07 & 2.15 & 0.82 & 0.49 & 0.19 \\ & BH$_{2}$ & 0.03 & 0.01 & 0 & 0 & 1.79 & 0.68 & 0.11 & 0.04\\ & dAGN & 0 & 0 & 0 & 0 & 1.20 & 0.45 & 0.01 & 0.004 \\ & Offset & 0 & 0 & 0 & 0 & 0.43 & 0.17 & 0.09 & 0.03\\ \hline % \multirow{2}{*}{\textsf{q5f33ssz3}} & BH$_{1}$ & 1.87 & 0.73 & 0.05 & 0.02 & 2.05 & 0.79 & 0.23 & 0.09\\ & BH$_{2}$ & 0.03 & 0.01 & 0 & 0 & 1.31 & 0.51 & 0.27 & 0.11\\ & dAGN & 0.01 & 0.004 & 0 & 0 & 1.13 & 0.44 & 0.007 & 0.003\\ & Offset & 0 & 0 & 0 & 0 & 0.15 & 0.06 & 0.05 & 0.02\\ \hline % \multirow{2}{*}{\textsf{q5f33ssz3i}} & BH$_{1}$ & 2.00 & 0.80 & 0.05 & 0.02 & 2.03 & 0.80 & 0.12 & 0.05 \\ & BH$_{2}$ & 0.003 & 0.001 & 0 & 0 & 1.02 & 0.41 & 0.28 & 0.11 \\ & dAGN & 0.002 & 0.001 & 0 & 0 & 0.92 & 0.37 & 0 & 0\\ & Offset & 0 & 0 & 0 & 0 & 0.11 & 0.04 & 0.02 & 0.008 \\ \hline %
\multirow{2}{*}{\textsf{q10f33ssz3}} & BH$_{1}$ & 2.69 & 0.62 & 0.19 & 0.44 & 2.71 & 0.62 & 0.18 & 0.04 \\ & BH$_{2}$ & 0.05 & 0.001 & 0 & 0 & 1.49 & 0.34 & 0.33 & 0.08 \\ & dAGN & 0.02 & 0.005 & 0 & 0 & 1.33 & 0.31 & 0.004 & 0.001 \\ & Offset & 0 & 0 & 0 & 0 & 0.28 & 0.06 & 0.22 & 0.05 \\ \hline % \multirow{2}{*}{\textsf{q10f33ssz3i}} & BH$_{1}$ & 2.45 & 0.57 & 0.11 & 0.03 & 2.39 & 0.56 & 0.28 & 0.07 \\ & BH$_{2}$ & 0 & 0 & 0 & 0 & 1.16 & 0.27 & 0.26 & 0.06 \\ & dAGN & 0 & 0 & 0 & 0 & 0.98 & 0.23 & 0 & 0\\ & Offset & 0 & 0 & 0 & 0 & 0.15 & 0.04 & 0.005 & 0.001 \\ \hline % \multirow{2}{*}{\textsf{q5f33ssz1}} & BH$_{1}$ & 0.03 & 0.006 & 0 & 0 & 0 & 0 & 0 & 0 \\ % & BH$_{2}$ & 0.07 & 0.01 & 0 & 0 & 0.088 & 0.02 & 0 & 0 \\ & dAGN & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & Offset & 0.05 & 0.01 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline % \multirow{2}{*}{\textsf{q5f03esz1}} & BH$_{1}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ % & BH$_{2}$ & 0.005 & 0.001 & 0 & 0 & 0.07 & 0.01 & 0 & 0 \\ & dAGN & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & Offset & 0.006 & 0.001 & 0 & 0 & 0.06 & 0.01 & 0 & 0 \\ \hline % \multirow{2}{*}{\textsf{q5f03esz1p10}} & BH$_{1}$ & 0.01 & 0.003 & 0 & 0 & 0 & 0 & 0 & 0 \\ & BH$_{2}$ & 0.10 & 0.03 & 0 & 0 & 0.15 & 0.04 & 0 & 0 \\ & dAGN & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & Offset & 0.05 & 0.01 & 0 & 0 & 0.14 & 0.04 & 0 & 0 \\ \hline % \end{tabular} % \label{tab:time_fraction} \end{table*} \begin{figure*} \centering % \includegraphics[width=0.99\textwidth]{RAA-2019-0120fig6.pdf} % \caption{Duration time of the MBH activities at different separations ($\log(d/{\rm kpc})$) between the primary and secondary MBHs, bolometric luminosities (left two columns), and Eddington ratios (right two columns) for the primary (the first and third columns) and secondary (the second and fourth columns) galaxies. Simulations from top to bottom rows correspond to that shown in Table \ref{tab:ini_setup} from top to bottom rows, respectively. 
The right colorbar shows the exact duration time in different colors.} % \label{fig:time_all} \end{figure*} \begin{figure*} % \centering % \includegraphics[width=0.99\textwidth]{RAA-2019-0120fig7.pdf} % \caption{The AGN fractions at different separations detected in the two luminosity ($L_{\mathrm{bol}} = 10^{43}\ \mathrm{erg/s}$ for left column, $L_{\mathrm{bol}} = 10^{44}\ \mathrm{erg/s}$ for middle-left column) and two Eddington ratio ($f_{\mathrm{Edd}} = 0.01$ for middle-right column, $f_{\mathrm{Edd}} = 0.05$ for right column) thresholds for the primary (top row) and secondary (bottom row) MBHs. Line colors from blue to red show the results of the 9 simulations listed from top to bottom in Table \ref{tab:ini_setup}.} % \label{fig:time_frac} \end{figure*} \section{Conclusions} \label{sec:conclusion} We perform nine hydrodynamical simulations with different settings of the progenitor galaxies (mass ratio, gas fraction, starting redshift, and projected separation) to investigate how nuclear activity can be triggered in galaxy minor mergers. We find that, similar to galaxy major mergers, galaxy minor mergers can trigger dAGNs but with a substantially smaller time duration (typically $\le 0.01$\,Gyr), more than an order of magnitude smaller than that of major mergers (typically $\le 0.24$\,Gyr) \citep[e.g.,][]{2019SCPMA}. Minor mergers can also result in oAGNs with a time duration of $\le 0.22$\,Gyr. The Eddington ratios of the nuclear activities induced by minor mergers can hardly exceed $0.1$. As a comparison, the nuclear activities induced by major mergers can last for more than a hundred million years with Eddington ratios larger than $0.1$.
For all the simulations, the Eddington ratio of the primary galaxy increases after the first pericentric passage, whether the primary galaxy is initially gas-poor ($f_{\rm gas} = 0.1$ in \textsf{q5f13ssz3} and $f_{\rm gas} = 0$ in \textsf{q5f03esz1} and \textsf{q5f03esz1p10}) or gas-rich (the other six simulations), since the primary galaxy can always rob gas from its companion. After the fourth pericentric passage, the Eddington ratio of the secondary galaxy decreases gradually as its gas is either consumed by star formation or captured by the primary galaxy during the interaction. In the dry-wet mergers at $z = 1$ (\textsf{q5f03esz1} and \textsf{q5f03esz1p10}), as the two galaxies approach each other, the primary galaxy gradually robs more and more gas from the secondary galaxy to feed its central engine. However, the amount of gas is still inadequate to raise the nuclear activity to a higher Eddington ratio. The co-planar gas-rich mergers generally trigger relatively longer-lived AGNs (including dAGNs and oAGNs) than those merging with an inclination angle (e.g., $45^\circ$ in our simulations), because the galaxy interaction tends to be strongest in the co-planar case, in which more gas can be transferred to the vicinity of the central MBH. From all 9 runs we find that minor galaxy mergers can trigger dAGN and oAGN systems, but the time duration is relatively short compared with that of gas-rich major mergers \citep{2012ApJ...748L...7V, 2017MNRAS.469.4437C, 2019SCPMA}. Galaxy minor mergers can be responsible for triggering nuclear activity as luminous as $L_{\mathrm{bol}} = 10^{44}\,\mathrm{erg/s}$ if both progenitors are not too dry ($f_{\mathrm{gas}}$ should not be much smaller than $0.1$), especially at small BH separations. \normalem \begin{acknowledgements} This work is supported by the National Key Program for Science and Technology Research and Development (Grant No.
2016YFA0400704), the National Natural Science Foundation of China (NSFC) under grant number 11690024 and 11873056, and the Strategic Priority Program of the Chinese Academy of Sciences (Grant No. XDB 23040100). \end{acknowledgements} \bibliographystyle{raa}
\section{Introduction} \label{Int} With the accumulation of observational data, such as the Type Ia Supernovae (SN Ia) \cite{Miknaitis:2007jd,Davis:2007na}, Cosmic Microwave Background (CMB) \cite{wmap3:2006:1,wmap3:2006:2,wmap3:2006:3,wmap3:2006:4}, Large Scale Structure (LSS) \cite{Tegmark:2006az} and so forth, it is possible for us to unveil, though not yet conclusively, some enigmas in cosmology, such as the nature of dark energy (DE), inflation, the total neutrino mass $\sum m_{\nu}$, the curvature of our Universe $\Omega_{K}$ and even a possible violation of \emph{CPT} conservation in cosmology \cite{Feng:2006dp}. Dark energy, the mysterious source driving the present acceleration of our Universe, has been studied widely in the literature since its discovery in $1998$ \cite{SN98}. However, the nature of DE, encoded in its equation of state (EoS) parameter $w$, remains controversial. Being the simplest candidate for DE and fitting the current data well, the Cosmological Constant (CC), whose EoS remains $-1$, suffers from severe theoretical drawbacks such as the fine-tuning and coincidence problems \cite{SW89}. To ameliorate these problems, dynamical dark energy models were proposed: for example, Quintessence, whose EoS evolves with cosmic time and satisfies $w(z)>-1$ \cite{quint}, Phantom with $w(z)<-1$ \cite{phantom}, and K-essence with $w(z)>-1$ or $w(z)<-1$ \cite{kessence}. As addressed in the literature, recent observations mildly favor DE models with $w(z)$ crossing the cosmological constant boundary during the evolution \cite{quintom,Xia:2005ge,Xia:2006cr,Zhao:2006bt,Xia:2006wd,Zhao:2006qg,Xia:2007km,others}. Unfortunately, the EoS of the above models cannot realize such a ``crossing'' behavior due to the ``No-Go'' Theorem \cite{Xia:2007km,Vikman:2004dc,Kunz:2006wc}.
Quintom, whose EoS can cross the cosmological constant boundary, is mildly favored by the observations and has been investigated extensively since its invention \cite{quintom,study4quintom}. Our Universe has experienced at least two different stages of accelerated expansion: the current acceleration driven by dark energy, and the inflation in the very early Universe \cite{inflation,Guth:1979bh}. The mechanism of inflation can naturally explain the flatness, homogeneity and isotropy of our Universe. Inflation stretches the primordial density fluctuations and seeds the presently observed large scale structure and cosmic microwave background radiation. In $2006$, the WMAP group claimed that the simple scale-invariant primordial spectrum does not fit the three-year WMAP data well \cite{wmap3:2006:1}. Specifically, the Harrison-Zel'dovich-Peebles scale-invariant (HZ) spectrum ($n_s=1,~r=0$) is disfavored at about the $2 \sim 3\sigma$ level, and a large running of the scalar spectral index is still allowed \cite{Easther:2006tv}. The aforementioned key cosmological questions might be answered by virtue of future high-precision astronomical measurements. In particular, the Planck mission of the European Space Agency (ESA) will determine the geometry and contents of our Universe by measuring the CMB with unprecedented accuracy \cite{:2006uk}. Planck will image the full sky with a sensitivity of $\Delta T/T\sim2\times10^{-6}$, an angular resolution of $5'$ and a frequency coverage of $30-857$ GHz. The angular resolution of the Planck measurement is three times better than that of the current WMAP observation, and the noise is lowered by an order of magnitude at around $100$ GHz. These significant improvements permit much more accurate measurements of the CMB power spectra, so that Planck has the power and unique new capability to constrain the cosmological parameters.
In Ref.\cite{:2006uk}, the Planck collaboration presented some sensitivity studies constraining the cosmological parameters with simulated Planck data combined with the future SNAP measurement. They investigate the dynamics of inflation, the neutrino mass, \emph{etc.} in the framework of the $\Lambda$CDM model and find that with Planck one can obtain much more stringent constraints on the cosmological parameters. In our previous works \cite{Xia:2006cr,Xia:2006wd,Zhao:2006qg} we showed that the determination of cosmological parameters such as $\sum m_{\nu}$, $\Omega_{K}$ and the inflationary parameters is strongly affected by the dynamics of the dark energy model, due to the degeneracies between the EoS of DE and these parameters. Furthermore, the dark energy perturbation plays a crucial role in the global fit \cite{Xia:2005ge,Zhao:2005vj}. Therefore, it is fairer and more reliable to forecast the errors of the cosmological parameters in the framework of a dynamical dark energy model rather than assuming a constant $w$ of DE or the $\Lambda$CDM model. In this paper, we study the constraints on $\sum m_{\nu}$, $\Omega_{K}$ as well as the inflationary parameters in the framework of dynamical dark energy models. Using the simulated Planck data, we make a global fit with the MCMC method, paying particular attention to the dark energy perturbation in the full parameter space of the EoS of dark energy. We also stress the role of Planck and CMBpol in detecting a possible \emph{CPT} violation. To obtain the fiducial values of the parameters for the simulation, we first constrain these cosmological parameters with the current observations and find the best-fit models.
Our paper is organized as follows: In Section II we describe the method and the current observational datasets we use; in Section III we present our method for the futuristic simulations in detail; Section IV contains our MCMC fitting results using the current and future observations; and the last section gives our conclusions and discussion. \section{Method and Current Observations} \label{Method} In our studies, we have modified the publicly available Markov Chain Monte Carlo package \emph{CosmoMC}\footnote{Available at: http://cosmologist.info/.} \cite{CosmoMC} to include the dark energy perturbation when the EoS of DE crosses the cosmological constant boundary, as we illustrate later. We assume purely adiabatic initial conditions. Our most general parameter space is: \begin{equation} \label{parameter} {\bf P} \equiv (\omega_{b}, \omega_{c}, \Omega_k, \Theta_{s}, \tau, w_{0}, w_{1}, f_{\nu}, n_{s}, \log[10^{10}A_{s}], \alpha_s, r)~, \end{equation} where $\omega_{b}\equiv\Omega_{b}h^{2}$ and $\omega_{c}\equiv\Omega_{c}h^{2}$ are the physical baryon and cold dark matter densities relative to the critical density, $\Omega_k=1-\Omega_m-\Omega_{{\mathrm{DE}}}$ is the spatial curvature, $\Theta_{s}$ is the ratio (multiplied by 100) of the sound horizon to the angular diameter distance at decoupling, $\tau$ is the optical depth to re-ionization, $f_{\nu}$ is the dark matter neutrino fraction at present, namely, \begin{equation} f_{\nu}\equiv\frac{\rho_{\nu}}{\rho_{{\mathrm{DM}}}}=\frac{\Sigma m_{\nu}}{93.105~eV~\Omega_c h^2}~, \end{equation} $A_{s}$ is defined as the amplitude of the primordial spectrum.
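The definition of $f_{\nu}$ above can be inverted to translate a fitted neutrino fraction into a total neutrino mass; a direct transcription of the formula, for illustration only:

```python
# Sigma m_nu = f_nu * 93.105 eV * Omega_c h^2, and its inverse.

def sum_mnu_ev(f_nu, omega_c_h2):
    """Total neutrino mass in eV from the neutrino dark-matter fraction."""
    return f_nu * 93.105 * omega_c_h2

def f_nu_from_mnu(mnu_ev, omega_c_h2):
    """Neutrino dark-matter fraction from the total neutrino mass in eV."""
    return mnu_ev / (93.105 * omega_c_h2)
```

For instance, with an assumed $\omega_c = 0.105$, a fraction $f_\nu = 0.1$ corresponds to $\sum m_\nu \approx 0.98$ eV.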
We parameterize the primordial power spectrum in the form \begin{equation} \label{ns} n_s(k)=n_s(k_{s0}) + \alpha_{s} \ln \left( \frac{k}{k_{s0}}\right)~, \end{equation} where $k_{s0}$ is a pivot scale which is arbitrary in principle; here we set $k_{s0}=0.05$ Mpc$^{-1}$, and $\alpha_{s}$ is a constant characterizing the ``running'' $dn_s/d\ln k$ of the scalar spectral index. $r$ is the tensor-to-scalar ratio of the primordial spectrum. The scalar spectral index $n_s$ is related to the primordial scalar power spectrum ${\cal P}_{\chi}(k)$ by the definition: \begin{equation} \label{nsdef} n_s(k) \equiv \frac{d\ln {\cal P}_{\chi}(k)}{d \ln k} +1~. \end{equation} Correspondingly, ${\cal P}_{\chi}(k)$ is now parameterized as \cite{paraPk}: \begin{equation} \label{spectrum} \ln {\cal P}_{\chi}(k)=\ln A_{s} + (n_{s}(k_{s0})-1)\ln \left( \frac{k}{k_{s0}}\right)+\frac{\alpha_{s}}{2}\left(\ln \left(\frac{k}{k_{s0}}\right)\right)^{2}~. \end{equation} For dark energy, we choose the commonly used parametrization of the DE equation of state \cite{Linderpara}: \begin{equation} \label{EOS1} w_{\mathrm{DE}}(a) = w_{0} + w_{1}(1-a)~, \end{equation} where $a=1/(1+z)$ is the scale factor and $w_{1}=-dw/da$ characterizes the ``running'' of the equation of state. In the left panel of Fig.\ref{fig0}, we divide the ($w_0$,~$w_1$) plane into four blocks with the lines $w_0=-1$ and $w_0+w_1=-1$, as illustrated. In the upper right and lower left parts, $w(z)$ stays greater or smaller than $-1$, corresponding to the Quintessence and Phantom models, respectively. In the other two parts, $w(z)$ can cross the cosmological constant boundary during the evolution, which can be realized by the Quintom model. Quintom A models cross $-1$ from above ($w>-1$) down to below, while Quintom B models cross in the opposite direction during the evolution. The intersection point denotes the $\Lambda$CDM model.
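Since $w(a)$ in Eq.(\ref{EOS1}) is linear in $a$, its position relative to $-1$ over the whole past evolution is fixed by the two endpoint values $w_0$ (today, $a=1$) and $w_0+w_1$ ($a\to 0$), which is exactly the four-block classification of the left panel of Fig.\ref{fig0}. A small sketch with hypothetical helper names:

```python
# Classify a (w0, w1) point of the Para I EoS w(a) = w0 + w1 (1 - a) into
# the four blocks of the left panel, using the endpoint values w(a=1) = w0
# and w(a -> 0) = w0 + w1.

def w_cpl(a, w0, w1):
    """Para I equation of state at scale factor a."""
    return w0 + w1 * (1.0 - a)

def classify(w0, w1):
    early = w0 + w1  # early-time limit, a -> 0
    if w0 == -1.0 and early == -1.0:
        return "LambdaCDM"
    if w0 > -1.0 and early > -1.0:
        return "Quintessence"   # w(z) > -1 throughout the past
    if w0 < -1.0 and early < -1.0:
        return "Phantom"        # w(z) < -1 throughout the past
    return "Quintom"            # w(z) crosses -1 during the evolution
```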
However, if one takes the future evolution of the EoS into consideration, parts of the regions occupied by Quintessence and Phantom will be replaced by Quintom. More explicitly, in the right panel of Fig.\ref{fig0}, we redivide the parameter space into six parts by the lines $w_0=-1$, $w_0+w_1=-1$ and $w_1=0$. Part III is for the Quintessence-like models, namely those whose equation of state remains greater than $-1$ regardless of the cosmic time, i.e., $w>-1$ in the past, present and future. Correspondingly, part VI is for the Phantom-like models. Parts I, II, IV and V are all for the Quintom-like models. For the models lying within parts I and IV, the equation of state has already crossed $-1$ by now, while the EoS of the DE models in parts II and V will cross $-1$ in the future. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.7]{fourblocks.eps} \includegraphics[scale=0.7]{sixblocks.eps} \caption{Left panel: The parameter space is divided into four parts to distinguish different dark energy models; Right panel: The parameter space is divided into six parts including the future behavior of the EoS of dark energy. See text for details.\label{fig0}} \end{center} \end{figure} Moreover, we also consider another phenomenological parametrization, an oscillating Quintom whose EoS oscillates with time and is allowed to cross the cosmological constant boundary: \begin{equation} \label{EOS2} w_{\mathrm{DE}}(a) = w_{0} + w_{1}\sin(w_2\ln(a))~. \end{equation} This oscillating behavior of the EoS can lead to oscillations in the Hubble diagram \cite{Xia:2006rr} or to a recurrent universe which can unify the early inflation and the current acceleration of our Universe \cite{Feng:2004ff}. In Refs.\cite{Barenboim:2004kz,Xia:2004rw,Xia:2006rr,Zhao:2006qg} some preliminary studies of this kind of dark energy model have been presented. In the latest SNIa paper \cite{Riess:2006fw}, one can find some hints of oscillating behavior in their Fig.$10$, where a polynomial fit was used.
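The oscillating EoS of Eq.(\ref{EOS2}) is straightforward to evaluate; a minimal sketch (the parameter values in the examples are illustrative, not fits):

```python
import math

def w_osc(z, w0, w1, w2):
    """Para II EoS: w(a) = w0 + w1 sin(w2 ln a), with a = 1/(1+z)."""
    a = 1.0 / (1.0 + z)
    return w0 + w1 * math.sin(w2 * math.log(a))
```

At $z=0$ the sine vanishes and $w=w_0$; for $w_2=3\pi/2$ the EoS reaches $w_0-w_1$ at $z=e^{1/3}-1\approx 0.40$, so a model with $w_0=-1$ and $w_1>0$ crosses the cosmological constant boundary well inside the SNIa redshift range.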
Our sine function has the advantage of preserving the oscillating feature of the dark energy EoS up to the high redshifts probed by the CMB data. For simplicity, and to focus on the study at lower redshift, we set $w_2=3\pi/2$ so that the EoS evolves through more than one period within the redshift range $z=0$ to $z=2$, where the SNIa data are most robust. We label the two dark energy parameterizations (\ref{EOS1}) and (\ref{EOS2}) as Para I and Para II respectively throughout this paper. When using the MCMC global fitting strategy to constrain the cosmological parameters, it is crucial to include the dark energy perturbations. The conservation law of energy reads: \begin{equation} \label{conserve} T^{\mu\nu}{}_{;\mu}=0~, \end{equation} where $T^{\mu\nu}$ is the energy-momentum tensor of dark energy and ``;'' denotes covariant differentiation. Working in the conformal Newtonian gauge and, as usual, setting the anisotropic stress perturbation of dark energy to zero, one can derive the perturbation equations of dark energy as follows \cite{ma}: \begin{eqnarray} \delta'&=&-(1+w)(\theta-3\Phi') -3\mathcal{H}(\hat{c}_{s}^2-w)\delta-3\mathcal{H}(w'+3\mathcal{H}(1+w)(\hat{c}_{s}^2-w))\frac{\theta}{k^2}~, \label{dotdelta}\\ \theta'&=&-\mathcal{H}(1-3\hat{c}_{s}^2)\theta+k^{2}(\frac{\hat{c}_{s}^2\delta}{{1+w}}+ \Psi)~. \label{dottheta} \end{eqnarray} However, the dark energy perturbations cannot be handled when the parameterized EoS crosses $-1$ if one relies on Quintessence, Phantom, K-essence or other non-crossing dark energy models. By virtue of the Quintom dark energy model, the perturbations are continuous at the crossing points; we therefore introduce a small positive constant $\epsilon$ to divide the full range of the allowed values of $w$ into three parts: 1) $ w > -1 + \epsilon$; 2) $-1 + \epsilon \geq w \geq-1 - \epsilon$; and 3) $w < -1 -\epsilon $.
Neglecting the entropy perturbation contributions, in regions 1) and 3) the equation of state does not cross $-1$ and the perturbations are well defined by solving Eqs.(\ref{dotdelta},\ref{dottheta}). In case 2), the density perturbation $\delta$, the velocity perturbation $\theta$ and their derivatives are finite and continuous for realistic Quintom dark energy models, whereas for the parameterized EoS the perturbation equations clearly diverge. In our study, for such a regime we match the perturbations in region 2) to those in regions 1) and 3) at the boundary and set: \begin{equation}\label{dotx} \delta'=0 ~~,~~\theta'=0~~. \end{equation} In our numerical calculations we have limited the range to $\epsilon<10^{-5}$ and we find that our method is a very good approximation to the multi-field Quintom DE model. The initial conditions we choose are adiabatic perturbations of dark energy, while the isocurvature perturbations of dark energy can be safely neglected \cite{Zhao:2005vj}. For more details of this method we refer the readers to our previous companion papers \cite{Zhao:2005vj,Xia:2005ge}. In our calculations we have taken the total likelihood to be the product of the separate likelihoods (${\bf \cal{L}}_i$) of CMB, LSS and SNIa. In other words, defining $\chi_{L,i}^2 \equiv -2 \log {\bf \cal{L}}_i$, we get \begin{equation} \chi^2_{L,total} = \chi^2_{L,CMB} + \chi^2_{L,LSS} + \chi^2_{L,SNIa}~.\label{chi2} \end{equation} If the likelihood function is Gaussian, $\chi^2_{L}$ coincides with the usual definition of $\chi^2$ up to an additive constant corresponding to the logarithm of the normalization factor of ${\cal L}$.
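The crossing prescription described above can be sketched schematically as follows (our illustration, not the actual Boltzmann-code implementation; the right-hand sides transcribe Eqs.~(\ref{dotdelta})--(\ref{dottheta}) in conformal time, with $\mathcal{H}$ the conformal Hubble rate):

```python
EPS = 1.0e-5  # width of the matching region 2) around w = -1

def de_perturbation_rhs(delta, theta, w, wprime, cs2, H, k, Phiprime, Psi):
    """delta' and theta' from the dark energy perturbation equations."""
    delta_p = (-(1.0 + w) * (theta - 3.0 * Phiprime)
               - 3.0 * H * (cs2 - w) * delta
               - 3.0 * H * (wprime + 3.0 * H * (1.0 + w) * (cs2 - w))
               * theta / k**2)
    theta_p = (-H * (1.0 - 3.0 * cs2) * theta
               + k**2 * (cs2 * delta / (1.0 + w) + Psi))
    return delta_p, theta_p

def in_matching_region(w, eps=EPS):
    """Region 2): |w + 1| <= eps, where we freeze delta' = theta' = 0."""
    return -1.0 - eps <= w <= -1.0 + eps
```

Away from $w=-1$ the equations are integrated directly; inside the matching region the derivatives are set to zero, as in Eq.~(\ref{dotx}).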
In the computation of the CMB likelihood we have included the three-year WMAP (WMAP3) Temperature-Temperature (TT) and Temperature-Polarization (TE) power spectra with the likelihood routine supplied by the WMAP team \cite{wmap3:2006:1,wmap3:2006:2,wmap3:2006:3,wmap3:2006:4}, as well as the smaller-scale experiments Boomerang-2K2 \cite{MacTavish:2005yk}, CBI \cite{Readhead:2004gy}, VSA \cite{Dickinson:2004yr} and ACBAR \cite{Kuo:2002ua}. For the large scale structure information, we have used the Sloan Digital Sky Survey (SDSS) luminous red galaxy (LRG) sample \cite{Tegmark:2006az} and 2dFGRS \cite{Cole:2005sx}. To be conservative but more robust, in the fits to the SDSS LRG sample we have used only the first $15$ bins, which are supposed to be well within the linear regime. In the calculation of the likelihood from SNIa we have marginalized over the nuisance parameter \cite{DiPietro:2002cz}. The supernova data we use are the recently released ESSENCE ($192$ sample) data \cite{Miknaitis:2007jd,Davis:2007na}. Furthermore, we make use of the Hubble Space Telescope (HST) measurement of the Hubble parameter $H_{0}\equiv 100$h~km~s$^{-1}$~Mpc$^{-1}$ \cite{Hubble} by multiplying the likelihood by a Gaussian function centered at $h=0.72$ with standard deviation $\sigma=0.08$. We also impose a weak Gaussian prior on the baryon density, $\Omega_{b}h^{2}=0.022\pm0.002$ ($1\sigma$), from Big Bang Nucleosynthesis \cite{BBN}, as well as a cosmic age tophat prior 10 Gyr $< t_0 <$ 20 Gyr. For each regular calculation, we run 8 independent chains comprising $150,000-300,000$ chain elements and spend thousands of CPU hours on a supercomputer. The average acceptance rate is about $40\%$. We test the convergence of the chains with the Gelman and Rubin criterion \cite{R-1} and find that $R-1$ is of order $0.01$, which is more conservative than the recommended value $R-1<0.1$.
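The $R-1$ diagnostic quoted above can be computed from a set of chains as in the following sketch (a minimal single-parameter version of the Gelman--Rubin statistic; production analyses discard burn-in and use more refined variants):

```python
import numpy as np

def gelman_rubin_R_minus_1(chains):
    """R - 1 for a list of m equal-length chains of one parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1.0) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_plus / W) - 1.0
```

Chains sampling the same distribution give $R-1\approx0$, while chains stuck in different regions give $R-1\gg0.1$.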
\section{Future Measurements} \label{Future} When considering the constraints on the cosmological parameters from the simulated data of future CMB (PLANCK\footnote{Available at http://sci.esa.int/science-e/www/area/index.cfm?fareaid=17/.} and CMBpol\footnote{Available at http://universe.gsfc.nasa.gov/program/inflation.html/.}) measurements, the fiducial models are obtained by maximizing the likelihood (the best-fit model) using the current observations. First we derive the likelihood function for a CMB experiment as given in Refs.\cite{Easther:2004vq,Perotto:2006rj}. Assuming the CMB multipoles are Gaussian distributed, one can obtain the likelihood function as follows: \begin{equation} \mathcal{L}\propto\prod_{lm}\frac{\exp \left[ -\frac{1}{2}D^{\dag}_{lm}C^{-1}D_{lm}\right]}{\sqrt{\det C}}~,\label{CMB1} \end{equation} where $D_{lm}=\left[ a^{T}_{lm}, a^{E}_{lm}, a^{B}_{lm} \right]$ is the data vector of spherical harmonic coefficients, with contributions from the CMB signal $s_{lm}$ and the experimental noise $n_{lm}$: $a^{X}_{lm}=s^{X}_{lm}+n^{X}_{lm}$, and $C$ is the theoretical data covariance matrix generally given by: \begin{equation} C=\left( \begin{array}{ccc} \bar{C}^{TT}_{l}&\bar{C}^{TE}_{l}&\bar{C}^{TB}_{l} \\ \bar{C}^{TE}_{l}&\bar{C}^{EE}_{l}&\bar{C}^{EB}_{l} \\ \bar{C}^{TB}_{l}&\bar{C}^{EB}_{l}&\bar{C}^{BB}_{l} \\ \end{array} \right)=\left( \begin{array}{ccc} C^{TT}_{l}+N^{TT}_{l}&C^{TE}_{l}&C^{TB}_{l} \\ C^{TE}_{l}&C^{EE}_{l}+N^{EE}_{l}&C^{EB}_{l} \\ C^{TB}_{l}&C^{EB}_{l}&C^{BB}_{l}+N^{BB}_{l} \\ \end{array} \right)~.\label{CMB2} \end{equation} In this covariance matrix, $C^{XX'}_{l}$ denotes the theoretical power spectra and $N^{XX'}_{l}$ the noise power spectra, which can be approximated as: \begin{equation} N^{XX'}_{l}\equiv\langle n^{X\dag}_{lm}n^{X'}_{lm} \rangle=\delta_{XX'}\theta^{2}_{{\mathrm{fwhm}}}\Delta^{2}_{X}\exp \left[ l(l+1)\frac{\theta^{2}_{{\mathrm{fwhm}}}}{8\ln 2}\right]~,\label{CMB3} \end{equation} where
$\theta_{{\mathrm{fwhm}}}$ is the full width at half maximum of the Gaussian beam, and $\Delta_{X}$ is the root mean square of the instrumental noise. Off-diagonal noise terms are expected to vanish since the noise contributions from different maps are uncorrelated. For parity-conserving physics, the terms $C^{TB}_{l}$ and $C^{EB}_{l}$ vanish. In our calculations we also set them to zero, except when studying the possible \emph{CPT} violation in section \ref{Angle} below. On the other hand, we can estimate the power spectra from the data as follows: \begin{equation} \hat{C}^{XY}_{l}=\sum_{m}\frac{a^{X\dag}_{lm}a^{Y}_{lm}}{2l+1}~.\label{CMB4} \end{equation} So we can obtain the effective $\chi^2_{{\mathrm{eff}}}$: \begin{equation} \chi^2_{{\mathrm{eff}}}\equiv-2\ln \mathcal{L}=\sum_{l}(2l+1)f_{{\mathrm{sky}}}\left(\frac{A}{|\bar{C}|}+\ln\frac{|\bar{C}|}{|\hat{C}|}-3\right)~,\label{CMB9} \end{equation} where $f_{{\mathrm{sky}}}$ denotes the observed fraction of the sky in the real experiments, and $A$ is defined as: \begin{eqnarray} A &=& \hat{C}^{TT}_{l}(\bar{C}^{EE}_{l}\bar{C}^{BB}_{l}-(\bar{C}^{EB}_{l})^2)+\hat{C}^{TE}_{l}(\bar{C}^{TB}_{l}\bar{C}^{EB}_{l}-\bar{C}^{TE}_{l}\bar{C}^{BB}_{l})+\hat{C}^{TB}_{l}(\bar{C}^{TE}_{l}\bar{C}^{EB}_{l}-\bar{C}^{TB}_{l}\bar{C}^{EE}_{l}) \nonumber\\ &+& \hat{C}^{TE}_{l}(\bar{C}^{TB}_{l}\bar{C}^{EB}_{l}-\bar{C}^{TE}_{l}\bar{C}^{BB}_{l})+\hat{C}^{EE}_{l}(\bar{C}^{TT}_{l}\bar{C}^{BB}_{l}-(\bar{C}^{TB}_{l})^2)+\hat{C}^{EB}_{l}(\bar{C}^{TE}_{l}\bar{C}^{TB}_{l}-\bar{C}^{TT}_{l}\bar{C}^{EB}_{l}) \nonumber \\ &+& \hat{C}^{TB}_{l}(\bar{C}^{TE}_{l}\bar{C}^{EB}_{l}-\bar{C}^{EE}_{l}\bar{C}^{TB}_{l})+\hat{C}^{EB}_{l}(\bar{C}^{TE}_{l}\bar{C}^{TB}_{l}-\bar{C}^{TT}_{l}\bar{C}^{EB}_{l})+\hat{C}^{BB}_{l}(\bar{C}^{TT}_{l}\bar{C}^{EE}_{l}-(\bar{C}^{TE}_{l})^2)~,\label{CMB6} \end{eqnarray} and $|\bar{C}|$ and $|\hat{C}|$ denote the determinants of the theoretical and observed data covariance matrices respectively,
\begin{eqnarray} |\bar{C}|&=&\bar{C}^{TT}_{l}\bar{C}^{EE}_{l}\bar{C}^{BB}_{l}+2\bar{C}^{TE}_{l}\bar{C}^{TB}_{l}\bar{C}^{EB}_{l} -\bar{C}^{TT}_{l}(\bar{C}^{EB}_{l})^2-\bar{C}^{EE}_{l}(\bar{C}^{TB}_{l})^2-\bar{C}^{BB}_{l}(\bar{C}^{TE}_{l})^2~,\label{CMB7}\\ |\hat{C}|&=&\hat{C}^{TT}_{l}\hat{C}^{EE}_{l}\hat{C}^{BB}_{l}+2\hat{C}^{TE}_{l}\hat{C}^{TB}_{l}\hat{C}^{EB}_{l} -\hat{C}^{TT}_{l}(\hat{C}^{EB}_{l})^2-\hat{C}^{EE}_{l}(\hat{C}^{TB}_{l})^2-\hat{C}^{BB}_{l}(\hat{C}^{TE}_{l})^2~.\label{CMB8} \end{eqnarray} The likelihood has been normalized with respect to the maximum likelihood, $\chi^2_{{\mathrm{eff}}}=0$, where $\bar{C}^{XY}_{l}=\hat{C}^{XY}_{l}$. If we set ${C}^{TB}_{l}$ and ${C}^{EB}_{l}$ to zero, the likelihood function reduces to Eq.($17$) of Ref.\cite{Easther:2004vq}. Furthermore, we obtain Eq.($9$) of Ref.\cite{Xia:2006wd} if we ignore the tensor information. In some of our simulations we also consider the gravitational lensing effect on the CMB power spectrum. The lensed Stokes parameters $I$, $Q$ and $U$, which specify the intensity and linear polarization of the observed CMB, are related to the unlensed Stokes parameters at the last scattering surface (denoted with a tilde) by $X(\textbf{n})=\tilde X(\textbf{n}')=\tilde X(\textbf{n}+\delta \textbf{n})$, where $X$ denotes $I$, $Q$ or $U$ and $\delta \textbf{n}$ is the angular excursion of the photon as it propagates from the last scattering surface until the present. This deflection angle, $\delta \textbf{n}$, is given by the gradient of the lensing potential $\bigtriangledown\phi(\textbf{n})$, \begin{equation} \phi(\textbf{n})=2\int dr\frac{r-r_s}{rr_s}\Psi(r\hat{\textbf{n}}, r)~\label{lensphi}, \end{equation} where $r$ is the comoving distance along the line of sight, $s$ denotes the CMB last scattering surface, and $\Psi$ is the three dimensional gravitational potential \cite{Zaldarriaga:1998ar,Lewis:2006fu}.
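The per-multipole likelihood is straightforward to implement numerically. The sketch below (illustrative, assuming $3\times3$ covariance matrices ordered as $T,E,B$) exploits the algebraic identity $A=|\bar{C}|\,\mathrm{tr}(\hat{C}\bar{C}^{-1})$, so that $A/|\bar{C}|$ never has to be expanded term by term; it also evaluates the beam-deconvolved noise formula of Eq.~(\ref{CMB3}) with $\theta_{\rm fwhm}$ in radians:

```python
import numpy as np

def noise_power(l, theta_fwhm, delta_X):
    """N_l^XX with theta_fwhm in radians and delta_X the noise RMS."""
    return (theta_fwhm * delta_X) ** 2 * np.exp(
        l * (l + 1) * theta_fwhm ** 2 / (8.0 * np.log(2.0)))

def chi2_eff_term(Cbar, Chat, l, f_sky):
    """Contribution of one multipole l to chi^2_eff.

    Uses A / |Cbar| = tr(Chat Cbar^{-1}), valid since A = tr(Chat adj(Cbar)).
    """
    ratio = np.trace(Chat @ np.linalg.inv(Cbar))
    log_term = np.log(np.linalg.det(Cbar) / np.linalg.det(Chat))
    return (2 * l + 1) * f_sky * (ratio + log_term - 3.0)
```

By construction the contribution vanishes when $\hat{C}=\bar{C}$ and is strictly positive otherwise, since $x-\ln x\geq1$ for each eigenvalue of $\hat{C}\bar{C}^{-1}$.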
The important feature is that gravitational lensing can mix $E$ and $B$ modes \cite{Zaldarriaga:1998ar}. If we assume that there is only unlensed $E$-type polarization, with the unlensed $\tilde C_l^{BB}=0$ at the last scattering surface, gravitational lensing will generate $B$-type polarization in the observed field, $C_l^{BB}\neq0$. The information from gravitational lensing is added through the power spectrum of the lensing potential, $C_l^{\phi\phi}$, and its correlation with the temperature, $C_l^{T\phi}$: \begin{equation} \langle a^{\phi\dag}_{lm}a^{\phi}_{l'm'}\rangle=(C_l^{\phi\phi}+N_l^{\phi\phi})\delta_{ll'}\delta_{mm'}~~,~~ \langle a^{T\dag}_{lm}a^{\phi}_{l'm'}\rangle=(C_l^{T\phi}+N_l^{T\phi})\delta_{ll'}\delta_{mm'}~, \end{equation} which can be computed numerically in linear theory using CAMB\footnote{Available at http://camb.info/.} \cite{Lewis:1999bs}. In our analysis we use the unlensed power spectra, $\tilde C_l^{TT}$, $\tilde C_l^{TE}$, $\tilde C_l^{EE}$, together with $C_l^{\phi\phi}$ and $C_l^{T\phi}$. We do not use the lensed power spectra, to avoid the complication of the correlations in their errors between different $l$ values and with the errors in $C_l^{\phi\phi}$ and $C_l^{T\phi}$ \cite{Hu:2001fb,Smith:2006nk}. For the errors on $C_l^{\phi\phi}$ we follow Ref.\cite{Hu:2001kj}. We use the publicly available code\footnote{Available at http://lappweb.in2p3.fr/$\sim$perotto/FUTURCMB/home.html/.} \cite{Perotto:2006rj} to simulate the mock CMB power spectra of our fiducial models. In Table I we list the assumed experimental specifications of the future Planck and CMBpol measurements, neglecting foreground contamination. \begin{table} TABLE I. Assumed experimental specifications. We use the CMB power spectra only at $l\leq2500$. The noise parameters $\Delta_T$ and $\Delta_P$ are given in units of $\mu$K-arcmin.
\begin{center} \begin{tabular}{lcccccc} \hline \hline ~Experiment~ & ~$f_{\mathrm{sky}}$~ & ~$l_{\mathrm{max}}$~ & ~Frequency~(GHz)~ & ~$\theta_{\mathrm{fwhm}}$~ & ~$\Delta_T$~ & ~$\Delta_P$~ \\ \hline ~PLANCK & 0.65 & 2500 & 100 & 9.5' & 6.8 & 10.9 \\ & & & 143 & 7.1' & 6.0 & 11.4 \\ & & & 217 & 5.0' & 13.1 & 26.7 \\ ~CMBpol & 0.65 & 2500 & 217 & 3.0' & 1.0 & 1.4 \\ \hline \hline \end{tabular} \end{center} \end{table} To make our constraints more robust, we add the simulated SNAP data to all the simulations throughout this paper. The projected satellite SNAP\footnote{SNAP is one of several candidate mission concepts for the Joint Dark Energy Mission (JDEM). Available at http://snap.lbl.gov/.} (Supernova/Acceleration Probe) would be a space-based telescope with a one square degree field of view and $10^9$ pixels. It aims to increase the discovery rate for SNIa to about $2000$ per year. The simulated SNIa data distribution is taken from Refs.\cite{kim,Li:2005zd}. For the errors, we follow Ref.\cite{kim}, which takes a magnitude dispersion of $0.15$ and a systematic error $\sigma_{{\mathrm{sys}}}=0.02\times z/1.7$; the total error for each data point is then: \begin{equation} \sigma_{{\mathrm{mag}}}(z_i)=\sqrt{\sigma^2_{{\mathrm{sys}}}(z_i)+\frac{0.15^2}{n_i}}~,\label{snap} \end{equation} where $n_i$ is the number of supernovae in the $i$th redshift bin. \section{Results} In this section we show our global fitting results for the cosmological parameters, focusing on the dark energy parameters, inflationary parameters, space-time curvature, total neutrino mass and the rotation angle denoting possible \emph{CPT} violation respectively. \subsection{Equation of State of Dark Energy} \label{DEEoS} \begin{table*} TABLE II. Constraints on the EoS of dark energy and some background parameters from the current observations and the future simulations.
Note that Para I and Para II represent $w_{\mathrm{DE}}(a) = w_{0} + w_{1}(1-a)$ and $w_{\mathrm{DE}}(a) = w_{0} + w_{1}\sin(3\pi/2\ln(a))$ respectively. For the current constraints we show the mean values with $1,2\sigma$ errors (Mean) together with the best fit results. We also list the standard deviation (SD) of these parameters based on the future simulations. \begin{center} \begin{tabular}{|c|ccc|ccc|cc|} \hline &\multicolumn{3}{c|}{$\Lambda$CDM} &\multicolumn{3}{c|}{~Para~I~} &\multicolumn{2}{c|}{~Para~II~} \\ \hline &\multicolumn{2}{c|}{Current}&\multicolumn{1}{c|}{~Future~}&\multicolumn{2}{c|}{Current}&\multicolumn{1}{c|}{~Future~}&\multicolumn{2}{c|}{Current}\\ \hline &\multicolumn{1}{c|}{~Best~Fit~}&\multicolumn{1}{c|}{~~~~Mean~~~~}&\multicolumn{1}{c|}{~~SD~~}&\multicolumn{1}{c|}{~Best~Fit~}&\multicolumn{1}{c|}{~~~~Mean~~~~} &\multicolumn{1}{c|}{~~SD~~}&\multicolumn{1}{c|}{~Best~Fit~}&\multicolumn{1}{c|}{~~~~Mean~~~~} \\ \hline $w_0$&$-1$&$-1$&$-$&$-1.16$&$-1.03^{+0.15+0.36}_{-0.15-0.26}$&$0.045$&$-0.898$&$-0.981^{+0.320+0.534}_{-0.340-0.748}$\\ \hline $w_1$&$0$&$0$&$-$&$0.968$&$0.405^{+0.562+0.781}_{-0.587-1.570}$&$0.11$&$0.047$&$-0.068^{+0.561+1.037}_{-0.591-1.245}$\\ \hline $~\Omega_{{\mathrm{DE}}}~$&$0.760$&$0.762^{+0.015+0.029}_{-0.015-0.033}$&$0.0043$&$0.756$&$0.760^{+0.017+0.033}_{-0.018-0.035}$&$0.0064$&$0.765$&$0.764^{+0.019+0.045}_{-0.019-0.044}$\\ \hline $H_0$&$73.1$&$73.3^{+1.6+3.2}_{-1.7-3.2}$&$0.44$&$70.3$&$71.2^{+2.3+4.6}_{-2.3-4.2}$&$0.76$&$72.0$&$72.2^{+2.8+5.0}_{-2.6-6.3}$\\ \hline \end{tabular} \end{center} \end{table*} To study the dynamics of dark energy, we parameterize our universe as follows: \begin{equation} \label{parameterDE} {\bf P} \equiv (\omega_{b}, \omega_{c}, \Theta_{s}, \tau, w_{0}, w_{1}, n_{s}, \log[10^{10}A_{s}])~. \end{equation} Our main results for the dark energy parameters are summarized in Table II. Besides the two parameterizations Para I and Para II, we also investigate the $\Lambda$CDM model for comparison.
In addition, we present the future constraints for the $\Lambda$CDM model and Para I using the simulated Planck and SNAP data introduced above. Marginalizing over the other cosmological parameters, in Table II we list the constraints on the dark energy parameters as well as the Hubble constant in the different dark energy models. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{Fig1.eps} \includegraphics[scale=0.4]{Fig2.eps} \caption{Constraints on the dark energy parameters $w_0$ and $w_1$ from the combination of current observations (Black Solid Lines) and the future simulation data (Red Dashed Lines) respectively. The left panel is for Para I: $w_{{\mathrm{DE}}}(a)=w_0+w_1(1-a)$. The right panel is for Para II: $w_{{\mathrm{DE}}}(a)=w_0+w_1\sin(\frac{3\pi}{2}\ln(a))$. The two blue dotted lines in the ($w_0,w_1$) panel distinguish the dark energy models and their intersection point denotes the $\Lambda$CDM model.\label{fig1}} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{wz.eps} \caption{Constraints on $w_{{\mathrm{DE}}}(a)=w_0+w_1(1-a)$ from the current observations. Median value (central red solid line), $68\%$ (inner, dark shaded area) and $95\%$ (outer, light shaded area) intervals are shown. The blue dashed lines denote the cosmological constant boundary.\label{fig2}} \end{center} \end{figure} In Fig.\ref{fig1} we illustrate the constraints on the dark energy parameters $w_0$ and $w_1$ for the two parameterizations. From the current observations we find $w_0=-1.03\pm0.15$, $w_1=0.405^{+0.562}_{-0.587}$ for Para I, and the Quintom scenario, in which $w(z)$ crosses the cosmological constant boundary during the evolution, is mildly favored. Using the current data, we find that the best fit model is located in the Quintom A region, while the $\Lambda$CDM model, denoted by the intersection of the two straight lines, lies at the edge of the $1\sigma$ contour.
The one dimensional constraint on the evolution of $w(a)$ from the current data is shown in Fig.\ref{fig2}. The crossing behavior is even more evident for the best fit model. However, current data cannot distinguish different dark energy models decisively: the variances of $w_0$ and $w_1$ are too large to separate dynamical dark energy models from the $\Lambda$CDM model, and the $\Lambda$CDM model remains a good fit. In order to distinguish different dark energy models we consider the future measurements Planck and SNAP. The fiducial model we choose is the best fit model from the current constraints of Para I. We show the $68\%$ and $95\%$ confidence level contours (Red Dashed lines) in the left panel of Fig.\ref{fig1}. As expected, the constraints from the simulated data are much more stringent than the current ones. With the Planck and SNAP data, the standard deviations of $w_0$ and $w_1$ become $\sigma=0.045$ and $\sigma=0.11$ respectively, reduced by factors of about $3.3$ and $5.2$. The Quintom model and the $\Lambda$CDM model might then be distinguished at around the $4\sigma$ confidence level. For Para II, the mean values from the current observations are $w_0=-0.981^{+0.320}_{-0.340}$, $w_1=-0.068^{+0.561}_{-0.591}$, which still support the Quintom scenario despite the weak significance. In the right panel of Fig.\ref{fig1}, we see that the Quintom models occupy most of the contour region, while the $\Lambda$CDM model still lies well within the $1\sigma$ contour. \subsection{Other Cosmological Parameters} The dynamics of dark energy can have profound effects on the determination of other cosmological parameters, such as the inflationary parameters ($n_s$, $\alpha_s$, $r$), the total neutrino mass $\sum m_{\nu}$ as well as the curvature of space-time $\Omega_k$, due to the well-known degeneracies among these parameters. In this subsection, we measure the above parameters in the framework of dynamical dark energy models.
\begin{table*} TABLE III. Constraints on the cosmological parameters $n_s$, $\alpha_s$, $r$, $\Omega_k$ and $\sum m_{\nu}$ from the current observations and the future simulations. We show the mean values with $1,2\sigma$ errors (Mean) for the current constraints and the standard deviation (SD) of these parameters based on the future simulations. For the weakly constrained parameters we quote the $95\%$ upper limit instead. \begin{center} \begin{tabular}{|c|cc|cc|c|} \hline &\multicolumn{2}{c|}{$\Lambda$CDM} &\multicolumn{2}{c|}{~Para~I~} &\multicolumn{1}{c|}{~Para~II~} \\ \hline &\multicolumn{1}{c|}{~~~~Mean~~~~}&\multicolumn{1}{c|}{~~SD~~}&\multicolumn{1}{c|}{~~~~Mean~~~~} &\multicolumn{1}{c|}{~~SD~~}&\multicolumn{1}{c|}{~~~~Mean~~~~} \\ \hline $n_s$&$0.953^{+0.014+0.028}_{-0.013-0.026}$&$0.003$&$0.965^{+0.017+0.038}_{-0.017-0.032}$&$0.0037$&$0.962^{+0.016+0.036}_{-0.017-0.031}$\\ \hline $100\times\alpha_s$&$-3.75^{+2.19+4.24}_{-2.21-4.23}$&$0.53$&$-3.38^{+2.52+4.80}_{-2.50-4.76}$&$0.55$&$-3.95^{+2.37+4.86}_{-2.39-4.72}$\\ \hline $r$&$<0.231$ ($95\%$)&$<0.055$ ($95\%$)&$<0.392$ ($95\%$)&$<0.074$ ($95\%$)&$<0.356$ ($95\%$)\\ \hline $100\times\Omega_k$&$-0.873^{+0.788+1.454}_{-0.753-1.581}$&$0.289$&$-0.201^{+1.46+2.74}_{-1.29-2.58}$&$1.05$&$-0.593^{+1.23+3.51}_{-1.35-2.57}$\\ \hline $\sum m_{\nu}$&$<0.958$ ($95\%$)&$0.077$&$<1.59$ ($95\%$)&$0.179$&$<1.53$ ($95\%$)\\ \hline \end{tabular} \end{center} \end{table*} \subsubsection{Inflationary Models} \label{Inf} The current acceleration and inflation, the two stages of accelerated expansion of our universe, might have some deep relationship, despite the significant difference in energy scale between them. Some efforts have been made to unify these two expansion epochs, such as quintessential inflation \cite{Pvilenkin}.
Moreover, the isocurvature perturbations in the dark energy sector generated during inflation may give rise to a suppression of the CMB power at large scales, which can be mimicked by a suppressed primordial spectrum \cite{Moroi:2003pq}. This means that different dynamics of dark energy and inflation can lead to similar effects on observations, and studying such degeneracies might unveil possible connections between dark energy and inflation. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{Fig4.eps} \includegraphics[scale=0.4]{Fig5.eps} \includegraphics[scale=0.4]{Fig6.eps} \caption{$1D$ current constraints on the parameters $n_s$, $\alpha_s$ and $r$ based on the different dark energy models: $\Lambda$CDM (black solid line), $w_{{\mathrm{DE}}}=w_0+w_1(1-a)$ (red dashed line) and $w_{{\mathrm{DE}}}=w_0+w_1\sin(\frac{3\pi}{2}\ln(a))$ (blue dotted line).\label{fig3}} \end{center} \end{figure} In this section, we constrain the inflationary parameters in the framework of dynamical dark energy using the current and simulated datasets. We sample the following 10 dimensional parameter space using the MCMC algorithm: \begin{equation} \label{parameterInf} {\bf P} \equiv (\omega_{b}, \omega_{c}, \Theta_{s}, \tau, w_{0}, w_{1}, n_{s}, \log[10^{10}A_{s}], \alpha_s, r)~. \end{equation} It is noteworthy that we do not constrain $\alpha_s$ and $r$ simultaneously in our global fittings. From Table~III, we can see the effects of dynamical dark energy on the determination of the inflationary parameters. Again, we give the fitting results for Para I, Para II and the $\Lambda$CDM model for comparison. We find that the constraints on the spectral index $n_s$, the running $\alpha_s$ and the tensor-to-scalar ratio $r$ are weakened in the presence of dynamical dark energy. Quantitatively, the $2\sigma$ constraints on $n_s$, $\alpha_s$ and $r$ are relaxed by roughly $36\%$, $13\%$ and $70\%$ respectively.
This can be seen in the one dimensional distribution plots of Fig.\ref{fig3}. The WMAP group found that the scale invariant primordial spectrum and inflation models with $n_s>1$ are disfavored at almost the $3\sigma$ level. Our result based on the $\Lambda$CDM model, $n_s=0.953^{+0.014}_{-0.013}$, is in good agreement with theirs. However, we find that the mean value of $n_s$ moves toward the ``blue'' side in the framework of dynamical dark energy models, $n_s=0.965\pm0.017$. From the future data we find that the standard deviation of $n_s$ shrinks to $\sigma=0.003$, so the scale invariant spectrum can be tested at a much higher confidence level. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{Fig7.eps} \caption{$68\%$ and $95\%$ constraints in the ($n_s$, $r$) panel based on the different dark energy models: $\Lambda$CDM (black solid line), $w_{{\mathrm{DE}}}=w_0+w_1(1-a)$ (red dashed line) and $w_{{\mathrm{DE}}}=w_0+w_1\sin(\frac{3\pi}{2}\ln(a))$ (blue dotted line). The two solid green lines delimit the three classes of inflation models, namely, small-field, large-field and hybrid models. The blue points are predicted by the $m^2\phi^2$ model and the $\lambda\phi^4$ model respectively. These predictions assume that the number of e-foldings, $N$, is $50-60$ for the $m^2\phi^2$ model and $64$ for the $\lambda\phi^4$ model. The magenta dash-dotted lines denote the $1,2\sigma$ contours obtained from the future simulated data.\label{fig4}} \end{center} \end{figure} In the framework of dynamical dark energy models, from Fig.\ref{fig3}, we find that the $95\%$ upper limit on $r$ is relaxed from $r<0.231$ to $r<0.392$. This degeneracy may arise because the tensor fluctuation and the dark energy component mostly affect the large scale (low multipoles) power spectrum of the CMB.
In the two dimensional plot of Fig.\ref{fig4}, we find that the Harrison-Zel'dovich-Peebles scale invariant (HZ) spectrum ($n_s=1,~r=0$) is disfavored at about the $2\sim3\sigma$ level in the $\Lambda$CDM model. However, this spectrum is allowed in the presence of dynamical dark energy. The single slow-rolling scalar field with potential $V(\phi)\sim m^{2}\phi^{2}$, which predicts $(n_s,r)=(1-2/N,8/N)$, is well within the $1\sigma$ region, while the single slow-rolling scalar field with potential $V(\phi)\sim \lambda\phi^{4}$, which predicts $(n_s,r)=(1-3/N,12/N)$, is excluded at about $2\sigma$ in the $\Lambda$CDM model. Interestingly, many hybrid inflation models, excluded in the $\Lambda$CDM model, revive in the framework of dynamical dark energy, as illustrated in Fig.\ref{fig4}. Another feature of the WMAP data, both for WMAP1 \cite{peiris} and WMAP3 \cite{wmap3:2006:1,Easther:2006tv}, is the large running of the primordial scalar spectral index $\alpha_s$. Our result shows that a large running is favored at more than $1\sigma$, $\alpha_s=-0.038\pm0.022$. In Fig.\ref{fig3} we find that the dynamical dark energy models enlarge the error of $\alpha_s$ slightly and barely affect the mean value. Given the large uncertainties in the constraints on the inflationary parameters from the current observations, different inflation models cannot be distinguished conclusively. Yet the constraints from the future Planck measurement can make this possible. From our simulation results in Table~III, we find that the error bars of the inflationary parameters can be reduced by about a factor of $5$. This dramatic improvement will play a crucial role in the study of the dynamics of inflation and can also shed light on the investigation of dynamical dark energy models, due to the correlations among inflationary and dark energy parameters.
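The model points plotted in Fig.\ref{fig4} follow directly from the $(n_s,r)$ relations quoted above; a trivial sketch (transcribing the formulae as quoted in the text, with $N$ the number of e-foldings):

```python
def slow_roll_prediction(model, N):
    """(n_s, r) for the monomial potentials, as quoted in the text."""
    if model == "m2phi2":                 # V(phi) ~ m^2 phi^2
        return 1.0 - 2.0 / N, 8.0 / N
    if model == "lphi4":                  # V(phi) ~ lambda phi^4
        return 1.0 - 3.0 / N, 12.0 / N
    raise ValueError("unknown model: %s" % model)
```

For example, the $m^2\phi^2$ model with $N=50$ gives $(n_s,r)=(0.96,\,0.16)$.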
\subsubsection{Curvature of Universe} \label{Omk} Dark energy and the curvature, $\Omega_k=1-\Omega_{m}-\Omega_{{\mathrm{DE}}}$, are dominant factors in determining the fate of our Universe. Further, the DE parameters and $\Omega_k$ are correlated. This is expected, since both $\Omega_k$ and dark energy contribute to the luminosity distance $d_L$ via: \begin{eqnarray} \label{lumdis} d_{\rm L}(z)&=&\frac{1+z}{H_0\sqrt{|\Omega_{k}|}} {\rm sinn}\left[\sqrt{|\Omega_{k}|}\int_0^z \frac{dz'}{E(z')}\right]~,\\ E(z)\equiv\frac{H(z)}{H_0}&=&\sqrt{\Omega_m(1+z)^3+\Omega_{{\mathrm{DE}}}\exp\left(3\int_0^{z}\frac{1+w(z')}{1+z'}dz'\right) +\Omega_k(1+z)^2}~, \end{eqnarray} where ${\rm sinn}(\sqrt{|k|}x)/\sqrt{|k|}=\sin(x)$, $x$, $\sinh(x)$ for $k>0$, $k=0$ and $k<0$ respectively. In addition, $\Omega_k$ can modify the angular diameter distance to the last scattering surface, which leaves imprints on the CMB power spectrum. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{Fig9.eps} \caption{$1D$ current constraints on the parameter $\Omega_k$ based on the different dark energy models: $\Lambda$CDM (black solid line), $w_{{\mathrm{DE}}}=w_0+w_1(1-a)$ (red dashed line) and $w_{{\mathrm{DE}}}=w_0+w_1\sin(\frac{3\pi}{2}\ln(a))$ (blue dotted line).\label{fig5}} \end{center} \end{figure} We concentrate on the determination of $\Omega_k$ in dynamical dark energy models using current and simulated data. Our parameter space is: \begin{equation} \label{parameterOmk} {\bf P} \equiv (\omega_{b}, \omega_{c}, \Omega_k, \Theta_{s}, \tau, w_{0}, w_{1}, n_{s}, \log[10^{10}A_{s}])~. \end{equation} From Table~III and Fig.\ref{fig5}, we see that our universe is very close to flat: the absolute value of the space-time curvature $|\Omega_k|$ is smaller than $0.025$ in the $\Lambda$CDM model, $0.028$ for Para I and $0.032$ for Para II. The dynamics of dark energy weakens the constraint on $|\Omega_k|$ due to the well-known correlation between $\Omega_k$ and the dark energy parameters.
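The distance relations of Eq.~(\ref{lumdis}) can be sketched numerically as follows (our illustrative code, not the fitting pipeline; for Para I the dark energy density integral has the closed form $\rho_{\rm DE}/\rho_{\rm DE,0}=a^{-3(1+w_0+w_1)}e^{-3w_1(1-a)}$, and the default parameter values are placeholders):

```python
import numpy as np

def E_hubble(z, omega_m, omega_de, omega_k, w0, w1):
    """Dimensionless H(z)/H0 for w(a) = w0 + w1 (1 - a)."""
    a = 1.0 / (1.0 + z)
    f_de = a ** (-3.0 * (1.0 + w0 + w1)) * np.exp(-3.0 * w1 * (1.0 - a))
    return np.sqrt(omega_m * (1.0 + z) ** 3 + omega_de * f_de
                   + omega_k * (1.0 + z) ** 2)

def luminosity_distance(z, omega_m=0.24, omega_de=0.76,
                        w0=-1.0, w1=0.0, h=0.73):
    """d_L in Mpc via trapezoidal integration of dz'/E(z')."""
    omega_k = 1.0 - omega_m - omega_de
    zs = np.linspace(0.0, z, 4097)
    f = 1.0 / E_hubble(zs, omega_m, omega_de, omega_k, w0, w1)
    chi = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))
    c_over_H0 = 2997.92458 / h              # c/H0 in Mpc
    if omega_k > 1e-12:                     # open universe, k < 0: sinh
        s = np.sqrt(omega_k)
        comoving = np.sinh(s * chi) / s
    elif omega_k < -1e-12:                  # closed universe, k > 0: sin
        s = np.sqrt(-omega_k)
        comoving = np.sin(s * chi) / s
    else:                                   # flat
        comoving = chi
    return (1.0 + z) * c_over_H0 * comoving
```

Note that $\Omega_k>0$ corresponds to an open universe ($k<0$, the $\sinh$ branch), and the flat case is recovered continuously as $\Omega_k\to0$.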
This correlation plays a crucial role in the reconstruction of the equation of state of dark energy \cite{Clarkson:2007bc}. With the simulated data, the curvature can be measured much more accurately. \subsubsection{Neutrino Mass} \label{Mnu} Detecting the absolute mass of neutrinos is another challenge of modern physics, and cosmological observations can place upper limits on the absolute neutrino mass. In the background evolution, neutrino masses, albeit small, contribute to the cosmic energy budget and modify the epoch of matter-radiation equality, the angular diameter distance to the last scattering surface and other related physical quantities. In the evolution of perturbations, neutrinos become non-relativistic at late times and thus damp the perturbations within their free streaming scale, so the matter power spectrum is suppressed by roughly $\Delta P/P \sim - 8 \Omega_\nu/\Omega_m$ \cite{Hu:1997mj}. As a result, neutrinos leave imprints on the cosmological observables, such as the CMB and matter power spectra. On the other hand, the evolution of dark energy also affects the evolution of the background and perturbations, which mimics the behavior of neutrinos to some extent. This leads to an obvious degeneracy between the dark energy parameters and the neutrino mass. The degeneracy between dark energy with a constant equation of state and the neutrino mass has been studied in the literature \cite{Hannestad:2005gj,wmap3:2006:1}. In this section, we update our previous results on the upper limits of the neutrino mass in the presence of dynamical dark energy \cite{Xia:2006wd} and investigate the degeneracy between dynamical dark energy and the neutrino mass with the current cosmological observations as well as with the future simulated data.
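The rough suppression estimate above can be put into numbers; the sketch below combines it with the standard relation $\Omega_\nu h^2=\sum m_\nu/(93.14~{\rm eV})$ (the fiducial matter density $\omega_m=0.13$ is a placeholder value, not a fit result):

```python
def omega_nu_h2(sum_mnu_eV):
    """Standard relation: Omega_nu h^2 = sum m_nu / 93.14 eV."""
    return sum_mnu_eV / 93.14

def power_suppression(sum_mnu_eV, omega_m_h2=0.13):
    """Rough small-scale suppression Delta P / P ~ -8 Omega_nu / Omega_m."""
    return -8.0 * omega_nu_h2(sum_mnu_eV) / omega_m_h2
```

For sub-eV mass sums the suppression is at the tens-of-percent level, which is why the matter power spectrum is a sensitive neutrino mass probe.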
\begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{Fig11.eps} \caption{$1D$ current constraints on the parameter $\sum m_{\nu}$ based on the different dark energy models: $\Lambda$CDM (black solid line), $w_{{\mathrm{DE}}}=w_0+w_1(1-a)$ (red dashed line) and $w_{{\mathrm{DE}}}=w_0+w_1\sin(\frac{3\pi}{2}\log(a))$ (blue dotted line).\label{fig6}} \end{center} \end{figure} We concentrate on the determination of $\sum m_{\nu}$ in dynamical dark energy models using current and simulated data. Our parameter space is: \begin{equation} \label{parameter:mnu} {\bf P} \equiv (\omega_{b}, \omega_{c}, \Theta_{s}, \tau, w_{0}, w_{1}, f_{\nu}, n_{s}, \log[10^{10}A_{s}])~. \end{equation} In the last row of Table~III, one can read the $95\%$~C.L. neutrino mass limits derived from the current observations as well as from the simulated data of Planck and SNAP in the $\Lambda$CDM and dynamical dark energy models. In the $\Lambda$CDM model, the limit of the neutrino mass we get, $\sum m_{\nu}<0.958~eV~(95\%)$, is consistent with Tegmark's result \cite{Tegmark:2006az}. For the dynamical dark energy models, the limit is noticeably relaxed, to $\sum m_{\nu}<1.59~eV~(95\%)$ and $\sum m_{\nu}<1.53~eV~(95\%)$ for the two parametrizations, due to the degeneracy between the dark energy parameters and the neutrino mass arising from the geometric features of our Universe \cite{Xia:2006wd,Hannestad:2005gj}. In Fig.\ref{fig6} we illustrate this effect with current astronomical data. With the simulated data, we obtain a two-tailed posterior distribution due to the nonzero fiducial value of the neutrino mass. The standard deviation will be greatly tightened, to $0.077~eV$. In the presence of dynamical dark energy, the standard deviation is relaxed by a factor of $2.3$ using the simulated data. We might distinguish the normal hierarchy ($\sum m_{\nu}\sim0.05~eV$) from the inverted hierarchy ($\sum m_{\nu}\sim0.1~eV$) using the future Planck measurement.
\subsection{Cosmological \emph{CPT} Violation} \label{Angle} The \emph{CPT} symmetry, which has been proven to be exact within the framework of the standard model of particle physics and Einstein gravity, could be violated dynamically during the evolution of the universe \cite{Li:2006ss}. The detection of \emph{CPT} violation would reveal new physics beyond the standard model. In our previous work we studied cosmological \emph{CPT} violation in the photon sector. We introduced a Chern-Simons term in the effective Lagrangian of the form \cite{Carroll:1989vb}: \begin{equation} \label{CPT}\Delta \mathcal{L} = -\frac{1}{4}p_{\mu}A_{\nu}\tilde F^{\mu\nu}~, \end{equation} where $p_{\mu}$ is a four-vector and $\tilde F^{\mu\nu}=(1/2)\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$ is the dual of the electromagnetic tensor. This action is gauge invariant if $p_{\mu}$ is a constant and homogeneous vector or the gradient of a scalar. It violates Lorentz and \emph{CPT} symmetries if the background value of $p_{\mu}$ is nonzero. In the scenario of quintessential baryo-/leptogenesis \cite{Li:2001st,Li:2002wd} the four-vector $p_{\mu}$ takes the form of the derivative of the quintessence scalar, $\partial_{\mu}\phi$. During the evolution of quintessence, the time component of $\partial_{\mu}\phi$ does not vanish, which causes \emph{CPT} violation. In the scenario of gravitational baryo-/leptogenesis \cite{Davoudiasl:2004gf,Li:2004hh}, $p_{\mu}$ is the gradient of a function of the Ricci scalar $R$. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{Other4.eps} \caption{$1D$ constraints on the rotation angle $\Delta\alpha$ from the simulated Planck (black solid line) and CMBpol (red dashed line) data.\label{fig7}} \end{center} \end{figure} The interaction in Eq.(\ref{CPT}) also violates the \emph{P} and \emph{CP} symmetries, as long as $p_{0}$ does not vanish \cite{Klinkhamer:1999zh}.
It leads to a rotation of the polarization vector of electromagnetic waves as they propagate over cosmological distances. This effect is known as ``cosmological birefringence''. The polarization vector of each photon is rotated by an angle $\Delta\alpha$, which modifies the TE, EE, BB, TB and EB power spectra at the last scattering surface as: \begin{eqnarray} C_{l}^{'TB} &=& C_{l}^{TE}\sin(2\Delta\alpha)~, \\ C_{l}^{'EB} &=& \frac{1}{2}(C_{l}^{EE}-C_{l}^{BB})\sin(4\Delta\alpha)~,\\ C_{l}^{'TE} &=& C_{l}^{TE}\cos(2\Delta\alpha)~, \\ C_{l}^{'EE} &=& C_{l}^{EE}\cos^2(2\Delta\alpha) + C_{l}^{BB}\sin^2(2\Delta\alpha)~,\\ C_{l}^{'BB} &=& C_{l}^{BB}\cos^2(2\Delta\alpha) + C_{l}^{EE}\sin^2(2\Delta\alpha)~, \end{eqnarray} where the primed quantities are the rotated ones. In Ref.\cite{Feng:2006dp} we performed a global fit and found that a nonzero rotation angle of the photons is mildly favored, $\Delta\alpha=-6.0\pm4.0$ deg, using the WMAP3 data (without the information of the TB and EB power spectra) and the full data of Boomerang-2K2\footnote{The WMAP group had not released the TB and EB data when we prepared our paper \cite{Feng:2006dp}. We had to set the TB and EB power spectra of the WMAP3 data to zero, $C^{TB}_l=0$ and $C^{EB}_l=0$. Recently Cabella \emph{et al.} \cite{Cabella:2007br} performed a wavelet analysis of the temperature and polarization maps of the CMB delivered by the WMAP experiment which includes the information of the TB and EB power spectra. They obtained a limit on the CMB photon rotation angle of $\Delta\alpha=-2.5\pm3.0$ deg. The WMAP group has now released its results for the TB and EB power spectra. We plan to combine the full WMAP3 data (including the information of the TB and EB power spectra) and Boomerang-2K2 to reanalyze the rotation angle in the future \cite{xia}.}.
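A quick consistency check of these rotation formulas is that they leave two quantities exactly invariant: $C_l^{EE}+C_l^{BB}$ and $(C_l^{TE})^2+(C_l^{TB})^2$. A minimal sketch, with arbitrary illustrative input values:

```python
import math

def rotate(cl_te, cl_ee, cl_bb, alpha_deg):
    """Apply the cosmological birefringence rotation by alpha (degrees)
    to one multipole's TE/EE/BB spectra; returns (TE', EE', BB', TB', EB')."""
    a = math.radians(alpha_deg)
    c2, s2 = math.cos(2.0 * a), math.sin(2.0 * a)
    te = cl_te * c2
    ee = cl_ee * c2 ** 2 + cl_bb * s2 ** 2
    bb = cl_bb * c2 ** 2 + cl_ee * s2 ** 2
    tb = cl_te * s2
    eb = 0.5 * (cl_ee - cl_bb) * math.sin(4.0 * a)
    return te, ee, bb, tb, eb
```

In particular, a small rotation generates TB and EB power linear in $\Delta\alpha$ out of the much larger TE and EE spectra, which is why these cross-spectra are such sensitive probes of the angle.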
In Fig.\ref{fig7}, using the simulated data with Planck and CMBpol accuracy, we find that the standard deviation of the rotation angle will be greatly tightened, to $\sigma=0.057$ deg for Planck and $\sigma=2.57\times10^{-3}$ deg for CMBpol. These results are much more stringent than the current constraint and can be used to test the \emph{CPT} symmetry with a higher precision \cite{Feng:2004mq}. \section{Summary} \label{Sum} Since the mystery of our Universe is encoded in the cosmological parameters, constraining these parameters with the latest observational data and forecasting errors with simulated futuristic data can lead us to a better understanding of Nature. In this paper, we focus on the dynamics of dark energy in light of current and simulated Planck and SNAP data and then constrain the inflationary parameters, the total neutrino mass and the curvature of space-time in the framework of dynamical dark energy models. In addition, we investigate the rotation angle $\Delta\alpha$, a possible signature of \emph{CPT} violation, with simulated Planck and CMBpol data. Parameterizing the EoS of dark energy in the two forms of Eqs.(\ref{EOS1},\ref{EOS2}), we find that the Quintom model, whose EoS crosses $-1$ during the evolution, is mildly favored by the latest observations, although the $\Lambda$CDM model remains a good fit. Using the simulated Planck data complemented by SNAP data, we find that the variance of the dark energy parameters in Eq.(\ref{EOS1}) decreases dramatically, namely, $\sigma(w_0)$ and $\sigma(w_1)$ can be reduced by factors of $3.33$ and $5.2$, respectively. Given the current central value, this means that Planck can distinguish dynamical dark energy from the $\Lambda$CDM model at around the 4$\sigma$ confidence level.
Since the dynamics of dark energy greatly affects the determination of other cosmological parameters, we constrain the inflationary parameters, the total neutrino mass $\sum m_{\nu}$ and the curvature of space-time $\Omega_{K}$ in the presence of dynamical dark energy using current and simulated observational data. We find that the dynamics of dark energy generally weakens the determination of the other cosmological parameters. For instance, we find that inflation models with a ``blue'' tilt ($n_{s}>1$), which are strongly disfavored in the $\Lambda$CDM model, are now well within the $2\sigma$ region in the framework of dynamical dark energy models. Intriguingly, hybrid inflation models are thus revived by dynamical dark energy, owing to the clearly enlarged parameter space of ($n_s$,$r$). These discoveries can lead us to further study the dynamics of inflation, of dark energy and the relationship between them. With Planck, the uncertainties of the inflationary parameters, including $n_{s},~\alpha_{s},~r$, can be roughly reduced by a factor of $5$. This significant improvement makes it possible to discriminate among inflationary models decisively. The presence of dynamical dark energy can relax the upper limit on the neutrino mass by a factor of $2$. This sheds new light on the research of neutrino physics, such as the scenario of mass varying neutrinos \cite{mvn1,mvn2}. With Planck, we can raise the precision of this mass limit by a factor of $12$, namely, the standard deviation of the neutrino mass can be shrunk to $0.077~eV$ for the $\Lambda$CDM model and $0.179~eV$ for the dynamical dark energy model (Para~I). Such a measurement might prove decisive for our understanding of neutrino physics. Current data imply that our Universe is very close to flat, with $|\Omega_{K}|<0.03~(95\%~C.L.)$. With Planck, we can reduce $\sigma(\Omega_{K})$ by a factor of $2.67$ ($\Lambda$CDM).
Such an improvement helps us to reveal the nature of space-time. The global symmetry of \emph{CPT} plays a critical role in understanding fundamental physics. In our previous works we have found some hints of \emph{CPT} violation encoded in the rotation angle of the polarization vector of photons, $\Delta\alpha$. With Planck and CMBpol, we can test the \emph{CPT} symmetry with unprecedented accuracy. {\bf{Acknowledgments:}} We acknowledge the use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA). Support for LAMBDA is provided by the NASA Office of Space Science. We have performed our numerical analysis on the Shanghai Supercomputer Center (SSC). We are grateful to Laurence Perotto for discussions related to the simulation of lensed CMB power spectra. We thank Yi-Fu Cai, Zu-Hui Fan, Pei-Hong Gu, Steen Hannestad, Hiranya Peiris, Yun-Song Piao, Levon Pogosian, Tao-Tao Qiu and Douglas Scott for helpful discussions. This work is supported in part by the National Natural Science Foundation of China under Grant Nos. 90303004, 10533010 and 10675136 and by the Chinese Academy of Science under Grant No. KJCX3-SYW-N2.
\section{Introduction} In this paper we present some sensitivity results for the eigenvalues of the time-harmonic Maxwell's equations in a cavity upon perturbations of the permittivity parameter. The cavity is represented by a bounded domain (i.e. a connected open subset) $\Omega $ of $\mathbb{R}^3$, and it is thought of as being made of a material which in general is inhomogeneous and anisotropic. Accordingly, the permittivity $\varepsilon$ of the medium filling the domain $\Omega$ is described by a $(3 \times 3)$-matrix valued function in $\Omega$. In particular cases where the material presents additional properties and symmetries, the permittivity $\varepsilon$ takes simpler forms, for example becoming scalar in the case of an isotropic material, or even a scalar constant if the medium is both isotropic and homogeneous. The eigenfrequency problem in a bounded domain $\Omega \subset \mathbb{R}^3$ consists in finding two non-zero eigenfields $E,H$ and a non-zero eigenfrequency $\omega$ (also called angular frequency) such that the time-harmonic Maxwell's equations are satisfied in $\Omega$, namely \begin{equation}\label{sys:max} \mathrm{curl} E -{\rm i}\,\omega \mu H =0, \qquad \mathrm{curl} H +{\rm i}\,\omega \varepsilon E=0 \qquad \mbox{ in } \Omega. \end{equation} The vector field $E$ denotes the electric part of the electromagnetic field, while $H$ the magnetic one. Furthermore, $\varepsilon$ and $\mu$ are $(3 \times 3)$-matrix valued maps which represent the electric permittivity and the magnetic permeability tensors of the medium, respectively. For the sake of simplicity, since we are interested in studying the behavior of problem \eqref{sys:max} upon variation of $\varepsilon$, we normalize the permeability to have $\mu =I_3$, where $I_3$ denotes the $(3 \times 3)$-identity matrix.
By applying the $\mathrm{curl}$ operator to the first equation of \eqref{sys:max} and setting $\omega^2 = \lambda$ we obtain \begin{equation*} \operatorname{curl}\operatorname{curl} E = \lambda \varepsilon E\qquad \mbox{ in } \Omega. \end{equation*} Since the divergence of a $\mathrm{curl}$ is always zero, we also have \[ \mathrm{div}\, \varepsilon E = 0 \qquad \mbox{ in } \Omega. \] We couple the system with the boundary conditions of a perfect conductor, which for the electric field $E$ read as follows: \begin{equation} \label{introluz:elec:bc} \nu \times E = 0 \qquad \mbox{ on } \partial \Omega. \end{equation} Here $\nu$ denotes the outer unit normal vector to the boundary of $\Omega$; hence condition \eqref{introluz:elec:bc} means that the electric field is orthogonal to the surface $\partial \Omega$. Therefore, we arrive at the following (electric) eigenvalue problem: \begin{equation}\label{prob:eig} \begin{cases} \operatorname{curl}\operatorname{curl} E = \lambda \, \varepsilon E, \qquad &\mbox{ in } \Omega,\\ \mathrm{div}\, \varepsilon E = 0, & \mbox{ in } \Omega,\\ \nu \times E = 0, \qquad &\mbox{ on } \partial \Omega. \end{cases} \end{equation} Note that it is also possible to obtain the magnetic counterpart of problem \eqref{prob:eig}. However, in the present work, we will devote our attention to the electric problem. The spectrum of problem \eqref{prob:eig} is discrete (cf. \cite[Thm. 4.34]{kihe}) and it consists of a divergent sequence of $\varepsilon$-dependent non-negative eigenvalues $\{\lambda_j[\varepsilon]\}_{j \in \mathbb{N}}$ of finite multiplicity that can be arranged in an increasing way \[ 0\leq \lambda_1[\varepsilon] \leq \lambda_2[\varepsilon] \leq \cdots \leq \lambda_n[\varepsilon] \leq \cdots \nearrow +\infty, \] where each eigenvalue is repeated in accordance with its multiplicity. The kernel $K^\varepsilon(\Omega)$ of problem \eqref{prob:eig}, i.e.
those eigenfields associated with $\lambda =0$, is composed of curl-free vector fields which are normal to the boundary and such that $\operatorname{div}\varepsilon E=0$ in $\Omega$, namely \begin{equation} \label{zero:eigenspace:eps} K^\varepsilon(\Omega) = \set{E \in L^2(\Omega)^3 : \operatorname{curl}E=0 \text{ in }\Omega,\, \operatorname{div}\varepsilon E =0\text{ in }\Omega,\, \nu \times E=0 \text{ on }\partial\Omega}. \end{equation} If $m \in \mathbb{N}$ is the number of connected components of the boundary of $\Omega$, then $\mathrm{dim}_\mathbb{R}K^\varepsilon(\Omega)=m-1$. In particular, if $\partial\Omega$ has only one connected component, $K^\varepsilon(\Omega)=\{0\}$. It is worth noting that the presence of the zero eigenvalue, and its multiplicity, depends only on the topology of $\Omega$. For a proof we refer to \cite[Prop. 6.1.1]{AsCiLa18} (see also \cite[Prop. 3.18]{abdg}, \cite[Ch. IX-A \S 1.3]{DaLi90}). On the one hand, the aim of our work is to understand the dependence of all the eigenvalues $\lambda_j[\varepsilon]$, both simple and multiple, with respect to variations of the permittivity $\varepsilon$. On the other hand, as a consequence of our sensitivity analysis, we prove that all the eigenvalues are generically simple with respect to $\varepsilon$. The mathematical study of Maxwell's equations and in particular of electromagnetic cavities is of great interest not only from the theoretical side but also for its real-world applications, for example in designing cavity resonators or shielding structures for electronic circuits. Here we mention, without any claim of completeness, the monographs \cite{Ce96, DaLi90, GiRa86, Mo03, Ne01,RoStYa12} and the classical papers \cite{Co90, Co91, CoDo99} for a complete introduction to this field and a detailed discussion of both theoretical and applied problems in the mathematical theory of electromagnetism. For more recent papers we refer to, e.g., \cite{AmBaWo00, BaPaSc16, CoCoMo19, LaSt19, Pa17, LaZa21}.
Incidentally, we note that in \cite{LaZa20} Lamberti and the second named author have considered the eigenvalues of problem \eqref{prob:eig} with fixed and constant permittivity $\varepsilon = I_3$ on a variable domain and proved a real analytic dependence upon variation of the shape of the domain. To the best of the authors' knowledge, the dependence of the eigenvalues $\lambda_j[\varepsilon]$ upon perturbation of $\varepsilon$ has not yet been investigated. As a first step, we consider the stability of the eigenvalues and we show that all the eigenvalues, both simple and multiple, are continuous with respect to $\varepsilon$ varying in $W^{1,\infty}$. Actually, we are able to prove a stronger result: we show that the eigenvalues are locally Lipschitz continuous in $\varepsilon$ (see Theorem \ref{thm:loclip}). Then we pass to consider higher regularity properties. At this stage we face a first issue related to bifurcation phenomena of multiple eigenvalues, which is common to any eigenvalue problem depending on a parameter. In our case, if one has a multiple eigenvalue $\lambda= \lambda_{j}[\varepsilon]=\cdots = \lambda_{j+m-1}[\varepsilon]$ and $\varepsilon$ is slightly perturbed, $\lambda$ could split into different eigenvalues of lower multiplicity and thus the corresponding branches can present a corner at the splitting point, and hence fail to be differentiable. A possible way to overcome this problem is to consider only perturbations $\{\varepsilon_t\}_{t \in \mathbb{R}}$, with $\varepsilon_0=\varepsilon$, depending on a single scalar parameter $t \in \mathbb{R}$ and consider the one-sided derivative of the multiple eigenvalue at $t=0$ (see, e.g., \cite{DjFaWe21,FaWe18} for the one-sided shape derivative of a (possibly multiple) eigenvalue for two different problems). Note that this approach, being based on the variational characterization of the eigenvalues, has been effectively applied only to the first non-zero eigenvalue.
Here we adopt a different point of view that allows us to deal with multiple eigenvalues and general (infinite dimensional) perturbations of the permittivity: instead of considering a single eigenvalue we consider the symmetric functions of multiple eigenvalues and we show that they depend real analytically on $\varepsilon$. In addition, we provide an explicit formula for the (Fr\'echet) derivative in $\varepsilon$ of the symmetric functions of the eigenvalues (see Theorem \ref{thm:diffeps}). This suggests that the symmetric functions are natural quantities to consider when dealing with the regularity (and the optimization) of multiple eigenvalues. This approach was introduced by Lamberti and Lanza de Cristoforis \cite{LaLa04} and later adopted in other works (see, e.g., \cite{BuLa13, BuLa15, LaZa20, LaLuMu21, LaMuTa21}). We also consider the case of perturbations depending on a single scalar parameter, like the ones we introduced above, and we prove a Rellich-Nagy-type theorem which describes the bifurcation phenomenon of multiple eigenvalues. More precisely, we show that all the eigenvalues splitting from a multiple eigenvalue of multiplicity $m$ can be described by $m$ real analytic functions of the scalar parameter (see Theorem \ref{thm:RN}). As an application of the above described results, we show that all the non-zero eigenvalues of problem \eqref{prob:eig} are simple for a generic permittivity. That is, in a few words, given any permittivity $\varepsilon$ it is always possible to find a perturbation $\tilde \varepsilon$ as close as desired to $\varepsilon$ such that the non-zero eigenvalues $\{\lambda_j[\tilde \varepsilon]\}_{j \in \mathbb{N}}$ are all simple (see Theorem \ref{thm:gensim}). To a certain extent, our work is inspired by Lamberti \cite{La09} and Lamberti and Provenzano \cite{LaPr13} where the authors investigate the behavior of the eigenvalues of the Laplacian and of general elliptic operators upon perturbations of the mass density.
Incidentally, we mention that this paper is a first step towards understanding the behavior of Maxwell eigenvalues upon permittivity variations. In particular, the authors plan to investigate issues related to the optimization of Maxwell eigenvalues with respect to $\varepsilon$ in a future work. After the present introduction the paper is organized as follows: Section \ref{sec:pre} contains preliminaries: notation, the functional setting and basic results about the eigenvalue problem. In Section \ref{sec:cont} we prove that all the eigenvalues are locally Lipschitz continuous in $\varepsilon$. In Section \ref{sec:an} we show that the symmetric functions of the eigenvalues depend analytically upon $\varepsilon$ and we obtain an explicit formula for the $\varepsilon$-derivative. Moreover, we prove a Rellich-Nagy-type result for permittivities depending on a single scalar parameter. Finally, in Section \ref{sec:gen} we show that all the non-zero eigenvalues are simple for a generic permittivity. \section{Some preliminaries}\label{sec:pre} If $\mathcal{X}$ is a Hilbert space of scalar functions, by $\mathcal{X}^3$ we denote the Hilbert space of vector-valued functions whose components belong to $\mathcal{X}$, endowed with the natural inner product \begin{equation*} \langle f,g \rangle_{\mathcal{X}^3} = \sum_{i=1}^3 \langle f_i,g_i \rangle_\mathcal{X} \end{equation*} for all $f=(f_1,f_2,f_3)$, $ g=(g_1,g_2,g_3) \in \mathcal{X}^3$, where $\langle \cdot, \cdot \rangle_{\mathcal{X}}$ is the inner product of $\mathcal{X}$. In this sense, e.g., if $L^2(\Omega)$ is the standard Lebesgue space of square-integrable real-valued functions, then the space $L^2(\Omega)^3$ is endowed with the inner product \[ \int_{\Omega} u \cdot v \,dx = \int_{\Omega} (u_1v_1+u_2v_2+u_3v_3) \,dx \qquad \forall u,v \in L^2(\Omega)^3. \] Let $\Omega$ be a bounded domain of $\mathbb{R}^3$.
We denote by $L^\infty(\Omega)^{3 \times 3}$ and $W^{1,\infty}(\Omega)^{3 \times 3}$ the spaces of real matrix-valued functions $M=\left( M_{ij} \right)_{1 \leq i,j \leq 3}:\Omega \to \mathbb{R}^{3 \times 3}$ whose components are in $L^\infty(\Omega)$ and $W^{1,\infty}(\Omega)$, respectively. We endow these spaces with the norms \begin{equation*} \norm{M}_{L^\infty(\Omega)^{3 \times 3}} := \max_{1 \leq i,j \leq 3} \norm{M_{ij}}_{L^\infty(\Omega)} \end{equation*} and \begin{equation*} \norm{M}_{W^{1,\infty}(\Omega)^{3 \times 3}} := \max_{1 \leq i,j \leq 3} \norm{M_{ij}}_{W^{1,\infty}(\Omega)}. \end{equation*} For the sake of simplicity, we will respectively write $L^\infty(\Omega)$ and $W^{1,\infty}(\Omega)$ instead of $L^\infty(\Omega)^{3 \times 3}$ and $W^{1,\infty}(\Omega)^{3 \times 3}$, and the space we are referring to will be clear from the context. Let $M \in L^\infty(\Omega)$. One has the following trivial inequalities that we will exploit in the paper: \begin{equation*} \abs{M \xi \cdot \xi} \leq 3 \norm{M}_{L^\infty(\Omega)} \abs{\xi}^2, \qquad \abs{M \xi} \leq 3 \norm{M}_{L^\infty(\Omega)} |\xi| \end{equation*} for all $\xi \in \mathbb{R}^3$ and a.e. in $\Omega$. In order to consider our eigenvalue problem, we first need to specify where we take the permittivities $\varepsilon$. From now on we will assume that: \begin{equation}\label{Omega_def} \text{$\Omega$ is a bounded domain of $\mathbb{R}^3$ of class $C^{1,1}$.} \end{equation} The admissible set where we take the permittivities is the following \begin{align*} \mathcal{E}:= \Big\{\varepsilon \in W^{1,\infty} \left(\Omega\right) \cap \,\, &\mathrm{Sym}_3 (\Omega) : \\ &\exists \, c>0 \text{ s.t. } \varepsilon(x) \, \xi \cdot \xi \geq c \, \abs{\xi}^2 \text{ for a.a. } x \in \Omega, \text{ for all }\xi \in \mathbb{R}^3\Big\}, \end{align*} where $\mathrm{Sym}_3 (\Omega)$ denotes the set of $(3 \times 3)$-symmetric matrix valued functions in $\Omega$. 
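The first of the trivial inequalities above follows from $|M\xi\cdot\xi| \le \max_{i,j}|M_{ij}| \left(\sum_i |\xi_i|\right)^2 \le 3\max_{i,j}|M_{ij}|\, |\xi|^2$, the last step being the Cauchy-Schwarz inequality. As a purely illustrative numerical sanity check (not part of the argument), one can verify the pointwise bound on randomly generated symmetric matrices:

```python
import random

def quad_form(M, x):
    """(M x) . x for a 3x3 matrix M and a vector x in R^3."""
    return sum(M[i][j] * x[j] * x[i] for i in range(3) for j in range(3))

def check_pointwise_bound(trials=500, seed=0):
    """Check |M xi . xi| <= 3 max_ij |M_ij| |xi|^2 on random symmetric M."""
    rng = random.Random(seed)
    for _ in range(trials):
        A = [[rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
        # symmetrize to stay in Sym_3
        M = [[0.5 * (A[i][j] + A[j][i]) for j in range(3)] for i in range(3)]
        x = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        norm_sq = sum(t * t for t in x)
        bound = 3.0 * max(abs(M[i][j]) for i in range(3) for j in range(3)) * norm_sq
        if abs(quad_form(M, x)) > bound + 1e-12:
            return False
    return True
```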
Given $\varepsilon \in \mathcal{E}$, we denote by $c_\varepsilon >0$ the greatest positive constant that guarantees the coercivity condition in the above definition, that is \begin{equation}\label{def:cc} c_\varepsilon := \max\Big\{c>0 : \varepsilon(x) \, \xi \cdot \xi \geq c \, \abs{\xi}^2 \text{ for a.a. } x \in \Omega, \text{ for all }\xi \in \mathbb{R}^3\Big\}. \end{equation} The set $\mathcal{E}$ is open in $W^{1,\infty}(\Omega)\cap \,\, \mathrm{Sym}_3 (\Omega)$. This is implied by the continuity of the map \begin{equation*} \left( \mathcal{E}, \|\cdot\|_{L^{\infty}(\Omega)} \right) \to \mathbb{R}_+, \qquad \varepsilon \mapsto c_\varepsilon. \end{equation*} Indeed, let $\varepsilon_1, \varepsilon_2 \in \mathcal{E}$. Since $\abs{(\varepsilon_2 - \varepsilon_1) \, \xi \cdot \xi} \leq 3 \norm{\varepsilon_2 - \varepsilon_1}_{L^\infty(\Omega)} \abs{\xi}^2$ a.e. in $\Omega$, then \begin{equation*} \varepsilon_2 \, \xi \cdot \xi = \varepsilon_1 \, \xi \cdot \xi + (\varepsilon_2 - \varepsilon_1) \, \xi \cdot \xi \geq \left(c_{\varepsilon_1} - 3 \norm{\varepsilon_2 - \varepsilon_1}_{L^\infty(\Omega)} \right) \abs{\xi}^2. \end{equation*} Hence $c_{\varepsilon_2} \geq c_{\varepsilon_1} - 3 \|\varepsilon_2 - \varepsilon_1\|_{L^\infty(\Omega)}$. By possibly exchanging the roles of $\varepsilon_1$ and $\varepsilon_2$, we have that \begin{equation}\label{contcoerc} \abs{c_{\varepsilon_2} - c_{\varepsilon_1}} \leq 3 \norm{\varepsilon_2 - \varepsilon_1}_{L^\infty(\Omega)}. \end{equation} \begin{comment} We will actually show more, namely that if $\varepsilon_k \xrightarrow[k \to \infty]{L^\infty} \varepsilon$, then $\lim\inf_{k\to \infty} c_k \geq c$. Suppose by contradiction that there exists $0<\delta < c/2$ such that for all $n \in \mathbb{N}$ there exists a $k_n = k \geq n$ such that $c_k \leq c-2 \delta$.
Then, considering the sequence $\{c_{k} \}_{k \in \mathbb{N}}$, we would have that for every $k \in \mathbb{N}$ there exists a $0 \neq \tilde{\xi}_k = \tilde{\xi} \in \mathbb{R}^3$ such that \begin{equation} c_m |\tilde{\xi}|^2 \leq \varepsilon_k \, \tilde{\xi} \cdot \tilde{\xi} \leq (c-\delta) |\tilde{\xi}|^2. \end{equation} But then, since $\abs{(\varepsilon_k -\varepsilon) \, \tilde{\xi} \cdot \tilde{\xi}} \leq \norm{\varepsilon_k - \varepsilon}_{L^\infty(\Omega)} |\tilde{\xi}|^2 \leq \delta/2 |\tilde{\xi}|^2$ for $k$ sufficiently large, we have that \begin{equation} (c - \delta/2) |\tilde{\xi}|^2 \leq \varepsilon \, \tilde{\xi} \cdot \tilde{\xi} + (\varepsilon_k - \varepsilon) \, \tilde{\xi} \cdot \tilde{\xi} = \varepsilon_k \, \tilde{\xi} \cdot \tilde{\xi} \leq (c- \delta) |\tilde{\xi}|^2, \end{equation} which in turn implies that $\delta > 2 \delta$, a contradiction. \end{comment} Let $\varepsilon \in \mathcal{E}$. We denote by $L^2_\varepsilon(\Omega)$ the space $L^2(\Omega)^3$ endowed with the inner product \begin{equation} \label{eps:inner:product} \langle u, v \rangle_\varepsilon = J_\varepsilon[u][v]:= \int_\Omega \varepsilon u \cdot v \, dx \qquad \forall u,v \in L^2(\Omega)^3. \end{equation} Note that the above inner product induces a norm equivalent to the standard $L^2$-norm since \begin{equation*} c_{\varepsilon} \int_\Omega \abs{u}^2 dx \leq \int_\Omega \varepsilon u \cdot u \, dx \leq 3 \norm{\varepsilon}_{L^\infty(\Omega)} \int_\Omega \abs{u}^2 dx \qquad \forall u \in L^2(\Omega)^3. \end{equation*} Next we introduce the natural functional setting and tools in order to deal with problem \eqref{prob:eig}. By $H(\mathrm{curl}, \Omega)$ we denote the space of vector fields $ u \in L^2(\Omega)^3$ with distributional $\mathrm{curl}$ in $L^2(\Omega)^3$, i.e. 
those square integrable vector fields for which there exists a function $\mathrm{curl}\, u \in L^2(\Omega)^3$ such that \begin{equation} \label{distrib:eps:div:dfn} \int_\Omega u \cdot \operatorname{curl}\varphi \, dx = \int_\Omega \operatorname{curl} u \cdot \varphi \, dx \qquad \forall \varphi \in C^\infty_c(\Omega)^3. \end{equation} We endow this space with the inner product \begin{equation*} \langle u,v\rangle_{H(\operatorname{curl}, \Omega)} := \int_\Omega \varepsilon u \cdot v \, dx + \int_\Omega \operatorname{curl}u \cdot \operatorname{curl}v \, dx \qquad \forall u,v \in H(\operatorname{curl},\Omega), \end{equation*} which makes it a Hilbert space. By $H_0(\mathrm{curl}, \Omega)$ we denote the closure of $C^\infty_c(\Omega)^3$ in $H(\mathrm{curl}, \Omega)$. If a vector field $u$ is regular enough to be traced on the boundary, say it is smooth up to the boundary, then the \emph{tangential trace} of $u$ coincides exactly with the cross product between the outer unit normal and its restriction to $\partial \Omega$, i.e. $\nu \times u\rvert_{\partial \Omega}$. From now on we use the same notation also to denote the tangential trace of a vector field $u \in H(\operatorname{curl}, \Omega)$, which in general is just an element of the dual space of $H^{1/2}(\partial \Omega)^3$ (see \cite[Thm. 2.11]{GiRa86}). We will also often omit the boundary restriction subscript. It turns out that $H_0(\operatorname{curl},\Omega)$ is exactly the space of those vector fields whose tangential trace vanishes on $\partial\Omega$ (cf. \cite[Thm. 2.12]{GiRa86}), i.e. \begin{equation*} H_0(\operatorname{curl},\Omega) = \set{u \in H(\operatorname{curl},\Omega) : \nu \times u\rvert_{\partial \Omega}=0}, \end{equation*} hence it naturally encodes the electric boundary condition \eqref{introluz:elec:bc}. For more details we refer to \cite[Ch. 2]{GiRa86} or \cite[Ch. IX-A \S 1.2]{DaLi90}.
Similarly, we introduce the space $H(\operatorname{div} \varepsilon, \Omega)$ of vector fields $u\in L^2(\Omega)^3$ such that the vector field $\varepsilon u$ has distributional divergence in $L^2(\Omega)$, namely there exists a function $\operatorname{div}(\varepsilon u) \in L^2(\Omega)$ such that \begin{equation*} \int_\Omega \varepsilon u \cdot \nabla \varphi \, dx = - \int_\Omega \operatorname{div}(\varepsilon u) \, \varphi \, dx \qquad \forall\varphi \in C^\infty_c(\Omega). \end{equation*} We endow $H(\operatorname{div}\varepsilon, \Omega)$ with the inner product \begin{equation*} \langle u,v\rangle_{H(\operatorname{div} \varepsilon, \Omega)} := \int_\Omega \varepsilon u \cdot v \, dx + \int_\Omega \operatorname{div}(\varepsilon u) \operatorname{div}(\varepsilon v) \, dx \qquad \forall u,v \in H(\operatorname{div}\varepsilon,\Omega), \end{equation*} which makes it a Hilbert space. Moreover, we consider the space \[ X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega) := H_0(\mathrm{curl}, \Omega) \cap H(\mathrm{div}\,\varepsilon, \Omega) \] equipped with inner product \begin{equation*} \langle u,v\rangle_{X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)} := \int_\Omega \varepsilon u \cdot v \, dx + \int_\Omega \operatorname{curl}u \cdot \operatorname{curl}v \, dx + \int_\Omega \operatorname{div} (\varepsilon u) \, \operatorname{div}(\varepsilon v) \, dx \end{equation*} for all $u,v \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$. Finally, we set \begin{equation*} \begin{split} X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega) &:= \set{u \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega) : \mathrm{div} \, (\varepsilon u) = 0} \\ &= \set{u \in L^2(\Omega)^3 : \operatorname{curl}u \in L^2(\Omega)^3, \operatorname{div}(\varepsilon u)=0, \nu \times u\rvert_{\partial \Omega}=0}. \end{split} \end{equation*} If $\varepsilon \in \mathcal{E}$ and the assumption \eqref{Omega_def} holds, i.e. 
if $\Omega$ is a bounded domain of $\mathbb{R}^3$ of class $C^{1,1}$, the space $X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ is continuously embedded into $H^1(\Omega)^3$. This is implied by the so-called Gaffney (or Gaffney-Friedrichs) inequality, which states that there exists a constant $C_\varepsilon >0$ such that \begin{equation}\label{ineq:GF} \norm{u}_{H^1(\Omega)^3}^2 \leq C_\varepsilon \left( \langle \varepsilon u, u\rangle_{L^2(\Omega)^3}+ \norm{\operatorname{curl}u}^2_{L^2(\Omega)^3} + \|\operatorname{div}\varepsilon u\|^2_{L^2(\Omega)} \right) = C_\varepsilon \|u\|^2_{X_{\rm \scriptscriptstyle N}^\varepsilon (\Omega)} \end{equation} for all $u \in X_{\rm \scriptscriptstyle N}^\varepsilon (\Omega)$. We refer to Prokhorov and Filonov \cite[Thm. 1.1]{PrFi15} for a proof of the above inequality. Their result includes more general permittivities and domains, such as convex domains or, more generally, Lipschitz domains satisfying the exterior ball condition. Another proof can be found in Alberti and Capdeboscq \cite{AlCa14}. Other classical references for the Gaffney inequality are Saranen \cite{Sa83} and Mitrea \cite{Mit01}. More recently, Creo and Lancia \cite{CrLa20} generalized the Gaffney inequality to more irregular domains in dimension $2$ and $3$. Incidentally, we point out that one of the main reasons for the regularity assumption \eqref{Omega_def} we require on $\Omega$ is exactly the validity of \eqref{ineq:GF}. Note that if we lower the regularity assumptions, for example requiring $\Omega$ to be just of Lipschitz class, there is no guarantee that the same perturbation results hold. The authors plan to address this issue in future works.
We recall here a known formula for the divergence of the matrix-vector product $\varepsilon v$ with $\varepsilon \in \mathcal{E}$ and $v \in H^1(\Omega)^3$ that we will exploit extensively throughout the paper: \begin{equation} \label{div:matrixxvector} \operatorname{div}(\varepsilon v) = \operatorname{tr}(\varepsilon Dv) + \operatorname{div}\varepsilon \cdot v \qquad \mbox{ a.e. in } \Omega, \end{equation} where $\operatorname{tr}(\cdot)$ denotes the matrix trace and $\operatorname{div}\varepsilon$ is the vector field defined by \begin{equation*} \operatorname{div}\varepsilon := \left( \operatorname{div}\varepsilon^{(1)}, \operatorname{div}\varepsilon^{(2)}, \operatorname{div}\varepsilon^{(3)} \right), \end{equation*} with $\varepsilon^{(k)}$ denoting the $k$-th column of $\varepsilon = \left(\varepsilon^{(1)} | \, \varepsilon^{(2)} | \, \varepsilon^{(3)} \right)$. Recall the electric eigenvalue problem \begin{equation}\label{prob:eigen1} \begin{cases} \operatorname{curl}\operatorname{curl} u = \lambda \, \varepsilon \, u \qquad &\mbox{ in } \Omega,\\ \operatorname{div}\varepsilon u = 0 \qquad &\mbox{ in } \Omega,\\ \nu \times u = 0 \qquad &\mbox{ on } \partial \Omega. \end{cases} \end{equation} By classical integration by parts, one has that \begin{equation*} \int_\Omega \operatorname{curl}F \cdot G \, dx = \int_\Omega F \cdot \operatorname{curl}G \, dx + \int_{\partial \Omega} (G \times \nu) \cdot F \, d\sigma \end{equation*} for all sufficiently regular vector fields $F,G$ (see, e.g., \cite[Thm. A.13]{kihe}).
It is then readily seen that the weak formulation of problem \eqref{prob:eigen1} is \begin{equation}\label{prob:eigen1weak} \int_{\Omega} \mathrm{curl}\,u \cdot \mathrm{curl}\, v \,dx =\lambda\int_{\Omega}\varepsilon u\cdot v \,dx \qquad \forall v \in X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega), \end{equation} in the unknowns $\lambda \in \mathbb{R}$ (the eigenvalues) and $u \in X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega)$ (the eigenvectors). The eigenvalues of problem \eqref{prob:eigen1weak} are non-negative, as one can easily see by testing the eigenfunction $u$ against itself. For our purposes it will be convenient to work in the space $X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ rather than $X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega)$. Hence, following Costabel \cite{Co91} and Costabel and Dauge \cite{CoDo99}, we consider the following eigenvalue problem, which presents an additional penalty term: \begin{equation}\label{prob:eigen2weak} \int_{\Omega} \mathrm{curl}\,u \cdot \mathrm{curl}\,v \,dx + \tau \int_{\Omega} \operatorname{div}(\varepsilon u )\, \operatorname{div}(\varepsilon v) \,dx = \sigma \int_{\Omega}\varepsilon u\cdot v \,dx \quad \forall v \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega), \end{equation} in the unknowns $u \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ and $\sigma \in \mathbb{R}$. Here $\tau>0$ is any fixed positive real number. Solutions of problem \eqref{prob:eigen1weak} will then correspond to solutions $u$ of \eqref{prob:eigen2weak} with $\operatorname{div}(\varepsilon u)=0$ in $\Omega$ (see Theorem \ref{thm:codau} below). Observe that the eigenvalues $\sigma$ of problem \eqref{prob:eigen2weak} are also non-negative, and that the zero eigenspace of problem \eqref{prob:eigen2weak} (and of problem \eqref{prob:eigen1weak}) coincides with the set $K^\varepsilon(\Omega)$ defined in \eqref{zero:eigenspace:eps}.
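To spell out the non-negativity claims, here is a sketch of the computation, with $c_\varepsilon$ the coercivity constant from \eqref{def:cc}: taking $v=u$ in \eqref{prob:eigen1weak} gives

```latex
% Testing with v = u: the Rayleigh quotient is manifestly non-negative.
\lambda \int_{\Omega} \varepsilon u \cdot u \, dx
  = \int_{\Omega} \abs{\operatorname{curl} u}^2 dx \geq 0,
\qquad
\int_{\Omega} \varepsilon u \cdot u \, dx
  \geq c_\varepsilon \norm{u}^2_{L^2(\Omega)^3} > 0,
```

so that $\lambda \geq 0$; the same test in \eqref{prob:eigen2weak} adds the non-negative term $\tau \int_\Omega \abs{\operatorname{div}(\varepsilon u)}^2 dx$ to the left-hand side, whence $\sigma \geq 0$ as well.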
Following a standard procedure, one can convert problem \eqref{prob:eigen2weak} into an eigenvalue problem for a compact self-adjoint operator. Recall the map $J_\varepsilon$ defined in \eqref{eps:inner:product}, which is nothing but the bilinear form corresponding to the inner product of $L^2_\varepsilon(\Omega)$. Obviously $J_\varepsilon$ can be thought of as an operator acting from $L^2_\varepsilon(\Omega)$ to $(X_{\rm \scriptscriptstyle N}^\varepsilon (\Omega))'$. We define the operator $T_\varepsilon$ from $X_{\rm \scriptscriptstyle N}^\varepsilon (\Omega)$ to its dual $(X_{\rm \scriptscriptstyle N}^\varepsilon (\Omega))'$ by \begin{equation*} T_\varepsilon[u][v]:= \int_\Omega \varepsilon u \cdot v \, dx + \int_\Omega \operatorname{curl}u \cdot \operatorname{curl}v \, dx + \tau \int_\Omega \operatorname{div}(\varepsilon u) \, \operatorname{div}(\varepsilon v) \, dx \quad \forall u,v \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega). \end{equation*} Observe that by the Riesz representation theorem, $T_\varepsilon$ is a homeomorphism from $X_{\rm \scriptscriptstyle N}^\varepsilon (\Omega)$ to its dual and thus it can be inverted. We can therefore define the operator $S_\varepsilon$, acting from $L^2_\varepsilon(\Omega)$ to itself, by setting \begin{equation} \label{operator:dfn} S_\varepsilon := \iota_\varepsilon \circ T_\varepsilon^{-1} \circ J_\varepsilon: L^2_\varepsilon(\Omega) \to L^2_\varepsilon(\Omega), \end{equation} where $\iota_\varepsilon$ denotes the embedding of $X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ into $L^2_\varepsilon (\Omega)$. Observe that the space $L^2_\varepsilon(\Omega)$ is equal to $L^2(\Omega)^3$ as a set, and the varying inner products depending on $\varepsilon$ are all equivalent to the standard one. We then have the following lemma. \begin{lemma}\label{lem:Te} Let $\varepsilon \in \mathcal{E}$. Then the operator $S_\varepsilon$ is a self-adjoint operator from $L^2_\varepsilon(\Omega)$ to itself.
Moreover, $\sigma$ is an eigenvalue of problem \eqref{prob:eigen2weak} if and only if $\mu=(\sigma +1)^{-1}$ is an eigenvalue of the operator $S_\varepsilon$, the eigenvectors being the same. \end{lemma} \begin{proof} Since $J_\varepsilon$ and $T_\varepsilon$ are both symmetric we get that \begin{equation*} \begin{split} J_\varepsilon[S_\varepsilon [u]][v] &= J_\varepsilon [v] [T_\varepsilon^{-1} \circ J_\varepsilon [u]] = T_\varepsilon [T_\varepsilon^{-1} \circ J_\varepsilon [v]] [T_\varepsilon^{-1} \circ J_\varepsilon [u]] \\ &= T_\varepsilon[T_\varepsilon^{-1} \circ J_\varepsilon [u]] [T_\varepsilon^{-1} \circ J_\varepsilon[v]] = J_\varepsilon [u] [S_\varepsilon [v]] \quad \forall u,v \in L^2_\varepsilon(\Omega), \end{split} \end{equation*} proving that $S_\varepsilon$ is self-adjoint in $L^2_\varepsilon(\Omega)$. Finally, if $(\sigma, u) \in \mathbb{R} \times X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ is an eigenpair of problem \eqref{prob:eigen2weak}, then $T_\varepsilon [u]= (\sigma +1) J_\varepsilon[u]$, so that $S_\varepsilon[u] = (\sigma+1)^{-1} u$. Conversely, if $(\mu,u) \in \mathbb{R} \times L^2_\varepsilon(\Omega)$ is such that $S_\varepsilon [u] = \mu u$ then $u \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ and $T_\varepsilon [u] = \mu^{-1} J_\varepsilon [u]$, and thus $u$ is an eigenvector of problem \eqref{prob:eigen2weak} corresponding to the eigenvalue $\sigma=\mu^{-1}-1$. \end{proof} If the space $X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ is compactly embedded into $L^2(\Omega)^3$, which is true under our assumptions on $\varepsilon$ and $\Omega$ (see Weber \cite{We80}), the operator $S_\varepsilon$ is compact and its spectrum consists of $\{0\} \cup \{\mu_n\}_{n \in \mathbb{N}}$, where $\{\mu_n\}_{n \in \mathbb{N}}$ is a decreasing sequence of positive eigenvalues of $S_\varepsilon$ of finite multiplicity converging to zero.
Accordingly, by \Cref{lem:Te}, the spectrum of problem \eqref{prob:eigen2weak} consists of ($\varepsilon$-dependent) non-negative eigenvalues of finite multiplicity, which can be arranged in a non-decreasing sequence \[ 0\leq \sigma_1[\varepsilon] \leq \sigma_2[\varepsilon] \leq \cdots \leq \sigma_n[\varepsilon] \leq \cdots \nearrow +\infty. \] Here each eigenvalue is repeated in accordance with its multiplicity. Note that the zero eigenvalue has fixed multiplicity depending only on the topology of $\Omega$. By the min-max formula every eigenvalue can be variationally characterized as follows: \begin{equation} \label{minmax:formula} \sigma_j[\varepsilon] = \min_{\substack{V_j \subset X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega),\\ \operatorname{dim}V_j = j}} \max_{\substack{u \in V_j,\\u \neq 0}} \frac{\int_{\Omega} \abs{\operatorname{curl}u}^2 dx + \tau \int_\Omega \abs{\operatorname{div}(\varepsilon u)}^2 dx}{\int_{\Omega} \varepsilon u \cdot u \, dx}. \end{equation} Moreover, we have the following result, in the same spirit as Costabel and Dauge \cite[Thm. 1.1]{CoDo99}. \begin{theorem} \label{thm:codau} Let $\Omega$ be as in \eqref{Omega_def}. Let $\varepsilon \in \mathcal{E}$. Then the eigenpairs $(\sigma, u) \in \mathbb{R} \times X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ of problem \eqref{prob:eigen2weak} are given by the following two disjoint families: \begin{itemize} \item[i)] the pairs $( \lambda, u) \in \mathbb{R} \times X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega)$ solutions of problem \eqref{prob:eigen1weak}; \item[ii)] the pairs $(\tau \rho,\nabla f)$ where $(\rho,f) \in \mathbb{R} \times H^1_0(\Omega)$ is an eigenpair of the problem \begin{equation} \label{dirichlet:problem:simillap} \begin{cases} -\operatorname{div}(\varepsilon \nabla f)=\rho f & \text{in }\Omega,\\ f=0 & \text{on }\partial \Omega.
\end{cases} \end{equation} \end{itemize} In particular, the set of eigenvalues of problem \eqref{prob:eigen2weak} are given by the union of the set of eigenvalues of problem \eqref{prob:eigen1weak} and the set of eigenvalues of the operator $-\operatorname{div}(\varepsilon\nabla\cdot)$ with Dirichlet boundary conditions in $\Omega$ multiplied by $\tau$. \end{theorem} \begin{proof} It is easily seen that if $(\lambda, u) \in \mathbb{R} \times X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega)$ is an eigenpair of problem \eqref{prob:eigen1weak}, then it is an eigenpair of problem \eqref{prob:eigen2weak}. Moreover, if $u = \nabla f$, where $f \in H^1_0(\Omega)$ is a solution of problem \eqref{dirichlet:problem:simillap}, then $u \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ solves \eqref{prob:eigen2weak} with $\sigma=\tau\rho$. Conversely, suppose that $(\sigma, u) \in \mathbb{R} \times X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ is an eigenpair of problem \eqref{prob:eigen2weak}. If \[ p:= \mathrm{div}(\varepsilon u) =0, \] then clearly $u \in X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega)$ and solves \eqref{prob:eigen1weak}. Suppose now that $p \neq 0$. We set \[ H^1_0(\Omega,\mathrm{div}(\varepsilon \nabla\cdot)) := \{u \in H^1_0(\Omega): \mathrm{div}(\varepsilon \nabla u) \in L^2(\Omega)\}. \] Then for all $\psi \in H^1_0(\Omega,\mathrm{div}(\varepsilon \nabla\cdot))$, by taking $\nabla \psi$ as test functions in \eqref{prob:eigen2weak} we get \begin{equation*} \int_\Omega \tau \, p \, \operatorname{div}(\varepsilon \nabla \psi) \, dx= \sigma \int_\Omega \varepsilon u \cdot \nabla \psi \, dx = - \sigma \int_\Omega p \, \psi \, dx, \end{equation*} thus \begin{equation} \label{eq:p:and:div} \int_\Omega p \left( \tau \, \operatorname{div}(\varepsilon \nabla \psi) + \sigma \psi \right) \, dx =0. 
\end{equation} Necessarily $\sigma/\tau$ belongs to the spectrum of the operator $-\operatorname{div}(\varepsilon \nabla\cdot)$ with Dirichlet boundary conditions, because if not we could find a $\hat{\psi}$ such that $\operatorname{div}(\varepsilon \nabla \hat{\psi}) + \frac{\sigma}{\tau}\hat{\psi}=p$, hence from \eqref{eq:p:and:div} we would get $p=0$, which is a contradiction. From the Fredholm alternative we deduce that $p$ belongs to the associated eigenspace, thus $p \in H^1_0(\Omega,\mathrm{div}(\varepsilon \nabla\cdot))$ and \begin{equation} \label{diff:equation:for:p} \operatorname{div}(\varepsilon \nabla p) + \frac{\sigma}{\tau} \, p =0. \end{equation} Now, we define the field \[ w := u + \frac{\tau}{\sigma} \, \nabla p \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega). \] If $w=0$ then $u = - \frac{\tau}{\sigma} \, \nabla p$, and recalling \eqref{diff:equation:for:p} one deduces that $(\sigma,u)$ is of the form in ii). Therefore, suppose that $w \neq 0$. Observe that $w$ satisfies \begin{equation*} \operatorname{div}(\varepsilon w)=p + \frac{\tau}{\sigma} \operatorname{div}(\varepsilon \nabla p) =0 \quad \text{and} \quad \operatorname{curl}w = \operatorname{curl}u. \end{equation*} Hence for any $v \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ \begin{equation*} \begin{split} \int_\Omega \operatorname{curl}w \cdot \operatorname{curl}v \, dx &= \int_\Omega \left( \sigma \, \varepsilon u \cdot v - \tau \, p \, \operatorname{div}(\varepsilon v) \right) dx = \int_\Omega ( \sigma \, \varepsilon u + \tau \, \varepsilon \nabla p) \cdot v \, dx \\ &= \sigma \int_\Omega \varepsilon w \cdot v \, dx. \end{split} \end{equation*} Thus the pair $(\sigma, w)$ belongs to the family in i) and $\sigma$ is a multiple eigenvalue of \eqref{prob:eigen2weak}. In this case we can split the eigenspace corresponding to $\sigma$ according to the two families in i) and ii). \end{proof} In view of the previous theorem, we introduce the following definition. 
\begin{definition}\label{defi:maxeig} Let $\Omega$ be as in \eqref{Omega_def}. Let $\varepsilon \in \mathcal{E}$. An eigenvalue $\sigma$ of problem \eqref{prob:eigen2weak} is said to be a \emph{Maxwell eigenvalue} if there exists $u \in X_{\rm \scriptscriptstyle N}^\varepsilon(\mathrm{div}\,\varepsilon 0,\Omega)$, $u \neq 0$, such that $(\sigma, u) $ is an eigenpair of problem \eqref{prob:eigen1weak}. In this case, we say that $u$ is a \emph{Maxwell eigenvector}. We denote the Maxwell eigenvalues by: \[ 0\leq \lambda_1[\varepsilon] \leq \lambda_2[\varepsilon] \leq \cdots \leq \lambda_n[\varepsilon] \leq \cdots \nearrow +\infty, \] where we repeat the eigenvalues in accordance with their (Maxwell) multiplicity, i.e., the dimension of the space generated by the corresponding Maxwell eigenvectors. \end{definition} We stress that the introduction of problem \eqref{prob:eigen2weak} is of a technical nature, meant to bypass the problem of working in $\varepsilon$-dependent spaces; in this paper we are mostly interested in the behavior of the Maxwell eigenvalues. Accordingly, we will focus more on the behavior of $\{\lambda_j[\varepsilon]\}_{j \in \mathbb{N}} \subseteq \{\sigma_j[\varepsilon]\}_{j \in \mathbb{N}}$ than on that of the full sequence $\{\sigma_j[\varepsilon]\}_{j \in \mathbb{N}}$. Note also that the Maxwell eigenvalues $\{\lambda_j[\varepsilon]\}_{j \in \mathbb{N}}$ do not depend upon the choice of the parameter $\tau>0$ multiplying the penalty term of problem \eqref{prob:eigen2weak}, meaning that different values of $\tau$ provide exactly the same Maxwell spectrum. \section{Continuity of the eigenvalues}\label{sec:cont} We first focus on the continuity of the eigenvalues $\sigma_j[\varepsilon]$ of problem \eqref{prob:eigen2weak}, which in particular implies the continuity of the Maxwell eigenvalues $\lambda_j[\varepsilon]$. For the sake of simplicity, in this section we will fix $\tau=1$.
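This choice is harmless as far as the Maxwell spectrum is concerned. As a quick sketch of why, note that if $(\lambda, u)$ is a Maxwell eigenpair, then $\operatorname{div}(\varepsilon u)=0$, so the penalty term vanishes identically and

```latex
% On Maxwell eigenvectors the penalty term drops out, whatever tau > 0 is.
\int_{\Omega} \mathrm{curl}\,u \cdot \mathrm{curl}\,v \,dx
  + \tau \int_{\Omega} \operatorname{div}(\varepsilon u)\, \operatorname{div}(\varepsilon v) \,dx
  = \int_{\Omega} \mathrm{curl}\,u \cdot \mathrm{curl}\,v \,dx
  = \lambda \int_{\Omega} \varepsilon u \cdot v \,dx
```

for every $v \in X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$, so that $(\lambda,u)$ solves \eqref{prob:eigen2weak} with $\sigma = \lambda$ for every $\tau>0$, in accordance with Theorem \ref{thm:codau}.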
Note that the results presented below remain valid independently of the value of $\tau >0$. We find it convenient to introduce the space \begin{equation} \label{dfn:H1N} H^1_{\rm \scriptscriptstyle N}(\Omega) : = \set{u \in H^1(\Omega)^3 : \nu \times u =0 \text{ on } \partial \Omega}, \end{equation} endowed with the usual $H^1$-norm. Note that in view of formula \eqref{div:matrixxvector} and of the Gaffney inequality \eqref{ineq:GF} (valid under our assumptions \eqref{Omega_def}) the spaces $X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ and $H^1_{\rm \scriptscriptstyle N}(\Omega)$ coincide as sets for every $\varepsilon \in \mathcal{E}$, and their respective norms are equivalent. Hence one can use the space $H^1_{\rm \scriptscriptstyle N}(\Omega)$ for the variational characterization of the eigenvalues: the benefit lies in the fact that in this way we do not have to deal with Hilbert spaces that may depend on the permittivity parameter $\varepsilon$, allowing us to compare Rayleigh quotients relative to different permittivities. In other words, the min-max characterization \eqref{minmax:formula} can be equivalently written as \begin{equation} \label{minmax:formula:H1} \sigma_j[\varepsilon] = \min_{\substack{V_j \subset H^1_{\rm \scriptscriptstyle N}(\Omega),\\ \operatorname{dim}V_j = j}} \max_{\substack{u \in V_j,\\u \neq 0}} \frac{\int_{\Omega} \abs{\operatorname{curl}u}^2 dx + \int_\Omega \abs{\operatorname{div}(\varepsilon u)}^2 dx}{\int_{\Omega} \varepsilon u \cdot u \, dx}, \end{equation} which is the one we will exploit in order to prove our continuity result. Before doing so, we first prove a locally uniform Gaffney inequality, which can be obtained by exploiting the standard inequality \eqref{ineq:GF} for a fixed permittivity. \begin{proposition}\label{prop:ugi} Let $\Omega$ be as in \eqref{Omega_def}. Let $\tilde{\varepsilon} \in \mathcal{E}$.
Then there exist two constants $\delta, C_\mathcal{G}>0$ such that \begin{equation} \label{uniform:permittivity:gaffney:ineq} \|u\|_{H^1(\Omega)^3}^2 \leq C_\mathcal{G} \left( \langle \varepsilon u, u\rangle_{L^2(\Omega)^3} + \|\operatorname{curl}u\|^2_{L^2(\Omega)^3} + \|\operatorname{div}(\varepsilon u)\|^2_{L^2(\Omega)} \right) \end{equation} for all $u \in H^1_{\rm \scriptscriptstyle N}(\Omega)$ and for all $\varepsilon \in \mathcal{E}$ with $\|\varepsilon - \tilde{\varepsilon}\|_{W^{1,\infty}(\Omega)} < \delta$. \end{proposition} \begin{proof} First of all, we observe that if $\varepsilon' \in \mathcal{E}$ then by formula \eqref{div:matrixxvector} we have that \begin{equation}\label{PK} \operatorname{div}(\varepsilon' u) = \operatorname{tr}(\varepsilon' Du) + (\operatorname{div}\varepsilon') \cdot u. \end{equation} Moreover, if $M$ is a $3 \times 3$ matrix then the following inequalities \begin{equation} \label{stima:trace} |\operatorname{tr}(\varepsilon'(x) M)| \leq 9 \norm{\varepsilon'}_{L^\infty(\Omega)} |M|, \end{equation} \begin{equation} \label{stima:diveps} |\operatorname{div}\varepsilon'(x)| \leq 3 \sqrt{3} \norm{\varepsilon'}_{W^{1,\infty}(\Omega)} \end{equation} hold for a.e. $x \in \Omega$, where $|M|$ denotes the matrix norm $|M| := \max_{i,j}|M_{ij}|$. Fix $u \in H^1_{\rm \scriptscriptstyle N}(\Omega)$ and $\varepsilon \in \mathcal{E}$. From \eqref{ineq:GF} we have that the Gaffney inequality holds for $\tilde{\varepsilon}$, namely there exists a constant $C_{\tilde{\varepsilon}} >0$ independent of $u$ such that \begin{equation} \label{gaffney:ineq:tilde:eps} \|u\|_{H^1(\Omega)^3}^2 \leq C_{\tilde{\varepsilon}} \left( \langle \tilde\varepsilon u, u\rangle_{L^2(\Omega)^3} + \|\operatorname{curl}u\|^2_{L^2(\Omega)^3} + \|\operatorname{div}(\tilde\varepsilon u)\|^2_{L^2(\Omega)} \right).
\end{equation} Moreover, \begin{align*} \abs{\operatorname{tr}(\tilde\varepsilon Du)^2 - \operatorname{tr}(\varepsilon Du)^2} &= \abs{\operatorname{tr}\left( (\tilde\varepsilon + \varepsilon) Du\right) \operatorname{tr}\left( (\tilde\varepsilon - \varepsilon) Du \right)} \\ & \leq 9^2 \, \norm{\tilde\varepsilon +\varepsilon}_{L^\infty(\Omega)} \norm{\tilde\varepsilon -\varepsilon}_{L^\infty(\Omega)} \, |Du|^2, \end{align*} and \begin{align*} \abs{\left( \operatorname{div}\tilde\varepsilon \cdot u\right)^2 - \left( \operatorname{div}\varepsilon \cdot u\right)^2}& = \abs{\left( \operatorname{div}(\tilde\varepsilon- \varepsilon) \cdot u \right) \left( \operatorname{div} (\tilde\varepsilon + \varepsilon) \cdot u \right)} \\ &\leq (3\sqrt{3})^2 \, \norm{\tilde\varepsilon +\varepsilon}_{W^{1,\infty}(\Omega)} \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \, |u|^2 \end{align*} and \begin{equation*} \begin{split} &2 \abs{\operatorname{tr}(\tilde\varepsilon Du) \ \operatorname{div}\tilde\varepsilon \cdot u - \operatorname{tr}(\varepsilon Du) \ \operatorname{div}\varepsilon \cdot u} \\ & \quad \leq 2 \abs{\operatorname{tr}(\tilde\varepsilon Du) \ \operatorname{div}(\tilde\varepsilon- \varepsilon) \cdot u} + 2 \abs{\operatorname{tr}\left((\tilde\varepsilon-\varepsilon) Du \right) \ \operatorname{div}\varepsilon \cdot u} \\ & \quad \leq 2\cdot 9\cdot3\sqrt{3} \, \left(\norm{\tilde\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \right) \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} 2 \, |u|\,|Du| \\ & \quad \leq 54\sqrt{3} \, \left(\norm{\tilde\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \right) \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} (\,|u|^2 + |Du|^2).
\end{split} \end{equation*} Thus \begin{equation} \label{diff:div:estimate} \begin{split} &\abs{\,\norm{\operatorname{div}(\tilde{\varepsilon}u)}_{L^2(\Omega)}^2 - \norm{\operatorname{div}(\varepsilon u)}_{L^2(\Omega)}^2} \\ & \qquad \leq 54\sqrt{3} \, \left(\norm{\tilde\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon +\varepsilon}_{W^{1,\infty}(\Omega)} \right)\\ & \qquad\qquad \times \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \left (\int_\Omega |u|^2 + \int_\Omega |Du|^2 \right). \end{split} \end{equation} Moreover, we have that \begin{equation} \label{stima:diff:eps:tildeeps:L2} \abs{\langle \tilde\varepsilon u, u\rangle_{L^2(\Omega)^3} - \langle \varepsilon u, u\rangle_{L^2(\Omega)^3}} \leq 3 \norm{\tilde{\varepsilon} - \varepsilon}_{L^\infty(\Omega)} \norm{u}^2_{L^2(\Omega)^3}. \end{equation} Therefore, making use of \eqref{diff:div:estimate} and \eqref{stima:diff:eps:tildeeps:L2} in \eqref{gaffney:ineq:tilde:eps} we obtain that \begin{equation*} \begin{split} \norm{u}_{H^1(\Omega)^3}^2 &\leq C_{\tilde\varepsilon} \left( \langle \varepsilon u, u\rangle_{L^2(\Omega)^3} + \norm{\operatorname{curl}u}^2_{L^2(\Omega)^3} + \norm{\operatorname{div}(\varepsilon u)}^2_{L^2(\Omega)} \right) \\ & \quad + C_{\tilde\varepsilon} \left( \, 54\sqrt{3} \, \left(\norm{\tilde\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon +\varepsilon}_{W^{1,\infty}(\Omega)} \right) +3\right)\norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \norm{u}^2_{H^1(\Omega)^3}\\ &\leq C_{\tilde\varepsilon} \left( \langle \varepsilon u, u\rangle_{L^2(\Omega)^3} + \norm{\operatorname{curl}u}^2_{L^2(\Omega)^3} + \norm{\operatorname{div}(\varepsilon u)}^2_{L^2(\Omega)} \right) \\ & \quad + C_{\tilde\varepsilon} \left( \, 3\cdot 54\sqrt{3} \, \left(\norm{\tilde\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \right)+3\right) \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \norm{u}^2_{H^1(\Omega)^3}. \end{split} \end{equation*} Hence, taking $\delta>0$ small enough so that \begin{equation*} 1-C_{\tilde\varepsilon} \left( \, 162\sqrt{3} \, \left(\norm{\tilde\varepsilon}_{W^{1,\infty}(\Omega)} + \norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} \right)+3\right)\norm{\tilde\varepsilon -\varepsilon}_{W^{1,\infty}(\Omega)} >0 \end{equation*} for all $\varepsilon \in \mathcal{E}$ with $\|\tilde\varepsilon -\varepsilon\|_{W^{1,\infty}(\Omega)} <\delta$, we get that formula \eqref{uniform:permittivity:gaffney:ineq} holds with \begin{equation*} C_\mathcal{G} := \frac{ C_{\tilde\varepsilon} }{ 1-\delta \,C_{\tilde\varepsilon}\left( \, 162\sqrt{3} \, \left(\norm{\tilde\varepsilon}_{W^{1,\infty}(\Omega)} + \delta \right)+3\right)}. \end{equation*} \end{proof} We are now ready to show that the eigenvalues $\sigma_j[\varepsilon]$ of problem \eqref{prob:eigen2weak} are locally Lipschitz continuous in $\varepsilon$. \begin{theorem} \label{thm:loclip} Let $\Omega$ be as in \eqref{Omega_def}. Let $j\in\mathbb{N}$ and $\varepsilon_1 \in \mathcal{E}$. Then there exist two constants $\delta, \tilde{C}>0$ such that \begin{equation} \label{eigenarestronglycont} \abs{\sigma_j[\varepsilon_1] - \sigma_j[\varepsilon_2]} \leq \tilde{C} \norm{\varepsilon_1 -\varepsilon_2}_{W^{1,\infty}(\Omega)} \end{equation} for all $\varepsilon_2 \in \mathcal{E}$ such that $\norm{\varepsilon_1 -\varepsilon_2}_{W^{1,\infty}(\Omega)} < \delta$. \end{theorem} \begin{proof} For the sake of simplicity in this proof, given $\varepsilon \in \mathcal{E}$ and $u \in H^1_{\rm \scriptscriptstyle N}(\Omega)$, we set \[ \mathcal{R}[u]:= \int_\Omega |\operatorname{curl}u|^2 dx , \qquad \mathcal{D}_\varepsilon[u]:= \int_\Omega |\operatorname{div}(\varepsilon u)|^2 dx .
\] Let $\delta>0$ be as in Proposition \ref{prop:ugi} with $\tilde \varepsilon = \varepsilon_1$. Let $\varepsilon_2 \in \mathcal{E}$ be such that $\|\varepsilon_1-\varepsilon_2\|_{W^{1,\infty}(\Omega)}<\delta$ and recall that $c_{\varepsilon_1}$, $c_{\varepsilon_2}$ denote the constants associated with the coercivity of $\varepsilon_1, \varepsilon_2$ respectively (see \eqref{def:cc}). Fix $u \in H^1_{\rm \scriptscriptstyle N}(\Omega)$. Then \begin{align} \label{eq:rd1rd2} &\abs{\frac{\mathcal{R}[u] + \mathcal{D}_{\varepsilon_1}[u]}{\int_\Omega \varepsilon_1 u \cdot u\,dx} - \frac{\mathcal{R}[u] + \mathcal{D}_{\varepsilon_2}[u]}{\int_\Omega \varepsilon_2 u \cdot u\,dx}} \\ \nonumber & \quad \leq \frac{\mathcal{R}[u] \abs{\int_\Omega (\varepsilon_2- \varepsilon_1) u \cdot u\,dx} + \abs{\mathcal{D}_{\varepsilon_1}[u] \int_\Omega \varepsilon_2 \, u \cdot u\,dx - \mathcal{D}_{\varepsilon_2}[u] \int_\Omega \varepsilon_1 \, u \cdot u\,dx}}{(\int_\Omega \varepsilon_1 u \cdot u\,dx) (\int_\Omega \varepsilon_2 u \cdot u\,dx)} \\ \nonumber & \quad \leq \frac{3 \, \|\varepsilon_2- \varepsilon_1\|_{L^\infty(\Omega)} \mathcal{R}[u] \int_\Omega \abs{u}^2\,dx}{(\int_\Omega \varepsilon_1 u \cdot u\,dx) (\int_\Omega \varepsilon_2 u \cdot u\,dx)} \\ \nonumber & \quad \quad + \frac{\Big|\mathcal{D}_{\varepsilon_1}[u] \int_\Omega \varepsilon_2 \, u \cdot u\,dx - \mathcal{D}_{\varepsilon_1}[u] \int_\Omega \varepsilon_1 \, u \cdot u\,dx \Big|}{(\int_\Omega \varepsilon_1 u \cdot u\,dx) (\int_\Omega \varepsilon_2 u \cdot u\,dx)}\\ \nonumber &\quad\quad+\frac{\Big|\mathcal{D}_{\varepsilon_1}[u] \int_\Omega \varepsilon_1 \, u \cdot u\,dx - \mathcal{D}_{\varepsilon_2}[u] \int_\Omega \varepsilon_1 \, u \cdot u\,dx\Big|}{(\int_\Omega \varepsilon_1 u \cdot u\,dx) (\int_\Omega \varepsilon_2 u \cdot u\,dx)} \\ \nonumber & \quad \leq \frac{3 \, \|\varepsilon_1- \varepsilon_2\|_{W^{1,\infty}(\Omega)}}{c_{\varepsilon_2}} \frac{\mathcal{R}[u] + \mathcal{D}_{\varepsilon_1}[u]}{\int_\Omega \varepsilon_1 u \cdot u\,dx} + \frac{\abs{\mathcal{D}_{\varepsilon_1}[u] - \mathcal{D}_{\varepsilon_2}[u]}}{\int_\Omega \varepsilon_2 u \cdot u\,dx}. \end{align} We now focus on the second term in the right hand side of the above inequality. By the same reasoning used to prove inequality \eqref{diff:div:estimate} we deduce that there exists a constant $C>0$ not depending on $\varepsilon_1$, $\varepsilon_2$ and $u$ such that \begin{equation} \label{diff:div:estimate:formula2} \begin{split} &\abs{\mathcal{D}_{\varepsilon_1}[u] - \mathcal{D}_{\varepsilon_2}[u]} \leq C \, \max_{i=1,2}\set{\norm{\varepsilon_i}_{W^{1,\infty}(\Omega)}} \norm{\varepsilon_1 -\varepsilon_2}_{W^{1,\infty}(\Omega)} \left (\int_\Omega |u|^2\,dx + \int_\Omega |Du|^2 \,dx\right). \end{split} \end{equation} Moreover, thanks to the locally uniform Gaffney inequality \eqref{uniform:permittivity:gaffney:ineq} there exists a constant $C_\mathcal{G}>0$ such that for $i=1,2$ \begin{equation*} \int_\Omega |Du|^2\,dx \leq C_\mathcal{G} \int_\Omega \left( \varepsilon_i u \cdot u + |\operatorname{curl}u|^2 + |\operatorname{div}(\varepsilon_i u)|^2 \right)\,dx. \end{equation*} Using the above inequality with $i=2$ we get \begin{equation*} \frac{\int_\Omega |Du|^2\,dx}{\int_\Omega \varepsilon_2 u \cdot u\,dx} \leq C_\mathcal{G} \left( 1 + \frac{\mathcal{R}[u] + \mathcal{D}_{\varepsilon_2}[u]}{\int_\Omega \varepsilon_2 u \cdot u\,dx} \right), \end{equation*} which, applied to \eqref{diff:div:estimate:formula2}, yields \begin{align}\label{eq:d1d2} \frac{\abs{\mathcal{D}_{\varepsilon_1}[u] - \mathcal{D}_{\varepsilon_2}[u]}}{\int_\Omega \varepsilon_2 u \cdot u\,dx} \leq C \, \max_{i=1,2} &\,\set{\|\varepsilon_i\|_{W^{1,\infty}(\Omega)}} \|\varepsilon_1 -\varepsilon_2\|_{W^{1,\infty}(\Omega)} \\ \nonumber &\times \left( \frac{1}{c_{\varepsilon_2}} + C_\mathcal{G} \left( 1 + \frac{\mathcal{R}[u] + \mathcal{D}_{\varepsilon_2}[u]}{\int_\Omega \varepsilon_2 u \cdot u\,dx} \right) \right).
\end{align} Thus it follows from \eqref{eq:rd1rd2} and \eqref{eq:d1d2} that \begin{equation} \label{carabinieri:autov:gigi} \begin{split} &\frac{\mathcal{R}[u] + \mathcal{D}_{\varepsilon_1}[u]}{\int_\Omega \varepsilon_1 u \cdot u\,dx} \left( 1 - 3\, \frac{\norm{\varepsilon_2- \varepsilon_1}_{W^{1,\infty}(\Omega)}}{c_{\varepsilon_2}} \right) \\ & \quad \leq \frac{\mathcal{R}[u] + \mathcal{D}_{\varepsilon_2}[u]}{\int_\Omega \varepsilon_2 u \cdot u\,dx} \left( 1 + C_\mathcal{G} \, C \max_{i=1,2} \set{\norm{\varepsilon_i}_{W^{1,\infty}(\Omega)}} \norm{\varepsilon_1 -\varepsilon_2}_{W^{1,\infty}(\Omega)} \right) \\ & \quad \quad + C \max_{i=1,2} \set{\norm{\varepsilon_i}_{W^{1,\infty}(\Omega)}} \norm{\varepsilon_1 -\varepsilon_2}_{W^{1,\infty}(\Omega)} \left( \frac{1}{c_{\varepsilon_2}} + C_\mathcal{G} \right). \end{split} \end{equation} Possibly taking a smaller $\delta>0$, and taking the appropriate suprema and infima in \eqref{carabinieri:autov:gigi}, the min-max formula \eqref{minmax:formula:H1} yields \begin{equation*} \begin{split} \sigma_j[\varepsilon_1] - \sigma_j[\varepsilon_2] &\leq \Bigg( \frac{3}{c_{\varepsilon_2}} \sigma_j[\varepsilon_1]+ C_\mathcal{G} \, C \max_{i=1,2} \set{\norm{\varepsilon_i}_{W^{1,\infty}(\Omega)}} \sigma_j[\varepsilon_2] \\ & \quad + C \max_{i=1,2} \set{\norm{\varepsilon_i}_{W^{1,\infty}(\Omega)}}\left( \frac{1}{c_{\varepsilon_2}} + C_\mathcal{G} \right) \Bigg) \|\varepsilon_1 -\varepsilon_2\|_{W^{1,\infty}(\Omega)}.
\end{split} \end{equation*} Exchanging the roles of $\varepsilon_1$ and $\varepsilon_2$, we get the inequality \eqref{eigenarestronglycont}, but with a constant possibly depending also on $\varepsilon_2$, namely: \begin{align}\label{lipconst} \widehat C(\varepsilon_2):= & \,\,3\max \set{\frac{\sigma_j[\varepsilon_1]}{c_{\varepsilon_2}}, \frac{\sigma_j[\varepsilon_2]}{c_{\varepsilon_1}}} \\ \nonumber &+ C\max_{i=1,2} \set{\norm{\varepsilon_i}_{W^{1,\infty}(\Omega)}} \bigg( C_\mathcal{G}\max_{i=1,2} \set{\sigma_j[\varepsilon_i]} + \max_{i=1,2} \set{\frac{1}{c_{\varepsilon_i}}} + C_\mathcal{G} \bigg) . \end{align} In order to finish the proof, it only remains to show that this constant can be bounded uniformly in $\varepsilon_2$. Up to taking a smaller $\delta$, the constant $c_{\varepsilon_2}$ is uniformly bounded away from zero in $\varepsilon_2$: indeed, by \eqref{contcoerc} one has that \[ c_{\varepsilon_2}\geq c_{\varepsilon_1} -3\delta. \] Moreover, $\sigma_j[\varepsilon_2]$ is also locally uniformly bounded in $\varepsilon_2$. Indeed, from \eqref{PK}, \eqref{stima:trace} and \eqref{stima:diveps} it is not difficult to see that there exists a constant $C'>0$ not depending on $\varepsilon_2$ such that for all $u \in H^1_{\rm \scriptscriptstyle N}(\Omega)$ one has \begin{equation*} \int_\Omega \abs{\operatorname{div}(\varepsilon_2 u)}^2\,dx \leq C' \norm{\varepsilon_2}_{W^{1,\infty}(\Omega)}^2 \int_\Omega \left( |u|^2 + |Du|^2 \right)\,dx. \end{equation*} Then, applying the standard Gaffney inequality (with unitary permittivity) we get that for all $u \in H^1_{\rm \scriptscriptstyle N}(\Omega)$: \begin{equation*} \int_\Omega \abs{\operatorname{div}(\varepsilon_2 u)}^2 \,dx\leq C' \norm{\varepsilon_2}_{W^{1,\infty}(\Omega)}^2 \int_\Omega \left( |u|^2 + |\operatorname{curl}u|^2 + |\operatorname{div}u|^2 \right)\,dx.
\end{equation*} Hence, using the min-max formula \eqref{minmax:formula:H1} for $\sigma_j[\varepsilon_2]$ we have that \begin{equation*} \begin{split} \sigma_j [\varepsilon_2] &= \min_{\substack{V_j \subset H^1_{\rm \scriptscriptstyle N}(\Omega),\\ \operatorname{dim}V_j = j}} \max_{\substack{u \in V_j,\\u \neq 0}} \frac{\int_\Omega |\operatorname{curl}u|^2\,dx + \int_\Omega |\operatorname{div}(\varepsilon_2 u)|^2\,dx}{\int_\Omega \varepsilon_2 \, u \cdot u\,dx} \\ & \leq \, \frac{C'\norm{\varepsilon_2}^2_{W^{1,\infty}(\Omega)}+1}{c_{\varepsilon_2}} \min_{\substack{V_j \subset H^1_{\rm \scriptscriptstyle N}(\Omega),\\ \operatorname{dim}V_j = j}} \max_{\substack{u \in V_j,\\u \neq 0}} \left( \frac{\int_\Omega |\operatorname{curl}u|^2\,dx + \int_\Omega |\operatorname{div}u|^2\,dx}{\int_\Omega |u|^2\,dx} +1 \right) \\ &= \, \frac{C'\norm{\varepsilon_2}^2_{W^{1,\infty}(\Omega)}+1}{c_{\varepsilon_2}} (\sigma_j[I_3] +1) \\ &\leq \, \frac{C'\left(\norm{\varepsilon_1}_{W^{1,\infty}(\Omega)}+\delta\right)^2+1}{c_{\varepsilon_1}-3\delta} (\sigma_j[I_3] +1), \end{split} \end{equation*} where $\sigma_j[I_3]$ is the $j$-th eigenvalue of problem \eqref{prob:eigen2weak} set with unitary permittivity. Accordingly, the constant $\widehat C(\varepsilon_2)$ defined in \eqref{lipconst} is bounded above by a constant $\tilde C$ independent of $\varepsilon_2$ for all $\varepsilon_2 \in \mathcal{E}$ such that $\|\varepsilon_1-\varepsilon_2\|_{W^{1,\infty}(\Omega)}<\delta$. Thus the inequality \eqref{eigenarestronglycont} is proved. \end{proof} \section{Analyticity and the derivative in $\varepsilon$}\label{sec:an} In the previous section we showed that the eigenvalues $\sigma_j[\varepsilon]$ of the modified problem \eqref{prob:eigen2weak} (and in particular the Maxwell eigenvalues $\lambda_j[\varepsilon]$) are locally Lipschitz continuous in $\varepsilon \in \mathcal{E}$. In this section we are interested in proving higher regularity properties.
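Before proceeding, we recall a standard finite-dimensional example (included only as an illustration, not taken from the Maxwell setting) of why multiple eigenvalues need not depend smoothly on a parameter, while their symmetric functions do: for the symmetric family

```latex
% A 2x2 symmetric family: the ordered eigenvalues are Lipschitz but not
% differentiable at t = 0, while their symmetric functions are analytic.
A(t) := \begin{pmatrix} 1+t & 0 \\ 0 & 1-t \end{pmatrix},
\qquad
\lambda_{\min}(t) = 1-\abs{t}, \quad \lambda_{\max}(t) = 1+\abs{t},
```

the two ordered eigenvalue branches cross at $t=0$ and each has a corner there, whereas $\lambda_{\min}(t)+\lambda_{\max}(t) = 2$ and $\lambda_{\min}(t)\,\lambda_{\max}(t) = 1-t^2$ depend real-analytically on $t$.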
More precisely, we show that the eigenvalues depend analytically upon $\varepsilon$, and provide an explicit formula for their $\varepsilon$-derivative. As already mentioned in the introduction, if we consider a multiple eigenvalue, a perturbation of the permittivity can in principle split the eigenvalue into several eigenvalues of lower multiplicity, and thus the corresponding branches can have a corner at the splitting point; in this case the eigenvalues are not even differentiable. Our strategy to bypass this problem is to consider the symmetric functions of multiple eigenvalues. This point of view was first introduced by Lamberti and Lanza de Cristoforis in \cite{LaLa04} and later successfully adopted in many other works (see, e.g., \cite{BuLa13, BuLa15, LaZa20, LaLuMu21, LaMuTa21}). Recall that \[ 0< \sigma_1[\varepsilon] \leq \sigma_2[\varepsilon] \leq \cdots \leq \sigma_n[\varepsilon] \leq \cdots \nearrow +\infty \] are the eigenvalues of problem \eqref{prob:eigen2weak}, while \[ 0< \lambda_1[\varepsilon] \leq \lambda_2[\varepsilon] \leq \cdots \leq \lambda_n[\varepsilon] \leq \cdots \nearrow +\infty \] is the subsequence of Maxwell eigenvalues of problem \eqref{prob:eigen2weak} (see Definition \ref{defi:maxeig}). Also recall that, by Lemma \ref{lem:Te}, the numbers $\{\sigma_j[\varepsilon]\}_{j \in \mathbb{N}}$ coincide with the reciprocals, minus one, of the eigenvalues of the operator $S_\varepsilon$ defined in \eqref{operator:dfn}. In order to obtain an explicit formula for the derivatives of the Maxwell eigenvalues with respect to the permittivity $\varepsilon$, we need the following technical lemma. \begin{lemma}\label{lem:der} Let $\Omega$ be as in \eqref{Omega_def}.
Let $\tilde{\varepsilon} \in \mathcal{E}$ and $\tilde{u}, \tilde{v} \in X_{\rm \scriptscriptstyle N}^{\tilde \varepsilon}(\mathrm{div}\,\tilde\varepsilon 0,\Omega)$ be two Maxwell eigenvectors associated with a Maxwell eigenvalue $\tilde{\lambda}$ with permittivity $\tilde \varepsilon$. Then \begin{equation} \label{der:oper} \langle d\rvert_{\varepsilon = \tilde{\varepsilon}} S_\varepsilon [\eta][\tilde{u}], \tilde{v} \rangle_{\tilde{ \varepsilon}} = \tilde{\lambda}(\tilde{\lambda}+1)^{-2} \int_\Omega \eta \tilde{u} \cdot \tilde{v}\, dx \end{equation} for all $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$. \end{lemma} \begin{proof} Under our assumptions on $\Omega$, the space $X_{\rm \scriptscriptstyle N}^\varepsilon(\Omega)$ coincides with the space $H^1_{\rm \scriptscriptstyle N}(\Omega)$ introduced in \eqref{dfn:H1N}, and their norms are equivalent. Then, it is easily seen that the compact self-adjoint operator $S_\varepsilon$ in $L^2(\Omega)$ is obtained by compositions and inversions of real-analytic maps in $\varepsilon$ (such as linear and multilinear continuous maps). As a consequence, $S_\varepsilon$ depends real analytically upon $\varepsilon$. Now let $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$.
Since $J_{\tilde \varepsilon}[\tilde{u}] = (\tilde{\lambda} +1)^{-1} T_{\tilde{\varepsilon}}[\tilde{u}]$, $J_\varepsilon[\tilde{v}] = (\tilde{\lambda} +1)^{-1} T_{\tilde{\varepsilon}}[\tilde{v}]$, and $S_{\tilde{\varepsilon}}$ is symmetric, we have that \begin{equation} \label{diff:JeB} \begin{split} &\langle d\rvert_{\varepsilon = \tilde{\varepsilon}} S_\varepsilon [\eta][\tilde{u}], \tilde{v}\rangle_{\tilde \varepsilon} \\ &\quad = \langle \iota_\varepsilon \circ T_{\tilde{\varepsilon}}^{-1} \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} J_\varepsilon [\eta] [\tilde{u}], \tilde{v} \rangle_{\tilde \varepsilon} + \langle \iota_\varepsilon \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} T_{\varepsilon}^{-1} [\eta] \circ J_{\tilde \varepsilon} [\tilde{u}], \tilde{v}\rangle_{\tilde \varepsilon}\\ &\quad = J_{\tilde{\varepsilon}} [\tilde{v}] \left[ \iota_\varepsilon \circ T_{\tilde{\varepsilon}}^{-1} \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} J_\varepsilon[\eta][\tilde{u}] \right] + J_{\tilde{\varepsilon}} [\tilde{v}]\left[ \iota_\varepsilon \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} T_\varepsilon^{-1} [\eta] \circ J_{\tilde{\varepsilon}} [\tilde{u}] \right]\\ &\quad = (\tilde{\lambda} +1)^{-1} T_{\tilde{\varepsilon}}[\tilde{v}] \left[ T_{\tilde{\varepsilon}}^{-1} \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} J_\varepsilon[\eta][\tilde{u}] - T_{\tilde{\varepsilon}}^{-1} \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} T_\varepsilon [\eta] \circ T_{\tilde{\varepsilon}}^{-1} \circ J_{\tilde{\varepsilon}}[\tilde{u}] \right]\\ &\quad = (\tilde{\lambda} +1)^{-1} T_{\tilde{\varepsilon}} \left[ T_{\tilde{\varepsilon}}^{-1} \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} J_\varepsilon[\eta][\tilde{u}] - T_{\tilde{\varepsilon}}^{-1} \circ d\rvert_{\varepsilon = \tilde{\varepsilon}} T_\varepsilon [\eta] \circ T_{\tilde{\varepsilon}}^{-1} \circ (\tilde{\lambda} +1)^{-1} T_{\tilde{\varepsilon}} [\tilde{u}] \right] [\tilde{v}]\\ &\quad = (\tilde{\lambda} 
+1)^{-1} \left( d\rvert_{\varepsilon = \tilde{\varepsilon}} J_\varepsilon[\eta][\tilde{u}][\tilde{v}] - (\tilde{\lambda} +1)^{-1} d\rvert_{\varepsilon = \tilde{\varepsilon}} T_\varepsilon [\eta][\tilde{u}][\tilde{v}] \right). \end{split} \end{equation} Moreover, by standard calculus, \begin{equation} \label{der:J} d\rvert_{\varepsilon = \tilde{\varepsilon}} J_\varepsilon[\eta][\tilde{u}][\tilde{v}] = \int_\Omega \eta \tilde{u} \cdot \tilde{v} \, dx \end{equation} and \begin{equation} \label{der:B} \begin{split} d\rvert_{\varepsilon = \tilde{\varepsilon}} T_\varepsilon [\eta][\tilde{u}][\tilde{v}] = \int_\Omega \eta \tilde{u} \cdot \tilde{v} \, dx + \int_\Omega \left( \operatorname{div}(\tilde{\varepsilon}\tilde{u}) \, \operatorname{div}(\eta \tilde{v}) + \operatorname{div}(\eta \tilde{u}) \, \operatorname{div}(\tilde{\varepsilon} \tilde{v}) \right) \, dx. \end{split} \end{equation} Since $\operatorname{div}(\tilde{\varepsilon}\tilde{u})=0=\operatorname{div}(\tilde{\varepsilon}\tilde{v})$ in $\Omega$, using \eqref{diff:JeB}, \eqref{der:J} and \eqref{der:B}, we get \eqref{der:oper}. \end{proof} Following \cite{LaLa04}, given a finite set of indices $F \subset \mathbb{N}$, we consider those permittivities $\varepsilon \in \mathcal{E}$ for which Maxwell eigenvalues with indices in $F$ do not coincide with Maxwell eigenvalues with indices outside $F$. We then introduce the following sets: \[ \mathcal{E}[F] := \set{\varepsilon \in \mathcal{E} : \lambda_j[\varepsilon] \neq \lambda_l[\varepsilon] \ \forall j \in F, l \in \mathbb{N}\setminus F} \] and \[ \Theta[F] := \set{\varepsilon \in \mathcal{E}[F] : \lambda_j[\varepsilon] \text{ have a common value } \lambda_F[\varepsilon] \text{ for all } j \in F}. \] Let $\varepsilon \in \mathcal{E}[F]$. 
The elementary symmetric function of degree $s \in \{1,\dots, \abs{F}\}$ of the Maxwell eigenvalues with indices in $F$ is defined by \[ \Lambda_{F,s}[\varepsilon] := \sum_{\substack{j_1,\dots,j_s \in F \\ j_1<\dots<j_s}} \lambda_{j_1}[\varepsilon] \cdots \lambda_{j_s}[\varepsilon]. \] In the following theorem we show that the maps $\varepsilon \mapsto \Lambda_{F,s}[\varepsilon]$ are real analytic on $\mathcal{E}[F]$ and we compute their Fr\'echet derivatives with respect to $\varepsilon$. \begin{theorem}\label{thm:diffeps} Let $\Omega$ be as in \eqref{Omega_def}. Let $F$ be a finite subset of $\mathbb{N}$ and $s \in \{1,\ldots,|F|\}$. Then $\mathcal{E}[F]$ is open in $W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$ and the elementary symmetric function $\Lambda_{F,s}$ depends real analytically upon $\varepsilon \in \mathcal{E}[F]$. Moreover, if $\{F_1,\ldots, F_n\}$ is a partition of $F$ and $\tilde \varepsilon \in \bigcap_{k=1}^n \Theta[F_k]$ is such that for each $k =1,\ldots, n$ the Maxwell eigenvalues $\lambda_j[\tilde \varepsilon]$ assume the common value $\lambda_{F_k}[\tilde \varepsilon]$ for all $j \in F_k$, then the differential of the function $\Lambda_{F,s}$ at the point $\tilde \varepsilon$ is given by the formula \begin{equation}\label{diff:Lambdas} d\rvert_{\varepsilon=\tilde{\varepsilon}} \Lambda_{F,s} [\eta] = -\sum_{k=1}^nc_k \sum_{l \in F_k} \int_\Omega \eta \tilde{E}^{(l)} \cdot \tilde{E}^{(l)} \, dx, \end{equation} for all $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$, where \begin{equation*} c_k := \sum_{\substack{0 \leq s_1 \leq |F_1| \\ \ldots \\ 0 \leq s_n \leq |F_n| \\s_1+\ldots +s_n =s}} \binom{\, \abs{F_k}-1}{s_k-1} (\lambda_{F_k}[\tilde{\varepsilon}])^{s_k} \prod_{\substack{j=1\\j \neq k}}^n \binom{\, \abs{F_j}}{s_j}(\lambda_{F_j}[\tilde \varepsilon])^{s_j}, \end{equation*} and for each $k=1, \dots, n$, $\{\tilde E^{(l)}\}_{l \in F_k}$ is an orthonormal basis in $L_{\tilde \varepsilon}^2(\Omega)$
of Maxwell eigenvectors for the eigenspace associated with $\lambda_{F_k}[\tilde \varepsilon]$. \end{theorem} \begin{proof} Let $\tilde \varepsilon \in \mathcal{E}$. As we have already pointed out, Maxwell eigenvalues are independent of the choice of the parameter $\tau>0$ in \eqref{prob:eigen2weak}. Thus, to avoid problems of different enumeration between Maxwell eigenvalues and the eigenvalues of $S_\varepsilon$, we can fix $\tau$ large enough so that all the Maxwell eigenvalues $\{\lambda_j[\tilde \varepsilon]\}_{j \in F}$ are strictly smaller than any other eigenvalue of \eqref{prob:eigen2weak} which is not a Maxwell eigenvalue (i.e., an eigenvalue belonging to the family ii) in Theorem \ref{thm:codau}). In this way $\sigma_j[\tilde \varepsilon] =\lambda_j[\tilde \varepsilon]$ for all $j \in F$. The eigenvalues $\mu_j$ of the operator $S_\varepsilon$ and the eigenvalues $\sigma_j$ of \eqref{prob:eigen2weak} satisfy $\mu_j= ({\sigma_j} +1)^{-1}$. Then the sets $\mathcal{E}[F]$ and $\set{\varepsilon \in \mathcal{E} : \mu_j[\varepsilon] \neq \mu_l[\varepsilon] \,\,\, \forall j \in F, \, l \in \mathbb{N} \setminus F}$ coincide locally around $\tilde \varepsilon$. By Lemma \ref{lem:Te}, $S_\varepsilon$ is a compact self-adjoint operator acting on $L^2_\varepsilon(\Omega)$. Furthermore, as already pointed out in the proof of Lemma \ref{lem:der}, $S_\varepsilon$ depends real analytically on $\varepsilon$. In the same way one shows that the scalar product $\langle \cdot, \cdot \rangle_\varepsilon$ on $L^2(\Omega)^3$ also depends real analytically on $\varepsilon$. Therefore, by the abstract result of Lamberti and Lanza de Cristoforis \cite[Thm.
2.30]{LaLa04}, we have that the set $\set{\varepsilon \in \mathcal{E} : \mu_j[\varepsilon] \neq \mu_l[\varepsilon] \,\,\, \forall j \in F, l \in \mathbb{N} \setminus F}$ is open in $W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$ and that the function \[ M_{F,s}[\varepsilon] := \sum_{\substack{j_1,\dots,j_s \in F \\ j_1<\dots<j_s}} \mu_{j_1}[\varepsilon] \cdots \mu_{j_s}[\varepsilon] \] depends real analytically on $\varepsilon \in \mathcal{E}[F]$. From this, to infer the real analyticity of the functions $\Lambda_{F,s}$ on $\mathcal{E}[F]$, one can simply observe that if we denote \[ \hat{\Lambda}_{F,s}[\varepsilon] := \sum_{\substack{j_1,\dots,j_s \in F \\ j_1<\dots<j_s}} (\lambda_{j_1}[\varepsilon] +1) \cdots (\lambda_{j_s}[\varepsilon]+1), \] then we have \[ \hat{\Lambda}_{F,s} [\varepsilon] = \frac{M_{F,\, |F|-s}[\varepsilon]}{M_{F,\, |F|}[\varepsilon]} \] and by elementary combinatorics \begin{equation} \label{Lambdahat:Lambda} \Lambda_{F,s}[\varepsilon] = \sum_{k=0}^s (-1)^{s-k} \binom{\, \abs{F}-k}{s-k} \hat{\Lambda}_{F,k} [\varepsilon], \end{equation} where we have set $\hat{\Lambda}_{F,0}=1$. Then we can deduce that locally around $\tilde \varepsilon$ the maps $\Lambda_{F,s}[\varepsilon]$ are real analytic, and accordingly the analyticity part of the statement follows since $\tilde \varepsilon$ is arbitrary. Next, we turn to prove formula \eqref{diff:Lambdas}. We start with the case $n=1$, that is $F_1 =F$ and $\tilde \varepsilon \in \Theta[F]$. Let $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$. By \cite[Thm. 2.30]{LaLa04} we get that \begin{equation*} d\rvert_{\varepsilon=\tilde{\varepsilon}} M_{F,s} [{\eta}] = \binom{\, \abs{F}-1}{s-1} (\lambda_F [\tilde{\varepsilon}]+1)^{1-s} \sum_{l \in F} \langle d\rvert_{\varepsilon=\tilde{\varepsilon}} \, S_{\varepsilon} [\eta][\tilde{E}^{(l)}], \tilde{E}^{(l)} \rangle_{\tilde{\varepsilon}}.
\end{equation*} Moreover, by using formula \eqref{der:oper} of Lemma \ref{lem:der}, we have that \begin{equation*} \begin{split} & d\rvert_{\varepsilon = \tilde{\varepsilon}} \hat{\Lambda}_{F,s} [\eta] \\ & =\left( d\rvert_{\varepsilon=\tilde{\varepsilon}} M_{F, \, \abs{F}-s} [\eta] M_{F, \, \abs{F}} \, [\tilde{\varepsilon}] - M_{F, \, \abs{F}-s} \, [\tilde{\varepsilon}] \, d\rvert_{\varepsilon=\tilde{\varepsilon}} M_{F, \, \abs{F}} \, [\eta] \right) (\lambda_F [\tilde{\varepsilon}]+1)^{2 \, \abs{F}} \\ & = \left( \binom{\, \abs{F}-1}{\, \abs{F}-s-1} (\lambda_F [\tilde{\varepsilon}] +1)^{s+1-2 \, \abs{F}}- \binom{\, \abs{F}}{s} \binom{\, \abs{F}-1}{\, \abs{F}-1} (\lambda_F [\tilde{\varepsilon}]+1)^{s+1 -2 \, \abs{F}} \right) \\ & \qquad \cdot (\lambda_F [\tilde{\varepsilon}]+1)^{2 \, \abs{F}} \sum_{l \in F} \langle d\rvert_{\varepsilon=\tilde{\varepsilon}} \, S_{\varepsilon} [\eta][\tilde{E}^{(l)}], \tilde{E}^{(l)} \rangle_{\tilde{\varepsilon}}\\ & = - \lambda_F [\tilde{\varepsilon}] (\lambda_F [\tilde{\varepsilon}]+1)^{s-1} \binom{\, \abs{F}-1}{s-1} \sum_{l \in F} \int_\Omega \eta \tilde{E}^{(l)} \cdot \tilde{E}^{(l)} \, dx. \end{split} \end{equation*} Finally, recalling \eqref{Lambdahat:Lambda}, we get \begin{equation*} \begin{split} &d\rvert_{\varepsilon = \tilde{\varepsilon}} \Lambda_{F,s} [\eta]\\ & = - \lambda_F [\tilde{\varepsilon}] \sum_{k=1}^s (-1)^{s-k} (\lambda_F [\tilde{\varepsilon}]+1)^{k-1} \binom{\, \abs{F} -k}{s-k} \binom{\, \abs{F}-1}{k-1} \sum_{l \in F} \int_\Omega \eta \tilde{E}^{(l)} \cdot \tilde{E}^{(l)} \, dx\\ & = - \binom{\, \abs{F}-1}{s-1} \lambda_F [\tilde{\varepsilon}] \sum_{k=0}^{s-1} \binom{s-1}{k} (\lambda_F [\tilde{\varepsilon}]+1)^{k} (-1)^{s-k-1} \sum_{l \in F} \int_\Omega \eta \tilde{E}^{(l)} \cdot \tilde{E}^{(l)} \, dx\\ &= - \binom{\, \abs{F}-1}{s-1} (\lambda_F [\tilde{\varepsilon}])^s \sum_{l \in F} \int_\Omega \eta \tilde{E}^{(l)} \cdot \tilde{E}^{(l)} \, dx. \end{split} \end{equation*} Next we consider the case $n \geq 2$. 
By means of a continuity argument, one can easily see that there exists an open neighborhood $\mathcal{W}$ of $\tilde \varepsilon$ in $\mathcal{E}[F]$ such that $\mathcal{W} \subseteq \bigcap_{k=1}^n\mathcal{E}[F_k]$. Thus \[ \Lambda_{F,s}[\varepsilon] = \sum_{\substack{0 \leq s_1 \leq |F_1| \\ \ldots \\ 0 \leq s_n \leq |F_n| \\s_1+\ldots +s_n =s}} \prod_{k=1}^n\Lambda_{F_k,s_k}[\varepsilon] \qquad \forall \varepsilon \in \mathcal{W}. \] Differentiating the above equality at the point $\tilde{\varepsilon}$ and applying formula \eqref{diff:Lambdas} with $n=1$ to each function $\Lambda_{F_k,s_k}$, one can see that formula \eqref{diff:Lambdas} holds true for any $n \in \mathbb{N}$. \end{proof} We conclude this section by studying the case of one-parameter families of permittivities. Using \Cref{lem:der} and classical analytic perturbation theory, we can recover a Rellich-Nagy-type theorem which allows us to describe all the eigenvalues splitting from a multiple eigenvalue of multiplicity $m$ by means of $m$ real-analytic functions. For classical results in analytic perturbation theory we refer to the seminal works of Rellich \cite{Re37} and Nagy \cite{Na48}. More up-to-date formulations can be found in Chow and Hale \cite[Theorem 5.2, p. 487]{ChHa82}, Kato \cite[Theorem 3.9, p. 393]{Ka95}, Lamberti and Lanza de Cristoforis \cite[Theorem 2.27]{LaLa04}. \begin{theorem} \label{thm:RN} Let $\Omega$ be as in \eqref{Omega_def}. Let $\tilde \varepsilon \in \mathcal{E}$ and let $\{\varepsilon_t\}_{t \in \mathbb{R}} \subseteq \mathcal{E}$ be a family depending real analytically on $t$ and such that $\varepsilon_0 = \tilde \varepsilon$. Let $\tilde \lambda$ be a Maxwell eigenvalue of multiplicity $m \in \mathbb{N}$ and $\tilde{E}^{(1)},\dots, \tilde{E}^{(m)}$ a corresponding orthonormal basis of Maxwell eigenvectors in $L^2_{\tilde \varepsilon}(\Omega)$ with $\varepsilon = \tilde \varepsilon$.
Let $\tilde \lambda = \lambda_n[\tilde \varepsilon] = \cdots = \lambda_{n+m-1}[\tilde \varepsilon]$ for some $n \in \mathbb{N}$. Then there exist an open interval $I \subseteq \mathbb{R}$ containing zero and $m$ real analytic functions $g_1,\ldots, g_m$ from $I$ to $\mathbb{R}$ such that \[ \{\lambda_n[\varepsilon_t] , \ldots , \lambda_{n+m-1}[ \varepsilon_t]\} = \{g_1(t),\ldots, g_m(t)\} \qquad \forall t \in I. \] Moreover, the derivatives $g'_1(0), \ldots, g'_m(0)$ of the functions $g_1,\ldots, g_m$ at zero coincide with the eigenvalues of the matrix \begin{equation*} \left( -\tilde \lambda \int_\Omega \dot \varepsilon_0 \, \tilde{E}^{(i)} \cdot \tilde{E}^{(j)} \, dx \right)_{i,j=1,\ldots,m}, \end{equation*} where $\dot \varepsilon_0$ denotes the derivative at $t=0$ of the map $t \mapsto \varepsilon_t$. \end{theorem} \begin{proof} Again, we can assume that $\tau$ is large enough so that $\tilde \lambda$ is strictly smaller than any eigenvalue of \eqref{prob:eigen2weak} which is not a Maxwell eigenvalue. By applying \cite[Thm. 2.27, Cor. 2.28]{LaLa04} to the operator $S_\varepsilon$ defined in \eqref{operator:dfn} we get that there exist an open interval $I$ of $\mathbb{R}$ containing zero and $m$ real analytic functions $h_1, \dots, h_m$ from $I$ to $\mathbb{R}$ such that $\{(\lambda_n[\varepsilon_t] +1)^{-1} , \ldots , (\lambda_{n+m-1}[ \varepsilon_t] + 1)^{-1} \} = \{h_1(t),\ldots, h_m(t)\}$ for all $t \in I$. Furthermore, the derivatives at zero of the functions $h_i$, $i=1, \dots, m$, coincide with the eigenvalues of the matrix \begin{equation*} \left( \langle d\rvert_{\varepsilon=\tilde{\varepsilon}} S_\varepsilon [\dot{\varepsilon}_0] \tilde{E}^{(i)}, \tilde{E}^{(j)} \rangle_{\tilde{\varepsilon}} \right)_{i,j=1,\dots,m}. \end{equation*} By continuity, possibly further restricting the interval $I$, the functions $h_i$ stay away from zero for all $t \in I$.
Then, setting $$g_i(t):= \frac{1}{h_i(t)} - 1$$ we have that $\{\lambda_n[\varepsilon_t] , \ldots , \lambda_{n+m-1}[ \varepsilon_t]\} = \{g_1(t),\ldots, g_m(t)\}$. Finally, noticing that \begin{equation*} \frac{d}{dt} g_i(t) \rvert_{t=0} = -(\tilde{\lambda}+1)^2 \frac{d}{dt} h_i(t) \rvert_{t=0}, \end{equation*} we deduce that the derivatives at zero of the functions $g_i$ coincide with the eigenvalues of the matrix \begin{equation*} -(\tilde{\lambda}+1)^2 \left( \langle d\rvert_{\varepsilon=\tilde{\varepsilon}} S_\varepsilon [\dot{\varepsilon}_0] \tilde{E}^{(i)}, \tilde{E}^{(j)} \rangle_{\tilde{\varepsilon}} \right)_{i,j=1,\dots,m} = \left( -\tilde{\lambda} \int_\Omega \dot{\varepsilon}_0 \, \tilde{E}^{(i)} \cdot \tilde{E}^{(j)}\, dx\right)_{i,j=1,\dots,m}, \end{equation*} where this last equality is justified by \Cref{lem:der}. \end{proof} \section{The spectrum is simple for generic permittivities}\label{sec:gen} The issue of understanding whether the eigenvalues of a parameter-dependent problem can be made all simple by an arbitrarily small perturbation of the parameter is a natural question and has already been investigated by several authors for different problems. For example, Albert \cite{Al75} proved the generic simplicity of the spectrum of an elliptic operator with respect to the perturbation of the zeroth order term. Moreover, the generic simplicity of the spectrum has also been considered with respect to domain perturbation in various papers. We mention, e.g., Micheletti \cite{Mi72, Mi73} for the Laplacian and for a general elliptic operator and Ortega and Zuazua \cite{OrZu01} and Chitour, Kateb and Long \cite{ChKaLo16} for the Stokes system in dimension two and three, respectively. Finally, we also mention the more recent paper by Dabrowski \cite{Da21} where the author analyzes the Laplacian with different boundary conditions and also considers singular perturbations of the domain.
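The mechanism behind the Rellich-Nagy theorem of the previous section can be tested in a finite-dimensional toy setting: for an analytic family $A(t)=A_0+tA_1$ of symmetric matrices with $A_0=\lambda I$ (so $\lambda$ has full multiplicity), the eigenvalue branches are $\lambda+t\mu_i$, with $\mu_i$ the eigenvalues of the perturbation. The Python sketch below (an illustrative analogue with the Euclidean inner product; the matrix $A_1$ and the value $\lambda$ are chosen arbitrarily) compares finite-difference branch derivatives with this prediction.

```python
import math

# Finite-dimensional analogue of the Rellich-Nagy theorem: take
# A(t) = A0 + t*A1 with A0 = lam*I (eigenvalue lam of multiplicity 2).
# The eigenvalue branches are g_i(t) = lam + t*mu_i, where mu_i are the
# eigenvalues of the perturbation A1, hence g_i'(0) = mu_i.

def sym_eigs_2x2(M):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]], sorted."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    mean = (a + c) / 2.0
    rad = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return [mean - rad, mean + rad]

lam = 2.0                                # multiple eigenvalue of A0 = lam*I
A1 = [[0.5, 0.3], [0.3, -0.1]]           # analytic perturbation direction
mu = sym_eigs_2x2(A1)                    # predicted branch derivatives

t = 1e-6
At = [[lam + t * A1[0][0], t * A1[0][1]],
      [t * A1[1][0], lam + t * A1[1][1]]]
branches = sym_eigs_2x2(At)

# Finite-difference derivatives of the two analytic branches at t = 0
# agree with the eigenvalues of the perturbation matrix A1.
num_der = [(branches[i] - lam) / t for i in range(2)]
assert all(abs(num_der[i] - mu[i]) < 1e-6 for i in range(2))
```

In the Maxwell setting the role of $A_1$ compressed to the eigenspace is played by the matrix $\big({-\tilde\lambda}\int_\Omega \dot\varepsilon_0\,\tilde E^{(i)}\cdot\tilde E^{(j)}\,dx\big)_{i,j}$.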
A first step, as we will show in the next proposition, is to prove that it is always possible to find a small perturbation of the permittivity that splits a non-zero Maxwell eigenvalue of multiplicity $m$ into $m$ simple eigenvalues. \begin{proposition} \label{first:step:genericity} Let $\Omega$ be as in \eqref{Omega_def}. Let $\tilde \varepsilon \in \mathcal{E}$, let $\tilde \lambda \neq 0$ be a Maxwell eigenvalue of multiplicity $m \in \mathbb{N}$, and let $\tilde{E}^{(1)},\ldots, \tilde{E}^{(m)}$ be a corresponding orthonormal basis of Maxwell eigenvectors in $L^2_{\tilde \varepsilon}(\Omega)$ with $\varepsilon = \tilde \varepsilon$. Let $\tilde \lambda = \lambda_n[\tilde \varepsilon] = \cdots = \lambda_{n+m-1}[\tilde \varepsilon]$ for some $n \in \mathbb{N}$. Define \[ \tilde \varepsilon_{t,\eta} := \tilde \varepsilon +t \eta \qquad \forall t \in \mathbb{R}, \] for all $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$, $\norm{\eta}_{ W^{1,\infty}(\Omega)}\leq 1$. Then for all $T>0$ there exist $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$ with $\norm{\eta}_{ W^{1,\infty}(\Omega)}\leq 1$, and $t\in \mathopen]0,T[$ such that $\tilde \varepsilon_{t,\eta} \in \mathcal{E}$ and the eigenvalues $\lambda_n[\tilde \varepsilon_{t,\eta}] , \ldots , \lambda_{n+m-1}[\tilde \varepsilon_{t,\eta}]$ are all simple. \end{proposition} \begin{proof} We will only prove that there exist $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$ with $\norm{\eta}_{W^{1,\infty}(\Omega)} \leq 1$ and $t>0$ as small as desired such that the eigenvalues $\lambda_n[\tilde \varepsilon_{t,\eta}] , \ldots , \lambda_{n+m-1}[\tilde \varepsilon_{t,\eta}]$ are not all equal. Then, repeating the same argument for the eigenvalues that still have multiplicity strictly greater than one, in a finite number of steps we are done.
Note that by the continuity of the eigenvalues with respect to permittivity variations and by choosing $t$ small enough, we can ensure that the eigenvalues splitting from a multiple eigenvalue do not overlap or switch position with other eigenvalues. Hence, suppose by contradiction that there exists $T >0$ such that for all $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$ with $\norm{\eta}_{W^{1,\infty}(\Omega)} \leq 1$ and for all $t \in \mathopen]0,T[$, all the eigenvalues $\lambda_n[\tilde \varepsilon_{t,\eta}] , \ldots , \lambda_{n+m-1}[\tilde \varepsilon_{t,\eta}]$ coincide. As a consequence, all the right derivatives at $t = 0$ of the branches coincide. Then, if we fix $\eta$ and use \Cref{thm:RN}, we get that all the eigenvalues of the matrix \begin{equation} \label{matrix:rellichnagy:dfnM} M:=\left( - \tilde \lambda \int_\Omega \eta \tilde{E}^{(i)} \cdot \tilde{E}^{(j)} \, dx \right)_{i,j=1,\ldots,m} \end{equation} coincide. Since the above matrix is a real symmetric matrix with only one eigenvalue, it is a scalar matrix. In other words, there exists $\mu[\eta] \in \mathbb{R}$ such that \begin{equation}\label{eq:M} M = \mu[\eta] \, I_m, \end{equation} where $I_m$ denotes the $(m \times m)$-identity matrix. For $h=1,2,3$ we set \[ \eta_h := \norm{\xi}_{W^{1,\infty}(\Omega)}^{-1} \xi \, e_{hh} \] with $0\neq \xi \in C_c^1(\Omega)$ arbitrary and $e_{hh}$ the $(3 \times 3)$-matrix with $(h,h)$-entry equal to $1$ and zeros elsewhere. Since $\tilde \lambda \neq 0$, by \eqref{matrix:rellichnagy:dfnM}, \eqref{eq:M} and using the above defined $\eta_h$ we can recover that for all $\xi \in C_c^1(\Omega)$ \[ \int_{\Omega}\xi \, E^{(i)}_h E^{(j)}_h \,dx=0 \qquad \forall i,j \in \{1,\ldots,m\}, i \neq j, \quad \forall h=1,2,3, \] and \[ \int_{\Omega}\xi \left( (E^{(i)}_h)^2- (E^{(j)}_h)^2 \right)\,dx= 0 \qquad \forall i,j \in \{1,\ldots,m\}, \quad \forall h=1,2,3. \] By the fundamental lemma of the calculus of variations we get that a.e.
in $\Omega$ \[ E^{(i)}_h E^{(j)}_h=0 \qquad \forall i,j \in \{1,\ldots,m\},\, i \neq j, \quad \forall h=1,2,3, \] and \[ (E^{(i)}_h)^2- (E^{(j)}_h)^2 =0 \qquad \forall i,j \in \{1,\ldots,m\}, \quad \forall h=1,2,3. \] The above relations clearly imply that $E^{(i)}=0$ for all $ i \in \{1,\ldots,m\}$, which is a contradiction since eigenfunctions are not identically zero. \end{proof} \begin{remark} The constraint $\norm{\eta}_{ W^{1,\infty}(\Omega)}\leq 1$ in the above proposition can be replaced by $\norm{\eta}_{ W^{1,\infty}(\Omega)}\leq \delta$ for any $\delta >0$. \end{remark} \begin{remark} The argument we have used to split a multiple eigenvalue into several eigenvalues of lower multiplicity uses that $\eta$ is a general symmetric matrix and not a scalar matrix. However, noticing how $\eta_h$ is defined, one can easily realize that such an argument still works if $\eta$ varies in the class of diagonal matrices. If instead we restrict ourselves to the class of scalar matrices, arguing in the same way we can only recover that \[ E^{(i)} \cdot E^{(j)}=0 \qquad \forall i,j \in \{1,\ldots,m\}, i \neq j, \] and \[ |E^{(i)}|^2- |E^{(j)}|^2 = 0 \qquad \forall i,j \in \{1,\ldots,m\}. \] This does not immediately lead to a contradiction. Thus, it would be interesting to investigate whether it is still possible to split the whole spectrum when the permittivities are scalar. \end{remark} We are now ready to show that the whole positive Maxwell spectrum is generically simple with respect to the permittivity. We note that our proof is inspired by the methods of Albert \cite{Al75}. \begin{theorem}\label{thm:gensim} Let $\Omega$ be as in \eqref{Omega_def}. Let $\tilde{\varepsilon} \in \mathcal{E}$ and let $\delta>0$ be small enough such that \[ \tilde \varepsilon + \eta \in \mathcal{E} \] for all $\eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$ with $\|\eta \|_{W^{1,\infty}(\Omega)} \leq \delta$.
Let \[ B_0 := \left\{ \eta \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega): \|\eta \|_{W^{1,\infty}(\Omega)} \leq \delta \right\} \] and \[ B_n := \left\{\eta \in B_0: \mbox{ the first $n$ positive Maxwell eigenvalues with $\varepsilon =\tilde \varepsilon +\eta$ are simple}\right\} \] for $n \in \mathbb{N}$. Then \[ B:= \bigcap_{n \in \mathbb{N}}B_n = \left\{ \eta \in B_0: \mbox{ all the positive Maxwell eigenvalues with $\varepsilon =\tilde \varepsilon +\eta$ are simple}\right\} \] is dense in $B_0$. \end{theorem} \begin{proof} The proof follows by applying Baire's lemma in the complete metric space $B_0$. In order to do this, we have to show that \begin{itemize} \item[i)] $B_n$ is open in $B_0$ for all $n \in \mathbb{N}$, \item[ii)] $B_{n+1}$ is dense in $B_n$ for all $n \in \mathbb{N}$. \end{itemize} Statement i) follows from the continuity of the eigenvalues with respect to the permittivity parameter (see Theorem \ref{thm:loclip}). Next we prove statement ii) by contradiction. Assume that $B_{n+1}$ is not dense in $B_n$ for some $n \in \mathbb{N}$. Then there exist $\eta \in B_n \setminus B_{n+1}$ and a neighborhood $U$ of $\eta$ in $B_0$ such that \[ U \subseteq B_n \setminus B_{n+1}. \] Since $\eta \in B_n\setminus B_{n+1}$, \begin{itemize} \item the first $n$ non-zero Maxwell eigenvalues with $\varepsilon=\tilde \varepsilon +\eta$ are simple, \item the $(n+1)$-th non-zero Maxwell eigenvalue with $\varepsilon=\tilde \varepsilon +\eta$ has multiplicity $k$ for some $k \in \mathbb{N}$, $k \geq 2$. \end{itemize} Moreover, we note that for all $\rho \in U \subseteq B_n \setminus B_{n+1}$ we have: \begin{itemize} \item the first $n$ non-zero Maxwell eigenvalues with $\varepsilon=\tilde \varepsilon +\rho$ are simple, \item the $(n+1)$-th non-zero Maxwell eigenvalue with $\varepsilon=\tilde \varepsilon +\rho$ is not simple.
\end{itemize} By \Cref{first:step:genericity} there exist $\hat\rho \in W^{1,\infty} \left(\Omega\right) \cap \mathrm{Sym}_3 (\Omega)$ with $\|\hat\rho\|_{W^{1,\infty}(\Omega) }\leq 1$ and $t>0$ arbitrarily small such that $\eta + t\hat \rho \in U$ and all the non-zero Maxwell eigenvalues with $\varepsilon=\tilde \varepsilon + \eta + t\hat \rho$ with indices from $(n+1)$ to $(n+k)$ are simple; therefore, in particular, $\eta + t\hat \rho \in B_{n+1}$. This is a contradiction since $U \subseteq B_n \setminus B_{n+1}$. \end{proof} \subsection*{Acknowledgment} The authors are members of the `Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni' (GNAMPA) of the `Istituto Nazionale di Alta Matematica' (INdAM) and acknowledge the support of the Project BIRD191739/19 `Sensitivity analysis of partial differential equations in the mathematical theory of electromagnetism' of the University of Padova. The second author was partially supported by `Fondazione Ing. Aldo Gini' during the preparation of this paper. The authors are deeply thankful to Prof. Pier Domenico Lamberti for many valuable comments during the preparation of the paper.
\section{Introduction} Future beyond-5G (B5G)/6G networks are expected to bring new improvements over the previous mobile networks (4G and 5G) and enable new paradigms. This technological evolution paves the way for endless services tailored to specific verticals; e.g., e-Health, Industry 4.0, Internet of Things (IoT), and automotive. More cells and antennas will be deployed in combination with advanced technologies, such as Virtual Radio Access Network (vRAN), which enable partial or full virtualization of the network through Network Function Virtualization (NFV) and Network Slicing. To serve all these needs, current and future networks heavily rely on the Edge Computing (EC) concept to reduce the distance between the end user and the computing resources. By placing Virtual Network Functions (VNF) in appropriate computing centers at the edge of the network, data is no longer stored or processed in a distant data center, which translates into a significantly lower latency \cite{corneo2021much} at the cost of additional complexity of network management. B5G/6G technologies are expected to provide an increase in capabilities, all in a multi-vendor environment, with particular focus on ultra-low latency data transmission. It is no wonder that B5G/6G networks are expected to become complex systems, making it difficult to deploy and manage services on them. The abundance of service options makes it even more difficult to determine an optimal way to deliver them and to predict demand so that the infrastructure can be deployed as efficiently as possible. Such a complex system requires advanced methods targeting an optimal use of its resources.
A major issue with managing an EC-enabled 6G network is the inter-relation between the computing and communication resources to provide services---to admit the user's request, the corresponding VNF should be placed in an EC center that has sufficient computing resources as well as sufficient communication resources to handle the offered traffic load. Therefore, dealing with the edge cloud and transport Key Performance Indicators (KPIs) simultaneously adds complexity for the Mobile Network Operator (MNO) when aiming to provide scalability. This is one of the issues that we study in this paper. Furthermore, in this paper we also analytically find the optimal performance bounds of such edge cloud scenarios that, unlike previous work, combine computing and communication characteristics to take decisions. The scenario is designed so that it is mathematically tractable. Consequently, the problem is formulated as a Markov Decision Process (MDP) and its optimal solution is found through the \textit{Policy Iteration} (PI) algorithm. After that, this paper focuses on the design of a practical algorithm that is able to perform as close as possible to the theoretical bound in terms of rejection ratio. It is practical in the sense that it does not make any restrictive assumptions about service statistics; hence, it can be applied to real networks. Given its low rejection ratio compared to other approaches, it makes the most of the operator's resources. In this direction, and given their potential, AI/ML approaches \cite{5GPPPaper}, and more specifically reinforcement learning, are shown to perform close to the optimum under multiple conditions if enough iterations are run. \begin{comment} As the recent white paper by 5G Infrastructure Public Private Partnership (5G-PPP) \cite{5GPPPaper} explains, Artificial Intelligence (AI) and Machine Learning (ML) algorithms are being incorporated as new forms of technology to address such challenges.
AI/ML enables Mobile Network Operators (MNOs) to pass on their benefits to 6G users and to improve the current model of centralized networks that overtax themselves to serve the increasing demand for services. AI/ML can efficiently manage the deployment of resources based on the required infrastructure conditions to enable the network to operate effectively. It is therefore promising to utilize AI/ML techniques for resource allocation in EC-enabled 6G networks. Despite the popularity of AI/ML techniques in different contexts, evaluating the quality of the solutions provided by these approaches remains a challenge. In contexts like games, AI/ML solutions are compared against human performance \cite{mnih2015human}, but this cannot be done in network resource allocation problems, as human performance may be far from the optimal solution. This is the second research gap we address in this paper, via mathematical formulation of the problem as a Markov Decision Process (MDP) that we solve optimally by the \textit{Policy Iteration} (PI) algorithm. \end{comment} \subsection{Related Work} Resource management between network edge nodes has been widely researched in various domains. The authors in \cite{8885745} propose a Q-Learning algorithm to choose optimal offloading among Fog network nodes; to evaluate its performance, it is compared with existing offloading methods such as Least-Queue, Nearest Node, or Random Node selection. Reference \cite{mecoff} formulates the resource allocation problem in EC as a minority game, and then compares the performance of different RL methods to make its agent solve the game. Akkarit et al. \cite{inproceedings} present the automatic adaptation of container instances under a Q-Learning algorithm, also implementing neural networks to maintain a certain service quality level without reducing the cloud computing resources. Yala et al.
\cite{8647858} propose an algorithm for VNF placement using an optimization-adapted Genetic Algorithm meta-heuristic that aims at minimizing latency and maximizing service availability. The above works differ from this paper in that the proposed algorithms are evaluated against solutions that do not necessarily yield the optimal solution considering both transport and cloud related parameters. In this paper, \textit{Q-Learning} is evaluated against a mathematical solution that allows measuring the performance gap between the two. Moreover, the problem is posed as a finite MDP and solved by both a model-based (DP) algorithm and a model-free RL algorithm. \vspace{-1mm} \subsection{Main Contributions} \label{sec_related} This paper takes the architecture of the H2020 5GPPP 5Growth project\footnote{https://5growth.eu/} \cite{li20215growth} as a general reference. The 5Growth project aims to validate the operation of 5G systems deployed in vertical industries and incorporates (through open APIs) AI/ML-related algorithms in service deployment and operation. In this paper, VNF placement decisions for each service request are made according to specific operational requirements (e.g., ensuring an efficient use of edge resources while maximizing the number of services delivered over the shared infrastructure). First, the theoretical performance bounds are calculated through Dynamic Programming (DP), formulating the problem as a model-based MDP solved through the PI algorithm. For this, all state transition probabilities must be computed in advance, as well as all possible valid states. This algorithm mathematically obtains the optimal solution among all possible solutions in the search space. Another algorithm representing reasonable operation guidelines is \textit{best fit}, which is also taken as a reference for comparison with the theoretical bounds and the practical learning algorithms.
This algorithm is inspired by the classic Weighted Round Robin load balancing algorithm \cite{articlewrr}, but adapted to the needs of the problem under consideration. In the \textit{best fit} approach, each EC has a weight based on criteria chosen by the administrator, and the EC with the highest weight serves the request. Finally, the problem has been approached through a \textit{Q-Learning off-policy temporal difference} algorithm to conceive an algorithm deployable in practice that performs as close as possible to the optimal bound. In this case, an agent observes each incoming VNF request and, based on the network state, performs an action by assigning it to an EC node so as to maximize the total number of processed VNF requests in the system. In summary, the key contributions of this paper are: \begin{itemize} \item The optimal VNF placement problem is formulated as a finite MDP and solved using a model-based algorithm, i.e., PI, considering cloud and transport network conditions, hence obtaining the optimal performance bounds. \item A practical (near) optimal solution is given using a model-free and off-policy algorithm, i.e., \textit{Q-Learning}, and compared with PI and \textit{best fit}. \item The simulation results make it clear that \textit{Q-Learning} works near-optimally when EC resources must be managed conscientiously. \end{itemize} The rest of the paper is organized as follows. Section \ref{scenario} describes the considered scenario, system model and the problem statement. Section \ref{sec_MDP} presents the problem as an MDP and the PI approach. Section \ref{sec_QLRL} introduces both practical solutions, the \textit{Q-Learning} and \textit{best fit} algorithms. Section \ref{sec_sim} details each of the simulations that have been carried out. Finally, Section VI draws the conclusions.
\section{Scenario and System Model} \label{scenario} \subsection{Considered Scenario} In this paper, we consider the joint use of cloud and transport KPIs to make optimal VNF placement decisions. Fig. \ref{fig:context} shows an example application scenario of our approach. If only cloud or only transport parameters are considered during the VNF placement decision-making process (i.e., selecting the most appropriate EC), the requested service cannot be provided in some cases. For example, consider in Fig. \ref{fig:context} the utilization of link 1 (poor quality link) under cloud-only aware decision making to reach $EC_3$ (EC with high cloud resources), and link 2 (good quality link) under transport-only aware decision making to reach $EC_2$ (EC with low cloud resources). In both cases, the service request cannot be provided efficiently at both the transport and cloud levels. For this reason, the selection of an EC center must depend on both cloud and transport parameters, as is done with link 3 (high quality link) to reach $EC_1$ (EC with high cloud resources). More specifically, in \cite{zeydan2021} the authors have demonstrated that by considering both transport and cloud related parameters simultaneously, better EC decisions are made by previously trained ML models. \begin{figure} \centering \includegraphics[width=0.97\linewidth]{figures/network.001.jpeg} \caption{Edge computing nodes across different network regions} \label{fig:context} \end{figure} \vspace{-1mm} \subsection{System Model and Problem Statement} \label{sec_model} In this paper, we make the following assumptions. The MNO operates a network with $\mathcal{K}=\{1,\ldots,K\}$ ECs, and provides a set $\mathcal{I}=\{1,\ldots, I\}$ of VNFs to users. Each user request asks for an instance of a VNF $i \in \mathcal{I}$. Requests for VNF type $i \in \mathcal{I}$ arrive following a Poisson process with rate $\lambda_i$, and each active request departs after an exponentially distributed holding time with rate $\mu_i$. The problem is assumed to be online, i.e.,
the demands arrive into the system one by one. The network management system keeps records of the VNFs served by each EC node. Each request $req$ is represented by a vector composed of the number of CPU cores of the corresponding VNF and the bandwidth of the required traffic, $[req_{cpu}, req_{bw}]$; e.g., $[3,100]$ is a request for a VNF that needs 3 CPU cores and processes 100 Mbps of traffic. The requests are represented by the set $\mathcal{J} = \{req_1, req_2, ..., req_J\}$. Under these assumptions, the problem is to place each given VNF request in a proper EC, without knowledge of future requests, in order to maximize the acceptance rate. The solution to this problem should consider both the capacity constraints of the ECs and the bandwidth constraints of the links. \vspace{-1mm} \section{Optimal Solution} \label{sec_MDP} In this section we obtain the optimal solution of the problem by formulating it as a finite MDP and then solving it using DP, i.e., the PI algorithm. \subsection{The MDP Formulation} An MDP \cite{10.5555/517430} models controlled stochastic dynamical systems whose evolution is subject to random factors and can be modified by certain decisions. An MDP provides a mathematical framework for learning sequential decision making, where actions in each state $s$ provide not only immediate rewards $\mathcal{R}$, but also determine the subsequent state $s'$. Mathematically, an MDP is a 5-tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, s_0 \rangle$, where: \begin{itemize} \item $\mathcal{S}$: the environment's finite set of states \item $\mathcal{A}$: the finite set of applicable actions within the environment states \item $\mathcal{P}$: the state transition probabilities\\ $\mathcal{P}(s' \, | \, s, a) = \Pr(\mathcal{S}_{t+1}=s' \, | \, \mathcal{S}_t=s, A_t=a)$ \item $\mathcal{R}(s, a, s')$: the immediate reward the agent obtains for being in state $s$, taking action $a$ and ending up in the subsequent state $s'$.
\item $s_0$: the initial state at which the agent starts its task \end{itemize} We assume here that the environment is fully observable and that the environment state at time $t$, denoted $s_t$, contains all the environment information relevant to the agent. It reflects not only the network status at time step $t$, but also the VNF $req_j$ arriving into the system at time $t$. An MDP where all elements of the 5-tuple are known in advance is called a model-based MDP and can be solved by DP methods, such as the PI algorithm. This allows obtaining the optimal agent's policy $\pi^*(s)$, in order for it to place as many VNFs as possible. We define the state $s_t$ as a vector of vectors $s_t = [[ecs], [d]]$, where $ecs$ contains as many vectors as there are EC nodes in the system, $ecs = [[M_1], [...], [M_K]]$. Each $M_k$ vector has size $I$, the total number of different VNF types, and the value of the $i$-th element of $[M_k]$ is the total number of active requests using instances of VNF type $i \in \mathcal{I}$. Following the technique presented in \cite{bakhshi2021globe}, the vector $[d]$ has size $I$ and only one of its elements can be nonzero: $d[i] = +1$ if an incoming request $req$ asks for an instance of VNF type $i$, and $d[i] = -1$ if a request that has been using an instance of VNF type $i$ departs from the network. The state transition probabilities $\mathcal{P}$ are crucial to compute the optimal agent's policy $\pi^*(s)$. In this paper, the $\mathcal{P}$ are computed from the VNF $req$ arrival and departure rates, $\lambda_{i}$ and $\mu_{i}$, $\forall i \in \mathcal{I}$. To describe the process of computing $\mathcal{P}$, let us define two subsets of the state space, namely: \begin{itemize} \item $\mathcal{S}^{+} = \{ s \in \mathcal{S} \, | \, \exists i \text{ s.t. } d_{i}=+1 \}$ \item $\mathcal{S}^{-} = \{ s \in \mathcal{S} \, | \, \exists i \text{ s.t. } d_{i}=-1 \}$ \end{itemize} Whenever $s \in \mathcal{S}^+$, a request $req_j$ has arrived that can either be served by an EC node chosen by the agent, $\mathcal{A}(s) \in \{M_1, M_2, ..., M_K\}$, or rejected due to insufficient resources to handle it. The states $s \in \mathcal{S}^-$ are completely transparent to the agent, since it does not perform any action upon the departure of requests; therefore, there is no agent action as such, and we can say that the action is \textit{void}. Let us illustrate this with an example considering the following scenario: $\mathcal{I}=\{1,2\}$, $\lambda_1 = 2, \lambda_2 = 4$, and $\mu_1 = 0.25, \mu_2 = 1$. In Fig. \ref{fig:arrival_example}, $s_0$ reflects the network status when a type-1 request arrives at $t=0$. The agent chooses $a=M_1$, so the request is allocated on EC 1, which increments $M_1[1]$ by 1. After this action, there are several possible $s'$ that depend on the next event---a new $req$ of type 1 or 2 could arrive in the system, or the $req$ of type 1 in $M_1$ or of type 2 in $M_2$ could depart. Fig. \ref{fig:arrival_example} shows how the transition probabilities are calculated from the transition rates using the competing exponentials theorem \cite{blumenfeld2001operations}. The numerator of $\mathcal{P}$ is the rate of the corresponding event, i.e., $\lambda_{i}$ for the states $s \in \mathcal{S}^{+}$; for the departure states $s \in \mathcal{S}^{-}$, it is the total number of active $req$ of type $i$ times the corresponding departure rate $\mu_{i}$. The denominator is the total rate of all possible events in this state. Note that the arrival rates are independent, while the departure rates depend on the total number of active $req$ of each type in the $M_k$.
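The competing-exponentials computation described above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code; the function name and data layout are our own.

```python
# Illustrative sketch: transition probabilities via competing exponentials.
# Arrival rates lam[i] are fixed; the departure rate of VNF type i is
# n_active[i] * mu[i], where n_active[i] is the total number of active
# type-i requests across all EC nodes.

def transition_probs(lam, mu, n_active):
    """Return {('arrival', i): p, ('departure', i): p} for the next event."""
    rates = {}
    for i, l in enumerate(lam):
        rates[('arrival', i)] = l
    for i, m in enumerate(mu):
        if n_active[i] > 0:  # departures only compete if a request is active
            rates[('departure', i)] = n_active[i] * m
    total = sum(rates.values())  # denominator: total rate of all events
    return {event: r / total for event, r in rates.items()}

# Example from the text: lambda = [2, 4], mu = [0.25, 1], with one active
# type-1 request (in M1) and one active type-2 request (in M2).
p = transition_probs([2, 4], [0.25, 1], [1, 1])
# total rate = 2 + 4 + 0.25 + 1 = 7.25, so e.g. P(arrival of type 1) = 2/7.25
```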
\begin{figure} \centering \includegraphics[width=0.97\linewidth]{figures/flow1.pdf} \caption{State transition probabilities for an arrival event} \label{fig:arrival_example} \end{figure} In Fig. \ref{fig:departure_example}, $s_0$ indicates that a $req$ of type 2 is departing from the system; it could be from $M_1$ or $M_2$. Fig. \ref{fig:departure_example} shows the state transition probabilities only for the case where the $req$ departs from $M_1$; for $M_2$, the procedure would be the same. In this case, $\mathcal{P}$ is composed of two terms: ${\mu_2}/{2\mu_2}$ is the probability that the departing type-2 $req$ leaves from $M_1$, considering the $\mu_2$ contributions of $M_1$ and $M_2$; the second term, as in Fig. \ref{fig:arrival_example}, is obtained by the competing exponentials theorem. Note that the sum of the probabilities $\mathcal{P}(s_i)$ is equal to $0.5$, since only half of the possible transitions are represented. In this paper, $\mathcal{R}(s, a, s') = 1$ if the given request is deployed in one of the EC nodes and $\mathcal{R}(s, a, s') = 0$ if the demand is rejected. \begin{figure} \centering \includegraphics[width=0.97\linewidth]{figures/flow2.pdf} \caption{State transition probabilities for a departure event} \label{fig:departure_example} \end{figure} \vspace{-1mm} \subsection{Dynamic Programming} \label{sec_PI} DP is a collection of methods by which it is possible to obtain the optimal policy of an MDP, as long as all elements of the model are known in advance. The PI algorithm is one of these methods \cite{sutton2018}. As shown in Algorithm \ref{alg_PI}, PI is divided into two sub-algorithms, $\textit{Policy Evaluation}$ and $\textit{Policy Improvement}$. The former computes $V_i(s)$ $\forall s \in \mathcal{S}$ for a given policy $\pi_i(s)$. The latter improves the previously given policy $\pi_i(s)$ and obtains a new improved policy $\pi_{i+1}(s)$.
The state-value function $V(s)$ maps each $s$ to an expected return and determines how good it is for the agent to be in a given $s$. The $V(s)$ can be expressed in terms of $\pi(s)$, where $V_{\pi}(s)$ describes how good it is for the agent to follow its policy $\pi(s)$ in a particular $s$, take an action $a$ and transition to another $s'$. The immediate reward $\mathcal{R}(s,a,s')$ plus the value $V(s')$ of the next state $s'$ determines how good the original $s$ was; more precisely, we have \begin{equation*} V_{\pi}(s) = \sum_{a} \pi(a \, | \, s) \sum_{s'} \mathcal{P}(s' \, | \, s, a)[\mathcal{R}(s,a,s') + \gamma V_{\pi}(s')] \end{equation*} The discount rate $\gamma \in [0, 1]$ prevents the agent from infinitely returning to a state to accumulate rewards. If $\gamma \approx 1$ the agent prioritizes expected future rewards; in contrast, when $\gamma \approx 0$ the agent strongly favors immediate rewards. Each iteration is guaranteed to result in an improved policy until the optimal one is obtained. Since a finite MDP has finite sets $\mathcal{S}$ and $\mathcal{A}$, convergence of $V^*(s)$, and hence of $\pi^*(s)$, is achieved in a finite number of iterations. Prior knowledge of all valid $s$ allows setting $\gamma \approx 1$. Finally, out of all possible policies, there is at least one that is better than or equal to all other policies, achieving the optimal state-value function $V^*(s)$; more specifically: \begin{equation*} V^*(s) \gets \max_{\pi} V_{\pi}(s) \ \ \ \ \forall s\in \mathcal{S}.
\end{equation*} \begin{algorithm} \caption{$\textit{Policy Iteration}$} \label{alg_PI} \begin{small} \begin{algorithmic} \State Randomly initialize $V(s) \in \mathbb{R}$ and $\pi(s) \in {\mathcal{A}}(s)$ $\forall s \in {\mathcal{S}}$ \While {$\Delta > \theta$} \Comment{The $\textit{Policy Evaluation}$ loop ~~~~~~~~~~~~~~$\,$} \State \Comment{$\theta$ determines the accuracy of estimation} \State $\Delta \gets 0$ \For {$\forall s \in \mathcal{S}$} \State $v \gets V(s)$ \State $V(s) \gets \sum_{s', r} p(s',r \, | \, s,\pi(s))[r+\gamma V(s')]$ \State $\Delta \gets \text{max}(\Delta, |v - V(s)|)$ \EndFor \EndWhile \State $policy_{stable} \gets true$ \For {$\forall s \in \mathcal{S}$} \Comment{The $\textit{Policy Improvement}$ loop} \State $action_{old} \gets \pi(s)$ \State $\pi(s) \gets \text{argmax}_{a} \sum_{s', r} p(s',r \, | \, s,a)[r+\gamma V(s')]$ \If {$action_{old} \ne \pi(s)$} \State$policy_{stable} \gets false$ \EndIf \EndFor \If {$policy_{stable}$} \State \Return $\pi^*$ \Else \State $\text{go to $\textit{Policy Evaluation}$ loop}$ \EndIf \end{algorithmic} \end{small} \end{algorithm} \vspace{-1mm} \section{Practical Solutions} \label{sec_QLRL} In the MDP formulation, we defined a subset $\mathcal{S}^{-}$ to derive the transition probabilities in the case of the departure of a demand, as in Fig. \ref{fig:departure_example}. Since this is not needed in the practical solutions, we redefine the state as follows: \begin{equation*} s = [req_{cpu}, req_{bw}, M_{1}^{fcpu}, ..., M_{K}^{fcpu}, M_1^{fbw}, ..., M_K^{fbw}] \end{equation*} where $M_k^{fcpu}$ is the number of available CPU units of the $k$-th EC node, and $M_k^{fbw}$ is the available BW of the connection to reach the $k$-th EC node. \subsection{{Q-Learning}} In cases where not all aspects of the system are known in advance, model-free reinforcement learning approaches can find the (near) optimal policy. Here, the agent must learn from its own actions without a given $\pi(s)$; hence, it is an off-policy method.
The \textit{Q-Learning} algorithm \cite{Watkins:1989} drives the agent's learning by assigning values to $(s, a)$ pairs. The Q-values define how good an action is in a given state. They are updated at each interaction with the environment by the following rule: \begin{equation*} \begin{split} \underbrace {Q(s,a)}_{\text{new value}} \gets \underbrace {Q(s,a)}_{\text{old value}} + \underbrace {\alpha }_{\text{learning rate}}\cdot \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\\ \overbrace {{\bigg (}\underbrace {\underbrace {\mathcal{R}(s, a, s')} _{\text{reward}}+\underbrace {\gamma } _{\text{discount rate}}\cdot \underbrace {\max _{a'}Q(s',a')} _{\text{estimate of optimal future value}}} _{\text{new value (temporal difference target)}}-\underbrace {Q(s,a)}_{\text{old value}}{\bigg )}} ^{\text{temporal difference}} \end{split} \end{equation*} The learning rate $\alpha \in [0, 1]$ controls the variation of the Q-values, defining to what degree the agent replaces old information with new. A rate $\alpha \approx 1$ forces the agent to consider only the latest information, while $\alpha \approx 0$ causes the agent to learn nothing. In the Q-Learning update rule, the new Q-value is a weighted combination of the old Q-value and the new observation. The \textit{Q-Learning} algorithm converges to the optimal Q-value, $Q^*(s,a)$, given a learning rate $\alpha$ and exploration over $\mathcal{S}$ that satisfy the Robbins-Monro conditions \cite{stochasticaprox, qlearninghistory}. A disadvantage of \textit{Q-Learning} is that the agent can only learn from actions performed in visited states; otherwise there is no learning. The chain of successive actions and the resulting states forms an episode. This leads to the exploration/exploitation dilemma.
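The update rule above can be sketched in a few lines of Python. This is illustrative only; the dictionary-based Q-table and the state/action names are our own assumptions, not the paper's implementation.

```python
# Illustrative sketch of a single tabular Q-Learning update step.
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions_next, alpha=0.5, gamma=0.5):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)); returns the new Q(s,a)."""
    best_next = max((Q[(s_next, a2)] for a2 in actions_next), default=0.0)
    td_target = r + gamma * best_next          # temporal difference target
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])  # temporal difference step
    return Q[(s, a)]

Q = defaultdict(float)  # unseen (s, a) pairs start at 0
# Accept a request in (hypothetical) state 's0' by placing it on EC 1, reward 1:
v = q_update(Q, 's0', 'M1', 1.0, 's1', ['M1', 'M2'])
# With an empty table: Q(s0, M1) = 0 + 0.5 * (1 + 0.5*0 - 0) = 0.5
```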
It might be interesting for the agent, especially at the beginning of its training\footnote{There is no training concept as such in \textit{Q-Learning} as exists in other ML domains. Nonetheless, we refer to training as the initial episodes in which the agent attempts to populate its Q-Table.}, to adopt an explorer profile and visit as many states and try as many actions as possible within those states. When the agent's learning reaches a certain level, it is more beneficial to exploit the known actions that bring higher rewards when revisiting known states. This strategy is known in the literature as $\epsilon$-greedy \cite{sutton2018}. \textit{Q-Learning} differs from PI in that it cannot be evaluated directly without the agent having prior experience. That is, it requires knowledge about what to do when confronted with a state $s$; otherwise the agent's actions would be completely random according to the $\epsilon$-greedy strategy. Therefore, the \textit{Q-Learning} agent is trained with several different VNF $req$ sequences before being used for evaluation. To prevent the \textit{Q-Learning} agent from being confronted with an unknown state during the evaluation stage and failing to find it in the Q-Table, the algorithm has been modified to allow learning during the evaluation stage as well.
\begin{algorithm} \caption{\textit{Q-Learning}} \label{alg_QL} \begin{small} \begin{algorithmic} \State \text{Set values for: }\text{learning rate} $\alpha$, \text{discount rate} $\gamma$ \State Randomly initialize $Q[s,a] \in \mathbb{R}$ $\forall s \in {\mathcal{S}}$, $\forall a \in {\mathcal{A}}(s)$ \For {each episode} \State Initialize $\mathcal{S}$ \For {each step} \If{evaluation} \If{$s$ in $Qtable$} \State $a \gets $ $argmax(Q(s, a))$ \Else \State behave as in $not$ evaluation \EndIf \Else \State $a \gets $ action from ${\mathcal{A}}(s)$ by $\epsilon$-greedy strategy \State Observe $s'$ \State $r \gets \mathcal{R}(s,a,s')$ \State $Q(s,a) \gets Q(s,a) + \alpha [ r + \gamma \max_{a'}Q(s',a') - Q(s,a)]$ \State $s \gets s'$ \EndIf \EndFor \State Until $S_T$ \EndFor \end{algorithmic} \end{small} \end{algorithm} Another important aspect to consider in the present work is that there is no absorbing/terminal state $S_T$ in the VNF placement problem. In episodic tasks, $S_T$ is reached when the agent reaches its goal or commits an error that forces the environment to restart. However, in our problem, VNF assignment must continue as long as there are requests. Therefore, in this work $S_T$ is determined by the total number of $req_j$ in the VNF $req$ training files. \subsection{{Best Fit}} In addition to Q-Learning, we develop another practical solution called \textit{best fit}, inspired by the classical Weighted Round Robin load balancing algorithm. The algorithm assigns the incoming VNF request to the EC that has the highest network metric value $l$, defined as follows: \begin{equation*} l= a\ \frac{M_k^{fcpu}}{100} + (1-a) \ \frac{1}{Num_{hops}} \end{equation*} where $Num_{hops}$ is the number of hops from the base station (BS) to the EC node, and $a = M_k^{ubw} / (\text{Total Network BW})$, where $M_k^{ubw}$ is the used BW of the connection to reach the $k$-th EC node.
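The metric $l$ and the selection of the EC with the highest value can be sketched as follows. This is an illustrative sketch; the function names and the candidate-tuple layout are our own assumptions.

```python
# Illustrative sketch of the best-fit metric l and EC selection.
def metric_l(free_cpu_pct, used_bw, total_bw, num_hops):
    """l = a * free_cpu/100 + (1 - a) * 1/num_hops, with a = used_bw / total_bw."""
    a = used_bw / total_bw
    return a * free_cpu_pct / 100 + (1 - a) * (1 / num_hops)

def best_fit(candidates):
    """Pick the EC with the highest l among feasible candidates, given as
    (name, free_cpu_pct, used_bw, total_bw, num_hops) tuples."""
    return max(candidates, key=lambda c: metric_l(*c[1:]))[0]

# Hypothetical example: EC1 is 1 hop away with 80% free CPU and lightly
# used BW; EC2 is 3 hops away with 50% free CPU and heavily used BW.
chosen = best_fit([('EC1', 80, 200, 1000, 1), ('EC2', 50, 800, 1000, 3)])
# l(EC1) = 0.2*0.8 + 0.8*1.0 = 0.96 > l(EC2) ~ 0.47, so EC1 is chosen
```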
The \textit{best fit} algorithm first checks whether the EC nodes have enough CPU and BW resources; if none does, a rejection is generated. If there is only one EC with available resources, the VNF $req$ is assigned to that EC node. If there is more than one EC node with the same resource availability, the VNF $req$ is, in this paper, randomly assigned. If there is more than one EC node with different resource availability, the VNF $req$ is assigned to the EC node with the highest $l$. \vspace{-1mm} \section{Simulation Results} \label{sec_sim} The following simulations were performed from a theoretical perspective, since the PI algorithm requires a large amount of computational resources and execution time; it would be a tedious task to simulate a network with a large number of entities. For this reason, several BSs demanding $req$ and two EC nodes serving the demands were considered. The goal is to measure the performance of \textit{Q-Learning} and observe how far it is from a mathematical solution, a model-based MDP, and how much it can outperform \textit{best fit}. During the evaluation stage, all three algorithms are run for only one episode over the same set of VNF requests. Unless otherwise specified, the following simulations are run with the settings defined in Table \ref{table:default_sim_settings}. \begin{table}[t] \centering \caption{{Default Simulation Settings}} \vspace{-2mm} \begin{small} \begin{tabular}{|c c|} \hline \textbf{Settings} & \textbf{Values} \\ [0.1ex] \hline\hline Initial network status & $[4, 12, 0, 0, 1000, 400, 0, 0]$ \\ \hline No. training files & $10$ \\ \hline No. $req$ in training files & $500$ \\ \hline No. episodes in training files & $250$ \\ \hline No. evaluation files & $20$ \\ \hline No.
$req$ in evaluation files& $500$\\ \hline [$\lambda_1, \lambda_2]$&$[3, 2]$\\ \hline [$\mu_1, \mu_2]$&$[1, 0.5]$\\ \hline [$req_1, req_2]$&$[(1, 300), (3, 50)]$\\ \hline \end{tabular} \label{table:default_sim_settings} \end{small} \end{table} \subsection{Influence of $\alpha$ and $\gamma$ in the \textit{Q-Learning} algorithm} Continuous tasks, such as the one considered here, force the agent to compromise between achieving a high reward in the long run and giving enough importance to each current state value. The agent must learn to some degree, but without constantly overriding what it has already learned. Therefore, $\alpha$ and $\gamma$ must be configured considering the nature of the task the agent is expected to perform. \begin{figure} \subfloat[$\alpha = 0, \gamma = 0.001$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/a0_g0001.jpeg} } \hfill \subfloat[$\alpha = 0.5, \gamma = 0.5$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/a05_g05.jpeg} } \vspace{-2mm} \subfloat[$\alpha = 0.9, \gamma = 0.9$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/a09_g09.jpeg} } \hfill \subfloat[$\alpha = 0.5, \gamma = 0.9$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/a05_g09.jpeg} } \caption{Influence of $\alpha$ and $\gamma$ in the \textit{Q-Learning} algorithm} \label{fig:alpha_gamma} \end{figure} Fig. \ref{fig:alpha_gamma} shows four different \textit{Q-Learning} simulations representing the average reward (Y-axis) collected by the agent over the course of 250 episodes (X-axis) through the same 500 VNF $req$ sequence file. Note that setting $\alpha$ and $\gamma$ to consider the most recent information and to favor the long-term reward does not ensure the desired learning convergence, as in Fig. \ref{fig:alpha_gamma}(c). This is mainly because there is no $S_T$ that determines the goal to be achieved. Setting both $\alpha$ and $\gamma$ to $0.5$ has been shown to be satisfactory.
\subsection{Influence of $\epsilon$-greedy parameters in the \textit{Q-Learning} algorithm} \begin{figure} \subfloat[$\epsilon_{decay}= 0.1$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/e01.jpeg} } \hfill \subfloat[$\epsilon_{decay}= 0.01$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/e001.jpeg} } \vspace{-2mm} \subfloat[$\epsilon_{decay}= 0.001$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/e0001.jpeg} } \hfill \subfloat[$\epsilon_{decay}= 0.03$]{ \centering \hspace{-3mm} \includegraphics[width=0.49\linewidth]{figures/e003.jpeg} } \caption{Influence of $\epsilon$-greedy parameters in the \textit{Q-Learning} algorithm} \label{fig:epsilon} \end{figure} The $\epsilon$-greedy sub-algorithm of \textit{Q-Learning}, which determines how much of an explorer or exploiter the agent is, has certain parameters that define exactly how long the agent takes on each profile. The decay rate ($\epsilon_{decay}$) has a huge impact on the convergence of the agent's learning performance, as shown in Fig. \ref{fig:epsilon}. Depending on the number of episodes, a very small $\epsilon_{decay}$ rate may mean that the agent can never exploit what it has already learned, as in Fig. \ref{fig:epsilon}(c). By leaving a minimum epsilon value $\epsilon_{min} = 0.001$ and $\epsilon_{max}=1$, the probability that the agent explores from time to time increases, even towards the end of the training stage; this is the reason why certain peaks are seen along the performance curve once it has converged, Fig. \ref{fig:epsilon}(a). Fig. \ref{fig:epsilon}(d) shows the same level of convergence and the same average reward as Fig. \ref{fig:epsilon}(a), but over the course of 2000 episodes instead of 250, with $\epsilon_{decay} = 0.03$. $\epsilon$ decays after each episode using: $\epsilon = \epsilon_{min} + (\epsilon_{max} - \epsilon_{min}) \cdot e^{-\epsilon_{decay} \cdot episode_i}$.
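The decay schedule above can be sketched as follows (illustrative; the parameter defaults follow the values reported in the text):

```python
import math

# Illustrative sketch of the epsilon decay schedule:
# eps = eps_min + (eps_max - eps_min) * exp(-eps_decay * episode)
def epsilon(episode, eps_min=0.001, eps_max=1.0, eps_decay=0.03):
    return eps_min + (eps_max - eps_min) * math.exp(-eps_decay * episode)

# epsilon starts at eps_max (fully exploring) and decays monotonically
# toward eps_min, so a small residual exploration probability remains
# even after convergence -- which explains the occasional peaks in the
# converged performance curves.
```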
\subsection{Rejection ratio with respect to VNF arrival rate} In these simulations, the performance of all three algorithms is evaluated using different $\lambda_i$ rates, as shown in Table \ref{table:rate}. The goal is to analyze how the $req_j$ arrival rate affects the resources of the EC nodes. The agent is forced to handle consecutive incoming requests at increasingly high arrival rates. The results are shown in Fig. \ref{fig:rate_results}, where the X-axis represents the different factor values by which the VNF arrival rates are multiplied. In Simulation 1, the low arrival rate $\lambda_2$ relative to the departure rate $\mu_2$ ensures that such a demand arrives and departs in a short time, returning the resources to the EC center to which they were allocated, which thus become available again. As the $req_j$ arrival rates increase (Simulations 2 and 3), the MDP achieves a slightly better policy than \textit{Q-Learning}. The agent must constantly learn how to assign the demands among the resources of the EC nodes, as they are occupied for longer time steps; this occasionally results in rejections. Nevertheless, the performance difference is very small, indicating that the agent behaves near-optimally when the EC centers' resources are scarce. \begin{table}[t] \centering \caption{{Simulation Settings to Analyze Arrival Rate}} \vspace{-2mm} \begin{small} \begin{tabular}{|c c c c|} \hline \textbf{Parameter} & \textbf{Sim. 1} & \textbf{Sim. 2} & \textbf{Sim.
3}\\ [0.1ex] \hline\hline Factor & 0.2 & 1.0 & 2.0 \\ \hline [$\lambda_1, \lambda_2]$& [0.6, 0.4]&[3, 2]&[6, 4] \\ \hline [$\mu_1, \mu_2]$&$[1, 0.5]$&$[1, 0.5]$&$[1, 0.5]$\\ \hline \end{tabular} \label{table:rate} \end{small} \end{table} \begin{figure}[t] \centering \hspace*{-0.4cm} \includegraphics[width=0.97\linewidth]{figures/47654.jpeg} \vspace{-3mm} \caption{Rejection ratio with respect to VNF arrival rate} \label{fig:rate_results} \vspace{1mm} \end{figure} \subsection{Rejection ratio with respect to EC and link capacity} \begin{table}[t] \centering \caption{{Simulation Settings to Analyze EC Resources}} \vspace{-2mm} \begin{small} \begin{tabular}{|c c c c|} \hline \textbf{Parameter} & \textbf{Sim. 1} & \textbf{Sim. 2} & \textbf{Sim. 3}\\ [0.1ex] \hline\hline Factor & 0.8 & 1.0 & 1.2 \\ \hline $[M_1^{fcpu}, M_2^{fcpu}]$ &[4, 8] &[5, 10] &[6, 12]\\ \hline $[M_1^{fbw}, M_2^{fbw}]$ &[800, 320] &[1000, 400] &[1200, 480]\\ \hline \end{tabular} \label{table:capacity} \end{small} \end{table} In these simulations, we evaluate the performance of the algorithms with different CPU core and BW link capacities, as shown in Table \ref{table:capacity}. The results are shown in Fig. \ref{fig:capacity_result}. In Fig. \ref{fig:capacity_result}, for Simulation 1, the scarce available resources, for either $M_1^{fcpu}$ or $M_2^{fbw}$, determine the performance difference between the MDP and \textit{Q-Learning}. The \textit{Q-Learning} policy is better than that of \textit{best fit}, but worse than that of the MDP. The $M_1^{fcpu}$ resources are used up very quickly, so the rest of the outcome is somewhat deterministic. As the available resources of each EC are increased, \textit{Q-Learning} appears to improve its learning and to converge closely to the ideal policy of the MDP. It should be noted that the randomness of \textit{Q-Learning} during the exploration stage determines, to some extent, the starting point for building the path to the optimal policy.
When the exploitation stage begins, changes to what has already been learned become less likely. \begin{figure} \centering \hspace*{-0.4cm} \includegraphics[width=0.97\linewidth]{figures/123123123.jpeg} \vspace{-3mm} \caption{Rejection ratio with respect to EC and link capacity} \label{fig:capacity_result} \vspace{1mm} \end{figure} \subsection{Rejection ratio with respect to EC resource heterogeneity} In these simulations, we analyze how the agent can effectively reflect the heterogeneity of EC resources. To this end, EC 1's available resources are fixed, i.e., $[4, 1000]$, while EC 2's CPU cores are increased as $EC_2^{cpu}=\beta \cdot EC_1^{cpu}$, and its available BW is decreased as $EC_2^{BW}=\frac{1}{\beta} \cdot EC_1^{BW}$. The simulation settings and corresponding results are shown in Table \ref{table:mec_het} and Fig. \ref{fig:mec_het_result}, respectively. The first thing that can be observed in all simulations is the small influence of each individual resource parameter on the agent's decision. It is clear that the agent treats the lack of resources similarly whether it is CPU or BW. It is noticeable that Simulations 1 and 3 show almost the same results in terms of rejection ratio, even though the two simulations differ by a factor of 3 in $\beta$. Simulation 2 also shows the same dynamics as all previous simulations: Q-Learning performs slightly worse when the resources are larger and there are more opportunities to assign $req_j$ among the network's EC centers. \begin{table}[t] \centering \caption{{Simulation Settings to Analyze EC Heterogeneity}} \vspace{-2mm} \begin{small} \begin{tabular}{|c c c c|} \hline \textbf{Parameter} & \textbf{Sim. 1} & \textbf{Sim. 2} & \textbf{Sim. 
3}\\ [0.1ex] \hline\hline $\beta$ & 1.0 & 2.5 & 3.0\\ \hline $M_2^{fcpu}$ &[4] &[10] &[12]\\ \hline $M_2^{fbw}$ &[1000] &[400] &[333]\\ \hline \end{tabular} \label{table:mec_het} \end{small} \end{table} \begin{figure}[t] \centering \hspace*{-0.4cm} \includegraphics[width=0.97\linewidth]{figures/4645645.jpeg} \vspace{-3mm} \caption{Rejection ratio with respect to EC resource heterogeneity} \label{fig:mec_het_result} \vspace{1mm} \end{figure} \subsection{Rejection ratio with respect to VNF demand heterogeneity} \begin{table}[t] \centering \caption{$req_2$ Demand Settings to Analyze Demand Heterogeneity} \vspace{-2mm} \begin{small} \begin{tabular}{|c c c c|} \hline \textbf{Parameter} & \textbf{Sim. 1} & \textbf{Sim. 2} & \textbf{Sim. 3}\\ [0.1ex] \hline\hline $\beta$ &1.5 &2 &4\\ \hline $req_2^{CPU}$ & 3 & 4 & 8 \\ \hline $req_2^{BW}$ & 200 & 150 & 75 \\ \hline \end{tabular} \label{table:demand_het} \end{small} \end{table} In these simulations, we proceed similarly to the previous ones, but now vary the VNF demand values. In this case, the $req_1$ demands remain static and the $req_2$ values are changed according to Table \ref{table:demand_het}. For $req_2$, as the number of requested CPU cores increases, $req_2^{CPU}=\beta \cdot req_1^{CPU}$, the required link BW is lowered, $req_2^{BW} = \frac{1}{\beta} \cdot req_1^{BW}$. The results are shown in Fig. \ref{fig:demand_het_result}. For $\beta=1.5$, we again observe the familiar agent behavior when available resources are not critical: \textit{Q-Learning} seems to perform slightly worse than MDP. As EC resources become scarce, \textit{Q-Learning} approximates the MDP, resulting in similar policies. 
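The tabular \textit{Q-Learning} mechanics referred to throughout these simulations can be illustrated in a few lines. The following is a hypothetical toy (two EC nodes, unit-CPU demands, $\pm 1$ accept/reject rewards, illustrative hyperparameters), not the simulator or the parameter settings used in our experiments:

```python
import random
from collections import defaultdict

def q_learning_placement(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Toy tabular Q-Learning for assigning VNF demands to two EC nodes.

    Hypothetical simplification: state = remaining CPU cores at (EC1, EC2),
    action = index of the EC node hosting the next demand (one core each),
    reward = +1 on acceptance, -1 on rejection.  All values here are
    illustrative, not the paper's simulation settings.
    """
    rng = random.Random(seed)
    capacity = (2, 4)
    Q = defaultdict(float)                 # Q[(state, action)]
    for _ in range(episodes):
        state = capacity
        while sum(state) > 0:              # episode ends when cores run out
            if rng.random() < eps:         # exploration stage
                a = rng.randrange(2)
            else:                          # exploitation of learned values
                a = 0 if Q[(state, 0)] >= Q[(state, 1)] else 1
            free = list(state)
            if free[a] >= 1:               # acceptance: allocate one core
                free[a] -= 1
                reward, nxt = 1.0, tuple(free)
            else:                          # rejection: node has no capacity
                reward, nxt = -1.0, state
            target = reward + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)])
            Q[(state, a)] += alpha * (target - Q[(state, a)])
            state = nxt
    return Q
```

The $\varepsilon$-greedy rule makes the randomness of the exploration stage explicit: it seeds the Q-table before exploitation takes over, after which changes to the learned values become increasingly unlikely.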
\begin{figure}[t] \centering \hspace*{-0.4cm} \includegraphics[width=0.97\linewidth]{figures/567r467.jpeg} \vspace{-3mm} \caption{Rejection ratio with respect to VNF demand heterogeneity} \label{fig:demand_het_result} \vspace{1mm} \end{figure} \section{Conclusions and Future Work} In this paper, we have studied the problem of VNF placement in EC-enabled 6G networks, considering both computational and communication resources. We have developed both a theoretically optimal and a practical solution to this problem. We obtained the former by formulating the problem as a finite MDP, solved via PI. The latter is a model-free reinforcement learning approach, namely Q-Learning. We evaluated the solutions in a wide range of network parameter settings. It has been shown that there is a striking performance similarity between Q-Learning and PI, especially when both algorithms face limited EC resources. Nevertheless, the MDP needs to know all the environment dynamics in advance, which is an arduous task in a real-world scenario. It has also been shown that Q-Learning performs better than \textit{best fit} in all cases and that it performs well considering cloud and transport network parameters, which was the main objective of this work. Having shown the near-optimality of learning-based approaches, thanks to the mathematical tractability of the scenario considered here, future work will be devoted to extending the proposed practical schemes to increasingly complex 6G scenarios, e.g., through Deep Q-Networks (DQN). \bibliographystyle{ieeetr}
\section{Introduction}\label{intro} Let $\cS_+^n$ and $\Sn_{++}$ be the {cones of positive semidefinite and positive definite matrices}, respectively, in the space of $n\times n$ symmetric matrices $\cS^n$ endowed with the standard trace inner product $\inprod{\cdot}{\cdot}$ and the Frobenius norm $\norm{\cdot}$. In this paper, we consider the following convex quadratic semidefinite programming (QSDP) {problem}: \begin{eqnarray} ({\bf P}) \quad \min \left\{\frac{1}{2}\inprod{X}{\cQ X} + \inprod{C}{X} \mid \cA X= b, \; X\in\cS_+^n \cap \cK \right\}, \nonumber \end{eqnarray} where $\cQ:\Sn\to \Sn $ is a self-adjoint positive semidefinite linear operator, $\cA:\Sn \rightarrow \Re^{m}$ is a linear map whose adjoint is denoted as $\cA^*$, $C\in \Sn$, $b \in \Re^{m}$ are given data, and $\cK$ is a simple nonempty closed convex polyhedral set in $\Sn$, e.g., $\cK =\{X \in\Sn \mid\, L\leq X\leq U\}$ with $L,U\in \Sn$ being given matrices. The main objective of this paper is to design and analyse efficient algorithms for solving ({\bf P}) and its dual. We are particularly interested in the case where the dimensions $n$ and/or $m$ are large, and {it may be impossible to explicitly store or compute the matrix representation of $\cQ$.} For example, if $\cQ = H\otimes H$ is the Kronecker product of a dense matrix $H\in \cS^n_+$ with itself, then it would be extremely expensive to store the matrix representation of $\cQ$ explicitly when $n$ is larger than, say, 500. As far as we are aware, the best solvers currently available for solving ({\bf P}) are based on inexact primal-dual interior-point methods \cite{toh2008inexact}. However, they are highly inefficient for solving large scale problems, as interior-point methods have severe inherent ill-conditioning limitations which would make the convergence of a Krylov subspace iterative solver employed to compute the search directions extremely slow. 
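To make the storage issue concrete: since $H$ is symmetric, the identity $(B^{T}\otimes A)\,{\rm vec}(X) = {\rm vec}(AXB)$ with $A=B=H$ gives $\cQ X = HXH$, so $\cQ$ can be applied matrix-free with $O(n^2)$ memory even though its explicit matrix representation has $n^2\times n^2$ entries. A small numerical sanity check of this identity (illustrative only; the function name \texttt{apply\_Q} is ours, and this is not part of any solver discussed here):

```python
import numpy as np

def apply_Q(H, X):
    """Matrix-free application of Q = H (Kronecker) H to a matrix X.

    For symmetric H, (B^T kron A) vec(X) = vec(A X B) with A = B = H gives
    Q X = H X H, using O(n^2) memory instead of the n^2-by-n^2 entries of
    the explicit Kronecker matrix.  Illustrative sketch, not solver code.
    """
    return H @ X @ H

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8
    H = rng.standard_normal((n, n))
    H = H @ H.T                     # dense symmetric PSD, as in Q = H kron H
    X = rng.standard_normal((n, n))
    X = (X + X.T) / 2               # a symmetric iterate
    # column-stacking vec: flatten/reshape with order="F"
    ref = (np.kron(H, H) @ X.flatten(order="F")).reshape((n, n), order="F")
    assert np.allclose(apply_Q(H, X), ref)
```

Already for $n=500$ the explicit matrix would hold $500^4 = 6.25\times 10^{10}$ entries, while the matrix-free product needs only two $n\times n$ multiplications.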
{While sophisticated preconditioners have been constructed in \cite{toh2008inexact} to alleviate the ill-conditioning, the improvement is however not dramatic enough for the algorithm to handle large scale problems comfortably.} On the other hand, an interior-point method which employs a direct solver to compute the search directions is prohibitively expensive for solving ({\bf P}) since the cost is at least $O( (m+n^2)^3)$ arithmetic operations per iteration. {It is safe to say that there is currently no solver which can efficiently handle large scale QSDP problems of the form ({\bf P}), and our paper precisely aims to provide an efficient and robust solver for ({\bf P}).} The algorithms which we will design later are based on the augmented Lagrangian function for the dual of ({\bf P}) (in its equivalent minimization form): \begin{equation} ({\bf D}) \quad \min \left\{\delta_{\cK}^*(-Z) +\frac{1}{2}\inprod{W}{\cQ W} - \inprod{b}{y} \; \Big| \begin{array}{l} Z - \cQ W + S + \cA^* y = C, \\[3pt] S\in\Sn_+,\; W\in\cW,\; y\in\Re^m, Z \in \Sn \end{array} \right\}, \nonumber \end{equation} where $\cW$ is any subspace of $\Sn$ containing the range space of $\cQ$ {(denoted as $\Range(\cQ)$), and $\delta_{\cK}^*(\cdot)$ is the Fenchel conjugate of the indicator function $\delta_{\cK}(\cdot)$.} { Due to its great potential in applications and mathematical elegance, QSDP has been studied quite actively both from the theoretical and numerical aspects \cite{alfakih1999solving, higham2002computing, jiang2014partial, krislock2004local, Nie2001, qi2006quadratically, toh2007inexact, toh2008inexact}. } For recent theoretical developments, one may refer to \cite{cui2016on,HanSZ2015,qi2009local,sun2010a} and references therein. Here we focus on the numerical aspect, and we will next briefly review some of the methods available for solving QSDP problems. 
Toh et al \cite{toh2007inexact} and Toh \cite{toh2008inexact} proposed inexact primal-dual path-following interior-point methods to solve the special class of convex QSDP without the constraint in $\cK$. In theory, these methods can be used to solve QSDP problems with inequality constraints and constraint in $\cK$ by reformulating the problems {into} the required standard form. However, as already mentioned, in practice interior-point methods are not efficient for solving QSDP problems beyond moderate scales either due to the extremely high computational cost per iteration or the inherent ill-conditioning of the linear systems governing the search directions. In \cite{zhao2009semismooth}, Zhao designed a semismooth Newton-CG augmented Lagrangian (NAL) method and analyzed its convergence for solving the primal QSDP problem ({\bf P}). However, the NAL algorithm often encounters numerical difficulty (due to singular or nearly singular generalized Hessian) when the polyhedral set constraint $X\in\cK$ is present. Subsequently, Jiang et al \cite{jiang2012inexact} proposed an inexact accelerated proximal gradient method for least squares semidefinite programming {with only equality constraints where} the objective function in ({\bf P}) is expressed explicitly in the form of $\norm{\cB X - d}^2$ for some given linear map $\cB$. More recently, inspired by the successes achieved in \cite{SunTY3c,YangST2015} for solving the linear SDP problems with nonnegative constraints, Li, Sun and Toh \cite{LiSunToh_scb2014} proposed a first-order algorithm, {known as the} Schur complement based semi-proximal alternating direction method of multipliers (SCB-sPADMM), for solving the dual form ({\bf D}) of QSDP. 
As far as we are aware, \cite{LiSunToh_scb2014} is the first paper to advocate using the dual approach for solving QSDP problems {even though the dual problem ({\bf D}) looks a lot more complicated than the primal problem ({\bf P}), especially with the presence of the subspace constraint involving $\cW$.} By leveraging on the Schur complement based decomposition technique developed in \cite{LiSunToh_scb2014,LiThesis2014}, Chen, Sun and Toh \cite{chen2015efficient} also employed the dual approach by proposing an efficient inexact ADMM-type first-order method (which we name as SCB-isPADMM) for solving problem ({\bf D}). {Promising numerical results have been obtained by the dual based first-order algorithms in solving various classes of QSDP problems to moderate accuracy \cite{LiSunToh_scb2014, chen2015efficient}. Naturally one may hope to also rely on the ADMM scheme to compute highly accurate solutions. However, as one will observe from the numerical experiments presented later in Section \ref{sec:comp-example}, ADMM-type methods are incapable of finding accurate solutions for difficult QSDP problems due to their slow local convergence or stagnation}. On the other hand, recent studies on the convergence rate of augmented Lagrangian methods (ALM) for solving convex semidefinite programming with multiple solutions \cite{cui2016on} show that, compared to ADMM-type methods, the ALM {can enjoy a faster convergence rate (in fact asymptotically superlinear) under milder conditions. These recent advances thus strongly indicate that one should be able to design a highly efficient algorithm based on the ALM for ({\bf D}) for solving QSDP problems to high accuracy.} More specifically, we will propose a two-phase augmented Lagrangian based algorithm with Phase I to generate a reasonably good initial point to warm start the Phase II algorithm so as to {compute} accurate solutions efficiently. 
We call this new method \QSDPNAL since it extends the ideas of SDPNAL \cite{SDPNAL} and SDPNAL+ \cite{YangST2015} for linear SDP problems to QSDP problems. Although the aforementioned two-phase framework has already been demonstrated to be highly efficient for solving linear {SDP} problems \cite{YangST2015,SDPNAL}, {it remains to be seen whether we can achieve equally or even more impressive performance on various QSDP problems.} {In recent years, it has become fashionable to design first-order algorithms for solving convex optimization problems, with some even claiming their efficacy in solving various challenging classes of matrix conic optimization problems based on limited performance evaluations. However, based on our extensive numerical experience in solving large scale linear SDPs \cite{SunTY3c,YangST2015,SDPNAL}, we have observed that while first-order methods can be rather effective in solving easy problems which are well-posed and nondegenerate, they are typically powerless in solving difficult instances which are ill-posed or degenerate. Even a well designed first-order algorithm with guaranteed convergence and a highly optimized implementation, such as the ADMM+ algorithm in \cite{SunTY3c}, may still fail on slightly more challenging problems. For example, the ADMM+ algorithm designed in \cite{YangST2015} can encounter varying degrees of difficulties in solving linear SDPs arising from rank-one tensor approximation problems. On the other hand, the SDPNAL algorithm in \cite{SDPNAL} (which exploits second-order information) is able to solve those problems very efficiently to high accuracy. 
We believe that in order to design an efficient and robust algorithm to solve the highly challenging class of matrix conic optimization problems including QSDPs, one must fully combine the advantages offered by both the first and second order algorithms, rather than just solely relying on first-order algorithms even though they may appear to be easier to implement. } {Next} we briefly describe our algorithm {\sc Qsdpnal}. Let $\cZ = \Sn\times\cW\times\Sn\times\Re^m$. Consider the following Lagrange function associated with ({\bf D}): \begin{equation*} \label{fun:lag} l(Z,W,S,y;X) := \delta_{\cK}^*(-Z) + \frac{1}{2}\inprod{W}{\cQ W} +\delta_{\Sn_+}(S)- \inprod{b}{y} + \inprod{Z-\cQ W + S + \cA^*y - C}{X}, \end{equation*} where $ (Z,W,S,y)\in\cZ$ and $X\in \Sn$. For a given positive scalar $\sigma$, the augmented Lagrangian function for ({\bf D}) is defined by \begin{equation}\label{eq-aug-d-intro} \cL_{\sigma}(Z,W,S,y;X) := l(Z,W,S,y;X) + \frac{\sigma}{2}\norm{Z-\cQ W+S + \cA^*y-C}^2,\quad (Z,W,S,y)\in\cZ, \; X\in \Sn. \end{equation} The algorithm which we will adopt in {\sc Qsdpnal}-Phase I is a variant of the SCB-isPADMM algorithm developed in \cite{chen2015efficient}. In {\sc Qsdpnal}-Phase II, we design an augmented Lagrangian method (ALM) for solving ({\bf D}) where the inner subproblem in each iteration is solved via inexact semismooth Newton based algorithms. Given $\sigma_0 >0$, $(Z^0,W^0,S^0,y^0,X^0)\in\cZ\times \Sn$, the $(k+1)$th iteration of the ALM consists of the following steps: \begin{equation*}\label{intro-prox-aug} \begin{aligned} &(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) \approx \argmin \left\{ \mathcal{L}_{\sigma_k}(Z,W,S,y; X^{k}) \mid\, (Z,W,S,y) \in \cZ \right\}, \\[5pt] &X^{k+1} = X^k + \sigma_k(Z^{k+1} - \cQ W^{k+1} + S^{k+1} + \mathcal{A}^*y^{k+1} - C), \end{aligned} \end{equation*} where $\sigma_k\in(0,+\infty)$. The first issue in the above ALM is the choice of the subspace $\cW$. 
The {obvious choice $\cW = \Sn$ can lead to various difficulties} in the implementation of the above algorithm. For example, since $\cQ:\Sn \to \Sn$ is only assumed to be positive semidefinite, the Newton systems corresponding to the inner subproblems {may be} singular and the sequence $\{W^{k}\}$ generated by the ALM can be unbounded. {As a result,} it will be extremely difficult to analyze the convergence of the inner algorithms for solving the {ALM subproblems.} The second issue is that one needs to design {easy-to-check} stopping criteria for the inner subproblems, and to ensure the fast convergence of the ALM {under reasonable conditions imposed on the QSDP problems. Concerning the first issue, we propose to choose $\cW = \Range(\cQ)$, although such a choice also leads to obstacles which we will overcome in Section 4.} Indeed, by restricting $W\in\Range(\cQ)$, the difficulties in analyzing the convergence and the superlinear (quadratic) convergence of the Newton-CG algorithm are circumvented as the {possibilities of} singularity and unboundedness are removed. For the second issue, under the restriction {that} $\cW = \Range(\cQ)$, thanks to the recent advances in \cite{cui2016on}, we are able to design checkable stopping criteria for solving the inner subproblem inexactly while establishing the global convergence of the above ALM. {Moreover, we are able to establish} the R-(super)linear convergence rate of the KKT residual. At first glance, the restriction that $W\in\Range(\cQ)$ appears to introduce severe numerical difficulties when we {need to} solve a linear system under this restriction. 
Fortunately, by carefully examining our algorithm and {devising novel numerical techniques, we are able to overcome these difficulties as we shall see in Section \ref{sec:numerical-issues}.} Our preliminary evaluation of \QSDPNAL has demonstrated that our algorithm is capable of solving large scale general QSDP problems of the form ({\bf P}) to high accuracy very efficiently {and robustly}. For example, we are able to solve an elementwise weighted nearest correlation matrix estimation problem with matrix dimension $n=10,000$ in less than 11 hours to a relative accuracy of less than $10^{-6}$ in the KKT residual. {Such a numerical performance has not been attained in the past.} { As the readers may have already observed, even though our goal in developing algorithms for solving convex optimization problems such as ({\bf P}) and ({\bf D}) is to design those with desirable theoretical properties such as asymptotic superlinear convergence, it is our belief that it is equally if not even more important for the algorithms designed to be practically implementable and able to achieve realistic numerical efficiency. It is obvious that our proposed two-phase augmented Lagrangian based algorithm for solving ({\bf P}) and ({\bf D}) is designed based on such a belief. } The remaining parts of this paper are organized as follows. The next section is devoted to our main algorithm {\sc Qsdpnal}, which is a two-phase augmented Lagrangian {based} algorithm whose Phase I is used to generate a reasonably good initial point to warm-start the Phase {\rm II} algorithm so as to obtain accurate solutions efficiently. In Section \ref{sec:ABCD}, we propose to solve the inner minimization subproblems of the ALM method by semismooth Newton based algorithms and study their global and local superlinear (quadratic) convergence. In Section \ref{sec:numerical-issues}, we discuss {critical numerical} issues concerning the efficient implementation of {\sc Qsdpnal}. 
In Section \ref{sec:LS}, we discuss the special case of applying \QSDPNAL to solve least squares semidefinite programming problems. {The extension of \QSDPNAL for solving QSDP problems with unstructured inequality constraints is discussed in Section 5.2.} In Section \ref{sec:comp-example}, we conduct numerical experiments to evaluate the performance of \QSDPNAL in solving various QSDP problems and their extensions. We conclude our paper in the final section. Below we list some notation and definitions to be used in this paper. For a given closed proper convex function $\theta:\cX \to (-\infty,\infty]$, where $\cX$ is a finite-dimensional real inner product space, the Moreau-Yosida proximal mapping $\text{Prox}_\theta(x)$ for $\theta$ at a point $x$ is defined by \begin{equation*} \text{Prox}_\theta(x) := \argmin_{y\in \cX} \Big\{\theta(y) + \frac{1}{2}\|y-x\|^2\Big\}. \end{equation*} We will often make use of the following identity: \begin{eqnarray*} \text{Prox}_{t \theta}( x) + t \text{Prox}_{\theta^*/t}(x/t) = x, \end{eqnarray*} where $t > 0$ is a given parameter, and $\theta^*: \cX \to(-\infty,\infty]$ is the conjugate function of $\theta$. If $\theta$ is the indicator function of a given closed convex set $D\subseteq \cX$, then the Moreau-Yosida proximal mapping is in fact the metric projector over $D$, denoted by $\Pi_{D}(\cdot)$. For any $x\in\cX$, we define ${\rm dist}(x,D): = \inf_{d\in D}\norm{x - d}$. For any $X\in\Sn$, we use $\lambda_{\max}(X)$ and $\lambda_{\min}(X)$ to {denote} the largest and the smallest eigenvalues of $X$, respectively. {Similar notation is used when $X$ is replaced by the linear operator $\cQ$.} \section{A two-phase augmented Lagrangian method}\label{sec:2phase} In this section, we shall present our two-phase algorithm \QSDPNAL for solving the QSDP problems ({\bf D}) and ({\bf P}). For the convergence analysis of Algorithm {\sc Qsdpnal}, we {need to make the following standard assumption for ({\bf P}). 
Such an assumption is analogous to Slater's condition in the context of nonlinear programming in $\Re^m$.} \begin{assumption} \label{assump:slater} There exists {$\widehat X\in\Sn_{++} \cap {\rm ri}(\cK)$} such that \[\cA (\cT_{\cK}(\widehat X)) = \Re^m, \] where ${\rm ri}(\cK)$ denotes the relative interior of $\cK$ and $\cT_{\cK}(\widehat X)$ is the tangent cone of $\cK$ at the point $\widehat X$. \end{assumption} \subsection{Phase I: An SCB based inexact semi-proximal ADMM} In Phase I, we propose a new variant of the Schur complement based inexact semi-proximal ADMM (SCB-isPADMM) developed in \cite{chen2015efficient} to solve ({\bf D}). Recall the augmented Lagrangian function associated with problem ({\bf D}) defined in \eqref{eq-aug-d-intro}. The detailed steps of our Phase I algorithm for solving ({\bf D}) are given as follows. \bigskip \centerline{\fbox{\parbox{\textwidth}{ {\bf Algorithm} {\bf \sc{Qsdpnal}}-{Phase I}: {\bf An SCB based inexact semi-proximal ADMM for ({\bf D})}. \\[5pt] Select an initial point $(W^0,S^0,y^0,X^0)\in\Range(\cQ)\times\Sn_+\times\Re^{m}\times\Sn$ and $-Z^0\in \textup{dom}(\delta^*_\cK)$. Let $\{\varepsilon_k\}$ be a summable sequence of nonnegative numbers, and let $\sigma >0$, $\tau\in(0,\infty)$ be given parameters. For $k=0,1,\ldots$, perform the following steps in each iteration. \begin{description} \item[Step 1.] 
Compute \begin{align} \widehat W^{k} ={}& \argmin \{ \cL_{\sigma}(Z^k,W,S^k,y^k;X^k) - \inprod{\hat\delta_{\cQ}^k}{W} \mid {W\in\Range(\cQ)} \}, \label{W1} \\[5pt] Z^{k+1} ={}& \argmin \{ \cL_{\sigma}(Z,\widehat W^k,S^k, y^k;X^k) \mid Z\in \cS^n\},\nn \\[5pt] W^{k+1} ={}& \argmin \{ \cL_{\sigma}(Z^{k+1},W,S^k, y^k;X^k) - \inprod{\delta_{\cQ}^k}{W} \mid {W\in\Range(\cQ)} \}, \label{W2} \\[5pt] \hat y^{k} ={}& \argmin \{ \cL_{\sigma}(Z^{k+1},W^{k+1},S^k,y;X^k) - \inprod{\hat\delta_y^k}{y} \mid y\in \Re^m\},\nn \\[5pt] S^{k+1} = {}& \argmin \{ \cL_{\sigma}(Z^{k+1},W^{k+1},S,\hat y^k; X^k) \mid S \in \cS^n\},\nn \\[5pt] y^{k+1} ={}& \argmin \{ \cL_{\sigma}(Z^{k+1},W^{k+1},S^{k+1},y;X^k) - \inprod{\delta_y^k}{y} \mid y \in \Re^m\},\nn \end{align} where $\delta_y^k, \, \hat{\delta}_y^k \in \Re^{m}$, $\delta_{\cQ}^k,\,\hat\delta_{\cQ}^k \in\Range(\cQ)$ are error vectors such that \begin{equation*} \max \{ \norm{\delta_y^k}, \norm{\hat \delta_y^k}, \norm{\delta_{\cQ}^k}, \norm{\hat\delta_{\cQ}^k}\}\leq \varepsilon_k. \end{equation*} \item [Step 2.] Compute $X^{k+1} = X^k + \tau\sigma(Z^{k+1} - \cQ W^{k+1} +S^{k+1}+ \cA^*y^{k+1} -C).$ \end{description} }}} \bigskip \begin{remark} \label{rmk:error_terms} We shall explain here the roles of the error vectors $\delta_y^k, \, \hat{\delta}_y^k, \delta_{\cQ}^k$ and $\hat\delta_{\cQ}^k$. There is no need to choose these error vectors in advance. The presence of these error vectors simply indicates that the corresponding subproblems can be solved inexactly. 
For example, the updating rule of $y^{k+1}$ in the above algorithm can be interpreted as follows: {find $y^{k+1}$ inexactly} through \[y^{k+1}\approx \argmin \cL_{\sigma}(Z^{k+1},W^{k+1},S^{k+1},y;X^k)\] such that the residual satisfies \[\norm{\delta_y^k} = \norm{b - \cA X^k - \sigma \cA(Z^{k+1} - \cQ W^{k+1} + S^{k+1} + \cA^* y^{k+1} - C)} \le \varepsilon_k.\] \end{remark} {\begin{remark} \label{rmk:qsdpnal_1} In contrast to Algorithm SCB-isPADMM in \cite{chen2015efficient}, our Algorithm {\rm {\sc Qsdpnal}-Phase I} requires the subspace constraint $W\in\Range(\cQ)$ explicitly in the subproblems \eqref{W1} and \eqref{W2}. Note that due to the presence of the subspace constraint $W\in\Range(\cQ)$, there is no need to add extra proximal terms {in the subproblems} corresponding to $W$ {to satisfy the positive definiteness requirement needed in applying} the inexact Schur complement based decomposition technique developed in \cite{LiSunToh_scb2014,LiThesis2014}. This is {certainly more elegant than} the indirect reformulation strategy considered in \cite{LiSunToh_scb2014,chen2015efficient}. \end{remark}} The convergence of the above algorithm follows from \cite[Theorem 1]{chen2015efficient} without much difficulty, {and its proof is omitted.} \begin{theorem} \label{thm:sGS-qsdp} Suppose that the solution set of {\rm({\bf P})} is nonempty and Assumption \ref{assump:slater} holds. Let $\{(Z^k,W^k,S^k,y^k,X^k)\}$ be the sequence generated by Algorithm {\rm {\sc Qsdpnal}-Phase I}. If $\tau\in(0,(1+\sqrt{5}\,)/2)$, then the sequence $\{(Z^k,W^k,S^k,y^k)\}$ converges to an optimal solution of {\rm ({\bf D})} and $\{X^k\}$ converges to an optimal solution of {\rm ({\bf P})}. \end{theorem} \begin{remark} \label{rmk:convergence_rate_ADMM} Under some error bound conditions on the limit point of $\{(Z^k,W^k,S^k,y^k,X^k)\}$, one can derive the linear rate of convergence of the exact version of Algorithm {\rm {\sc Qsdpnal}-Phase I}. 
For a recent study on this {topic}, see \cite{HanSZ2015} and the references therein. Here we will not address this issue as our Phase II {algorithm} enjoys a better rate of convergence under weaker conditions. \end{remark} \subsection{Phase II: An augmented Lagrangian algorithm} In this section, we discuss our Phase II algorithm for solving the {dual problem} ({\bf D}). The purpose of this phase is to obtain high accuracy solutions efficiently after being warm-started by our Phase I algorithm. Our Phase II algorithm has the following template. \bigskip \centerline{\fbox{\parbox{\textwidth}{ {\bf Algorithm} {\sc Qsdpnal}-Phase II: {\bf An augmented Lagrangian method of multipliers for solving ({\bf D}).} \\[5pt] Let $\sigma_0 >0$ be a given parameter. Choose $(W^0,S^0,y^0,X^0)\in\Range(\cQ)\times\Sn_+\times\Re^{m}\times\Sn$ and $-Z^0\in\textup{dom}(\delta^*_\cK)$. For $k=0,1,\ldots$, perform the following steps in each iteration. \begin{description} \item [Step 1.] Compute \begin{equation} \label{p2:palm-sub} \begin{aligned} &(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) \approx \argmin \left\{ \begin{aligned} &\Psi_k(Z,W,S,y):=\mathcal{L}_{\sigma_k}(Z,W,S,y; X^{k}) \\[5pt] & \mid\, (Z,W,S,y)\in\Sn\times\Range(\cQ)\times\Sn\times\Re^m \end{aligned} \right\}. \\[5pt] \end{aligned} \end{equation} \item[Step 2.] Compute \begin{equation*}\label{eq:qsdpnal2_X} X^{k+1} = X^k + \sigma_k(Z^{k+1} - \cQ W^{k+1} + S^{k+1} + \mathcal{A}^*y^{k+1} - C). \end{equation*} Update $\sigma_{k+1} \uparrow \sigma_\infty\leq \infty$. \end{description} }}} \bigskip As an important issue in the implementation of the above algorithm, the stopping criteria for approximately solving subproblem \eqref{p2:palm-sub} shall be discussed here. {Let the feasible set for ({\bf P}) be denoted as $\cF:=\{X\in\Sn \mid \cA X = b, \, X\in\Sn_+\cap\cK \}$. 
Define the feasibility residual function $\gamma:\Sn\to \Re$ for the primal problem ({\bf P}) by} \begin{equation*} \label{eq:residual_F} \gamma(X) : = \norm{b - \cA X} + \norm{X - \Pi_{\Sn_+}(X)} + \norm{X - \Pi_{\cK}(X)} ,\quad \forall\, X\in\Sn. \end{equation*} Note that $\gamma(X) = 0$ if and only if $X\in\cF$. Indeed, for $X\not\in \cF$, $\gamma(X)$ provides an {easy-to-compute measure on the primal infeasibility of $X$. Similar to} \cite[Proposition 4.2]{cui2016on}, we can use this feasibility measure function to derive an upper bound on the distance of a given point to the feasible set $\cF$ in the next lemma. Its proof can be obtained without much difficulty by applying Hoffman's error bound \cite[Lemma 3.2.3]{facchinei2003finite} to the nonempty polyhedral convex set $\{X\in\Sn\mid \cA X = b,\, X\in\cK\}$, e.g., see \cite[Theorem 7]{Bauschke1999Strong}. \begin{lemma}\label{lemma:distXtoPFX} Assume that $\cF\cap\Sn_{++}\neq \emptyset$. Then, there exists a constant $\mu>0 $ such that \begin{equation}\label{eq:lemmaXtoPFX} \norm{X - \Pi_{\cF}(X)} \le \mu (1+\norm{X}) \gamma(X), \quad \forall\, X\in\Sn . \end{equation} \end{lemma} When the ALM is applied to solve ({\bf D}), numerically it is difficult to execute criteria $(A'')$ and $({\rm B}_1'')$ proposed in \cite{rockafellar1976augmented}. Fortunately, Lemma \ref{lemma:distXtoPFX} and recent advances in the analysis of the ALM \cite{cui2016on} allow us to design easy-to-verify stopping criteria for the subproblems in {\sc Qsdpnal}-Phase II. For any $k\ge 0$, denote \[f_k(X): = - \frac{1}{2}\inprod{X}{\cQ X} - \inprod{C}{X} - \frac{1}{2\sigma_k}\norm{X - X^k}^2 , \quad \forall\, X\in\Sn.\] {Note that $f_k(\cdot)$ is in fact the objective function in the dual of problem \eqref{p2:palm-sub}.} Let $\{\varepsilon_k\}$ and $\{\delta_k\}$ be two given positive summable sequences. 
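The two projections entering $\gamma(\cdot)$ are inexpensive to evaluate; for instance, $\Pi_{\Sn_+}$ is obtained by clipping the negative eigenvalues in an eigendecomposition, consistent with the Moreau identity recalled earlier. A minimal numerical sketch (the function name is ours; illustrative only, not part of the solver):

```python
import numpy as np

def proj_psd(X):
    """Metric projection of a symmetric matrix onto the PSD cone S^n_+.

    Standard construction: keep the eigenbasis and clip negative
    eigenvalues to zero.  Shown only to illustrate how the residual
    gamma(X) can be evaluated; illustrative, not the paper's code.
    """
    w, U = np.linalg.eigh((X + X.T) / 2)   # symmetrize defensively
    return (U * np.maximum(w, 0.0)) @ U.T  # U diag(max(w,0)) U^T
```

As a consistency check, the Moreau decomposition gives $X = \Pi_{\Sn_+}(X) - \Pi_{\Sn_+}(-X)$ for every symmetric $X$.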
Given $k\ge 0$ and $X^k\in\Sn$, we propose to {terminate} the minimization of the subproblem \eqref{p2:palm-sub} in the $(k+1)$th iteration of Algorithm {\sc Qsdpnal}-Phase II with {either one of the following two easy-to-check} stopping criteria: \begin{equation*} \label{ALM_stop} \begin{aligned} &(\textup{A}) \quad \left\{ \begin{aligned} &\Psi_k(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) - f_k(X^{k+1}) \le \varepsilon_k^2/2\sigma_k, \\[5pt] & (1+\norm{X^{k+1}})\gamma(X^{k+1}) \le \alpha_k \varepsilon_k/\sqrt{2\sigma_k}, \end{aligned} \right. \\[8pt] &({\rm B})\quad \left\{ \begin{aligned} & \Psi_k(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) - f_k(X^{k+1}) \le \delta_k^2 \norm{X^{k+1} - X^k}^2/2\sigma_k, \\[5pt] & (1+\norm{X^{k+1}})\gamma(X^{k+1}) \le \beta_k\delta_k\norm{X^{k+1} - X^k} /\sqrt{2\sigma_k}, \end{aligned} \right. \end{aligned} \end{equation*} where \[\alpha_k = \min\left\{1,\sqrt{\sigma_k},\frac{\varepsilon_k}{\sqrt{2\sigma_k}\norm{\nabla f_k(X^{k+1})}}\right\}\quad {\rm and} \quad \beta_k = \min\left\{1,\sqrt{\sigma_k},\frac{\delta_k\norm{X^{k+1} - X^k}}{\sqrt{2\sigma_k}\norm{\nabla f_k(X^{k+1})}}\right\}. \] \begin{lemma} \label{lemma:stopp_cond} Assume that Assumption \ref{assump:slater} holds. Let $\mu$ be the constant given in \eqref{eq:lemmaXtoPFX}. Suppose that for some $k\ge 0$, $X^k$ is not an optimal solution to problem ({\bf P}). Then one can always find $(Z^{k+1},W^{k+1},S^{k+1},y^{k+1})$ and $X^{k+1} = X^k + \sigma_k(Z^{k+1} - \cQ W^{k+1} + S^{k+1} + \cA^*y^{k+1} - C)$ satisfying both (A) and (B). Moreover, (A) implies {that} \[ \Psi_k(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) - \inf \Psi_k \le \nu \varepsilon_k^2/2\sigma_k \] {}and (B) implies that \[ \Psi_k(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) - \inf \Psi_k\leq (\nu \delta_k^2/2\sigma_k)\norm{X^{k+1} - X^k}^2,\] {}respectively, where \begin{equation} \label{eq:nu} \nu = 1 + \mu + \frac{1}{2}\lambda_{\max}(\cQ) + \frac{1}{2}\mu^2. 
\end{equation} \end{lemma} \begin{proof} With the help of Lemma \ref{lemma:distXtoPFX}, one can establish the assertion in the same fashion as in \cite[Proposition 4.2, Proposition 4.3]{cui2016on}. \end{proof} {For the subsequent analysis, we need to define the essential objective function of ({\bf P}), which is given by} \begin{equation*} \label{fun:essential_objP} \begin{aligned} \phi(X) := {}& - \inf \, \{\,l(Z,W,S,y;X)\mid (Z,W,S,y)\in\Sn\times\Range(\cQ)\times\Sn\times\Re^m\} \\[5pt] = {}&\left\{ \begin{aligned} & \frac{1}{2}\inprod{X}{\cQ X} + \inprod{X}{C}+ \delta_{\Sn_+}(X) + \delta_{\cK}(X) \quad \mbox{if}\ \cA X = b , \\[5pt] & +\infty \quad \mbox{otherwise}. \end{aligned} \right. \end{aligned} \end{equation*} {For convenience, we also let $\Omega = \partial\phi^{-1}(0)$ denote the solution set of ({\bf P}). } We say that for ({\bf P}), the second order growth condition holds at an optimal solution {$\overline X\in \Omega$ with respect to the set $\Omega$} if there exist $\kappa>0 $ and a neighborhood $U$ of $\overline X$ such that \begin{equation}\label{eq:second_order_growth} \phi(X) \ge \phi(\overline X) + {\kappa^{-1} {\rm dist}^2(X,\Omega)}, \quad \forall\, X\in U. \end{equation} Let the objective function $g:\Sn \times\Range(\cQ)\times\Sn \times \Re^m\to (-\infty,+\infty]$ associated with ({\bf D}) be given as follows: \[g(Z,W,S,y) := \delta_{\cK}^*(-Z) + \frac{1}{2}\inprod{W}{\cQ W} + \delta_{\Sn_+}(S) - \inprod{b}{y}, \quad \forall \, (Z,W,S,y)\in \Sn \times\Range(\cQ)\times\Sn \times \Re^m.\] Now, with Lemma \ref{lemma:stopp_cond}, we can prove the global and local (super)linear convergence of Algorithm {\sc Qsdpnal}-Phase II {by adapting the proofs in} \cite[Theorem 4]{rockafellar1976augmented} and \cite[Theorem 4.2]{cui2016on}. This shows that, for most QSDP problems, one can always expect the KKT residual of the sequence generated by {\sc Qsdpnal}-Phase II {to converge} at least R-(super)linearly. 
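Before stating the theorem, it may help to see the outer loop in miniature. The following sketch runs the same update pattern (inner minimization of the augmented Lagrangian, multiplier step, increasing $\sigma_k$) on a generic equality-constrained convex QP whose inner subproblem reduces to a single linear solve; it is a toy stand-in for intuition only, not the \QSDPNAL implementation, whose inner subproblems are solved inexactly by semismooth Newton methods:

```python
import numpy as np

def alm_qp(Q, c, A, b, sigma=1.0, iters=30, rho=1.5):
    """Toy ALM for min 1/2 x'Qx + c'x  s.t.  Ax = b  (Q positive definite).

    Mirrors the outer loop of QSDPNAL-Phase II: minimize the augmented
    Lagrangian in x (here exactly, via a linear solve), then update the
    multiplier lam <- lam + sigma*(Ax - b) and increase sigma.  Generic
    toy data only; illustrative, not the paper's algorithm.
    """
    m, n = A.shape
    lam = np.zeros(m)
    x = np.zeros(n)
    for _ in range(iters):
        # inner subproblem: grad_x L_sigma(x; lam) = 0 is a linear system
        x = np.linalg.solve(Q + sigma * (A.T @ A),
                            -(c + A.T @ lam) + sigma * (A.T @ b))
        lam = lam + sigma * (A @ x - b)   # multiplier (dual) update
        sigma = min(rho * sigma, 1e6)     # sigma_k increasing, kept bounded
    return x, lam
```

With exact inner solves, the stationarity residual $Qx + c + A^{T}\lambda$ vanishes by construction after each multiplier update, and the feasibility residual $\|Ax-b\|$ shrinks as $\sigma_k$ grows, which is the behavior the theorem below quantifies for the QSDP setting.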
\begin{theorem} \label{thm:qsdpnal-2-global} Suppose that {$\Omega$, the solution set of {\rm({\bf P})}, is nonempty} and Assumption \ref{assump:slater} holds. Then the sequence $\{(Z^k,W^k,S^k,y^k,X^k)\}$ generated by Algorithm {\rm {\sc Qsdpnal}-Phase II} under the stopping criterion $(\textup{A})$ for all $k\ge0$ is bounded, and $\{X^k\}$ converges to $X^{\infty}$, an optimal solution {of} {\rm ({\bf P})}, and $\{(Z^k,W^k,S^k,y^k)\}$ converges to an optimal solution {of} {\rm ({\bf D})}. Moreover, for all $k\ge 0$, it holds that \begin{equation*} \begin{aligned} &g(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) - \inf \,({\bf D})\\[5pt] \le{} &\Psi_k(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) - \inf \Psi_k + (1/2\sigma_k)(\norm{X^k}^2 - \norm{X^{k+1}}^2). \end{aligned} \end{equation*} Assume that for {\rm ({\bf P})}, the second order growth condition \eqref{eq:second_order_growth} holds at $X^{\infty}$ with respect to the set {$\Omega$}, i.e., there exists a constant $\kappa > 0$ and a neighborhood $U$ of $X^{\infty}$ such that \[\phi(X) \ge \phi(X^{\infty}) + \kappa^{-1} {\rm dist}^2(X, {\Omega}), \quad \forall\, X\in U.\] {Suppose that the algorithm is} executed under {criteria {\rm (A)} and {\rm (B)}} for all $k\ge 0$ and $\nu$ is the constant given in \eqref{eq:nu}. 
Then, for all $k$ sufficiently large, it holds that \begin{eqnarray} {\rm dist}(X^{k+1}, {\Omega}) \le \theta_k {\rm dist}(X^{k}, {\Omega}), \label{eq:asyP_Qsupl} \\[5pt] \norm{Z^{k+1} - \cQ W^{k+1} + S^{k+1} + \cA^*y^{k+1} - C} \le \tau_k{\rm dist}(X^k, {\Omega}), \label{eq:asyD_Rsupl}\\[5pt] g(Z^{k+1},W^{k+1},S^{k+1},y^{k+1}) - \inf \,({\bf D}) \le \tau_k'{\rm dist}(X^k, {\Omega}), \label{eq:asygap_Rsupl} \end{eqnarray} where \begin{equation*} \begin{aligned} & 1>\theta_k = \big(\kappa/\sqrt{\kappa^2 + \sigma_k^2 }+ 2\nu\delta_k\big)(1 - \nu\delta_k)^{-1} \to \theta_{\infty} = \kappa/\sqrt{\kappa^2 + \sigma_{\infty}^2} \quad (\theta_{\infty} = 0 \; {\rm if}\; \sigma_{\infty} = \infty),\\[5pt] & \tau_k = \sigma_k^{-1}(1-\nu\delta_k)^{-1} \to \tau_{\infty} = 1/\sigma_{\infty}\quad (\tau_{\infty} = 0 \; {\rm if}\; \sigma_{\infty} = \infty), \\[5pt] & \tau_k' = \tau_k(\nu^2\delta_k^2\norm{X^{k+1} - X^k} + \norm{X^{k+1}} + \norm{X^k})/2 \to \tau'_{\infty} = \norm{X^{\infty}}/\sigma_{\infty}\quad (\tau'_{\infty} = 0 \; {\rm if}\; \sigma_{\infty} = \infty). \end{aligned} \end{equation*} \end{theorem} \bigskip {Next we give a few} comments on the convergence rates and assumptions made in Theorem \ref{thm:qsdpnal-2-global}. \begin{remark} \label{rmk: convergence_rate_KKT_res} Under the assumptions of Theorem \ref{thm:qsdpnal-2-global}, we have proven that the KKT residual, corresponding to ({\bf P}) and ({\bf D}), along the sequence $\{(Z^{k},W^k,S^k,y^k,X^k)\}$ converges at least R-(super)linearly. Indeed, under stopping criteria {(A)} and {(B)}, it follows from \eqref{eq:asyP_Qsupl}, \eqref{eq:asyD_Rsupl} and \eqref{eq:asygap_Rsupl} that the primal feasibility, the dual feasibility and the duality gap all converge at least R-(super)linearly. \end{remark} \begin{remark} \label{rmk:mild_second_order_growth} The assumption that the second order growth condition \eqref{eq:second_order_growth} holds for ({\bf P}) is quite mild.
Indeed, it holds when any optimal solution $\overline X$ of ({\bf P}), together with any of its multipliers $\overline S \in \Sn_+$ corresponding only to the semidefinite constraint, satisfies the strict complementarity condition \cite[Corollary 3.1]{cui2016on}. It is also valid when the ``no-gap'' second order sufficient condition holds at the optimal solution\footnote{In this case, the optimal solution set to ({\bf P}) is necessarily a singleton though ({\bf D}) may have multiple solutions.} to ({\bf P}) \cite[Theorem 3.137]{Bonnans2000perturbation}. \end{remark} \section{Inexact semismooth Newton based algorithms for solving the inner subproblems \eqref{p2:palm-sub} in ALM}\label{sec:ABCD} In this section, we will design efficient inexact semismooth Newton based algorithms to solve the inner subproblems \eqref{p2:palm-sub} in the augmented Lagrangian method, where each subproblem takes the form: \begin{equation}\label{prob-abcd} \min\left\{ \Psi(Z,W,S,y):= \cL_{\sigma}(Z,W,S,y;\widehat X) \mid\, (Z,W,S,y)\in\Sn\times\Range(\cQ)\times\Sn\times\Re^{m} \right\} \end{equation} for a given $ \widehat X \in \Sn$. Note that the dual problem of \eqref{prob-abcd} is given as follows: \begin{equation*}\label{prob:abdc_D} \max \left\{ - \frac{1}{2}\inprod{X}{\cQ X} - \inprod{C}{X} - \frac{1}{2\sigma}\norm{X - \widehat X}^2 \mid \cA X= b, \; X\in\cS_+^n,\; X \in \cK \right\}. \end{equation*} Under Assumption \ref{assump:slater}, from \cite[Theorems 17 \& 18]{rockafellar1974conjugate}, we know that the optimal solution set of problem \eqref{prob-abcd} is nonempty and for any $\alpha \in \Re$, the level set $\cL_{\alpha}:=\{(Z,W,S,y) \in \Sn\times\Range(\cQ)\times\Sn\times\Re^{m} \,\mid\, \Psi(Z,W,S,y)\le \alpha\}$ is a closed and bounded convex set. \subsection{A semismooth Newton-CG algorithm for \eqref{prob-abcd} with $\cK = \Sn$} \label{subsec-sncg} Note that {in quite a number of applications, the polyhedral convex set $\cK$ is actually the whole space $\Sn$}.
Therefore, we shall first study how the inner problems \eqref{prob-abcd} in Algorithm ALM can be solved efficiently when $\cK = \Sn$. Under this setting, $Z$ is vacuous, i.e., {$Z=0$.} Let $\sigma >0$ be given. Denote \[S(W,y) := \cA^*y - \cQ W - \widehat C, \quad \forall \, (W,y)\in \Range(\cQ) \times \Re^m,\] where $\widehat C = C - \sigma^{-1}\widehat X$. Observe that if \begin{equation*} \label{sncg-prob} (W^*,S^*,y^*) = \argmin\{\Psi(0,W,S,y) \,\mid\, (W,S,y)\in\Range(\cQ)\times\Sn\times\Re^m\}, \end{equation*} then $(W^*,S^*,y^*)$ can be computed in the following manner \begin{eqnarray} &&(W^*,y^*) = \argmin \left\{ \begin{aligned} \varphi(W,y) \,|\, (W,y)\in\Range(\cQ)\times\Re^m \end{aligned} \right\}, \label{eq-wy}\\[5pt] &&S^* = \Pi_{\Sn_+}(-S(W^*,y^*)), \nonumber \end{eqnarray} where \[\varphi(W,y):= \frac{1}{2} \inprod{W}{\cQ W} - \inprod{b}{y} + \frac{\sigma}{2}\norm{\Pi_{\Sn_+}(S(W,y))}^2, \quad \forall\, (W,y)\in\Range(\cQ)\times\Re^m.\] Note that $\varphi(\cdot,\cdot)$ is a continuously differentiable function on $\Range(\cQ)\times \Re^m$ with \begin{equation*} \nabla\varphi(W,y) = \left(\begin{array}{l} \cQ W - \sigma\cQ \Pi_{\Sn_+}(S(W,y)) \\[5pt] -b + \sigma \cA \Pi_{\Sn_+}(S(W,y)) \end{array} \right). \end{equation*} Then, solving \eqref{eq-wy} is equivalent to solving the following nonsmooth equation: \begin{equation*}\label{eq-wy-nonsmooth} \nabla\varphi(W,y) = 0, \quad (W,y)\in\Range(\cQ)\times\Re^m. \end{equation*} Since $\Pi_{\Sn_+}$ is strongly semismooth \cite{SunS2002}, we can design a semismooth Newton-CG (SNCG) method to solve \eqref{eq-wy} and can expect fast superlinear or even quadratic convergence.
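The gradient formula above is easy to validate numerically. The sketch below (an illustrative toy instance, not the paper's {\sc Matlab} implementation) takes $\cQ W = qW$ with $q>0$, so that $\Range(\cQ)=\Sn$, and $\cA X = {\rm diag}(X)$; it computes $\Pi_{\Sn_+}$ by an eigenvalue decomposition and compares $\nabla\varphi$ against a central finite difference:

```python
import numpy as np

def proj_psd(M):
    """Projection onto S^n_+ via eigendecomposition: clip negative eigenvalues."""
    lam, P = np.linalg.eigh(M)
    return (P * np.maximum(lam, 0.0)) @ P.T

# Toy instance (illustrative choices): Q(W) = q*W, A(X) = diag(X), A^*(y) = Diag(y)
n, q, sigma = 4, 0.7, 2.0
rng = np.random.default_rng(0)
Chat = rng.standard_normal((n, n)); Chat = (Chat + Chat.T) / 2
b = rng.standard_normal(n)

def S_of(W, y):                         # S(W, y) = A^* y - Q W - C_hat
    return np.diag(y) - q * W - Chat

def phi(W, y):
    return (0.5 * q * np.sum(W * W) - b @ y
            + 0.5 * sigma * np.sum(proj_psd(S_of(W, y)) ** 2))

def grad_phi(W, y):                     # the gradient formula from the text
    Pi = proj_psd(S_of(W, y))
    return q * W - sigma * q * Pi, -b + sigma * np.diag(Pi)

# Central finite-difference check of the W-gradient in a random symmetric direction
W = rng.standard_normal((n, n)); W = (W + W.T) / 2
y = rng.standard_normal(n)
D = rng.standard_normal((n, n)); D = (D + D.T) / 2
t = 1e-6
fd = (phi(W + t * D, y) - phi(W - t * D, y)) / (2 * t)
gW, gy = grad_phi(W, y)
print(abs(fd - np.sum(gW * D)))         # finite-difference error; should be tiny
```

The smoothness of $\varphi$ despite the nonsmoothness of $\Pi_{\Sn_+}$ itself is exactly the Moreau-type phenomenon exploited here: $M\mapsto\frac{1}{2}\norm{\Pi_{\Sn_+}(M)}^2$ is continuously differentiable with gradient $\Pi_{\Sn_+}(M)$.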
For any $(W,y)\in\Range(\cQ)\times\Re^m$, define \[\hat\partial^2 \varphi(W,y) := \left[ \begin{array}{cc} \cQ & \\ & 0 \end{array} \right]+ \sigma \left[ \begin{array}{c} \cQ \\ -\cA \end{array} \right] \partial\Pi_{\Sn_+}(S(W,y)) [\cQ \; -\cA^*], \] where $\partial\Pi_{\Sn_+}(S(W,y))$ is the Clarke subdifferential \cite{Clarke83} of $\Pi_{\Sn_+}(\cdot)$ at $S(W,y)$. Note that from \cite{hiriart1984generalized}, we know that \begin{equation*} \hat{\partial}^2 \varphi(W,y)\, (d_W,d_y) = {\partial}^2 \varphi(W,y)\, (d_W,d_y), \quad \forall \, (d_W,d_y) \in \Range(\cQ)\times\Re^m, \label{eq-Clarke} \end{equation*} where ${\partial}^2 \varphi(W,y)$ denotes the generalized Hessian of $\varphi$ at $(W,y)$, i.e., the Clarke subdifferential of $\nabla \varphi$ at $(W,y)$. Given $(\widetilde W, \tilde y)\in\Range(\cQ)\times\Re^m$, consider the following eigenvalue decomposition: \begin{eqnarray*}\label{decomp-M} S(\widetilde W, \tilde y) = \cA^*\tilde y - \cQ \widetilde W - \widehat C = P \, \Gamma \,P^{\T}, \end{eqnarray*} where $P\in\Re^{n\times n}$ is an orthogonal matrix whose columns are eigenvectors, and $\Gamma$ is the corresponding diagonal matrix of eigenvalues, arranged in nonincreasing order: $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$. Define the following index sets \begin{eqnarray*} \alpha := \{i \mid \lambda_i>0\}, \quad \bar{\alpha} :=\{i \mid \lambda_i\leq 0\}.
\end{eqnarray*} \noindent We define the operator $U^0 : \mathcal{S}^n \rightarrow \mathcal{S}^n$ by \begin{eqnarray} \label{def-W0} U^0 (H) := P(\Sigma \circ (P^{\T}H P))P^{\T}, \quad H \in \mathcal{S}^n, \end{eqnarray} where $``\circ"$ denotes the Hadamard product of two matrices, \begin{eqnarray*}\label{def-nu} \quad \Sigma = \left[ \begin{array}{cc} E_{\alpha \alpha} & \nu_{\alpha \bar{\alpha}}\\[4pt] \nu^{\T}_{\alpha \bar{\alpha}} & 0 \end{array} \right], \quad \nu_{ij} := \frac{\lambda_i}{\lambda_i-\lambda_j}, \,\, i \in \alpha, j \in \bar{\alpha}, \end{eqnarray*} and $E_{\alpha \alpha} \in \mathcal{S}^{|\alpha|}$ is the matrix of ones. In \cite[Lemma 11]{pang2003semismooth}, it is proved that $$ U^0 \in\partial \Pi_{\Sn_+}(S(\widetilde W, \tilde y)). $$ Define \begin{equation}\label{p2:eq-netwon-partial} V^0 := \left[ \begin{array}{cc} \cQ & \\ & 0 \end{array} \right]+ \sigma \left[ \begin{array}{c} \cQ \\ -\cA \end{array} \right] U^0 [\cQ \, -\cA^*]. \end{equation} Then, we have $V^0\in\hat\partial^2\varphi(\widetilde W,\tilde y)$. After all the above preparations, we can design the following semismooth Newton-CG method as in \cite{SDPNAL} to solve \eqref{eq-wy}. \bigskip \noindent \centerline{\fbox{\parbox{\textwidth}{ {\bf Algorithm SNCG}: {\bf A semismooth Newton-CG algorithm.} \\[5pt] Given $\mu \in (0, 1/2)$, $\bar{\eta} \in (0, 1)$, $\tau \in (0,1]$, $\tau_1,\tau_2\in(0,1)$ and $\delta \in (0, 1)$. Choose $(W^0,y^0)\in\Range(\cQ)\times\Re^m$. Set $j=0$. Iterate the following steps. \begin{description} \item[Step 1.] Choose $U_j\in \partial\Pi_{\Sn_+}(S(W^j,y^j))$ defined as in \eqref{def-W0}. Let $V_j$ be given in \eqref{p2:eq-netwon-partial} with $U^0$ replaced by $U_j$ and $\epsilon_j = \tau_1\min\{\tau_2,\norm{\nabla \varphi(W^j,y^j)}\}$.
Apply the CG algorithm to find an approximate solution $(d_W^j,d_y^j)\in\Range(\cQ)\times\Re^m$ to \begin{eqnarray}\label{eqn-epsk} V_j (d_W,d_y) + \epsilon_j (0,d_y) = -\nabla \varphi(W^j,y^j) \end{eqnarray} such that \begin{equation*} \norm{V_j(d_W^j,d_y^j) + \epsilon_j (0,d_y^j)+\nabla \varphi(W^j,y^j)}\le \eta_j := \min(\bar{\eta}, \| \nabla \varphi(W^j, y^j)\|^{1+\tau}). \label{eq-eta} \end{equation*} \item[Step 2.] Set $\alpha_j = \delta^{m_j}$, where $m_j$ is the first nonnegative integer $m$ for which \begin{equation*}\label{Armijo} \varphi(W^j + \delta^{m} d_W^j,y^j+\delta^m d_y^j) \leq \varphi(W^j,y^j) + \mu \delta^{m} \langle \nabla \varphi(W^j,y^j), (d^j_W,d^j_y) \rangle. \end{equation*} \item[Step 3.] Set $W^{j+1} = W^j + \alpha_j \, d_W^j$ and $y^{j+1} = y^j + \alpha_j \, d_y^j$. \end{description} }}} \bigskip \vskip 10 true pt The convergence results for the above SNCG algorithm are stated in the next theorem. \begin{theorem} Suppose that Assumption \ref{assump:slater} holds. Then Algorithm SNCG generates a bounded sequence $\{(W^j,y^j)\}$ and any accumulation point $(\overline W, \bar y) \in \Range(\cQ)\times\Re^m$ is an optimal solution to problem \eqref{eq-wy}. \end{theorem} The following proposition is the key ingredient in our subsequent convergence analysis. \begin{prop}\label{prop:psd-RangeQ} Let $\cU:\Sn\to\Sn$ be a self-adjoint positive semidefinite linear operator and $\sigma>0$. Then, it holds that $\cA \cU \cA^*$ is positive definite if and only if \begin{equation}\label{psd-Q} \Inprod{\left[\begin{array}{c} W\\ y \end{array}\right]}{\left(\left[ \begin{array}{cc} \cQ & \\ & 0 \end{array} \right]+\sigma \left[ \begin{array}{c} \cQ \\ -\cA \end{array} \right] \cU [\cQ \, -\cA^*]\right)\left[\begin{array}{c} W\\ y \end{array}\right]} > 0 \end{equation} for all $(W,y)\in\Range(\cQ)\times\Re^m\backslash \{(0,0)\}.$ \end{prop} \begin{proof} Since the ``if" statement obviously holds true, we only need to prove the ``only if" statement. 
Note that $$ \inprod{W}{\cQ W} > 0,\quad \forall\, W\in\Range(\cQ){\backslash\{0\}}. $$ Now suppose that $\cA \cU \cA^*$ is positive definite, and hence nonsingular. By the Schur complement condition for ensuring the positive definiteness of a linear operator, we know that \eqref{psd-Q} holds if and only if \begin{equation}\label{psd-QAschur} \inprod{W}{(\cQ + \sigma \cQ \cU\cQ - \sigma \cQ \cU\cA^*(\cA \cU\cA^*)^{-1}\cA \cU\cQ)W} > 0,\quad\forall\, W\in\Range(\cQ){\backslash\{0\}}. \end{equation} But for any $W\in\Range(\cQ) \backslash\{ 0\}$, { we have that $\inprod{W}{\cQ W} > 0$, and } \begin{eqnarray*} && \hspace{-0.7cm} \inprod{W}{(\cQ \cU\cQ - \cQ \cU \cA^*(\cA \cU\cA^*)^{-1}\cA \cU \cQ)W}\; =\; \inprod{W}{\cQ \cU^{\frac{1}{2}}(\cI -\cU^{\frac{1}{2}}\cA^*(\cA \cU\cA^*)^{-1}\cA \cU^{\frac{1}{2}})\cU^{\frac{1}{2}}\cQ W}\\[5pt] & = & \inprod{ \cU^{\frac{1}{2}} \cQ W}{(\cI -\cU^{\frac{1}{2}}\cA^*(\cA \cU\cA^*)^{-1}\cA \cU^{\frac{1}{2}})\cU^{\frac{1}{2}}\cQ W} \;\geq \; 0. \end{eqnarray*} Hence, \eqref{psd-QAschur} holds automatically. This completes the proof of the proposition. \end{proof} Based on the above proposition, under the constraint nondegeneracy condition for ({\bf P}), we shall show in the next theorem that one can still ensure the positive definiteness of the coefficient matrix in the semismooth Newton system at the solution point. \begin{theorem} \label{SNCG-no-prox} Let $(\overline W, \by)$ be the optimal solution for problem \eqref{eq-wy}. Let $\overline Y := \Pi_{\Sn_+}(\cA^*\bar y - \cQ \overline W - \widehat C)$. The following conditions are equivalent: \begin{enumerate} \item[{\rm (i)}] The constraint nondegeneracy condition, \begin{equation}\label{eq:cons_nondegen} \cA\,{\rm lin}(\cT_{\Sn_+}(\overline{Y})) = \Re^m, \end{equation} holds at $\overline Y$, where ${\rm lin}(\cT_{\Sn_+}(\overline{Y}))$ denotes the lineality space of the tangent cone of $\Sn_+$ at $\overline{Y}$.
\item[{\rm (ii)}] Every element in \begin{equation*}\label{psd-nondegen} \left[ \begin{array}{cc} \cQ & \\ & 0 \end{array} \right]+\sigma \left[ \begin{array}{c} \cQ \\ -\cA \end{array} \right] \partial\Pi_{\Sn_+}(\cA^*\bar y - \cQ \overline W - \widehat C) [\cQ \, -\cA^*] \end{equation*} is self-adjoint and positive definite on $\Range(\cQ)\times\Re^m.$ \end{enumerate} \end{theorem} \begin{proof} In the same fashion as in \cite[Proposition 3.2]{SDPNAL}, we can prove that $ \cA \cU \cA^* $ is positive definite for all $\cU \in \partial\Pi_{\Sn_+}(\cA^*\bar y - \cQ \overline W - \widehat C)$ if and only if (i) holds. Then, by Proposition \ref{prop:psd-RangeQ}, we readily obtain the desired results. \end{proof} \begin{theorem}\label{convergence-zwy-newton} Assume that Assumption \ref{assump:slater} holds. Let $(\overline W, \bar y)$ be an accumulation point of the infinite sequence $\{(W^j,y^j)\}$ generated by Algorithm SNCG for solving problem \eqref{eq-wy}. Assume that the constraint nondegeneracy condition \eqref{eq:cons_nondegen} holds at $\overline Y: = \Pi_{\Sn_+}(\cA^*\bar y - \cQ \overline W - \widehat C)$. Then, the whole sequence $\{(W^j,y^j)\}$ converges to $(\overline W, \bar y)$ and \begin{equation*} \|(W^{j+1},y^{j+1}) - (\overline W,\bar y) \| = O(\norm{(W^j,y^j) - (\overline W,\bar y)}^{1+\tau}). \end{equation*} \end{theorem} \begin{proof} From Theorem \ref{SNCG-no-prox}, we know that under the constraint nondegeneracy condition \eqref{eq:cons_nondegen}, every $V \in\hat\partial^2 \varphi(\overline W,\bar y)$ is self-adjoint and positive definite on $ \Range(\cQ)\times\Re^m$. {Hence} one can obtain the desired results from \cite[Theorem 3.5]{SDPNAL} by further noting the strong semismoothness of $\Pi_{\Sn_+}(\cdot)$.
\end{proof} \subsection{Semismooth Newton based inexact ABCD algorithms for \eqref{prob-abcd} when $\cK\not=\Sn$} When $\cK\neq \Sn$, we will adapt the recently developed inexact accelerated block coordinate descent (ABCD) algorithm \cite{ABCD} to solve the inner subproblems \eqref{prob-abcd} in the augmented Lagrangian method. The detailed steps of the ABCD algorithm to be used for solving \eqref{prob-abcd} will be presented below. In this algorithm, $(Z,W,S,y)$ is decomposed into two groups, namely $Z$ and $(W,S,y)$. In this case, $(W,S,y)$ is regarded as a single block and the corresponding subproblem in the ABCD algorithm can only be solved by an iterative method inexactly. Here, we propose to {develop} a semismooth Newton-CG method to solve the corresponding subproblem. \bigskip \centerline{\fbox{\parbox{\textwidth}{ {\bf Algorithm ABCD($Z^{0},W^0,S^{0},y^{0},\widehat X,\sigma$)}: {\bf An inexact ABCD algorithm for \eqref{prob-abcd}.} \\[5pt] Given $(W^0,S^0,y^0)\in\Range(\cQ)\times\Sn_+\times\Re^{m}$, $-Z^{0}\in\dom(\delta^*_\cK)$ and $\eta > 0$, set $(\tZ^1,\tW^1, \tS^1,\ty^1) = (Z^{0},W^0,S^{0},y^{0})$ and $t_1=1 $. Let $\{\varepsilon_l\}$ be a nonnegative summable sequence. For $l = 1,\ldots,$ perform the following steps in each iteration. \begin{description} \item[Step 1.] Let $\tR^l = \sig (\tS^l + \cA^*\ty^l-\cQ\tW^l - C +\sig^{-1} \widehat{X})$. Compute \begin{align} Z^l = {}&\argmin \big\{\Psi(Z,\tW^{l},\tS^{l},\ty^l)\,\mid\, Z\in\Sn \big\} = \frac{1}{\sig}\big( \Pi_{\cK}(\tR^l) - \tR^l\big), \nn\\[5pt] (W^l,S^l,y^{l}) ={}& \argmin \left\{ \begin{aligned} &\Psi(Z^l,W,S,y) + \frac{\eta}{2}\norm{y - \tilde y^l}^2 -\inprod{\delta_y^l}{y} -\inprod{\delta_{\cQ}^l}{W} \\[5pt] &\mid\, (W,S,y)\in\Range(\cQ)\times\Sn\times\Re^m \end{aligned} \right\}, \label{sncg-abcd} \end{align} where $\delta_y^l \in \Re^{m}$, $\delta_{\cQ}^l \in \Range(\cQ)$ are error vectors such that $$ \max \{ \norm{\delta_y^l}, \norm{\delta_{\cQ}^l}\}\leq \varepsilon_l/t_l.
$$ \item [Step 2.] Set $t_{l+1} = \frac{1+\sqrt{1+4t_l^2}}{2}$, $\beta_l=\frac{t_l-1}{t_{l+1}}$. Compute \begin{equation*} \hspace{-0.7cm} \tW^{l+1} = W^l + \beta_l (W^l - W^{l-1}), \; \tS^{l+1} = S^l +\beta_l (S^l-S^{l-1}), \; \ty^{l+1} = y^l + \beta_l(y^l-y^{l-1}). \end{equation*} \end{description} }}} \bigskip { Note that in order to meet the convergence requirement of the inexact ABCD algorithm, a proximal term involving the positive parameter $\eta$ is added in \eqref{sncg-abcd} to ensure the strong convexity of the objective function in the subproblem. For computational efficiency, one can always take $\eta$ to be a small number, say $10^{-6}$. The subproblem \eqref{sncg-abcd} can be solved by a semismooth Newton-CG algorithm similar to the one developed in Subsection \ref{subsec-sncg}. Since $\eta >0$, the superlinear convergence of such a semismooth Newton-CG algorithm can also be proven based on the} strong semismoothness of $\Pi_{\Sn_+}(\cdot)$ and the symmetric positive definiteness of the corresponding generalized Hessian. The convergence results for the above Algorithm ABCD are stated in the next theorem, whose proof essentially follows from {that in} \cite[Theorem 3.1]{ABCD}. Here, we omit the proof for brevity. \begin{theorem} \label{ABCD-1} Suppose that Assumption \ref{assump:slater} holds and $\eta >0$. Let $\{(Z^l,W^l,S^l,y^l)\}$ be the sequence generated by Algorithm ABCD. Then, \[\inf_{Z}\Psi(Z,W^l,S^l,y^l) - \Psi(Z^*,W^*,S^*,y^*) = O(1/l^2)\] where $(Z^*,W^*,S^*,y^*)$ is an optimal solution of problem \eqref{prob-abcd}.
Moreover, the sequence $\{(Z^l,W^l,S^l,y^l)\}$ is bounded and {all of its cluster points are optimal solutions to problem \eqref{prob-abcd}.} \end{theorem} \section{Numerical issues in \QSDPNAL} \label{sec:numerical-issues} \def\hW{\widehat{W}} In Algorithm {\sc Qsdpnal}-Phase I, in order to obtain $\widehat W^k$ and $W^{k+1}$ at the $k$th iteration, we need to solve the following linear system of equations \begin{equation} \label{eq-qppal-w} (\cQ +\sigma \cQ^2) W \approx \cQ R,\quad W\in\Range(\cQ) \end{equation} with the residual \begin{equation} \label{eq-qppal-w-r} \norm{\cQ R- (\cQ+ \sigma \cQ^2) W}\le \varepsilon, \end{equation} where $R\in\Sn$ and $\varepsilon >0$ are given. Note that the exact solution to \eqref{eq-qppal-w} is unique since $\cQ+\sig \cQ^2$ is positive definite on $\Range(\cQ)$. But the linear system is typically very large even for a moderate $n$, say $n= 500$. Under the high dimensional setting which we are particularly interested in, the matrix representation of $\cQ$ is generally not available or too expensive to be stored explicitly. Thus \eqref{eq-qppal-w} can only be solved inexactly by an iterative method. However when $\cQ$ is singular (and hence $\Range(\cQ)\neq \Sn$), due to the presence of the subspace constraint $W\in\Range(\cQ)$, it is extremely difficult to apply preconditioning to \eqref{eq-qppal-w} while ensuring that the approximate solution is contained in $\Range(\cQ)$. Fortunately, as shown in the next proposition, instead of solving \eqref{eq-qppal-w} directly, we can solve a simpler and yet better conditioned linear system to overcome this difficulty. \begin{prop}\label{prop:p1-com} Let $\widehat W$ be an approximate solution to the following linear system: \begin{equation}\label{eq-p1com-w} (\cI + \sigma \cQ) W \approx R \end{equation} with the residual satisfying \begin{equation*}\label{eq-p1-com-r} \norm{R-(\cI+\sigma \cQ) \hW }\le \frac{\varepsilon}{{\lambda_{\max}(\cQ)}}. 
\end{equation*} Then, $\widehat{W}_{\cQ} := \Pi_{\Range(\cQ)}(\widehat W) \in \Range(\cQ)$ solves \eqref{eq-qppal-w} with the residual satisfying \eqref{eq-qppal-w-r}. Moreover, $\cQ \widehat{W}_{\cQ} = \cQ \widehat W$ and $\inprod{\widehat{W}_{\cQ}}{\cQ \widehat{W}_{\cQ}} = \inprod{\widehat W}{\cQ \widehat W}$. \end{prop} \begin{proof} First we note that the results $\cQ \widehat{W}_{\cQ} = \cQ \widehat W$ and $\inprod{\widehat{W}_{\cQ}}{\cQ \widehat{W}_{\cQ}} = \inprod{\widehat W}{\cQ \widehat W}$ follow from the decomposition $\hW = \Pi_{\Range(\cQ)}(\hW) + \Pi_{\Range(\cQ)^\perp}(\hW)$. Next, by observing that \begin{eqnarray*} \norm{\cQ R - (\cQ+\sigma \cQ^2)\hW_{\cQ}} = \norm{\cQ R - (\cQ+\sigma \cQ^2)\widehat W} \le {\lambda_{\max}(\cQ)} \,\norm{R-(\cI+\sigma \cQ)\widehat W } \le \varepsilon, \end{eqnarray*} one can easily obtain the desired results. \end{proof} \medskip By Proposition \ref{prop:p1-com}, in order to obtain $\hW_{\cQ}$, we can first apply an iterative method such as the preconditioned conjugate gradient (PCG) method to solve \eqref{eq-p1com-w} to obtain $\hW$ and then perform the projection step. However, by carefully analysing the steps in {\sc Qsdpnal}-Phase I, we are {surprised} to observe that instead of explicitly computing $\hW_{\cQ}$, we can update the iterations in the algorithm by using only $\cQ \hW_{\cQ}=\cQ\hW$. Thus, we only need to compute $\cQ\widehat W$ and the {potentially expensive} projection step to compute $\widehat{W}_\cQ$ {can be avoided completely.} It is important for us to emphasize the computational advantage of solving the linear system \eqref{eq-p1com-w} over \eqref{eq-qppal-w}. First, the former only requires one evaluation of $\cQ(\cdot)$ whereas the latter requires two such evaluations in each PCG iteration. Second, the coefficient matrix in the former system is typically much better conditioned than the coefficient matrix in the latter system.
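Proposition \ref{prop:p1-com} can be illustrated numerically. For simplicity, the sketch below works on $\Re^n$ rather than $\Sn$ (the algebra is identical for any singular positive semidefinite $\cQ$): it runs a few plain CG steps on $(\cI+\sigma\cQ)W = R$, projects the inexact solution onto $\Range(\cQ)$, and checks both conclusions of the proposition. The data and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, sigma = 30, 12, 5.0
M = rng.standard_normal((n, r))
Q = M @ M.T                                # PSD and singular: rank(Q) = r < n
R = rng.standard_normal(n)
A = np.eye(n) + sigma * Q                  # the better-conditioned system I + sigma*Q

# A few plain CG steps give an inexact solution w of A w = R
w = np.zeros(n)
res = R - A @ w
p = res.copy()
for _ in range(5):
    Ap = A @ p
    alpha = (res @ res) / (p @ Ap)
    w = w + alpha * p
    res_new = res - alpha * Ap
    p = res_new + ((res_new @ res_new) / (res @ res)) * p
    res = res_new

# Project onto Range(Q): span of the r eigenvectors with positive eigenvalues
U = np.linalg.eigh(Q)[1][:, -r:]           # eigh returns ascending eigenvalues
w_Q = U @ (U.T @ w)

assert np.allclose(Q @ w_Q, Q @ w)         # Q*w_Q = Q*w, as claimed
lhs = np.linalg.norm(Q @ R - (Q + sigma * Q @ Q) @ w_Q)
rhs = np.linalg.eigvalsh(Q).max() * np.linalg.norm(R - A @ w)
assert lhs <= rhs + 1e-8                   # residual bound of the proposition
```

Note that in {\sc Qsdpnal}-Phase I the projection step is never actually performed; the check above merely confirms that it could be, with the stated residual guarantee.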
More precisely, when $\cQ$ is positive definite, then $\cI+\sig\cQ$ is clearly better conditioned than $\cQ+\sig\cQ^2$ {by a factor of $\lambda_{\max}(\cQ)/\lambda_{\min}(\cQ)$}. When $\cQ$ is singular, with its smallest positive eigenvalue denoted as $\lambda_{+}(\cQ)$, then $\cI+\sig\cQ$ is better conditioned when $\lambda_{\max}(\cQ) \geq \lambda_+(\cQ)(1+\sig\lambda_+(\cQ))$. The previous inequality obviously holds when $\lambda_+(\cQ) \leq (\sqrt{4\sig\lambda_{\max}(\cQ)+1}-1)/(2\sig)$. In Algorithm {\sc Qsdpnal}-Phase II, the subspace constraint $W\in\Range(\cQ)$ also appears when we solve the semismooth Newton linear system \eqref{eqn-epsk} in Algorithm SNCG. Specifically, we need to find $(dW,dy)$ to solve the following linear system \begin{equation}\label{eq-p2-com-dw} V\, (dW,dy) + \varrho (0, dy) \approx (\cQ(R_1), R_2), \quad (dW,dy)\in\Range(\cQ)\times\Re^m \end{equation} with the residual satisfying the following condition \begin{equation}\label{eq-p2-com-r} \norm{V\, (dW,dy) + \varrho (0, dy) - (\cQ(R_1), R_2)} \le \varepsilon,\end{equation} where \begin{equation*} V: = \left[ \begin{array}{cc} \cQ & \\ & 0 \end{array} \right]+ \sigma \left[ \begin{array}{c} \cQ \\ -\cA \end{array} \right] \cU [\cQ \, -\cA^*], \end{equation*} $\cU$ is a given self-adjoint positive semidefinite linear operator on $\Sn$ and $\varepsilon>0$, $\sigma >0$ and $\varrho > 0$ are given. Again, instead of solving \eqref{eq-p2-com-dw} directly, we can solve a simpler linear system to compute $\cQ (dW)$ approximately, as shown in the next proposition. {The price to pay is that we now need to solve a nonsymmetric linear system instead of a symmetric one.} \begin{prop}\label{prop:p2-com} Let \begin{equation*} \widehat V: = \left[ \begin{array}{cc} \cI & \\ & 0 \end{array} \right]+ \sigma \left[ \begin{array}{c} \cI \\ -\cA \end{array} \right] \cU [\cQ \; -\cA^*].
\end{equation*} Suppose $(\widehat{dW}, \widehat{dy})$ is an approximate solution to the following system: \begin{equation} \label{eq-p2com-dw} \widehat V\, (dW, dy) + \varrho (0, dy) \approx (R_1, R_2) \end{equation} with the residual satisfying $$ \norm{\widehat V\, (\widehat{dW}, \widehat{dy}) + \varrho (0, \widehat{dy}) - (R_1, R_2)}\le \frac{\varepsilon}{{\max\{\lambda_{\max}(\cQ),1\}}}. $$ Let $\widehat{dW}_\cQ = \Pi_{\Range(\cQ)}(\widehat{dW}) \in \Range(\cQ)$. Then $(\widehat{dW}_\cQ ,\widehat{dy} )$ solves \eqref{eq-p2-com-dw} with the residual satisfying \eqref{eq-p2-com-r}. Moreover, $\cQ\, \widehat{dW}_\cQ = \cQ\, \widehat{dW}$ and $\inprod{\widehat{dW}_\cQ}{\cQ\, \widehat{dW}_\cQ} = \inprod{\widehat{dW}}{\cQ\, \widehat{dW}}$. \end{prop} \begin{proof} The proof that $\cQ\, \widehat{dW}_\cQ = \cQ\, \widehat{dW}$ and $\inprod{\widehat{dW}_\cQ}{\cQ\, \widehat{dW}_\cQ} = \inprod{\widehat{dW}}{\cQ\, \widehat{dW}}$ is the same as in the previous proposition. Observe that $V = \Diag(\cQ,\cI)\widehat V$. Then, by using the fact that \begin{eqnarray*} && \hspace{-0.7cm} \norm{V\,(\widehat{dW}_\cQ,\widehat{dy}) + \varrho (0, \widehat{dy}) - (\cQ(R_1), R_2)} = \norm{V\,(\widehat{dW},\widehat{dy})+ \varrho (0, \widehat{dy}) - (\cQ(R_1), R_2)} \\[5pt] & \le&\norm{\Diag(\cQ,\cI)}_2\,\norm{\widehat V\, (\widehat{dW}, \widehat{dy}) + \varrho (0, \widehat{dy}) - (R_1, R_2)} \leq {\max\{\lambda_{\max}(\cQ),1\}} \frac{\varepsilon}{{\max\{\lambda_{\max}(\cQ),1\}}} = \varepsilon, \end{eqnarray*} we obtain the desired results readily.
\end{proof} \section{Adaptation of QSDPNAL for least squares SDP and inequality constrained QSDP } \label{sec:5} { Here we discuss how our algorithm \QSDPNAL can be modified and adapted for solving least squares semidefinite programming as well as general QSDP problems with additional unstructured inequality constraints which are not captured by the polyhedral set $\cK.$ \subsection{The case for least squares semidefinite programming}\label{sec:LS} In this subsection, we show that for least squares semidefinite programming problems, \QSDPNAL can be used in a more efficient way to {avoid} the difficulty of handling the subspace constraint $W\in\Range(\cQ)$. Consider the following least squares semidefinite programming problem \begin{eqnarray} \begin{array}{ll} \min \; \Big\{\displaystyle \frac{1}{2} \norm{\cB X - d}^2 + \inprod{C}{X} \, \mid\, \cA X = b, \; X \in \Sn_+\cap \cK \Big\}, \end{array} \label{eq-ls-qsdp} \end{eqnarray} where $\cA:\Sn\to\Re^{m}$ and $\cB:\Sn\to\Re^{s}$ are two linear maps, $C\in \Sn$, $b \in \Re^{m}$ and $d\in\Re^{s}$ are given data, and $\cK$ is a simple nonempty closed convex polyhedral set in $\Sn$. It is easy to see that \eqref{eq-ls-qsdp} can be rewritten as follows \begin{eqnarray} \begin{array}{ll} \min \;\Big\{ \displaystyle \frac{1}{2} \norm{u}^2 + \inprod{C}{X} \,\mid\, \cB X - d = u,\; \cA X = b, \; X \in \Sn_+\cap \cK \Big\}. \end{array} \label{eq-SMLSp} \end{eqnarray} The dual of \eqref{eq-SMLSp} takes the following form \begin{eqnarray} \max \Big\{ -\delta_{\cK}^*(-Z) -\frac{1}{2}\norm{\xi}^2 + \inprod{d}{\xi} + \inprod{b}{y}\mid Z +\cB^*\xi + S +\cA^*y = C,\;\; S\in\Sn_+ \Big\}. \label{eq-SMLSd} \end{eqnarray} When {\sc Qsdpnal}-Phase I is applied to solve \eqref{eq-SMLSd}, instead of solving \eqref{eq-qppal-w}, the linear system corresponding to the quadratic term is given by \begin{equation}\label{LS-p1} (\cI + \sigma\cB\cB^*)\xi \approx R,\end{equation} where $R\in\Re^s$ and $\sigma >0$ are given data.
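Because $\cB\cB^*$ acts on $\Re^s$, the system \eqref{LS-p1} is symmetric positive definite and can be solved matrix-free: each iteration needs only one evaluation of $\cB$ and one of $\cB^*$. A minimal hand-rolled CG sketch (illustrative random data; the dense matrix $\texttt{B}$ stands in for the vectorized linear map $\cB$):

```python
import numpy as np

def cg(apply_A, R, tol=1e-10, maxit=500):
    """Plain conjugate gradient for an SPD operator given as a callback."""
    xi = np.zeros_like(R)
    r = R - apply_A(xi)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        xi += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return xi

# Illustrative data: B maps a (vectorized) matrix space of dimension d to R^s
rng = np.random.default_rng(2)
s, d, sigma = 40, 200, 3.0
B = rng.standard_normal((s, d))
R = rng.standard_normal(s)

# I + sigma*B*B^T applied matrix-free: one B^T and one B evaluation per call
apply_A = lambda xi: xi + sigma * (B @ (B.T @ xi))
xi = cg(apply_A, R)
print(np.linalg.norm(apply_A(xi) - R))   # residual, essentially zero
```

In practice one would of course use a preconditioned solver, as the paper does; the point of the sketch is only the operator form $\xi\mapsto\xi+\sigma\cB(\cB^*\xi)$.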
Meanwhile, in {\sc Qsdpnal}-Phase II for solving problem \eqref{eq-SMLSd}, the linear system in the SNCG method is given by \begin{equation} \label{LS-p2} \left(\left[ \begin{array}{cc} \cI & \\ & 0 \end{array} \right] + \sigma\left[ \begin{array}{c} \cB \\ \cA \\ \end{array} \right]\cU\left[ \begin{array}{cc} \cB^* & \cA^*\\ \end{array} \right] \right)\left[ \begin{array}{c} d\xi \\ dy \\ \end{array} \right] \approx \left[ \begin{array}{c} R_1\\ R_2 \\ \end{array} \right], \end{equation} where $R_1\in\Re^s$ and $R_2\in\Re^{m}$ are given data, $\cU$ is a given self-adjoint positive semidefinite linear operator on $\Sn$. {It is clear that just like \eqref{eq-p1com-w}, one can solve \eqref{LS-p1} efficiently via the PCG method. For \eqref{LS-p2}, one can also solve it by the PCG method, which is more appealing compared to using a nonsymmetric iterative solver such as the preconditioned BiCGSTAB to solve the nonsymmetric linear system \eqref{eq-p2com-dw}. } \begin{remark} \label{rmk:partial-ppa} When the polyhedral constraint $X\in\cK$ in \eqref{eq-ls-qsdp} is absent, i.e., the polyhedral convex set $\cK = \Sn$, Jiang, Sun and Toh in \cite{jiang2014partial} {have proposed a} partial proximal point algorithm for solving the least squares semidefinite programming problem \eqref{eq-ls-qsdp}. {Here our Algorithm {\sc Qsdpnal} is built to solve the much more general class of convex composite QSDP problems.} \end{remark} \subsection{Extension to QSDP problems with inequality constraints} \label{sec:5.2} Consider the following general QSDP problem: \begin{equation} \min\, \Big\{ \displaystyle\frac{1}{2} \inprod{X}{\cQ X} + \inprod{C}{X} \mid \cA_E X = b_E, \;\cA_I X \le b_I, \; X \in \Sn_+\cap \cK \Big\}, \label{eq-qsdp} \end{equation} where $\cA_E:\Sn\to\Re^{m_E}$ and $\cA_I:\Sn\to\Re^{m_I}$ are two linear maps. 
By adding a slack variable $x$, we can equivalently rewrite \eqref{eq-qsdp} into the following standard form: \begin{eqnarray} \begin{array}{ll} \min & \displaystyle \frac{1}{2} \inprod{X}{\cQ X} + \inprod{C}{X} \\[5pt] \mbox{s.t.} &\cA_E X = b_E, \quad \cA_I X +\cD x = b_I, \quad X \in \Sn_+\cap \cK, \quad \cD x \ge 0, \end{array} \label{eq-qsdp-standard} \end{eqnarray} where $\cD:\Re^{m_I}\to\Re^{m_I}$ is a positive definite diagonal matrix which is introduced for the purpose of scaling the variable $x$. The dual of \eqref{eq-qsdp-standard} is given by \begin{equation} \begin{array}{rllll} \max & \displaystyle -\delta^*_{\cK}(-Z) -\frac{1}{2}\inprod{W}{\cQ W} + \inprod{b_E}{y_E} + \inprod{b_I}{y_I} \\[5pt] \mbox{s.t.} & Z - \cQ W + S + \cA_E^* y_E + \cA_I^*y_I = C, \\[5pt] & \cD^*(s + y_I) = 0, \quad S\in\Sn_+,\quad s\ge 0, \quad W\in\Range(\cQ). \end{array} \label{eq-d-qsdp-standard} \end{equation} We can express \eqref{eq-d-qsdp-standard} in a form which is similar to ({\bf D}) as follows: \begin{eqnarray} \begin{array}{rllll} \max & \displaystyle -\delta^*_{\cK}(-Z) -\frac{1}{2}\inprod{W}{\cQ W} + \inprod{b_E}{y_E} + \inprod{b_I}{y_I} \\[8pt] \mbox{s.t.} & \left( \begin{array}{c} \cI \\[3pt] 0 \end{array} \right) Z - \left( \begin{array}{c} \cQ \\[3pt] 0 \end{array} \right) W + \left( \begin{array}{cc} \cI & 0 \\[3pt] 0 & \cD^* \end{array} \right) \left( \begin{array}{c} S \\[3pt] s \end{array} \right) + \left( \begin{array}{cc} \cA^*_E & \cA_I^* \\[3pt] 0 & \cD^* \end{array} \right) \left( \begin{array}{c} y_E\\[3pt] y_I \end{array} \right) = \left( \begin{array}{c} C \\[3pt] 0 \end{array} \right), \\[12pt] & (S,s)\in\Sn_+\times \Re^{m_I}_+, \quad W\in\Range(\cQ). \end{array} \label{eq-d-qsdp-D} \end{eqnarray} We can readily extend \QSDPNAL to solve the above more general form of \eqref{eq-d-qsdp-D}, and our implementation of \QSDPNAL indeed can be used to solve \eqref{eq-d-qsdp-D}. 
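The equivalence between \eqref{eq-qsdp} and the standard form \eqref{eq-qsdp-standard} is elementary: for any $X$ with $\cA_I X\le b_I$, the slack $x = \cD^{-1}(b_I - \cA_I X)$ satisfies the equality constraint with $\cD x\ge 0$, and conversely. A quick numerical check on random data (with $\cA_I$ represented as a matrix acting on a vectorized variable; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m_I, d = 5, 12
A_I = rng.standard_normal((m_I, d))          # stand-in for the map A_I on vec(X)
X = rng.standard_normal(d)
b_I = A_I @ X + rng.uniform(0.1, 1.0, m_I)   # chosen so that A_I X < b_I strictly
D = np.diag(rng.uniform(0.5, 2.0, m_I))      # positive definite diagonal scaling

x = np.linalg.solve(D, b_I - A_I @ X)        # the slack variable of (eq-qsdp-standard)
assert np.allclose(A_I @ X + D @ x, b_I)     # equality constraint holds
assert np.all(D @ x >= 0)                    # D x >= 0  <=>  A_I X <= b_I
```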
} \section{Computational experiments}\label{sec:comp-example} In this section, we evaluate the performance of our algorithm \QSDPNAL for solving large-scale QSDP problems \eqref{eq-qsdp}. Since \QSDPNAL contains two phases, we also report the numerical results obtained by running {\sc Qsdpnal}-Phase I (a first-order algorithm) alone for the purpose of demonstrating the power and importance of our two-phase framework for solving difficult QSDP problems. In the numerical experiments, we measure the accuracy of an approximate optimal solution $(X,Z,W,S,y_E,y_I)$ for QSDP \eqref{eq-qsdp} and its dual by using the following relative KKT residual: \begin{eqnarray*} \eta_{\textup{qsdp}} = \max\{\eta_P, \eta_D, \eta_Z, \eta_{S_1}, \eta_{S_2}, \eta_{I_1}, \eta_{I_2},\eta_{I_3},\eta_W\}, \label{stop:sqsdp} \end{eqnarray*} where {\small \begin{eqnarray*} && \eta_P = \frac{\norm{b_E - \cA_E X}}{1+\norm{b_E}},\quad \eta_D = \frac{\norm{Z - \cQ W + S + \cA_E^* y_E + \cA_I^* y_I - C}}{1 + \norm{C}}, \quad \eta_{Z} = \frac{\norm{X - \Pi_{\cK}(X-Z)}}{1+\norm{X}+\norm{Z}}, \\[5pt] && \eta_{S_1} = \frac{|\inprod{S}{X}|}{1+\norm{S}+\norm{X}}, \quad \eta_{S_2} = \frac{\norm{X - \Pi_{\Sn_+}(X)}}{1+\norm{X}},\quad \eta_{I_1} = \frac{\norm{\min(b_I - \cA_I X,0)}}{1+\norm{b_I}},\quad \eta_{I_2} = \frac{\norm{\max(y_I,0)}}{1+\norm{y_I}},\\[5pt] && \eta_{I_3} = \frac{|\inprod{b_I-\cA_I X}{y_I}|}{1+\norm{y_I}+\norm{b_I - \cA_I X}},\quad \eta_{W} = \frac{\norm{\cQ W - \cQ X}}{1+\norm{\cQ}}. \end{eqnarray*}} We also compute the relative duality gap defined by \[\eta_{\textup{gap}} = \frac{\textup{obj}_P-\textup{obj}_D}{1+|\textup{obj}_P|+|\textup{obj}_D|},\] where $\textup{obj}_P := \frac{1}{2}\inprod{X}{\cQ X} + \inprod{C}{X}$ and $\textup{obj}_D := -\delta_{\cK}^*(-Z) - \frac{1}{2}\inprod{W}{\cQ W} + \inprod{b_E}{y_E} +\inprod{b_I}{y_I}$.
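For concreteness, a few of these residuals can be evaluated directly. The following Python sketch computes $\eta_P$, $\eta_D$, $\eta_{S_1}$ and $\eta_{S_2}$ for the special case $\cQ = \cI$, $W = X$ and $\cK = \Sn$ (so $\Pi_\cK$ is the identity); all names and the toy data are illustrative, not part of the solver.

```python
import numpy as np

def psd_projection(M):
    """Projection of a symmetric matrix onto the PSD cone."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V @ np.diag(np.maximum(w, 0)) @ V.T

def kkt_residuals(X, Z, S, yE, AE, bE, C):
    """Selected relative KKT residuals (eta_P, eta_D, eta_S1, eta_S2)
    for the special case Q = identity, W = X, K = S^n."""
    AX = np.array([np.sum(Ai * X) for Ai in AE])        # A_E X
    AEadj = sum(y * Ai for y, Ai in zip(yE, AE))        # A_E^* y_E
    eta_P = np.linalg.norm(bE - AX) / (1 + np.linalg.norm(bE))
    eta_D = (np.linalg.norm(Z - X + S + AEadj - C)      # Q W = X here
             / (1 + np.linalg.norm(C)))
    eta_S1 = abs(np.sum(S * X)) / (1 + np.linalg.norm(S) + np.linalg.norm(X))
    eta_S2 = np.linalg.norm(X - psd_projection(X)) / (1 + np.linalg.norm(X))
    return eta_P, eta_D, eta_S1, eta_S2

# Toy point: X = I satisfies diag(X) = e exactly, S = 0, y_E = 0, and
# Z = C + X makes the dual equation hold, so all four residuals vanish.
n = 4
X, C, S = np.eye(n), np.eye(n), np.zeros((n, n))
AE = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]
bE, yE = np.ones(n), np.zeros(n)
residuals = kkt_residuals(X, C + X, S, yE, AE, bE, C)
assert max(residuals) < 1e-12
```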
We terminate both \QSDPNAL and {\sc Qsdpnal}-Phase I when $\eta_{{\rm qsdp}} < 10^{-6}$ with the maximum number of iterations set at 50,000. In our implementation of {\sc Qsdpnal}, we always run {\sc Qsdpnal}-Phase I first to {generate} a reasonably good starting point to warm start our Phase II algorithm. We terminate the Phase I algorithm and switch to the Phase II algorithm if a solution with a moderate accuracy (say a solution with $\eta_{\rm qsdp} < 10^{-4}$) is obtained or if the Phase I algorithm reaches the maximum number of iterations (say 1000 iterations). If the {underlying} problems contain inequality or polyhedral constraints, we further employ a restarting strategy similar to the one in \cite{YangST2015}, i.e., when the progress of {\sc Qsdpnal}-Phase II is not satisfactory, we restart {the whole {\sc Qsdpnal} algorithm} by using the most recently computed $(Z,W,S,y,X,\sigma)$ as the initial point. {In addition, we also adopt a dynamic tuning strategy to adjust the penalty parameter $\sigma$ appropriately based on the progress of the primal and dual feasibilities of the computed iterates.} All our computational results are obtained from a workstation running a 64-bit Windows operating system, having 16 cores with 32 Intel Xeon E5-2650 processors at 2.60GHz and 64 GB of memory. We have implemented \QSDPNAL in {\sc Matlab} version 7.13. \subsection{Evaluation of \QSDPNAL on the nearest correlation matrix problems} Our first test example is the problem of finding the nearest correlation matrix (NCM) to a given matrix $G \in \Sn$: \begin{eqnarray} \begin{array}{ll} \min \Big\{ \displaystyle\frac{1}{2}\norm{H\circ(X-G)}^2_F \mid {\rm diag}(X) \;=\; e, \; X \in \Sn_+\cap \cK \Big\}, \end{array} \label{eq-egHNCM-F} \end{eqnarray} where $H\in \Sn$ is a nonnegative weight matrix, $e\in\Re^n$ is the vector of all ones, and $\cK =\{W\in\Sn\;|\; L\leq W\leq U\}$ with $L,U\in \Sn$ being given matrices.
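The NCM objective and constraints are straightforward to evaluate; the following Python sketch (illustrative names, case (ii) with $L = -0.5\,E$ and $U = +\infty$) checks the objective value and feasibility of a candidate matrix.

```python
import numpy as np

def ncm_objective(X, G, H):
    """H-weighted NCM objective 0.5 * ||H o (X - G)||_F^2."""
    return 0.5 * np.linalg.norm(H * (X - G), 'fro') ** 2

def ncm_feasible(X, lb=-0.5, tol=1e-12):
    """Check diag(X) = e, X PSD, and the polyhedral bound X_ij >= lb."""
    diag_ok = np.allclose(np.diag(X), 1.0, atol=tol)
    psd_ok = np.linalg.eigvalsh((X + X.T) / 2).min() >= -tol
    box_ok = X.min() >= lb - tol
    return diag_ok and psd_ok and box_ok

# Toy data: a random symmetric G with unit diagonal and a symmetric
# positive weight matrix H; X = I is a trivially feasible candidate.
n = 5
rng = np.random.default_rng(1)
G = rng.uniform(-1, 1, (n, n)); G = (G + G.T) / 2; np.fill_diagonal(G, 1.0)
H = np.abs(rng.uniform(0, 2, (n, n))); H = (H + H.T) / 2
X = np.eye(n)
obj = ncm_objective(X, G, H)
assert ncm_feasible(X) and obj >= 0
```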
In our numerical experiments, we first take a matrix $\widehat{G}$, which is a correlation matrix generated from gene expression data from \cite{li2010inexact}. For testing purposes, we then perturb $\widehat{G}$ to \begin{eqnarray*} G := (1 - \alpha)\widehat{G} + \alpha E, \end{eqnarray*} where $\alpha \in (0,1)$ is a given parameter and $E$ is a randomly generated symmetric matrix with entries uniformly distributed in $[-1,1]$ except for its diagonal elements which are all set to $1$. The weight matrix $H$ is generated from a weight matrix $H_0$ used by a hedge fund company. The matrix $H_0$ is a $93 \times 93$ symmetric matrix with all positive entries. It has about $24\%$ of the entries equal to $10^{-5}$ and the rest are distributed in the interval $[2, 1.28\times 10^3].$ The {\sc Matlab} code for generating the matrix $H$ is given by \begin{verbatim} tmp = kron(ones(110,110),H0); H = tmp(1:n,1:n); H = (H'+H)/2; \end{verbatim} We use such a weight matrix because the resulting problems are more challenging to solve than those generated with a random weight matrix. {We also test four more instances, namely {\tt PDidx2000}, {\tt PDidx3000}, {\tt PDidx5000} and {\tt PDidx10000}, where the raw correlation matrix $\widehat G$ is generated from the probability of default (PD) data obtained from the RMI Credit Research Initiative\footnote{\url{http://www.rmicri.org/cms/cvi/overview/.}} at the National University of Singapore.} We consider two choices of $\cK$, i.e., case (i): $\cK = \Sn$ and case (ii): $\cK = \{X\in \Sn \mid\, X_{ij} \ge -0.5, \;\forall\; i,j=1,\ldots,n\}$. \begin{table} \centering \begin{footnotesize} \caption{{\small The performance of \QSDPNAL and {\sc Qsdpnal}-Phase I on H-weighted NCM problems (dual of \eqref{eq-egHNCM-F}) (accuracy $= 10^{-6}$). In the table, ``a'' stands for \QSDPNAL and ``b'' stands for {\sc Qsdpnal}-Phase I, respectively.
The computation time is in the format of ``hours:minutes:seconds''.}} \label{table:ncm_F} \begin{tabular}{| ccc | c |c| c | c| c|} \hline \mc{8}{|c|}{}\\[-1pt] \mc{8}{|c|}{ $\cK = \Sn$}\\[2pt] \hline \mc{3}{|c|}{} & \mc{1}{c|}{iter.a} &\mc{1}{c|}{iter.b} &\mc{1}{c|}{$\eta_{\textup{qsdp}}$}&\mc{1}{c|} {$\eta_\textup{gap}$}&\mc{1}{c|}{time}\\[2pt] \hline \mc{1}{|c}{problem} &\mc{1}{c}{$n$} &\mc{1}{c|}{$\alpha$}&\mc{1}{c|}{it (subs) $|$ itSCB}&\mc{1}{c|}{}&\mc{1}{c|}{a$|$b} &\mc{1}{c|}{a$|$b}&\mc{1}{c|}{a$|$b}\\[2pt] \hline \mc{1}{|c}{Lymph} &587 &0.10 &12 (40) $|$ 52 &251 & 9.1-7 $|$ 9.1-7 & 8.2-7 $|$ -3.9-7 &13 $|$ 23\\[2pt] \mc{1}{|c}{Lymph} &587 &0.05 &11 (32) $|$ 38 &205 & 9.5-7 $|$ 9.9-7 & 7.5-7 $|$ -4.1-7 &09 $|$ 19\\[2pt] \hline \mc{1}{|c}{ER} &692 &0.10 &12 (41) $|$ 54 &250 & 9.8-7 $|$ 9.9-7 & 5.4-7 $|$ -4.8-7 &17 $|$ 33\\[2pt] \mc{1}{|c}{ER} &692 &0.05 &12 (38) $|$ 43 &218 & 7.3-7 $|$ 9.7-7 & 2.5-7 $|$ -4.4-7 &14 $|$ 28\\[2pt] \hline \mc{1}{|c}{Arabidopsis} &834 &0.10 &12 (42) $|$ 56 &285 & 8.5-7 $|$ 9.9-7 & 2.8-7 $|$ -5.3-7 &27 $|$ 57\\[2pt] \mc{1}{|c}{Arabidopsis} &834 &0.05 &12 (41) $|$ 44 &230 & 8.0-7 $|$ 9.5-7 & -6.8-8 $|$ -4.5-7 &24 $|$ 46\\[2pt] \hline \mc{1}{|c}{Leukemia} &1255 &0.10 &12 (41) $|$ 62 &340 & 8.4-7 $|$ 9.9-7 & 3.1-7 $|$ -5.4-7 &1:08 $|$ 2:48\\[2pt] \mc{1}{|c}{Leukemia} &1255 &0.05 &12 (38) $|$ 49 &248 & 7.6-7 $|$ 8.7-7 & -1.3-7 $|$ -4.5-7 &58 $|$ 2:06\\[2pt] \hline \mc{1}{|c}{hereditarybc} &1869 &0.10 &13 (47) $|$ 76 &393 & 6.4-7 $|$ 9.9-7 & -2.2-7 $|$ -9.8-7 &3:01 $|$ 7:10\\[2pt] \mc{1}{|c}{hereditarybc} &1869 &0.05 &13 (45) $|$ 60 &311 & 8.6-7 $|$ 9.9-7 & -4.7-7 $|$ {\green{ -1.0-6}} &2:39 $|$ 5:44\\[2pt] \hline \mc{1}{|c}{PDidx2000} &2000 &0.10 &13 (51) $|$ 131 &590 & 9.5-7 $|$ 9.9-7 & 2.4-7 $|$ -8.5-7 &5:04 $|$ 11:43\\[2pt] \mc{1}{|c}{PDidx2000} &2000 &0.05 &14 (58) $|$ 139 &626 & 7.5-7 $|$ 9.9-7 & -5.6-8 $|$ -9.5-7 &5:52 $|$ 12:41\\[2pt] \hline \mc{1}{|c}{PDidx3000} &3000 &0.10 &14 (55) $|$ 145 &1201 & 8.1-7 $|$ 9.9-7 & -2.8-7 $|$ 
{\green{ 2.1-6}} &14:59 $|$ 1:15:01\\[2pt] \mc{1}{|c}{PDidx3000} &3000 &0.05 &14 (58) $|$ 136 &1263 & 6.8-7 $|$ 9.7-7 & -2.6-7 $|$ {\green{ 2.0-6}} &14:50 $|$ 1:19:27\\[2pt] \hline \mc{1}{|c}{PDidx5000} &5000 &0.10 &15 (63) $|$ 189 &1031 & 8.0-7 $|$ 9.9-7 & -1.9-7 $|$ {\green{ 1.8-6}} & 1:17:47 $|$ 4:17:10\\[2pt] \mc{1}{|c}{PDidx5000} &5000 &0.05 &14 (59) $|$ 164 &1699 & 9.2-7 $|$ 9.9-7 & -3.3-7 $|$ -1.3-7 & 1:11:46 $|$ 6:18:29\\[2pt] \hline \mc{1}{|c}{PDidx10000} &10000 &0.10 &16 (71) $|$ 200 &2572 & 7.1-7 $|$ 9.9-7 & 1.6-7 $|$ -1.5-7 & 9:57:18 $|$ 60:07:08\\[2pt] \mc{1}{|c}{PDidx10000} &10000 &0.05 &16 (73) $|$ 200 &2532 & 9.5-7 $|$ 9.9-7 & 4.7-8 $|$ 1.4-7 & 10:34:31 $|$ 59:34:13\\[2pt] \hline \mc{8}{|c|}{}\\[-1pt] \mc{8}{|c|}{$\cK = \{X\in \Sn \mid\, X_{ij}\ge -0.5\;\forall\; i,j=1,\ldots,n\}$}\\[2pt] \hline \mc{1}{|c}{Lymph} &587 &0.10 & 5 (14) $|$ 129 &244 & 9.8-7 $|$ 9.9-7 & -1.0-7 $|$ -4.4-7 &18 $|$ 30\\[2pt] \mc{1}{|c}{Lymph} &587 &0.05 & 5 (12) $|$ 120 &257 & 9.9-7 $|$ 9.9-7 & -3.4-7 $|$ -4.2-7 &15 $|$ 28\\[2pt] \hline \mc{1}{|c}{ER} &692 &0.10 & 5 (14) $|$ 126 &266 & 9.9-7 $|$ 9.9-7 & -1.5-7 $|$ -5.1-7 &22 $|$ 40\\[2pt] \mc{1}{|c}{ER} &692 &0.05 & 5 (14) $|$ 117 &217 & 8.4-7 $|$ 9.9-7 & -2.7-7 $|$ -4.4-7 &21 $|$ 32\\[2pt] \hline \mc{1}{|c}{Arabidopsis} &834 &0.10 & 6 (16) $|$ 240 &472 & 9.9-7 $|$ 9.9-7 & -5.4-7 $|$ -6.0-7 &1:03 $|$ 1:56\\[2pt] \mc{1}{|c}{Arabidopsis} &834 &0.05 & 6 (15) $|$ 240 &442 & 8.5-7 $|$ 9.9-7 & -4.4-7 $|$ -5.6-7 &1:02 $|$ 1:46\\[2pt] \hline \mc{1}{|c}{Leukemia} &1255 &0.10 & 7 (22) $|$ 188 &333 & 9.9-7 $|$ 9.9-7 & -4.4-7 $|$ -5.5-7 &2:10 $|$ 3:06\\[2pt] \mc{1}{|c}{Leukemia} &1255 &0.05 & 7 (19) $|$ 159 &253 & 9.9-7 $|$ 9.9-7 & -5.4-7 $|$ -5.3-7 &1:46 $|$ 2:18\\[2pt] \hline \mc{1}{|c}{hereditarybc} &1869 &0.10 & 8 (22) $|$ 397 &577 & 9.3-7 $|$ 9.9-7 & -8.0-7 $|$ -8.9-7 &10:28 $|$ 12:59\\[2pt] \mc{1}{|c}{hereditarybc} &1869 &0.05 & 8 (22) $|$ 361 &472 & 9.6-7 $|$ 9.9-7 & -8.1-7 $|$ -8.6-7 &9:39 $|$ 10:04\\[2pt] \hline 
\mc{1}{|c}{PDidx2000} &2000 &0.10 &20 (52) $|$ 672 &716 & 9.9-7 $|$ 9.9-7 & -6.8-7 $|$ -7.9-7 &21:32 $|$ 17:42\\[2pt] \mc{1}{|c}{PDidx2000} &2000 &0.05 &22 (60) $|$ 756 &1333 & 9.6-7 $|$ 5.8-7 & -6.3-7 $|$ -4.0-7 &25:20 $|$ 39:34\\[2pt] \hline \mc{1}{|c}{PDidx3000} &3000 &0.10 &34 (101) $|$ 659 &1647 & 9.9-7 $|$ 9.9-7 & -7.0-7 $|$ -9.4-7 & 1:14:15 $|$ 1:53:13\\[2pt] \mc{1}{|c}{PDidx3000} &3000 &0.05 &41 (117) $|$ 728 &1538 & 9.9-7 $|$ 9.9-7 & -6.2-7 $|$ {\green{ -1.2-6}} & 1:21:13 $|$ 1:50:47\\[2pt] \hline \mc{1}{|c}{PDidx5000} &5000 &0.10 &29 (79) $|$ 829 &1484 & 9.3-7 $|$ 8.4-7 & -5.5-7 $|$ 6.4-7 & 5:00:35 $|$ 7:25:19\\[2pt] \mc{1}{|c}{PDidx5000} &5000 &0.05 &33 (107) $|$ 1081 &1722 & 9.9-7 $|$ 9.9-7 & -6.4-7 $|$ -1.5-7 & 6:30:35 $|$ 7:16:08\\[2pt] \hline \mc{1}{|c}{PDidx10000} &10000 &0.10 &42 (136) $|$ 1289 &2190 & 9.9-7 $|$ 9.9-7 & -7.1-7 $|$ 2.6-7 & 58:44:49 $|$ 64:17:14\\[2pt] \mc{1}{|c}{PDidx10000} &10000 &0.05 &40 (122) $|$ 1519 &3320 & 9.9-7 $|$ 4.5-7 & -6.6-7 $|$ -1.7-8 & 65:13:19 $|$ 94:53:10\\[2pt] \hline \end{tabular} \end{footnotesize} \end{table} In Table \ref{table:ncm_F}, we report the numerical results obtained by \QSDPNAL and {\sc Qsdpnal}-Phase I in solving various instances of the H-weighted NCM problem \eqref{eq-egHNCM-F}. In the table, ``it (subs)'' denotes the number of outer iterations with subs in the parenthesis indicating the number of inner iterations of {\sc Qsdpnal}-Phase II and ``itSCB'' stands for the total number of iterations used in {\sc Qsdpnal}-Phase I. We can see from Table \ref{table:ncm_F} that \QSDPNAL is more efficient than the purely first-order algorithm {\sc Qsdpnal}-Phase I. In particular, for the instance {\tt PDidx10000} where the matrix dimension $n=10,000$, we are able to solve the problem in about 11 hours while the purely first-order method {\sc Qsdpnal}-Phase I needs about 60 hours. 
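The perturbation $G = (1-\alpha)\widehat G + \alpha E$ used to generate the test data before Table \ref{table:ncm_F} can be sketched in Python as follows; here $\widehat G$ is a random stand-in for the actual gene-expression correlation matrices.

```python
import numpy as np

def perturb_correlation(G_hat, alpha, rng):
    """G = (1 - alpha) * G_hat + alpha * E, where E is a random symmetric
    matrix with entries uniform in [-1, 1] and unit diagonal."""
    n = G_hat.shape[0]
    E = rng.uniform(-1, 1, (n, n))
    E = (E + E.T) / 2
    np.fill_diagonal(E, 1.0)
    return (1 - alpha) * G_hat + alpha * E

rng = np.random.default_rng(0)
n = 6
G_hat = np.eye(n)                       # stand-in correlation matrix
G = perturb_correlation(G_hat, alpha=0.1, rng=rng)
assert np.allclose(np.diag(G), 1.0)     # diagonal stays at 1
assert np.allclose(G, G.T)              # symmetry is preserved
```

Note that $G$ is in general indefinite (that is the point of the perturbation), so projecting it back to a correlation matrix under the weight $H$ is a genuine QSDP.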
\subsection{Evaluation of \QSDPNAL on instances generated from BIQ problems} Based on the SDP relaxation of a binary integer quadratic (BIQ) problem considered in \cite{SunTY3c}, we construct our second QSDP test example as follows: \begin{equation*} \begin{array}{rl} \mbox{(QSDP-BIQ)}\;\; \min & \displaystyle \frac{1}{2}\inprod{X}{\cQ X} + \frac{1}{2} \inprod{Q}{Y} + \inprod{c}{x} \\[5pt] \mbox{s.t.} & \textup{diag}(Y) - x = 0, \quad \alpha = 1, \quad X = \left( \begin{array}{cc} Y & x \\ x^T & \alpha \\ \end{array} \right) \in \Sn_+, \quad X\in \cK, \\[5pt] & -Y_{ij}+x_i\ge 0, \, -Y_{ij}+x_j\ge0,\, Y_{ij}-x_i-x_j\ge -1,\, \forall \, i<j, \, j=2,\ldots,n-1, \end{array} \label{qsdp-biq} \end{equation*} where the convex set $\cK = \{X \in \cS^{n} \mid X \geq 0 \}$. Here $\cQ:\Sn\to\Sn$ is a self-adjoint positive semidefinite linear operator defined by \begin{equation}\label{Qmap} \cQ(X) = \frac{1}{2}(AXB + BXA) \end{equation} with $A,B\in \Sn_+$ being matrices truncated from two different large correlation matrices (generated from the Russell 1000 and Russell 2000 indices, respectively) fetched from Yahoo finance by {\sc Matlab}. In our numerical experiments, the test data for $Q$ and $c$ are taken from the Biq Mac Library maintained by Wiegele, which is available at \url{http://biqmac.uni-klu.ac.at/biqmaclib.html}. Table \ref{table:BIQI} reports the numerical results for \QSDPNAL and {\sc Qsdpnal}-Phase I in solving some large scale QSDP-BIQ problems. Note that from the numerical experiments conducted in \cite{chen2015efficient}, one can clearly conclude that {\sc Qsdpnal}-Phase I (a variant of SCB-isPADMM) is the most efficient first-order algorithm for solving QSDP-BIQ problems with a large number of inequality constraints. Even so, it can be observed from Table \ref{table:BIQI} that \QSDPNAL is still faster than {\sc Qsdpnal}-Phase I on most of the problems tested.
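The operator $\cQ(X) = \frac{1}{2}(AXB + BXA)$ from \eqref{Qmap} is self-adjoint on $\Sn$, and it is positive semidefinite whenever $A,B\in\Sn_+$ since $\inprod{X}{\cQ X} = \norm{A^{1/2} X B^{1/2}}_F^2$. Both properties can be checked numerically (Python sketch with random stand-ins for $A$, $B$):

```python
import numpy as np

def Q_op(X, A, B):
    """The self-adjoint operator Q(X) = (A X B + B X A) / 2 used in the
    QSDP-BIQ and QSDP-QAP test instances."""
    return (A @ X @ B + B @ X @ A) / 2

def rand_psd(rng, n):
    M = rng.standard_normal((n, n))
    return M @ M.T                          # PSD by construction

rng = np.random.default_rng(0)
n = 6
A, B = rand_psd(rng, n), rand_psd(rng, n)
X = rng.standard_normal((n, n)); X = (X + X.T) / 2
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2

# Self-adjointness: <Q(X), Y> = <X, Q(Y)> on S^n.
lhs = np.sum(Q_op(X, A, B) * Y)
rhs = np.sum(X * Q_op(Y, A, B))
assert abs(lhs - rhs) < 1e-8 * (1 + abs(lhs))

# Positive semidefiniteness: <X, Q(X)> >= 0 for PSD A, B.
assert np.sum(X * Q_op(X, A, B)) >= -1e-10
```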
\begin{footnotesize} \begin{longtable}{| c c | c | c | c| c|c|} \caption{The performance of \QSDPNAL and {\sc Qsdpnal}-Phase I on QSDP-BIQ problems (accuracy $= 10^{-6}$). In the table, ``a'' stands for \QSDPNAL and ``b'' stands for {\sc Qsdpnal}-Phase I, respectively. The computation time is in the format of ``hours:minutes:seconds''.}\label{table:BIQI} \\ \hline \mc{2}{|c|}{} &\mc{1}{c|}{} &\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}\\[-5pt] \mc{2}{|c|}{} & \mc{1}{c|}{iter.a} &\mc{1}{c|}{iter.b} &\mc{1}{c|}{$\eta_{\textup{qsdp}}$} &\mc{1}{c|}{$\eta_\textup{gap}$}&\mc{1}{c|}{time}\\[2pt] \hline \mc{1}{|@{}c@{}}{$d$} &\mc{1}{@{}c@{}|}{$m_E;m_I$ $|$ $n$} &\mc{1}{c|}{it (subs)$|$itSCB} &\mc{1}{c|}{}&\mc{1}{c|}{a$|$b}&\mc{1}{c|}{a$|$b} &\mc{1}{c|}{a$|$b}\\ \hline \endhead be200.3.1 &201 $;$ 59700 $|$ 201 &66 (135) $|$ 3894 &4701 & 7.8-7 $|$ 9.8-7 & -3.5-7 $|$ -7.2-7 &3:37 $|$ 3:57\\[2pt] \hline be200.3.2 &201 $;$ 59700 $|$ 201 &37 (74) $|$ 2969 &13202 & 9.7-7 $|$ 9.9-7 & -2.1-7 $|$ -6.7-8 &2:42 $|$ 12:20\\[2pt] \hline be200.3.3 &201 $;$ 59700 $|$ 201 &51 (107) $|$ 5220 &10375 & 8.1-7 $|$ 9.9-7 & -1.1-7 $|$ -6.5-7 &5:00 $|$ 8:52\\[2pt] \hline be200.3.4 &201 $;$ 59700 $|$ 201 &36 (72) $|$ 3484 &4966 & 9.8-7 $|$ 9.9-7 & -1.6-7 $|$ -4.1-7 &3:15 $|$ 4:14\\[2pt] \hline be200.3.5 &201 $;$ 59700 $|$ 201 &22 (44) $|$ 2046 &3976 & 9.8-7 $|$ 9.9-7 & -5.9-8 $|$ -3.0-7 &1:53 $|$ 3:28\\[2pt] \hline be250.1 &251 $;$ 93375 $|$ 251 &98 (196) $|$ 6931 &12220 & 9.9-7 $|$ 9.9-7 & 3.2-7 $|$ 3.5-8 &8:11 $|$ 14:07\\[2pt] \hline be250.2 &251 $;$ 93375 $|$ 251 &81 (169) $|$ 6967 &16421 & 9.3-7 $|$ 9.9-7 & 3.2-7 $|$ -5.7-7 &8:35 $|$ 20:01\\[2pt] \hline be250.3 &251 $;$ 93375 $|$ 251 &123 (250) $|$ 7453 &9231 & 9.3-7 $|$ 9.8-7 & -1.7-7 $|$ -5.1-7 &9:27 $|$ 10:25\\[2pt] \hline be250.4 &251 $;$ 93375 $|$ 251 &36 (72) $|$ 3583 &4542 & 9.9-7 $|$ 9.9-7 & 5.2-8 $|$ -2.1-7 &4:31 $|$ 5:06\\[2pt] \hline be250.5 &251 $;$ 93375 $|$ 251 &99 (198) $|$ 5004 &12956 & 8.3-7 $|$ 9.9-7 & 1.8-7 $|$ -1.8-7 
&6:38 $|$ 15:52\\[2pt] \hline bqp500-1 &501 $;$ 374250 $|$ 501 &62 (131) $|$ 5220 &11890 & 9.9-7 $|$ 9.9-7 & -7.1-7 $|$ -8.2-8 &37:56 $|$ 1:23:58\\[2pt] \hline bqp500-2 &501 $;$ 374250 $|$ 501 &41 (84) $|$ 3610 &8159 & 5.5-7 $|$ 9.9-7 & -3.8-7 $|$ -8.7-8 &24:01 $|$ 55:14\\[2pt] \hline bqp500-3 &501 $;$ 374250 $|$ 501 &89 (200) $|$ 5877 &6402 & 9.9-7 $|$ 8.6-7 & 5.4-7 $|$ -1.9-7 &40:29 $|$ 41:51\\[2pt] \hline bqp500-4 &501 $;$ 374250 $|$ 501 &95 (256) $|$ 7480 &11393 & 6.3-7 $|$ 9.9-7 & -1.5-7 $|$ -1.1-7 &56:12 $|$ 1:17:56\\[2pt] \hline bqp500-5 &501 $;$ 374250 $|$ 501 &107 (247) $|$ 6976 &8823 & 5.1-7 $|$ 9.9-7 & 6.2-7 $|$ -1.0-7 &52:24 $|$ 59:11\\[2pt] \hline bqp500-6 &501 $;$ 374250 $|$ 501 &159 (412) $|$ 10461 &9587 & 8.3-7 $|$ 9.9-7 & -6.2-7 $|$ -1.3-7 & 1:18:11 $|$ 1:04:41\\[2pt] \hline bqp500-7 &501 $;$ 374250 $|$ 501 &92 (223) $|$ 8585 &9066 & 8.1-7 $|$ 9.9-7 & 4.7-8 $|$ -1.1-7 & 1:00:52 $|$ 1:00:35\\[2pt] \hline bqp500-8 &501 $;$ 374250 $|$ 501 &68 (140) $|$ 5828 &7604 & 6.7-7 $|$ 9.9-7 & -4.7-8 $|$ -1.1-7 &40:56 $|$ 51:58\\[2pt] \hline bqp500-9 &501 $;$ 374250 $|$ 501 &50 (108) $|$ 4704 &11613 & 9.5-7 $|$ 9.9-7 & -3.7-7 $|$ -9.8-8 &34:05 $|$ 1:21:17\\[2pt] \hline bqp500-10 &501 $;$ 374250 $|$ 501 &71 (163) $|$ 6462 &8474 & 8.7-7 $|$ 9.9-7 & -6.2-7 $|$ -8.7-8 &48:07 $|$ 57:33\\[2pt] \hline gka1e &201 $;$ 59700 $|$ 201 &74 (163) $|$ 5352 &9071 & 9.2-7 $|$ 9.9-7 & -3.0-7 $|$ -2.9-7 &7:59 $|$ 9:35\\[2pt] \hline gka2e &201 $;$ 59700 $|$ 201 &49 (98) $|$ 4008 &6659 & 9.2-7 $|$ 9.9-7 & 5.0-8 $|$ -1.7-7 &4:17 $|$ 6:29\\[2pt] \hline gka3e &201 $;$ 59700 $|$ 201 &35 (71) $|$ 2731 &4103 & 8.3-7 $|$ 9.7-7 & 2.3-7 $|$ -2.2-8 &2:59 $|$ 4:14\\[2pt] \hline gka4e &201 $;$ 59700 $|$ 201 &34 (68) $|$ 2999 &3430 & 9.9-7 $|$ 9.9-7 & -1.7-7 $|$ -4.6-7 &3:20 $|$ 3:21\\[2pt] \hline gka5e &201 $;$ 59700 $|$ 201 &43 (90) $|$ 3367 &2712 & 9.9-7 $|$ 9.9-7 & -4.9-8 $|$ -6.5-8 &3:54 $|$ 2:47\\[2pt] \hline \end{longtable} \end{footnotesize} \subsection{Evaluation of \QSDPNAL on 
instances generated from QAP problems} Next we test the following QSDP problem motivated by the SDP relaxation of a quadratic assignment problem (QAP) considered in \cite{povh2009copositive}. The SDP relaxation we used is adopted from \cite{YangST2015} but we add a convex quadratic term in the objective to modify it into a QSDP problem. Specifically, given the data matrices $A_1,A_2\in\cS^l$ of a QAP problem, the problem we test is given by: \begin{eqnarray*} \begin{array}{rl} \mbox{(QSDP-QAP)}\;\; \min & \displaystyle\frac{1}{2}\inprod{X}{\cQ X}+\inprod{A_2 \otimes A_1}{X} \\[5pt] {\rm s.t.} & \sum_{i=1}^l X^{ii} = I, \ \inprod{I}{X^{ij}} = \delta_{ij} \quad \forall \, 1\leq i \leq j\leq l, \\[5pt] & \inprod{E}{X^{ij}} = 1\quad \forall\, 1\leq i \leq j\leq l, \quad X \in \cS^{n}_+,\; X \in \cK, \end{array} \label{p2:qsdp-qap} \end{eqnarray*} where $n=l^2$, and $X^{ij}\in \Re^{l\times l}$ denotes the $(i,j)$-th block of $X$ when it is partitioned uniformly into an $l\times l$ block matrix with each block having dimension $l\times l$. The convex set $\cK = \{X \in \cS^{n} \mid X \geq 0 \}$, $E$ is the matrix of ones, and $\delta_{ij} = 1$ if $i=j$, and $0$ otherwise. Note that here we use the same self-adjoint positive semidefinite linear operator $\cQ:\Sn\to\Sn$ constructed in \eqref{Qmap}. In our numerical experiments, the test instances $(A_1,A_2)$ are taken from the QAP Library \cite{burkard1997qaplib}. In Table \ref{table:qsdpnal}, we present the detailed numerical results for \QSDPNAL and {\sc Qsdpnal}-Phase I in solving some large scale QSDP-QAP problems. It is interesting to note that \QSDPNAL can solve all the $73$ difficult QSDP-QAP problems to an accuracy of $10^{-6}$ efficiently, while the purely first-order algorithm {\sc Qsdpnal}-Phase I can only solve $2$ of the problems (chr20a and tai25a) to the required accuracy.
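A useful sanity check on the constraint structure of (QSDP-QAP): for any $l\times l$ permutation matrix $P$, the rank-one matrix $X = \mathrm{vec}(P)\,\mathrm{vec}(P)^T$ satisfies all the block constraints as well as $X\in\cS^n_+$ and $X\geq 0$. The following Python sketch (illustrative only) verifies this.

```python
import numpy as np

l = 5
rng = np.random.default_rng(0)
perm = rng.permutation(l)
P = np.eye(l)[:, perm]                 # random permutation matrix
x = P.flatten(order='F')               # vec(P), column-stacked
X = np.outer(x, x)                     # rank-one candidate, n = l^2

def block(X, i, j, l):
    """(i,j)-th l-by-l block of the l^2-by-l^2 matrix X."""
    return X[i*l:(i+1)*l, j*l:(j+1)*l]

# sum_i X^{ii} = I (since sum_i P(:,i) P(:,i)^T = P P^T = I)
assert np.allclose(sum(block(X, i, i, l) for i in range(l)), np.eye(l))
# <I, X^{ij}> = delta_ij and <E, X^{ij}> = 1 for i <= j
for i in range(l):
    for j in range(i, l):
        Xij = block(X, i, j, l)
        assert abs(np.trace(Xij) - (i == j)) < 1e-12
        assert abs(Xij.sum() - 1.0) < 1e-12
# X is PSD (rank one) and entrywise nonnegative
assert np.linalg.eigvalsh(X).min() >= -1e-10 and X.min() >= 0
```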
The superior numerical performance of \QSDPNAL over {\sc Qsdpnal}-Phase I clearly demonstrates the importance and necessity of our proposed two-phase algorithm for which second-order information is incorporated in the inexact augmented Lagrangian algorithm in Phase II. \begin{footnotesize} \begin{longtable}{| c c | c | c | c| c|c|} \caption{The performance of \QSDPNAL and {\sc Qsdpnal}-Phase I on QSDP-QAP problems (accuracy $= 10^{-6}$). In the table, ``a'' stands for \QSDPNAL and ``b'' stands for {\sc Qsdpnal}-Phase I, respectively. The computation time is in the format of ``hours:minutes:seconds''.}\label{table:qsdpnal} \\ \hline \mc{2}{|c|}{} &\mc{1}{c|}{} &\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}\\[-5pt] \mc{2}{|c|}{} & \mc{1}{c|}{iter.a} &\mc{1}{c|}{iter.b} &\mc{1}{c|}{$\eta_{\textup{qsdp}}$} &\mc{1}{c|}{$\eta_\textup{gap}$}&\mc{1}{c|}{time}\\[2pt] \hline \mc{1}{|@{}c@{}}{problem} &\mc{1}{@{}c@{}|}{$m_E$ $|$ $n$} &\mc{1}{c|}{it (subs)$|$itSCB} &\mc{1}{c|}{}&\mc{1}{c|}{a$|$b}&\mc{1}{c|}{a$|$b} &\mc{1}{c|}{a$|$b}\\ \hline \endhead chr12a &232 $;$ 144 &45 (239) $|$ 1969 &50000 & 9.9-7 $|$ {\green{ 2.2-6}} & {\blue{ -6.0-6}} $|$ {\red{ -2.5-5}} &41 $|$ 6:34\\[2pt] \hline chr12b &232 $;$ 144 &56 (324) $|$ 2428 &50000 & 9.9-7 $|$ {\green{ 3.7-6}} & {\red{ -2.0-5}} $|$ {\red{ -6.0-5}} &50 $|$ 6:27\\[2pt] \hline chr12c &232 $;$ 144 &56 (358) $|$ 2201 &50000 & 9.9-7 $|$ {\green{ 4.5-6}} & {\red{ -1.6-5}} $|$ {\red{ -6.1-5}} &46 $|$ 6:27\\[2pt] \hline chr15a &358 $;$ 225 &84 (648) $|$ 2866 &50000 & 9.9-7 $|$ {\blue{ 5.7-6}} & {\red{ -1.6-5}} $|$ {\bf -1.0-4} &2:25 $|$ 12:23\\[2pt] \hline chr15b &358 $;$ 225 &90 (584) $|$ 4700 &50000 & 9.9-7 $|$ {\blue{ 7.3-6}} & {\red{ -1.3-5}} $|$ {\bf -1.3-4} &3:07 $|$ 12:31\\[2pt] \hline chr15c &358 $;$ 225 &65 (425) $|$ 2990 &50000 & 9.9-7 $|$ {\blue{ 6.8-6}} & {\red{ -2.4-5}} $|$ {\red{ -9.7-5}} &1:58 $|$ 12:40\\[2pt] \hline chr18a &511 $;$ 324 &256 (1957) $|$ 6003 &50000 & 7.3-7 $|$ {\blue{ 6.1-6}} & {\red{ -1.8-5}} $|$ 
{\bf -1.3-4} &14:34 $|$ 22:26\\[2pt] \hline chr18b &511 $;$ 324 &86 (565) $|$ 3907 &50000 & 9.9-7 $|$ {\blue{ 8.4-6}} & {\red{ -1.8-5}} $|$ {\bf -1.6-4} &5:26 $|$ 22:19\\[2pt] \hline chr20a &628 $;$ 400 &39 (274) $|$ 1751 &4133 & 9.5-7 $|$ 9.7-7 & {\red{ -3.3-5}} $|$ {\red{ -3.4-5}} &4:50 $|$ 5:46\\[2pt] \hline chr20b &628 $;$ 400 &72 (490) $|$ 4044 &50000 & 9.6-7 $|$ {\blue{ 9.1-6}} & {\red{ -3.7-5}} $|$ {\bf -1.4-4} &12:30 $|$ 58:57\\[2pt] \hline chr20c &628 $;$ 400 &144 (981) $|$ 5242 &50000 & 9.9-7 $|$ {\red{ 1.4-5}} & {\red{ -3.1-5}} $|$ {\bf -3.1-4} &21:56 $|$ 55:41\\[2pt] \hline chr22a &757 $;$ 484 &67 (473) $|$ 2804 &50000 & 9.9-7 $|$ {\green{ 5.0-6}} & {\red{ -1.0-5}} $|$ {\red{ -7.6-5}} &13:49 $|$ 1:21:01\\[2pt] \hline chr22b &757 $;$ 484 &69 (505) $|$ 3581 &50000 & 9.9-7 $|$ {\blue{ 6.5-6}} & {\red{ -1.2-5}} $|$ {\bf -1.1-4} &17:12 $|$ 1:19:48\\[2pt] \hline els19 &568 $;$ 361 &43 (403) $|$ 2437 &50000 & 9.8-7 $|$ {\green{ 1.2-6}} & {\green{ -4.2-6}} $|$ {\blue{ -8.6-6}} &4:39 $|$ 1:04:08\\[2pt] \hline esc16a &406 $;$ 256 &86 (506) $|$ 5446 &50000 & 9.9-7 $|$ {\blue{ 8.3-6}} & {\red{ -2.5-5}} $|$ {\bf -1.3-4} &4:47 $|$ 16:54\\[2pt] \hline esc16b &406 $;$ 256 &157 (1425) $|$ 9222 &50000 & 9.9-7 $|$ {\red{ 1.3-5}} & {\red{ -3.7-5}} $|$ {\bf -2.7-4} &11:18 $|$ 16:56\\[2pt] \hline esc16c &406 $;$ 256 &188 (1404) $|$ 13806 &50000 & 9.9-7 $|$ {\red{ 1.2-5}} & {\red{ -4.6-5}} $|$ {\bf -3.5-4} &14:29 $|$ 16:57\\[2pt] \hline esc16d &406 $;$ 256 &101 (603) $|$ 8043 &50000 & 9.9-7 $|$ {\green{ 4.8-6}} & {\red{ -1.1-5}} $|$ {\red{ -7.4-5}} &6:13 $|$ 16:55\\[2pt] \hline esc16e &406 $;$ 256 &110 (847) $|$ 4286 &50000 & 9.9-7 $|$ {\green{ 4.8-6}} & {\red{ -1.4-5}} $|$ {\red{ -5.6-5}} &5:50 $|$ 16:35\\[2pt] \hline esc16g &406 $;$ 256 &85 (581) $|$ 3818 &50000 & 9.9-7 $|$ {\blue{ 7.2-6}} & {\red{ -2.2-5}} $|$ {\red{ -9.4-5}} &4:23 $|$ 16:44\\[2pt] \hline esc16h &406 $;$ 256 &228 (1732) $|$ 11733 &50000 & 8.6-7 $|$ {\blue{ 8.6-6}} & {\blue{ -9.3-6}} $|$ {\red{ -8.7-5}} 
&13:58 $|$ 16:21\\[2pt] \hline esc16i &406 $;$ 256 &41 (307) $|$ 3165 &50000 & 9.6-7 $|$ {\green{ 4.6-6}} & {\red{ -2.0-5}} $|$ {\red{ -6.0-5}} &2:28 $|$ 16:54\\[2pt] \hline esc16j &406 $;$ 256 &163 (1179) $|$ 5603 &50000 & 9.9-7 $|$ {\blue{ 6.6-6}} & {\red{ -2.0-5}} $|$ {\bf -1.0-4} &8:03 $|$ 16:24\\[2pt] \hline esc32b &1582 $;$ 1024 &80 (456) $|$ 5026 &50000 & 9.9-7 $|$ {\bf 1.5-4} & {\red{ -2.9-5}} $|$ {\bf -5.3-4} & 1:53:02 $|$ 7:42:25\\[2pt] \hline esc32c &1582 $;$ 1024 &105 (667) $|$ 4203 &50000 & 8.9-7 $|$ {\blue{ 7.2-6}} & {\red{ -1.0-5}} $|$ {\red{ -7.3-5}} & 2:40:09 $|$ 7:36:49\\[2pt] \hline esc32d &1582 $;$ 1024 &141 (909) $|$ 4852 &50000 & 9.9-7 $|$ {\blue{ 5.7-6}} & {\blue{ -8.6-6}} $|$ {\red{ -6.4-5}} & 3:09:45 $|$ 7:19:17\\[2pt] \hline had12 &232 $;$ 144 &53 (320) $|$ 2903 &50000 & 9.3-7 $|$ {\green{ 3.3-6}} & {\blue{ -7.2-6}} $|$ {\red{ -2.7-5}} &55 $|$ 6:52\\[2pt] \hline had14 &313 $;$ 196 &60 (443) $|$ 3634 &50000 & 9.9-7 $|$ {\green{ 4.5-6}} & {\blue{ -6.3-6}} $|$ {\red{ -2.8-5}} &1:50 $|$ 11:21\\[2pt] \hline had16 &406 $;$ 256 &208 (1616) $|$ 7604 &50000 & 8.1-7 $|$ {\blue{ 9.9-6}} & {\green{ -4.7-6}} $|$ {\red{ -9.1-5}} &10:49 $|$ 16:54\\[2pt] \hline had18 &511 $;$ 324 &82 (537) $|$ 4367 &50000 & 9.9-7 $|$ {\blue{ 9.2-6}} & {\red{ -1.5-5}} $|$ {\red{ -7.3-5}} &6:10 $|$ 24:46\\[2pt] \hline had20 &628 $;$ 400 &121 (848) $|$ 5024 &50000 & 9.5-7 $|$ {\red{ 1.0-5}} & {\red{ -1.2-5}} $|$ {\bf -1.0-4} &20:30 $|$ 51:08\\[2pt] \hline kra30a &1393 $;$ 900 &107 (674) $|$ 4665 &50000 & 9.5-7 $|$ {\blue{ 6.5-6}} & {\red{ -6.7-5}} $|$ {\bf -1.7-4} & 1:51:20 $|$ 7:23:56\\[2pt] \hline kra30b &1393 $;$ 900 &107 (674) $|$ 4853 &50000 & 9.9-7 $|$ {\blue{ 6.5-6}} & {\red{ -5.7-5}} $|$ {\bf -1.7-4} & 2:00:03 $|$ 8:02:08\\[2pt] \hline kra32 &1582 $;$ 1024 &106 (636) $|$ 6875 &50000 & 9.9-7 $|$ {\blue{ 7.4-6}} & {\red{ -3.6-5}} $|$ {\bf -1.6-4} & 2:47:33 $|$ 10:28:26\\[2pt] \hline lipa30a &1393 $;$ 900 &64 (451) $|$ 2924 &50000 & 9.9-7 $|$ {\blue{ 5.6-6}} & {\blue{ 
-6.8-6}} $|$ {\red{ -3.1-5}} & 1:10:12 $|$ 6:52:34\\[2pt] \hline lipa30b &1393 $;$ 900 &257 (1918) $|$ 7507 &50000 & 9.9-7 $|$ {\blue{ 7.0-6}} & {\green{ -1.9-6}} $|$ {\bf -1.9-4} & 4:28:38 $|$ 7:18:10\\[2pt] \hline lipa40a &2458 $;$ 1600 &51 (349) $|$ 2193 &50000 & 7.7-7 $|$ {\green{ 4.2-6}} & {\green{ -2.3-6}} $|$ {\red{ -2.0-5}} & 3:03:58 $|$ 23:53:41\\[2pt] \hline lipa40b &2458 $;$ 1600 &156 (1339) $|$ 4750 &50000 & 9.1-7 $|$ {\green{ 3.9-6}} & {\blue{ 6.0-6}} $|$ {\red{ -8.9-5}} & 9:36:10 $|$ 18:57:01\\[2pt] \hline nug12 &232 $;$ 144 &84 (478) $|$ 4068 &50000 & 9.8-7 $|$ {\blue{ 5.4-6}} & {\red{ -3.0-5}} $|$ {\bf -1.1-4} &1:32 $|$ 6:34\\[2pt] \hline nug14 &313 $;$ 196 &93 (610) $|$ 4953 &50000 & 9.7-7 $|$ {\blue{ 6.9-6}} & {\red{ -2.6-5}} $|$ {\bf -1.1-4} &3:12 $|$ 10:27\\[2pt] \hline nug15 &358 $;$ 225 &102 (660) $|$ 5627 &50000 & 7.0-7 $|$ {\red{ 1.1-5}} & {\red{ -2.2-5}} $|$ {\bf -1.7-4} &4:12 $|$ 12:42\\[2pt] \hline nug16a &406 $;$ 256 &86 (530) $|$ 4945 &50000 & 9.9-7 $|$ {\blue{ 7.2-6}} & {\red{ -2.3-5}} $|$ {\bf -1.1-4} &4:39 $|$ 18:17\\[2pt] \hline nug16b &406 $;$ 256 &97 (631) $|$ 4777 &50000 & 9.9-7 $|$ {\red{ 1.2-5}} & {\red{ -2.5-5}} $|$ {\bf -2.0-4} &5:19 $|$ 19:06\\[2pt] \hline nug17 &457 $;$ 289 &110 (772) $|$ 5365 &50000 & 9.9-7 $|$ {\red{ 1.3-5}} & {\red{ -2.4-5}} $|$ {\bf -1.8-4} &7:41 $|$ 22:47\\[2pt] \hline nug18 &511 $;$ 324 &85 (559) $|$ 4367 &50000 & 9.9-7 $|$ {\blue{ 6.1-6}} & {\red{ -3.3-5}} $|$ {\red{ -9.9-5}} &6:10 $|$ 26:20\\[2pt] \hline nug20 &628 $;$ 400 &114 (746) $|$ 5220 &50000 & 9.9-7 $|$ {\blue{ 8.6-6}} & {\red{ -2.3-5}} $|$ {\bf -1.3-4} &19:25 $|$ 55:36\\[2pt] \hline nug21 &691 $;$ 441 &84 (569) $|$ 4322 &50000 & 9.7-7 $|$ {\blue{ 6.8-6}} & {\red{ -4.0-5}} $|$ {\bf -1.1-4} &18:48 $|$ 1:09:39\\[2pt] \hline nug22 &757 $;$ 484 &121 (822) $|$ 5822 &50000 & 9.6-7 $|$ {\blue{ 8.3-6}} & {\red{ -4.1-5}} $|$ {\bf -1.3-4} &34:03 $|$ 2:08:02\\[2pt] \hline nug24 &898 $;$ 576 &89 (542) $|$ 4345 &50000 & 9.9-7 $|$ {\blue{ 6.5-6}} & {\red{ 
-3.2-5}} $|$ {\bf -1.1-4} &34:08 $|$ 3:04:07\\[2pt] \hline nug25 &973 $;$ 625 &129 (860) $|$ 5801 &50000 & 9.9-7 $|$ {\blue{ 6.9-6}} & {\red{ -2.2-5}} $|$ {\bf -1.1-4} & 1:09:12 $|$ 3:41:22\\[2pt] \hline nug27 &1132 $;$ 729 &148 (951) $|$ 8576 &50000 & 9.9-7 $|$ {\blue{ 8.3-6}} & {\red{ -2.6-5}} $|$ {\bf -1.3-4} & 2:09:22 $|$ 4:59:06\\[2pt] \hline nug28 &1216 $;$ 784 &119 (758) $|$ 6389 &50000 & 9.7-7 $|$ {\blue{ 7.8-6}} & {\red{ -2.9-5}} $|$ {\bf -1.1-4} & 1:43:52 $|$ 5:50:50\\[2pt] \hline nug30 &1393 $;$ 900 &105 (777) $|$ 4912 &50000 & 9.9-7 $|$ {\blue{ 9.3-6}} & {\red{ -2.6-5}} $|$ {\bf -1.2-4} & 2:08:56 $|$ 7:40:43\\[2pt] \hline rou12 &232 $;$ 144 &78 (418) $|$ 6600 &50000 & 9.9-7 $|$ {\green{ 4.4-6}} & {\red{ -2.2-5}} $|$ {\red{ -9.3-5}} &2:13 $|$ 6:36\\[2pt] \hline rou15 &358 $;$ 225 &106 (639) $|$ 5952 &50000 & 9.9-7 $|$ {\blue{ 5.4-6}} & {\red{ -2.7-5}} $|$ {\bf -1.0-4} &4:12 $|$ 13:02\\[2pt] \hline rou20 &628 $;$ 400 &65 (359) $|$ 4238 &50000 & 9.9-7 $|$ {\green{ 3.6-6}} & {\red{ -2.5-5}} $|$ {\red{ -6.1-5}} &11:53 $|$ 1:00:07\\[2pt] \hline scr12 &232 $;$ 144 &56 (295) $|$ 2205 &50000 & 9.9-7 $|$ {\green{ 3.2-6}} & {\blue{ -7.5-6}} $|$ {\red{ -4.0-5}} &43 $|$ 6:57\\[2pt] \hline scr15 &358 $;$ 225 &121 (769) $|$ 5730 &50000 & 7.4-7 $|$ {\red{ 1.0-5}} & {\red{ -2.1-5}} $|$ {\bf -1.9-4} &4:39 $|$ 12:45\\[2pt] \hline scr20 &628 $;$ 400 &89 (590) $|$ 5621 &50000 & 9.9-7 $|$ {\blue{ 8.3-6}} & {\red{ -4.1-5}} $|$ {\bf -1.6-4} &18:49 $|$ 1:01:19\\[2pt] \hline tai12a &232 $;$ 144 &110 (807) $|$ 6090 &50000 & 9.9-7 $|$ {\blue{ 8.9-6}} & {\red{ -1.8-5}} $|$ {\bf -1.3-4} &2:40 $|$ 6:15\\[2pt] \hline tai12b &232 $;$ 144 &123 (856) $|$ 6323 &50000 & 8.6-7 $|$ {\blue{ 7.3-6}} & {\red{ -2.5-5}} $|$ {\bf -1.1-4} &2:43 $|$ 6:19\\[2pt] \hline tai15a &358 $;$ 225 &67 (405) $|$ 4301 &50000 & 9.4-7 $|$ {\green{ 3.0-6}} & {\red{ -2.8-5}} $|$ {\red{ -5.9-5}} &2:48 $|$ 13:09\\[2pt] \hline tai17a &457 $;$ 289 &95 (569) $|$ 6142 &50000 & 9.9-7 $|$ {\green{ 3.8-6}} & {\red{ -2.0-5}} 
$|$ {\red{ -6.5-5}} &6:33 $|$ 19:23\\[2pt] \hline tai20a &628 $;$ 400 &87 (498) $|$ 4762 &50000 & 9.7-7 $|$ {\green{ 3.0-6}} & {\red{ -2.2-5}} $|$ {\red{ -5.4-5}} &15:00 $|$ 1:00:55\\[2pt] \hline tai25a &973 $;$ 625 &25 (138) $|$ 3438 &10084 & 9.9-7 $|$ 9.9-7 & {\blue{ 9.4-6}} $|$ {\red{ -1.5-5}} &17:46 $|$ 47:55\\[2pt] \hline tai25b &973 $;$ 625 &164 (1219) $|$ 7803 &50000 & 9.8-7 $|$ {\red{ 1.4-5}} & {\red{ -4.8-5}} $|$ {\bf -2.6-4} & 1:33:02 $|$ 3:33:13\\[2pt] \hline tai30a &1393 $;$ 900 &97 (546) $|$ 4270 &50000 & 7.8-7 $|$ {\green{ 2.8-6}} & {\red{ -1.4-5}} $|$ {\red{ -4.1-5}} & 1:22:01 $|$ 7:21:02\\[2pt] \hline tai30b &1393 $;$ 900 &162 (1229) $|$ 6845 &50000 & 9.9-7 $|$ {\red{ 1.2-5}} & {\red{ -3.3-5}} $|$ {\bf -2.1-4} & 3:10:41 $|$ 7:21:29\\[2pt] \hline tai35a &1888 $;$ 1225 &95 (537) $|$ 5781 &50000 & 9.9-7 $|$ {\green{ 2.8-6}} & {\blue{ -9.2-6}} $|$ {\red{ -3.5-5}} & 3:34:56 $|$ 15:04:40\\[2pt] \hline tai35b &1888 $;$ 1225 &152 (1195) $|$ 5303 &50000 & 9.6-7 $|$ {\red{ 1.2-5}} & {\red{ -3.8-5}} $|$ {\bf -1.9-4} & 5:56:59 $|$ 13:37:28\\[2pt] \hline tai40a &2458 $;$ 1600 &79 (398) $|$ 6381 &50000 & 9.1-7 $|$ {\green{ 3.0-6}} & {\red{ -1.7-5}} $|$ {\red{ -3.4-5}} & 6:01:56 $|$ 23:07:21\\[2pt] \hline tho30 &1393 $;$ 900 &116 (761) $|$ 5468 &50000 & 8.9-7 $|$ {\blue{ 8.0-6}} & {\red{ -3.1-5}} $|$ {\bf -1.3-4} & 2:15:55 $|$ 7:39:54\\[2pt] \hline tho40 &2458 $;$ 1600 &122 (762) $|$ 3834 &50000 & 9.9-7 $|$ {\blue{ 7.4-6}} & {\red{ -2.9-5}} $|$ {\bf -1.0-4} & 6:28:52 $|$ 26:40:25\\[2pt] \hline \end{longtable} \end{footnotesize} \subsection{Evaluation of \QSDPNAL on instances generated from sensor network localization problems} We also test the QSDP problems arising from the following sensor network localization problems with $m$ anchors and $l$ sensors: \begin{equation}\label{snl} \begin{array}{rl} \min_{u_1,\ldots,u_l\in\Re^d} \Big\{ \frac{1}{2}\sum_{(i,j)\in\cN} \big(\norm{u_i - u_j}^2 - d_{ij}^2\big)^2 \,\mid\, \norm{u_i - a_k}^2 = d_{ik}^2,\, 
(i,k)\in\cM\Big\}, \end{array} \end{equation} where the location of each anchor $a_k\in \Re^d$, $k=1,\ldots,m$ is known, and the location of each sensor $u_i\in\Re^d$, $i=1,\ldots,l$, is to be determined. The distance measures $\{ d_{ij} \mid (i,j)\in\cN\}$ and $\{ d_{ik}\mid (i,k)\in\cM\}$ are known pair-wise distances between sensor-sensor pairs and sensor-anchor pairs, respectively. Note that our model \eqref{snl} is a variant of the model studied in \cite{biswas2006snl}. Let $U = [u_1\ u_2\ \ldots u_l]\in\Re^{d\times l}$ be the position matrix that needs to be determined. We know that \[\norm{u_i - u_j}^2 = e_{ij}^T U^T U e_{ij}, \qquad \norm{u_i-a_k}^2 = a_{ik}^T [U \ I_d]^T[U \ I_d]a_{ik},\] where $e_{ij} = e_i - e_j$ and $a_{ik} = [e_i; -a_k]$. Here, $e_i$ is the $i$th unit vector in $\Re^l$, and $I_d$ is the $d\times d$ identity matrix. Let $g_{ik} = a_{ik}$ for $(i,k)\in\cM$, $g_{ij} = [e_{ij}; {\bf 0}_m]$ for $(i,j)\in\cN$, and \[V = U^T U,\qquad X = [ V \ U^T; U \ I_d ]\in \cS^{(d+l)\times(d+l)}. \] Following the same approach in \cite{biswas2006snl}, we can obtain the following QSDP relaxation model with a regularization term for \eqref{snl}: \begin{equation} \label{snl_QSDP} \begin{array}{rl} \min & \frac{1}{2}\sum_{(i,j)\in\cN} \big(g_{ij}^T X g_{ij} - d_{ij}^2\big)^2 - \lambda\inprod{I_{n+d} - aa^T}{X} \\[8pt] {\rm s.t.} & g_{ik}^T X g_{ik} = d_{ik}^2,\, (i,k)\in\cM, \quad X\succeq 0, \end{array} \end{equation} where $\lambda$ is a given positive regularization parameter and $a = [\hat e; \hat a]$ with $\hat e = e/\sqrt{l+m}$ and $\hat a = \sum_{k=1}^m a_k/\sqrt{l+m}$. Here $e\in\Re^n$ is the vector of all ones. The test examples are generated in the following {manner}. We first randomly generate $l$ points $\{\hat u_{i}\in\Re^d \mid i=1,\ldots,l\}$ in $[-0.5,0.5]^d$.
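The two quadratic identities above, which make the distance terms linear in $X$, are easy to verify numerically. A Python sketch with random sensor and anchor positions (illustrative names):

```python
import numpy as np

rng = np.random.default_rng(0)
d, l, m = 2, 4, 3
U = rng.standard_normal((d, l))            # sensor positions (columns)
anchors = rng.standard_normal((d, m))      # anchor positions (columns)

def e(i, n):
    v = np.zeros(n); v[i] = 1.0
    return v

i, j, k = 0, 2, 1
e_ij = e(i, l) - e(j, l)
a_ik = np.concatenate([e(i, l), -anchors[:, k]])
UI = np.hstack([U, np.eye(d)])             # [U  I_d]

# ||u_i - u_j||^2 = e_ij^T (U^T U) e_ij
lhs1 = np.linalg.norm(U[:, i] - U[:, j]) ** 2
rhs1 = e_ij @ (U.T @ U) @ e_ij
assert abs(lhs1 - rhs1) < 1e-10

# ||u_i - a_k||^2 = a_ik^T [U I_d]^T [U I_d] a_ik
lhs2 = np.linalg.norm(U[:, i] - anchors[:, k]) ** 2
rhs2 = a_ik @ (UI.T @ UI) @ a_ik
assert abs(lhs2 - rhs2) < 1e-10
```

The second identity holds because $[U\ I_d]\,a_{ik} = U e_i - a_k = u_i - a_k$, which is exactly the substitution used in the relaxation.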
Then, the edge set $\cN$ is generated by considering only pairs of points that have distances less than a given positive number $R$, i.e., \[\cN = \{(i,j)\,\mid\, \norm{\hat u_i - \hat u_j}\le R, \ 1\le i < j\le l\}.\] Given $m$ anchors $\{ a_{k}\in\Re^d \mid k=1,\ldots,m\}$, the edge set $\cM$ is similarly given by \[\cM = \{(i,k)\,\mid\, \norm{\hat u_i - a_k}\le R, \ 1\le i \le l, \ 1\le k\le m\}.\] We also assume that the {observed} distances $d_{ij}$ are perturbed by random noise $\varepsilon_{ij}$ as follows: \[d_{ij} = \hat d_{ij}|1+\tau\varepsilon_{ij}|,\quad (i,j)\in\cN,\] where $\hat d_{ij}$ is the true distance between points $i$ and $j$, $\varepsilon_{ij}$ are assumed to be independent standard Normal random variables, and $\tau$ is the noise parameter. For the numerical experiments, we generate $10$ instances where the number of sensors $l$ ranges from 250 to 1500 and the dimension $d$ is set to be 2 or 3. We set the noise factor $\tau = 10\%$. The 4 anchors for the two-dimensional case ($d=2$) are placed at \[(\pm 0.3,\pm 0.3),\] and the positions of the anchors for the three-dimensional case ($d=3$) are given by \[\left( \begin{array}{cccc} 1/3 & 2/3 & 2/3 & 1/3 \\ 1/3 & 2/3 & 1/3 & 2/3 \\ 1/3 & 1/3 & 2/3 & 2/3 \\ \end{array} \right) - 0.5. \] \begin{footnotesize} \begin{longtable}{| c c | c | c | c| c|c|} \caption{The performance of \QSDPNAL and {\sc Qsdpnal}-Phase I on the sensor network localization problems (dual of \eqref{snl_QSDP}) (accuracy $= 10^{-6}$). In the table, ``a'' stands for \QSDPNAL and ``b'' stands for {\sc Qsdpnal}-Phase I, respectively.
The computation time is in the format of ``hours:minutes:seconds''.}\label{table:snl} \\ \hline \mc{2}{|c|}{} &\mc{1}{c|}{} &\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}\\[-5pt] \mc{2}{|c|}{} & \mc{1}{c|}{iter.a} &\mc{1}{c|}{iter.b} &\mc{1}{c|}{$\eta_{\textup{qsdp}}$} &\mc{1}{c|}{$\eta_\textup{gap}$}&\mc{1}{c|}{time}\\[2pt] \hline \mc{1}{|@{}c@{}}{$d$} &\mc{1}{@{}c@{}|}{$m_E$ $|$ $n$ $|$ $R$} &\mc{1}{c|}{it (subs)$|$itSCB} &\mc{1}{c|}{}&\mc{1}{c|}{a$|$b}&\mc{1}{c|}{a$|$b} &\mc{1}{c|}{a$|$b}\\ \hline \endhead 2 &452 $|$ 252 $|$ 0.50 &12 (74) $|$ 652 &24049 & 7.1-7 $|$ 9.9-7 & -9.0-7 $|$ 6.8-7 &47 $|$ 6:10\\[2pt] \hline 2 &548 $|$ 502 $|$ 0.36 &12 (62) $|$ 1000 &12057 & 7.6-7 $|$ 9.9-7 & {\blue{ -9.2-6}} $|$ {\blue{ -9.1-6}} &1:49 $|$ 17:25\\[2pt] \hline 2 &633 $|$ 802 $|$ 0.28 &17 (85) $|$ 1000 &27361 & 3.0-7 $|$ 9.9-7 & {\green{ -2.4-6}} $|$ {\blue{ -9.9-6}} &5:42 $|$ 1:59:16\\[2pt] \hline 2 &684 $|$ 1002 $|$ 0.25 &17 (94) $|$ 1000 &50000 & 4.1-7 $|$ {\red{ 1.4-5}} & {\green{ -2.6-6}} $|$ -3.0-7 &10:18 $|$ 6:16:37\\[2pt] \hline 2 &781 $|$ 1502 $|$ 0.21 &21 (104) $|$ 1000 &50000 & 3.6-7 $|$ {\bf 9.5-4} & {\blue{ -6.3-6}} $|$ {\bf 5.1-3} &23:05 $|$ 13:47:39\\[2pt] \hline 2 &774 $|$ 2002 $|$ 0.18 &29 (156) $|$ 1000 &50000 & 7.3-7 $|$ {\bf 2.1-3} & {\green{ -3.8-6}} $|$ {\bf 1.4-2} &49:53 $|$ 23:20:28\\[2pt] \hline 3 &395 $|$ 253 $|$ 0.49 &11 (31) $|$ 408 &1487 & 9.8-7 $|$ 9.7-7 & {\green{ -1.4-6}} $|$ 1.3-7 &06 $|$ 18\\[2pt] \hline 3 &503 $|$ 503 $|$ 0.39 &14 (61) $|$ 877 &7882 & 3.5-7 $|$ 9.9-7 & {\green{ -1.1-6}} $|$ {\green{ 2.5-6}} &1:46 $|$ 7:18\\[2pt] \hline 3 &512 $|$ 803 $|$ 0.33 &15 (85) $|$ 1000 &10579 & 7.7-7 $|$ 9.9-7 & {\green{ -1.3-6}} $|$ 4.2-7 &7:13 $|$ 26:15\\[2pt] \hline 3 &513 $|$ 1003 $|$ 0.31 &16 (71) $|$ 1000 &14025 & 2.5-7 $|$ 9.9-7 & -7.6-7 $|$ {\green{ 4.2-6}} &8:46 $|$ 1:02:07\\[2pt] \hline 3 &509 $|$ 1503 $|$ 0.27 &19 (83) $|$ 1000 &23328 & 8.6-7 $|$ 9.9-7 & {\green{ -4.5-6}} $|$ {\blue{ 7.2-6}} &28:34 $|$ 4:13:07\\[2pt] \hline 3 &505 
$|$ 2003 $|$ 0.24 &19 (97) $|$ 1000 &50000 & 9.5-7 $|$ {\bf 3.3-4} & {\red{ -1.4-5}} $|$ {\bf -3.7-4} &49:28 $|$ 16:45:24\\[2pt] \hline \end{longtable} \end{footnotesize} {Let $\cN_i = \{ p \mid (i,p)\in \cN\}$ be the set of neighbors of the $i$th sensor.} To further test our algorithm {\sc Qsdpnal}, we generate the following valid inequalities and add them to problem \eqref{snl} \[\norm{\hat u_i - \hat u_j} \ge R,\, \forall\, (i,j)\in\widehat\cN,\] where { $ \widehat{\cN} = \bigcup_{i=1}^l \{ (i,j) \mid j \in \cN_p\backslash \cN_i \; \mbox{for some} \; p\in \cN_i\}.$ } Then, we obtain the following QSDP relaxation model \begin{equation} \label{snlI_QSDP} \begin{array}{rl} \min & \frac{1}{2}\sum_{(i,j)\in\cN} \big(g_{ij}^T X g_{ij} - d_{ij}^2\big)^2 - \lambda\inprod{I_{n+d} - aa^T}{X}\\[8pt] {\rm s.t.} & g_{ik}^T X g_{ik} = d_{ik}^2,\, (i,k)\in\cM, \\[5pt] &g_{ij}^T X g_{ij} \ge R^2,\, (i,j)\in\widehat\cN,\quad X\succeq 0. \end{array} \end{equation} In Tables \ref{table:snl} and \ref{table:snlI}, we present the detailed numerical results of \QSDPNAL and {\sc Qsdpnal}-Phase I in solving instances of problems \eqref{snl_QSDP} and \eqref{snlI_QSDP}, respectively. Clearly, \QSDPNAL outperforms the purely first-order algorithm {\sc Qsdpnal}-Phase I by a significant margin. This superior numerical performance of \QSDPNAL over {\sc Qsdpnal}-Phase I again demonstrates the importance and necessity of our proposed two-phase framework. \begin{footnotesize} \begin{longtable}{| c c | c | c | c| c|c|} \caption{The performance of \QSDPNAL and {\sc Qsdpnal}-Phase I on the sensor network localization problems (dual of \eqref{snlI_QSDP}) (accuracy $= 10^{-6}$). In the table, ``a'' stands for \QSDPNAL and ``b'' stands for {\sc Qsdpnal}-Phase I, respectively.
The computation time is in the format of ``hours:minutes:seconds''.}\label{table:snlI} \\ \hline \mc{2}{|c|}{} &\mc{1}{c|}{} &\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}&\mc{1}{c|}{}\\[-5pt] \mc{2}{|c|}{} & \mc{1}{c|}{iter.a} &\mc{1}{c|}{iter.b} &\mc{1}{c|}{$\eta_{\textup{qsdp}}$} &\mc{1}{c|}{$\eta_\textup{gap}$}&\mc{1}{c|}{time}\\[2pt] \hline \mc{1}{|@{}c@{}}{$d$} &\mc{1}{@{}c@{}|}{$m_E;m_I$ $|$ $n$ $|$ $R$} &\mc{1}{c|}{it (subs)$|$itSCB} &\mc{1}{c|}{}&\mc{1}{c|}{a$|$b}&\mc{1}{c|}{a$|$b} &\mc{1}{c|}{a$|$b}\\ \hline \endhead 2 &452 $;$ 14402 $|$ 252 $|$ 0.50 &15 (119) $|$ 603 &50000 & 6.4-7 $|$ {\green{ 1.9-6}} & -9.1-7 $|$ -5.1-8 &1:50 $|$ 24:28\\[2pt] \hline 2 &548 $;$ 55849 $|$ 502 $|$ 0.36 &16 (180) $|$ 1357 &29565 & 4.4-7 $|$ 9.9-7 & -2.2-7 $|$ {\green{ -2.3-6}} &11:28 $|$ 1:09:24\\[2pt] \hline 2 &633 $;$ 118131 $|$ 802 $|$ 0.28 &15 (226) $|$ 2330 &36651 & 6.3-7 $|$ 9.9-7 & {\green{ -2.4-6}} $|$ {\blue{ -5.4-6}} & 1:06:04 $|$ 3:43:51\\[2pt] \hline 2 &684 $;$ 160157 $|$ 1002 $|$ 0.25 &20 (265) $|$ 3384 &50000 & 4.5-7 $|$ {\green{ 2.7-6}} & {\green{ -1.7-6}} $|$ {\green{ 2.2-6}} & 2:36:41 $|$ 8:30:28\\[2pt] \hline 2 &724 $;$ 201375 $|$ 1202 $|$ 0.23 &21 (487) $|$ 3115 &50000 & 9.8-7 $|$ {\green{ 3.8-6}} & {\blue{ 5.2-6}} $|$ {\green{ -2.8-6}} & 6:36:28 $|$ 11:57:56\\[2pt] \hline 3 &395 $;$ 16412 $|$ 253 $|$ 0.49 &12 (88) $|$ 471 &2897 & 3.1-7 $|$ 9.8-7 & -3.3-7 $|$ -5.7-7 &46 $|$ 1:18\\[2pt] \hline 3 &503 $;$ 53512 $|$ 503 $|$ 0.39 &14 (136) $|$ 949 &11003 & 4.4-7 $|$ 9.9-7 & -9.9-7 $|$ -3.1-7 &8:23 $|$ 20:26\\[2pt] \hline 3 &512 $;$ 104071 $|$ 803 $|$ 0.33 &17 (145) $|$ 1762 &14144 & 6.6-7 $|$ 9.9-7 & {\green{ -2.6-6}} $|$ 6.0-7 &32:17 $|$ 1:09:10\\[2pt] \hline 3 &513 $;$ 139719 $|$ 1003 $|$ 0.31 &21 (198) $|$ 2406 &31832 & 5.1-7 $|$ 9.9-7 & 8.4-8 $|$ 4.5-8 & 1:18:56 $|$ 4:48:04\\[2pt] \hline 3 &526 $;$ 180236 $|$ 1203 $|$ 0.29 &21 (250) $|$ 2639 &19010 & 8.3-7 $|$ 9.9-7 & -3.8-8 $|$ {\red{ 1.1-5}} & 2:15:54 $|$ 4:16:09\\[2pt] \hline \end{longtable} 
\end{footnotesize} \section{Conclusions}\label{sec:conclu} In this paper, we have designed a two-phase augmented Lagrangian {based method, called {\sc Qsdpnal},} for solving large-scale convex quadratic semidefinite programming problems. The global and local convergence rate analyses of our algorithm are based on the classic results of proximal point algorithms \cite{rockafellar1976monotone,rockafellar1976augmented}, together with the recent advances in second order variational analysis of convex composite quadratic semidefinite programming \cite{cui2016on}. By devising ``smart'' numerical linear algebra, we have overcome various challenging numerical difficulties encountered in the {efficient} implementation of {\sc Qsdpnal}. Numerical experiments on various large-scale QSDPs have demonstrated the efficiency and robustness of our proposed two-phase framework in obtaining accurate solutions. Specifically, for {well-posed} problems, our {\sc Qsdpnal}-Phase I is already powerful enough and it is not absolutely necessary to execute {\sc Qsdpnal}-Phase II. On the other hand, for more difficult problems, the purely first-order {\sc Qsdpnal}-Phase I algorithm may stagnate because of {extremely} slow local convergence. In contrast, {with the activation of {\sc Qsdpnal}-Phase II which has second order information wisely incorporated, our {\sc Qsdpnal} algorithm can still obtain highly accurate solutions efficiently.}
\section{\label{sec:level1}INTRODUCTION} The transitional flow regime is frequently encountered in turbomachines, especially in aircraft engines operating at relatively low Reynolds numbers. As a consequence, a significant part of the flow on the blade surfaces undergoes the laminar-turbulent transition process. Boundary layer development, losses, efficiency, and momentum transfer are greatly affected by laminar-turbulent transition. Therefore, accurate prediction of the transition process is crucial for efficient as well as reliable aerospace designs \cite{pecnik2007application}. RANS simulations remain the most commonly used computational technique for the analysis of turbulent flows, and considerable effort has been spent in the past two decades to develop RANS based transition models for engineering applications to predict various kinds of transitional flows \cite{menter2002transition,menter2004correlation,menter2006transition,langtry2009correlation,menter2015one,wei2017modeling,tousi2021active}. Each model has its strengths and weaknesses, and by far the correlation-based transition models of Langtry and Menter \cite{langtry2009correlation,menter2015one} have been the most widely used in engineering industries, in particular the aerospace industry. Most RANS models adopt the Boussinesq turbulent viscosity hypothesis, i.e., the anisotropic part of the Reynolds stresses is proportional to the mean rate of strain, and are therefore also referred to as linear eddy viscosity models. It is well known that the restrictions of the Boussinesq turbulent viscosity hypothesis limit linear eddy viscosity models in yielding accurate predictions for complex flow features such as significant streamline curvature, separation, reattachment, and laminar-turbulent transition.
Large eddy simulation (LES) or direct numerical simulation (DNS) provides high-fidelity solutions for such problems, but the calculations are often too expensive in computational time and cost, especially for high-Reynolds-number flows. Therefore, accounting for the errors and uncertainties in RANS model predictions provides a means to quantify trust in the predictions, as well as enabling robust and reliability based design optimization. More expensive LES or DNS would only be considered necessary if the model form uncertainty is too large. The current study considers a physics-based approach recently introduced by Emory \textit{et al.} \cite{emory2013modeling}, namely the eigenspace perturbation method. This framework quantifies the model form uncertainty associated with the linear eddy viscosity model via sequential perturbations in the predicted amplitude (turbulence kinetic energy), shape (eigenvalues), and orientation (eigenvectors) of the anisotropy Reynolds stress tensor. This is an established method for RANS model UQ and has been applied to analyze and estimate the RANS uncertainty in flows through scramjets \cite{emory2011characterizing}, aircraft nozzle jets, turbomachinery, flows over streamlined bodies \cite{gorle2019epistemic}, a supersonic axisymmetric submerged jet \cite{mishra2017rans}, and canonical turbulent flows over a backward-facing step \cite{iaccarino2017eigenspace,cremades2019reynolds}. This method has been used for the robust design of Organic Rankine Cycle (ORC) turbine cascades \cite{razaaly2019optimization}. In aerospace applications, it has been used for design optimization under uncertainty \cite{cook2019optimization,mishra2020design,matha2022extending,matha2022assessment}.
In civil engineering applications, this method has been used to design urban canopies \cite{garcia2014quantifying}, to ensure the ventilation of enclosed spaces, and in wind engineering practice for turbulent bluff body flows \cite{gorle2015quantifying}. The perturbation method for RANS model UQ has also been used in conjunction with machine learning algorithms to provide precise estimates of RANS model uncertainty in the presence of data \cite{xiao2016quantifying,wu2016bayesian,parish2016paradigm,xiao2017random,wang2017physics,wang2017comprehensive,heyse2021estimating}. The method is also being used for the creation of probabilistic aerodynamic databases, enabling the certification of virtual aircraft designs \cite{mukhopadhaya2020multi,nigam2021toolset}. All of the aforementioned studies that adopted the eigenspace perturbation framework focused on eigenvalue and eigenvector perturbations but did not consider the turbulence kinetic energy perturbation. According to Mishra and Iaccarino \cite{mishra2019theoretical}, the turbulence kinetic energy perturbation varies the coefficient of turbulent viscosity in the Boussinesq turbulent viscosity hypothesis. Currently, all eddy viscosity models utilize a predetermined constant value of this coefficient. In reality, the coefficient of turbulent viscosity varies between different turbulent flow scenarios and even between different regions of the same turbulent flow \cite{mishra2019theoretical}. Therefore, perturbing the amplitude of the anisotropy Reynolds stress tensor not only captures the full range of uncertainties introduced by the Boussinesq turbulent viscosity hypothesis, but also plays an important role in capturing the true physics of the turbulent flow. However, studies of the turbulence kinetic energy perturbation are lacking; the only studies that address it are those of \cite{gorle2013framework,cremades2019reynolds}.
Yet to date, the combined effect of the turbulence kinetic energy and eigenvalue perturbations has not been examined for airfoil flows. It should be noted that introducing uniform perturbations in the entire flow field often leads to overly conservative confidence intervals, because decades of experience in RANS modeling show that the models are not always inaccurate. Consequently, it is reasonable to introduce uncertainties only in the regions of the flow where the model is deemed plausibly untrustworthy. Gorl{\'e} \textit{et al.} \cite{gorle2014deviation} first proposed the concept of an \textit{ad hoc} ``marker function'' that identifies regions that deviate from parallel shear flow. A recent study by Gorl{\'e} \textit{et al.} \cite{gorle2019epistemic} employed this marker function in the simulation of a flow over a periodic wavy wall. Emory \textit{et al.} \cite{emory2013modeling} also provided a variety of marker functions aimed at spatially varying the magnitude of the eigenvalue perturbation in a computational domain. Nevertheless, marker function development remains under-explored, and more rigorous discussion and validation of new markers is needed. There are few methods for implementing the effects of the model form uncertainty on a transitional near-wall flow in a RANS formulation. In the present work, the local-correlation laminar-turbulent transition model of Langtry and Menter \cite{langtry2009correlation} is used to close the mean transport equations. It has been extensively used to predict a wide variety of laminar-turbulent transitional flows, including natural transition. However, there are few studies concerning the model form uncertainty in transition modeling.
Therefore, the objective of this paper is to advance the understanding of the performance of the eigenspace perturbation approach for quantifying the model form uncertainty in RANS simulations of transitional flows over an SD7003 airfoil using the transition model of Langtry and Menter \cite{langtry2009correlation}. Specifically, the objectives of this study are (1) to develop a new regression based marker function $M_{k}$ for the perturbation to the amplitude of the anisotropy Reynolds stress tensor based on the turbulence kinetic energy discrepancy between the RANS and in-house DNS \cite{zhang2021turbulent} datasets; (2) to explore the effect of the turbulence kinetic energy perturbation on various quantities of interest (QoIs) through a set of uniform perturbations; and (3) to gain a thorough understanding of the combined effect of the shape and marker-involved amplitude perturbations to the anisotropy Reynolds stress tensor. A novelty of this study lies in the application of the eigenspace perturbation method to transitional flows, as opposed to fully developed turbulent flows as is done in almost all prior investigations. \section{Methodology} \subsection{\label{sec:level2}Governing equations} The flow was assumed to be two-dimensional and incompressible.
The RANS formulation of the continuity and momentum equations is as follows: \begin{equation} \label{p_Continuity} \frac{\partial \left\langle U_{i} \right\rangle}{\partial x_{i}}=0, \end{equation} \begin{equation} \label{p_Momentum} \frac{\mathrm{D} \left\langle U_{j}\right\rangle}{\mathrm{D}t}=-\frac{1}{\rho} \frac{\partial \left\langle P \right\rangle}{\partial x_{j}}+\nu \frac{\partial^{2} {\left\langle U_{j} \right\rangle}}{\partial x_{i} \partial x_{i}}-\frac{\partial \left\langle u_{i} u_{j}\right\rangle}{\partial x_{i}} \end{equation} \noindent where $\left\langle \ \right\rangle$ represents time-averaging, $\rho$ is the density, $\left\langle P \right\rangle$ is the time-averaged pressure, and $\nu$ is the kinematic viscosity. The $\left\langle U_{i}\right\rangle$ are the time-averaged velocity components. The Reynolds stress term in Eq. \ref{p_Momentum}, i.e., $\left\langle u_{i}u_{j}\right\rangle$, is an unknown that needs to be approximated using a RANS model. In the results presented in this study for a flow over an SD7003 airfoil, the version of the shear-stress transport (SST) $k-\omega$ model \cite{menter1993zonal,hellsten1998some,menter2001elements,menter2003ten} modified for transitional flow simulations by Langtry and Menter \cite{langtry2009correlation} is considered. The RANS based transition model \cite{langtry2009correlation} is a linear eddy viscosity model based on the Boussinesq turbulent viscosity hypothesis as follows: \begin{equation}\label{Eq:noMark_uiuj} \left\langle{u_{i} u_{j}}\right\rangle=\frac{2}{3} k \delta_{i j}-2 \nu_{\mathrm{t}} \left\langle S_{i j} \right\rangle, \end{equation} \noindent where $k$ is the turbulence kinetic energy, $\delta_{i j}$ is the Kronecker delta, $\nu_\mathrm{t}$ is the turbulent viscosity, and $\left\langle S_{i j} \right\rangle$ is the mean rate of strain tensor. Results obtained from the RANS based transition model bereft of any perturbations are referred to as ``baseline'' solutions. In Eq.
\ref{Eq:noMark_uiuj}, the deviatoric anisotropic part is \begin{equation}\label{Eqn:Bou_Ani_Tensor} \begin{aligned} a_{i j} & \equiv\left\langle u_{i} u_{j}\right\rangle-\frac{2}{3} k \delta_{i j} \\ &=-\nu_{\mathrm{t}}\left(\frac{\partial\left\langle U_{i}\right\rangle}{\partial x_{j}}+\frac{\partial\left\langle U_{j}\right\rangle}{\partial x_{i}}\right) \\ &=-2 \nu_{\mathrm{t}} \left\langle S_{i j} \right\rangle. \end{aligned} \end{equation} The (normalized) anisotropy is defined by \begin{equation}\label{Eq:noMark_AnisotropyTensor} b_{i j}= \frac{a_{ij}}{2k} = \frac{\big \langle {u_{i} u_{j}} \big \rangle }{2 k}-\frac{\delta_{i j}}{3} = -\frac{\nu_{t} }{k}\big \langle {S_{i j}} \big \rangle. \end{equation} \subsection{Eigenspace perturbation method} The Reynolds stress tensor $\left\langle u_{i} u_{j}\right\rangle$ is symmetric positive semi-definite \cite{pope2001turbulent}, thus it can be eigen-decomposed as follows: \begin{equation} \label{Eq:noMarker_Rij} \left\langle u_{i} u_{j}\right\rangle=2 k\left(\frac{\delta_{i j}}{3}+v_{i n} \hat{b}_{n l} v_{j l}\right), \end{equation} \noindent in which $k \equiv \left\langle u_{i} u_{i} \right\rangle / 2$, $v$ represents the matrix of orthonormal eigenvectors, and $\hat{b}$ represents the diagonal matrix of eigenvalues ($\lambda_{i}$), which are arranged in non-increasing order such that $\lambda_{1} \geq \lambda_{2} \geq \lambda_{3}$. The amplitude, shape, and orientation of $\left\langle u_{i}u_{j} \right\rangle$ are explicitly represented by $k$, $\lambda_{i}$, and $v_{i j}$, respectively. Equations \ref{Eq:noMark_AnisotropyTensor} and \ref{Eq:noMarker_Rij} lead to \begin{equation}\label{Eq:noMarker_bij} b_{i j}=-\frac{\nu_{t} }{k}\big \langle {S_{i j}} \big \rangle = v_{i n} \hat{b}_{n l} v_{j l}.
\end{equation} Equation \ref{Eq:noMarker_bij} indicates that the Boussinesq turbulent viscosity hypothesis requires that the shape and orientation of $\left\langle u_{i}u_{j} \right\rangle$ be determined by $(\nu_{t}/k)\big \langle {S_{i j}} \big \rangle$. This assumption implies the $a_{i j}$ tensor is aligned with the $\big \langle {S_{i j}} \big \rangle$ tensor, which is not true in most circumstances in practice, in particular for complex flows, e.g., strongly swirling flows, flow with significant streamline curvature, and flow with separation and reattachment, and is thus a source of the model form uncertainty. The eigenspace perturbation method was first proposed in \cite{emory2011modeling,gorle2012epistemic}. To quantify errors introduced by the model form uncertainty, perturbation is injected into the eigen-decomposed Reynolds stress defined in Eq. \ref{Eq:noMarker_Rij}. The perturbed Reynolds stresses are defined as \begin{equation}\label{Eqn_Rij_perturbed} \left\langle u_{i} u_{j}\right\rangle^{*}=2 k^{*}\left(\frac{1}{3} \delta_{i j}+v_{i n}^{*} \hat{b}_{n l}^{*} v_{j l}^{*}\right), \end{equation} \noindent where $k^{*}$ is the perturbed turbulence kinetic energy, $\hat{b}_{n l}^{*}$ is the diagonal matrix of perturbed eigenvalues, and $v_{i j}^{*}$ is the matrix of perturbed eigenvectors. For eigenvalue perturbations, Emory \textit{et al.} \cite{emory2011modeling} proposed a perturbation approach, which enforces the realizability constraints on $\left\langle u_{i}u_{j} \right\rangle$ via the barycentric map \cite{banerjee2007presentation}, as shown in Fig. \ref{fig:BMap_Sketch.pdf}, because the map contains all realizable states of $\left\langle u_{i}u_{j} \right\rangle$.
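To make the decomposition in Eq. \ref{Eq:noMarker_Rij} concrete, the following standalone Python sketch (our own illustration, not part of any solver) builds a synthetic symmetric positive semi-definite Reynolds stress tensor, extracts $k$ and the sorted eigenvalues and eigenvectors of the anisotropy tensor, and verifies the exact reconstruction:

```python
import numpy as np

# A synthetic realizable Reynolds stress tensor: A A^T is symmetric PSD
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
R = A @ A.T                        # plays the role of <u_i u_j>
k = 0.5 * np.trace(R)              # turbulence kinetic energy, k = <u_i u_i>/2

# normalized anisotropy b_ij = R_ij/(2k) - delta_ij/3, Eq. (noMark_AnisotropyTensor)
b = R / (2.0 * k) - np.eye(3) / 3.0
lam, v = np.linalg.eigh(b)         # eigh returns ascending eigenvalues
lam, v = lam[::-1], v[:, ::-1]     # reorder so that lam1 >= lam2 >= lam3

# reconstruction R = 2k (delta_ij/3 + v bhat v^T), Eq. (noMarker_Rij)
R_rec = 2.0 * k * (np.eye(3) / 3.0 + v @ np.diag(lam) @ v.T)
```

The reconstruction reproduces $R$ to machine precision, and the eigenvalues of $b_{ij}$ sum to zero because the anisotropy tensor is trace-free.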
Due to the realizability constraint of the semi-definiteness of $\left\langle u_{i}u_{j} \right\rangle$, there are three extreme states of componentiality of $\left\langle u_{i}u_{j} \right\rangle$: the one-component limiting state ($1C$), which has one non-zero principal fluctuation, i.e., $\hat{b}_{1c}=\operatorname{diag}[2 / 3,-1 / 3,-1 / 3]$; the two-component limiting state ($2C$), which has two non-zero principal fluctuations of the same intensity, i.e., $\hat{b}_{2c}=\operatorname{diag}[1 / 6,1 / 6, -1 / 3]$; and the three-component (isotropic) limiting state ($3C$), which has three non-zero principal fluctuations of the same intensity, i.e., $\hat{b}_{3c}=\operatorname{diag}[0,0,0]$. In addition, the $\hat{b}_{1c}$, $\hat{b}_{2c}$, and $\hat{b}_{3c}$ limiting states correspond to the three vertices of the barycentric map. Given an arbitrary point $\mathbf{x}$ within the barycentric map, any realizable $\left\langle u_{i}u_{j} \right\rangle$ can be determined by a convex combination of the three vertices $\mathbf{x}_{i c}$ (limiting states) with weights determined by the eigenvalues $\lambda_{i}$ as follows: \begin{equation}\label{Eq:noMarker_Coordinates_InsideBary} \mathbf{x} = \mathbf{x}_{1 \mathrm{c}}\left(\lambda_{1}-\lambda_{2}\right)+\mathbf{x}_{2 \mathrm{c}}\left(2 \lambda_{2}-2 \lambda_{3}\right)+\mathbf{x}_{3 \mathrm{c}}\left(3 \lambda_{3}+1\right). \end{equation} In order to define the perturbed eigenvalues $\hat{b}_{i j}^{*}$, we first determine the location on the barycentric map for the Reynolds stresses computed by a linear eddy viscosity model and subsequently inject uncertainty by shifting it to a new location on the barycentric map. In Fig.
\ref{fig:BMap_Sketch.pdf}, perturbations toward the $1c$, $2c$, and $3c$ vertices of the barycentric map shift point $O$ to $B_{1c/2c/3c}$, respectively, which can be written as \begin{equation}\label{Eq:noMarker_xstar} \mathbf{x}_{B(1c/2c/3c)}^{*}=\mathbf{x}_{O}+\Delta_{B}\left(\mathbf{x}_{1c/2c/3c}-\mathbf{x}_{O}\right), \end{equation} \noindent where $\Delta_{B}$ is the magnitude of the perturbation. Once the new location is determined, a new set of eigenvalues $\lambda_{i}$ can be computed from Eq. \ref{Eq:noMarker_Coordinates_InsideBary} and $b_{i j}$ can be reconstructed, which eventually yields $\left\langle u_{i}u_{j} \right\rangle^{*}$. As noted earlier in Eq. \ref{Eq:noMarker_bij}, the unperturbed anisotropy Reynolds stress tensor is modeled as $b_{i j}=-\frac{\nu_{t}}{k} \big \langle {S_{i j}} \big \rangle = v_{i n} \hat{b}_{n l} v_{j l}$ or, equivalently, $a_{i j} = -2\nu_{t} \big \langle {S_{i j}} \big \rangle =2kv_{i n} \hat{b}_{n l} v_{j l}$. Accordingly, the anisotropy Reynolds stress tensor subject to turbulence kinetic energy perturbation becomes \begin{equation}\label{Eq:perturb_aij} a_{i j}^{*} = -2\nu_{t}^{*} \big \langle {S_{i j}} \big \rangle =2k^{*}v_{i n} \hat{b}_{n l} v_{j l}. \end{equation} Because perturbing $k$ does not affect the eigenvalues and eigenvectors of the anisotropy Reynolds stress tensor, the change in the turbulent viscosity hypothesis has to be accounted for in the turbulent viscosity coefficient \cite{mishra2019theoretical}. Comparing the unperturbed anisotropy Reynolds stress tensor to Eq. \ref{Eq:perturb_aij}, it is easy to obtain \cite{mishra2019theoretical} \begin{equation}\label{Eq:k_nut} \frac{k^{*}}{k} = \frac{\nu_{T}^{*}}{\nu_{T}}, \quad \text{or equivalently,} \quad \nu_{T}^{*} = \frac{\nu_{T}k^{*}}{k}, \end{equation} \noindent where $k^{*} = k + \Delta_{k}$. From Eq. \ref{Eq:k_nut}, the turbulence kinetic energy perturbation leads to a spatial variation of the turbulent viscosity coefficient.
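The shift of Eq. \ref{Eq:noMarker_xstar} and the inversion of Eq. \ref{Eq:noMarker_Coordinates_InsideBary} can be illustrated with a short standalone sketch (our own Python illustration; the triangle vertices and the starting eigenvalues below are arbitrary but realizable choices):

```python
import numpy as np

# Vertices of the barycentric map (any non-degenerate triangle works)
x1c = np.array([1.0, 0.0])
x2c = np.array([0.0, 0.0])
x3c = np.array([0.5, np.sqrt(3.0) / 2.0])

def bary_coords(lam):
    """Convex weights of Eq. (noMarker_Coordinates_InsideBary), sorted lam."""
    return np.array([lam[0] - lam[1], 2.0 * (lam[1] - lam[2]), 3.0 * lam[2] + 1.0])

lam = np.array([0.2, 0.1, -0.3])            # trace-free, ordered eigenvalues
w = bary_coords(lam)                         # weights sum to 1 since tr(b)=0
x = w[0] * x1c + w[1] * x2c + w[2] * x3c     # location of the unperturbed state

Delta_B = 0.4                                # perturbation magnitude
x_star = x + Delta_B * (x1c - x)             # shift toward the 1c vertex

# Recover perturbed eigenvalues by inverting the affine barycentric map
M = np.column_stack([x1c - x3c, x2c - x3c])
c1, c2 = np.linalg.solve(M, x_star - x3c)    # weights of 1c and 2c vertices
c3 = 1.0 - c1 - c2
lam3s = (c3 - 1.0) / 3.0
lam2s = c2 / 2.0 + lam3s
lam1s = c1 + lam2s
```

Note that the perturbed eigenvalues remain ordered and trace-free, so the perturbed anisotropy tensor reconstructed from them is still realizable.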
Specifically, the relation between the turbulent viscosity and the turbulent viscosity coefficient $C_{\mu}$ is given by \begin{equation} \label{nut_unperturb} \nu_{T} = C_{\mu}\frac{k^{2}}{\varepsilon}, \end{equation} \noindent where $\varepsilon$ is the dissipation rate. Thus, the perturbed turbulent viscosity can be expressed as follows: \begin{equation}\label{nut_perturb} \nu_{T}^{*} = C_{\mu}^{*}\frac{{k^{*}}^{2}}{\varepsilon}, \end{equation} \noindent where $C_{\mu}^{*} = C_{\mu} + \Delta_{C_{\mu}}$. Substituting Eqs. \ref{nut_unperturb} and \ref{nut_perturb} into Eq. \ref{Eq:k_nut}, we get \cite{mishra2019theoretical} \begin{equation} \label{Eq:Mishra_Eq} \frac{k}{k^{*}} = \frac{C_{\mu}^{*}}{C_{\mu}}, \quad \text{or equivalently,} \quad \Delta_{C_{\mu}} = -\frac{\Delta_{k}C_{\mu}}{k+\Delta_{k}}. \end{equation} In this study, the turbulence kinetic energy discrepancies between the RANS based predictions and the in-house DNS data \cite{zhang2021turbulent} are modeled by high-order regressions. These regressions generate values of $k^{*}$ that vary spatially in the computational domain: \begin{equation}\label{Eq:Marker_Mk_Method} k^{*} = k +\Delta_{k} = kM_{k}, \quad M_{k} \sim f(x,y). \end{equation} In Eq. \ref{Eq:Marker_Mk_Method}, $M_{k}$ is a marker function of the $x$ and $y$ coordinates in a computational domain. Additionally, substituting Eq. \ref{Eq:Marker_Mk_Method} into Eq. \ref{Eq:Mishra_Eq} and rearranging, we get: \begin{equation} \label{Eq:1_M_k} \frac{1}{M_{k}} = \frac{C_{\mu}^{*}}{C_{\mu}}. \end{equation} Combining Eq. \ref{Eq:1_M_k} with $C_{\mu}^{*} = C_{\mu} + \Delta_{C_{\mu}}$, the relation between $M_{k}$ and $\Delta_{C_{\mu}}$ can be expressed as follows: \begin{equation}\label{Eq:Minghan_Eq} \Delta_{C_{\mu}} = \frac{C_{\mu}(1-M_{k})}{M_{k}}. \end{equation} Therefore, Eq. \ref{Eq:Minghan_Eq} provides the underlying model structure of the turbulence kinetic energy perturbation with a marker function involved.
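The algebra of Eqs. \ref{Eq:k_nut}--\ref{Eq:Minghan_Eq} is easy to verify numerically; the toy values below are arbitrary placeholders of our own (not calibrated model constants beyond the common choice $C_{\mu} = 0.09$):

```python
# Consistency check: perturbing k by a marker M_k rescales nu_T by M_k
# while the coefficient C_mu is rescaled by 1/M_k.
C_mu, k, eps = 0.09, 0.5, 2.0
nu_T = C_mu * k**2 / eps                    # Eq. (nut_unperturb)

M_k = 1.3                                   # marker value at some point
k_star = k * M_k                            # Eq. (Marker_Mk_Method)
Delta_C_mu = C_mu * (1.0 - M_k) / M_k       # Eq. (Minghan_Eq)
C_mu_star = C_mu + Delta_C_mu               # equals C_mu / M_k, Eq. (1_M_k)
nu_T_star = C_mu_star * k_star**2 / eps     # Eq. (nut_perturb)
```

As expected, $\nu_T^* = \nu_T k^*/k = \nu_T M_k$, so the product of the rescaled coefficient and the squared perturbed kinetic energy reproduces the linear scaling of Eq. \ref{Eq:k_nut}.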
A detailed description of the modeling of $k^{*}$ is presented in Section \ref{RegressionM}. In addition, eigenvector perturbations rotate the eigenvectors of the anisotropy Reynolds stress tensor with respect to the principal axes of the mean rate of strain. Recall that the eigenvectors of the anisotropy Reynolds stress tensor are forced to align along the principal axes of the mean rate of strain due to the limitations of the Boussinesq turbulent viscosity hypothesis \cite{pope2001turbulent}. This again violates the true physics of turbulent flow. Consequently, eigenvector perturbations extend the Boussinesq turbulent viscosity hypothesis to an anisotropic turbulent viscosity hypothesis. Unlike eigenvalue perturbations, which are strictly constrained by realizability, eigenvector perturbations are more difficult to constrain physically in a local sense. In this study, eigenvector perturbations are omitted for brevity. Therefore, the present study restricts attention to amplitude and shape perturbations of the anisotropy Reynolds stress tensor. \begin{figure} \centerline{\includegraphics[width=3.4in]{fig_Marker/BMap_Sketch.pdf}} \caption{Barycentric map.} \label{fig:BMap_Sketch.pdf} \end{figure} \subsection{Eigenspace perturbation framework in OpenFOAM} At present, the eigenspace perturbation framework is available only in Stanford University's SU2 CFD suite \cite{mishra2019uncertainty} and the TRACE solver of DLR \cite{matha2022assessment}. In spite of its utility to the design and simulation community, there are no tested and validated implementations of this framework available in popular CFD software. OpenFOAM \cite{winkelman1980flowfield} is the most widely used open source CFD software in research and academia. A contribution of this investigation is the development of a verified and validated implementation of the eigenspace perturbation framework for the OpenFOAM software.
Relatively few studies have been conducted to implement the eigenspace perturbation framework in a RANS formulation using OpenFOAM, e.g., see \cite{cremades2019reynolds,hornshoj2021quantifying}. All of these studies employed the MATLAB software coupled with OpenFOAM to decompose and recompose the Reynolds stress tensor. This increases the complexity of using the eigenspace perturbation framework \cite{emory2013modeling} in OpenFOAM, is prone to errors, and violates the spirit of versatility. In addition, C++ is inherently faster than MATLAB, which reduces the computational expense. In this study, the eigenspace perturbation framework along with the novel marker functions were completely implemented in C++ in OpenFOAM, which greatly reduces the number of user-defined inputs and allows users without much knowledge of fluid mechanics to use the eigenspace perturbation framework in OpenFOAM. \begin{figure*} \centerline{\includegraphics[width=6in]{fig_Marker/FlowChart_Method_Marker.pdf}} \caption{Flow chart showing the implementation of model form framework within OpenFOAM with marker configuration involved.} \label{fig:FlowChart_Method_Marker.pdf} \end{figure*} In the input files (located under the ``constant'' directory in OpenFOAM), the user needs to specify the magnitude of $\Delta_{B}$, whether $M_{k}$ is needed, and which eigenvalue perturbation ($1c$, $2c$, $3c$) is to be performed. The eigenspace framework conducts the perturbations during the execution of simulations, as illustrated in Fig. \ref{fig:FlowChart_Method_Marker.pdf}. At each control volume (CV), the baseline Reynolds stress tensor is calculated and decomposed into its eigenvalue and eigenvector matrices, which are perturbed using the eigenspace perturbation method as prescribed earlier. If $M_{k}$ is involved, the perturbation to the turbulence kinetic energy is performed as well.
The perturbed eigenvalue and eigenvector matrices are then recomposed into a perturbed Reynolds stress tensor for each CV. These perturbed Reynolds stress matrices together with the perturbed turbulence kinetic energy are then used to compute the perturbed velocity field and the perturbed turbulent production to advance each node to the next time step. At convergence, the Reynolds stress also converges to its perturbed state. \section{Flow description and numerical method} The flow under consideration is that around an SD7003 airfoil, as shown in Fig. \ref{fig:no_Mk_SD7003domain.pdf}. At the low Reynolds number based on the chord length of $\operatorname{Re}_{c} = 60000$, a laminar separation bubble (LSB) is formed on the suction side of the airfoil. Note that the bubble moves upstream as the angle of attack (AoA) increases \cite{catalano2011rans}. In this study, an $8^{\circ}$ AoA (nearing stall) was considered. Figure \ref{fig:no_Mk_SD7003domain.pdf} schematically shows that the solution domain is a two-dimensional C-topology grid of $389$ (streamwise) $\times$ $280$ (wall-normal) $\times$ $1$ (spanwise) control volumes, which is comparable to the number of control volumes ($768 \times 176$) used in the numerical study of \cite{catalano2011rans}. The magnified view of the two-dimensional SD7003 airfoil labels the camber, suction side and pressure side, as shown in Fig. \ref{fig:no_Mk_SD7003domain.pdf}. The first grid node off the wall was placed at $y^{+} \approx 1.0$ in the turbulent boundary layer, in which more than $20$ CVs were placed. A grid convergence study was performed to test the influence of the grid resolution on the results; it indicated that higher grid resolution in the near-wall region results in negligible changes in the predicted results: the effect of increasing the number of CVs in the wall-normal direction on the predicted mean velocity and Reynolds shear stress profiles was at most $1\%$.
Therefore, the simulation results based on the smaller grid ($389 \times 280$) have been used in the present analysis. The governing Eqs. \ref{p_Continuity} - \ref{p_Momentum} were closed by the RANS based transition model of \cite{langtry2009correlation} using OpenFOAM. The transport equations were discretized on a staggered mesh using the finite volume method. A second-order upwind scheme was used for spatial discretization, and the Gauss linear scheme was used to evaluate the gradients. The PIMPLE algorithm was adopted for pressure-velocity coupling, which is a combination of PISO (Pressure Implicit with Splitting of Operator) \cite{ferziger2002computational} and SIMPLEC (Semi-Implicit Method for Pressure Linked Equations-Consistent) \cite{van1984enhancements}. It should be noted that the PIMPLE algorithm can deal with large time steps where the maximum Courant (C) number may consistently be above $1$. In this study, the maximum value of C was set consistently equal to $0.6$, and OpenFOAM automatically adjusted the time step to achieve the set maximum. In addition, both the residuals and the time ($T$) histories of the lift and drag coefficients were used to track the convergence status. The solution fields were iterated until convergence, which required the residuals of energy and momentum to drop by more than four orders of magnitude and both the lift and drag coefficients to become nearly constant in time. This happened at $T \approx 0.3$, which corresponds to a normalized time $T^{*} = T U_{\infty} / c = 6.75$; similar behavior has been observed by Catalano and Tognaccini \cite{catalano2011rans} in their numerical study of a low-Reynolds-number flow over an SD7003 airfoil at $\mathrm{AoA} = 10^{\circ}$. Sampling began at $T = 0.6$ (double the time of convergence) and ended at $T = 1.4$, which required approximately $35000$ iterations for all simulations.
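The automatic time-step adjustment described above can be sketched as follows; the growth cap and maximum time step are assumed values for illustration and do not reproduce the exact OpenFOAM \texttt{adjustTimeStep} logic:

```python
# Courant-limited time-step adjustment: shrink immediately when the current
# Courant number exceeds the target, grow cautiously (capped) when below it.
# growth_cap and max_dt are illustrative assumptions.

def adjust_dt(dt, co_current, co_max=0.6, growth_cap=1.2, max_dt=1e-2):
    factor = min(co_max / max(co_current, 1e-12), growth_cap)
    return min(dt * factor, max_dt)

dt_over = adjust_dt(1e-4, 1.2)   # Courant too high: step is halved
dt_under = adjust_dt(1e-4, 0.1)  # Courant low: growth limited by the cap
```

Capping the growth factor avoids oscillations in the step size when the Courant number fluctuates near the target.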
The fluid was assumed to be air, with a freestream turbulence intensity of $\operatorname{Tu} = 0.03\%$ and a kinematic viscosity of $\nu = 1.5 \times 10^{-5} \mathrm{~m}^{2} / \mathrm{s}$. Ideally, the value of $\operatorname{Tu}$ should be close to zero. As shown in Fig. \ref{fig:no_Mk_SD7003domain.pdf}, at the inlet of the domain the freestream velocity was set equal to $4.5 \ \mathrm{m/s}$, which corresponds to $\operatorname{Re}_{c} = 60000$. The chord length was set equal to $c = 0.2 \ \mathrm{m}$. At the outlet, a zero-gradient boundary condition was implemented for $\left\langle U_{i}\right\rangle$ ($\left\langle U \right\rangle$ for the $x$ direction, $\left\langle V \right\rangle$ for the $y$ direction), $k$, $\omega$ and pressure. At the wall, a no-slip boundary condition was used. \begin{figure*} \centerline{\includegraphics[width=5.0in]{fig_Marker/SD7003domain.pdf}} \caption[SD7003 computational domain and boundary conditions: {\color{red} \rule{0.7cm}{0.4mm}} far field, {\color{blue} \rule{0.7cm}{0.4mm}} outflow, and {\color{black} \rule{0.7cm}{0.4mm}} no-slip walls.]{SD7003 computational domain and boundary conditions: {\color{red} \rule{0.7cm}{0.4mm}} far field, {\color{blue} \rule{0.7cm}{0.4mm}} outflow, and {\color{black} \rule{0.7cm}{0.4mm}} no-slip walls. Depiction of the suction side, camber, and pressure side of the SD7003 airfoil is displayed in the magnified plot. A three-dimensional version of the computational domain is provided with the freestream ($U_{\infty}$) encountering the leading edge at $8^{\circ}$ AoA.} \label{fig:no_Mk_SD7003domain.pdf} \end{figure*} \section{Regression model for amplitude perturbation} An important and novel focus of this study is the development of a marker function that modulates the degree of perturbations over the entire flow domain. We have explained earlier that this should lead to better-calibrated confidence intervals.
In this section, high-order polynomial regressions are constructed using the MATLAB software in a least-squares sense to fit both the baseline RANS and in-house DNS datasets. Note that these high-order regressions lay the foundation for the development of the new marker functions. \subsection{Example: a linear regression}\label{RegressionM} The $n$th-order polynomial regression model that describes the relationship between a dependent variable $y$ and an independent variable $x$ can be expressed as \begin{equation} \label{Eqn:poly_fit} y(x)=p_{1} x^{n}+p_{2} x^{n-1}+\ldots+p_{n} x+p_{n+1} \end{equation} where $p_{i}$, $i = 1,\ldots,n+1$, are the coefficients in descending order of power. Fig. \ref{fig:Linear_Regression_Eg.pdf} illustrates a first-order or linear regression model on a random dataset. The errors $y_{i}-\hat{y}_{i}$ between the predicted values $\hat{y}_{i}$ and the actual data values $y_{i}$ are referred to as residuals. Using the MATLAB software, the least-squares method finds the coefficients $p_{i}$ that best fit the dataset by minimizing the sum of squared residuals, i.e.: \begin{equation} RSS = \sum_{j=1}^{m}\left(y_{j}-\hat{y}_{j}\right)^{2} \end{equation} where $RSS$ stands for the residual sum of squares, $y_{j}$ is the $j$th actual value of the dependent variable to be predicted, $\hat{y}_{j}$ is the $j$th predicted value of $y_{j}$, and $m$ represents the number of data points.
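A minimal stand-in for the least-squares fit described above (MATLAB's \texttt{polyfit} solves the same problem, typically more robustly via an orthogonal factorization) builds and solves the normal equations directly; coefficients are returned in descending powers, as in Eq. \ref{Eqn:poly_fit}:

```python
# Least-squares polynomial fit via the normal equations A^T A p = A^T y,
# where A is the Vandermonde matrix with columns x^n, x^(n-1), ..., 1.
# Suitable for the low orders used here; a sketch, not production code.

def polyfit_ls(xs, ys, n):
    m = n + 1
    ata = [[sum(x ** (2 * n - i - j) for x in xs) for j in range(m)]
           for i in range(m)]
    aty = [sum(y * x ** (n - i) for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    p = [0.0] * m
    for i in reversed(range(m)):
        p[i] = (aty[i] - sum(ata[i][j] * p[j] for j in range(i + 1, m))) / ata[i][i]
    return p

coeffs = polyfit_ls([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], 1)
```

For data lying exactly on a line, the fit recovers the slope and intercept; with noisy data the same call minimizes the $RSS$ defined above.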
\begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/Linear_Regression_Eg.pdf}} \caption{Linear regression relation between $x$ and $y$.} \label{fig:Linear_Regression_Eg.pdf} \end{figure} \subsubsection{Define untrustworthy regions}\label{Sec:Def_untrust} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/cfcp_Tu003_forMarker.pdf}} \caption[Distribution of (a) pressure coefficient and (b) skin friction coefficient over the SD7003 airfoil at $\operatorname{Re}_{c}=6 \times 10^{4}$ and $\operatorname{AoA} = 8^{\circ}$.]{Distribution of (a) pressure coefficient and (b) skin friction coefficient over the SD7003 airfoil at $\operatorname{Re}_{c}=6 \times 10^{4}$ and $\operatorname{AoA} = 8^{\circ}$. A two-headed arrow is added to indicate the untrustworthy region. (c) Schematic of transitional and turbulent regions over an SD7003 airfoil with important transitional parameters highlighted.} \label{fig:cfcp_Tu003_forMarker.pdf} \end{figure*} \begin{table} \begin{center} \caption{Comparison of transition parameters.} \label{table:transi_parameters} \begin{ruledtabular} \begin{tabular}{c c c c} Method & $X_{S}/c$ &$X_{T}/c$ &$X_{R}/c$\\ \hline SSTLM (Baseline) \cite{langtry2009correlation} & $0.03$& $0.15$ &$0.29$ \\ In-house DNS \cite{zhang2021turbulent} & $0.02$ &$0.16$&$0.27$\\ LES \cite{garmann2013comparative}& $0.02$ &$0.16$&$0.27$\\ ILES \cite{galbraith2010implicit}& $0.03$ &$0.18$&$0.27$\\ \end{tabular} \end{ruledtabular} \end{center} \end{table} To construct marker functions for $k^{*}$, the first step is to identify the regions where the turbulent viscosity hypothesis becomes invalid. This study identifies the regions where the RANS model is likely to give untrustworthy results, based on a comparison between the baseline prediction and the in-house DNS data of \cite{zhang2021turbulent}.
For the flow over an airfoil geometry, perhaps the local wall shear stress and the local pressure are the most important parameters; their dimensionless forms are the skin friction coefficient $C_{f}=\tau_{w} / {0.5 \rho U_{\infty}^{2}}$, where $\tau_{w}$ is the wall shear stress, and the pressure coefficient $C_{p}=(p-p_{\infty}) / {0.5 \rho U_{\infty}^{2}}$, where $p$ is the local static pressure and $p_{\infty}$ is the undisturbed static pressure in the freestream. In Figs. \ref{fig:cfcp_Tu003_forMarker.pdf} (a) and (b), the predicted $C_{p}$ and $C_{f}$ are plotted. According to the technique described by Boutilier and Yarusevych \cite{boutilier2012parametric}, Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (a) shows three ``kinks'' as representatives of the separation, transition and reattachment points, denoted $X_{S}/c$, $X_{T}/c$ and $X_{R}/c$, respectively. Moreover, the size of the LSB can be determined by finding the $X_{S}/c$ and $X_{R}/c$ points, which can be identified as the zeros of the skin friction coefficient \cite{de2021model}. The two methods showed good agreement with each other, and a summary of these important transition parameters is tabulated in Table \ref{table:transi_parameters}. In this study, the LSB is treated as being composed of a ``fore'' (from $X_{S}/c$ to $X_{T}/c$) and an ``aft'' (from $X_{T}/c$ to $X_{R}/c$) portion for simplicity of analysis, followed by a fully turbulent region, as shown in Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (c). The in-house DNS \cite{zhang2021turbulent} and implicit LES (ILES)/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} for $C_{f}$ and $C_{p}$ are included for comparison. In the fore portion of the LSB, the predicted $C_{p}$ profile shows relatively good agreement with the ILES data of \cite{galbraith2010implicit}, while a clear discrepancy is observed in the aft portion, where it gives a smaller value of $C_{p}$, i.e.
the region indicated by the two-headed arrow. This kind of discrepancy was observed by Tousi \textit{et al.} \cite{tousi2021active} in their numerical study as well. Besides, the predicted $C_{p}$ shows good agreement with the reference data in the turbulent region on the suction side, as well as over the entire pressure side. On the other hand, a noticeable discrepancy is observed on the $C_{f}$ profile at the negative ``trough'' in the aft portion of the LSB, as well as at the positive ``crest'' in the turbulent boundary layer after the reattachment point $X_{R}$. In Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (b), a shift of the predicted $C_{f}$ profile in the upstream direction at the trough is observed, and the value of $C_{f}$ is significantly under-predicted at the crest in the region of the turbulent boundary layer for $0.3 < x/c < 0.6$. This behavior has been observed by other researchers as well, e.g., see \cite{catalano2011rans,bernardos2019rans,tousi2021active}. Therefore, it can be concluded that the region $0.14 \leq x/c \leq 0.6$, as indicated by the two-headed arrow shown in Fig. \ref{fig:cfcp_Tu003_forMarker.pdf} (b), should be identified as the untrustworthy region where perturbations should be introduced. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/BL_black.pdf}} \caption[Injection of uncertainty into the untrustworthy zones: \textcolor{red}{zone $ab$}, \textcolor{OliveGreen}{zone $cd$}, \textcolor{Cyan}{zone $em$} and \textcolor{gray}{zone $mf$}.]{Injection of uncertainty into the untrustworthy zones: \textcolor{red}{zone $ab$}, \textcolor{OliveGreen}{zone $cd$}, \textcolor{Cyan}{zone $em$} and \textcolor{gray}{zone $mf$}.
The outer edge of the boundary layer ({\color{black} \protect\tikz[baseline]{\protect\draw[dashed] (0,.5ex)--++(.5,0) ;}} with $\circ$) at seven locations $I = x/c = 0.02$, $II = x/c = 0.03$, $III = x/c = 0.04$, $IV = x/c = 0.06$, $V = x/c = 0.08$, $VI = x/c = 0.10$, $VII = x/c = 0.12$ ($\cdots$) selected on the suction side are provided for reference.} \label{fig:BL_black} \end{figure} This study ensures that the amplitude perturbation is introduced across the entire boundary layer within the untrustworthy region $0.14 \leq x/c \leq 0.6$, which is further divided into the $ab$, $cd$, $em$ and $mf$ zones. In Fig. \ref{fig:BL_black}, the mean velocity profiles at seven locations downstream of the leading edge are used to illustrate the flow development in the streamwise direction, i.e., $I = x/c = 0.1$, $II = x/c = 0.15$, $III = x/c = 0.2$, $IV = x/c = 0.3$, $V = x/c = 0.4$, $VI = x/c = 0.5$, and $VII = x/c = 0.6$. Due to the curvature of the airfoil upper surface, the mean velocity profiles are shifted down to the origin of $y/c$, denoted $y/c|_{o} = (y-y_{w})/c$, for the sake of clearer comparison, where $y_{w}$ is the vertical location of the upper surface of the airfoil. Figure \ref{fig:BL_black} clearly shows that the boundary layer thickness increases as the flow develops in the streamwise direction downstream of the leading edge, i.e., the dashed line with open circles indicates the approximate thickness of the outer edge of the boundary layer (OBL). In this study, the regions within which the amplitude perturbation will be introduced are shaded red, green, blue and gray corresponding to the $ab$, $cd$, $em$ and $mf$ zones, respectively, as shown in Fig. \ref{fig:BL_black}. It is clear that all these shaded regions extend well beyond the OBL, i.e., $0 < y/c|_{o} < 0.05$, implying the propagation of the amplitude perturbation effect deeper into the outer boundary layer as the flow develops further downstream of the leading edge.
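The zero-crossing criterion used above to locate $X_{S}/c$ and $X_{R}/c$ from the skin friction coefficient can be sketched as follows; the $x/c$ and $C_{f}$ arrays are illustrative, not the SD7003 data:

```python
# Locate zeros of Cf by detecting sign changes and linearly interpolating
# between samples; for an LSB, the first root approximates separation and
# the second approximates reattachment.

def zero_crossings(x, cf):
    roots = []
    for i in range(len(cf) - 1):
        if cf[i] == 0.0:
            roots.append(x[i])
        elif cf[i] * cf[i + 1] < 0.0:   # sign change between samples
            t = cf[i] / (cf[i] - cf[i + 1])
            roots.append(x[i] + t * (x[i + 1] - x[i]))
    return roots

xc = [0.0, 0.1, 0.2, 0.3, 0.4]
cf = [0.004, -0.001, -0.003, 0.001, 0.003]
sep_reatt = zero_crossings(xc, cf)      # [separation, reattachment]
```

The bubble length then follows directly as the distance between the two roots.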
\subsubsection{Polynomial regression for DNS/RANS turbulence kinetic energy datasets}\label{Sec:Poly_reg_k} In this study, the MATLAB software was used to fit seventh-order least-squares polynomial regressions to both the baseline RANS and in-house DNS datasets (gray lines with open circles) for the turbulence kinetic energy normalized with the freestream velocity squared, $k/U_{\infty}^2$, as shown in Figs. \ref{fig:Marker_SSTLM_DNS_fit_k_all.pdf} (a) and (b). There are 5 locations selected for the $ab$ zone and 12 locations for the $cd$ zone. The regression based $k/U_{\infty}^2$ profiles for the $ab$ and $cd$ zones are colored red and green, respectively, with a uniform spacing of $x/c = 0.01$. As the flow proceeds further downstream, the regression based $k/U_{\infty}^2$ profiles for the $ef$ zone, which comprises the $em$ and $mf$ subzones, are colored blue; within this zone 15 locations are selected with a uniform spacing of $x/c = 0.02$. Within each zone, the same number of locations is selected for both the regression based RANS and in-house DNS $k/U_{\infty}^2$ profiles, as documented in Table \ref{table:marker_ranges}. As a result, a total of 32 locations are selected and placed uniformly on the suction side of the airfoil, ranging from the LSB to the fully turbulent flow further downstream. In addition, the locations are more densely packed by imposing a smaller spacing distance within the $ab$ and $cd$ zones, where the LSB evolves and complex flow features start developing, permitting a closer investigation of this region. From Figs. \ref{fig:Marker_SSTLM_DNS_fit_k_all.pdf} (a) and (b), the regression based RANS $k/U_{\infty}^2$ profiles in general exhibit a behavior similar to that of the in-house DNS, i.e., a gradual increase of the $k/U_{\infty}^2$ profile in the $ab$ and $cd$ zones, followed by a reduction of the profile further downstream in the $ef$ zone.
\begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/Marker_SSTLM_DNS_fit_k_all.pdf}} \caption{(a) Regressed profile of normalized turbulence kinetic energy for the baseline RANS and (b) in-house DNS datasets (gray profiles) along the suction side of the SD7003 airfoil (geometry depicted by gray line): from left to right are \textcolor{red}{zone $ab$}, \textcolor{Green}{zone $cd$} and \textcolor{Blue}{zone $ef$}.} \label{fig:Marker_SSTLM_DNS_fit_k_all.pdf} \end{figure} \begin{table*} \begin{center} \caption{Zone ranges for the untrustworthy region.} \label{table:marker_ranges} \begin{ruledtabular} \begin{tabular}{c c c c c c} &\textcolor{red}{zone $ab$} & \textcolor{Green}{zone $cd$} &\textcolor{blue}{zone $em$} &\textcolor{gray}{zone $mf$}\\ \hline $x/c$ & \textcolor{red}{$0.14 \leq \frac{x}{c}\leq 0.18$}& \textcolor{Green}{$0.18 < \frac{x}{c}\leq 0.3$} &\textcolor{blue}{$0.3 < \frac{x}{c}\leq 0.4$} & \textcolor{gray}{$0.4 < \frac{x}{c}\leq 0.6$}\\ $y/c$ & \textcolor{red}{$y_{w} \leq \frac{y}{c}\leq 0.1$} &\textcolor{Green}{$y_{w} \leq \frac{y}{c}\leq 0.1$}&\textcolor{blue}{$y_{w} \leq \frac{y}{c}\leq 0.1$}&\textcolor{gray}{$y_{w} \leq \frac{y}{c}\leq 0.1$}\\ Number of locations & \textcolor{red}{5} & \textcolor{Green}{12} &\multicolumn{2}{c}{\textcolor{blue}{15}}\\ Spacing of $x/c$ & \multicolumn{2}{c}{\textcolor{black}{0.01}} & \multicolumn{2}{c}{\textcolor{black}{0.02}} \\ \end{tabular} \end{ruledtabular} \end{center} \end{table*} \subsubsection{Spatial discrepancies in $k/U_{\infty}^2$ regressions from DNS/RANS comparison} In Figs. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (a) and (b), these 32 regression based $k/U_{\infty}^2$ profiles are shifted to the origin of the $x/c$ and $y/c$ axes, respectively, for the sake of clearer comparison.
The baseline predictions and the in-house DNS data are also included for reference, depicted by the gray lines with open circles and shifted to the origin of the $x/c$ axis to be distinguished from the regression based profiles. From Fig. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (a), the regression based RANS $k/U_{\infty}^2$ profiles increase in magnitude as the flow moves further downstream. This is qualitatively similar to the behavior of the in-house DNS profiles shown in Fig. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (b). From Figs. \ref{fig:SSTLMDNS_origin_fit_all.pdf} (a) and (b), the regression based RANS $k/U_{\infty}^2$ profiles increase somewhat more in magnitude than those of the in-house DNS in the $ab$ zone; however, they are significantly reduced in magnitude compared to the in-house DNS in the $cd$ zone (the aft portion of the LSB), i.e., the reduction is around $50\%$. For the $ef$ zone, both the regression based RANS and in-house DNS $k/U_{\infty}^2$ profiles show a gradual decrease in magnitude as the flow moves further downstream. Overall, the regression based RANS and in-house DNS $k/U_{\infty}^2$ profiles are similar in magnitude for the $ab$ and $ef$ zones, but the discrepancy is significant in the aft portion of the LSB for the $cd$ zone. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/SSTLMDNS_origin_fit_all.pdf}} \caption[(a) Regression based profile of normalized turbulence kinetic energy for baseline RANS and (b) in-house DNS for \textcolor{red}{zone $ab$}, \textcolor{Green}{zone $cd$} and \textcolor{Blue}{zone $ef$}.]{(a) Regressed profile of normalized turbulence kinetic energy for baseline RANS and (b) in-house DNS for \textcolor{red}{zone $ab$}, \textcolor{Green}{zone $cd$} and \textcolor{Blue}{zone $ef$}.
Actual datasets (gray profiles) for baseline RANS and in-house DNS are provided for reference.} \label{fig:SSTLMDNS_origin_fit_all.pdf} \end{figure} \subsubsection{Marker for $k^{*}$} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/avek_RANSDNS_xpls_0pt14To0pt60.pdf}} \caption[Mean of regression lines for normalized turbulence kinetic energy of both baseline RANS (line with green squares) and in-house DNS (line with red circles). (a) zone ab; (b) zone cd; and (c) zone ef.]{Mean of regression lines for normalized turbulence kinetic energy of both baseline RANS (line with green squares) and in-house DNS (line with red circles). (a) zone ab; (b) zone cd; and (c) zone ef. Also included are profiles of baseline RANS (gray-dashed) and in-house DNS (gray-solid) for reference.} \label{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} \end{figure} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/kcorrection_factor_RANSDNS_three.pdf}} \caption{Defining the marker function for (a) zone $ab$, (b) zone $cd$ and (c) zone $ef$ based on the corresponding discrepancy data.} \label{fig:kcorrection factor_RANSDNS_three.pdf} \end{figure*} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf}} \caption{Defining the marker function for subzone $mf$.} \label{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf} \end{figure} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/RANS_contour_Marker_CF_k.pdf}} \caption[Contours of $M_{k}$ (Eq. \ref{Eqn:Markerfunc}) for (a) $0 < M_{k} < 1$ and (b) $1 < M_{k} < 10$ in an $xy$ plane.]{Contours of $M_{k}$ (Eq. \ref{Eqn:Markerfunc}) for (a) $0 < M_{k} < 1$ and (b) $1 < M_{k} < 10$ in an $xy$ plane.
The dashed lines in (a) and (b) denote the actual locations on the suction side of the airfoil, which separate the $ab$, $cd$, $em$ and $mf$ zone.} \label{fig:RANS_contour_Marker_CF_k.pdf} \end{figure} As noted earlier, relatively few methods have thus far been developed to construct marker functions, e.g., see \cite{emory2013modeling} and \cite{gorle2014deviation}. Essentially, they can be classified into two categories: (1) spatially varying magnitude of $\Delta_{B}$ and (2) identifying regions that deviate from parallel shear flow. All of these methods use only one explanatory variable to predict the error in RANS model predictions. In this study, a novel method based on least squares high-order regressions is developed to construct a switch marker function for $k^{*}$. This method uses a set of explanatory variables dedicated to the identified untrustworthy zones, and aims at introducing the correct level of uncertainty by directly comparing the RANS predictions for turbulence kinetic energy to the in-house DNS data. We performed numerous tests and found that increasing the order of the polynomial regressions beyond seven no longer improved the accuracy of the fits. Consequently, seventh-order polynomial regression lines for the $k/U_{\infty}^2$ profiles were constructed, as shown in Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf}. For each of the $ab$, $cd$ and $ef$ zones, the averaged regression relations for both RANS and in-house DNS are computed using the equation defined as follows: \begin{equation}\label{Eqn:normk_fit} {k}_{RANS/DNS}^{ave}|_{zone \ ab/cd/ef} = \frac{\sum_{i=1}^{n} P_{i}(\frac{y}{c}|_{o})}{n}, \end{equation} where $i$ represents the $i$th location on the suction side of the SD7003 airfoil (there are 32 selected locations), $P_{i}$ represents the polynomial regression at the $i$th location, and $n$ is the number of locations for each zone, as summarized in Table \ref{table:marker_ranges}.
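Equation \ref{Eqn:normk_fit} amounts to a pointwise mean of the per-location regression polynomials $P_{i}$; a minimal sketch (with placeholder polynomials, not the fitted seventh-order ones) is:

```python
# Zone-averaged regression profile: evaluate each per-location polynomial
# P_i at a given y/c|_o and average over the n locations in the zone.

def poly_eval(coeffs, x):
    # Horner's rule; coefficients in descending powers, as in Eq. (poly_fit)
    val = 0.0
    for c in coeffs:
        val = val * x + c
    return val

def zone_average(polys, y):
    return sum(poly_eval(p, y) for p in polys) / len(polys)

polys_ab = [[1.0, 0.0], [3.0, 2.0]]       # two placeholder locations: y, 3y + 2
k_ave = zone_average(polys_ab, 0.5)       # (0.5 + 3.5) / 2 = 2.0
```

Evaluating the averaged relation on a common $y/c|_{o}$ grid for both RANS and DNS then yields the discrepancy profiles discussed below.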
The regression based $k/U_{\infty}^2$ profiles for the $ab$, $cd$ and $ef$ zones are plotted in Figs. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a), (b) and (c), respectively. The two solid lines with filled markers represent the mean of the regression based datasets for both RANS and in-house DNS. In each zone, Figs. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a), (b) and (c) clearly show the discrepancy between these two averaged regression relations. For the $ab$ zone, Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a) shows a small discrepancy close to zero at the wall, i.e., $y/c|_{o} = 0$, as well as in the far outer region, i.e., $y/c|_{o} > 0.025$. Besides, the discrepancy tends to increase with $y/c|_{o}$ and peaks around $y/c|_{o} = 0.015$. Within the $cd$ zone shown in Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (b), there is a large discrepancy at the wall; the discrepancy remains at a nearly constant level until it peaks around $y/c|_{o} = 0.015$, then gradually decreases with $y/c|_{o}$ toward zero. It is interesting that the discrepancy peaks around $y/c|_{o} = 0.015$ for both the $ab$ and $cd$ zones (the latter being the aft portion of the LSB). On the other hand, a relatively small discrepancy is observed consistently throughout the entire boundary layer for the $ef$ zone, as shown in Fig. \ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (c). This indicates that the RANS based transition model \cite{langtry2009correlation} tends to be more trustworthy in its predictions of the turbulence kinetic energy in the far downstream region than within or close to the LSB, i.e., the $ab$ and $cd$ zones. From Figs.
\ref{fig:avek_RANSDNS_xpls_0pt14To0pt60.pdf} (a), (b) and (c), the discrepancy between the averaged regression relations for RANS and in-house DNS describes the degree of untrustworthiness in the $y/c|_{o}$ direction ranging from the $ab$ zone to the $ef$ zone across the suction side; therefore, the discrepancy can be used as an approximation to a marker function. This study defines a correction factor based on the $k/U_{\infty}^2$ discrepancy between the regression based RANS and in-house DNS, which can be written as follows: \begin{equation}\label{Eqn:CF_k} CF_{k} = \left| \frac{{k}_{DNS}^{ave}}{{k}_{RANS}^{ave}} \right| , \end{equation} where ${k}_{DNS}^{ave}$ and ${k}_{RANS}^{ave}$ are the averaged regression relations defined in Eq. \ref{Eqn:normk_fit}. Equation \ref{Eqn:CF_k} indicates that $CF_{k}$ is always nonnegative, which satisfies the physical realizability constraint, i.e., $k^{*} \geq 0$. For each zone, the discrepancy data obtained using Eq. \ref{Eqn:CF_k} are depicted by the blue solid circles, as shown in Figs. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (a), (b) and (c). Using the MATLAB software, the marker function for each zone can be constructed by fitting to the corresponding $CF_{k}$ data, i.e., fitting a seventh-order polynomial to the discrepancy data for the $ab$ and $ef$ zones, while fitting a Fourier series to the discrepancy data for the $cd$ zone, as shown in Figs. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (a), (c), and (b), respectively.
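The correction factor of Eq. \ref{Eqn:CF_k} and the resulting perturbation $k^{*} = k M_{k}$ can be sketched as follows; the averaged values are illustrative numbers, not the regressed SD7003 profiles:

```python
# Correction factor CF_k = |averaged DNS / averaged RANS|: by construction
# it is nonnegative, so k* = k * CF_k respects realizability (k* >= 0).

def correction_factor(k_ave_dns, k_ave_rans):
    return abs(k_ave_dns / k_ave_rans)

cf_k = correction_factor(0.012, 0.006)   # DNS level twice the RANS level
k_star = 0.006 * cf_k                    # perturbed k recovers the DNS level
```

In the limiting case where the two averaged relations coincide, $CF_{k} = 1$ and the turbulence kinetic energy is left unperturbed.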
Therefore, a switch marker function that introduces local injection of perturbation with respect to the $ab$, $cd$ and $ef$ zones can be written as follows: \begin{equation}\label{Eqn:Markerfunc} \text {Switch $M_{k}$ }= \begin{cases}\textcolor{red}{a_{0}^{ab}(\frac{y-y_{w}^{ab}}{c})^{7} + a_{1}^{ab}(\frac{y-y_{w}^{ab}}{c})^{6} +...+}\\ \textcolor{red}{a_{5}^{ab}(\frac{y-y_{w}^{ab}}{c})^{2}+a_{6}^{ab}(\frac{y-y_{w}^{ab}}{c}) +a_{7}^{ab}} \quad \textcolor{red}{\text { if }} \textcolor{red}{zone \ ab,} \\ \\ \textcolor{Green}{a_{0}^{cd}+a_{1}^{cd} \cos \left(w\left(\frac{y-y_{w}^{cd}}{c}\right)\right) +} \\ \textcolor{Green}{b_{1}^{cd} \sin \left(w\left(\frac{y-y_{w}^{cd}}{c}\right)\right)+}\\ \textcolor{Green}{a_{2}^{cd} \cos \left(2w\left(\frac{y-y_{w}^{cd}}{c}\right)\right)+} \quad \textcolor{Green}{\text { if } zone \ cd,} \\ \textcolor{Green}{b_{2}^{cd} \sin \left(2w\left(\frac{y-y_{w}^{cd}}{c}\right)\right)} \\ \\ \textcolor{blue}{a_{0}^{em}(\frac{y-y_{w}^{em}}{c})^{7}+a_{1}^{em}(\frac{y-y_{w}^{em}}{c})^{6}+...+}\\ \textcolor{blue}{a_{5}^{em}(\frac{y-y_{w}^{em}}{c})^{2}+a_{6}^{em}(\frac{y-y_{w}^{em}}{c})+a_{7}^{em}} \quad \textcolor{blue}{\text { if }} \textcolor{blue}{ zone \ em ,}\\ \\ \textcolor{gray}{2.8} \qquad \qquad \textcolor{gray}{\text { if } zone \ mf ,} \end{cases} \end{equation} \noindent where $a_{0}^{ab}, a_{1}^{ab}, \ldots, a_{7}^{ab}$ and $a_{0}^{em}, a_{1}^{em}, \ldots, a_{7}^{em}$ represent the polynomial coefficients, and $a_{0}^{cd}, a_{1}^{cd}, b_{1}^{cd}, a_{2}^{cd}, b_{2}^{cd}, w$ represent the Fourier coefficients. Therefore, the perturbed turbulence kinetic energy, $k^{*}$, is defined as follows: \begin{equation}\label{Eqn:kstar} k^{*} = kM_{k}. \end{equation} It is worth noting that the development of spatial variations in $M_{k}$ is precisely what turbulence machine learning efforts focus on: when a neural network model is developed to predict the perturbation in the flow, it will not predict the same perturbation at all points in the flow domain.
Instead, it will naturally lead to a non-uniform perturbation. The key differences between the present work and the work based on machine learning are twofold: (1) the choice of the model and (2) the choice of the modeling basis (or the explanatory variables utilized to predict the perturbation). We have used a seventh-order regression, whereas the work based on machine learning uses a random forest or neural network. We have utilized a small set of explanatory variables in $M_{k}$ that is developed based on physics arguments and prior experience. The work based on machine learning utilizes a large set of explanatory variables (called features), almost $100$ in number, that includes invariants of the mean velocity field and the scaled distance from the wall. If a uniform value of $M_{k}$ is used, then Eq. \ref{Eqn:kstar} becomes \begin{equation}\label{Eqn:Deltak} k^{*} = k\Delta_{k}, \end{equation} \noindent where $k$ is the perturbed turbulence kinetic energy from the previous time step, and $\Delta_{k}$ represents a uniform value of $M_{k}$. The value of $\Delta_{k}$ must be larger than zero to satisfy physical realizability. Due to the curvature of the airfoil surfaces, $y_{w}$ varies along the suction side. If $y_{w} = f(x)$ represents the curved upper surface, then its gradient can be calculated as the derivative of $f(x)$, i.e., $df(x)/dx$. In Eq. \ref{Eqn:Markerfunc}, the strategy to choose a reasonable magnitude for $y_{w}$ as representative of a zone is to find the minimum value of $y_{w}$, which ensures that the realizability constraint of $M_{k} \geq 0$ is satisfied, and hence $k^{*} \geq 0$. Note that the value of $df(x)/dx$ approaches zero around $x/c = 0.266$, i.e., within the $cd$ zone. This implies that the minimum value of $y_{w}$ is located closer to the leading edge for $x/c < 0.266$, while the minimum value of $y_{w}$ is located closer to the trailing edge for $x/c > 0.266$. As noted earlier in Fig.
\ref{fig:BL_black}, the $ef$ zone is composed of two subzones, i.e., $em$ and $mf$, as illustrated in Fig. \ref{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf}. Figure \ref{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf} enlarges the $ef$ zone, in which the regression based $k/U_{\infty}^2$ profiles for both RANS and in-house DNS are shifted down to the origin of $y$, to highlight the discrepancy in the region $0.3 < x/c < 0.6$, within which the profiles for the $em$ subzone are painted blue, while the profiles for the $mf$ subzone are painted gray. It is clear that a similar level of discrepancy between the regression based RANS and in-house DNS profiles is observed across the $mf$ subzone, as shown in Fig. \ref{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf}. From Fig. \ref{fig:RANSDNS_xpls_0pt3To0pt60_withIndicator.pdf}, the discrepancy is significant in the vicinity of the wall, i.e., at $y/c|_{o} = 0.004$, which corresponds to an approximate value of $2.8$ for $CF_{k}$ in Fig. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (c). For the sake of simplicity, a uniform value of $2.8$ for $M_{k}$ is employed for the $mf$ subzone in Eq. \ref{Eqn:Markerfunc}. We visualize the spatial variation of the magnitude of $M_{k}$ from the contours of $0 < M_{k} < 1$ and $1 < M_{k} < 10$, as shown in Figs. \ref{fig:RANS_contour_Marker_CF_k.pdf} (a) and (b), respectively. From Fig. \ref{fig:RANS_contour_Marker_CF_k.pdf} (a), it is clear that the magnitude of $0 < M_{k} < 1$ is more prevalent in the $ab$ zone, and in the upper portion of the $cd$ zone. In Fig. \ref{fig:RANS_contour_Marker_CF_k.pdf} (a), an overall decreasing trend of $M_{k}$ in magnitude with $y/c$ is observed for the $ab$ zone. On the other hand, the $M_{k}$ magnitude for both the $cd$ and $em$ zones varies with $y/c$ in a fashion consistent with the behavior observed in Figs. \ref{fig:kcorrection factor_RANSDNS_three.pdf} (b) and (c).
Moreover, a uniform magnitude of $M_{k}$ is observed for the $ef$ zone, consistent with the uniform value of $2.8$ assigned there. \section{Results and discussion} \subsection{Sensitivity to $\Delta_k$} \subsubsection{Skin friction coefficient} A set of $C_{f}$ distributions undergoing the $\Delta_{k}$ perturbations is shown in Fig. \ref{fig:cf_uniformk_line.pdf}. The baseline prediction for $C_{f}$ is used as a reference. The increasing magnitude of $\Delta_{k}$ is indicated by lighter to darker hues, as shown in Fig. \ref{fig:cf_uniformk_line.pdf}. In addition, red solid arrows are added to indicate the trend of $C_{f}$ with increasing $\Delta_{k}$, and the regions that contain a peak negative (trough) and a peak positive (crest) value of $C_{f}$ are enlarged to distinguish the clusters of $C_{f}$ profiles. In Fig. \ref{fig:cf_uniformk_line.pdf}, the magnitude of the $C_{f}$ profiles increases with the $\Delta_{k}$ perturbations for $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5, 0.75 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{ 2, 4, 6, 8 \big\}$), respectively, at the trough (around $X_{T}$), indicating a monotonic increase. As the flow moves further downstream within the aft portion of the LSB, the magnitude of $C_{f}$ tends to decrease monotonically as the value of $\Delta_{k}$ is increased; as the flow proceeds further downstream of $X_{R}$, a monotonic increase of the magnitude of $C_{f}$ with $\Delta_{k}$ again occurs for $\Delta_{k} < 1$ and $\Delta_{k} > 1$. It should be noted that the $C_{f}$ profiles tend to converge and collapse onto a single curve as $\Delta_{k}$ is increased. The baseline prediction is well enveloped between the $\Delta_{k} < 1$ and $\Delta_{k} > 1$ perturbations. Compared to the baseline prediction, rather subtle increases in the magnitude of $C_{f}$ are observed for $\Delta_{k} < 1$, as contrasted with more noticeable increases in $C_{f}$ for $\Delta_{k} > 1$.
This indicates that the simulation's response to the injection of $\Delta_{k}$ is more pronounced for $\Delta_{k} > 1$ than for $\Delta_{k} < 1$. This behavior is highlighted in the enlarged trough and crest regions. In addition, a dashed red arrow is added along the $C_{f} = 0$ line to indicate the tendency of $X_{R}$ to shift in the upstream direction as the value of $\Delta_{k}$ is increased, for both $\Delta_{k} < 1$ and $\Delta_{k} > 1$. Since wall shear stress is a consequence of momentum transfer from the mean flow to the wall surface \cite{monteith2013principles}, the magnitude of the mean velocity is closely related to the magnitude of $C_{f}$. In the aft portion of the LSB, the $\Delta_{k} > 1$ perturbations overall yield a smaller magnitude of $C_{f}$, and an increase in the magnitude of the mean velocity is expected, while the opposite is true for the $\Delta_{k} < 1$ perturbations. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/cf_uniformk_line.pdf}} \caption[Skin friction coefficient distributions over the suction side of the airfoil with enlarged regions at the trough and the crest. Displayed are $k^{*}$ perturbations with uniform $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5, 0.75 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{ 2, 4, 6, 8 \big\}$).]{Skin friction coefficient distributions over the suction side of the airfoil with enlarged regions at the trough and the crest. Displayed are $k^{*}$ perturbations with uniform $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5, 0.75 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{ 2, 4, 6, 8 \big\}$); increasing values indicated by lighter to darker hues. Red solid arrows ($\color{red} \longrightarrow$) are provided to indicate increasing magnitude of $C_{f}$ with $\Delta_{k}$; the red dashed arrow ($\color{red} \dashrightarrow$) is provided to indicate the shift in reattachment point with $\Delta_{k}$.
Note that the value of $\Delta_{k}$ must be larger than zero to satisfy realizability. The baseline prediction is provided for reference.} \label{fig:cf_uniformk_line.pdf} \end{figure} \subsubsection{Mean velocity field} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf}} \caption[Contours of $\left\langle U \right \rangle/U_{\infty}$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane.]{Contours of $\left\langle U \right \rangle/U_{\infty}$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane. Baseline prediction is provided for reference, and in-house DNS data are included for comparison. Streamlines show the size of the LSB on the suction side of the airfoil.} \label{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf} \end{figure*} Contours of the mean velocity normalized by the freestream velocity, $\left\langle U \right \rangle/U_{\infty}$, from the baseline, the $\Delta_{k}$ perturbations, and the in-house DNS of \cite{zhang2021turbulent} in an $xy$ plane are shown in Fig. \ref{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf}. Streamlines depicting the large recirculation vortex within the LSB, characterized by the region of reverse flow ($\left\langle U \right \rangle/U_{\infty} < 0$) \cite{rist2002numerical}, are included as well. This large recirculating region contains large-scale events (coherent structures) with low-frequency fluctuations due to the very large scale of unsteadiness of the recirculating region itself \cite{kiya1985structure}. As a consequence, the $\left\langle U \right \rangle/U_{\infty}$ contours exhibit an LSB that survives time averaging, as shown in Fig.
\ref{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf}. This behavior has been observed in the experimental measurements of \cite{zhang2018lagrangian}, the RANS analysis of \cite{catalano2011rans}, and the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative}. Figure \ref{fig:RANS_contour_streamline_normU_All_uniformk_subplot_foucs.pdf} clearly shows that the baseline prediction yields an LSB of length comparable to the in-house DNS; however, the LSB height is under-predicted. This inaccurate prediction of the LSB height alters the effective shape of the airfoil and hence introduces inaccuracy into the simulation results \cite{gaster1967structure,spalart2000mechanisms}. This reflects the error in the RANS model predictions in the region of the LSB. Compared to the baseline prediction, rather subtle responses to the $\Delta_{k} < 1$ perturbations ($\Delta_{k} = 0.1, 0.25, 0.5$) are observed, which confirms the behavior shown in Fig. \ref{fig:cf_uniformk_line.pdf}. On the other hand, more noticeable changes are observed with the $\Delta_{k} > 1$ perturbations ($\Delta_{k} = 4, 6, 8$), i.e., a clear suppression of the LSB length; in addition, it is clear that the magnitude of the mean velocity increases downstream of the LSB within the attached turbulent boundary layer, as characterized by streamlines more clustered than in the baseline prediction. This confirms the reduction in the magnitude of $C_{f}$ in the aft portion of the LSB, as shown in Fig. \ref{fig:cf_uniformk_line.pdf}. There are two monotonic behaviors: first, the size of the recirculating region decreases monotonically with $\Delta_{k}$ (shallower region of streamlines), showing a tendency to deviate from the in-house DNS contour; second, the magnitude of $\left\langle U \right \rangle/U_{\infty}$ monotonically increases with $\Delta_{k}$ in the attached turbulent boundary layer (more densely clustered streamlines), showing a tendency to approach closer to the in-house DNS contour.
\subsubsection{Reynolds shear stress} Contours of the Reynolds shear stress normalized by the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, from the baseline, the $\Delta_{k}$ perturbations, and the in-house DNS of \cite{zhang2021turbulent} in an $xy$ plane are presented in Fig. \ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}. Also included are the streamlines depicting the recirculation vortex region. From Fig. \ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}, all of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ contour plots show a magnitude of nearly zero in the region near the leading edge and in the outer region of the flow, and a peak is found within the LSB around $X_{T}$, i.e., the bright yellow region, from which the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ decreases as the flow moves further downstream. A similar behavior was also observed by Zhang and Rival \cite{zhang2018lagrangian} in their experimental measurements. Overall, the baseline prediction for the Reynolds shear stress gives a smaller value than the in-house DNS data, especially in the LSB. It should be noted that the contour plots of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ and $\left\langle U \right \rangle/U_{\infty}$ show a similar trend: a lack of sensitivity to the $\Delta_{k} < 1$ ($\Delta_{k} = 0.1, 0.25, 0.5$) perturbations, but a rather strong sensitivity to the $\Delta_{k} > 1$ ($\Delta_{k} = 4, 6, 8$) perturbations in both the transitional and turbulent regions. In general, the Reynolds shear stress contours exhibit a larger response to $\Delta_{k}$ than the mean velocity contours. From Fig.
\ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}, the $\Delta_{k} < 1$ perturbations give a somewhat larger value of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ than the baseline prediction, while the $\Delta_{k} > 1$ perturbations do the opposite. In addition, the $\Delta_{k} < 1$ perturbations slightly reduce the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ as the value of $\Delta_{k}$ is increased, as opposed to the $\Delta_{k} > 1$ perturbations, which greatly reduce the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$. In addition, it is clear that the peak value of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ gradually becomes smaller as the value of $\Delta_{k}$ is increased, in particular for $\Delta_{k} > 1$. This is accompanied by a suppression of the recirculating region and hence a decrease in the turbulence kinetic energy \cite{lengani2014pod}. In Fig. \ref{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}, the $\Delta_{k} > 1$ perturbations tend to approach closer to the in-house DNS data in the turbulent boundary layer, while the $\Delta_{k} < 1$ perturbations tend to result in closer agreement with the in-house DNS data in the LSB. According to Lengani \textit{et al.} \cite{lengani2014pod}, the overall turbulence kinetic energy can be decomposed into large-scale coherent (Kelvin-Helmholtz-induced) and stochastic (turbulence-induced) contributions. With the total energy in the mean flow remaining constant, the $\Delta_{k}$ perturbations in a sense redistribute the Reynolds-shear-stress momentum transfer between the turbulence and the mean flow.
\begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf}} \caption[Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane.]{Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with different values of $\Delta_{k}$: $\Delta_{k} < 1$ ($\Delta_{k} = \big\{ 0.1, 0.25, 0.5 \big\}$) and $\Delta_{k} > 1$ ($\Delta_{k} = \big\{4, 6, 8 \big\}$) in an $xy$ plane. Baseline prediction is provided for reference, and in-house DNS data are included for comparison. Streamlines show the size of the LSB on the suction side of the airfoil.} \label{fig:RANS_contour_streamline_normuv_All_uniformk_subplot_foucs.pdf} \end{figure*} \subsection{Comparison between uniform $\Delta_{k}$ and $M_{k}$} \subsubsection{Skin friction coefficient and pressure coefficient} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/cfcp_uniformkVsOnlyMarker.pdf}} \caption[(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink.]{(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink. Displayed are envelopes for uniform $k^{*}$ perturbations: $\Delta_{k} = 0.1$ (red envelope), $\Delta_{k} = 8$ (gray envelope), and $M_{k}$ (blue envelope). The baseline prediction is provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:cfcp_uniformkVsOnlyMarker.pdf} \end{figure} Distributions of the skin friction coefficient and the pressure coefficient, $C_{f}$ and $C_{p}$, are shown in Figs. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) and (b). 
The in-house DNS \cite{zhang2021turbulent} and ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} are included for comparison. In Figs. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) and (b), an enveloping behavior with respect to the baseline prediction is observed. Figure \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) shows that the effect of $M_{k}$ is more prevalent in the aft portion of the LSB ($0.25 < x/c < 0.28$), as well as in the region downstream of the LSB ($0.28 < x/c < 0.6$). This reflects the effect of spatial variability in $M_{k}$. In addition, the uncertainty bound generated from the $M_{k}$ perturbation sits within the gray envelope of $\Delta_{k} = 8$. Figure \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) clearly shows that the uncertainty bound generated from the $M_{k}$ perturbation is well encompassed by the uniform $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations. It is interesting to note that the $\Delta_{k} = 8$ perturbation overall tends to approach closer to the in-house DNS \cite{zhang2021turbulent} and ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} than the $\Delta_{k} = 0.1$ perturbation does. At the trough shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a), the $\Delta_{k} = 8$ perturbation gives a larger magnitude of $C_{f}$, sitting below the baseline prediction and showing a clear tendency to approach closer to the in-house DNS \cite{zhang2021turbulent} and LES \cite{garmann2013comparative} data. In addition, Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a) clearly shows that the reattachment point is well encompassed by the $\Delta_{k} = 8$ perturbation.
Further downstream of the reattachment point, the uncertainty bounds generated from both the $\Delta_{k} = 8$ and $M_{k}$ perturbations show a tendency to approach closer to the in-house DNS \cite{zhang2021turbulent} and the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative}, while the $\Delta_{k} = 0.1$ perturbation under-predicts the baseline prediction and deviates from the reference data. At the flat spot and the kink ($X_{R}$) followed by a steep drop in the $C_{p}$ profile, the uncertainty bound generated from the $M_{k}$ perturbation is encompassed by the $\Delta_{k} = 8$ perturbation, as shown in the enlarged regions in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (b). In addition, a tendency for the $\Delta_{k} = 8$ and $M_{k}$ perturbations to approach closer to the in-house DNS \cite{zhang2021turbulent} and LES data of \cite{garmann2013comparative} is observed at the flat spot and the kink. On the other hand, it is interesting that the $\Delta_{k} = 0.1$ perturbation shows a tendency to approach the ILES data of \cite{galbraith2010implicit} in the enlarged regions shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (b). In the region downstream of the kink and along the entire pressure side, the effects of both the $\Delta_{k}=0.1$ and $\Delta_{k}=8$ perturbations are almost negligible in magnitude, i.e., the profiles collapse onto the baseline prediction. It should be noted that the baseline prediction overall shows good agreement with the in-house DNS data \cite{zhang2021turbulent} and the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative}, particularly with the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} on the pressure side. This indicates a low level of model form uncertainty in the predictions of $C_{p}$ for these regions.
\subsubsection{Mean velocity field} The $\left\langle U \right \rangle/U_{\infty}$ profiles across the entire boundary layer on the suction side of the airfoil are plotted in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. Overall, the baseline prediction at each location is encompassed by the $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations, exhibiting an enveloping behavior, as shown in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. The baseline prediction for the $\left\langle U \right \rangle/U_{\infty}$ profile at $x/c = 0.15$ ($X_{T}$) matches the in-house DNS profile of \cite{zhang2021turbulent}, except in the regions $y/c|_{o} < 0.007$ (next to the wall) and $y/c|_{o} > 0.011$ (upper portion of the boundary layer), where it gives slightly smaller values of $\left\langle U \right \rangle/U_{\infty}$, as shown in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. At $x/c = 0.2$ (in the aft portion of the LSB), the baseline prediction for the $\left\langle U \right \rangle/U_{\infty}$ profile shows good agreement with the in-house DNS profile in the region of reverse flow ($y/c|_{o} < 0.011$), with a somewhat reduced predicted $\left\langle U \right \rangle/U_{\infty}$ profile in the upper portion of the boundary layer ($0.011 < y/c|_{o} < 0.027$). For the attached turbulent boundary layer, the baseline predictions for the $\left\langle U \right \rangle/U_{\infty}$ profiles at $x/c = 0.3$, $x/c = 0.4$ and $x/c = 0.5$ give smaller values of $\left\langle U \right \rangle/U_{\infty}$ compared to the in-house DNS profiles, and the discrepancies are comparable with each other.
\begin{figure} \centerline{\includegraphics[width=4.0in]{fig_Marker/UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}} \caption[Streamwise mean velocity profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached turbulent boundary layer ($x/c = 0.3, 0.4$ and $0.5$).]{Streamwise mean velocity profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached TBL ($x/c = 0.3, 0.4$ and $0.5$). From left to right are $x/c = 0.15, 0.2, 0.3, 0.4$ and $0.5$, respectively. Displayed are envelopes for two extreme $\Delta_{k}$ perturbations considered in this study: $\Delta_{k} = 0.1$ (red envelope), $\Delta_{k} = 8$ (gray envelope), and $M_{k}$ (blue envelope). The baseline prediction is provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf} \end{figure} Figure \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf} shows that the $\Delta_{k} = 0.1$ perturbation under-predicts the baseline prediction, and the simulation's response to the $\Delta_{k} = 0.1$ perturbation is negligibly small within both the transitional and turbulent boundary layers. This confirms the behavior of the $\Delta_{k} = 0.1$ perturbation in the prediction of $C_{f}$, as shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a). As the flow proceeds downstream from $x/c = 0.15$ to $x/c = 0.3$, the uncertainty bounds generated from the $\Delta_{k} = 0.1$ perturbations gradually increase in size, although the increase is rather subtle. This is consistent with the slight increase in the magnitude of $C_{f}$ relative to the baseline prediction in the aft portion of the LSB. As the flow moves further downstream from $x/c = 0.4$ to $x/c = 0.5$ (in the attached turbulent boundary layer), the $\Delta_{k} = 0.1$ perturbation gradually reduces the size of the uncertainty bounds at a decreasing rate, reflecting the damping effect of the positive values of $C_{f}$ on the mean flow.
On the other hand, the $\Delta_{k} = 8$ perturbation over-predicts the baseline prediction, exhibiting rather noticeable uncertainty bounds, as shown in Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}. As the flow proceeds from $x/c = 0.15$ to $x/c = 0.3$, it is interesting to note that the uncertainty bounds generated from the $\Delta_{k} = 8$ perturbations increase markedly in size, showing a tendency to approach closer to the in-house DNS data. In addition, the effect of the $\Delta_{k} = 8$ perturbation tends to become more prevalent in the near-wall region, which is consistent with the significantly reduced magnitude of $C_{f}$ relative to the baseline prediction for $0.15 < x/c < 0.3$, as shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a). As the flow proceeds further downstream from $x/c = 0.4$ to $x/c = 0.5$, the uncertainty bounds become larger in the upper section of the mean velocity profiles, while remaining relatively small in the near-wall region owing to the large positive values of $C_{f}$ at the crest shown in Fig. \ref{fig:cfcp_uniformkVsOnlyMarker.pdf} (a), reflecting the weakening propagation of the effect of the positive $C_{f}$ values deeper into the outer boundary layer. Unlike the $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations, $M_{k}$ identifies the untrustworthy regions in which uncertainty will be injected. In Fig. \ref{fig:UQ_uniformkvsOnlyMarker_U_RepeatOn_Tu0027.pdf}, the uncertainty bounds generated from the $M_{k}$ perturbations in general over-predict the baseline prediction and sit within the uncertainty bounds generated from the $\Delta_{k} = 8$ perturbations. It should be noted that the sole effect of the $M_{k}$ perturbation on the predicted mean velocity profile is rather small. In section \ref{Sec:compound}, the $M_{k}$ perturbation is compounded with the eigenvalue perturbation ($1c$, $2c$, $3c$) to construct more effective uncertainty bounds.
\subsubsection{Reynolds shear stress} The predicted profiles of the Reynolds shear stress normalized by the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, are shown in Fig. \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf}. Under the $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ perturbations, an enveloping behavior with respect to the baseline prediction can be observed. Figure \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf} shows that the baseline prediction for the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile at $x/c = 0.15$ significantly over-predicts the in-house DNS profile, implying a higher level of momentum transfer due to the Reynolds shear stress. In the aft portion of the LSB and downstream of the LSB near the reattachment point ($X_{R}$), the predictions for the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles at $x/c = 0.2$ and $x/c = 0.3$ exhibit a parabolic-arch shape, revealing the same effect as the in-house DNS data, i.e., a strong increase in the Reynolds shear stress around the peak of the arch. The magnitude of the increase is much greater for the in-house DNS data, likely because the LSB height is larger than in the baseline prediction. Further downstream of the LSB, the baseline predictions for the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles at $x/c = 0.4$ and $x/c = 0.5$ (in the attached turbulent boundary layer) show relatively good agreement with the in-house DNS data, although some discrepancies exist in the regions next to the wall and in the upper section of the Reynolds shear stress profiles.
\begin{figure} \centerline{\includegraphics[width=3.7in]{fig_Marker/UQ_uniformk_uv_RepeatOn_Tu0027.pdf}} \caption[Reynolds shear stress profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached turbulent boundary layer ($x/c = 0.3, 0.4$ and $0.5$).]{Reynolds shear stress profiles in the aft portion of the LSB ($x/c = 0.15$ and $0.2$) and in the attached TBL ($x/c = 0.3, 0.4$ and $0.5$). From left to right are $x/c = 0.15, 0.2, 0.3, 0.4$ and $0.5$, respectively. Displayed are envelopes for two extreme $\Delta_{k}$ perturbations considered in this study: $\Delta_{k} =0.1$ (red envelope), $\Delta_{k} = 8$ (gray envelope), and $M_{k}$ (blue envelope). The baseline prediction is provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf} \end{figure} In Fig. \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf}, the $\Delta_{k} = 0.1$ perturbation increases the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile compared to the baseline prediction. In the aft portion of the LSB ($x/c = 0.15$, $x/c = 0.2$ and $x/c = 0.3$), the $\Delta_{k} = 0.1$ perturbations retain the parabolic-arch shape, with a peak near the maximum height of the arch that gradually decays to zero in both directions, toward the wall and toward the OBL. In addition, the $\Delta_{k} = 0.1$ perturbations increase the momentum transfer due to the Reynolds shear stress compared to the baseline prediction. Consequently, the $\Delta_{k} = 0.1$ perturbations tend to approach closer to the in-house DNS data except at $x/c = 0.15$, where a deviation from the in-house DNS data is observed. As the flow proceeds further downstream within the attached turbulent boundary layer ($x/c = 0.4$ and $x/c = 0.5$), the effect of the $\Delta_{k} = 0.1$ perturbation gradually diminishes with $x/c$, with some of the in-house DNS data being encompassed.
On the other hand, the $\Delta_{k} = 8$ perturbation decreases the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile compared to the baseline prediction, with the size of the uncertainty bound significantly larger than that for the $\Delta_{k} = 0.1$ perturbation, reflecting the simulation's much stronger response to the $\Delta_{k} = 8$ perturbation. Likewise, a parabolic-arch shape and a behavior similar to that of the $\Delta_{k} = 0.1$ perturbations are observed for the $\Delta_{k} = 8$ perturbations in the aft portion of the LSB ($x/c = 0.15$, $x/c = 0.2$ and $x/c = 0.3$) as well: peaking around the maximum height of the arch and gradually decreasing in magnitude toward the wall and the OBL. As a result, the $\Delta_{k} = 8$ perturbations reduce the momentum transfer due to the Reynolds shear stress to a great extent around the peak of the arch. This shows a tendency for the $\Delta_{k} = 8$ perturbations to deviate from the in-house DNS data except at $x/c = 0.15$, where they tend to approach closer to the in-house DNS profile, which lies well below the baseline prediction. Within the attached turbulent boundary layer ($x/c = 0.4$ and $x/c = 0.5$), an important observation for the $\Delta_{k} = 8$ perturbation is that the uncertainty bounds remain zero not only at the wall but also for some distance above it, which violates the ``rule'' that all Reynolds stresses decrease to zero at the wall surface due to the no-slip condition \cite{versteeg2007introduction}. This marks ``over-perturbation'' with $\Delta_{k} = 8$ and is not physically realizable. Since few studies have been conducted to determine the upper bound of $k^{*}$, this study sheds light on a possible way of determining that upper bound using the Reynolds shear stress results.
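This near-wall criterion lends itself to an automated diagnostic: if the perturbed Reynolds shear stress is numerically zero over a finite layer above the wall (rather than only at the wall itself), the perturbation has been pushed too far. The sketch below is an illustration under assumed profile data; the function name, tolerance, and number of sampled points are hypothetical and not part of the authors' workflow.

```python
import numpy as np

def violates_near_wall_realizability(y, uv, tol=1e-12, n_wall=5):
    """Flag 'over-perturbation' of the Reynolds shear stress profile.

    y  : wall-normal coordinates, with y[0] = 0 at the wall, increasing outward
    uv : perturbed -<u1 u2> profile sampled at the same points
    Returns True if the first n_wall points ABOVE the wall are all
    (numerically) zero, i.e. the stress is suppressed over a finite layer
    rather than vanishing only at the wall itself.
    """
    off_wall = np.abs(uv[1:n_wall + 1])
    return bool(np.all(off_wall < tol))

# hypothetical profiles on a coarse wall-normal grid
y = np.linspace(0.0, 1.0, 11)
healthy = y * (1.0 - y)                                    # zero only at the wall
over_perturbed = np.where(y < 0.55, 0.0, y * (1.0 - y))    # zero layer above wall
```

A sweep over candidate $\Delta_k$ values could then retain only those for which this check returns `False`, giving an empirical upper bound on the perturbation magnitude.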
Therefore, the maximum magnitude of $\Delta_{k}$ must ensure that the Reynolds stresses behave in a physically realizable manner in the near-wall region. The $M_{k}$ perturbation in general under-predicts the baseline prediction across the suction side, generally sitting within the gray envelope, with a subtle shift toward the red envelope discernible in the lower section of the Reynolds shear stress profiles for $x/c = 0.2$ and $x/c = 0.3$. Within the attached turbulent boundary layer ($x/c = 0.4$ and $x/c = 0.5$), the uncertainty bounds generated from the $M_{k}$ perturbation remain constantly below the baseline prediction, which is consistent with the uniform magnitude of $M_{k} = 2.8$ used there. It should be noted that the simulation's response to the perturbation is in general stronger for the Reynolds shear stress profile than for the mean velocity profile. This indicates that the level of sensitivity to the $\Delta_{k}$ perturbation varies with the QoI being observed. In Fig. \ref{fig:UQ_uniformk_uv_RepeatOn_Tu0027.pdf}, the $M_{k}$ function successfully avoids over-perturbation through strict comparison with the available high-fidelity data, ensuring that only physically realistic perturbations are considered. \subsection{Combining $M_{k}$ with $1c$, $2c$, and $3c$}\label{Sec:compound} \subsubsection{Skin friction coefficient} Distributions of the skin friction coefficient and the pressure coefficient, $C_{f}$ and $C_{p}$, are shown in Figs. \ref{fig:cfcp_marker.pdf} (a) and (b), respectively. Also included are the in-house DNS \cite{zhang2021turbulent} and the ILES/LES data of \cite{galbraith2010implicit} and \cite{garmann2013comparative} for comparison. Integrating the $M_{k}$ perturbation with the eigenvalue perturbation ($1c$, $2c$ and $3c$) using Eqs. \ref{Eqn_Rij_perturbed}, \ref{Eqn:Markerfunc} and \ref{Eqn:kstar} yields a compound effect, namely, $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$.
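The compound perturbation can be sketched compactly. The snippet below illustrates one common form of the eigenspace perturbation toward the $1c$/$2c$/$3c$ limiting states, following the framework of Emory \textit{et al.} \cite{emory2013modeling}, combined with a marker-scaled turbulence kinetic energy; the function name and the details of the reconstruction are illustrative assumptions and may differ from the exact form of Eq. \ref{Eqn_Rij_perturbed} used here.

```python
import numpy as np

# Limiting states of the anisotropy eigenvalues (1c, 2c, 3c), expressed as
# eigenvalues of the trace-free anisotropy tensor b_ij, sorted descending.
LIMIT_STATES = {
    "1c": np.array([2.0 / 3.0, -1.0 / 3.0, -1.0 / 3.0]),
    "2c": np.array([1.0 / 6.0, 1.0 / 6.0, -1.0 / 3.0]),
    "3c": np.array([0.0, 0.0, 0.0]),
}

def perturbed_reynolds_stress(R, k, M_k, state, delta_B=1.0):
    """Compound perturbation sketch: shift the anisotropy eigenvalues toward
    a limiting state by delta_B, and rescale the TKE by the marker M_k."""
    b = R / (2.0 * k) - np.eye(3) / 3.0          # anisotropy tensor
    lam, V = np.linalg.eigh(b)                   # ascending eigenvalues
    lam, V = lam[::-1], V[:, ::-1]               # sort descending
    lam_star = lam + delta_B * (LIMIT_STATES[state] - lam)
    k_star = M_k * k                             # marker-scaled TKE
    b_star = V @ np.diag(lam_star) @ V.T         # perturbed anisotropy
    return 2.0 * k_star * (np.eye(3) / 3.0 + b_star)

# demo: a realizable diagonal stress tensor with k = 0.5
R = np.diag([0.5, 0.3, 0.2])
R_3c = perturbed_reynolds_stress(R, k=0.5, M_k=2.0, state="3c", delta_B=1.0)
```

With `delta_B=1.0` and the `"3c"` (isotropic) state the anisotropy vanishes entirely, so the returned tensor is simply $(2/3)\,k^{*}\,\delta_{ij}$, while `delta_B=0.0` with `M_k=1.0` recovers the input tensor.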
Also included are the eigenvalue perturbations ($1c$ and $3c$) as a reference for the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations. In the aft portion of the LSB, Fig. \ref{fig:cfcp_marker.pdf} (a) clearly shows that the $1c\_M_{k}$ perturbation decreases the magnitude of $C_{f}$ more than the $2c\_M_{k}$ perturbation does compared to the baseline prediction, while the $3c\_M_{k}$ perturbation results in an uncertainty bound that almost overlaps the one generated from the $3c$ perturbation, indicating the simulation's low sensitivity to the $3c\_M_{k}$ perturbation. In addition, both uncertainty bounds generated from the $3c\_M_{k}$ and $3c$ perturbations in general sit slightly below the baseline prediction except at the trough around $x/c = 0.2$ (in the aft portion of the LSB), where they sit somewhat above the baseline prediction. As a consequence, an enveloping behavior with respect to the baseline prediction is observed. On the other hand, the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations lie significantly above the baseline prediction, encompassing the reference data for $X_{R}$, as well as the steep rise that follows $X_{R}$. Interestingly, it is clear that this promising increase associated with the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations is not a simple sum of the $M_{k}$ and $1c$/$2c$ uncertainty bounds; rather, a ``synergy'' has developed. Moreover, the synergy associated with the $1c\_M_{k}$ perturbation encompasses the gap between the baseline prediction and the reference data in the aft portion of the LSB as well as at the crest. Besides, it is interesting to note that the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations tend to retain the shape of the $C_{f}$ profile at the crest for $0.3 < x/c < 0.4$, with the $1c\_M_{k}$ perturbation effectively encompassing the in-house DNS data of \cite{zhang2021turbulent}.
This confirms the effect of spatial variability in $M_{k}$. As the flow proceeds further downstream ($0.4 < x/c < 0.6$), a rapid collapse of the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations is observed. This confirms the uniform magnitude of $M_{k}$ used in the region $0.4 < x/c < 0.6$. On the other hand, the $3c\_M_{k}$ and $3c$ perturbations become almost indistinguishable from each other, lying somewhat below the baseline prediction across the entire suction side, except for a slight decrease associated with the $3c\_M_{k}$ perturbation in the region $0.35 < x/c < 0.4$. \begin{figure} \centerline{\includegraphics[width=3.5in]{fig_Marker/cfcp_marker.pdf}} \caption[(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink followed by a sharp drop of $C_{p}$.]{(a) Profile of skin friction coefficient and (b) pressure coefficient with enlarged regions at the flat spot and the kink followed by a sharp drop of $C_{p}$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope). $\Delta_{B1}$ stands for $\Delta_B = 1.0$. Profiles of baseline prediction and eigenvalue perturbations ($1c$ and $3c$) are provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:cfcp_marker.pdf} \end{figure} In Fig. \ref{fig:cfcp_marker.pdf} (b), at the flat spot the $1c\_M_{k}$ perturbation increases the magnitude of $C_{p}$ more than the $2c\_M_{k}$ perturbation does compared to the baseline prediction. Both the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations show a tendency to approach the in-house DNS \cite{zhang2021turbulent} and LES data of \cite{garmann2013comparative}, and sit within the uncertainty bound generated from the $1c$ perturbation.
Interestingly, there is no discernible synergy behavior at the flat spot; the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations instead tend to reduce somewhat in size compared to the $1c$ and $2c$ perturbations. On the other hand, the uncertainty bounds generated from the $3c\_M_{k}$ and $3c$ perturbations become almost indistinguishable at the flat spot, sitting slightly below the baseline prediction in a trend of approaching the ILES data of \cite{galbraith2010implicit}. At the kink around $X_{R}$, the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predict the baseline prediction and tend to approach closer to the reference data, while the uncertainty bound for the $3c\_M_{k}$ perturbation over-predicts the baseline prediction and retains the behavior of collapsing onto the $3c$ perturbation, showing a trend of deviating from the reference data. As a consequence, a discernible enveloping behavior with respect to the baseline prediction is observed at both the flat spot and the kink around $X_{R}$, where most of the uncertainty is generated, as shown in Fig. \ref{fig:cfcp_marker.pdf} (b). In addition, the collapsing behavior of the $3c\_M_{k}$ and $3c$ perturbations at the flat spot and the kink indicates the simulation's low sensitivity to the $3c\_M_{k}$ perturbation. Compared to $C_{f}$, it is clear that $C_{p}$ is overall less sensitive to all kinds of perturbations. This is because the wall pressure is determined by the freestream, which is only modified minutely by the eigenvalue perturbations \cite{emory2013modeling}. In addition, this reflects that the degree of response to the $\Delta_{k}$ perturbation varies with the QoI being observed. On the pressure side, the simulation's response to all kinds of perturbations is rather small, indicating a low level of model form uncertainty and hence high trustworthiness in the baseline prediction for $C_{p}$.
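The mechanics behind the $1c$, $2c$ and $3c$ eigenvalue perturbations referenced throughout can be illustrated with a short sketch. The following is a minimal NumPy illustration, not the authors' OpenFOAM implementation: the anisotropy tensor of a given Reynolds-stress tensor is mapped to the barycentric triangle, shifted a fraction $\Delta_{B}$ toward one of the limiting-state corners, and mapped back. The corner coordinates follow the standard barycentric-map construction, and the `delta_k` factor is our own stand-in for a marker-weighted amplitude perturbation.

```python
import numpy as np

# Corners of the barycentric triangle for the three limiting states.
CORNERS = {"1c": np.array([1.0, 0.0]),
           "2c": np.array([0.0, 0.0]),
           "3c": np.array([0.5, np.sqrt(3.0) / 2.0])}

def eigen_perturb(R, k, target="1c", delta_b=1.0, delta_k=0.0):
    """Shift the anisotropy eigenvalues of R toward a limiting state.

    R       : 3x3 Reynolds-stress tensor (symmetric, trace = 2k)
    delta_b : eigenvalue-perturbation magnitude in [0, 1]
    delta_k : relative amplitude perturbation (stand-in for a
              marker-weighted Delta_k in the compound perturbations)
    """
    # Anisotropy tensor and its eigendecomposition, eigenvalues descending.
    b = R / (2.0 * k) - np.eye(3) / 3.0
    lam, vec = np.linalg.eigh(b)
    lam, vec = lam[::-1], vec[:, ::-1]

    # Barycentric weights C_1c, C_2c, C_3c (they sum to one by construction).
    c = np.array([lam[0] - lam[1], 2.0 * (lam[1] - lam[2]), 3.0 * lam[2] + 1.0])
    x = c @ np.array([CORNERS["1c"], CORNERS["2c"], CORNERS["3c"]])

    # Move the barycentric point a fraction delta_b toward the target corner.
    x = x + delta_b * (CORNERS[target] - x)

    # Recover the weights of the shifted point, then the new eigenvalues.
    A = np.column_stack([CORNERS["1c"] - CORNERS["3c"],
                         CORNERS["2c"] - CORNERS["3c"]])
    c12 = np.linalg.solve(A, x - CORNERS["3c"])
    c_new = np.array([c12[0], c12[1], 1.0 - c12.sum()])
    lam3 = (c_new[2] - 1.0) / 3.0
    lam2 = lam3 + 0.5 * c_new[1]
    lam1 = lam2 + c_new[0]

    # Rebuild the stress tensor with perturbed eigenvalues and amplitude.
    b_new = vec @ np.diag([lam1, lam2, lam3]) @ vec.T
    return 2.0 * k * (1.0 + delta_k) * (b_new + np.eye(3) / 3.0)

# Example: pushing fully toward the 3c corner recovers an isotropic stress.
R = np.diag([2.0, 0.7, 0.3])          # trace = 2k with k = 1.5
print(eigen_perturb(R, 1.5, "3c", delta_b=1.0))
```

With `delta_b = 1.0` toward the $3c$ corner the returned stress is fully isotropic, consistent with the isotropy of the $3c$ limiting state; with `delta_k = 0` the trace $2k$ is preserved, since the reconstructed anisotropy tensor remains trace-free.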
It should be noted that because the $3c$ perturbation retains the isotropic nature of the turbulent viscosity model, it exerts limited influence on the perturbed results \cite{mishra2019theoretical}. This is well reflected in the smaller size of the uncertainty bound generated from the $3c$ perturbation compared to the $1c$ and $2c$ perturbations. Such inefficacy of the $3c$ perturbation has been observed by Emory \textit{et al.} \cite{emory2013modeling} as well. Importantly, this inefficacy persists when compounded with $M_{k}$, which might partly explain the collapse of the $3c\_M_{k}$ profile onto the $3c$ profile observed in Figs. \ref{fig:cfcp_marker.pdf} (a) and (b). Moreover, this collapsing behavior applies not only to the results for $C_{f}$ and $C_{p}$ but also to the mean velocity profile and the turbulence quantities, as can be observed in the following sections. \subsubsection{Mean velocity field} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contourf_normU_All_subplot_outlook.pdf}} \caption[Contours of $\left\langle U \right \rangle/U_{\infty}$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane.]{Contours of $\left\langle U \right \rangle/U_{\infty}$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane. Isolines of the mean streamwise velocity are superimposed on the contour plots.
The contour of baseline prediction is provided for reference, and the contour of in-house DNS data \cite{zhang2021turbulent} is included for comparison.} \label{fig:RANS_contourf_normU_All_subplot_outlook.pdf} \end{figure*} \begin{figure} \centerline{\includegraphics[width=3.7in]{fig_Marker/markerfunc_U_five.pdf}} \caption[Profile of $\left\langle U \right \rangle/U_{\infty}$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $\left\langle U \right \rangle/U_{\infty}$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope).]{Profile of $\left\langle U \right \rangle/U_{\infty}$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $\left\langle U \right \rangle/U_{\infty}$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope). $\Delta_{B1}$ stands for $\Delta_B = 1.0$. Profiles of baseline prediction and eigenvalue perturbations ($1c$ and $3c$) are provided for reference. $\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:markerfunc_U_five.pdf} \end{figure} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/arrows_1c2c3c_Base_Marker_DNS_Marker.pdf}} \caption[Contours of normalized mean velocity $\left\langle U \right \rangle/U_{\infty}$ with in-plane velocity vectors superimposed on the contours in an $xy$ plane.]{Contours of normalized mean velocity $\left\langle U \right \rangle/U_{\infty}$ with in-plane velocity vectors superimposed on the contours in an $xy$ plane. The region in the vicinity of the wall is enlarged to highlight the flow behavior in the LSB, as well as in the turbulent region right downstream of the LSB. 
A focus on a section of the airfoil suction side is considered: $0.14 < x/c < 0.44$.} \label{fig:arrows_1c2c3c_Base_Marker_DNS_Marker.pdf} \end{figure*} Contours of the mean velocity normalized by the freestream velocity, $\left\langle U \right \rangle/U_\infty$, from the baseline, the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations, the eigenvalue perturbations ($1c$, $2c$ and $3c$), the $M_{k}$ perturbation and the in-house DNS of \cite{zhang2021turbulent} in an $xy$ plane are shown in Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}. In Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}, all of the $\left\langle U \right \rangle/U_\infty$ contours show a recirculating region, i.e., the eye-like green region, where negative velocity values are present. In addition, the mean velocity contour generated from the $M_{k}$ perturbation shows a shorter LSB and a slightly increased magnitude of $\left\langle U \right \rangle/U_\infty$ in the region downstream of the reattachment point, $0.3 < x/c < 0.6$, in which the untrustworthy zones are identified. This indicates that the $M_{k}$ perturbation tends to suppress the LSB compared to the baseline prediction. This reduces the turbulence kinetic energy contained in the large-scale coherent structures \cite{lengani2014pod}, implying increased mean-flow energy in the vicinity of the LSB ($0.3 < x/c < 0.6$) and therefore an increased magnitude of $\left\langle U \right \rangle/U_\infty$ in this region, as shown in Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}. For the $1c$ and $2c$ eigenvalue perturbations, the contours show a reduction in the length of the LSB compared to the baseline prediction, which results in an overall increase in the mean-flow magnitude further downstream of the reattachment point, while the $3c$ perturbation does the opposite.
In addition, it is clear that the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations increase the magnitude of $\left\langle U \right \rangle/U_\infty$ more than the $1c$ and $2c$ perturbations do compared to the baseline prediction, which is consistent with the greatly reduced magnitude of $C_{f}$ in the aft portion of the LSB, while the $3c\_M_{k}$ perturbation remains at nearly the same magnitude as the $3c$ perturbation, which is consistent with the collapse of the $C_{f}$ profiles generated from the $3c\_M_{k}$ and $3c$ perturbations, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a). This indicates a weak compound effect of the $3c\_M_{k}$ perturbation. Compared to the baseline prediction, both the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations shorten the region of reverse flow (deviating from the in-house DNS data) while increasing the mean-flow magnitude in the attached turbulent boundary layer (approaching closer to the in-house DNS data); on the other hand, the $3c\_M_{k}$ perturbation bolsters the region of reverse flow, showing a tendency to approach closer to the in-house DNS data, while showing a reduction in the magnitude of $\left\langle U \right \rangle/U_\infty$ in the attached turbulent boundary layer, causing a deviation from the in-house DNS data. The predictions for the mean velocity profile normalized by $U_{\infty}$, i.e., $\left\langle U \right \rangle/U_{\infty}$, are presented in Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e). The in-house DNS data of \cite{zhang2021turbulent} are included for comparison. In Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e), the $1c$ and $3c$ eigenvalue perturbations are used as a reference for the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations. From Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e), an enveloping behavior with respect to the baseline prediction is observed, with the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations leading the baseline prediction and the $3c\_M_{k}$ perturbation lagging behind.
A similar behavior of the $1c$ and $3c$ perturbations with respect to the baseline prediction was also observed by Luis \textit{et al.} \cite{cremades2019reynolds} in their numerical study of a turbulent flow over a backward-facing step. In addition, the $3c\_M_{k}$ profile tends to collapse onto the $3c$ profile, reflecting the simulation's low sensitivity to the $3c\_M_{k}$ perturbation, which is consistent with the behavior shown in Fig. \ref{fig:RANS_contourf_normU_All_subplot_outlook.pdf}. At $x/c = 0.15$ ($X_{T}$), the uncertainty bound generated from the $1c\_M_{k}$ perturbation increases the magnitude of the mean velocity profile more than the $2c\_M_{k}$ perturbation does in both the region of reverse flow ($U/U_{\infty} < 0$) for $0 < y/c|_{o} < 0.007$ and the upper portion of the boundary layer for $0.011 < y/c|_{o} < 0.023$, showing a tendency to approach closer to the in-house DNS data. On the other hand, the $3c\_M_{k}$, $3c$ and baseline profiles collapse, indicating a type of similarity. This might be partly explained by the almost identical values of $C_{f}$ found for the $3c\_M_{k}$, $3c$ and baseline profiles around $X_{T}$, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a). It should be noted that the uncertainty bounds generated from the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations and the baseline prediction are negligibly small for $0.007 < y/c|_{o} < 0.011$, i.e., all collapse onto a single curve, which shows good agreement with the in-house DNS data, as shown in Fig. \ref{fig:markerfunc_U_five.pdf} (a). This reveals a low level of model form uncertainty and hence relatively high trustworthiness in this region. As the flow moves further downstream to $x/c = 0.2$ (the aft portion of the LSB), the effect of the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations has permeated the entire boundary layer, showing a tendency to approach closer to the in-house DNS data in the upper portion of the boundary layer.
Moreover, the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations over-predict the baseline prediction and lie somewhat above the $1c$ perturbation in the region $0.007 < y/c|_{o} < 0.011$, overall showing closer agreement with the in-house DNS data, as shown in Fig. \ref{fig:markerfunc_U_five.pdf} (b). Again, this reflects the effect of spatial variability in $M_{k}$. Note that the baseline prediction overall shows good agreement with the in-house DNS data in the region of reverse flow at $x/c = 0.2$, where the baseline prediction and the $3c$ and $3c\_M_{k}$ perturbations collapse onto a single curve. The almost identical magnitude of $C_{f}$ retained by the $3c\_M_{k}$, $3c$ and baseline predictions at $x/c = 0.2$, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a), might partly explain this type of similarity. Away from the wall, a collapsing behavior is again observed for the $3c\_M_{k}$ and $3c$ perturbations, with a slight offset from the baseline prediction. Note that the $1c$ perturbation is well encompassed by $1c\_M_{k}$ in the aft portion of the LSB, as shown in Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (b). This is consistent with the larger values of $C_{f}$ for the $1c\_M_{k}$ perturbation in this region. At $x/c = 0.3$ (downstream of the LSB near $X_{R}$), $x/c = 0.4$ and $x/c = 0.5$ (in the reattached turbulent boundary layer), the uncertainty bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations in general tend to approach closer to the in-house DNS data, with the perturbation effect gradually deteriorating further downstream. This is consistent with the gradual reduction in the positive values of $C_{f}$ as the flow moves further downstream of $X_{R}$, as shown in Fig. \ref{fig:cfcp_marker.pdf} (a). Also, the difference between the $1c$ and $1c\_M_{k}$ perturbations becomes smaller, which is consistent with the comparable magnitude of $C_{f}$ in the region of the attached turbulent boundary layer, as shown in Fig.
\ref{fig:cfcp_marker.pdf} (a). On the other hand, a collapse is also observed for the $3c\_M_{k}$ and $3c$ perturbations at $x/c = 0.4$ and $x/c = 0.5$, which confirms the almost identical values of $C_{f}$ retained by the $3c\_M_{k}$ and $3c$ perturbations shown in Fig. \ref{fig:cfcp_marker.pdf} (a). It is interesting to note that the $1c$ and $2c$ perturbations respond favorably to $M_{k}$, while the $3c$ perturbation remains almost immune to it. Figure \ref{fig:arrows_1c2c3c_Base_Marker_DNS_Marker.pdf} shows contours of the mean velocity, with the region of reverse flow enlarged. The region of reverse flow is evidenced by the velocity vectors added in the LSB. The baseline prediction is provided for reference. Also included are the in-house DNS data of \cite{zhang2021turbulent} for comparison. Compared to the in-house DNS data, Fig. \ref{fig:arrows_1c2c3c_Base_Marker_DNS_Marker.pdf} clearly shows that the baseline prediction shifts the region of reverse flow in the upstream direction. Within the LSB (green region), the velocity vectors for the $1c$ and $2c$ perturbations clearly indicate a subdued reverse-flow field, resulting in a shorter LSB and hence better agreement with the DNS data; the opposite is true for the $3c$ perturbation. For the attached turbulent boundary layer, the velocity vectors indicate an overall increase in the mean velocity field for the $1c$ and $2c$ perturbations, showing a tendency of approaching the DNS mean flow field, while the $3c$ perturbation shows an overall reduction in the mean velocity field. Integrating $M_{k}$ into the $1c$, $2c$ and $3c$ perturbations tends to suppress the size of the LSB but increases the mean flow field downstream of the LSB. Among these perturbations, $1c\_M_{k}$ increases the mean flow field more than $2c\_M_{k}$ in the attached turbulent boundary layer, contributing to the closest approach to the in-house DNS data.
\subsubsection{Reynolds shear stress} \begin{figure*} \centerline{\includegraphics[width=5.5in]{fig_Marker/RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}} \caption[Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane.]{Contours of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ with $1c\_M_{k}$, $2c\_M_{k}$, $3c\_M_{k}$, $1c$, $2c$, $3c$ and $M_{k}$ perturbations in an $xy$ plane. Isolines of the Reynolds shear stress are superimposed on the contour plots. The contour of baseline prediction is provided for reference, and the contour of in-house DNS data \cite{zhang2021turbulent} is included for comparison.} \label{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf} \end{figure*} \begin{figure} \centerline{\includegraphics[width=3.7in]{fig_Marker/markerfunc_uv_five.pdf}} \caption[Profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope).]{Profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the aft portion of the LSB for (a) $x/c = 0.15$ and (b) $x/c = 0.2$; and profile of $-\left\langle u_{1}u_{2} \right\rangle/U_{\infty}^2$ in the attached TBL for (c) $x/c = 0.3$, (d) $x/c = 0.4$ and (e) $x/c = 0.5$. Displayed are uncertainty bounds for $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations (red envelope). $\Delta_{B1}$ stands for $\Delta_B = 1.0$. Profiles of baseline prediction and eigenvalue perturbations ($1c$ and $3c$) are provided for reference. 
$\circ$ in-house DNS data \cite{zhang2021turbulent}.} \label{fig:markerfunc_uv_five.pdf} \end{figure} Contours of the Reynolds shear stress normalized by the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, from the baseline, the $1c\_M_{k}$, $2c\_M_{k}$ and $3c\_M_{k}$ perturbations, the eigenvalue perturbations ($1c$, $2c$, $3c$) and the $M_{k}$ perturbation in an $xy$ plane are shown in Fig. \ref{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}. Also included are the in-house DNS data of \cite{zhang2021turbulent} for comparison. In Fig. \ref{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}, all of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ contour plots show a peak around $X_{T}$, i.e., the bright yellow region; downstream of the peak, a gradual reduction in the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ is observed. From Fig. \ref{fig:RANS_contourf_normuv_All_1c2c3cOnlyMarkerfunc_subplot_foucs.pdf}, the $M_{k}$ perturbation overall reduces the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ in both the transitional and turbulent regions compared to the baseline prediction. It is also clear that the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations further reduce the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ compared to the $1c$ and $2c$ perturbations, while the $3c\_M_{k}$ perturbation remains at nearly the same magnitude as the $3c$ perturbation.
In addition, the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predict the baseline prediction and in general tend to approach closer to the in-house DNS data in the attached turbulent boundary layer, although an under-prediction of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ is observed in the LSB; on the other hand, the $3c\_M_{k}$ perturbation over-predicts the baseline prediction, showing better agreement with the in-house DNS data within the LSB. The predicted profiles of the Reynolds shear stress normalized by the freestream velocity squared, $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$, are shown in Figs. \ref{fig:markerfunc_uv_five.pdf} (a) - (e). Also included are the in-house DNS data of \cite{zhang2021turbulent} for comparison, as well as the $1c$ and $3c$ eigenvalue perturbations used as a reference for the ${1c}\_M_{k}$, ${2c}\_M_{k}$ and ${3c}\_M_{k}$ perturbations. Figures \ref{fig:markerfunc_uv_five.pdf} (a) - (e) show that the baseline prediction is well enveloped by the uncertainty bounds generated from the ${1c}\_M_{k}$, ${2c}\_M_{k}$ and ${3c}\_M_{k}$ perturbations. In addition, the ${1c}\_M_{k}$ and ${2c}\_M_{k}$ perturbations reduce the magnitude of the predicted $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles compared to the baseline prediction, while the ${3c}\_M_{k}$ perturbation does the opposite. A similar behavior of the $1c$ and $3c$ perturbations with respect to the baseline prediction was also observed by Luis \textit{et al.} \cite{cremades2019reynolds} in their numerical study of a turbulent flow over a backward-facing step. Figures \ref{fig:markerfunc_uv_five.pdf} (a) - (e) show that the simulation's sensitivity to the $3c\_M_{k}$ perturbation is rather low, with the ${3c}\_M_{k}$ profile nearly collapsing onto the $3c$ profile. A similar behavior is also observed in Figs. \ref{fig:markerfunc_U_five.pdf} (a) - (e). At $x/c = 0.15$ ($X_{T}$), Fig.
\ref{fig:markerfunc_uv_five.pdf} (a) shows that the $1c\_M_{k}$ perturbation results in a rather strong reduction in the magnitude of $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ across the entire boundary layer compared to the baseline prediction, showing a tendency to approach closer to the in-house DNS data, and a ``synergy behavior'' is observed. On the other hand, the $3c\_M_{k}$ perturbation tends to deviate from the in-house DNS data, indicating a weak response to the $M_{k}$ perturbation. In Figs. \ref{fig:markerfunc_uv_five.pdf} (b) - (e), the baseline predictions and in-house DNS data are similar in shape: the convexity of the profile strongly increases in the vicinity of the wall and then relaxes as the distance from the wall increases, with some discrepancies that overall mark the under-prediction of the momentum transfer due to the Reynolds shear stress within both the transitional and turbulent boundary layers. An important observation follows: the synergy behavior seems active only for the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations. As the flow moves further downstream to $x/c = 0.2$ (the aft portion of the LSB) and $x/c = 0.3$ (downstream of the LSB near $X_{R}$), the $1c\_M_{k}$ perturbation decreases the predicted Reynolds shear stress more than the $1c$ perturbation does in the outer portion of the boundary layer, indicating a deviation from the in-house DNS data, while showing a slight reduction at $x/c = 0.2$ and no discernible change (a collapse onto the $1c$ profile) at $x/c = 0.3$ in the near-wall region, as shown in Figs. \ref{fig:markerfunc_uv_five.pdf} (b) and (c). This reflects the spatial variability in $M_{k}$.
On the other hand, the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles generated from the $3c\_M_{k}$ perturbation nearly collapse onto those for the $3c$ perturbation at $x/c = 0.2$ and $x/c = 0.3$; consequently, the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles generated from the $3c\_M_{k}$ perturbation tend to approach closer to the in-house DNS data at $x/c = 0.2$ and $x/c = 0.3$, as shown in Figs. \ref{fig:markerfunc_uv_five.pdf} (b) and (c). At $x/c = 0.4$ and $x/c = 0.5$ (in the attached turbulent boundary layer), a synergy behavior is again observed for the $1c\_M_{k}$ perturbation across the entire boundary layer, which enhances the deviation from the in-house DNS data at $x/c = 0.4$ while encompassing the in-house DNS data at $x/c = 0.5$. On the other hand, a collapse is again observed for the $3c\_M_{k}$ and $3c$ perturbations. Besides, the uncertainty bounds generated from the $2c\_M_{k}$ and $3c\_M_{k}$ perturbations successfully encompass the in-house DNS data in the lower portion of the attached turbulent boundary layer at $x/c = 0.4$ and $x/c = 0.5$, although a small discrepancy is present in the region next to the wall. \section{Conclusions} The goal of the present study was to advance our understanding of a physics-based methodology to quantify transition model-form uncertainty in RANS predictions of unsteady flow over an SD7003 airfoil. The method is based on the framework proposed in the study of \cite{emory2013modeling}, which introduces perturbations to a decomposition of the Reynolds stress tensor, namely, to the amplitude of the Reynolds stresses and the eigenvalues of the anisotropy tensor. In this study, the methodology was fully implemented in C++ within OpenFOAM. Based on the baseline predictions for $C_{f}$ and $C_{p}$, we presented analyses to locate the untrustworthy region, which was further divided into four zones to cover both the LSB and the turbulent flow region further downstream.
A novel regression-based marker function was developed to inject an accurate level of the amplitude perturbation into the identified untrustworthy region. We presented analyses to understand the effect of the uniform amplitude perturbation on the skin friction coefficient, mean velocity and Reynolds shear stress. Importantly, we observed a monotonic behavior of the magnitude of the predicted bounds with the $\Delta_{k}$ perturbations, in particular more noticeable bounds for $\Delta_{k} > 1$: a clear shift of the reattachment point in the upstream direction, a noticeable suppression of the length of the LSB, and a greatly reduced magnitude of the Reynolds shear stress in the LSB region; for the turbulent flow region further downstream of the LSB, results for both the mean velocity and the Reynolds shear stress showed better agreement with the in-house DNS data of \cite{zhang2021turbulent}. Such monotonic behavior is imperative for the development of a marker function that aims to predict plausible bounds for QoIs. The predicted bounds generated from the marker function $M_{k}$ were contrasted with the uniform amplitude perturbations $\Delta_{k} = 0.1$ and $\Delta_{k} = 8$ for different QoIs. Results for the QoIs clearly showed the spatial variability in $M_{k}$, and the bounds generated from $M_{k}$ in general sat within the bounds generated from $\Delta_{k} = 8$. The $\Delta_{k} = 8$ perturbation showed a clear tendency to approach closer to the reference data \cite{galbraith2010implicit,garmann2013comparative,zhang2021turbulent} for $C_{f}$ and $C_{p}$, and well encompassed the reattachment point in the predicted bounds. Overall, the $\Delta_{k} = 0.1$ perturbation behaved opposite to the $\Delta_{k} = 8$ perturbation: deviating from the reference data and showing rather small bounds. On the pressure side, the $C_{p}$ profiles for $\Delta_{k} = 0.1$, the baseline prediction and $\Delta_{k} = 8$ collapsed, which indicated a low model form uncertainty.
Importantly, the over-perturbation behavior associated with the predicted Reynolds shear stress profile under the $\Delta_{k} = 8$ perturbation could facilitate approximating the upper bound of the amplitude perturbation. When compounding $M_{k}$ with the eigenvalue perturbations $1c$ and $2c$, the predicted bounds for $C_{f}$ were dramatically increased to encompass the reattachment point and the reference data of \cite{galbraith2010implicit,garmann2013comparative,zhang2021turbulent} at the crest, which showed a synergy behavior and consistently sat above the baseline prediction. Overall, the uncertainty bounds retained the shape of the baseline prediction for $C_{f}$, which confirmed the effect of spatial variability in $M_{k}$. The predicted $1c\_M_{k}$ and $2c\_M_{k}$ bounds for $C_{p}$ sat above the baseline prediction at the flat spot and did not exhibit a synergy behavior, but instead reduced in magnitude compared to the $1c$ and $2c$ perturbations. The opposite was true at the kink (or the reattachment point) of the $C_{p}$ distribution, where the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predicted the baseline prediction. The $3c$ and $3c\_M_{k}$ bounds for both $C_{f}$ and $C_{p}$ collapsed onto each other and deviated slightly from the baseline prediction. The perturbed mean velocity profile approached much closer to the in-house DNS data near the reattachment point. When the contours of the mean velocity were plotted in an $xy$ plane, the $1c$ and $2c$ perturbations suppressed the LSB compared to the baseline prediction, which increased the magnitude of the mean flow. This behavior was enhanced by compounding with $M_{k}$: $1c\_M_{k}$ and $2c\_M_{k}$ further increased the magnitude of the mean flow in the attached turbulent boundary layer through a stronger suppression of the LSB. This behavior is qualitatively similar to that observed in the in-house DNS contour \cite{zhang2021turbulent}.
Again, the $3c\_M_{k}$ perturbation remained at nearly the same magnitude as the $3c$ perturbation, which bolstered the region of reverse flow to approach closer to the in-house DNS data \cite{zhang2021turbulent}. When the predictions for the mean velocity profile were plotted in coordinates shifted vertically, the predicted bounds generated from the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations in general led the baseline prediction, while the $3c\_M_{k}$ perturbation lagged behind it, which showed an enveloping behavior with respect to the baseline prediction. This behavior is qualitatively similar to that of the $1c$ and $3c$ perturbations observed by Luis \textit{et al.} \cite{cremades2019reynolds}. At the transition point $X_{T}$, all of the perturbations and the baseline prediction collapsed for $0.007 < y/c|_{o} < 0.011$ and showed good agreement with the in-house DNS data \cite{zhang2021turbulent}. As the flow moved further downstream of $X_{R}$, the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations showed a tendency to approach closer to the in-house DNS data, while the effect of the perturbation gradually deteriorated due to the gradual reduction in the positive values of $C_{f}$. Overall, the compound effect of $3c\_M_{k}$ was weak, which indicated the immunity of the $3c$ perturbation to the marker function. With the velocity vectors added to the mean velocity contour, a clear visualization again confirmed the effect of all of the perturbations in the region of reverse flow and the attached turbulent boundary layer. The dimensionless Reynolds shear stress contours in an $xy$ plane were also analyzed. The $1c\_M_{k}$ and $2c\_M_{k}$ perturbations under-predicted the baseline prediction and showed a tendency to approach closer to the in-house DNS data in the region downstream of the LSB. In contrast, the $3c\_M_{k}$ perturbation over-predicted the baseline prediction and showed good agreement with the in-house DNS data \cite{zhang2021turbulent} in the region of the LSB.
When the predictions for the dimensionless Reynolds shear stress $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile were plotted, the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations reduced the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profiles compared to the baseline prediction, while the $3c\_M_{k}$ perturbation did the opposite, which resulted in an enveloping behavior. This behavior is qualitatively similar to that observed by Luis \textit{et al.} At the transition point, the $1c\_M_{k}$ perturbation greatly reduced the magnitude of the $-\left\langle u_{1}u_{2} \right \rangle/U_{\infty}^2$ profile, which marked a synergy behavior. An important observation was that the synergy behavior seemed active only for the $1c\_M_{k}$ and $2c\_M_{k}$ perturbations. Overall, the marker function $M_{k}$ was effective within the eigenspace perturbation framework in constructing uncertainty bounds for both the mean velocity and the turbulence properties. Future work will focus on the development of different types of marker functions based on a variety of transitional flow scenarios. Eigenvector perturbations to the Reynolds stress tensor should also be conducted to cover the full range of the model form uncertainty in the Boussinesq turbulent viscosity models. A wider range of RANS-based transition models will also be tested using the eigenspace perturbation framework with the marker function involved. \begin{acknowledgments} The support of the Natural Sciences and Engineering Research Council (NSERC) of Canada for the research program of Professor Xiaohua Wu and Professor David E. Rival is gratefully acknowledged. \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} The method of shortcuts to adiabaticity comprises a family of techniques for controlling dynamical systems efficiently~\cite{STA13, STA19}. The word ``adiabaticity'' implies that the system is operated very slowly. However, by using some techniques, we can mimic adiabatic dynamics under fast operations. One of the prominent methods is the counterdiabatic driving~\cite{DR03, DR05, Berry09}. We introduce an additional term to the time-evolution generator to prevent nonadiabatic transitions. The method is reviewed in an article of this issue~\cite{Nakahara22}. The main aim of this article is to discuss another useful technique of shortcuts to adiabaticity. We discuss a quantity called the dynamical invariant~\cite{LR}. Keeping the original form of the Hamiltonian unchanged, we can design the protocol so that the state undergoes a desired time evolution. The method of control by the dynamical invariant is called invariant-based inverse engineering~\cite{CRSCGM}. The dynamical invariant was also used as a method to treat quantum computations~\cite{SDS11}. The organization of this article is as follows. First, we discuss fundamental properties of the dynamical invariant and the idea of inverse engineering in Sec.~\ref{sec:di}. Next, we give several simple applications in Sec.~\ref{sec:app1}. Then, we discuss a relation to the counterdiabatic formalism in Sec.~\ref{sec:app2} and several applications using the dynamical invariant in Sec.~\ref{sec:app3}. The last section \ref{sec:summary} is devoted to a summary. \section{Dynamical invariant formalism} \label{sec:di} \subsection{Dynamical invariant} We consider a quantum system described by a time-dependent Hamiltonian $H(t)$. A Hermitian operator $I(t)$ is called a dynamical invariant or a Lewis--Riesenfeld invariant when it satisfies \begin{align} i\hbar\frac{\partial I(t)}{\partial t}=[H(t),I(t)].
\label{di} \end{align} As we discuss in the following sections, this type of operator is familiar in quantum mechanics and is found in many different contexts. Here, we describe the fundamental properties of the dynamical invariant. The formal solution of Eq.~(\ref{di}) is written as \begin{align} I(t)=U(t)I(0)U^\dag(t), \label{u} \end{align} where $U(t)$ is the unitary time-evolution operator satisfying \begin{align} i\hbar\partial_tU(t)=H(t)U(t), \end{align} with $U(0)=1$. Equation (\ref{u}) shows that the eigenvalues of $I(t)$ are time independent. We write the spectral representation \begin{align} I(t)=\sum_n\lambda_n |\phi_n(t)\rangle\langle\phi_n(t)|. \end{align} $\{\lambda_n\}_{n=1,2,\dots}$ represents the set of eigenvalues and each element takes a real constant value. $\{|\phi_n(t)\rangle\}_{n=1,2,\dots}$ is the corresponding set of eigenstates. Comparing this spectral representation with Eq.~(\ref{u}), we find that $|\phi_n(t)\rangle$ is equivalent to $|\chi_n(t)\rangle=U(t)|\phi_n(0)\rangle$ up to a phase. We can write \begin{align} |\chi_n(t)\rangle=e^{i\alpha_n(t)}|\phi_n(t)\rangle, \end{align} where $\alpha_n(t)$ is real. We apply $i\hbar\partial_t-H(t)$ on both sides of this equation. Since $|\chi_n(t)\rangle$ satisfies the Schr\"odinger equation, we have \begin{align} 0=\left(i\hbar\partial_t-H(t) -\hbar\dot{\alpha}_n(t)\right)|\phi_n(t)\rangle, \end{align} where the dot symbol denotes the time derivative. Projecting this relation onto $\langle\phi_n(t)|$, we obtain \begin{align} \alpha_n(t) =\frac{1}{\hbar}\int_0^t \langle\phi_n(s)|(i\hbar\partial_s-H(s))|\phi_n(s)\rangle ds. \label{alpha} \end{align} We note that $|\chi_n(t)\rangle$ is independent of the phase choice of $|\phi_n(t)\rangle$, except for the one at the initial time. $|\chi_n(t)\rangle$ is invariant under the replacement $|\phi_n(t)\rangle\to e^{i\theta_n(t)}|\phi_n(t)\rangle$, where $\theta_n(t)$ represents an arbitrary real function with $\theta_n(0)=0$.
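The isospectral property implied by Eq.~(\ref{u}) can be checked numerically. The following sketch (with $\hbar=1$; the driven two-level Hamiltonian and the choice $I(0)=\sigma^z$ are arbitrary and serve only as a test case) integrates Eq.~(\ref{di}) as a matrix differential equation and confirms that the eigenvalues of $I(t)$ remain constant.

```python
import numpy as np

# Pauli matrices (hbar = 1 in this sketch)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # An arbitrary driven two-level Hamiltonian, chosen only for the test
    return 0.5*sz + 0.4*np.cos(2.0*t)*sx

def rhs(t, I):
    # Eq. (di) with hbar = 1:  dI/dt = -i [H(t), I(t)]
    Ht = H(t)
    return -1j*(Ht @ I - I @ Ht)

I = sz.copy()                    # I(0) = sigma^z, eigenvalues +1 and -1
t, dt = 0.0, 1e-3
while t < 5.0:                   # classical RK4 integration
    k1 = rhs(t, I)
    k2 = rhs(t + dt/2, I + dt/2*k1)
    k3 = rhs(t + dt/2, I + dt/2*k2)
    k4 = rhs(t + dt, I + dt*k3)
    I += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    t += dt

# The flow is isospectral: the eigenvalues stay +1 and -1
eigs = np.sort(np.linalg.eigvalsh(I))
print(np.allclose(eigs, [-1.0, 1.0], atol=1e-6))
```

Even though $I(t)$ itself changes in time, its spectrum does not, which is precisely the content of Eq.~(\ref{u}).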
The phase $\alpha_n(t)$ takes a familiar form known from the adiabatic approximation of dynamical systems. The part of Eq.~(\ref{alpha}) involving $i\hbar\partial_s$ is known as the geometric phase and the part involving $H(s)$ as the dynamical phase. In fact, the general solution of the Schr\"odinger equation is written as \begin{align} |\psi(t)\rangle=\sum_n c_n e^{i\alpha_n(t)}|\phi_n(t)\rangle, \end{align} where $c_n$ represents a constant determined from the initial condition. This representation shows that the solution of the Schr\"odinger equation is given by the ``adiabatic state'' of the dynamical invariant. The term ``invariant'' indicates that the eigenvalues of $I(t)$ are time independent. In the classical limit, the commutation relation is replaced by the Poisson bracket and the operation in Eq.~(\ref{di}) is interpreted as the total time derivative: \begin{align} \frac{d}{dt}(\cdot) = \frac{\partial}{\partial t}(\cdot)-\frac{1}{i\hbar}[H,(\cdot)]. \end{align} Then, the classical analogue of $I(t)$ becomes a constant of motion. A remarkable feature of this method is that the time-evolved state can be obtained by solving the eigenvalue problem of the dynamical invariant. Of course, we must find the explicit operator form of the dynamical invariant before attacking the eigenvalue problem. For our aim of realizing an ideal control of the system, we see in the following that it is not necessary to solve the eigenvalue problem. \subsection{Invariant-based inverse engineering} The authors of Ref.~\cite{CRSCGM} proposed to use the dynamical invariant for the control of dynamical systems. The term ``shortcut to adiabaticity'' was coined there. In this subsection, we review the method of inverse engineering briefly. To understand the fundamental idea, it is convenient to use a basis-operator representation as discussed in Refs.~\cite{SDS11, GN12, Takahashi13, TMM14}. We assume that the Hilbert space of the system has finite dimension $N$.
We write the Hamiltonian \begin{align} H(t)=\sum_\mu h_\mu(t)X_\mu, \end{align} and the dynamical invariant \begin{align} I(t)=\sum_\mu b_\mu(t)X_\mu. \end{align} Each component of $\{h_\mu(t)\}$ and $\{b_\mu(t)\}$ is real. Greek indices generally run from 1 to $N^2$. $\{X_\mu\}$ represents the set of basis operators. These operators are Hermitian and satisfy the orthonormal condition \begin{align} \frac{1}{N}{\rm Tr}\, X_\mu X_\nu=\delta_{\mu,\nu}, \end{align} and the commutation relation \begin{align} [X_\mu,X_\nu]=i\sum_\lambda f_{\mu\nu\lambda}X_\lambda, \end{align} where $f_{\mu\nu\lambda}$ represents the structure constant. $f_{\mu\nu\lambda}$ is real and totally antisymmetric under permutation of indices. The simplest example is when $N=2$. Then, the basis operators are given by the unit operator and the three Pauli operators, for example. By using the basis-operator representation, we find that Eq.~(\ref{di}) is written as \begin{align} \dot{b}_\mu(t)=\frac{1}{\hbar}\sum_{\nu,\lambda}f_{\mu\nu\lambda}h_\nu(t) b_\lambda(t). \label{dix} \end{align} To obtain the dynamical invariant, we solve this set of equations for a given set of $\{h_\mu(t)\}$. Although these equations are linear, solving them is generally a difficult task even when the dimension of the Hilbert space is not so large. In the inverse engineering, as the name suggests, we obtain $\{h_\mu(t)\}$ for a given set of $\{b_\mu(t)\}$. Then, Eq.~(\ref{dix}) is interpreted as a simple algebraic relation. There is no need to solve differential equations. We can introduce a vector representation \begin{align} A[b(t)]\left(\begin{array}{c} h_1(t) \\ h_2(t) \\ \vdots \end{array}\right) =\left(\begin{array}{c} \dot{b}_1(t) \\ \dot{b}_2(t) \\ \vdots \end{array}\right), \label{div} \end{align} where $A$ represents an antisymmetric real matrix. Each component of $A$ is a linear combination of $b_\mu(t)$. Since $A$ is not invertible, we must be careful in handling this matrix equation~\cite{TMM14}.
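The component form (\ref{dix}) can be checked against the operator form (\ref{di}) in the simplest case $N=2$. The sketch below (with $\hbar=1$; the coefficient vectors are random numbers used only for the test) computes the structure constants of the Pauli basis from the trace formula and compares the two sides of the relation.

```python
import numpy as np

# Pauli basis for N = 2, normalized so that Tr(X_mu X_nu)/N = delta_{mu,nu}
X = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
N = 2

# Structure constants from [X_mu, X_nu] = i sum_lambda f_{mu,nu,lambda} X_lambda:
# f_{mu,nu,lambda} = Tr([X_mu, X_nu] X_lambda)/(i N)
f = np.zeros((3, 3, 3))
for m in range(3):
    for n in range(3):
        comm = X[m] @ X[n] - X[n] @ X[m]
        for l in range(3):
            f[m, n, l] = np.real(np.trace(comm @ X[l])/(1j*N))

# Random real coefficients standing in for h_mu(t) and b_mu(t) at one instant
rng = np.random.default_rng(0)
h, b = rng.normal(size=3), rng.normal(size=3)
Hm = sum(h[m]*X[m] for m in range(3))
Im = sum(b[m]*X[m] for m in range(3))

# Right-hand side of Eq. (dix): sum over nu, lambda of f h b
rhs = np.einsum('mnl,n,l->m', f, h, b)

# Coefficients of [H, I]/(i hbar) projected back onto the basis (hbar = 1)
comm = (Hm @ Im - Im @ Hm)/1j
lhs = np.array([np.real(np.trace(comm @ X[m])/N) for m in range(3)])
print(np.allclose(lhs, rhs))
```

For the Pauli basis, the computed $f_{\mu\nu\lambda}$ is twice the Levi-Civita symbol.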
Here, we do not give a general discussion of the formal solution of Eq.~(\ref{div}). In principle, the solution $\{h_\mu(t)\}$ of Eq.~(\ref{dix}) can be obtained for various choices of $\{b_\mu(t)\}$. The original aim of shortcuts to adiabaticity is to prevent nonadiabatic transitions in systems under control. Then, the problem is to find appropriate choices of $\{b_\mu(t)\}$ that meet the purposes of system control. The solution of the original Schr\"odinger equation is represented by the adiabatic state of the dynamical invariant. Since the dynamical invariant is not an observable quantity, this property is not a convenient one. When we consider a time evolution from $t=0$ to $t=t_{\rm f}$, we set the condition that the dynamical invariant and the Hamiltonian commute with each other at $t=0$ and $t=t_{\rm f}$: \begin{align} [H(0),I(0)]=[H(t_{\rm f}),I(t_{\rm f})]=0. \label{bc} \end{align} Then, it becomes possible to find a time evolution from an eigenstate of $H(0)$ to that of $H(t_{\rm f})$. These conditions can be written as \begin{align} \sum_{\nu,\lambda}f_{\mu\nu\lambda}h_\nu(0) b_\lambda(0) =\sum_{\nu,\lambda}f_{\mu\nu\lambda}h_\nu(t_{\rm f}) b_\lambda(t_{\rm f})=0. \label{bcx1} \end{align} By using Eq.~(\ref{dix}), we can also write Eq.~(\ref{bcx1}) as \begin{align} \dot{b}_\mu(0)=\dot{b}_\mu(t_{\rm f})=0, \end{align} which means that we start and finish the time evolution of $b_\mu(t)$ slowly. In the inverse engineering, we determine $\{h_\mu(t)\}$ for a given $\{b_\mu(t)\}$. When we choose $\{b_\mu(t)\}$, we must be careful about the boundary conditions of $\{b_\mu(t)\}$ at $t=0$ and $t=t_{\rm f}$ so that the Hamiltonian at those times takes proper forms. We treat several examples in the next section. We summarize the invariant-based inverse engineering as follows. First, we find an operator form of $I(t)$ satisfying Eq.~(\ref{di}) for a given operator form of $H(t)$. The time-dependent coefficients of $I(t)$ and $H(t)$ are related with each other.
Second, we choose a specific form of the coefficients of $I(t)$. Third, the coefficients of $H(t)$ are determined from Eq.~(\ref{dix}). The coefficients are chosen so that the boundary conditions in Eq.~(\ref{bc}) are satisfied. In practical applications, the most important point in the first step is that $H(t)$ and $I(t)$ are expanded in terms of a small number of operators. The equation can always be solved if we use all kinds of operators $\{X_\mu\}_{\mu=1,2,\dots, N^2}$ defined in the $N$-dimensional Hilbert space. However, this is not useful when we consider the implementation of the protocol. An exact compact solution is known only in limited cases, as we discuss in the following sections. In the second step, we have many possible choices of the coefficients of $I(t)$. They are determined so that $H(t)$ obtained in the third step has a physically feasible form. Furthermore, they are required to satisfy the boundary conditions at $t=0$ and $t=t_{\rm f}$. These constraints restrict possible forms of the coefficients significantly. The advantage of this method is that the original form of the Hamiltonian is unchanged. We do not need to introduce additional operators to the Hamiltonian, in contrast with the counterdiabatic driving. Furthermore, once we can find a set of operators satisfying Eq.~(\ref{di}), the procedure becomes simple. We do not need to solve difficult problems such as eigenvalue problems and differential equations. On the other hand, finding a possible operator form of the dynamical invariant is a formidable task except for several known examples. We also find a difficulty when the Hamiltonian is restricted to a specific form. In that case, even if we can find a formal solution of $I(t)$ from Eq.~(\ref{di}), the set $\{h_\mu(t)\}$ obtained from Eq.~(\ref{dix}) for a given $\{b_\mu(t)\}$ often takes an infeasible form.
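As a minimal sketch of the third step, consider the $N=2$ Pauli basis with $\hbar=1$. At a single instant, the singular equation (\ref{div}) can be solved in the minimum-norm sense with a pseudoinverse; the values of $b$ and $\dot{b}$ below are hypothetical and are chosen so that a solution exists ($\dot{b}$ is tangent to the sphere $|b|=1$).

```python
import numpy as np

# Pauli-basis structure constants: f_{mu,nu,lambda} = 2 eps_{mu,nu,lambda}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
f = 2.0*eps

# Hypothetical instantaneous data: b on the unit sphere, bdot tangent to it
theta, thetadot = 0.3, 0.5
b  = np.array([np.sin(theta), 0.0, np.cos(theta)])
db = thetadot*np.array([np.cos(theta), 0.0, -np.sin(theta)])

# A[b]_{mu,nu} = sum_lambda f_{mu,nu,lambda} b_lambda (antisymmetric, singular)
A = np.einsum('mnl,l->mn', f, b)

# Minimum-norm solution of A h = bdot via the pseudoinverse
h = np.linalg.pinv(A) @ db
print(np.allclose(A @ h, db))   # the engineered field reproduces bdot
print(np.round(h, 3))           # transverse field h = (0, thetadot/2, 0)
```

The minimum-norm choice is one way to resolve the non-invertibility of $A$; here it yields a field along the $y$-axis, the counterdiabatic-like transverse field that reappears in the two-level example of the next section. Other resolutions, for example adding any multiple of $b$ itself to $h$, give equally valid solutions.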
\section{Examples} \label{sec:app1} \subsection{Two-level system} As the simplest application, we treat the case where the dimension of the Hilbert space is equal to two~\cite{CTM11}. This example is used to drive a single spin-$1/2$ particle. The magnetic field is applied to control the spin state. In this case, the standard basis operators are given by the Pauli operators $\bm{\sigma}=(\sigma^x,\sigma^y,\sigma^z)$. The Hamiltonian of the system is generally written as \begin{align} H(t)=\frac{\hbar}{2}h(t)\bm{n}(t)\cdot\bm{\sigma}, \end{align} where $h(t)$ is nonnegative and $\bm{n}(t)$ is a unit vector. The dynamical invariant is also written by using a unit vector $\bm{e}(t)$ as \begin{align} I(t)=\bm{e}(t)\cdot\bm{\sigma}. \end{align} The eigenvalues of $I(t)$ are $\pm 1$ since $I^2(t)=1$. We note that the dynamical invariant generally has the ambiguity of a multiplicative constant. The part proportional to the unit operator is an irrelevant constant and is dropped without losing generality. Calculating the commutation relations, we obtain from Eq.~(\ref{dix}) \begin{align} \dot{\bm{e}}(t)=h(t)\bm{n}(t)\times\bm{e}(t). \label{div2} \end{align} As a simple example, we parametrize $\bm{e}(t)$ as \begin{align} \bm{e}(t)=\left(\begin{array}{c} \sin\theta(t) \\ 0 \\ \cos\theta(t) \end{array}\right). \end{align} The time dependence of $\theta(t)$ is determined below. Solving Eq.~(\ref{div2}) with respect to $\bm{n}(t)$, we obtain \begin{align} \bm{n}(t)=\left(\begin{array}{c} \sqrt{1-\left(\frac{\dot{\theta}(t)}{h(t)}\right)^2}\sin\theta(t) \\ \frac{\dot{\theta}(t)}{h(t)} \\ \sqrt{1-\left(\frac{\dot{\theta}(t)}{h(t)}\right)^2}\cos\theta(t) \end{array}\right). \label{n} \end{align} We see that the difference between $\bm{e}$ and $\bm{n}$ represents nonadiabatic effects.
When the magnitude of $\dot{\theta}(t)/h(t)$ takes a small value, $\bm{n}$ is close to $\bm{e}$, which is consistent with the property that the adiabatic regime is characterized by $|\dot{\theta}(t)|/h(t)\ll 1$. We examine the boundary conditions \begin{align} \dot{\theta}(0)=\dot{\theta}(t_{\rm f})=0. \end{align} There are many possible choices of $\theta(t)$ satisfying these boundary conditions. The simplest choice is a polynomial function \begin{align} \theta(t)=\theta(0)+(\theta(t_{\rm f})-\theta(0)) \left[3\left(\frac{t}{t_{\rm f}}\right)^2 -2\left(\frac{t}{t_{\rm f}}\right)^3\right]. \label{thetapoly} \end{align} The resulting $\bm{n}(t)$ must take a physically feasible form. In the present example, this means the condition $|\dot{\theta}(t)|/h(t)\le 1$. It gives the relation \begin{align} h(t)t_{\rm f}\ge 6|\theta(t_{\rm f})-\theta(0)| \left[\frac{t}{t_{\rm f}} -\left(\frac{t}{t_{\rm f}}\right)^2\right]. \label{threshold} \end{align} We need to take a large value of $h(t)t_{\rm f}$ so that this relation holds for any $t$ with $0\le t\le t_{\rm f}$. \begin{figure}[t] \centering\includegraphics[width=0.8\columnwidth]{two.eps} \caption{ Inverse engineering for a two-level system. (a). A trajectory of $\bm{e}(t)$ for the dynamical invariant is denoted by the dashed (black) line. The corresponding trajectories of $\bm{n}(t)$ are denoted by solid lines. We take $h=2h_0$ (blue) and $h=h_0$ (red) where $h_0=3\pi/4t_{\rm f}$ represents the threshold value determined from Eq.~(\ref{threshold}). We note that all the trajectories are on the unit sphere. (b). The direction of the magnetic field is fixed to the $y$-direction as $\bm{n}(t)=(0,1,0)$. We plot the magnetic field $h(t)$ to be applied when $\bm{e}(t)$ takes a trajectory presented in the panel (a). } \label{fig1} \end{figure} We show a trajectory of $\bm{e}(t)$ and the corresponding $\bm{n}(t)$ in the panel (a) of Fig.~\ref{fig1}.
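The protocol can be verified with a short numerical integration. Since the Bloch vector obeys $\dot{\bm{e}}=h\bm{n}\times\bm{e}$, integrating this equation with $\bm{n}(t)$ from Eq.~(\ref{n}) confirms that the state follows $\bm{e}(t)$ exactly. The sketch below takes $t_{\rm f}=1$, $\theta(0)=0$, $\theta(t_{\rm f})=\pi/2$, and $h=2h_0$, mirroring the parameters of the figure (with $\hbar=1$).

```python
import numpy as np

tf = 1.0
th0, thf = 0.0, np.pi/2           # boundary values of theta
h = 2*(3*np.pi/4)/tf              # twice the threshold field h_0 = 3 pi/(4 t_f)

def theta(t):
    s = t/tf
    return th0 + (thf - th0)*(3*s**2 - 2*s**3)   # Eq. (thetapoly)

def thetadot(t):
    s = t/tf
    return (thf - th0)*(6*s - 6*s**2)/tf

def n_vec(t):
    # Engineered field direction, Eq. (n)
    r = thetadot(t)/h
    c = np.sqrt(1.0 - r**2)
    return np.array([c*np.sin(theta(t)), r, c*np.cos(theta(t))])

def rhs(t, e):
    # Bloch equation de/dt = h n(t) x e, equivalent to Eq. (div2)
    return h*np.cross(n_vec(t), e)

e = np.array([0.0, 0.0, 1.0])     # eigenstate direction of I(0)
t, dt = 0.0, 1e-4
while t < tf - dt/2:              # classical RK4 integration
    k1 = rhs(t, e)
    k2 = rhs(t + dt/2, e + dt/2*k1)
    k3 = rhs(t + dt/2, e + dt/2*k2)
    k4 = rhs(t + dt, e + dt*k3)
    e += dt/6*(k1 + 2*k2 + 2*k3 + k4)
    t += dt

# The Bloch vector ends aligned with e(t_f) = (1, 0, 0)
print(np.allclose(e, [1.0, 0.0, 0.0], atol=1e-6))
```

Below the threshold, the argument of the square root in Eq.~(\ref{n}) turns negative and the construction fails, which is the singular behavior discussed in the text.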
We consider the case \begin{align} \theta(0)=0, \quad \theta(t_{\rm f})=\frac{\pi}{2}, \end{align} and set $h(t)$ to a time-independent value. We note that $\bm{e}(t)$ represents the Bloch vector and $\bm{n}(t)$ is the direction of the magnetic field to be applied. We find a singular behavior of $\bm{n}(t)$ at the point where the equality holds in Eq.~(\ref{threshold}). The adiabaticity condition $h(t)t_{\rm f}\gg 1$ is required to obtain a smooth trajectory close to $\bm{e}(t)$. It is also possible to keep the vector $\bm{n}(t)$ in the $y$-direction. We choose the magnitude of the magnetic field as \begin{align} h(t)=|\dot{\theta}(t)|=\frac{6|\theta(t_{\rm f})-\theta(0)|}{t_{\rm f}} \left[\frac{t}{t_{\rm f}} -\left(\frac{t}{t_{\rm f}}\right)^2\right]. \end{align} This is plotted in the panel (b) of Fig.~\ref{fig1}. Then, we find from Eq.~(\ref{n}) that $\bm{n}(t)$ is independent of $\theta(t)$. The magnetic field is applied along the axis perpendicular to the plane in which the Bloch vector lies. This protocol is easily understood without using the present technique. When the Hamiltonian takes a restricted form, finding $b_\mu(t)$ ($\bm{e}(t)$ in the present example) with the required boundary conditions becomes a cumbersome task. For example, suppose that one of the components of $\bm{n}(t)$ is set to zero: \begin{align} \bm{n}(t)=\left(\begin{array}{c} \sin\Theta(t) \\ 0 \\ \cos\Theta(t) \end{array}\right). \end{align} Then, parametrizing $\bm{e}(t)$ as \begin{align} \bm{e}(t)=\left(\begin{array}{c} \sin\theta(t)\cos\varphi(t) \\ \sin\theta(t)\sin\varphi(t) \\ \cos\theta(t) \end{array}\right), \end{align} we obtain the relation between $h(t)\bm{n}(t)$ and $\bm{e}(t)$ \begin{align} h(t)\left(\begin{array}{c} \cos\Theta(t) \\ \sin\Theta(t) \end{array}\right) =\left(\begin{array}{c} -\frac{\dot{\theta}(t)}{\tan\theta(t)\tan\varphi(t)}+\dot{\varphi}(t) \\ -\frac{\dot{\theta}(t)}{\sin\varphi(t)} \end{array}\right).
\end{align} Since each component of the right-hand side diverges as $\theta\to 0$ or $\varphi\to 0$, a careful choice of $\bm{e}(t)$ is required. In the above examples, we only discussed pure-state systems. It is a straightforward task to apply the formalism to mixed states. We can find some applications in Ref.~\cite{FWN12}. A similar analysis is possible when the dimension of the Hilbert space is not so large. Four-level systems were discussed in Refs.~\cite{GN12, GWN14} based on the Lie algebraic structure. We can find applications to few-level systems under various settings in many works~\cite{STA13, STA19}. The result for two-level systems was also used to describe many-spin systems with mean-field interactions~\cite{Takahashi17, Takahashi19}. Furthermore, a similar analysis is possible for a generating function of full counting statistics in a classical stochastic system~\cite{THFH20}. \subsection{Harmonic oscillator} In the general discussion and the example of two-level systems, we treated the case where the dimension of the Hilbert space is finite. It is possible to apply the same idea to systems with an infinite-dimensional Hilbert space. We next consider a harmonic oscillator whose angular frequency changes as a function of time. This system was first discussed in Ref.~\cite{LR} to solve the Schr\"odinger equation with a time-dependent Hamiltonian. The result was used to implement the inverse engineering in Ref.~\cite{CRSCGM}. We consider the one-dimensional Hamiltonian \begin{align} H(t)=\frac{1}{2m}p^2+\frac{m}{2}\omega^2(t)x^2, \end{align} with the position and momentum operators, $x$ and $p$. The particle mass $m$ represents a positive constant and the angular frequency $\omega(t)$ is a time-dependent function.
Then, it was found in Ref.~\cite{LR} that the following form of $I(t)$ satisfies Eq.~(\ref{di}): \begin{align} I(t)=\frac{1}{2m}\left(b(t)p-m\dot{b}(t)x\right)^2 +\frac{m\omega^2(0)}{2}\left(\frac{x}{b(t)}\right)^2, \label{diho} \end{align} provided that $b(t)$ obeys the Ermakov equation \begin{align} \ddot{b}(t)+\omega^2(t)b(t)=\frac{\omega^2(0)}{b^3(t)}. \label{ermakov} \end{align} For a given $b(t)$, $\omega(t)$ is determined from the relation \begin{align} \omega^2(t)=\frac{1}{b(t)}\left(-\ddot{b}(t) +\frac{\omega^2(0)}{b^3(t)}\right). \label{omega} \end{align} The boundary conditions are given by $\dot{I}(0)=\dot{I}(t_{\rm f})=0$, which give \begin{align} \dot{b}(0)=\dot{b}(t_{\rm f})=0, \quad \ddot{b}(0)=\ddot{b}(t_{\rm f})=0. \end{align} The simplest polynomial function is \begin{align} b(t)=1+ \left(\sqrt{\frac{\omega(0)}{\omega(t_{\rm f})}}-1\right) \left[10\left(\frac{t}{t_{\rm f}}\right)^3 -15\left(\frac{t}{t_{\rm f}}\right)^4 +6\left(\frac{t}{t_{\rm f}}\right)^5\right]. \label{bho} \end{align} \begin{figure}[t] \centering\includegraphics[width=0.8\columnwidth]{ho.eps} \caption{ Inverse engineering for a harmonic oscillator system. In the panel (a), we plot $b(t)$ in Eq.~(\ref{bho}) for several values of $\omega(t_{\rm f})/\omega(0)$. The corresponding results of $\omega^2(t)$ from Eq.~(\ref{omega}) are plotted in the panels (b)-(d). We set $\omega(0)t_{\rm f}=2$ in the panel (b), $\omega(0)t_{\rm f}=1$ in (c), and $\omega(0)t_{\rm f}=0.5$ in (d). } \label{fig2} \end{figure} In the panel (a) of Fig.~\ref{fig2}, we plot trajectories of $b(t)$ for several values of $\omega(t_{\rm f})/\omega(0)$. The resulting $\omega(t)$ depends not only on $b(t)$ but also on $\omega(0)t_{\rm f}$. We plot $\omega^2(t)$ in the panels (b)-(d) of Fig.~\ref{fig2}. We see that $\omega^2(t)$ strongly depends on the value of $\omega(0)t_{\rm f}$. 
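The behavior in the panels can be reproduced directly from Eqs.~(\ref{omega}) and (\ref{bho}). The following sketch (in units with $\omega(0)=1$; the ratio $\omega(t_{\rm f})/\omega(0)=1/4$ is an illustrative choice) checks the boundary values of $\omega^2(t)$ and confirms that $\omega^2(t)$ becomes negative somewhere when $\omega(0)t_{\rm f}$ is small.

```python
import numpy as np

w0 = 1.0            # omega(0); we work in units with omega(0) = 1
wf = 0.25*w0        # target omega(t_f), an illustrative value

def omega2(t, tf):
    # Eq. (omega) evaluated for the polynomial b(t) of Eq. (bho)
    s = t/tf
    B = np.sqrt(w0/wf) - 1.0
    b   = 1.0 + B*(10*s**3 - 15*s**4 + 6*s**5)
    bdd = B*(60*s - 180*s**2 + 120*s**3)/tf**2       # second time derivative
    return (-bdd + w0**2/b**3)/b

# Boundary values omega^2(0) and omega^2(t_f) are recovered for any t_f
tf = 2.0
print(np.isclose(omega2(0.0, tf), w0**2), np.isclose(omega2(tf, tf), wf**2))

# For a small omega(0) t_f, the required omega^2(t) becomes negative somewhere
ts = np.linspace(0.0, 0.5, 201)
print(min(omega2(t, 0.5) for t in ts) < 0)
```

A negative $\omega^2(t)$ corresponds to a transiently inverted potential, which may or may not be feasible depending on the experimental platform.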
For a large $\omega(0)t_{\rm f}$, the adiabaticity condition is satisfied and the corresponding result of $\omega(t)$ has a smooth trajectory. In the opposite limit of small $\omega(0)t_{\rm f}$, we find that $\omega^2(t)$ shows a rapid change and goes negative in some cases. These properties are basically the same as in the previous example of two-level systems. Generally speaking, the inverse engineering becomes problematic when the adiabaticity condition is not satisfied. The procedure described above is enough to find protocols to be implemented. As a supplementary calculation, we demonstrate the diagonalization of the dynamical invariant in the present example. Finding the explicit forms of the eigenstates is instructive since we can discuss how the quantum state changes as a function of $t$. We introduce an operator \begin{align} a(t)=\sqrt{\frac{m\omega(0)}{2\hbar}}\frac{x}{b(t)} +\frac{i}{\sqrt{2\hbar m\omega(0)}}(b(t)p-m\dot{b}(t)x). \end{align} It satisfies the commutation relation $[a(t),a^\dag(t)]=1$ and the dynamical invariant is written as \begin{align} I(t)=\left(a^\dag(t)a(t)+\frac{1}{2}\right)\hbar\omega(0). \end{align} Thus, the dynamical invariant can easily be diagonalized by the standard procedure for harmonic oscillators. When we start the time evolution from the ground state of $H(0)$, the wave function, the solution of the Schr\"odinger equation, is obtained from $a(t)|\psi(t)\rangle=0$ up to a phase. It is written in the coordinate representation as \begin{align} \langle x|\psi(t)\rangle =e^{i\alpha(t)} \left(\frac{m\omega(0)}{\pi\hbar b^2(t)}\right)^{1/4} \exp\left[ -\frac{m\omega(0)}{2\hbar} \left(1-i\frac{b(t)\dot{b}(t)}{\omega(0)}\right) \left(\frac{x}{b(t)}\right)^2\right]. \end{align} In the adiabatic approximation, the wave function is represented by a real Gaussian form except for the phase $e^{i\alpha(t)}$. We see that the nonadiabatic effect in this case is represented by the imaginary part in the second exponential function.
It gives an oscillating behavior of the wave function. At $t=0$ and $t=t_{\rm f}$, the imaginary part vanishes and the wave function coincides with that obtained by the adiabatic approximation. We note that Eq.~(\ref{diho}) is not the only possible form of the dynamical invariant. For example, the following linear form satisfies Eq.~(\ref{di}): \begin{align} I(t)=b(t)p-m\dot{b}(t)x, \end{align} provided that $b(t)$ satisfies the equation for classical harmonic oscillators \begin{align} \ddot{b}(t)=-\omega^2(t)b(t). \end{align} This linear invariant was discussed in Ref.~\cite{LL82} and we can find some applications to quantum field theory~\cite{PFR07}. This solution restricts possible protocols in the inverse engineering because the boundary conditions $\dot{b}(0)=\dot{b}(t_{\rm f})=0$ and $\ddot{b}(0)=\ddot{b}(t_{\rm f})=0$ give $\omega(0)=\omega(t_{\rm f})=0$. This linear invariant was used for momentum or position scaling~\cite{MMPPT20} and for coupled/multi-dimensional harmonic oscillators~\cite{TTLPM20,SM21,LLM22}. The harmonic oscillator Hamiltonian only involves quadratic operators, and we can construct a dynamical invariant which is a homogeneous polynomial of $x$ and $p$. This property is due to commutation relations of the form \begin{align} &[(\mbox{$k$th-order polynomial of $x$ and $p$}), (\mbox{quadratic polynomial of $x$ and $p$})] \nonumber\\ &= (\mbox{$k$th-order polynomial of $x$ and $p$}). \end{align} We can construct a closed algebra within a limited space of operators. We note a general property of the dynamical invariant: the product of dynamical invariants also represents a dynamical invariant. It is not evident whether we can find higher-order invariants that cannot be factorized.
As a nontrivial generalization, it is known that the following set of operators satisfies Eq.~(\ref{di}): \begin{align} & H(t)=\frac{1}{2m}p^2-F(t)x+\frac{m}{2}\omega^2(t)x^2 +\frac{1}{b^2(t)}U\left(\frac{x-x_{\rm c}(t)}{b(t)}\right), \\ & I(t) =\frac{1}{2m}\left[b(t)(p-m\dot{x}_{\rm c}(t)) -m\dot{b}(t)(x-x_{\rm c}(t))\right]^2 +\frac{m\omega^2(0)}{2}\left(\frac{x-x_{\rm c}(t)}{b(t)}\right)^2 +U\left(\frac{x-x_{\rm c}(t)}{b(t)}\right), \end{align} where $F(t)$ and $U(x)$ represent arbitrary functions. Here, $b(t)$ satisfies the Ermakov equation (\ref{ermakov}) and $x_{\rm c}(t)$ satisfies the equation for a forced oscillator: \begin{align} \ddot{x}_{\rm c}(t)+\omega^2(t)x_{\rm c}(t)=\frac{1}{m}F(t). \end{align} This type of Hamiltonian was discussed in Ref.~\cite{LL82} and was used for inverse engineering in Ref.~\cite{TICRGM11}. The potential $U((x-x_{\rm c}(t))/b(t))/b^2(t)$ has a scale-invariant form and is also known as an example for which the explicit form of the counterdiabatic term is available~\cite{Jarzynski13, delCampo13}. \section{Dynamical invariant and counterdiabatic driving} \label{sec:app2} The dynamical invariant is introduced as an auxiliary object to treat the solution of the Schr\"odinger equation through an eigenvalue problem. Once we can find a pair of operators $I(t)$ and $H(t)$ satisfying Eq.~(\ref{di}), we can use it to construct a counterdiabatic driving. The counterdiabatic driving is formulated by introducing an additional counterdiabatic term $H_1(t)$ for a given original Hamiltonian $H_0(t)$~\cite{DR03, DR05, Berry09, Nakahara22}. The total Hamiltonian is given by \begin{align} {\cal H}(t)= H_0(t)+H_1(t). \end{align} When the original Hamiltonian is written in the spectral representation \begin{align} H_0(t)=\sum_n E_n(t)|n(t)\rangle\langle n(t)|, \end{align} the counterdiabatic term is written as \begin{align} H_1(t)=i\hbar\sum_{m,n (m\ne n)}|m(t)\rangle \langle m(t)|\dot{n}(t) \rangle\langle n(t)|.
\end{align} As a special case, when the eigenvalues $E_1(t), E_2(t), \dots$ are independent of $t$, we can identify $H_0(t)$ as a dynamical invariant and $H_1(t)$ as the corresponding Hamiltonian: \begin{align} & H_0(t)=\epsilon I(t), \\ & H_1(t)=H(t). \end{align} Since the dimension of the dynamical invariant is arbitrary, we introduce a constant $\epsilon$ such that the dimension of $\epsilon I(t)$ coincides with the dimension of energy. When the eigenvalues of $H_0(t)$ are time-dependent, $H_0(t)$ and $H_1(t)$ satisfy the relation \begin{align} [H_0(t),i\hbar\partial_t H_0(t)-[H_1(t),H_0(t)]]=0. \label{h01} \end{align} Thus, the dynamical invariant is interpreted as the special case where the second entry in the commutation relation vanishes. Equation (\ref{h01}) serves as the basis of a method for obtaining approximate counterdiabatic terms~\cite{SP17, HT21}. For a given $H_0(t)$, we seek $H_1(t)$ such that the norm of the second entry of the outer commutator takes a minimum value. \section{Other views of the dynamical invariant} \label{sec:app3} As we mentioned before, the equation for the dynamical invariant (\ref{di}) takes a familiar form. For example, the Liouville--von Neumann equation takes the same form as Eq.~(\ref{di}). Then, the density operator represents a dynamical invariant. This property shows that the dynamical invariant is not a special quantity but is ubiquitous in quantum systems. In this section, we present several problems that use an object equivalent to the dynamical invariant. We expect that these examples offer other views on the method of shortcuts to adiabaticity. \subsection{Lax pair} A relation between quantum shortcuts to adiabaticity and classical nonlinear integrable systems was pointed out in Ref.~\cite{OT16}. In classical nonlinear integrable systems, we treat nontrivial nonlinear equations. Integrability means that the system has an infinite number of conserved quantities. There are highly sophisticated techniques for treating such systems.
The Lax formalism was introduced to describe integrable systems in a unified way~\cite{Lax}. A set of two operators $(L(t),M(t))$ is called a Lax pair when it satisfies \begin{align} \frac{\partial L(t)}{\partial t}=[M(t),L(t)]. \end{align} We see from the comparison with Eq.~(\ref{di}) that $L$ is equivalent to the dynamical invariant. The existence of the Lax pair represents the integrability of the corresponding classical nonlinear system. We can define $\psi$ satisfying \begin{align} & L\psi=\lambda\psi, \\ & \frac{\partial}{\partial t}\psi=M\psi. \end{align} In classical nonlinear integrable systems, $\psi$ is used to construct the solution of the corresponding nonlinear equation by the inverse scattering method. The existence of the dynamical invariant implies an infinite series of conserved quantities, represented by the time-independent eigenvalues $\lambda$ of $L$. Instead of giving general discussions, we here introduce several examples that are relevant to the present problems. The most familiar example is given by the following form of the Lax pair: \begin{align} & L=-\frac{\partial^2}{\partial x^2}+u(x,t), \\ & M=-4\frac{\partial^3}{\partial x^3} +3\frac{\partial}{\partial x}u(x,t) +3u(x,t)\frac{\partial}{\partial x}. \end{align} Here, $u(x,t)$ is real and satisfies the Korteweg--de Vries (KdV) equation~\cite{KdV} \begin{align} \frac{\partial u(x,t)}{\partial t}= 6u(x,t)\frac{\partial u(x,t)}{\partial x} -\frac{\partial^3u(x,t)}{\partial x^3}. \end{align} This equation is known to have multi-soliton solutions. The simplest solution is the single soliton \begin{align} u(x,t)=\frac{-2\kappa^2}{\cosh^2(\kappa x-4\kappa^3t)}, \end{align} where $\kappa$ is a positive constant. This form of the Lax pair is practically useful since $L$ is interpreted as a one-dimensional Hamiltonian with a moving soliton potential. To prevent nonadiabatic transitions, we need to introduce the counterdiabatic term obtained from the form of $M$.
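The single-soliton solution can be checked against the KdV equation by finite differences; the grid, the evaluation time, and $\kappa=1$ in the sketch below are arbitrary test choices.

```python
import numpy as np

kappa = 1.0

def u(x, t):
    # Single-soliton solution of u_t = 6 u u_x - u_xxx
    return -2.0*kappa**2/np.cosh(kappa*x - 4.0*kappa**3*t)**2

# Centered differences on a grid containing the soliton at t = 0.2
x = np.linspace(-10.0, 10.0, 8001)
dx, dt, t = x[1] - x[0], 1e-5, 0.2

ut   = (u(x, t + dt) - u(x, t - dt))/(2*dt)
ux   = np.gradient(u(x, t), dx)
uxxx = np.gradient(np.gradient(ux, dx), dx)   # crude third derivative

residual = ut - (6.0*u(x, t)*ux - uxxx)
print(np.max(np.abs(residual)) < 1e-3)
```

The same kind of check can be applied to multi-soliton solutions or, with an ODE integrator, to the Toda equations discussed next.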
This counterdiabatic term involves a term cubic in the momentum operator and is difficult to implement. However, we can discuss a deformation of the counterdiabatic term to a simple implementable form~\cite{OT16}. The KdV system is not merely an isolated example of quantum control. It is known in classical nonlinear integrable systems that a hierarchical structure exists in KdV systems. We can find an infinite series of Lax pairs. This means that we can find the counterdiabatic terms exhaustively for Hamiltonians of the type $H=p^2+u(x,t)$. As a promising example for physical implementations, we present nonlinear lattice systems described by the Toda equations~\cite{Toda1, Toda2}, which can be represented by many-spin systems. For $N$-qubit systems, the Lax pair is written as \begin{align} & L=\frac{1}{2}\sum_{n=1}^N J_n(t)\left( \sigma_n^x\sigma_{n+1}^x+\sigma_n^y\sigma_{n+1}^y\right) +\frac{1}{2}\sum_{n=1}^N h_n(t)\sigma_n^z, \\ & M=-\frac{i}{2}\sum_{n=1}^N J_n(t)\left( \sigma_n^x\sigma_{n+1}^y-\sigma_n^y\sigma_{n+1}^x\right). \end{align} $J_n(t)$ and $h_n(t)$ satisfy the Toda equations \begin{align} & \frac{dJ_n(t)}{dt}=J_n(t)(h_{n+1}(t)-h_n(t)), \\ & \frac{dh_n(t)}{dt}=2(J_n^2(t)-J_{n-1}^2(t)). \end{align} It is known that the Toda equations also have multi-soliton solutions. The corresponding spin Hamiltonian for $L$ is an isotropic XY model (XX model) with a magnetic field in the $z$-direction. By using a solitonic form of the coupling constant $J_n(t)$ and the magnetic field $h_n(t)$, we can discuss spin transport in a spin-chain system~\cite{OT16}. It is also known that the present spin model can be mapped onto a fermion model by using the Jordan--Wigner transformation. The coupling $J_n$ denotes a hopping between adjacent sites in that case. \subsection{Quantum brachistochrone equation} Various kinds of optimal control may be obtained with shortcuts to adiabaticity. The word ``optimal'' is somewhat ambiguous and its meaning strongly depends on the problems to be solved.
We can consider optimizations in many different ways. The method of the quantum brachistochrone is one of these optimization methods and is prominent owing to its generality~\cite{CHKO06}. For a quantum trajectory, we define an action to determine the optimal Hamiltonian and the corresponding state by a variational principle. The main part of the action is determined from the geometric structure of quantum states. It is well known that the overlap between quantum states is characterized by the Fubini-Study metric. To solve practical optimization problems, we introduce constraints on the Hamiltonian to be obtained. Then, the minimization condition of the action gives an equation of the form of Eq.~(\ref{di}). For example, when we represent the constraints as \begin{align} {\rm Tr}\,H(t)X_\mu = h_\mu(t), \end{align} by using basis operators $X_\mu$ with $\mu=1,2,\dots,M$, the corresponding dynamical invariant takes the form \begin{align} I(t)=\sum_{\mu=1}^M\lambda_\mu(t)X_\mu. \end{align} Here, $\{\lambda_\mu(t)\}$ is obtained by solving the quantum brachistochrone equation. It is not surprising that the system is characterized by a dynamical invariant. The point here is that the form of the dynamical invariant is determined from the constraints. The use of the action integral allows us to study the stability of counterdiabatic driving~\cite{Takahashi13}. The optimized solution is obtained by using the variation of the action up to first order; the stability can be studied by expanding the action up to second order. \subsection{Flow equation} As a final application of the dynamical invariant, we point out that the flow equation takes the same form as the equation for the dynamical invariant. The flow equation is a method of diagonalizing a matrix by iterations~\cite{GW93, GW94, Wegner94, Kehrein}. For a given Hermitian matrix $H$, we consider a time evolution described by \begin{align} i\frac{\partial H(t)}{\partial t}=[\eta(t),H(t)], \label{flow} \end{align} with $H(0)=H$.
$\eta(t)$ represents a generator of the time evolution. One possible choice is given by~\cite{Wegner94} \begin{align} \eta(t)=i[H_{\rm diag}(t),H(t)], \end{align} where $H_{\rm diag}(t)$ is obtained by setting the off-diagonal components of $H(t)$ to zero. That is, for a given set of basis kets $\{|n\rangle\}$ we have \begin{align} \langle m|H_{\rm diag}(t)|n\rangle=\delta_{m,n}\langle n|H(t)|n\rangle. \end{align} Then, we can show \begin{align} \frac{\partial}{\partial t}\sum_{m,n (m\ne n)}|\langle m|H(t)|n\rangle|^2 &=-2\sum_{m,n}(\epsilon_n(t)-\epsilon_m(t))^2|\langle m|H(t)|n\rangle|^2 \le 0, \label{negative} \end{align} where $\epsilon_n(t)=\langle n|H(t)|n\rangle$. This relation shows that the total weight of the off-diagonal components of $H(t)$ gradually decreases as a function of $t$. Then, we expect \begin{align} \lim_{t\to\infty} H(t)=\lim_{t\to\infty} H_{\rm diag}(t). \end{align} Since Eq.~(\ref{flow}) is equivalent to Eq.~(\ref{di}), the eigenvalues of $H(t)$ are independent of $t$, which means that the diagonal components at $t\to\infty$ represent the eigenvalues of the original matrix $H$. From the viewpoint of shortcuts to adiabaticity, the generator $\eta(t)$ is interpreted as a counterdiabatic term for $H(t)$. We specify the form of the counterdiabatic term instead of specifying the time dependence of the matrix $H(t)$. Then, the resulting dynamics is interpreted as a diagonalization process of the original matrix $H$. \section{Summary} \label{sec:summary} We have presented a brief introduction to the dynamical-invariant formalism of shortcuts to adiabaticity. After summarizing some fundamental properties, we discussed the method of inverse engineering together with several simple examples. We also discussed the relation to counterdiabatic driving and several different aspects of the dynamical invariant. The most important property of the dynamical invariant is that we can understand dynamical systems in the same way as static systems.
Once we find the dynamical invariant operator, the problem is reduced to solving an eigenvalue equation. For our purposes of quantum control, it is not necessary to solve the eigenvalue problem. Owing to the many possible choices of the coefficients of the dynamical invariant, the resulting protocol is not unique and can be obtained in an adapted manner. Although experimental implementations are broadly covered by the examples of a two-level system and a harmonic oscillator, it is an interesting and challenging problem to find dynamical invariants for other systems. We expect that nontrivial uses of the dynamical invariant can be found by combining ideas from different fields that utilize quantities similar to the dynamical invariant. \enlargethispage{20pt} \funding{The author was supported by JSPS KAKENHI Grants No. JP20K03781 and No. JP20H01827. } \ack{The author is grateful to Gonzalo Muga and Mikio Nakahara for useful comments.}
\section{Introduction} By an \emph{Abel-Grassmann's groupoid} (briefly an \emph{$AG$-groupoid}) we shall mean any groupoid which satisfies the identity $(xy)z=(zy)x$.~Such a groupoid is also called a \emph{left almost semi\-group} (briefly an \emph{$LA$-semigroup}) or a \emph{left invertive groupoid} or a \emph{right modular groupoid} (cf.\,\cite{Hol, KN, MIq}). This structure is closely related to a commutative semigroup, because if an $AG$-groupoid contains a right identity, then it becomes a commutative monoid. Also, if an $AG$-groupoid $A$ with a left zero $z$ is finite, then (under certain conditions) $A\setminus\{z\}$ is a commutative group \cite{MK}. The name \emph{Abel-Grassmann's groupoids} was suggested by Stojan Bogdanovi\'c at a seminar in Ni\v{s}. This name appeared for the first time in the paper \cite{PS1} and in the book \cite{DK}. An $AG$-groupoid $A$ satisfying the identity $x(yz)=y(xz)$ is called an \emph{$AG^{**}$-groupoid}. Such groupoids were studied by many authors. For example, in \cite{MB} it has been proved that an $AG^{**}$-groupoid containing a left cancellative $AG^{**}$-subgroupoid can be embedded in a commutative monoid whose cancellative elements form a commutative group; the identity of this group coincides with the identity of the monoid. Also, each $AG^{**}$-groupoid satisfying the identity $(xx)x=x(xx)$ can be uniquely expressed as a semilattice of certain Archimedean $AG^{**}$-groupoids \cite{MK2}. Some other decompositions of certain $AG^{**}$-groupoids are given in \cite{P, PS}. Further, certain fundamental congruences on $AG^{**}$-groupoids are described in \cite{MK1, PB}. Finally, the kernel normal system of an inversive $AG^{**}$-groupoid has been studied in \cite{BPS}. In this paper we investigate \emph{completely inverse $AG^{**}$-groupoids}, i.e., $AG^{**}$-groupoids in which every element $a$ has a unique inverse $a^{-1}$ such that $aa^{-1}=a^{-1}a$.
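For a first concrete instance (our illustration, not taken from the cited literature): on $\mathbb{Z}_5$, define $a\cdot b=-a+b \pmod 5$. This groupoid has left identity $0$, every element is its own inverse with $aa^{-1}=a^{-1}a=0$, and the operation is neither commutative nor associative; hence it is a completely inverse $AG^{**}$-groupoid. A brute-force check in Python:

```python
n = 5
op = lambda a, b: (b - a) % n          # a . b = -a + b in Z_5
Z = range(n)

# left invertive law (xy)z = (zy)x, so (Z_5, op) is an AG-groupoid
assert all(op(op(x, y), z) == op(op(z, y), x) for x in Z for y in Z for z in Z)
# the AG** identity x(yz) = y(xz)
assert all(op(x, op(y, z)) == op(y, op(x, z)) for x in Z for y in Z for z in Z)
# 0 is a left identity, and every a is its own inverse with a.a = 0 idempotent
assert all(op(0, a) == a for a in Z)
assert all(op(op(a, a), a) == a and op(a, a) == 0 for a in Z)
# the operation is neither commutative nor associative
assert op(1, 2) != op(2, 1)
assert op(op(1, 0), 0) != op(1, op(0, 0))
```

Since the only idempotent is $0$, this example is in fact an $AG$-group in the terminology recalled below.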
In Section $2$ we establish some necessary definitions and facts concerning $AG^{**}$-groupoids. In Section \nolinebreak $3$ we give a few interesting results about completely inverse $AG^{**}$-groupoids. Recall from \cite{DG} that any completely inverse $AG^{**}$-groupoid satisfies Lallement's lemma for regular semigroups. Using this fact, we describe the maximum idempotent-separating congruence $\mu$ (which is equal to the least semilattice congruence) on a completely inverse $AG^{**}$-groupoid $A$.~In particular, $A$ is a semilattice $E_A$ of $AG$-groups $e\mu$ ($e\in E_A$).~Also, we show that the interval $[1_A,\mu]$ is a modular lattice.~The main result of this section says that any $AG$-groupoid $A$ is a completely inverse $AG^{**}$-groupoid if and only if $A$ is a strong semilattice of $AG$-groups.~On the one hand, in the light of this fact, we are able to construct completely inverse $AG^{**}$-groupoids.~On the other hand, completely inverse $AG^{**}$-groupoids are very similar to Clifford semigroups (i.e., (strong) semilattices of groups). At the beginning of Section $4$ we prove that any congruence $\rho$ on a completely inverse $AG^{**}$-groupoid is uniquely determined by $(i)$ its kernel and trace; $(ii)$ \nolinebreak the set of $\rho$-classes containing idempotents.~Furthermore, we determine the least $AG$-group congruence $\sigma$ and describe all $AG$-group congruences in terms of their kernels. Also, we give some equivalent conditions for a completely inverse $AG^{**}$-groupoid $A$ to be $E$-unitary and we describe all $E$-unitary congruences on $A$. 
In Section $5$ we characterize abstractly congruences on an arbitrary completely inverse $AG^{**}$-groupoid $A$ via the so-called congruence pairs for $A$.~Furthermore, we study the trace classes of the complete lattice $\mathcal{C}(A)$ of all congruences on $A$.~The main result of this section says that the map $\rho\to\text{tr}(\rho)$ ($\rho\in\mathcal{C}(A)$) is a complete lattice homomorphism of $\mathcal{C}(A)$ onto the lattice of all congruences on the semilattice $E_A$. Also, if $\theta$ denotes the congruence on $\mathcal{C}(A)$ induced by this map, then for every $\rho\in\mathcal{C}(A)$, $\rho\theta$ is a modular lattice (with commuting elements). Moreover, $\rho\theta=[\rho_\theta,\mu(\rho)]$.~If in addition, $A$ is $E$-unitary, then $\rho_\theta=\rho\cap\sigma$, and the mapping $\rho\to\rho\cap\sigma$ ($\rho\in\mathcal{C}(A)$) is a complete lattice homomorphism of $\mathcal{C}(A)$ onto the lattice of idempotent pure congruences.~Finally, we investigate the lattice $\mathcal{FC}(A)$ of all fundamental congruences on $A$.~We prove that $\mathcal{FC}(A)=\{\mu(\rho):\rho\in\mathcal{C}(A)\}\cong\mathcal{C}(E_A)$. In Section $6$ we show first that each completely inverse $AG^{**}$-groupoid $A$ possesses a largest idempotent pure congruence $\tau$.~Also, we study the kernel classes of $\mathcal{C}(A)$.~We prove a result analogous to a result from the previous section. In particular, we show that the interval $[\rho\cap\mu,\tau(\rho)]$ consists of all congruences on $A$ whose kernels are equal to $\ker(\rho)$. Further, we return to the study of $E$-unitary congruences.~We determine all $E$-unitary congruences on $A$; that is, we show that a congruence is $E$-unitary if and only if its kernel is equal to the kernel of some $AG$-group congruence on $A$.~Finally, we give once again necessary and sufficient conditions for a completely inverse $AG^{**}$-groupoid to be $E$-unitary.
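To keep a concrete example in mind while reading (our construction, anticipating the main theorem of Section $3$ that completely inverse $AG^{**}$-groupoids are exactly the strong semilattices of $AG$-groups): glue two copies of the $AG$-group $(\mathbb{Z}_3,\,a\cdot b=-a+b)$ along the two-element semilattice $\{1>0\}$ via the identity homomorphism. The resulting six-element groupoid, checked by brute force below, is a completely inverse $AG^{**}$-groupoid whose classes $\{a:aa^{-1}=e\}$ are precisely the two $AG$-group components.

```python
def down(a):
    """Structure map phi_{1,0} from the top copy {0,1,2} to the bottom copy {3,4,5}."""
    return a + 3 if a < 3 else a

def op(a, b):
    if a < 3 and b < 3:                  # both in the top AG-group (Z_3, -a+b)
        return (b - a) % 3
    x, y = down(a) - 3, down(b) - 3      # otherwise multiply in the bottom copy
    return (y - x) % 3 + 3

A = range(6)
# the left invertive law and the AG** identity hold
assert all(op(op(x, y), z) == op(op(z, y), x) for x in A for y in A for z in A)
assert all(op(x, op(y, z)) == op(y, op(x, z)) for x in A for y in A for z in A)
# the idempotents form a two-element semilattice; every a is its own inverse
E = [a for a in A if op(a, a) == a]
assert E == [0, 3]
assert all(op(op(a, a), a) == a for a in A)
# the classes of the relation a a^{-1} = b b^{-1} are the two AG-group components
assert [a for a in A if op(a, a) == 0] == [0, 1, 2]
assert [a for a in A if op(a, a) == 3] == [3, 4, 5]
# the groupoid is not associative, hence not a Clifford semigroup
assert op(op(1, 0), 0) != op(1, op(0, 0))
```

The relation $aa^{-1}=bb^{-1}$ appearing in the last checks is the congruence $\mu$ of Section $3$.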
The terminology used in this paper coincides with semigroup terminology (see the book \cite{Pet}). \section{Preliminaries} One can easily check that in an arbitrary $AG$-groupoid $A$, the \emph{medial law} is valid, that \nolinebreak is, the equality \begin{eqnarray}\label{medial} (ab)(cd)=(ac)(bd) \end{eqnarray} holds for all $a,b,c,d\in A$. Recall from \cite{PS} that an \emph{$AG$-band} $A$ is an $AG$-groupoid satisfying the identity $x^2=x$. If in addition, $ab=ba$ for all $a,b\in A$, then we say that $A$ is an \emph{$AG$-semilattice}. Let $A$ be an $AG$-groupoid and $B\subseteq A$.~Denote the set of all idempotents of $B$ by $E_B$, that is, $E_B=\{b\in B:b^2=b\}$.~From $(\ref{medial})$ it follows that if $E_A \neq\emptyset$, then $E_A E_A\subseteq E_A$, therefore, $E_A$ is an $AG$-band. An $AG$-groupoid satisfying the identity $x(yz)=y(xz)$ is said to be an \emph{$AG^{**}$-groupoid}. Any $AG^{**}$-groupoid is {\em paramedial}, i.e., it satisfies the identity \begin{eqnarray}\label{paramedial} (wx)(yz)=(zx)(yw). \end{eqnarray} Notice that each $AG$-groupoid with a left identity is an $AG^{**}$-groupoid.~Further, observe that if $A$ is an $AG^{**}$-groupoid, then (\ref{paramedial}) implies that if $E_A\neq\emptyset$, then $E_A$ is an $AG$-semi\-lattice. Indeed, in this case $E_A$ is an $AG$-band and $ef=(ee)(ff)=(fe)(fe)=fe$ for all $e,f\in E_A$. Moreover, for $a,b\in A$ and $e\in E_A$, using $(\ref{medial})$, $(\ref{paramedial})$ and the left invertive law we have $$ e(ab)=(ee)(ab)=(ea)(eb)=(ba)e=(ea)b. $$ We have just proved the following result (its second part was proved earlier in \cite{PB}). \begin{proposition}\label{semilattice} Let $A$ be an $AG^{**}$-groupoid.~Then \begin{equation}\label{e*} e\cdot ab=ea\cdot b \end{equation} for all $a,b\in A$ and $e\in E_A$. In particular, the set of all idempotents of an arbitrary $AG^{**}$-groupoid is either empty or a semilattice.
\end{proposition} We say that an $AG^{**}$-groupoid $A$ is \emph{completely regular} if for every $a\in A$ there exists $x\in A$ such that $a=(ax)a$ and $ax=xa$.~Observe that in such a case, $$(ax)(ax)=(aa)(xx)=x(aa\cdot x)=x(xa\cdot a)=x(ax\cdot a)=xa=ax\in E_A,$$ therefore, $E_A$ forms a semilattice. Let $A$ be an $AG$-groupoid with a left identity $e$ and $a\in A$. An element $a^{*}$ of $A$ is said to be a \emph{left} (\emph{right}) \emph{inverse} of $a$ if $a^{*}a=e$ (resp. $aa^{*}=e$), and an element of $A$ which is both a left and right inverse of $a$ is called an \emph{inverse} of $a$. Let $a^{*}$ be a left inverse of $a$. Then $aa^{*}=(ea)a^{*}=(a^{*}a)e=e$. It follows that any left inverse $a^{*}$ of $a$ is also its right inverse, therefore, it is its inverse. In particular, if $a^{**}$ is another left inverse of $a$, then $a^{*}=(a^{*}a)a^{*}=(a^{**}a)a^{*}=(a^{*}a)a^{**}=(a^{**}a)a^{**}=a^{**}$. The conclusion is that each left inverse of $a$ is its unique inverse. Further, if $f$ is a left identity of $A$, then $fe=e=ee$, so $e=f$, i.e., $e$ is a unique left identity of $A$. Dually, any right inverse of $a$ is its unique inverse. Denote as usual the inverse of $a$ by $a^{-1}$. Finally, it is clear that $a=(a^{-1})^{-1}$, $(ab)^{-1}=a^{-1}b^{-1}$. An $AG$-groupoid with a left identity in which every element has a left inverse is called an \emph{$AG$-group}. \begin{proposition}\label{AG-groups} Let $A$ be an $AG$-groupoid with a left identity $e$.~Then the following conditions are equivalent$:$ $(a)$ $A$ is an $AG$-group$;$ $(b)$ every element of $A$ has a right inverse$;$ $(c)$ every element $a$ of $A$ has a unique inverse $a^{-1};$ $(d)$ the equation $xa = b$ has a unique solution for all $a,b\in A$. \end{proposition} \begin{proof} By above $(a)\implies (b)\implies (c)$. $(c)\implies (d)$. Let $a,b\in A$. Then $b=eb=(aa^{-1})b=(ba^{-1})a$, i.e., $ba^{-1}$ is a solution of the equation $xa = b$. 
Also, if $c$ and $d$ are solutions of this equation, then $$c=ec=(a^{-1}a)c=(ca)a^{-1}=(da)a^{-1}=d.$$ $(d)\implies (a)$. This is obvious. \end{proof} Notice that if $g$ is an arbitrary idempotent of an $AG$-group $A$ with a left identity $e$, then $gg=g=eg$. Hence $e=g$, therefore, $E_A=\{e\}$. Denote by $V(a)$ the set of all \emph{inverses} of $a$, that is, $$ V(a)=\{a^{*}\in A:a=(aa^{*})a, \ \ a^{*}=(a^{*}a)a^{*}\}. $$ An $AG$-groupoid $A$ is called \emph{regular} (in \cite{BPS} it is called \emph{inverse}) if $V(a)\neq\emptyset$ for all $a\in A$. Note that $AG$-groups are of course regular $AG$-groupoids, but the class of all regular $AG$-groupoids is vastly more extensive than the class of all $AG$-groups.~For example, every $AG$-band $A$ is evidently regular, since $a=(aa)a$ for every $a\in A$. In \cite{BPS} it has been proved that in any regular $AG^{**}$-groupoid, $|V(a)|=1$ $(a\in A)$, therefore, we call it an \emph{inverse $AG^{**}$-groupoid}. In that case, we denote a unique inverse of $a\in A$ by $a^{-1}$. Furthermore, recall from \cite{BPS} that in any regular $AG$-groupoid $A$, $V(a)V(b)\subseteq V(ab)$ for all $a,b\in A$.~Indeed, let $a^{*}\in V(a)$ and $b^{*}\in V(b)$. Then $$(ab)(a^{*}b^{*})\cdot ab=(ab)a\cdot (a^{*}b^{*})b=(ab)a\cdot (bb^{*})a^{*}=(ab)(bb^{*})\cdot aa^{*}=(bb^{*}\cdot b)a\cdot aa^{*},$$ so $$(ab)(a^{*}b^{*})\cdot ab=(ba)(aa^{*})=(aa^{*}\cdot a)b=ab.$$ By symmetry, $a^{*}b^{*}=(a^{*}b^{*})(ab)\cdot (a^{*}b^{*})$, as exactly required.~Finally, there are regular $AG$-groupoids without idempotents. On the other hand, if $a^{*}\in V(a)$ and $aa^{*}=a^{*}a$ in the $AG$-groupoid $A$, then $aa^{*}\in E_A$ (cf.\,\cite{BPS}). \section{Completely inverse $AG^{**}$-groupoids} One can prove (cf.\,\cite{BPS}) that in an inverse $AG^{**}$-groupoid $A$, $aa^{-1}=a^{-1}a$ if and only if $aa^{-1},a^{-1}a\in E_A$. Also, in \cite{BPS} the authors studied congruences on inverse $AG^{**}$-groupoids satisfying the identity $xx^{-1}=x^{-1}x$. 
We will call such groupoids \emph{completely inverse $AG^{**}$-groupoids}. Each $AG$-group is a completely inverse $AG^{**}$-groupoid. \begin{example}\label{E1} Let $A$ be a commutative inverse semigroup. Put $a\cdot b=a^{-1}b$ for all $a,b\in A$, where $a^{-1}$ is the unique inverse of $a$ in the inverse semigroup $A$.~Then it is easy to check that $(A,\cdot)$ is an $AG^{**}$-groupoid and $E_{(A,\cdot)}=E_A$. Furthermore, $(a\cdot a)\cdot a=a$, so every $a\in A$ is its own unique inverse in $(A,\cdot)$; hence $a\cdot a\in E_{(A,\cdot)}$ for all $a\in A$ and $(A,\cdot)$ is a completely inverse $AG^{**}$-groupoid. Also, we have that $a^{-1}\cdot (a\cdot b)= a^{-1}\cdot a^{-1}b=aa^{-1}b$. Hence $$ a^{-1}\cdot (a^{-1}\cdot (a\cdot b))=a^{-1}\cdot aa^{-1}b=aaa^{-1}b=aa^{-1}ab=ab, $$ that is, $$ ab=a^{-1}\cdot (a^{-1}\cdot (a\cdot b))=a\cdot (a^{-1}\cdot (a^{-1}\cdot b)) $$ for all $a,b\in A$. Let $\rho$ be a congruence on $(A,\cdot)$. From the above equalities it follows easily that $\rho$ is a congruence on the commutative inverse semigroup $A$.~Also, if $(a,a\cdot a)\in\rho$ in $(A,\cdot)$, then $(a,a^{-1}a)\in\rho$ in $A$, since $a\cdot a=a^{-1}a$. Thus $(a^2,aa^{-1}a)\in\rho$ in $A$, so $(a^2,a)\in\rho$ in $A$. Lallement's Lemma implies that there exists $e\in E_A\cap a\rho$ and so $e\in E_{(A,\cdot)}\cap a\rho$. On the other hand, trivially $a\cdot a\in E_{a\rho}$ in $(A,\cdot)$. Conversely, one can easily see that if $\rho$ is a congruence on $A$, then $\rho$ is also a congruence on $(A,\cdot)$. Further, if $(a,a^2)\in\rho$ in $A$, then $(a,e)\in\rho$ in $A$ for some $e\in E_A$. Since $e\cdot e=e$, we obtain $(a,a\cdot a)\in\rho$ in $(A,\cdot)$. \end{example} A groupoid $A$ is said to be \emph{idempotent-surjective} if for each congruence $\rho$ on $A$, every idempotent $\rho$-class contains an idempotent of $A$. The following theorem was proved in \cite{DG}. Now we give another proof.
\begin{theorem}\label{i-s} Completely inverse $AG^{**}$-groupoids are idempotent-surjective. \end{theorem} \begin{proof} Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid $A$, $a\in A$ and $a\,\rho\,a^2$. We know that there exists an element $x\in A$ such that $a^2=(a^2x)a^2$, $x=(xa^2)x$ and $a^2x=xa^2\in E_A$. Note that $$ (a^2x)(aa)=a(a^2x\cdot a)=a(xa^2\cdot a)=a(aa^2\cdot x)=(aa^2)(ax)=a^2(a^2x)=a^2(xa^2), $$ that is, $a^2=a^2(xa^2)$. Put $e=a(xa)$. Then $e~\rho~a^2(xa^2)=a^2~\rho~a$. Also, $$e^2=(a\cdot xa)(a\cdot xa)=a\cdot (a\cdot xa)(xa)=a\cdot (ax)(xa\cdot a)=a\cdot (ax)(a^2x).$$ Furthermore, using $(\ref{paramedial})$, $$ (ax)(a^2x)=(ax)(xa^2)=(a^2x)(xa)=(xa^2)(xa)=(xa^2\cdot x)a $$ by (\ref{e*}), since $xa^2\in E_A$. Hence $(ax)(a^2x)=xa$. Consequently, $$ e^2=a(xa)=e\in E_A, $$ as required. \end{proof} Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid $A$ and $a,b\in A$. It is evident that $(a\rho)^{-1}=a^{-1}\rho$. Hence if $(a,b)\in\rho$, then $(a^{-1},b^{-1})\in\rho$. Moreover, $A/\rho$ is a completely inverse $AG^{**}$-groupoid. Further, let $A$ be an arbitrary groupoid and $\mathcal{V}$ be a fixed class of groupoids. We say that a congruence $\rho$ on $A$ is a \emph{$\mathcal{V}$-congruence} if $A/\rho\in\mathcal{V}$. For example, if $\mathcal{V}$ is the class of all semilattices, then $\rho$ is a \emph{semilattice} congruence on $A$ if $A/\rho$ is a semilattice. Moreover, $A$ is called a \emph{semilattice $A/\rho$ of $AG$-groups} if there is a semilattice congruence $\rho$ on $A$ such that every $\rho$-class is an $AG$-group.~In that case, $A$ is a \emph{semilattice $Y=A/\rho$ of $AG$-groups $A_\alpha$}, $\alpha\in Y$, where $A_\alpha$ are the $\rho$-classes of $A$, or briefly a \emph{semilattice $Y$ of $AG$-groups $A_\alpha$}.~Notice that in such a case, $A_\alpha A_\beta\subseteq A_{\alpha\beta}$, where $\alpha\beta$ is the product of $\alpha$ and $\beta$ in $Y$.
Also, $A_{\alpha\beta}=A_{\beta\alpha}$ and $A_{(\alpha\beta)\gamma}=A_{\alpha(\beta\gamma)}$. Finally, we say that a congruence $\rho$ on a groupoid $A$ is \emph{idempotent-separating} if every $\rho$-class contains at most one idempotent of $A$. The following simple result will at times be useful. \begin{lemma}\label{unipotent} A completely inverse $AG^{**}$-groupoid containing only one idempotent is an $AG$-group. \end{lemma} \begin{proof} Let $E_A=\{e\},a\in A$. Then $e=aa^{-1}=a^{-1}a$. Hence $ea=(aa^{-1})a=a$. Thus $A$ is an $AG$-group. \end{proof} For elementary facts about (inverse) semigroups the reader is referred to the book of Petrich \cite{Pet}.~It is well known that each completely regular inverse semigroup is a semilattice of groups. We prove now an analogous result. \begin{theorem}\label{mu} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Define on $A$ the relation $\mu$ by $$(a,b)\in\mu\iff aa^{-1}=bb^{-1}$$ for all $a,b\in A$. Then$:$ $(a)$ \ $\mu$ is the least semilattice congruence on $A;$ $(b)$ \ every $\mu$-class is an $AG$-group$;$ $(c)$ \ $\mu$ is the maximum idempotent-separating congruence on $A;$ $(d)$ \ $A$ is a semilattice $A/\mu$ of $AG$-groups$;$ $(e)$ \ $E_A\cong A/\mu$. \noindent Hence $A$ is a semilattice $E_A$ of $AG$-groups $G_e$ $(e\in E_A)$, where $G_e=\{a\in A:aa^{-1}=e\}$. \end{theorem} \begin{proof} $(a)$. Clearly, $\mu$ is an equivalence relation on $A$.~Let $(a,b)\in\mu$ and $c\in A$. Then $$(ca)(ca)^{-1}=(ca)(c^{-1}a^{-1})=(cc^{-1})(aa^{-1})=(cc^{-1})(bb^{-1})=(cb)(cb)^{-1}$$ and similarly $(ac)(ac)^{-1}=(bc)(bc)^{-1}$. Hence $\mu$ is a congruence on $A$. 
Also, $$ (aa^{-1})(aa^{-1})^{-1}=(aa^{-1})(a^{-1}(a^{-1})^{-1})=(aa^{-1})(a^{-1}a)=(aa^{-1})(aa^{-1})=aa^{-1}, $$ so $(a, aa^{-1})\in\mu$, where $aa^{-1}\in E_A$.~Since $E_A$ is a semilattice, $A/\mu$ \nolinebreak is a semilattice, too.~Consequently, $\mu$ is a semilattice congruence on $A$.~Moreover, since $e^{-1}=e$ for every $e\in E_A$, then $\mu$ is idempotent-separating. Finally, suppose that there is a semilattice congruence $\rho$ on $A$ such that $\mu\nsubseteq\rho$.~Then the relation $\mu\cap\rho$ is a semilattice congruence on $A$ which is properly contained in $\mu$, so not every $(\mu\cap\rho)$-class contains an idempotent of $A$, since each $\mu$-class contains exactly one idempotent, a contradiction with Theorem \ref{i-s}.~Consequently, $\mu$ must be the least semilattice congruence on $A$. $(b)$. We have noticed above that $\mu$ is idempotent-separating. It is evident that every $\mu$-class is itself a completely inverse $AG^{**}$-groupoid, since $a^{-1}\in a\mu$ for all $a\in A$. In view of Lemma \ref{unipotent}, every $\mu$-class is an $AG$-group. $(c)$. Let $\rho$ be an idempotent-separating congruence on $A$ and let $(a,b)\in\rho$.~Then $a^{-1}\,\rho\,b^{-1}$. It follows that $(aa^{-1},bb^{-1})\in\rho$. Thus $aa^{-1}=bb^{-1}$, so $(a,b)\in\mu$. Consequently, $\rho\subseteq\mu$. The rest is obvious. \end{proof} Let $\mathcal{C}(A)$ denote the complete lattice of all congruences on a groupoid $A$. It is well known that if a sublattice $\mathcal{L}$ of $\mathcal{C}(A)$ has the property that $\alpha\beta=\beta\alpha$ for all $\alpha,\beta\in\mathcal{L}$, then $\mathcal{L}$ is a modular lattice. Let $A$ be a completely inverse $AG^{**}$-groupoid.~Consider the complete lattice $[1_A,\mu]$ of all idempotent-separating congruences on $A$ (see Theorem \ref{mu}$(c)$). Let $\rho_1,\rho_2\in [1_A,\mu]$ and $(a,b)\in\rho_1\rho_2$. Then there is $c\in A$ such that $a\,\rho_1\,c\,\rho_2\,b$. In particular, $(a,c),(c,b)\in\mu$.
Hence $$ a=aa^{-1}\cdot a = cc^{-1}\cdot a~\rho_2~bc^{-1}\cdot a=ac^{-1}\cdot b~\rho_1~cc^{-1}\cdot b=bb^{-1}\cdot b=b, $$ so $(a,b)\in\rho_2\rho_1$.~Thus $\rho_1\rho_2\subseteq\rho_2\rho_1$.~By symmetry, $\rho_2\rho_1\subseteq\rho_1\rho_2$. We have just shown the following theorem. \begin{theorem}\label{m} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Then the interval $[1_A,\mu]$, consisting of all idempotent-separating congruences on $A$, is a modular lattice. \end{theorem} \begin{corollary}\label{C1} The lattice of congruences on an $AG$-group is modular. \end{corollary} A completely inverse $AG^{**}$-groupoid $A$ is a semilattice $E_A$ of $AG$-groups $G_e$ $(e\in E_A)$, where $G_e=\{a\in A:aa^{-1}=e\}$ (Theorem \ref{mu}). The relation $\leq$ defined on the semilattice $E_A$ by $e\leq f\Leftrightarrow e=ef$ is the so-called \emph{natural partial order} on $E_A$. Let $e\geq f$ and $a_e\in G_e$. Then $fa_e\in G_fG_e\subseteq G_{fe}=G_{f}$. Hence we may define a map $\phi_{e,f}:G_e\to G_f$ by $$a_e\phi_{e,f}=fa_e~~(a_e\in G_e).$$ Also, for all $a_e,b_e\in G_e$, $(fa_e)(fb_e)=(ff)(a_eb_e)=f(a_eb_e)$, so \begin{eqnarray}\label{homomorphism} (a_e\phi_{e,f})(b_e\phi_{e,f})=(a_eb_e)\phi_{e,f}, \end{eqnarray} i.e., $\phi_{e,f}$ is a homomorphism between the $AG$-groups $G_e$ and $G_f$. In particular, $e\phi_{e,f}=f$ (this follows also from $e\geq f$).~Observe that $\phi_{e,e}$ is the identical automorphism of the $AG$-group $G_e$. Suppose now that $e\geq f\geq g$. Then for every $a_e\in G_e$, $$(a_e\phi_{e,f})\phi_{f,g}=g(fa_e)=(gg)(fa_e)=(gf)(ga_e)=g(ga_e)=ga_e=a_e\phi_{e,g},$$ since $ga_e\in G_gG_e\subseteq G_{ge} = G_g$, that is, \begin{eqnarray}\label{system} \phi_{e,f}\phi_{f,g}=\phi_{e,g} \end{eqnarray} for every $e,f,g\in E_A$ such that $e\geq f\geq g$. 
Finally, let $a_e\in G_e$ and $a_f\in G_f$ (and so $a_ea_f\in G_{ef}$; also $e,f\geq ef$).~Then we get $a_ea_f=(ef)(a_ea_f)=(ef\cdot ef)(a_ea_f)=((ef)a_e)((ef)a_f)$, i.e., \begin{eqnarray}\label{operation} a_ea_f=(a_e\phi_{e,ef})(a_f\phi_{f,ef}). \end{eqnarray} Remark that we have used only the medial law in the proof of the equalities (\ref{homomorphism}), (\ref{system}) and (\ref{operation}), therefore, if an $AG$-groupoid $A$ is a semilattice $E_A$ of the $AG$-groups $G_e$ ($e\in E_A$), then these equalities hold true. Let now $Y$ be a semilattice, $\mathcal{F}=\{A_\alpha:\alpha\in Y\}$ be a family of disjoint $AG$-groupoids of type $\mathcal{T}$, indexed by the set $Y$ ($\mathcal{F}$ may be a family of disjoint $AG$-groups). Suppose also that for each pair $(\alpha,\beta)\in Y\times Y$ such that $\alpha\geq\beta$ there is an associated homomorphism $\phi_{\alpha,\beta}:A_\alpha\to A_\beta$ such that $(a)$ \ $\phi_{\alpha,\alpha}$ is the identical automorphism of $A_\alpha$ for every $\alpha\in Y$, and $(b)$ \ $\phi_{\alpha,\beta}\phi_{\beta,\gamma}=\phi_{\alpha,\gamma}$ for all $\alpha,\beta,\gamma\in Y$ such that $\alpha\geq\beta\geq\gamma$. \medskip\noindent Put $A=\bigcup\{A_\alpha:\alpha\in Y\}$, and define a binary operation $\cdot$ on $A$ by the rule that if $a_\alpha\in A_\alpha$ and $a_\beta\in A_\beta$, then $$a_\alpha\cdot a_\beta=(a_\alpha\phi_{\alpha,\alpha\beta})(a_\beta\phi_{\beta,\alpha\beta}),$$ where the multiplication on the right side takes place in the $AG$-groupoid $A_{\alpha\beta}$. 
It is a matter of routine to check that $(A,\cdot)$ is an $AG$-groupoid.~If in addition, each $AG$-groupoid $A_\alpha$ is an $AG^{**}$-groupoid (in particular, an $AG$-group), then $(A,\cdot)$ is itself an $AG^{**}$-groupoid.~Finally, in the light of the condition $(a)$, the new multiplication coincides with the given multiplication on each $A_\alpha$, so $A$ is certainly a semilattice $Y$ of $AG$-groupoids $A_\alpha$.~We usually denote the product in $A$ also by juxtaposition, and write $A=[Y;A_\alpha;\phi_{\alpha,\beta}]$. We call the $AG$-groupoid $[Y;A_\alpha;\phi_{\alpha,\beta}]$ a \emph{strong semilattice of $AG$-groupoids $A_\alpha$}. In fact, we have proved the following theorem (see (\ref{homomorphism}), (\ref{system}) and (\ref{operation})). \begin{theorem}\label{strong semilattice} Let an $AG$-groupoid $A$ be a semilattice $A/\rho$ of $AG$-groups.~Then $A$ is a strong semilattice of $AG$-groups.~In fact, $$A=[E_A;G_e;\phi_{e,f}],$$ where for all $e,f\in E_A,$ $G_e=e\rho;$ $\phi_{e,f}:G_e\to G_f$ is given by $$a_e\phi_{e,f}=fa_e~~(a_e\in G_e),$$ and $$a_ea_f=(a_e\phi_{e,ef})(a_f\phi_{f,ef})~~(a_e\in G_e,a_f\in G_f).$$ In particular, $A$ is an $AG^{**}$-groupoid. \end{theorem} \begin{proof}If $A$ is a semilattice $A/\rho$ of $AG$-groups, then $\rho$ is idempotent-separating. Hence $E_A\cong A/\rho$, so $E_A$ is necessarily a semilattice.~Thus $A$ is a semilattice $E_A$ of $AG$-groups $G_e=e\rho$ ($e\in E_A$).~This implies the assertion of the theorem. \end{proof} It is well known that if a semigroup $S$ is a semilattice of groups, then its idempotents are \emph{central}, that is, $se=es$ for all $s\in S$ and $e\in E_S$.~The following proposition \nopagebreak says in particular that there are no non-associative $AG$-groupoids which are semilattices of $AG$-groups with central idempotents.
\begin{proposition}\label{abelian} Let $A$ be an $AG$-groupoid which is a semilattice of $AG$-groups.~If \nolinebreak the idempotents of $A$ are central, then $A$ is a strong semilattice of abelian groups. In particular, $A$ is a commutative semigroup. \end{proposition} \begin{proof} Let $A=[E_A;G_e;\phi_{e,f}]$. If the idempotents of $A$ are central, then particularly for all $e\in E_A$, $ae=ea$ for every $a\in G_e$.~This implies that every $G_e$ is a commutative group, so $A$ is a strong semilattice of abelian groups. From the definition of the multiplication in $[E_A;G_e;\phi_{e,f}]$ and from the fact that abelian groups are commutative semigroups follows that $A$ is a commutative semigroup. \end{proof} \begin{remark} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Then $ae=ea$ for all $a\in A,$ $e\in E_A$ if and only if $a=a(a^{-1}a)$ for every $a\in A$. Indeed, $$ea=e(aa^{-1}\cdot a)=(e\cdot aa^{-1})a=(a\cdot aa^{-1})e=(a(a^{-1}a))e.$$ This implies that if $a=a(a^{-1}a)$, then the idempotents of $A$ are central.~The converse implication is obvious. In the proof of Theorem \ref{i-s} we have shown that $a^2=a^2(xa^2)$ for every $a\in A$, where $x\in V(a^2)$. Furthermore, $A^{(2)}=\{a^2:a\in A\}$ is an $AG^{**}$-groupoid, since $a^2b^2=(ab)^2$ for all $a,b\in A$.~Also, $(a^{-1})^2\in V(a^2)$ for every $a\in A$.~Evidently, $E_A\subseteq A^{(2)}$.~Consequently, $A^{(2)}$ is a completely inverse $AG^{**}$-groupoid in which the idempotents are central.~From Proposition \ref{abelian} we obtain the following theorem. \end{remark} \begin{theorem}\label{A^2} If $A$ is a completely inverse $AG^{**}$-groupoid, then $A^{(2)}$ is a strong semi\-lattice of abelian groups with semilattice $E_A$ of idempotents. \end{theorem} The next theorem gives necessary and sufficient conditions for an $AG$-groupoid to be a completely inverse $AG^{**}$-groupoid. 
\begin{theorem}\label{major} The following conditions concerning an $AG$-groupoid $A$ are equivalent$:$ $(a)$ \ $A$ is a completely inverse $AG^{**}$-groupoid$;$ $(b)$ \ $A$ is a semilattice of $AG$-groups$;$ $(c)$ \ $A$ is a strong semilattice of $AG$-groups. \end{theorem} \begin{proof}$(a)\implies (b)$ by Theorem \ref{mu} and $(b)\implies (c)$ by Theorem \ref{strong semilattice}. $(c)\implies (a)$. In that case, $A$ is an $AG^{**}$-groupoid (see again Theorem \ref{strong semilattice}). Also, let $a\in A$. Then $a$ belongs to some $AG$-group $G_e$, where $e$ is a left identity of $G_e$. Consider now the unique inverse $a^{-1}$ of $a$ in $G_e$. Then evidently $a=(aa^{-1})a,$ $a^{-1}=(a^{-1}a)a^{-1}$ and $aa^{-1}=a^{-1}a=e$. Consequently, $A$ is a completely inverse $AG^{**}$-groupoid. \end{proof} \begin{remark} In view of the above theorem, we are able to construct completely inverse $AG^{**}$-groupoids. \end{remark} Let $A$ be a completely inverse $AG^{**}$-groupoid.~The relation $\leq_A$ defined on $A$ by $a\leq_A b$ if $a\in E_Ab$ is the \emph{natural partial order} on $A$. Notice that the restriction of $\leq_A$ to $E_A$ is equal to the natural partial order $\leq$ on $E_A$, therefore, we will write briefly $\leq$ instead of $\leq_A$. \medskip The following result can be deduced from \cite{P}. \begin{lemma}\label{<} In any completely inverse $AG^{**}$-groupoid $A$, the relation $\leq$ is a compatible partial order on $A$.~Also, $a\leq b$ implies $a^{-1}\leq b^{-1}$ for all $a,b\in A$. \end{lemma} \begin{proof}We include a simple proof. It is evident that $\leq$ is reflexive and preserves inverses. Let $a\leq b$ and $b\leq a$, i.e., $a=eb$ and $b=fa$ for some $e,f\in E_A$.~Then by Proposition \ref{semilattice}, $ea=a$.~Using again Proposition \ref{semilattice}, $a=eb=e(fa)=(ef)a=(fe)a=f(ea)=fa=b$. Hence $\leq$ is antisymmetric. From Proposition \ref{semilattice} it follows also that $\leq$ is transitive.
Finally, if $a\leq b$ and $c\leq d$, that is, $a=eb$ and $c=fd$ for some $e,f\in E_A$, then we obtain that $ac=(eb)(fd)=(ef)(bd)$.~Thus $ac\leq bd$. \end{proof} For some equivalent definitions of the relation $\leq$, consult \cite{P}. Moreover, we have the following proposition. \begin{proposition} In any completely inverse $AG^{**}$-groupoid $A$, $\leq\,\cap\,\mu=1_A$, that is, if $A=[E_A;G_e;\phi_{e,f}]$, then $\leq_{|G_e}=1_{G_e}$ for every $e\in E_A$. \end{proposition} \begin{proof} Let $a\,(\leq\,\cap\,\mu)\,b$. Then $aa^{-1}=bb^{-1}$ and $a=eb$ for some $e\in E_A$, therefore we get $aa^{-1}=(eb)(eb^{-1})=(ee)(bb^{-1})=e(bb^{-1})=(eb)b^{-1}=ab^{-1}$. Consequently, $$a=(aa^{-1})a=(bb^{-1})a=(ab^{-1})b=(aa^{-1})b=(bb^{-1})b=b,$$ as required. \end{proof} Finally, for any nonempty subset $B$ of a completely inverse $AG^{**}$-groupoid $A$, we call $$B\omega=\{a\in A:\exists~(b\in B)~b\leq a\}$$ the \emph{closure of $B$ in $A$}; if $B=B\omega$, then $B$ is \emph{closed in $A$}. Note that $B\omega$ is closed in $A$. It is clear that a subgroupoid $B$ of a completely inverse $AG^{**}$-groupoid $A$ is itself a completely inverse $AG^{**}$-groupoid if and only if $b^{-1}\in B$ for every $b\in B$. In such a case, $B$ is a \emph{completely inverse $AG^{**}$-subgroupoid} of $A$.~Using Lemma \ref{<}, one can prove the following proposition. \begin{proposition}\label{omega}If $B$ is a completely inverse $AG^{**}$-subgroupoid of a completely inverse $AG^{**}$-groupoid $A$, then $B\omega$ is a closed completely inverse $AG^{**}$-subgroupoid of $A$. \end{proposition} In particular, $E_A\omega$ is a closed completely inverse $AG^{**}$-subgroupoid of $A$.~It is easy to see that $$ E_A\omega=\{a\in A:(\exists\,e\in E_A)~ea\in E_A\}.
$$ \section{Certain $E$-unitary congruences} Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid $A$.~By the \emph{kernel} ker($\rho$) (respectively the \emph{trace} tr($\rho$)) of $\rho$ we shall mean the set $\{a\in A: (a,a^2)\in\rho\}$ (respectively the restriction of $\rho$ to the set $E_A$). Note that tr($\rho$) is a congruence on the semilattice \nolinebreak $E_A$. Also, in the light of Theorem \ref{i-s}, $$ \ker(\rho)=\{a\in A:\exists~(e\in E_A)\,(a,e)\in\rho\}=\bigcup\{e\rho:e\in E_A\}. $$ The following proposition may sometimes be useful. \begin{proposition}\label{ab=ba} Let $A=[E_A;G_e;\phi_{e,f}]$ be a completely inverse $AG^{**}$-groupoid and let $a,b\in A$ be such that $ab\in E_A$.~Then $ab=ba$. \end{proposition} \begin{proof}Let $ab=e\in E_A$. Then $$ba=b(aa^{-1}\cdot a)=(aa^{-1})(ba)=(ab)(a^{-1}a)\in E_A.$$ Since $ab,ba\in G_e$, we get $ab=ba$. \end{proof} The following theorem says in particular that each congruence on a completely inverse $AG^{**}$-groupoid is uniquely determined by its kernel and trace. \begin{theorem}\label{kernel-trace} If $\rho$ is a congruence on a completely inverse $AG^{**}$-groupoid $A$, then $$ (a,b)\in\rho\iff (aa^{-1},bb^{-1})\in{\rm tr}(\rho)~\&~ab^{-1}\in\ker(\rho). $$ Thus for all $\,\rho_1,\rho_2\in\mathcal{C}(A)$, $$ \rho_1\subseteq\rho_2\iff{\rm tr}(\rho_1)\subseteq{\rm tr}(\rho_2)~\&~\ker(\rho_1)\subseteq\ker(\rho_2). $$ In particular, each congruence on a completely inverse $AG^{**}$-groupoid is uniquely determined by its kernel and trace. \end{theorem} \begin{proof} Let $(a,b)\in\rho$. Then evidently $(a^{-1},b^{-1}),(ab^{-1},bb^{-1})\in\rho$, so $(aa^{-1},bb^{-1})\in\text{tr}(\rho)$ and $ab^{-1}\in\ker(\rho)$. Conversely, let $(aa^{-1},bb^{-1})\in\text{tr}(\rho)$ and $ab^{-1}\in\text{ker}(\rho)$.~In view of Theorem \ref{mu}, $(a\rho,b\rho)\in\mu_{A/\rho}$, so $((ab^{-1})\rho,(bb^{-1})\rho)\in\mu_{A/\rho}$.~Since $ab^{-1}\in\text{ker}(\rho)$, we have $(ab^{-1})\rho\in E_{A/\rho}$.
Evidently, $(bb^{-1})\rho\in E_{A/\rho}$.~Hence $(ab^{-1})\rho=(bb^{-1})\rho$ (by Theorem \ref{mu}$(c)$).~Thus $$a\rho=(aa^{-1}\cdot a)\rho=(bb^{-1}\cdot a)\rho=(ab^{-1}\cdot b)\rho=(bb^{-1}\cdot b)\rho=b\rho,$$ as required.~The rest of the theorem follows from the first equivalence. \end{proof} \begin{remark}\label{Clifford} Note that the first part of the above theorem is true for an arbitrary Clifford semigroup; the proof is very similar.~In fact, if $ab^{-1}\in\text{ker}(\rho)$, then $$(ab^{-1})\rho=(b^{-1}a)\rho=(b^{-1}b)\rho,$$ so $a\rho=(aa^{-1}\cdot a)\rho=(bb^{-1}\cdot a)\rho=(b\cdot b^{-1}a)\rho=(b\cdot b^{-1}b)\rho=b\rho$. Clearly, the condition $ab^{-1}\in\text{ker}(\rho)$ from Theorem \ref{kernel-trace} is equivalent to the condition $a^{-1}b\in\text{ker}(\rho)$. In the light of Proposition \ref{ab=ba}, it is also equivalent to $b^{-1}a\in\text{ker}(\rho)$. \end{remark} \begin{theorem}\label{idempotent classes} Let $\rho_1,\rho_2$ be congruences on a completely inverse $AG^{**}$-groupoid $A$.~Then the following statements are equivalent$:$ $(a)$ \ $e\rho_1\subseteq e\rho_2$ for every $e\in E_A;$ $(b)$ \ $\rho_1\subseteq\rho_2$. \noindent In particular, every congruence $\rho$ on a completely inverse $AG^{**}$-groupoid is uniquely determined by the set of $\rho$-classes containing idempotents. \end{theorem} \begin{proof} $(a)\implies (b)$. Let $a\in b\rho_1$. Then $$aa^{-1}\in (bb^{-1})\rho_1\subseteq (bb^{-1})\rho_2~~\&~~ab^{-1}\in (bb^{-1})\rho_1\subseteq (bb^{-1})\rho_2.$$ In the light of Theorem \ref{kernel-trace}, $a\in b\rho_2$. Thus $\rho_1\subseteq\rho_2$. $(b)\implies (a)$. This is trivial. \end{proof} In Section $5$ we shall characterize abstractly the congruences on a completely inverse $AG^{**}$-groupoid $A$ via the congruence pairs for $A$. A nonempty subset $B$ of a groupoid $A$ is called \emph{left} (\emph{right}) \emph{unitary} if $ba\in B$ (resp. $ab\in B$) implies $a \in B$ for all $b \in B$, $a\in A$.
Also, we say that $B$ is \emph{unitary} if it is both left and right unitary. Finally, a groupoid $A$ is said to be \emph{$E$-unitary} if $E_A$ is unitary. \begin{proposition}\label{E-unitary} Let $E_A$ be a left unitary subset of an $AG$-groupoid $A$.~Then $E_A$ is also right unitary.~If, in addition, $A$ is an $AG^{**}$-groupoid, then the following conditions are equivalent$:$ $(a)$ \ $A$ is $E$-unitary$;$ $(b)$ \ $E_A$ is left unitary$;$ $(c)$ \ $E_A$ is right unitary. \end{proposition} \begin{proof} $(a)\implies (b),(c)$. Obvious. $(b)\implies (a)$. Let $a\in A,e\in E_A$ and let $ae=f\in E_A$. Then $(ae)f\in E_A$, therefore, $(fe)a\in E_A$. Thus $a\in E_A$, since $fe\in E_A$ and $E_A$ is left unitary. $(c)\implies (a)$. Let $a\in A,e\in E_A$ and $ea=f\in E_A$. Then, using $(3)$, $$f=f(ea)=(fe)a=(ae)f.$$ Hence $ae\in E_A$. Thus $a\in E_A$. \end{proof} $AG$-groups are examples of $E$-unitary completely inverse $AG^{**}$-groupoids. A congruence $\rho$ on a completely inverse $AG^{**}$-groupoid $A$ is an \emph{$AG$-group congruence} if $A/\rho$ is an $AG$-group.~By Lemma \ref{unipotent}, $\rho$ is an $AG$-group congruence if and only if tr$(\rho)=E_A\times E_A$.~Since $A\times A$ is an $AG$-group congruence on $A$, the intersection of all the $AG$-group congruences on $A$ is the least $AG$-group congruence on $A$. A more useful characterization of the least $AG$-group congruence on $A$ is given in the following theorem. \begin{theorem}\label{sigma} In any completely inverse $AG^{**}$-groupoid $A$, $$ \sigma=\{(a,b)\in A\times A:(\exists~e\in E_A)\,ea=eb\} $$ is the least $AG$-group congruence with the kernel $E_A\omega$. \end{theorem} \begin{proof} It is evident that $\sigma$ is reflexive and symmetric. Let $(a,b),(b,c)\in\sigma$, so that $ea=eb$ and $fb=fc$ for some $e,f\in E_A$. Using Proposition \ref{semilattice}, we have that $$ (fe)a=f(ea)=f(eb)=(fe)b=(ef)b=e(fb)=e(fc)=(ef)c=(fe)c, $$ where $fe\in E_A$. Thus $(a,c)\in\sigma$.
Consequently, $\sigma$ is an equivalence relation on $A$.~Further, let $(a,b)\in\sigma$, that is, $ea=eb$, where $e\in E_A$, and let $c\in A$. Then again in the light of Proposition \ref{semilattice}, $e(ac)=(ea)c=(eb)c=e(bc)$. Also, $$(cc^{-1})e\cdot ca=(cc^{-1})c \cdot ea =(cc^{-1})c \cdot eb=(cc^{-1})e \cdot cb,$$ where $(cc^{-1})e\in E_A$. Hence $\sigma$ is a congruence on $A$. Since $(ef)e=(ef)f$ and $ef\in E_A$ for all $e,f\in E_A$, $\sigma$ is an $AG$-group congruence on $A$. Also, let $\rho$ be an $AG$-group congruence on $A$ and $(a,b)\in\sigma$. Then $ea=eb$, where $e\in E_A$, so $(e\rho)(a\rho)=(e\rho)(b\rho)$. Hence $a\rho=b\rho$, since $e\rho$ is a left identity of the $AG$-group $A/\rho$. Thus $\sigma\subseteq\rho$. Consequently, $\sigma$ is the least $AG$-group congruence on $A$. Finally, $$ a\in\text{ker}(\sigma)\Longleftrightarrow (\exists\,f\in E_A)~(a,f)\in\sigma\Longleftrightarrow (\exists\,e,f\in E_A)~ea=ef\Longleftrightarrow a\in E_A\omega, $$ as required. \end{proof} From Theorem \ref{kernel-trace} it follows that $(a,b)\in\sigma\Leftrightarrow ab^{-1}\in E_A\omega$. Also, in the light of the end of Section $3$, $E_A\omega$ is a closed completely inverse $AG^{**}$-subgroupoid of $A$. Evidently, $E_A\subseteq E_A\omega$ and if $ab\in E_A\omega$, then $ba\in E_A\omega$. \medskip A nonempty subset $B$ of a completely inverse $AG^{**}$-groupoid $A$ is called: $(F)$ \ \emph{full} if $E_A\subseteq B$; $(S)$ \ \emph{symmetric} if $xy\in B$ implies $yx\in B$ for all $x,y\in A$. \medskip\noindent A completely inverse $AG^{**}$-subgroupoid $N$ of $A$ is said to be \emph{normal} if it is full, closed and symmetric. In that case, we shall write $N\lhd A$. Denote the set of all $AG$-group congruences on a completely inverse $AG^{**}$-group\-oid $A$ by $\mathcal{GC}(A)$. It is clear that $\mathcal{GC}(A)=[\sigma,A\times A]$ is a complete sublattice of $\mathcal{C}(A)$.
Note that $\mathcal{GC}(A)\cong\mathcal{C}(A/\sigma)$ and so the lattice $\mathcal{GC}(A)$ is modular (by Corollary \ref{C1}). Further, let $\mathcal{N}(A)$ be the set of all normal completely inverse $AG^{**}$-subgroupoids of $A$.~It is obvious that $E_A\omega\subseteq N$ for every $N\lhd A$, and if $\emptyset\neq\mathcal{F}\subseteq\mathcal{N}(A)$, then $\bigcap \{B\,:\,B\in\mathcal{F}\}\in\mathcal{N}(A)$. Consequently, $\mathcal{N}(A)$ is a complete lattice. The following theorem (proved in \cite{P1}) describes the $AG$-group congruences on a completely inverse $AG^{**}$-groupoid in terms of its normal completely inverse $AG^{**}$-subgroupoids. \begin{theorem}\label{AG-group congruences} Let $A$ be a completely inverse $AG^{**}$-groupoid, $N\lhd A$.~Then the relation $$ \rho_N=\{(a,b)\in A\times A:ab^{-1}\in N\} $$ is the unique $AG$-group congruence $\rho$ on $A$ for which $\ker(\rho)=N$. Conversely, if $\rho\in\mathcal{GC}(A)$, then $\ker(\rho)\in\mathcal{N}(A)$ and $\rho=\rho_N$ for $N=\ker(\rho)$. Consequently, the map $\phi:\mathcal{N}(A)\to\mathcal{GC}(A)$ given by $N\phi=\rho_N$ $(N\in\mathcal{N}(A))$ is a complete lattice isomorphism of $\mathcal{N}(A)$ onto $\mathcal{GC}(A)$.~In particular, the lattice $\mathcal{N}(A)$ is modular.\qed \end{theorem} We say that a congruence $\rho$ on a groupoid $A$ is \emph{idempotent pure} if $e\rho\subseteq E_A$ for all $e\in E_A$. Notice that any idempotent pure congruence $\rho$ on an arbitrary completely inverse $AG^{**}$-groupoid $A$ is contained in $\sigma$. Indeed, if $(a,b)\in\rho$, then $(ab^{-1},bb^{-1})\in\rho$, so $ab^{-1}\in E_A\subseteq E_A\omega$. Thus $(a,b)\in\sigma$, as required. The following theorem gives necessary and sufficient conditions for a completely inverse $AG^{**}$-groupoid to be $E$-unitary. \begin{theorem}\label{E-unitary2} Let $A=[E_A;G_e;\phi_{e,f}]$ be a completely inverse $AG^{**}$-groupoid.
Then the following conditions are equivalent$:$ $(a)$ \ $A$ is $E$-unitary$;$ $(b)$ \ $\ker(\sigma)=E_A;$ $(c)$ \ $\sigma$ is the maximum idempotent pure congruence on $A;$ $(d)$ \ $\sigma\cap\mu=1_A;$ $(e)$ \ $\phi_{e,f}$ is a monomorphism for all $e,f\in E_A$ such that $e\geq f$. \end{theorem} \begin{proof} In view of Proposition \ref{E-unitary}, $(a)$ and $(b)$ are equivalent, since $\ker(\sigma)=E_A\omega$. $(b)\implies (c)$. This follows from the preceding remark. $(c)\implies (d)$. Indeed, tr$(\sigma\cap\mu)\subseteq\text{tr}(\mu)=1_{E_A}$ (by Theorem \ref{mu}$(c)$). Furthermore, ker$(\sigma\cap\mu)\subseteq\text{ker}(\sigma)=E_A$. In the light of Theorem \ref{kernel-trace}, $\sigma\cap\mu=1_A$. $(d)\implies (e)$. Let $a_e,b_e\in G_e$ be such that $a_e\phi_{e,f}=b_e\phi_{e,f}$.~Then $fa_e=fb_e$, therefore, $(a_e,b_e)\in\sigma$. Since clearly $(a_e,b_e)\in\mu$, we get $a_e=b_e$. $(e)\implies (a)$. Let $a_f\in G_f$ be such that $ea_f=g$ ($e,f,g\in E_A$). Then $ef=g$ and so $f\geq g$. Moreover, $ea_f=(e\phi_{e,g})(a_f\phi_{f,g})=g(a_f\phi_{f,g})=a_f\phi_{f,g}$, so $a_f\phi_{f,g}=g=f\phi_{f,g}$, therefore, $a_f=f\in E_A$. Thus $A$ is $E$-unitary (by Proposition \ref{E-unitary}). \end{proof} Let $\rho, \upsilon$ be congruences on $A$ such that $\rho \subseteq \upsilon$.~Then the map $\Phi:A/\rho \rightarrow A/\upsilon,$ where $(a\rho)\Phi=a\upsilon$ for every $a\in A$, is a well-defined epimorphism between these groupoids. Denote its kernel by $$ \upsilon /\rho = \{(a\rho, b\rho)\in A/\rho \times A/\rho: (a,b)\in\upsilon \}. $$ Then $(A/\rho)/(\upsilon /\rho) \cong A/\upsilon$. Moreover, every congruence $\alpha$ on $A/\rho$ is of the form $\upsilon/\rho$, where $\upsilon \supseteq \rho$ is a congruence on $A$. Indeed, the relation $\upsilon$, defined on $A$ by $(a, b)\in \upsilon$ if and only if $(a\rho, b\rho)\in \alpha$, is a congruence on $A$ such that $\rho \subseteq \upsilon$ and $\alpha = \upsilon /\rho$. We are now able to determine all $E$-unitary congruences on any completely inverse $AG^{**}$-groupoid.
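Before doing so, Theorems \ref{sigma} and \ref{E-unitary2} can be illustrated by brute force on a small example. The sketch below is illustrative only and is not taken from the text: it takes the six-element direct product of the two-element semilattice $(\{0,1\},\min)$ with the AG-group $(\mathbb{Z}_3,\,b-a)$, computes $\sigma$ from its defining formula, and checks that it is a congruence with ${\rm tr}(\sigma)=E_A\times E_A$ and $\ker(\sigma)=E_A$, so that this product is $E$-unitary.

```python
# Illustrative example (assumed, not from the text): the direct product of
# the two-element semilattice ({0,1}, min) with the AG-group (Z_3, b - a).
A = [(s, x) for s in (0, 1) for x in range(3)]
op = lambda a, b: (min(a[0], b[0]), (b[1] - a[1]) % 3)
E = [e for e in A if op(e, e) == e]          # idempotents: (0,0) and (1,0)

# sigma = {(a,b) : ea = eb for some idempotent e}
sigma = {(a, b) for a in A for b in A
         if any(op(e, a) == op(e, b) for e in E)}

def is_congruence(r):
    refl = all((a, a) in r for a in A)
    sym = all((b, a) in r for (a, b) in r)
    trans = all((a, c) in r for (a, b) in r for (b2, c) in r if b == b2)
    comp = all((op(a, c), op(b, c)) in r and (op(c, a), op(c, b)) in r
               for (a, b) in r for c in A)
    return refl and sym and trans and comp

assert is_congruence(sigma)
# tr(sigma) = E_A x E_A, so the quotient is an AG-group
assert all((e, f) in sigma for e in E for f in E)
# ker(sigma) = E_A, i.e. the groupoid is E-unitary
assert {a for a in A for e in E if (a, e) in sigma} == set(E)
print("number of sigma-classes:",
      len({frozenset(b for b in A if (a, b) in sigma) for a in A}))
```

Here the three $\sigma$-classes are determined by the second coordinate, and $A/\sigma$ is (isomorphic to) the AG-group $\mathbb{Z}_3$.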
\begin{theorem}\label{E-unitary congruences} The intersection of an $AG$-group congruence and a semilattice congruence on a completely inverse $AG^{**}$-groupoid $A$ is an $E$-unitary congruence on $A$. Moreover, any $E$-unitary congruence on a completely inverse $AG^{**}$-groupoid $A$ can be expressed uniquely in this way. \end{theorem} \begin{proof} Let $\rho_N$ be an $AG$-group congruence ($N\lhd A$) and $\upsilon$ be a semilattice congruence on $A$. Put for simplicity $\rho=\rho_N\cap\upsilon$, and observe that $\rho_N/\rho$ is an $AG$-group congruence on $A/\rho$ and $\upsilon/\rho$ is a semilattice congruence on $A/\rho$. Since $\rho_N/\rho\cap\upsilon/\rho=1_{A/\rho}$, we obtain $\sigma_{A/\rho}\cap\mu_{A/\rho}=1_{A/\rho}$ (see Theorem \ref{mu}$(a)$). In the light of Theorem \ref{E-unitary2}, $\rho$ is an $E$-unitary congruence on $A$. Conversely, let $\rho$ be an $E$-unitary congruence on $A$, $\rho_N/\rho=\sigma_{A/\rho}$ and let $\upsilon/\rho=\mu_{A/\rho}$, where $\rho\subseteq\rho_N,\upsilon$. Then $\rho_N$ is an $AG$-group congruence and $\upsilon$ is a semilattice congruence on $A$. Also, $(\rho_N\cap\upsilon)/\rho=\sigma_{A/\rho}\cap\mu_{A/\rho}=1_{A/\rho}$ (again by Theorem \ref{E-unitary2}). Thus $\rho=\rho_N\cap\upsilon$, as required. Finally, let $\rho=\rho_{N_1}\cap\upsilon_1=\rho_{N_2}\cap\upsilon_2$, where $N_i \lhd A$ and $\upsilon_i$ is a semilattice congruence on $A$ ($i = 1, 2$). Let $(a, b) \in \upsilon_1$.~Since $\upsilon_1 \cap \upsilon_2$ is a semilattice congruence on $A$, there exist $e,f\in E_A$ such that $(a, e)\in\upsilon_1\cap\upsilon_2,$ $(e, f)\in\rho_{N_1},(f, b)\in\upsilon_1\cap\upsilon_2$ (Theorem \ref{i-s}), so $(e, f)\in\upsilon_1\cap\rho_{N_1} =\upsilon_2\cap\rho_{N_2}\subseteq\upsilon_2$. Hence $(a, b)\in\upsilon_2$, i.e., $\upsilon_1\subseteq\upsilon_2$. By symmetry, we deduce that $\upsilon_1 = \upsilon_2$. Put $\upsilon_1 = \upsilon_2 = \upsilon$, so that $\rho=\rho_{N_1}\cap\upsilon=\rho_{N_2}\cap\upsilon$.
If $(a, b)\in\rho_{N_1}$, then $(aab, abb)\in \upsilon\cap\rho_{N_1}\subseteq\rho_{N_2}$, therefore, $(a, b)\in\rho_{N_2}$ (by cancellation).~Hence $\rho_{N_1}\subseteq\rho_{N_2}$. By symmetry, $\rho_{N_2}\subseteq\rho_{N_1}$. Thus $\rho_{N_1}=\rho_{N_2}$, as required. \end{proof} \begin{corollary}\label{C2} In any completely inverse $AG^{**}$-groupoid $A$, the relation $$\pi=\sigma\cap\mu$$ is the least $E$-unitary congruence on $A$. \end{corollary} Observe that if $\rho$ is an $E$-unitary congruence on $A$, then $\ker(\rho)=\ker(\rho_N)$ for some $N\lhd A$.~In the last section we will show that the converse implication is also true, that is, for any $AG$-group congruence $\rho_N$ on $A$ ($N\lhd A$), the family $$\mathcal{U}_{N}=\{\rho_N\cap\upsilon:\mu\subseteq\upsilon\}$$ coincides with the set of all ($E$-unitary) congruences $\rho$ on $A$ such that $$\text{ker}(\rho)=\text{ker}(\rho_N).$$ Finally, denote by $\mathcal{U}(A)$ the set of all $E$-unitary congruences on a completely inverse $AG^{**}$-groupoid $A$. Since the intersection of an arbitrary nonempty family of $E$-unitary congruences on $A$ is again an $E$-unitary congruence on $A$, and $\mathcal{U}(A)$ has a least element, the following corollary is valid. \begin{corollary}\label{C3} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Then the set\hspace{0.3mm} $\mathcal{U}(A)$ is a complete $\cap$-sublattice of $\mathcal{C}(A)$ with the least element $\pi$ and the greatest element $A\times A$. Moreover, $\mathcal{U}_{N}=\{\rho_N\cap\upsilon:\mu\subseteq\upsilon\}$ $(N\lhd A)$ is a complete sublattice of $\mathcal{U}(A)$ with the least element $\rho_N\cap\mu$ and the greatest element $\rho_N$. \end{corollary} In view of the corollary, for each $\rho\in\mathcal{C}(A)$, there is a least $E$-unitary congruence $\pi_{\rho}$ containing $\rho$. We will show in Section $6$ that $\pi_{\rho}=\sigma\rho\hspace{0.35mm}\sigma\cap\mu\rho\mu$.
\section{The trace classes of $\mathcal{C}(A)$} Let $\rho$ be a congruence on $A$, where $A$ denotes (unless otherwise stated) an arbitrary completely inverse $AG^{**}$-groupoid.~Put $K=\text{ker}(\rho)$.~It is immediate that $K$ is a full completely inverse $AG^{**}$-subgroupoid of $A$.~In the light of Proposition \ref{ab=ba}, $K$ is also symmetric.~Finally, put $\rho_{(K,\tau)}=\rho$, where $\tau=\text{tr}(\rho)$.~Theorem \ref{kernel-trace} states that \begin{equation}\label{ee4} (a,b)\in\rho_{(K,\tau)}\iff (aa^{-1},bb^{-1})\in\tau~~\&~~ab^{-1}\in K. \end{equation} Notice that if $a\in\text{ker}(\rho_{(K,\tau)})$, that is, $(a,e)\in\rho_{(K,\tau)}$, where $e\in E_A$, then $$ ea\in K~~\&~~(e,aa^{-1})\in\text{tr}(\rho_{(K,\tau)}). $$ Observe further that if $ea\in K$ and $(e,aa^{-1})\in\text{tr}(\rho)$, then $\,a=(aa^{-1})a\,\rho\,ea$, therefore, $a\in K$. Also, the following special case is of particular interest. \begin{proposition}\label{U(A)} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Then $\rho\in\mathcal{U}(A)$ if and only if $\,\ker(\rho)$ is closed in $A$. \end{proposition} \begin{proof} Let $\rho\in\mathcal{U}(A)$ and $a\in(\text{ker}(\rho))\omega$.~Then $b=ea$ for some $b\in\text{ker}(\rho)$ and $e\in E_A$. Hence $b\rho=(e\rho)(a\rho)$, where $e\rho,b\rho\in E_{A/\rho}$ and so $a\rho\in E_{A/\rho}$, since $A/\rho$ is $E$-unitary. Thus $a\in\text{ker}(\rho)$. Consequently, $(\text{ker}(\rho))\omega=\text{ker}(\rho)$. Conversely, suppose that $(e\rho)(a\rho)=f\rho$, where $a\in A$ and $e,f\in E_A$. Then $ea\in\text{ker}(\rho)$. Hence $a\in(\text{ker}(\rho))\omega=\text{ker}(\rho)$, that is, $a\rho\in E_{A/\rho}$. Thus $\rho$ is $E$-unitary. \end{proof} In Section $3$ we have called a completely inverse $AG^{**}$-subgroupoid of $A$ normal if it is full, symmetric and closed in $A$.~Also, we say that a completely inverse $AG^{**}$-subgroupoid $K$ is \emph{seminormal} if $K$ is full and symmetric.
Finally, for any ordered pair $(K,\tau)$, where $K$ is a seminormal completely inverse $AG^{**}$-subgroupoid of $A$ and $\tau$ is a congruence on $E_A$ such that \vspace{2mm}$(CP)$ \ if $ea\in K$ and $(e,aa^{-1})\in\tau$, then $a\in K$ ($a\in A,e\in E_A$), \vspace{2mm}\newline we call $(K,\tau)$ a \emph{congruence pair} for $A$ and define the relation $\rho_{(K,\tau)}$ as in \eqref{ee4} above. The following theorem together with the above consideration and Theorem \ref{kernel-trace} says that any congruence on $A$ is of the form $\rho_{(K,\tau)}$, where $(K,\tau)$ is a congruence pair for $A$, and this expression is unique. \begin{theorem}\label{ACP}If $(K,\tau)$ is a congruence pair for a completely inverse $AG^{**}$-groupoid $A$, then $\rho_{(K,\tau)}$ is the unique congruence on $A$ with $\ker(\rho_{(K,\tau)})=K$ and ${\rm tr}(\rho_{(K,\tau)})=\tau$. Conversely, if $\rho$ is a congruence on $A$, then $(\ker(\rho),{\rm tr}(\rho))$ is a congruence pair for $A$ and $\rho_{(\ker(\rho),{\rm tr}(\rho))}=\rho$. \end{theorem} \begin{proof} It is sufficient to show the direct part of the theorem.~Put $\rho=\rho_{(K,\tau)}$. It is clear that $\rho$ is reflexive and symmetric.~Let now $(a,b),(b,c)\in\rho$.~Then $(aa^{-1},cc^{-1})\in\tau$ and $(b^{-1}a)(bc^{-1})=(b^{-1}b)(ac^{-1})=(bb^{-1})(ac^{-1})\in K$. Also, $$bb^{-1}~\tau~(aa^{-1})(c^{-1}c)=(ac^{-1})(a^{-1}c)=(ac^{-1})(ac^{-1})^{-1}.$$ In the light of the condition $(CP)$, $ac^{-1}\in K$. Thus $\rho$ is transitive. Let $(a,b)\in\rho$ and $c\in A$.
Then $$(ca)(ca)^{-1}=(ca)(c^{-1}a^{-1})=(cc^{-1})(aa^{-1})~\tau~(cc^{-1})(bb^{-1})=(cb)(cb)^{-1},$$ $$(ac)(ac)^{-1}=(ac)(a^{-1}c^{-1})=(aa^{-1})(cc^{-1})~\tau~(bb^{-1})(cc^{-1})=(bc)(bc)^{-1}.$$ Also, $$(ca)(cb)^{-1}=(ca)(c^{-1}b^{-1})=(cc^{-1})(ab^{-1})\in E_AK\subseteq KK\subseteq K,$$ $$(ac)(bc)^{-1}=(ac)(b^{-1}c^{-1})=(ab^{-1})(cc^{-1})\in KE_A\subseteq KK\subseteq K.$$ Consequently, $\rho$ is a congruence on $A$. Finally, let $a\in\text{ker}(\rho)$, that is, $(a,e)\in\rho$ for some $e\in E_A$. Then clearly $ea\in K$ and $(e,aa^{-1})\in\tau$. Hence $a\in K$ (by $(CP)$). Thus $\text{ker}(\rho)\subseteq K$. Conversely, let $a\in K$. Then $a^{-1}\in K$. Hence $(a^{-1}a,a)\in\rho$ and so $a\in\text{ker}(\rho)$. Thus ker$(\rho)=K$. Evidently, tr$(\rho)=\tau$. In view of Theorem \ref{kernel-trace}, $\rho_{(K,\tau)}$ is uniquely determined by the congruence pair $(K,\tau)$. \end{proof} It is easy to see that if $K$ is closed in $A$, then the condition $(CP)$ is not necessary in the proof of the direct part of Theorem \ref{ACP}.~Combining this fact with Proposition \ref{U(A)} and Theorem \ref{kernel-trace} we obtain the following corollary. \begin{corollary}\label{C4} Each $E$-unitary congruence $\rho$ on a completely inverse $AG^{**}$-groupoid $A$ is of the form $\rho_{(K,\tau)}$, where $K\lhd A$ and $\tau\in\mathcal{C}(E_A)$, and this expression is unique. \end{corollary} \begin{remark} One can modify Proposition III.$2.3$ \cite{Pet} for completely inverse $AG^{**}$-groupoids. \end{remark} Further, let $\rho$ be a congruence on $A$. Put $$\mu(\rho)=\{(a,b)\in A\times A:(a\rho,b\rho)\in\mu_{A/\rho}\}.$$ Clearly, $\mu(\rho)\in\mathcal{C}(A)$ and $\rho\subseteq\mu(\rho)$. From Theorem \ref{mu} it follows that $$(a,b)\in\mu(\rho)\iff (aa^{-1},bb^{-1})\in\rho.$$ Put $\mu(\rho)=\rho^{\theta}$. It is clear that $\text{tr}(\rho)=\text{tr}(\rho^\theta)$.
Also, if $\text{tr}(\rho_1)=\text{tr}(\rho_2)$ ($\rho_1,\rho_2\in\mathcal{C}(A)$), then from the above equality it follows that $\rho_1^{\theta}=\rho_2^{\theta}$. Consequently, $\rho^{\theta}$ is the maximum congruence on $A$ with trace tr$(\rho)$. \newline Also, put (see Theorem \ref{sigma}) $$\rho_\theta=\{(a,b)\in A\times A:(aa^{-1},bb^{-1})\in\rho~~\&~~(\exists~e\in E_{(aa^{-1})\rho})~ea=eb\}.$$ Since $a=(aa^{-1})a$, the relation $\rho_\theta$ is reflexive.~Obviously, $\rho_\theta$ is symmetric.~The proof that $\rho_\theta$ is transitive and left compatible is closely similar to the corresponding proof for the relation $\sigma$ (see Theorem \ref{sigma}). Let $(a,b)\in\rho_\theta$ and $c\in A$. Then $$(ac)(ac)^{-1}=(ac)(a^{-1}c^{-1})=(aa^{-1})(cc^{-1})~\rho~(bb^{-1})(cc^{-1})=(bc)(bc)^{-1}.$$ Also, $e(cc^{-1})~\rho~(aa^{-1})(cc^{-1})=(ac)(ac)^{-1}$ and $$e(cc^{-1})\cdot ac = ea\cdot (cc^{-1})c= eb\cdot (cc^{-1})c=e(cc^{-1})\cdot bc.$$ Consequently, $\rho_\theta$ is a congruence on $A$.~Finally, from the definition of $\rho_\theta$ it follows that tr$(\rho)=\text{tr}(\rho_\theta)$, and since the definition of $\rho_\theta$ depends only on idempotents, $\rho_\theta$ is the minimum congruence with trace tr$(\rho)$. We have just proved part of the following theorem. \begin{theorem}\label{main trace} Let $A$ be an arbitrary completely inverse $AG^{**}$-groupoid.~Define a map $\Theta:\mathcal{C}(A)\to\mathcal{C}(E_A)$ by $$ \rho\hspace{0.1mm}\Theta={\rm tr}(\rho)~~(\rho\in\mathcal{C}(A)). $$ Then $\Theta$ is a complete lattice homomorphism of $\mathcal{C}(A)$ onto $\mathcal{C}(E_A)$.~Also, if $\theta$ denotes the congruence on $\mathcal{C}(A)$ induced by $\Theta$, that is, $$ \theta=\{(\rho_1,\rho_2)\in\mathcal{C}(A)\times\mathcal{C}(A):{\rm tr}(\rho_1)={\rm tr}(\rho_2)\}, $$ then for every $\rho\in\mathcal{C}(A)$, $$\rho\theta=[\rho_\theta,\rho^\theta]$$ is a complete modular sublattice $($with commuting elements$)$ of $\,\mathcal{C}(A)$.
\end{theorem} \begin{proof} The proof that $\Theta$ is a complete homomorphism is closely similar to the corresponding proof of Theorem III.2.5 \cite{Pet}, since the join of any nonempty family $\mathcal{F}$ of congruences in an arbitrary universal algebra is given by $\bigcup_{n\hspace{0.2mm}\in\hspace{0.35mm}\mathbb{N}}(\bigcup\mathcal{F})^n$. Further, let $\tau$ be a congruence on $E_A$. Define an equivalence relation $\rho$ on $A$ by $$\rho=\{(a,b)\in A\times A:(aa^{-1},bb^{-1})\in\tau\}.$$ It is easy to check that $\rho$ is compatible with the operation on $A$.~Consequently, $\rho\in\mathcal{C}(A)$. Obviously, $\text{tr}(\rho)=\tau$. Thus $\Theta$ maps $\mathcal{C}(A)$ onto $\mathcal{C}(E_A)$. Finally, $\rho\theta$ is an interval of a complete lattice, so it is itself \nolinebreak a complete lattice.~Let $\rho_1,\rho_2\in\rho\theta$ and $a(\rho_1\rho_2)b$.~Then $a\rho_1 c\rho_2b$, where $c\in A$, so $(aa^{-1})\rho_1(cc^{-1})\rho_2(bb^{-1})$. Hence $(aa^{-1})\rho_2(cc^{-1})\rho_1(bb^{-1})$, since ${\rm tr}(\rho_1)={\rm tr}(\rho_2)$.~Moreover, $(cc^{-1})\rho_2(bc^{-1})$. It follows that $(aa^{-1})\rho_2(bc^{-1})$. Consequently, $$a=(aa^{-1}\cdot a)\rho_2(bc^{-1}\cdot a)=(ac^{-1})b.$$ Further, $(ac^{-1})\rho_1(cc^{-1})$ and so $(ac^{-1})\rho_1(bb^{-1})$. Hence $(ac^{-1}\cdot b)\rho_1(bb^{-1}\cdot b)=b$. We have just shown that $a\rho_2(ac^{-1}\cdot b)\rho_1b$, that is, $\rho_1\rho_2\subseteq\rho_2\rho_1$.~By symmetry, we deduce that $\rho_1\rho_2=\rho_2\rho_1$, therefore, the lattice $\rho\theta$ is modular. \end{proof} We call the classes of $\theta$ in the above theorem, the \emph{trace classes} of $A$. \begin{lemma}\label{L1} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Then $$ \rho_\theta\subseteq\gamma_\theta\iff {\rm tr}(\rho)\subseteq{\rm tr}(\gamma)\iff\rho^\theta\subseteq\gamma^\theta$$ for all $\rho,\gamma\in\mathcal{C}(A)$. Also, if $\rho\subseteq\gamma$, then $\rho_\theta\subseteq\gamma_\theta$ and $\rho^\theta\subseteq\gamma^\theta$. 
\end{lemma} \begin{proof}This follows directly from the definitions of $\rho_\theta$ and $\rho^\theta$. \end{proof} \begin{lemma}\label{L2} Let $\mathcal{F}$ be an arbitrary nonempty family of congruences on a completely inverse $AG^{**}$-groupoid. Put $$\mathcal{F}_\theta=\{\rho_\theta:\rho\in\mathcal{F}\},~~\mathcal{F}\hspace{0.4mm}^\theta=\{\rho^\theta:\rho\in\mathcal{F}\}.$$ Then $$ \bigvee\mathcal{F}_\theta=\Big(\bigvee\mathcal{F}\Big)_\theta~~\&~~\bigcap\mathcal{F}\hspace{0.4mm}^\theta=\Big(\bigcap\mathcal{F}\Big)^\theta. $$ \end{lemma} \begin{proof}The proof is similar to the proof of Lemma III.2.9 \cite{Pet}. \end{proof} \begin{lemma}\label{L3} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Then $\sigma=(A\times A)_\theta$. \end{lemma} \begin{proof} This is obvious. \end{proof} The following corollary gives further equivalent conditions for a completely inverse $AG^{**}$-groupoid to be $E$-unitary. \begin{corollary}\label{E-unitary3} Let $A$ be a completely inverse $AG^{**}$-groupoid.~The following conditions are equivalent$:$ $(a)$ \ $A$ is $E$-unitary$;$ $(b)$ \ $\rho_\theta=\rho\cap\sigma$ for every $\rho\in\mathcal{C}(A);$ $(c)$ \ $\rho_\theta$ is an idempotent pure congruence on $A$ for every $\rho\in\mathcal{C}(A)$. \end{corollary} \begin{proof} Recall that $A$ is $E$-unitary if and only if $\sigma$ is the maximum idempotent pure congruence on $A$ (Theorem \ref{E-unitary2}). $(a)\implies (b)$. If $\rho\in\mathcal{C}(A)$, then $\rho_\theta\subseteq\rho\cap(A\times A)_\theta=\rho\cap\sigma$ (Lemmas \ref{L1}, \ref{L3}). On the other hand, $$\text{tr}(\rho\cap\sigma)=\text{tr}(\rho)\cap\text{tr}(\sigma)=\text{tr}(\rho)\cap (E_A\times E_A)=\text{tr}(\rho)=\text{tr}(\rho_\theta)$$ and $$\text{ker}(\rho\cap\sigma)=\text{ker}(\rho)\cap\text{ker}(\sigma)=\text{ker}(\rho)\cap E_A=E_A\subseteq\text{ker}(\rho_\theta).$$ Thus $\rho\cap\sigma\subseteq\rho_\theta$ (Theorem \ref{kernel-trace}). Consequently, $\rho_\theta=\rho\cap\sigma$. $(b)\implies (a)$.
Clearly, $\mu_\theta=1_A$.~Moreover, $\mu_\theta=\mu\cap\sigma=\pi$ (Corollary \ref{C2}), therefore, $\pi=1_A$, so $A$ is $E$-unitary. It is now clear that $(a)$ implies $(c)$. We show the converse implication. Indeed, if $(c)$ holds, then $(A\times A)_\theta=\sigma$ is idempotent pure. Since $\rho_\theta\subseteq\sigma$ for every $\rho\in\mathcal{C}(A)$, each $\rho_\theta$ is idempotent pure, too, as required. \end{proof} We have mentioned in the above proof that if $A$ is $E$-unitary, then $\sigma$ is the maximum idempotent pure congruence on $A$, therefore, the set of all idempotent pure congruences $[1_A,\sigma]$ on an $E$-unitary completely inverse $AG^{**}$-groupoid $A$ forms a complete sublattice of the lattice $\mathcal{C}(A)$. From the above corollary we obtain the following proposition. \begin{proposition}\label{i-p} Let $A$ be an $E$-unitary completely inverse $AG^{**}$-groupoid.~Then the mapping $\chi:\mathcal{C}(A)\to \mathcal{C}(A)$ defined by $$\rho\chi=\rho\cap\sigma~~(\rho\in\mathcal{C}(A))$$ is a complete lattice homomorphism of $\mathcal{C}(A)$ onto the lattice of all idempotent pure congruences on $A$. \end{proposition} \begin{proof} In view of Corollary \ref{E-unitary3}, $\rho_\theta=\rho\cap\sigma$ for every $\rho\in\mathcal{C}(A)$.~Hence $\chi$ is a complete $\vee$-homomorphism (by Lemma \ref{L2}).~It is evident that $\chi$ is a complete $\cap$-homomorphism. Finally, if $\rho$ is idempotent pure, then $\rho\subseteq\sigma$ and so $\rho\chi=\rho$. Thus $\chi$ maps $\mathcal{C}(A)$ onto the lattice of all idempotent pure congruences on $A$, as required. \end{proof} We now investigate the $\theta$-classes of $A$. \begin{lemma}\label{pomoc} In any completely inverse $AG^{**}$-groupoid $A$, $\mu_{A/\rho}=\mu(\rho)/\rho$ for every $\rho\in\mathcal{C}(A)$.~In particular, $[\rho/\rho,\rho^\theta/\rho]$ is the modular lattice of all idempotent-separating congruences on $A/\rho$ $(\rho\in\mathcal{C}(A))$.
\end{lemma} \begin{proof} It is easy to see that $\mu(\rho)/\rho$ is idempotent-separating, so $\mu(\rho)/\rho\subseteq\mu_{A/\rho}$. On the other hand, if $\gamma/\rho$, where $\rho\subseteq\gamma$, is an idempotent-separating congruence on $A/\rho$, then $\text{tr}(\gamma)\subseteq\text{tr}(\rho)$ and so $\text{tr}(\gamma)=\text{tr}(\rho)$. Hence $\rho\subseteq\gamma\subseteq\mu(\rho)$, therefore, $\gamma/\rho\subseteq\mu(\rho)/\rho$.~Thus $\mu_{A/\rho}=\mu(\rho)/\rho$.~The second part of the lemma follows from Theorem \ref{m}. \end{proof} The following theorem follows easily from the above lemma. \begin{theorem}\label{trace class} Let $A$ be a completely inverse $AG^{**}$-groupoid, $\rho\in \mathcal{C}(A)$.~Define a map $\phi : [\rho_\theta, \rho^\theta] \to \mathcal{C}(A/\rho_\theta)$ by $\gamma\phi = \gamma/\rho_\theta$ for all $\gamma\in [\rho_\theta, \rho^\theta]$.~Then $\phi$ is a complete isomorphism of the trace class $[\rho_\theta, \rho^\theta]$ onto the modular lattice of all idempotent-separating congruences on $A/\rho_\theta$. \end{theorem} \begin{remark} Note that $\phi_{\hspace{0.4mm}|\hspace{0.4mm}[\gamma,\hspace{0.3mm}\mu(\rho)]}$, where $\gamma\in\rho\theta$, is a complete isomorphism of the interval $[\gamma, \mu(\rho)]$ onto the lattice of all idempotent-separating congruences on $A/\gamma$. \end{remark} Recall that $A$ is \emph{fundamental} if and only if $\mu = 1_A$. By the above remark we have the following corollary. \begin{corollary}\label{fundamental} Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid \nolinebreak $A$. Then $A/\rho$ is fundamental if and only if $\rho=\mu(\rho)$. \end{corollary} Denote by $\mathcal{FC}(A)$ the set of all fundamental congruences on $A$, that is, $$\mathcal{FC}(A)=\{\mu(\rho):\rho\in\mathcal{C}(A)\}.$$ Since $1_A\subseteq\rho$, we have $\mu=\mu(1_A)\subseteq\mu(\rho)$ for all $\rho\in\mathcal{C}(A)$, which means that $\mu$ is the least fundamental congruence on $A$.
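Taking $\rho=1_A$ gives $\mu(1_A)=\mu$, and Lemma \ref{pomoc} then says that $\mu$ is the maximum idempotent-separating congruence, with trace class $[1_A,\mu]$. This can be confirmed by exhaustive search on a small example; the six-element product of the semilattice $(\{0,1\},\min)$ with the AG-group $(\mathbb{Z}_3,\,b-a)$ used below is assumed for illustration only and is not taken from the text.

```python
# Brute-force check (illustrative, not from the text): on a 6-element
# completely inverse AG**-groupoid, mu = {(a,b) : aa^{-1} = bb^{-1}} is the
# maximum idempotent-separating congruence; its classes are the subgroups G_e.
A = [(s, x) for s in (0, 1) for x in range(3)]
op = lambda a, b: (min(a[0], b[0]), (b[1] - a[1]) % 3)
E = [e for e in A if op(e, e) == e]
# in this example every element is its own inverse, so aa^{-1} = op(a, a)
mu = {(a, b) for a in A for b in A if op(a, a) == op(b, b)}

def partitions(xs):                 # all set partitions of the list xs
    if not xs:
        yield []
        return
    head, rest = xs[0], xs[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[head] + p[i]] + p[i + 1:]
        yield [[head]] + p

def rel(p):                         # partition -> equivalence relation
    return {(a, b) for blk in p for a in blk for b in blk}

def is_compatible(r):
    return all((op(a, c), op(b, c)) in r and (op(c, a), op(c, b)) in r
               for (a, b) in r for c in A)

congs = [rel(p) for p in partitions(A) if is_compatible(rel(p))]
idemp_sep = [r for r in congs
             if all(e == f for e in E for f in E if (e, f) in r)]
assert mu in idemp_sep
assert all(r <= mu for r in idemp_sep)   # mu is the maximum one
print("congruences:", len(congs), "idempotent-separating:", len(idemp_sep))
```

The two $\mu$-classes here are exactly the maximal subgroups $G_e$, as expected.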
Also, it follows from Lemma \ref{L2} that $\mathcal{FC}(A)$ is a complete $\cap$-sublattice of $\mathcal{C}(A)$. We have just proved a part of the following theorem. \begin{theorem} Let $A$ be a completely inverse $AG^{**}$-groupoid.~Then $\mathcal{FC}(A)$ is a complete $\cap$-sublattice of $\mathcal{C}(A)$ with the least element $\mu$ and the greatest element $A\times A$. For any nonempty family $\{\rho_i: i\in I\}$ of fundamental congruences on $A$, the join of $\{\rho_i: i\in I\}$ in $\mathcal{FC}(A)$ is given by $\mu (\bigvee\{\rho_i: i\in I\})$.~Also, $\mathcal{FC}(A)\cong\mathcal{C}(E_A)$. \end{theorem} \begin{proof} Let $\emptyset\not=\{\rho_i: i\in I\}\subseteq\mathcal{FC}(A)$. Then $$\Big(\bigvee\{\rho_i: i\in I\}, \mu\Big(\bigvee \{\rho_i: i\in I\}\Big)\Big)\in\theta.$$ On the other hand, if $\rho\in\Big[\bigvee\{\rho_i: i\in I\}, \mu(\bigvee \{\rho_i: i\in I\})\Big]$, then $\mu(\rho)=\rho$ if and only if $\rho=\mu(\bigvee \{\rho_i: i\in I\})$.~Consequently, $\mu(\bigvee \{\rho_i: i\in I\})$ is the join of $\{\rho_i: i\in I\}$ in $\mathcal{FC}(A)$. Finally, if $\mu(\rho_1)\neq\mu(\rho_2)$, where $\rho_1,\rho_2\in\mathcal{C}(A)$, then $\text{tr}(\rho_1)\neq\text{tr}(\rho_2)$, therefore, the restriction of the map $\Theta$ from Theorem \ref{main trace} to the set $\mathcal{FC}(A)$ is the required complete lattice isomorphism. \end{proof} \section{The kernel classes of $\mathcal{C}(A)$} Let $A$ be an $AG^{**}$-groupoid. For every nonempty subset $Q$ of $A$ there exists an associated equivalence relation $\mathcal{Q}$ on $A$ which is induced by the partition $\{Q,A\setminus Q\}$. Define on $A$ an equivalence relation $\tau^Q$ by $$ \tau^Q=\{(a,b)\in A\times A:(\forall x,y\in A^{1})~x(ay)\in Q\iff x(by)\in Q\}, $$ where $A^{1}=A\cup\{1\}$, $1\not\in A$ and $1a=a1=a$ for all $a\in A$. Observe that if $(a,b)\in\tau^Q$, then putting $x=y=1$ in the definition of $\tau^Q$, we obtain that either $a,b\in Q$ or $a,b\notin Q$. Thus $\tau^Q\subseteq\mathcal{Q}$.
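The syntactic relation $\tau^Q$ can be computed mechanically for any finite groupoid given by its Cayley table. The following Python sketch (ours, for illustration only; not part of the original development) does so; the table is that of the four-element commutative inverse semigroup of Example~\ref{counterex} below, with $Q=E_A=\{e,f\}$, and a brute-force check confirms that the resulting relation is compatible with the multiplication.

```python
from itertools import product

# Cayley table of a four-element commutative inverse semigroup
# with idempotents e and f.
ELEMS = ['a', 'b', 'e', 'f']
ROWS = {'a': 'eeaa', 'b': 'efab', 'e': 'aaee', 'f': 'abef'}
MUL = {(x, y): ROWS[x][i] for x in ELEMS for i, y in enumerate(ELEMS)}

def tau_Q(elems, mul, Q):
    """(a, b) in tau^Q  iff  x(ay) in Q <=> x(by) in Q for all x, y in A^1,
    where A^1 adjoins an identity element (encoded here as None)."""
    A1 = list(elems) + [None]
    def m(x, y):
        if x is None:
            return y
        if y is None:
            return x
        return mul[(x, y)]
    return {(a, b) for a, b in product(elems, repeat=2)
            if all((m(x, m(a, y)) in Q) == (m(x, m(b, y)) in Q)
                   for x, y in product(A1, repeat=2))}

tau = tau_Q(ELEMS, MUL, {'e', 'f'})      # tau = tau^{E_A}
# Brute-force check that tau^{E_A} is left and right compatible:
assert all((MUL[(a, c)], MUL[(b, c)]) in tau and
           (MUL[(c, a)], MUL[(c, b)]) in tau
           for (a, b) in tau for c in ELEMS)
print(sorted(tau))
```

On this example the sketch returns the diagonal together with $(a,b)$, $(b,a)$, $(e,f)$, $(f,e)$; no idempotent is related to a non-idempotent, so $\tau^{E_A}$ is idempotent pure.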
\begin{proposition}\label{saturates} Let $Q$ be a nonempty subset of an $AG^{**}$-groupoid $A$.~Then $\tau^Q$ is the largest congruence $\rho$ on $A$ for which $Q$ is the union of some $\rho$-classes. \end{proposition} \begin{proof} Let $(a,b)\in\tau^Q,x,y\in A^1$ and $c\in A$. Observe that $$ x(ac\cdot y)=(ac)(xy)=(xy\cdot c)a. $$ Hence if $x(ac\cdot y)\in Q$, then $(xy\cdot c)b\in Q$, since $(a,b)\in\tau^Q$. Thus we get $x(bc\cdot y)\in Q$. By symmetry, we conclude that $\tau^Q$ is right compatible. Further, the equality $$ x(ca\cdot y)=(ca)(xy)=(cx)(ay) $$ implies that $\tau^Q$ is also left compatible.~Consequently, $\tau^Q$ is a congruence on $A$ and $Q$ is the union of some $\tau^Q$-classes, since $\tau^Q\subseteq\mathcal{Q}$.~Finally, if $\rho$ is any congruence on $A$ for which $Q$ is the union of some $\rho$-classes, then $\rho\subseteq\mathcal{Q}$. Hence if $(a,b)\in\rho$, then either $a,b\in Q$ or $a,b\notin Q$.~Thus for all $x,y\in A^1$, $x(ay)\in Q\Leftrightarrow x(by)\in Q$, so $a~\tau^Q~b$.~Consequently, $\rho\subseteq\tau^Q$. \end{proof} \begin{corollary} In any completely inverse $AG^{**}$-groupoid $A$, the relation $\tau\hspace{0.15mm}^{E_A}$ is the largest idempotent pure congruence on $A$. \end{corollary} We shall write $\tau$ instead of $\tau\hspace{0.15mm}^{E_A}$, or $\tau_A$ if necessary. Let $\rho$ be a congruence on $A$, where $A$ denotes (unless otherwise stated) an arbitrary completely inverse $AG^{**}$-groupoid. Put $$ \tau(\rho)=\{(a,b)\in A\times A:(a\rho,b\rho)\in\tau_{A/\rho}\}. $$ Clearly, $\tau(\rho)\in\mathcal{C}(A)$ and $\rho\subseteq\tau(\rho)$. Using Theorem \ref{i-s}, one can prove without difficulty that $\tau(\rho)=\tau\hspace{0.2mm}^{\text{ker}(\rho)}$. Thus $\tau(\rho)$ is the maximum congruence with respect to ker$(\rho)$. Denote it by $\rho^\kappa$. Further, put $\rho_\kappa=\rho\cap\mu$. Then $\text{ker}(\rho_\kappa)=\text{ker}(\rho)$, since $\mu$ is a semilattice congruence. 
On the other hand, $\mu$ is idempotent-separating, so $\rho_\kappa$ is the minimum congruence with respect to ker$(\rho)$. Finally, if $K$ is a seminormal completely inverse $AG^{**}$-subgroupoid of $A$, then the pair $(K,1_{E_A})$ is a congruence pair for $A$, since the condition $(CP)$ is trivially met for this pair, and $\text{ker}(\rho_{(K,1_{E_A})})=K$.~Consequently, $K$ is seminormal if and only if $K$ is a kernel of some congruence on $A$.~Denote by $\mathcal{SN}(A)$ the set of seminormal completely inverse $AG^{**}$-subgroupoids of $A$. It is easy to see that $\mathcal{SN}(A)$ is a lattice under inclusion. It is clear that if $\emptyset\not=\{\rho_i: i\in I\}\subseteq\mathcal{C}(A)$, then $$ \text{ker}\Big(\bigcap\{\rho_i:i\in I\}\Big)=\bigcap\{\text{ker}(\rho_i):i\in I\}, $$ therefore, we have just proved the following theorem. \begin{theorem}\label{main kernel} Let $A$ be an arbitrary completely inverse $AG^{**}$-groupoid.~Define a map $K:\mathcal{C}(A)\to\mathcal{P}(A)$ by $$ \rho\hspace{0.1mm}K=\text{ker}(\rho)~~(\rho\in\mathcal{C}(A)). $$ Then $K$ is a complete lattice $\cap$-homomorphism of $\mathcal{C}(A)$ onto $\mathcal{SN}(A)$.~Also, if $\kappa$ denotes the $\cap$-congruence on $\mathcal{C}(A)$ induced by $K$, that is, $$ \kappa=\{(\rho_1,\rho_2)\in\mathcal{C}(A)\times\mathcal{C}(A):\ker(\rho_1)=\ker(\rho_2)\}, $$ then for every $\rho\in\mathcal{C}(A)$, $$\rho\kappa=[\rho_\kappa,\rho^\kappa]$$ is a complete sublattice of $\,\mathcal{C}(A)$. \end{theorem} We call the classes of $\kappa$ in the above theorem the \emph{kernel classes} of $A$.
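Theorem~\ref{main kernel} can be checked by brute force on a small example. The sketch below (ours, for illustration only) enumerates every congruence of the four-element commutative inverse semigroup of Example~\ref{counterex} and verifies that the kernel map is a $\cap$-homomorphism, i.e., that $\ker(\rho_1\cap\rho_2)=\ker(\rho_1)\cap\ker(\rho_2)$.

```python
from itertools import product

ELEMS = ['a', 'b', 'e', 'f']
ROWS = {'a': 'eeaa', 'b': 'efab', 'e': 'aaee', 'f': 'abef'}
MUL = {(x, y): ROWS[x][i] for x in ELEMS for i, y in enumerate(ELEMS)}
E_A = {'e', 'f'}

def partitions(xs):
    """Enumerate all set partitions of the list xs."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [{first}]

def as_relation(blocks):
    return frozenset((x, y) for b in blocks for x in b for y in b)

def is_congruence(rel):
    return all((MUL[(a, c)], MUL[(b, c)]) in rel and
               (MUL[(c, a)], MUL[(c, b)]) in rel
               for (a, b) in rel for c in ELEMS)

def kernel(rel):
    # ker(rho) = union of the rho-classes that contain an idempotent
    return frozenset(x for x in ELEMS if any((x, i) in rel for i in E_A))

congs = [r for r in (as_relation(p) for p in partitions(ELEMS))
         if is_congruence(r)]

# The kernel map is a complete cap-homomorphism on this example:
assert all(kernel(r1 & r2) == kernel(r1) & kernel(r2)
           for r1 in congs for r2 in congs)
print(len(congs), "congruences found")
```

On this semigroup the sketch finds five congruences in all, and the kernel identity holds for every pair of them.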
\begin{example}\label{counterex} The following example shows that ker$(\rho) \subseteq \text{ker}(\gamma)$ (or even $\rho\subseteq \gamma$) does not imply (in general) that $\rho^\kappa \subseteq \gamma^\kappa$.~\hspace{0.3mm}Indeed, let $A = \{a, b, e, f\}$ be a commutative inverse semigroup with the multiplication table given below: $$ \begin{array}{|c||c|c|c|c|} \hline \cdot & a & b & e & f\\ \hline \hline a & e & e & a & a\\ \hline b & e & f & a & b\\ \hline e & a & a & e & e\\ \hline f & a & b & e & f\\ \hline \end{array} $$ Then clearly $1_A \subseteq \rho = 1_A \cup \{(a, e), (e, a)\}$. On the other hand, $\rho^\kappa = \rho \cup \{(b, f), (f, b)\}$ and $1_A^\kappa = 1_A \cup \{(e, f), (f, e), (a, b), (b, a)\}$ and so $1_A^\kappa = \tau \nsubseteq \rho^\kappa = \tau (\rho)$. Notice also that $\tau \cap \tau (\rho) = 1_A$. \end{example} Using Theorem \ref{kernel-trace} one can easily prove the following proposition. \begin{proposition}\label{decomposition}If $\rho$ is a congruence on a completely inverse $AG^{**}$-groupoid, then $$\rho=\rho_\theta\vee\rho_\kappa=\rho^\theta\cap\rho^\kappa.$$ \end{proposition} We now investigate the $\kappa$-classes of $A$. \begin{lemma}\label{pomoc1} In any completely inverse $AG^{**}$-groupoid $A$, $\tau_{A/\rho}=\tau(\rho)/\rho$ for every $\rho\in\mathcal{C}(A)$.~In particular, $[\rho/\rho,\rho^\kappa/\rho]$ is the lattice of all idempotent pure congruences on $A/\rho$ $(\rho\in\mathcal{C}(A))$. \end{lemma} \begin{proof} One can easily see that $\tau(\rho)/\rho$ is idempotent pure and so $\tau(\rho)/\rho\subseteq\tau_{A/\rho}$. On the other hand, if $\gamma/\rho$, where $\rho\subseteq\gamma$, is idempotent pure, then $\text{ker}(\gamma)\subseteq\text{ker}(\rho)$, therefore, $\text{ker}(\gamma)=\text{ker}(\rho)$.~Hence $\rho\subseteq\gamma\subseteq\tau(\rho)$, so $\gamma/\rho\subseteq\tau(\rho)/\rho$.~Consequently, $\tau_{A/\rho}=\tau(\rho)/\rho$. \end{proof} The following theorem follows easily from the above lemma.
\begin{theorem}\label{kernel class} Let $A$ be a completely inverse $AG^{**}$-groupoid, $\rho\in \mathcal{C}(A)$.~Define a map $\phi : [\rho_\kappa, \rho^\kappa] \to \mathcal{C}(A/\rho_\kappa)$ by $\gamma\phi = \gamma/\rho_\kappa$ for all $\gamma\in [\rho_\kappa, \rho^\kappa]$.~Then $\phi$ is a complete isomorphism of the kernel class $[\rho_\kappa, \rho^\kappa]$ onto the lattice of all idempotent pure congruences on $A/\rho_\kappa$. \end{theorem} \begin{remark} Note that $\phi_{\hspace{0.4mm}|\hspace{0.4mm}[\gamma,\hspace{0.3mm}\tau(\rho)]}$, where $\gamma\in\rho\kappa$, is a complete isomorphism of the interval $[\gamma, \tau(\rho)]$ onto the lattice of all idempotent pure congruences on $A/\gamma$. \end{remark} Recall that $A$ is \emph{$E$-disjunctive} if and only if $\tau = 1_A$. By the above remark we have the following corollary. \begin{corollary}\label{E-disjunctive} Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid \nolinebreak $A$. Then $A/\rho$ is $E$-disjunctive if and only if $\rho=\tau(\rho)$. \end{corollary} \begin{remark} Note that in view of the end of Example \ref{counterex}, the set of all $E$-disjunctive congruences on a commutative inverse semigroup (in particular, on a completely inverse $AG^{**}$-groupoid) $A$ does not form (in general) a sublattice of $\mathcal{C}(A)$. Also, a completely inverse $AG^{**}$-groupoid $A$ is an $AG$-group if and only if $A$ is both $E$-unitary and $E$-disjunctive. Finally, notice that a congruence $\rho$ on $A$ is idempotent pure if and only if $\rho\cap\mu=1_A$. In particular, $\tau\cap\mu=1_A$, therefore, $A$ is a subdirect product of $A/\tau$ and $E_A$, where $A/\tau$ is an $E$-disjunctive completely inverse $AG^{**}$-groupoid. \end{remark} \medskip Finally, we return to the lattice $\mathcal{U}(A)$ of all $E$-unitary congruences on a completely inverse $AG^{**}$-groupoid $A$. First, we prove the following useful result.
\begin{lemma}\label{lemma} The following conditions are valid for a congruence $\rho$ on a completely inverse $AG^{**}$-groupoid $A$\emph{:} $(a)$ \ $\rho\vee\sigma=\sigma\rho\hspace{0.3mm}\sigma;$ $(b)$ \ $a(\rho\vee\sigma)b\Leftrightarrow (ea)\rho(eb)$ for some $e\in E_A;$ $(c)$ \ $\ker(\rho\vee\sigma)=(\ker(\rho))\omega$. \end{lemma} \begin{proof} Using Proposition \ref{semilattice}, we may show the condition $(a)$ in a way very similar to the proof of Lemma III.5.4$(i)$ in \cite{Pet}.~Furthermore, the condition $(b)$ follows directly from Proposition \ref{semilattice} and $(a)$.~Finally, the proof of $(c)$ closely parallels the corresponding proof of Corollary III.5.5 in \cite{Pet}. \end{proof} Using Proposition \ref{semilattice} and Lemma \ref{lemma}$(b)$, we are able to show the following theorem. \begin{theorem}\label{onto} Let $A$ be an arbitrary completely inverse $AG^{**}$-groupoid.~Then the map $\phi:\mathcal{C}(A)\to\mathcal{C}(A)$ defined by $$ \rho\phi=\rho\vee\sigma $$ is a homomorphism of $\mathcal{C}(A)$ onto the lattice $[\sigma,A\times A]$ of all $AG$-group congruences on $A$. \end{theorem} Define the relation $\bar{\sigma}$ on $\mathcal{C}(A)$ by putting $$ (\rho_1, \rho_2) \in \bar{\sigma}\Leftrightarrow\rho_1\vee\sigma = \rho_2 \vee \sigma . $$ In the light of the above theorem, $\bar{\sigma}$ is a congruence on $\mathcal{C}(A)$, since $\phi \phi^{- 1} = \bar{\sigma}$. \begin{proposition} Let $A$ be a completely inverse $AG^{**}$-groupoid and $\rho \in \mathcal{C}(A)$. Then the elements $\rho, \pi_{\rho}$ and $\rho \vee \sigma$ are $\bar{\sigma}$-equivalent and $\rho \subseteq \pi_{\rho} \subseteq \rho \vee \sigma$.~Moreover, the element $\rho \vee \sigma$ is the largest in the $\bar{\sigma}$-class $\rho \bar{\sigma}$. \end{proposition} \begin{proof} Since $\pi_{\rho}$ is the least $E$-unitary congruence containing $\rho$ and $\rho \vee \sigma$ is $E$-unitary, we have $\rho \subseteq \pi_{\rho} \subseteq \rho \vee \sigma$.
Hence we get $\rho \vee \sigma \subseteq \pi_{\rho} \vee \sigma\subseteq \rho \vee \sigma$, so $\rho \vee \sigma = \pi_{\rho} \vee \sigma$, therefore, $(\rho, \pi_{\rho}) \in \bar{\sigma}$. Evidently, $(\rho, \rho \vee \sigma) \in \bar{\sigma}$.~This implies the first part of the proposition. The second part is clear. \end{proof} Further, let $a,b\in A$ and $\rho\in\mathcal{C}(A)$. If $(a,b)\in\sigma$, then evidently $(a\rho)\sigma(b\rho)$ in $A/\rho$. If in addition, $\rho\subseteq\sigma$, then $(a\rho)\sigma(b\rho)$ in $A/\rho$ implies that $(a,b)\in\sigma$ in $A$.~It follows that $A/\sigma\cong (A/\rho)/\sigma$, i.e., $A$ and $A/\rho$ have isomorphic maximal $AG$-group homomorphic images. In that case, we may say that $\rho$ \emph{preserves the maximal $AG$-group homomorphic images}.~Since for every $\rho\in\mathcal{C}(A)$ we have $\rho_\theta\subseteq\rho$, we obtain the following factorization: $$A\to A/\rho_\theta\to A/\rho \cong (A/\rho_\theta)/(\rho/\rho_\theta).$$ Using the obvious terminology, we have the following proposition. \begin{proposition} Every homomorphism of completely inverse $AG^{**}$-groupoids can be factored into a homomorphism preserving the maximal $AG$-group homomorphic images and an idempotent-separating homomorphism. \end{proposition} \begin{proof} The proof is similar to the proof of Proposition III.5.10 \cite{Pet}. \end{proof} The following theorem gives further equivalent conditions for a congruence to be $E$-unitary (cf.~the end of Section $4$). \begin{theorem}\label{ME} Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid $A$.~Then the following conditions are equivalent\emph{:} \par {\indent\rm$(a)$} \ $\rho$ is $E$-unitary\emph{;} \par {\indent\rm$(b)$} \ $\ker(\rho)$ is closed\emph{;} \par {\indent\rm$(c)$} \ $\ker(\rho) =\ker(\rho\vee\sigma);$ \par {\indent\rm$(d)$} \ $\rho\vee\sigma=\tau(\rho);$ \par {\indent\rm$(e)$} \ $\tau(\rho)\in\mathcal{GC}(A)$.
\end{theorem} \begin{proof} In the light of Proposition \ref{U(A)}, $(a)$ and $(b)$ are equivalent. $(b)\implies (c)$. This follows from Lemma \ref{lemma}$(c)$. $(c)\implies (d)$. Indeed, $\text{ker}(\tau(\rho))=\text{ker}(\rho)=\text{ker}(\rho\lor\sigma)$ and so, since $\tau(\rho)$ is the maximum congruence with respect to $\text{ker}(\rho)$, we get $\rho\lor\sigma\subseteq\tau(\rho)$. Hence $\sigma\subseteq\rho\lor\sigma\subseteq\tau(\rho)$, so $\rho\lor\sigma, \tau(\rho)\in\mathcal{GC}(A)$. Being $AG$-group congruences with equal kernels, $\rho\lor\sigma=\tau(\rho)$ (by Theorem \ref{AG-group congruences}). $(d)\implies (e)$. This is trivial, since $\rho\lor\sigma\in\mathcal{GC}(A)$. $(e)\implies (a)$. Let $\tau(\rho)\in\mathcal{GC}(A)$. Then $\tau(\rho)$ is $E$-unitary. Since the conditions $(a)$ and $(b)$ are equivalent, the set $\text{ker}(\rho)=\text{ker}(\tau(\rho))$ is closed. Thus $\rho$ is $E$-unitary. \end{proof} In view of the above theorem, Theorem \ref{main kernel} and Corollary \ref{C3}, $$\rho_N\kappa=\{\rho_N\cap\nu:\mu\subseteq\nu\}=[\rho_N\cap\mu,\rho_N]$$ for every $N\lhd A$. Consequently, $$ \mathcal{U}(A)=\bigcup_{N\hspace{0.2mm}\lhd\hspace{0.35mm}A}\{\rho_N\cap\nu:\mu\subseteq\nu\}. $$ Thus we have the following statement (see the end of Section $4$). \begin{proposition}\label{pi_rho} Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid \nolinebreak $A$. Then$:$ $(a)$ \ $\rho\vee\mu=\mu\rho\mu;$ $(b)$ \ $a(\rho\vee\mu)b\Leftrightarrow (aa^{-1})\rho(bb^{-1});$ $(c)$ \ $\pi_\rho=\sigma\rho\hspace{0.3mm}\sigma\cap\mu\rho\mu$. \end{proposition} \begin{proof} $(a)$. It is clear that $\mu\rho\mu\subseteq\rho\vee\mu$ is a reflexive, symmetric and compatible relation on $A$. We show that it is also transitive. Let $a(\mu\rho\mu)b(\mu\rho\mu)c$. Then there exist elements $r,s,t,w\in A$ such that $$aa^{-1}=rr^{-1},\ \ (r,s)\in\rho, \ \ ss^{-1}=bb^{-1},$$ $$bb^{-1}=tt^{-1}, \ \ (t,w)\in\rho, \ \ ww^{-1}=cc^{-1}.$$ Also, $(rr^{-1})\,\rho\,(ss^{-1})$ and $(tt^{-1})\,\rho\,(ww^{-1})$, where $ss^{-1}=bb^{-1}=tt^{-1}$.
Consequently, $$a~\mu(aa^{-1})=(rr^{-1})\rho(ww^{-1})=(cc^{-1})\mu~c.$$ Hence $(a,c)\in\mu\rho\mu$, as required, and so $\mu\rho\mu$ is a congruence on $A$ contained in $\rho\vee\mu$. Since evidently $\rho,\mu\subseteq\mu\rho\mu$, $(a)$ holds. $(b)$. $(\Longrightarrow)$. Let $a(\rho\hspace{0.35mm}\vee\hspace{0.2mm}\mu)b$. Then by $(a)$, $aa^{-1}=cc^{-1},(c,d)\in\rho$ and $dd^{-1}=bb^{-1}$ for some $c,d\in A$. Hence $(cc^{-1})\rho(dd^{-1})$. Thus $(aa^{-1})\rho(bb^{-1})$. $(\Longleftarrow)$. If $(aa^{-1})\rho(bb^{-1})$, then $a~\mu(aa^{-1})\rho(bb^{-1})\mu~b$. Thus $a(\rho\vee\mu)b$. $(c)$. In the light of Lemma \ref{lemma} $(a)$ and the condition $(b)$, $\alpha=\sigma\rho\hspace{0.3mm}\sigma\cap\mu\rho\mu$ is a congruence on $A$. It is evident that $\rho\subseteq\alpha$ and ker$(\alpha)=$ ker$(\sigma\rho\hspace{0.3mm}\sigma)$, therefore, $\alpha$ is an $E$-unitary congruence on $A$ which contains $\rho$. Finally, let $\rho\subseteq\beta=\rho_N\cap\nu\in\mathcal{U}(A)$, where $N\lhd A$ and $\mu\subseteq\nu$. Then $\rho\vee\sigma\subseteq\rho_N\vee\sigma=\rho_N$ and $\rho\vee\mu\subseteq\nu\vee\mu=\nu$. It follows that $\alpha\subseteq\rho_N\cap\nu=\beta$, as required. \end{proof} Using the condition $(b)$ one can prove the following theorem. \begin{theorem}Let $A$ be an arbitrary completely inverse $AG^{**}$-groupoid.~Then the map $\phi:\mathcal{C}(A)\to\mathcal{C}(A)$ defined by $$\rho\phi=\rho\vee\mu$$ is a homomorphism of $\,\mathcal{C}(A)$ onto the lattice $[\mu,A\times A]$ of semilattice congruences on $A$. \end{theorem} Let $\rho\in\mathcal{C}(A)$.~Since $\rho\subseteq A\times A$, there is a least semilattice congruence $\mu_\rho$ containing $\rho$ (note that $\mu_\rho=\rho\vee\mu$, see the proof of Proposition \ref{pi_rho}$(a)$). \medskip Define the relation $\overline{\mu}$ on $\mathcal{C}(A)$ by putting $$ (\rho_1, \rho_2) \in \overline{\mu}\Leftrightarrow\rho_1\vee\mu = \rho_2 \vee \mu.
$$ In view of the above theorem, $\overline{\mu}$ is a congruence on $\mathcal{C}(A)$. \begin{proposition} Let $A$ be a completely inverse $AG^{**}$-groupoid, $\rho \in \mathcal{C}(A)$. Then the elements $\rho, \mu_{\rho}$ and $\rho \vee \mu$ are $\overline{\mu}$-equivalent and $\rho \subseteq \mu_{\rho} \subseteq \rho \vee \mu$.~Moreover, the element $\rho \vee \mu$ is the largest in the $\overline{\mu}$-class $\rho \overline{\mu}$. \end{proposition} Also, let $a,b\in A$ and $\rho\in\mathcal{C}(A)$. If $(a,b)\in\mu$, then clearly $(a\rho)\mu(b\rho)$ in \nolinebreak $A/\rho$. If in addition, $\rho\subseteq\mu$, then $(a\rho)\mu(b\rho)$ in $A/\rho$ implies that $(a,b)\in\mu$, since $\mu$ is idempotent-separating.~It follows that $A/\mu\cong (A/\rho)/\mu$, that is, $A$ and $A/\rho$ have isomorphic minimal idempotent-separating homomorphic images.~We may say that $\rho$ \emph{preserves the minimal idempotent-separating homomorphic images}.~Since for all $\rho\in\mathcal{C}(A)$, $\rho_\kappa\subseteq\rho$, we have the following factorization: $$ A\to A/\rho_\kappa\to A/\rho \cong (A/\rho_\kappa)/(\rho/\rho_\kappa). $$ We get the following proposition. \begin{proposition} Every homomorphism of completely inverse $AG^{**}$-group\-oids can be factored into a homomorphism preserving the minimal idempotent-separa\-ting homomorphic images and an idempotent pure homomorphism. \end{proposition} \begin{proof}Let $\rho$ be a congruence on a completely inverse $AG^{**}$-groupoid $A$. Then obviously $\rho_\kappa\subseteq\mu$, and hence the canonical homomorphism of $A$ onto $A/\rho_\kappa$ preserves the minimal idempotent-separating homomorphic images. Also, the mapping $a\rho_\kappa\to a\rho$ $(a\in A)$ is an idempotent pure homomorphism of $A/\rho_\kappa$ onto $A/\rho$, since $\text{ker}(\rho)=\text{ker}(\rho_\kappa)$.~The proposition now follows from the above factorization.
\end{proof} Since $\mu$ is also the least semilattice congruence on $A$ (Theorem \ref{mu}), we may replace in the above proposition the words ``minimal idempotent-separating'' by the words ``maximal semilattice''. Once again we prove some equivalent conditions for $A$ to be $E$-unitary. \begin{theorem} Let $A$ be a completely inverse $AG^{**}$-groupoid.~The following conditions are equivalent$:$ \par {\indent\rm$(a)$} \ $A$ is $E$-unitary$;$ \par {\indent\rm$(b)$} \ $\sigma=\tau;$ \par {\indent\rm$(c)$} \ every idempotent pure congruence on $A$ is $E$-unitary$;$ \par {\indent\rm$(d)$} \ there exists an idempotent pure $E$-unitary congruence on $A;$ \par {\indent\rm$(e)$} \ $\tau$ is $E$-unitary. \end{theorem} \begin{proof} $(a)\implies (b)$. Let $\pi= 1_A$. Then $\sigma=\tau(\pi)=\tau(1_A)=\tau$. $(b)\implies (a)$. Let $\sigma=\tau$. Then $\pi =\sigma_\kappa=\tau_\kappa=1_A$. $(a)\implies (c)$. Firstly, $A/\tau$ is $E$-unitary. Indeed, if $(e\tau)(a\tau) = f\tau\in E_{A/\tau}$, where $a \in A$ and $e, f \in E_A$, then $(ea, f) \in\tau$.~Hence $ea \in E_A$.~Thus $a \in E_A$. Secondly, if $\rho\in\mathcal{C}(A)$ is idempotent pure, then $\rho\subseteq\tau$. Consequently, $\rho$ is $E$-unitary. $(c)\implies (d)$. Obvious. $(d)\implies (e)$. If $\rho$ is an idempotent pure $E$-unitary congruence on $A$, then we get $\pi\subseteq\rho\subseteq\tau\subseteq\sigma$, so $\tau$ is $E$-unitary (Theorem \ref{ME}). $(e)\implies (a)$. Let $ea = f$, where $a\in A$ and $e,f\in E_A$. Then $(e\tau)(a\tau) = f\tau$ and so $a\in\text{ker}(\tau) = E_A$. Thus $A$ is $E$-unitary. \end{proof}
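The equivalence of $(a)$ and $(b)$ in the last theorem can also be observed computationally. The sketch below (ours, for illustration only) works on the four-element commutative inverse semigroup of Example~\ref{counterex}; it checks $E$-unitarity directly, computes $\sigma$ via the description obtained from Lemma~\ref{lemma}$(b)$ with $\rho=1_A$ (that is, $a\,\sigma\,b$ iff $ea=eb$ for some $e\in E_A$), and then confirms $\sigma=\tau$.

```python
from itertools import product

ELEMS = ['a', 'b', 'e', 'f']
ROWS = {'a': 'eeaa', 'b': 'efab', 'e': 'aaee', 'f': 'abef'}
MUL = {(x, y): ROWS[x][i] for x in ELEMS for i, y in enumerate(ELEMS)}
E_A = {'e', 'f'}

# E-unitary: g in E_A and ga in E_A together force a in E_A.
e_unitary = all(x in E_A
                for g, x in product(ELEMS, repeat=2)
                if g in E_A and MUL[(g, x)] in E_A)

# sigma from Lemma (b) with rho = 1_A:  a sigma b  iff  ea = eb, some e in E_A.
sigma = {(x, y) for x, y in product(ELEMS, repeat=2)
         if any(MUL[(g, x)] == MUL[(g, y)] for g in E_A)}

# tau = syntactic congruence of E_A; A^1 adjoins an identity (None).
A1 = ELEMS + [None]
def m(x, y):
    return y if x is None else x if y is None else MUL[(x, y)]
tau = {(x, y) for x, y in product(ELEMS, repeat=2)
       if all((m(u, m(x, v)) in E_A) == (m(u, m(y, v)) in E_A)
              for u, v in product(A1, repeat=2))}

print("E-unitary:", e_unitary, "| sigma == tau:", sigma == tau)
```

Both checks succeed: the example is $E$-unitary, and $\sigma$ and $\tau$ coincide on it, as the theorem requires.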
\section{Introduction} We consider a class of multi-agent systems called \emph{leader-follower systems}. One or more agents in the network are \emph{leaders}; the states of these agents serve as the reference states for the system. The remaining agents are \emph{followers} that update their states based on relative information about their own states and the states of their neighbors. Hence, a system owner may control the entire network by controlling the leader agents. These dynamics arise in applications such as vehicle formation control~\cite{RBM05}, distributed clock synchronization~\cite{EKPS04}, and distributed localization in sensor networks~\cite{BH09}, for example. It has been shown that the performance of leader-follower systems, where the followers are subject to stochastic disturbances, depends on the location of the leaders in the network~\cite{BH06,PB10}. This relationship naturally raises the question of how to choose, from among all agents in the network, the leaders that give the best performance. The \emph{$k$-leader selection problem}, posed in~\cite{PB10}, is to select $k$ leaders that minimize the \emph{network coherence}, an $H_2$ norm of the leader-follower system that quantifies the variance of the nodes' states from the target states. The optimal leader set can be found by an exhaustive search over all subsets of size $k$, but this solution is not tractable in large networks. Several recent works have proposed efficient approximation algorithms for the $k$-leader selection problem. In these works, the full network topology is known, and the leader set is selected using an offline algorithm. In~\cite{LFJ14}, the authors use a convex relaxation of the combinatorial $k$-leader selection problem, and they present an efficient interior point method for this relaxed problem. In~\cite{CBP14}, the authors show that the mean-square deviation from the desired state is proportional to a super-modular set function~\cite{N78}.
As such, a greedy, polynomial time solution can be used to find a leader set for which the mean-square error is within a provable bound of optimal. Other works have explored the optimal leader selection in leader-follower systems without stochastic disturbances~\cite{CABP14} and in systems where both the leaders and followers are subject to stochastic disturbances and the leaders also have access to relative state information~\cite{LFJ14}. Finally, recent work has shown that the optimal single leader and optimal pair of leaders in a network are those nodes with maximal information centrality and joint centrality, respectively~\cite{FL13}. In this work, we investigate a variation on the $k$-leader selection problem for $k=1$ that we call the \emph{in-network leader selection problem}. We consider two performance measures, the total variance of the deviation from the desired trajectory and the maximum variance of this deviation, over all agents. Initially, a single agent is selected as the leader, and the network must have a single leader at all times. The agents must collaborate to find the leader that minimizes the specified performance measure (total or maximal variance), and they must do so within the network using only communication between neighbors. This problem may arise in a multi-agent system that has limited bandwidth to the system owner. The system owner controls the network through a single communication channel with the leader. The system can determine the optimal leader within the network and inform the owner when leadership is transferred from one agent to another. Ideally, should the network topology change, the leadership should be transferred to the new optimal leader. We first show a connection between optimal leader selection and two discrete facility location problems, the $p$-median problem and the $p$-center problem~\cite{HD02}. 
We then leverage previously proposed algorithms for these facility location problems to develop a solution for the in-network leader selection problem for acyclic graphs. Our approach is self-stabilizing, meaning that if, after the algorithm has found an optimal leader, the network topology changes, the algorithm will then find an optimal leader for the new topology. The remainder of this work is organized as follows. In Section~\ref{model.sec}, we present the system model and background on the leader selection problem. In Section~\ref{problem.sec}, we formalize the in-network leader selection problem. Section~\ref{algorithm.sec} gives our algorithm and its analysis, including the relationship to the $p$-median and $p$-center facility location problems. In Section~\ref{extend.sec}, we show how our algorithm can be extended to leader-follower systems in weighted graphs. Finally, we conclude in Section~\ref{conclusion.sec}. \vspace{-.2cm} \section{Background and System Model} \label{model.sec} We consider a system of $n$ interconnected agents or nodes. The communication structure is modeled by a connected, undirected graph $G = (V,E)$, where $V$ is the set of agents, with $|V| = n$, and $E$ is the set of communication links, with $|E| = m$. An edge $(i,j)$ exists in $E$ if and only if node $i$ and node $j$ can exchange information. We denote the neighbor set of a node $i$ by $\Ne{i}$. Every agent has a scalar-valued state $x_i(t)$, and the state of the system is given by the vector $x(t) \in \mathbb{R}^n$. A subset of the agents $\ensuremath{U} \subseteq V$ are \emph{leaders}. We assume that the states of these agents serve as a reference for the network; these states remain fixed at an identical, constant trajectory $\ensuremath{\overline{x}} \in \mathbb{R}$, i.e., \begin{equation} x_i(t) = \ensuremath{\overline{x}},~\text{for all}~ {t \geq 0},~\text{for all}~i \in \ensuremath{U}.
\label{leader.eq} \end{equation} The remaining agents, those in $V~\backslash~\ensuremath{U}$, are \emph{followers}. Each follower $i$ updates its state based on its own state and those of its neighbors using a simple noisy consensus algorithm, \begin{equation} \dot{x}_i(t) = -\sum_{j \in \Ne{i}} \left( x_i(t) - x_j(t) \right) + w_i(t), \label{follower.eq} \end{equation} where $w_i(t)$ is a white stochastic disturbance with zero-mean and unit variance. Without loss of generality, we let the nodes be ordered so that $x(t) = [\ensuremath{x^\textit{f}(t)}\ensuremath{^{\mathsf{T}}}~\ensuremath{x^l(t)}\ensuremath{^{\mathsf{T}}}]\ensuremath{^{\mathsf{T}}}$, where $\ensuremath{x^\textit{f}(t)}$ denotes the $|V \setminus \ensuremath{U}|$-vector containing the states of the follower agents and $\ensuremath{x^l(t)}$ is the $|\ensuremath{U}|$-vector containing the states of the leader agents. Let $L$ be the Laplacian matrix of the graph induced by the leader-follower dynamics. For now, we restrict our study to consensus dynamics over an unweighted graph, and thus we use the unweighted Laplacian matrix. In Section~\ref{extend.sec}, we show how our problem formulation and solution can be extended to dynamics that depend on a weighted Laplacian matrix. Each component of $L$ is defined as \[ L_{ij} = \left\{\begin{array}{ll} -1&~\text{if}~(i,j) \in E~\text{and $i$ is a follower} \\ \text{degree($i$)}&~\text{if $i=j$ and $i$ is a follower} \\ 0&~\text{otherwise.} \end{array} \right. \] $L$ can be decomposed according to the leader/follower designations as \[ L = \left[ \begin{array}{cc} \ensuremath{L_{ff}} & \ensuremath{L_{fl}} \\ 0 & 0 \end{array} \right], \] where $\ensuremath{L_{ff}}$ defines the interactions between followers and $\ensuremath{L_{fl}}$ defines the impact of the leaders on the followers. We note that, since $G$ is connected, $\ensuremath{L_{ff}}$ is positive definite~\cite{PB10}.
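For concreteness, the construction of $L$ and its decomposition can be carried out numerically. The following Python sketch (ours; the example graph and its labels are not from the paper) builds $L$ for a four-node graph with a single leader, extracts $\ensuremath{L_{ff}}$ and $\ensuremath{L_{fl}}$, and confirms positive definiteness of $\ensuremath{L_{ff}}$ through Sylvester's criterion, using exact rational arithmetic.

```python
from fractions import Fraction

# Example graph (ours): 4 nodes, a cycle plus the chord (1, 3); node 0 leads.
n, leader = 4, 0
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]

# Laplacian with the leader's row zeroed, following the definition of L above.
L = [[Fraction(0)] * n for _ in range(n)]
for i, j in edges:
    for u, v in ((i, j), (j, i)):
        if u != leader:                 # only follower rows are populated
            L[u][u] += 1
            L[u][v] -= 1

followers = [i for i in range(n) if i != leader]
Lff = [[L[i][j] for j in followers] for i in followers]
Lfl = [[L[i][leader]] for i in followers]

def leading_minors(M):
    """Leading principal minors via fraction-exact Gaussian elimination
    (valid without pivoting when the input is positive definite)."""
    M = [row[:] for row in M]
    minors, det = [], Fraction(1)
    for k in range(len(M)):
        det *= M[k][k]
        minors.append(det)
        for i in range(k + 1, len(M)):
            factor = M[i][k] / M[k][k]
            for j in range(k, len(M)):
                M[i][j] -= factor * M[k][j]
    return minors

minors = leading_minors(Lff)
print("leading minors of L_ff:", minors)  # all positive, so L_ff > 0
```

All leading minors are positive, so $\ensuremath{L_{ff}}$ is positive definite for this connected example, in line with the remark above.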
With this decomposition, the evolution of the states of the follower nodes can be expressed as \begin{equation} \dot{x}^f(t) = - \ensuremath{L_{ff}} x^f(t) - \ensuremath{L_{fl}} x^l(t) + w^f(t), \end{equation} where $w^f(t)$ is the $|V \setminus \ensuremath{U}|$-vector of stochastic disturbances. In the absence of the noise terms $w_i(t)$, the agents would converge to the desired state $\ensuremath{\overline{x}}$. With these noise terms, the agents' states do not converge; however, the steady-state variance of their deviations from $\ensuremath{\overline{x}}$ is bounded~(see e.g., \cite{PB10}). Formally, we define the steady-state variance of agent $i$ as \[ \ssv{i} = \lim_{t \rightarrow \infty} \expec{(x_i(t) - \ensuremath{\overline{x}})^2}. \] It was shown in~\cite{BH06} that this variance is related to the $(i,i)^{th}$ entry of $\ensuremath{L_{ff}}^{-1}$ as $\ssv{i} = \frac{1}{2} (\ensuremath{L_{ff}}^{-1})_{ii}$. It has also been shown that, for $|\ensuremath{U}| = 1$, the steady-state variance of a node $i$ can also be expressed in terms of the \emph{resistance distance} from $i$ to the leader, where the resistance distance is defined as follows. Let the graph represent an electrical network where each edge is a unit resistor. The resistance distance $\resist{i}{j}$ between two nodes $i$ and $j$ is the potential difference between them when a one ampere current source is connected from node $j$ to node $i$. \begin{theorem}[See \cite{GBS08}] For a single leader $s$, the steady-state variance of the deviation from $\ensuremath{\overline{x}}$ at an agent $i$ is related to the resistance distance between $s$ and $i$ as $\ssv{i} = \frac{1}{2} \resist{i}{s}$. \end{theorem} In a general graph, resistance distance is a distance function. For a connected, acyclic graph, the resistance distance between nodes $i$ and $j$ is equal to the conventional graph distance $\dist{i}{j}$, where $\dist{i}{j}$ is the length of the shortest path between $i$ and $j$~\cite{KR93}.
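This relationship is easy to verify numerically. The sketch below (ours; the path graph is only an example) computes $\ssv{i} = \frac{1}{2}(\ensuremath{L_{ff}}^{-1})_{ii}$ exactly for a five-node path with the leader at one end; since a path is a tree, the result matches $\frac{1}{2}\dist{i}{s}$, as the theorem and the tree case above predict.

```python
from fractions import Fraction

# Path graph 0-1-2-3-4 (a tree); node 0 is the single leader s.
n, leader = 5, 0
edges = [(i, i + 1) for i in range(n - 1)]
followers = [i for i in range(n) if i != leader]
idx = {v: k for k, v in enumerate(followers)}

# Follower-follower block L_ff of the Laplacian.
Lff = [[Fraction(0)] * len(followers) for _ in followers]
for i, j in edges:
    for u, v in ((i, j), (j, i)):
        if u != leader:
            Lff[idx[u]][idx[u]] += 1
            if v != leader:
                Lff[idx[u]][idx[v]] -= 1

def inverse(M):
    """Exact Gauss-Jordan inverse over the rationals."""
    size = len(M)
    A = [row[:] + [Fraction(int(r == c)) for c in range(size)]
         for r, row in enumerate(M)]
    for col in range(size):
        piv = next(r for r in range(col, size) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(size):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[size:] for row in A]

Linv = inverse(Lff)
variances = {v: Fraction(1, 2) * Linv[idx[v]][idx[v]] for v in followers}
# On this tree, r(i, s) is the hop distance d(i, 0) = i, so sigma_i^2 = i/2:
print({v: str(s2) for v, s2 in variances.items()})  # {1: '1/2', 2: '1', 3: '3/2', 4: '2'}
```

The diagonal of $\ensuremath{L_{ff}}^{-1}$ comes out as $(1, 2, 3, 4)$, i.e., exactly the hop distances from the leader, so the two expressions for $\ssv{i}$ agree.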
\section{Problem Formulation} \label{problem.sec} The steady-state variance of each agent depends on the choice of the leader set $\ensuremath{U}$. The leader selection problem involves identifying a set $\ensuremath{U}$ that minimizes a function of these variances. We next define the functions that we use to measure the performance of a leader set, followed by a formal definition of the leader selection problems we address in this work. \subsection{Performance Measures} We quantify the relationship between the leader set and the steady-state variance using two performance measures. The first is the \emph{total steady-state variance}, \[ \Terr{\ensuremath{U}} := \sum_{i \in V \setminus U} \ssv{i}. \] This error measure, related to the coherence of the network~\cite{BJMP12}, has been studied in previous works on leader selection~\cite{PB10,LFJ14,CABP14}. We also consider the \emph{maximum steady-state variance} over all agents, \[ \Merr{\ensuremath{U}} := \max_{i \in V \setminus U} \ssv{i}. \] As far as we are aware, this error measure has not been considered in previous works. \begin{figure} \centering \includegraphics[scale=.6]{mixed_graph} \caption{Example graph with optimal leaders. For LS-TV, the optimal leader is node $b$, for which $\Terr{\{b\}} = 4.5$ and $\Merr{\{b\}} = 1.5$. For LS-MV, the optimal leader is node $a$, for which $\Terr{\{a\}} = 5$ and $\Merr{\{a\}} = 1$.} \label{mixed_graph.fig} \end{figure} \subsection{The Leader Selection Problem} The goal of the leader selection problem is to identify a leader set $\ensuremath{U}$ such that the steady-state variance of the agents is minimized. The \emph{total variance $k$-leader selection problem} ($k$-LS-TV) is \begin{equation} \label{Tleaderprob.eq} \begin{array}{cc} \text{minimize} &~\Terr{\ensuremath{U}}\\ \text{subject to}&~|\ensuremath{U}| = k.
\end{array} \end{equation} The \emph{maximal variance $k$-leader selection problem} ($k$-LS-MV) is \begin{equation} \label{Mleaderprob.eq} \begin{array}{cc} \text{minimize} &~\Merr{\ensuremath{U}}\\ \text{subject to}&~|\ensuremath{U}| = k. \end{array} \end{equation} For $k=1$, we omit $k$ from the naming convention, denoting these problems by LS-TV\ and LS-MV. We note that the optimal leader set may be different depending on which performance measure is used. An example of this is given in Figure~\ref{mixed_graph.fig} for $k=1$. A naive solution to this problem is to compute $\Terr{\ensuremath{U}}$ ($\Merr{\ensuremath{U}}$) for all subsets $\ensuremath{U} \subseteq V$ with $|\ensuremath{U}| = k$ and to choose the leader set for which this function is minimized. However, this approach has combinatorial complexity. Several works have proposed approximation algorithms for LS-TV\ that run in polynomial time. Notably, in~\cite{CBP14}, it was shown that $\Terr{\ensuremath{U}}$ is proportional to a super-modular function, which implies that a simple greedy (polynomial time) leader selection scheme yields a leader set whose error is within a bounded approximation of optimal. We note that the error function $\Merr{\ensuremath{U}}$ is not super-modular. A proof of this is given in Appendix~\ref{notsubmod.app}. Therefore, the results for the approximation algorithm for LS-TV\ do not necessarily extend to LS-MV. \vspace{-.2cm} \subsection{The In-Network Leader Selection Problem} We now propose a variation on the $k$-leader selection problem where agents collaborate to determine the optimal leader set. For this initial investigation of the problem, we restrict our focus to the case where $k=1$ and where the network is a connected, acyclic graph. Initially, a single agent is selected as leader arbitrarily. The state of this leader is the reference state $\ensuremath{\overline{x}}$ for the network. In each round, every agent communicates with its neighbors.
During this communication, the leader agent may choose to transfer the leadership role from itself to one of its neighbors. Since the system is synchronous, the old and new leader can schedule the leadership handoff so that it occurs instantaneously. The new leader adopts the reference state $\ensuremath{\overline{x}}$ and then follows the dynamics in (\ref{leader.eq}). After an agent transfers its leadership, it behaves as a follower according to the dynamics in (\ref{follower.eq}). The dynamics of the remaining nodes remain unchanged after the handoff. The goal is for the leadership role to eventually be transferred to and remain at the leader $u$ for which $F(\{u\})$ is minimized, where $F(\cdot) = \Merr{\cdot}$ or $F(\cdot) = \Terr{\cdot}$. Our aim is to develop algorithms for in-network leader selection that are \emph{self-stabilizing}, which is formally defined as follows. \begin{definition} \label{stable.def} A leader selection algorithm is \emph{self-stabilizing} if, starting with any initial leader, the leadership role is transferred to the optimal leader in a finite number of steps (\emph{convergence}) \emph{and} it remains there for all subsequent steps (\emph{closure}). \end{definition} A self-stabilizing algorithm is robust to changes in the network. For example, suppose the optimal leader is selected, and then the network topology changes. Provided that after this change, the network topology remains stable for ``long enough'', the algorithm will then find the optimal leader for the new topology. The standard definition of a self-stabilizing algorithm~\cite{D74} requires that the algorithm converge to the desired solution from any initial state. Definition~\ref{stable.def} differs from this definition slightly in that we require that a single leader is selected in the initial state. 
This assumption implies that a self-stabilizing leader selection algorithm need not be robust to agent failures since the failure of the leader agent violates the assumption. Conceivably, under certain assumptions about the frequency of failures, a distributed leader election algorithm~\cite{L96} could be used to replace a failed leader agent. The integration of leader election and optimal in-network leader selection is a subject for future work. \vspace{-.2cm} \section{In-Network Leader Selection Algorithm} \label{algorithm.sec} In this section, we present our algorithms for in-network leader selection. To develop our algorithms, we leverage connections between the leader selection problem and two discrete facility location problems, the $p$-median and $p$-center problems. We next present a summary of these problems, followed by a description of our algorithms and their analysis. \vspace{-.2cm} \subsection{The $p$-Median and $p$-Center Problems} The $p$-median and $p$-center problems belong to a larger class of discrete facility location problems~\cite{HD02}. In this class of problems, there is a discrete set of demand nodes, a discrete set of candidate facility locations, and a specified distance $\fdist{i}{j}$ between each demand node $i$ and candidate location $j$. Each demand node is assigned to the closest facility. The objective is to select a set of $p$ facility locations that minimizes some function of the distances between the demand nodes and their assigned facilities. In both the $p$-median and $p$-center problems, the candidate facility locations coincide with the locations of the demand nodes. We denote this set of nodes by $V$.
For the $p$-median problem, $p \leq |V|$ facility locations are selected so as to minimize the sum of the distances between each demand node and its assigned facility, \begin{equation} \label{pmedianprob.eq} \begin{array}{ll} \text{minimize} &\displaystyle ~\sum_{i \in V} \min_{u \in \ensuremath{U}} \fdist{i}{u} \\ \text{subject to} & \ensuremath{U} \subseteq V,~ |\ensuremath{U}| = p. \end{array} \end{equation} In the $p$-center problem, $p$ facility locations are selected to minimize the maximum distance between any demand node and its assigned facility, \begin{equation} \label{pcenterprob.eq} \begin{array}{ll} \text{minimize} &\displaystyle ~\max_{i \in V} \min_{u \in \ensuremath{U}} \fdist{i}{u} \\ \text{subject to} &~ \ensuremath{U} \subseteq V,~|\ensuremath{U}| = p. \end{array} \end{equation} In general, the $p$-median problem is NP-Hard, and the $p$-center problem is NP-complete. In tree graphs, however, these problems can be solved in polynomial time~\cite{KH79}. If we consider the agents of the network as the set of demand nodes/candidate facility locations and let $\fdist{i}{j} = \resist{i}{j}$ for all nodes $i,j$, then for $p = 1$, a solution to the $p$-median problem gives a solution to LS-TV. Similarly, a solution to the $p$-center problem gives a solution to LS-MV.
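The reduction can be sketched with a brute-force computation of the $1$-median and $1$-center on a small tree. The ``broom'' graph and node labels below are hypothetical, chosen only so that the two optima differ, and the variances use $\ssv{i} = \frac{1}{2}\dist{i}{u}$ for a single leader $u$.

```python
from collections import deque

# Hypothetical "broom" tree: hub 'h' with leaves l1..l5 and a
# pendant path h-p1-...-p5 (illustrative, not a graph from the text).
edges = [('h', 'l%d' % i) for i in range(1, 6)]
edges += [('h', 'p1'), ('p1', 'p2'), ('p2', 'p3'),
          ('p3', 'p4'), ('p4', 'p5')]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def dists(src):
    # BFS distances; in an unweighted tree these equal resistance distances.
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

# sigma_i = dist(i, leader)/2, so LS-TV is the 1-median problem and
# LS-MV is the 1-center problem with fdist given by graph distance.
Terr = {u: 0.5 * sum(dists(u).values()) for u in adj}
Merr = {u: 0.5 * max(dists(u).values()) for u in adj}

best_tv = min(Terr, key=Terr.get)  # 1-median: minimizes the sum
best_mv = min(Merr, key=Merr.get)  # 1-center: minimizes the maximum
```

On this tree the two optima differ: the hub minimizes the total variance, while a node partway down the pendant path minimizes the maximum variance.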
\begin{algorithm} \caption{Algorithm for in-network leader selection for LS-TV.} \label{leader.alg} \begin{algorithmic} \State {Send $s_i(t)$ to all agents $j$ in $\Ne{i}$} \State {Receive $s_j(t)$ from all $j \in \Ne{i}$} \If {$|\Ne{i}| = 1$} \State{$s_i(t+1) \gets 1$} \Else \State {$s_i(t+1) \gets 1+ \sum \left(\Nsminus{i}\right)$} \EndIf \If {(\textit{leader} = TRUE) and \\~~~~~~~~~~~($s_j(t) > s_i(t+1)$ for some $j \in \Ne{i}$) } \State{$k = \arg \max_{j \in \Ne{i}} s_j(t)$} \State{$\textit{leader} \gets$ FALSE} \State{transfer leadership to agent $k$} \ElsIf{receive leadership transfer from neighbor} \State{$\textit{leader} \gets$ TRUE} \EndIf \end{algorithmic} \end{algorithm} \subsection{Self-Stabilizing Leader Selection Algorithm} We now describe our in-network leader selection algorithm for acyclic graphs. Recall that in an unweighted, acyclic, connected graph, the resistance distance $\resist{i}{j}$ between nodes $i$ and $j$ is equal to the graph distance $\dist{i}{j}$\footnote{The resistance distance between neighboring nodes $i$ and $j$ is $\resist{i}{j} = 1$.}. When the distance $\fdist{i}{j}$ in (\ref{pmedianprob.eq}) and (\ref{pcenterprob.eq}) is given by the graph distance, a solution to the $1$-median problem is called a \emph{median} of the graph, and a solution to the $1$-center problem is called a \emph{center} of the graph. Thus, a median of the graph is an optimal leader for LS-TV, and a center of the graph is an optimal leader for LS-MV. A graph may have more than one median or center; should the graph have multiple medians or centers, any one can be selected as the optimal leader. Our approach for in-network leader selection is based on a simple self-stabilizing method for finding the median and center of a graph that was proposed in~\cite{BGKP99}, which we summarize below. For ease of presentation, we adopt a synchronized communication model.
Communication takes place in rounds, and in each round, an agent exchanges information with all of its neighbors. We note that the median and center-finding algorithms have been proven correct under much less restrictive communication models. In the median-finding algorithm, each agent has a variable $s_i(t)$. These variables are called the \emph{$s$-values} of the agents. Since the algorithm is self-stabilizing, there is no need to specify an initial value for $s_i(0)$; the algorithm will converge to the correct solution from any initial value. In each round, the agent sends its $s_i(t)$ to all of its neighbors. Let $\Nsminus{i}$ be the set of values $s_j(t)$ received from $j \in \Ne{i}$ in round $t$, with one maximum $s_j(t)$ removed. The agent then updates $s_i(t)$ as follows, \begin{equation} \label{medianalg.eq} s_i(t+1) = \left\{ \begin{array}{ll} 1 &~\text{if}~|\Ne{i}| = 1 \\ 1 + \sum \left(\Nsminus{i}\right)&~\text{otherwise}. \end{array}\right. \end{equation} Here $\sum \left(\Nsminus{i}\right)$ denotes the sum over the elements in the set $\Nsminus{i}$. The center-finding algorithm operates in a similar manner. Each agent has a variable $h_i(t)$. These variables are called the \emph{$h$-values} of the agents. In each round, all neighbors exchange their $h$-values. Let $\Nhminus{i}$ be the set of values $h_j(t)$ received from $j \in \Ne{i}$, with one maximum $h_j(t)$ removed. The agent updates $h_i(t)$ as follows, \begin{equation} \label{centeralg.eq} h_i(t+1) = \left\{ \begin{array}{ll} 0 &~\text{if}~|\Ne{i}| = 1 \\ 1 + \max \left(\Nhminus{i}\right)&~\text{otherwise}. \end{array}\right. \end{equation} Here $\max \left(\Nhminus{i}\right)$ denotes the maximal value in the set $\Nhminus{i}$.
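Both updates are easy to simulate. The sketch below runs (\ref{medianalg.eq}) and (\ref{centeralg.eq}) to a fixed point on a hypothetical broom-shaped tree (hub $h$ with five leaves and a pendant five-node path; all labels are illustrative) and reads off the nodes whose value is at least that of every neighbor, which turn out to be the median and the center of the tree.

```python
# Hypothetical broom tree: hub 'h', leaves l1..l5, pendant path p1..p5.
edges = [('h', 'l%d' % i) for i in range(1, 6)]
edges += [('h', 'p1'), ('p1', 'p2'), ('p2', 'p3'),
          ('p3', 'p4'), ('p4', 'p5')]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

def run(combine, leaf_val):
    # Synchronous rounds: leaves take leaf_val; every other node
    # applies 1 + combine(neighbor values with one maximum dropped).
    vals = {i: 0 for i in adj}  # arbitrary start: self-stabilizing
    while True:
        new = {}
        for i, nbrs in adj.items():
            if len(nbrs) == 1:
                new[i] = leaf_val
            else:
                vs = sorted(vals[j] for j in nbrs)
                new[i] = 1 + combine(vs[:-1])  # drop one maximum
        if new == vals:
            return vals
        vals = new

s = run(sum, 1)   # median-finding s-values
h = run(max, 0)   # center-finding h-values

s_peaks = [i for i in adj if all(s[i] >= s[j] for j in adj[i])]
h_peaks = [i for i in adj if all(h[i] >= h[j] for j in adj[i])]
```

For this tree, the $s$-values stabilize with a unique local maximum at the hub and the $h$-values with a unique local maximum partway down the pendant path.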
The following theorem gives a formal statement of the convergence and closure guarantees of these algorithms. \begin{theorem}[See \cite{BGKP99}] \label{selfstable.thm} There exists a finite time $T$ such that $s_i(t+1) = s_i(t)$ ($h_i(t+1) = h_i(t)$) for all $i \in V$, for all $t \geq T$. For all $t \geq T$, the medians (centers) of the graph are the only nodes with $s_i(t) \geq s_j(t)$ ($h_i(t) \geq h_j(t)$) for all $j \in \Ne{i}$. \end{theorem} This theorem implies that there is a time $T$ after which the $s$-values ($h$-values) of the agents do not change. When the system reaches this time $T$, we say that the $s$-values ($h$-values) have \emph{stabilized}. While the self-stabilizing graph median and graph center algorithms can be used to identify an optimal leader (an agent whose $s$-value or $h$-value is greater than or equal to those of all of its neighbors), they do not solve the in-network leader selection problem completely. We also require that the network has a single leader throughout the execution of the algorithm, not just after the values stabilize. Our in-network leader selection algorithm leverages the self-stabilizing algorithms above to locate the optimal leader. It also ensures that the network has a single leader at all times. The leadership role is transferred from agent to agent. After the $h$-values or $s$-values stabilize, the leadership role is transferred to an optimal leader in finite time, and this leader remains the leader for all future rounds of the algorithm. Our self-stabilizing algorithm for in-network leader selection for LS-TV\ is given in Algorithm~\ref{leader.alg}. One agent is initially selected as leader. The agents each have an $s$-value $s_i(t)$ with an arbitrary initial value. The algorithm executes in synchronous rounds. Each round is divided into two phases. In the first phase, the agents update their $s_i(t)$ values according to (\ref{medianalg.eq}).
In the second phase, the agent that currently holds the leadership role checks if it has any neighbors $j$ with $s_j(t) > s_i(t+1)$. If so, the leader transfers leadership to a neighbor $k$ with maximal $s_k(t)$. The algorithm for LS-MV\ is nearly identical to Algorithm~\ref{leader.alg}. The only difference is that agents each store $h_i(t)$ instead of $s_i(t)$, and they update this variable according to (\ref{centeralg.eq}). The leadership transfer phase is identical, with the $h$-values replacing $s$-values. Pseudocode for this algorithm is given in Appendix~\ref{maxleadercode.app}. \subsection{Algorithm Analysis} We now prove that Algorithm~\ref{leader.alg} is a self-stabilizing algorithm for in-network leader selection that selects the leader $u$ that minimizes $\Terr{\{u\}}$. \begin{theorem} \label{leaderT.thm} Algorithm~\ref{leader.alg} is a self-stabilizing in-network leader selection algorithm for LS-TV. \end{theorem} To prove this theorem, we first introduce some useful definitions and results from~\cite{BGKP99}. We define a directed graph $G(s) = (V(s), E(s))$ induced by the $s$-values of the agents after these values have stabilized. The vertex set $V(s)$ is equal to the vertex set $V$ of the original undirected graph $G$. The edges in $E(s)$ are defined as, \begin{align*} &E(s) = \{(i,j)~|~j \in \Ne{i}~\text{and}\\ &~~~~~~~~~~~~~~~(s(j), j)~\text{is lexicographically the largest}\}. \end{align*} It has been shown that the undirected underlying graph of $G(s)$ is connected, contains exactly one cycle, and this cycle is of length 2~\cite{BGKP99}. Let $i$ and $j$ be the nodes that belong to the unique cycle. Deleting $(i,j)$ and $(j,i)$ from $G(s)$ results in two directed trees, $T_i(s)$, rooted at $i$, and $T_j(s)$, rooted at $j$.
\begin{theorem}[Proposition 4.3 and Theorem 4.4 from~\cite{BGKP99}] \label{tree.prop} Each edge in $T_i(s)$ and in $T_j(s)$ is directed from a node to its parent, and if $k$ is a non-leaf node in $T_i(s)$ or $T_j(s)$, then $s_k > s_l$ for each child $l$ of $k$. \end{theorem} We now prove Theorem~\ref{leaderT.thm}. \\ \begin{IEEEproof} It is clear that, since exactly one node is initially the leader, and the leadership role is passed from one node to another, there is exactly one leader at any time. What remains is to show that the algorithm satisfies the convergence and closure properties in Definition~\ref{stable.def}. The values $s_i(t)$, $i \in V$, are updated according to the algorithm in~(\ref{medianalg.eq}). As this algorithm is self-stabilizing, in finite time, the $s$-values stabilize; there is a time $T$ after which no $s_i(t)$ changes. We denote the stable value of $s_i(t)$ by $s_i$. The optimal leaders for LS-TV\ are those nodes $i$ such that $s_i \geq s_j$ for all $j \in \Ne{i}$. To show that the closure property holds, we must show that, if one of the medians is the leader agent and the $s$-values have stabilized, then this agent remains the leader in all future rounds of the algorithm. Suppose that in a round $t \geq T$, $i$ is the leader and $i$ is such that $s_i(t) \geq s_j(t)$ for all $j \in \Ne{i}$. Then, in phase two of the algorithm round, since $i$ does not have any neighbor $j$ with $s_j(t) > s_i(t)$, agent $i$ will not transfer its leadership. Therefore, the closure property is satisfied. To show that the convergence property holds, we must show that given any initial values for $s_i(0)$, $i \in V$, and any initial leader assignment, in finite time, the $s$-values will stabilize and the leadership role will be at a median of the graph. Theorem~\ref{selfstable.thm} guarantees that the $s$-values will stabilize in finite time. Suppose, after the $s$-values stabilize, a node $u$ has the leadership role, but it is not a median of the graph.
We show that in finite time, the leadership role will be transferred to a median of the graph. When a leader transfers leadership to a neighbor, it selects the neighbor with the maximal $s$-value among all its neighbors. Therefore, leadership is only transferred across edges in the directed graph $G(s)$. Suppose, without loss of generality, that the leadership role is at a node $k \neq i$ in the subtree $T_i(s)$. By the definition of $G(s)$, the neighbor $l$ of $k$ with maximal $s_l$ is the parent of $k$ in the tree. Further, by Theorem~\ref{tree.prop}, $s_l > s_k$. Therefore, in the next round, the leadership role will be transferred to agent $l$. By similar reasoning, in a finite number of rounds, the leadership role will be transferred up the tree until it reaches agent $i$. If $s_i > s_j$, then $i$ is the unique median of the graph, since $s_i > s_k$ for all $k \in \Ne{i}$. Similarly, if $s_i = s_j$, then both $i$ and $j$ are medians of the graph. In either of these cases, the leadership role has reached a median. If $s_j > s_i$, then $j$ is the unique median of the graph, since $s_j > s_k$ for all $k \in \Ne{j}$. Since $s_i < s_j$ and all other neighbors $k$ of $i$ are such that $s_k < s_i$, in the next round, agent $i$ will transfer leadership to agent $j$. \end{IEEEproof} A similar result also holds for the in-network leader selection algorithm that selects the leader $u$ that minimizes $\Merr{\{u\}}$. We omit this proof since it is similar to the proof of Theorem~\ref{leaderT.thm}. \begin{theorem} Algorithm~\ref{leaderM.alg} is a self-stabilizing in-network leader selection algorithm for LS-MV. \end{theorem} For Algorithm~\ref{leader.alg}, the number of rounds until the $s$-values stabilize, starting from any initial state, is in $\Theta(d)$, where $d$ is the maximum distance from the edge of the network (the nodes $i$ with $| \Ne{i}| = 1$) to a median.
Once the $s$-values stabilize, it takes at most $d$ rounds for the leadership role to be transferred to the median node. Therefore the running time of Algorithm~\ref{leader.alg} is in $\Theta(d)$. For Algorithm~\ref{leaderM.alg}, the number of rounds until the $h$-values stabilize is $\Theta(r)$, where $r$ is the radius of the graph. Once the $h$-values stabilize, it takes at most $r$ rounds for the leadership role to be transferred to the center. Therefore the running time of Algorithm~\ref{leaderM.alg} is in $\Theta(r)$. \vspace{-.2cm} \section{Extension to Weighted Graphs} \label{extend.sec} Several recent works have investigated a generalization of the $k$-leader selection problem where the objective is for every agent to maintain specified differences between its state and the states of its neighbors, \begin{equation} \label{diff.eq} x_i(t) - x_j(t) = \pos{i}{j}~~~~\text{for all}~(i,j) \in E, \end{equation} where $\pos{i}{j}$ denotes the desired difference, for example, the position of node $i$ relative to node $j$~\cite{BH06,BH08,CBP14}. The states of the leaders are the reference states. Let $\hat{x}$ denote the vector of desired states that satisfy (\ref{diff.eq}) when the leader states are fixed. A follower updates its state based on noisy measurements of the differences between its state and the states of its neighbors. The dynamics of each follower agent are given by \[ \dot{x}_i(t) = -\sum_{j \in \Ne{i}} W_{ij} \left(x_i(t) - x_j(t) - \pos{i}{j}+ \epsilon_{ij}(t)\right), \] where $W_{ij}$ is the weight for link $(i,j)$ and $\epsilon_{ij}(t)$, $(i,j) \in E$, are independent, identically distributed zero-mean white noise processes with autocorrelation functions $\expec{\epsilon_{ij}(t)\epsilon_{ij}(t + \tau)} = \nu_{ij} \delta(\tau)$.
The link weights are chosen so that $W_{ij} = \frac{1}{\nu_{ij}\Deg{i}}$, where $\Deg{i} = \sum_{j \in \Ne{i}}\frac{1}{\nu_{ij}}$, which corresponds to the best linear unbiased estimator of the leader agent's state when $x_j(t) = \hat{x}_{j}$ for all $j \in \Ne{k}$~\cite{BH06}. In this system, the leader-follower dynamics depend on a weighted Laplacian matrix, also called the conductance matrix, which is defined as \[ L_{ij} = \left\{ \begin{array}{ll} - \frac{1}{\nu_{ij}} & ~\text{if}~(i,j) \in E \\ \Deg{i} &~\text{if}~i=j \\ 0 &~\text{otherwise.} \end{array} \right. \] Note that if $\nu_{ij} = 1$ for all $(i,j) \in E$, then $L$ is the standard, unweighted Laplacian matrix. We assume that the weights are symmetric, meaning $\nu_{ij} = \nu_{ji}$ for all $(i,j) \in E$. Let $\sigma_i$ be the steady-state variance of the deviation from $\hat{x}$, \[ \sigma_i = \lim_{t \rightarrow \infty} \expec{(x_i(t) - \hat{x}_i)^2}. \] As with the unweighted Laplacian, $\sigma_i = \frac{1}{2} (\ensuremath{L_{ff}}^{-1})_{ii}$, where $\ensuremath{L_{ff}}$ is the follower-follower submatrix of the weighted Laplacian. It has also been shown that $\sigma_i = \frac{1}{2}\resistwt{i}{j}$, where $\resistwt{i}{j}$ is the resistance distance between $i$ and $j$ in the electrical network where each edge $(i,j)$ is a resistor of resistance $\nu_{ij}$~\cite{BH06}. If we define $\fdist{i}{j} = \resistwt{i}{j}$, then, as before, a solution to the $p$-median problem for $p=1$ is a solution to LS-TV, and a solution to the $p$-center problem for $p=1$ is a solution to LS-MV. It is also straightforward to extend our in-network leader selection algorithms to this weighted graph setting. Let $\ensuremath{\hat{G}}=(V,\ensuremath{\hat{E}})$ be a weighted, connected, acyclic graph, where the weight of edge $(i,j)$ is $c_{ij} = 1/\nu_{ij}$.
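As a small numerical illustration of the weighted case, the identity $\sigma_i = \frac{1}{2} (\ensuremath{L_{ff}}^{-1})_{ii}$ can be checked against the weighted resistance distance on a two-edge path; the noise variances below are arbitrary illustrative values.

```python
from fractions import Fraction as F

# Weighted path 0-1-2 with the leader at node 0; edge noise variances
# nu01 = 2 and nu12 = 3 are arbitrary illustrative values, and each
# edge acts as a resistor of resistance nu_ij.
nu01, nu12 = F(2), F(3)

# Follower-follower block of the weighted (conductance) Laplacian,
# for follower nodes 1 and 2.
a, b = F(1) / nu01 + F(1) / nu12, -F(1) / nu12  # row of node 1
c, d = -F(1) / nu12, F(1) / nu12                # row of node 2

det = a * d - b * c
inv_diag = [d / det, a / det]  # diagonal of the 2x2 inverse

# Weighted resistance distances to the leader are nu01 and nu01 + nu12,
# so inv_diag should equal [2, 5] and sigma_i = inv_diag[i] / 2.
```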
In this graph, the resistance distance between nodes $i$ and $j$ is equal to the graph distance, where the graph distance is the sum of the weights of the edges in the path between nodes $i$ and $j$~\cite{KR93}. Our in-network leader selection algorithms must find the median and center for this weighted graph. We first state an important theorem about the medians of acyclic graphs with positive edge weights. \begin{theorem}[Lemma 7.2 in \cite{BGKP99}] The medians of an acyclic, connected graph remain unchanged under any change in the positive edge weights. \end{theorem} Since the locations of the graph medians, and hence the identity of an optimal leader, do not depend on the edge weights, Algorithm~\ref{leader.alg} also solves the in-network leader selection problem for LS-TV\ in acyclic weighted graphs. It has been shown that the self-stabilizing center-finding algorithm in (\ref{centeralg.eq}) can be extended to weighted graphs with small modifications~\cite{BGKP99}. Let $\Nhminusw{i}$ be the set $\{h_j(t) + c_{ij}~|~j \in \Ne{i}\}$ with one maximal element removed. The modified center-finding algorithm, in which the edge weight $c_{ij}$ plays the role of the increment, is as follows, \[ h_i(t+1) = \left\{ \begin{array}{ll} 0 &~\text{if}~|\Ne{i}| = 1 \\ \max \left(\Nhminusw{i}\right)&~\text{otherwise}. \end{array}\right. \] By incorporating these small changes to the update of the $h$-values, Algorithm~\ref{leaderM.alg} can be used to solve the in-network leader selection problem for LS-MV\ in acyclic weighted graphs. \section{Conclusion} \label{conclusion.sec} In this work, we have posed a new leader selection problem called the in-network leader selection problem, whereby agents must collaborate to find the leader that optimizes a chosen performance measure. We have considered two performance measures, the total steady-state variance of the deviation and the maximal steady-state variance of the deviation.
We have shown that finding a leader that minimizes the total variance is equivalent to solving the $p$-median facility location problem for $p=1$ and that finding a leader that minimizes the maximal variance is equivalent to the $p$-center facility location problem for $p=1$. Leveraging the connections to these problems, we have developed two self-stabilizing in-network leader selection algorithms, one for each performance measure. Finally, we have shown how our algorithms can be extended to weighted graphs. In future work, we plan to investigate generalizing our approach to in-network algorithms for the $k$-leader selection problem where $k$ is greater than one. \begin{algorithm} \caption{Algorithm for in-network leader selection for LS-MV.} \label{leaderM.alg} \begin{algorithmic} \State {Send $h_i(t)$ to all agents $j$ in $\Ne{i}$} \State {Receive $h_j(t)$ from all $j \in \Ne{i}$} \If {$|\Ne{i}| = 1$} \State{$h_i(t+1) \gets 0$} \Else \State {$h_i(t+1) \gets 1+ \max \left(\Nhminus{i}\right)$} \EndIf \If {(\textit{leader} = TRUE) and \\~~~~~~~~~~~($h_j(t) > h_i(t+1)$ for some $j \in \Ne{i}$) } \State{$k = \arg \max_{j \in \Ne{i}} h_j(t)$} \State{$\textit{leader} \gets$ FALSE} \State{transfer leadership to agent $k$} \ElsIf{receive leadership transfer from neighbor} \State{$\textit{leader} \gets$ TRUE} \EndIf \end{algorithmic} \end{algorithm} \appendices \section{Illustration that $\Merr{\cdot}$ is not Super-Modular} \label{notsubmod.app} \begin{figure} \centering \includegraphics[scale=.5]{line_graph} \caption{Example network demonstrating that the error measure $\Merr{\cdot}$ is not super-modular. Let $A =\{x\}$ and $B=\{x,y\}$. Then, $\Merr{A} = 2$, $\Merr{A \cup \{v\}} = 2$, $\Merr{B} = 1.5$, and $\Merr{B \cup \{v\}} = 1$. Thus, $\Merr{A} - \Merr{A \cup \{v\}} < \Merr{B} - \Merr{B \cup \{v\}}$.} \label{line.fig} \end{figure} A super-modular function is defined as follows. Let $V$ be a finite set and let $A$ and $B$ be sets with $A \subseteq B \subseteq V$. 
A function $f$ is \emph{super-modular} if and only if for all $v \in V \setminus B$, \[ f(A) - f(A \cup \{v\}) \geq f(B) - f(B \cup \{v\}). \] In Figure~\ref{line.fig}, we give an example that illustrates that $\Merr{\cdot}$ is not a super-modular function. \newpage \section{Pseudocode for In-Network Leader Selection Algorithm for LS-MV} \label{maxleadercode.app} Pseudocode for the self-stabilizing algorithm that solves LS-MV\ is given in Algorithm~\ref{leaderM.alg}. \bibliographystyle{IEEEtran}
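The same kind of violation can be reproduced by brute force; the ten-node path, leader sets, and node indices below are hypothetical stand-ins rather than the graph of Figure~\ref{line.fig}.

```python
# Hypothetical 10-node path 0-1-...-9. On a path, dist(i, u) = |i - u|
# and sigma_i is half the distance to the nearest leader.
n = 10

def merr(leaders):
    # Maximum steady-state variance over the followers.
    return 0.5 * max(min(abs(i - u) for u in leaders)
                     for i in range(n) if i not in leaders)

A, B, v = {5}, {1, 5}, 9       # A is a subset of B, and v is outside B
lhs = merr(A) - merr(A | {v})  # gain from adding v to the smaller set
rhs = merr(B) - merr(B | {v})  # gain from adding v to the larger set
# Super-modularity would require lhs >= rhs; here lhs = 0 < 1 = rhs,
# because v removes B's bottleneck but leaves A's bottleneck (node 0).
```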
\section{Introduction} Star formation requires cool, high-density interstellar medium in which most of the hydrogen is in molecular form, so studies of the distribution and properties of molecular clouds are essential for understanding star formation processes. Emission from the tracer CO molecule has been widely used to estimate the distribution and amount of H$_2$ in the Galaxy and other galaxies. Nevertheless, the CO lines in dwarf irregular galaxies are relatively weak (e.g., Ohta et al.\ 1993), making surveys of molecular clouds toward dwarf galaxies difficult. The Large Magellanic Cloud (LMC) is one of the nearest galaxies to our own. Studies of this galaxy have provided valuable information for our understanding of the universe and galaxies in various respects, including the evolution of stars and stellar clusters, owing to its unrivaled closeness to the solar system ($D \sim$ 50 kpc) and favorable viewing angle. The LMC also provides a unique opportunity for studying molecular clouds and star formation in galaxies whose environment is different from that in the Galaxy. In the LMC, the gas-to-dust ratio is $\sim$ 4 times higher (Koornneef 1982) and the metal abundance is $\sim$ 3 to 4 times lower (Rolleston, Trundle, \& Dufton 2002; Dufour 1984) than in the Galaxy. For these reasons, the LMC has been surveyed at a wide variety of wavelengths. H\,{\scriptsize I} maps of moderate ($\sim$15$'$) to high ($\sim$1$'$) angular resolution have been obtained with the Parkes 64 m telescope (McGee \& Milton 1966; Rohlfs et al.\ 1984; Luks \& Rohlfs 1992) and with the Australia Telescope Compact Array (e.g., Kim et al.\ 1998), respectively. These studies have shown that the H\,{\scriptsize I} distribution is dominated by many features, such as filaments, shells, and holes.
The ionized-gas content has been investigated with the aid of H$\alpha$ photographs (e.g., Henize 1956; Davies et al.\ 1976; Meaburn 1980; Kennicutt \& Hodge 1986) and the radio continuum (e.g., Haynes et al.\ 1991; Filipovic et al.\ 1996; Dickel et al.\ 2005). The H$\alpha$ and radio continuum images show a variety of interstellar shells, ranging from small supernova remnants to large supergiant shells more than a few 100 pc across. The stellar content of the LMC has been widely studied by photometry of the stars from near-infrared to optical bands (e.g., Ita et al.\ 2004; Zaritsky et al.\ 2004; Blum et al.\ 2006; Kato et al.\ 2006). In particular, the stellar clusters and associations in the LMC have been surveyed and cataloged by many authors (e.g., Lucke \& Hodge 1970; Hodge 1988). Bica et al.\ (1996) estimated the ages of 504 stellar clusters and 120 associations based on their color indices in the $UBV$ bands. Soft X-ray images of the LMC have also been obtained by the ROSAT satellite (Snowden \& Petre 1994), revealing a variety of discrete sources, such as supernova remnants, X-ray binaries, and supersoft sources, as well as diffuse sources associated with superbubbles and supergiant shells. Recent Spitzer observations cover $\sim 7\arcdeg \times 7\arcdeg$ of the entire LMC, revealing the distribution and properties of the dust, YSOs, and evolved stars (Meixner et al.\ 2006). Studies of molecular gas in the LMC began with observations with either low angular resolution or a small spatial coverage. Cohen et al.\ (1988) obtained the first complete CO map of the LMC with the southern CfA 1.2 m telescope at CTIO. However, the survey was limited by the low spatial resolution, $8.8\arcmin$, corresponding to 130 pc at the distance of the LMC.
High-resolution CO observations of selected regions of the LMC have been performed with the SEST 15 m telescope (e.g., Israel et al.\ 1986; Johansson et al.\ 1994; Caldwell \& Kutner 1996; Kutner et al.\ 1997; Johansson et al.\ 1998; Israel et al.\ 2003). These studies mapped some of the well-known HII regions, for example, 30 Doradus, N 11, and the molecular cloud complex extending some 2 kpc south of 30 Doradus. Although these observations revealed detailed structure of the individual molecular clouds at a linear resolution of less than 10 pc, they are limited in spatial coverage to about one square degree. Fukui et al.\ (1999) carried out and completed a survey of the LMC in CO $J =$ 1--0 with NANTEN, a 4 m radio telescope installed at the Las Campanas Observatory, Chile, to reveal the molecular gas distribution with resolution high enough to identify individual molecular clouds in the LMC. The 3 $\sigma$ noise level of the velocity-integrated intensity was $\sim$ 1.8 K km s$^{-1}$. This corresponds to $N({\rm H}_2) \sim 1.3 \times 10^{21}$ cm$^{-2}$, using a conversion factor of $X_{\rm CO} = 7 \times 10^{20}$ cm$^{-2}$(K km s$^{-1}$)$^{-1}$ (see section 4.2). The first results are presented in Fukui et al.\ (1999) with particular emphasis on the formation of populous clusters. The catalog of 107 molecular clouds and a comparison of these molecular clouds with the HI distribution are described by Mizuno et al.\ (2001b). A comparison with the young stellar clusters and HII regions is the subject of Yamaguchi et al.\ (2001c). Comparisons of the molecular clouds with HI gas and infrared emission by IRAS were made by Sakon et al.\ (2006) and Hibi et al.\ (2006). In order to gain a more comprehensive understanding of the distribution and properties of the molecular clouds in the LMC, a survey in CO $J =$ 1--0 with a sensitivity higher than that of the first survey has been carried out since 1999.
Preliminary results of this survey, focusing on the mass spectrum, are presented by Fukui et al.\ (2001), and a comparison of the molecular clouds with supergiant shells is given by Yamaguchi et al.\ (2001a and 2001b). Hughes et al.\ (2006) made a comparison of the radio, FIR, HI, and CO in the LMC and revealed the correlation of the radio and FIR distributions. In this paper, we present a catalog and properties of the molecular clouds from the complete dataset of the second survey. Comparisons of the molecular clouds with the indications of cluster and massive star formation detected in the optical or radio are found elsewhere (Fukui 2005; Kawamura et al.\ 2006; Kawamura et al.\ 2008, ``Paper II", hereafter). Fukui (2007) introduces a comparison of the molecular clouds with the HI gas distribution. Detailed studies of the HI and molecular gas distributions aimed at understanding molecular cloud formation will be presented in Fukui et al.\ (2008, ``Paper III", hereafter). In this paper, the survey by NANTEN is described in Section 2, and the spatial distribution and a catalog of molecular clouds are presented in Section 3. We also discuss the correlations among cloud properties, such as $L_{\rm CO}$ and the virial mass of the molecular clouds, and the mass spectrum in Section 3. A comparison of the cloud properties from the first survey and a discussion of the CO to $N$(H$_{2}$) conversion factor are presented in Section 4. Section 5 summarizes the paper. \section{Observations} We carried out sensitive CO($J=$1--0) observations toward the LMC with NANTEN, a 4 m radio telescope of Nagoya University at Las Campanas Observatory, Chile. The half-power beam width was $2\arcmin.6$ at 115 GHz. The telescope had a 4 K cooled Nb superconductor-insulator-superconductor mixer receiver, which provided a system noise temperature of $\sim$ 170--270 K, including the atmosphere toward the zenith.
The spectrometers were two acousto-optical spectrometers with 2048 channels; one had a velocity coverage and a resolution of 100 km s$^{-1}$ and 0.1 km s$^{-1}$, respectively. The other had a velocity coverage and a resolution of 650 km s$^{-1}$ and 0.65 km s$^{-1}$, respectively. The pointing accuracy was better than $20''$, as checked by optical observations of stars with a CCD camera attached to the telescope, as well as by radio observations of Jupiter, Venus, and the edge of the Sun. Further details about the telescope and related instruments are given by Ogawa et al.\ (1990) and by Fukui and Sakakibara (1992). The spectral intensities were calibrated by employing the standard room-temperature chopper wheel technique (Kutner \& Ulich 1981). An absolute intensity calibration was made by observing Orion-KL [R.A. (B1950) = $5^{\rm h}32^{\rm m}47^{\rm s}\hspace{-3pt}.\hspace{2pt}0$, Decl.\ (B1950) = $-$5$^{\circ}$24$'$$21''$] by assuming its absolute temperature $T_{\rm R}^{*}$ to be 65 K. We also observed the strongest peak position of the LMC, N 159 [R.A. (B1950) = $5^{\rm h}40^{\rm m}1^{\rm s}\hspace{-3pt}.\hspace{2pt}5$, Decl.\ (B1950) = $-$69$^{\circ}$47$' $$2''$] every 2 hours to confirm the stability of the system. The observed region covers $\sim 30$ square degrees where molecular clouds were detected by the 1st survey. Figure 1 shows the region observed from 1998 April to 2003 August, superposed on the integrated intensity map of the CO from the 1st survey (Fukui et al.\ 1999; Mizuno et al.\ 2001b). In total, about 26,900 positions were observed in equatorial coordinates (B1950) in position-switching mode. The observed grid spacing was 2$\arcmin$ (corresponding to $\sim$ 30 pc at the distance of the LMC, 50 kpc) with a 2.6$\arcmin$ beam ($\sim$ 40 pc).
Out of the $\sim$ 26,900 positions, 6,229 were observed with the narrow band spectrometer for $\sim100$ days from April to November 1998, while the rest were observed with the wide band spectrometer for $\sim 300$ days after March 1999. In this paper, all the spectra observed with the narrow band spectrometer are smoothed to a 0.65 km s$^{-1}$ resolution so that the reduction has a uniform velocity resolution throughout the map. The rms noise fluctuations were $\sim$ 0.07 K at a velocity resolution of 0.65 km s$^{-1}$ with $\sim$ 3 minutes of integration per on-position. The typical 3 $\sigma$ noise level of the velocity-integrated intensity was $\sim$ 1.2 K km s$^{-1}$. This corresponds to $N({\rm H}_2) \sim 8 \times 10^{20}$ cm$^{-2}$, using a conversion factor of $X_{\rm CO} = 7 \times 10^{20}$ cm$^{-2}$(K km s$^{-1}$)$^{-1}$ (see section 4.2). \section{Results} \subsection{Overall distribution} Out of the $\sim 26,900$ observed positions, significant $^{12}$CO ($J$ = 1--0) emission, with an integrated intensity greater than 1.2 K km s$^{-1}$ (the $\sim 3 \sigma$ noise level), was detected at about 1,300 positions, which corresponds to $\sim 5 \%$ of the total observed positions. The total velocity-integrated intensity distribution of the molecular gas is shown in Figure \ref{fig:ii}. The total mass of the molecular gas is $\sim 5 \times 10^{7} M_{\sun}$ if we take the CO luminosity to hydrogen column density conversion factor, the $X_{\rm CO}$-factor, to be $\sim 7 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ (see section 4). The CO distribution of the LMC is found to be clumpy, with several large molecular cloud complexes, in contrast to the HI gas distribution, which is composed of many filamentary and shell-like structures (e.g., Kim et al.\ 1999). These clumps of molecular gas tend to be detected toward the intensity peaks of the HI gas, as shown in Blitz et al.\ (2006), Fukui (2007), and Paper III.
The cloud complex south of 30 Doradus at $\alpha$(J2000) $\sim 5^{\rm h}40^{\rm m}$ and $\delta$(J2000) from $\sim -71^{\circ}$ to $-69^{\circ} 30 \arcmin$ is remarkable, stretching in a nearly straight line from north to south, as already noted by the previous CO observations (Cohen et al.\ 1988; Fukui et al.\ 1999; Mizuno et al.\ 2001b). The current survey shows that the components of this molecular cloud complex, ``the molecular ridge'', are actually connected to one another by low-density molecular gas, while the 1st survey traced only the densest regions and identified the molecular ridge as consisting of two or three discrete entities. The arc-like distribution of molecular clouds along the southeastern optical edge of the galaxy (``CO Arc'' in Fukui et al.\ [1999] and Fukui [2002]) is also clearly seen. The current sensitive survey confirms that this ``CO Arc'' indeed represents an arc-like edge of the molecular gas distribution at the eastern boundary of the LMC. Other CO clouds are distributed over the observed area with moderate concentration towards several prominent H\,{\scriptsize II} regions (e.g., N 44 at $\alpha$(J2000) $\sim$ $5^{\rm h}22^{\rm m}$ and $\delta$(J2000) $\sim$ $-68^{\circ}$, N 11 at $\alpha$(J2000) $\sim$ $4^{\rm h}55^{\rm m}$ and $\delta$(J2000) $\sim$ $-66\arcdeg 30\arcmin$) and the ``bar'' (e.g., clouds around $\alpha$ $\sim$ $5^{\rm h}20^{\rm m}$ and $\delta$ $\sim$ $-70^{\circ}$). Small molecular clouds are more clearly observable in the current survey, especially toward a supergiant shell, LMC 4, at $\alpha$(J2000) $\sim$ $5^{\rm h}20^{\rm m}$ -- $5^{\rm h}40^{\rm m}$ and $\delta$(J2000) $\sim$ $-68\arcdeg$ -- $-66\arcdeg$ (see also Yamaguchi et al.\ 2001a). A detailed comparison of the molecular gas with the indicators of star formation, such as H $\alpha$ or radio continuum, will be discussed elsewhere (Paper II). Figure \ref{fig:iihist} is a histogram of the integrated intensity, $I.I.$, of each observed position with $I.I.
> 0.4$ K km s$^{-1}$ ($\sim 1\sigma$ noise level). About 980 observed positions have $N$(H$_{2}$) $> 10^{21}$ cm$^{-2}$, while only 4 positions have $N$(H$_{2}$) $> 10^{22}$ cm$^{-2}$. Figure \ref{fig:sddist}a shows the radial distribution of CO emission; the surface density, $\Sigma$, is derived by integrating the CO luminosity within annuli spaced by 4$\arcmin$ and then dividing by the area of each annulus. The center used is $\alpha$(J2000)$=5^{h}17.6^{m}$, $\delta$(J2000)$=-69\arcdeg2^{'}$, determined from the kinematics of the HI observations by Kim et al.\ (1998). To see the angular distribution of the CO emission, the distribution of the surface density, $\Sigma$, derived by integrating the CO luminosity over sectors of $10\arcdeg$ width and then dividing by the observed area of each sector, is also shown in Figure \ref{fig:sddist}b. The CO luminosity to mass conversion is carried out in both cases by assuming a conversion factor, $X_{\rm CO}$, of $7 \times 10^{20}$ cm$^{-2}$(K km s$^{-1}$)$^{-1}$ (see Section 4.2). Figure \ref{fig:sddist} indicates that the radial profile of the molecular gas decreases moderately with galacto-centric distance, as is also seen in nearby spiral galaxies (e.g., Wong et al.\ 2002), although the profile does not fit a power law as well as those of the spiral galaxies studied by Wong et al.\ (2002). It is interesting to note the sharp enhancement of the surface density around 2 kpc. This enhancement is due to the molecular cloud complexes of the molecular ridge, N 11, and N 44. This enhancement is also seen in the angular distribution, especially at about $120 \arcdeg$, due to the molecular ridge. Compared with the nearby galaxies of Wong et al.\ (2002), the local enhancement of the molecular gas is more conspicuous in the LMC.
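The annulus averaging behind the radial profile in Figure \ref{fig:sddist}a can be sketched as follows. This is an illustrative Python fragment, not the actual reduction code; only the 4$\arcmin$ annulus spacing and the adopted LMC distance of 50 kpc are taken from the text, and the function name and input arrays are hypothetical.

```python
import numpy as np

# Illustrative sketch of annulus averaging for a radial surface-density
# profile.  Only the 4' annulus spacing and the 50 kpc LMC distance come
# from the text; the function name and inputs are hypothetical.

PC_PER_ARCMIN = 50_000.0 * np.pi / (180.0 * 60.0)  # ~14.5 pc per arcmin at 50 kpc

def radial_profile(r_arcmin, lco, width_arcmin=4.0):
    """Surface density of CO luminosity in galactocentric annuli.

    r_arcmin -- galactocentric radii of the observed positions [arcmin]
    lco      -- CO luminosity assigned to each position [K km/s pc^2]
    Returns annulus centres [kpc] and Sigma [K km/s].
    """
    edges = np.arange(0.0, r_arcmin.max() + width_arcmin, width_arcmin)
    centres, sigma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_ring = (r_arcmin >= lo) & (r_arcmin < hi)
        area_pc2 = np.pi * ((hi * PC_PER_ARCMIN) ** 2 - (lo * PC_PER_ARCMIN) ** 2)
        sigma.append(lco[in_ring].sum() / area_pc2)
        centres.append(0.5 * (lo + hi) * PC_PER_ARCMIN / 1000.0)  # kpc
    return np.array(centres), np.array(sigma)
```

Dividing the summed luminosity by the annulus area, rather than averaging per pointing, makes the profile independent of the sampling grid.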
Figures \ref{fig:ch1}a--l are a series of channel maps showing the velocity distribution, with velocities of (a) 200--210 km s$^{-1}$, (b) 210--220 km s$^{-1}$, (c) 220--230 km s$^{-1}$, (d) 230--240 km s$^{-1}$, (e) 240--250 km s$^{-1}$, (f) 250--260 km s$^{-1}$, (g) 260--270 km s$^{-1}$, (h) 270--280 km s$^{-1}$, (i) 280--290 km s$^{-1}$, (j) 290--300 km s$^{-1}$, (k) 300--310 km s$^{-1}$, and (l) 310--320 km s$^{-1}$. In the velocity range 200--240 km s$^{-1}$, the CO Arc at the southeastern boundary of the LMC and the molecular ridge south of 30 Doradus are prominent. The two molecular-cloud complexes associated with the active star-forming regions N 11 and N 44 are also prominent in the velocity range 270--290 km s$^{-1}$. A systematic velocity gradient from the southeast to the northwest has been known from the 1st survey (Mizuno et al.\ 2001b). Our sensitive survey shows a clear feature in the northeast at high velocities, corresponding to LMC 4, in addition to the overall velocity gradient already observed in the 1st survey. Detailed comparisons of the velocity and spatial distributions of the HI and CO emission will be presented in Paper III. \subsection{Identification of CO molecular clouds} To study the properties of the molecular gas in the LMC, individual clouds were identified with the cloud-finding algorithm fitstoprops (Rosolowsky \& Leroy 2006). First, the intensity data cube is converted to a signal-to-noise data cube by dividing by the noise at each position, and significant emission is searched for by clipping the maps at a constant signal-to-noise ratio; a constant signal-to-noise ratio is used instead of a constant flux density threshold, although the sensitivity variation across the map is not large (Figure \ref{fig:rms}). Then, pairs of adjacent velocity channels with normalized flux greater than 3 were searched for.
For each pair, these velocity channels and all contiguous data pixels with normalized flux greater than 2 were assigned to a candidate molecular cloud. This process was continued until all pairs of adjacent channels with normalized flux greater than 3 had been assigned to candidate clouds. We identified 272 clouds, of which 230 were detected at more than two observed positions. In this paper, we shall call these 230 clouds with more than two observed positions ``the GMCs". The positions and intensity-weighted mean velocities of the 230 GMCs and of the rest (``the small clouds", in the following) are listed in Tables 1 and 2, respectively. The sensitivity of a dataset influences the cloud properties derived from those data, as emphasized by Rosolowsky \& Leroy (2006). In order to reduce this observational bias, and to be able to compare datasets with different signal-to-noise levels, the boundary of each cloud is extrapolated to a boundary isosurface of $T_{\rm edge} = 0$ K (see also Blitz \& Thaddeus 1980; Scoville et al.\ 1987). The major and minor axes, size, $R$, position angle, PA, and virial mass, $M_{\rm VIR}$, are derived for 164 GMCs (``Group A GMCs", hereafter) out of the 230; the rest (``Group B", hereafter) have a minor axis smaller than the NANTEN beam, so that the size is not derivable by using the de-convolved moment. The line width, $\Delta V$, and the CO luminosity, $L_{\rm CO}$, are less sensitive to beam dilution than the cloud size, so that we can determine $\Delta V$ and $L_{\rm CO}$ of both the Group A and B GMCs. Then, $M_{\rm CO}$, the molecular cloud mass, is derived from $L_{\rm CO}$ by using $X_{\rm CO} = 7 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, as derived in Section 4.2 by assuming virial equilibrium, and a helium mass fraction of 36 \%. The procedure to derive these properties is described in detail by Rosolowsky \& Leroy (2006). The derived properties are presented in Table 3.
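The two-threshold decomposition described above can be illustrated with the following sketch. It is not the actual fitstoprops implementation of Rosolowsky \& Leroy (2006); only the thresholds (pairs of adjacent velocity channels above 3 $\sigma$, extended to contiguous pixels above 2 $\sigma$) follow the text, and the function name is hypothetical.

```python
import numpy as np
from scipy import ndimage

# Simplified sketch of the two-threshold cloud decomposition (NOT the
# actual fitstoprops code); only the 3-sigma pair seed and 2-sigma
# extension thresholds follow the text.

def find_clouds(cube, rms, core_snr=3.0, edge_snr=2.0):
    """Label candidate clouds in a (x, y, velocity) brightness cube.

    cube -- data cube; rms -- per-position noise map with axes (x, y).
    Returns an integer cube: 0 = no cloud, >0 = cloud label.
    """
    snr = cube / rms[..., None]                 # signal-to-noise cube
    labels, _ = ndimage.label(snr > edge_snr)   # contiguous 2-sigma islands
    # seed condition: two adjacent velocity channels both above 3 sigma
    pair = np.zeros(snr.shape, dtype=bool)
    pair[..., :-1] = (snr[..., :-1] > core_snr) & (snr[..., 1:] > core_snr)
    keep = np.unique(labels[pair])              # islands containing a seed
    keep = keep[keep > 0]
    return np.where(np.isin(labels, keep), labels, 0)
```

An island consisting of a single bright channel (a likely noise spike) is discarded because it contains no pair of adjacent channels above the higher threshold.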
\subsection{Properties of the Molecular Clouds} In this section, we shall consider the Group A GMCs, which are resolved by NANTEN. The Group A GMCs have radii ranging from 10 to 220 pc, line widths between 1.6 and 20.2 km s$^{-1}$, CO luminosities between $1.4 \times 10^{3}$ and $7.1 \times 10^{5}$ K km s$^{-1}$ pc$^{2}$, and virial masses ranging from $9 \times 10^{3}$ to $9 \times 10^{6} M_{\odot}$. Figure \ref{fig:vlsrhist} shows the frequency distribution of the $V_{\rm LSR}$ of the clouds. The distribution is rather nonuniform, having peaks at $\sim$ 220--250 km s$^{-1}$ and $\sim$ 280--290 km s$^{-1}$. The one at $V_{\rm LSR} \sim$ 220--250 km s$^{-1}$ represents the clouds in the southern part of the LMC, including the molecular ridge and the CO Arc, as seen in Figures \ref{fig:ch1}c--e. The other, at $V_{\rm LSR} \sim$ 280--290 km s$^{-1}$, is dominated by the emission from the LMC 4, N 44, and N 11 regions (Figures \ref{fig:ch1}i--j). Figures \ref{fig:manmin} and \ref{fig:pa164} are the histograms of the ratio of the major and minor axes and of the position angle (PA) of the GMCs, respectively. The frequency distribution of PA is rather uniform. The distribution of the ratio of the major and minor axes has a peak at $\sim 1.7$ with an average of 2.5, indicating that the clouds are generally elongated. In order to see if the clouds are aligned with the large-scale structure of the galaxy, such as a spiral pattern, we introduce a parameter, $\theta$, the angle between the major axis and the tangent at the molecular cloud of a circle with a radius, $d_{\rm cen}$ (Figure \ref{fig:theta}a). Figure \ref{fig:theta}b shows the frequency distribution of $\theta$. Since the uncertainties of the position angle, and hence of $\theta$, depend on the ratio of the major and minor axes, the histogram of $\theta$ is divided into three groups according to the axial ratio.
The histogram of $\theta$ shows that the number distribution is nearly uniform and does not have any particularly favored angle. To see if the distribution of $\theta$ has any dependence on the galacto-centric distance, Figure \ref{fig:theta}c shows a plot of $d_{\rm cen}$ versus $\theta$. Again, the points are distributed quite uniformly, showing no strong dependence of $\theta$ on $d_{\rm cen}$. \subsubsection{Line-width - size relation} In this section, we present the correlation between the line width and the radius of the GMCs. The size, $R$, of a cloud is computed as the geometric mean of the ``de-convolved'' second spatial moments along the major and minor axes, which were derived by using principal component analysis: we ``de-convolve'' the beam by subtracting its size from the measured cloud size in quadrature. A full width at half maximum (FWHM) line width, $\Delta V$, is derived by multiplying the second moment of the velocity within a GMC by $\sqrt{8\ln(2)}$ (see also Rosolowsky \& Leroy 2006). Figure \ref{fig:rv} shows a plot of log ($\Delta V$) versus log ($R$) of the Group A GMCs. It is known that the line width and the size of molecular clouds have a good correlation of $\sigma_v \propto R^{\sim 0.5}$ in the solar vicinity (Larson 1981) and in the inner Galaxy (e.g., Dame et al.\ 1986; Solomon et al.\ 1987), while the correlation for the GMCs in this work is rather weak; the best fitting power law is $\Delta V = 1.3 R^{0.2}$, with a Spearman rank coefficient of 0.3. It is likely that the limited dynamic range in size lowers the correlation in the present study relative to that of the Galactic clouds, although we do see a weak positive correlation in the LMC GMCs.
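The moment definitions used above amount to the following minimal sketch (the function names are hypothetical, and the additional scaling of the rms size to an effective radius applied by Rosolowsky \& Leroy 2006 is omitted):

```python
import numpy as np

# Minimal sketch of the size and line-width definitions used in the text;
# function names are hypothetical, and the rms-to-effective-radius scaling
# of Rosolowsky & Leroy (2006) is omitted.

FWHM = np.sqrt(8.0 * np.log(2.0))   # ~2.355: rms dispersion -> FWHM

def deconvolved_size(sig_maj, sig_min, beam_sigma):
    """Geometric mean of the second spatial moments along the major and
    minor axes, with the beam subtracted in quadrature.  Returns NaN when
    the minor axis is smaller than the beam (the Group B case)."""
    maj = np.sqrt(sig_maj**2 - beam_sigma**2)
    mnr = np.sqrt(sig_min**2 - beam_sigma**2)
    return np.sqrt(maj * mnr)

def fwhm_linewidth(sigma_v):
    """FWHM line width Delta V from the velocity second moment."""
    return FWHM * sigma_v
```

The NaN branch mirrors why sizes and virial masses are quoted only for the Group A GMCs: a minor axis narrower than the beam leaves nothing to de-convolve.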
We have applied the cloud-identifying algorithm of Rosolowsky \& Leroy (2006), with the same parameters used in the current study to identify the GMCs in the LMC, to the molecular clouds in the Small Magellanic Cloud (SMC; Mizuno et al.\ 2001a) as well as in the outer Galaxy, the Warp region (Nakagawa et al.\ 2005), and derived their physical properties. When we add the clouds in the SMC and the Warp region to the plot of Figure \ref{fig:rv}a, the dynamic range in $R$ becomes larger and a positive correlation is seen. The line width-size relation $\sigma_v \propto R^{0.5}$ in the inner Galaxy by Solomon et al.\ (1987) is also shown as a dotted line in Figure \ref{fig:rv} as an example. The correlation between the line width and size of the clouds in the LMC, SMC, and the Warp region does seem to be consistent with a power law relation $\sigma_v \propto R^{0.5}$, but with a clear offset from the relation determined for the inner Galaxy (Solomon et al.\ 1987). One has to note that at least a part of this offset can be attributed to differences in the methods used to measure cloud properties. The sense of the offset is that, for a given radius, the clouds in the inner Galaxy have larger line widths. This may be partially due to the relatively high value of $T_A$ used by Solomon et al.\ (1987) to define the cloud radius, implying that the clouds might be smaller for a given value of $\Delta V$. \subsubsection{Virial Mass - CO Luminosity Relation} The determination of the mass of molecular-hydrogen gas is fundamental for understanding the physics of the interstellar medium and star formation in galaxies. In this subsection, we compare the virial mass and the CO luminosity, and discuss the conversion factor from the CO line intensity to the H$_{2}$ column density in the LMC. Figure \ref{fig:mvl} shows the virial mass, $M_{\rm VIR}$, as a function of luminosity, $L_{\rm CO}$, of the clouds in the LMC (filled circles).
The plot shows a tight power law for the mass-luminosity relation with some dispersion. A least-squares fit to the data gives a power law, [$M_{\rm vir}$/$M_{\odot}$]= 26[$L_{\rm CO}$/(K km s$^{-1}$ pc$^{2}$)]$^{1.1\pm0.3}$, with a Spearman rank correlation coefficient of 0.8. This relation suggests that the clouds are virialized and that the CO luminosity is a good tracer of mass in the LMC, with a quite constant conversion factor from $L_{\rm CO}$ to mass throughout the mass range $10^4 \la M_{\rm VIR} / M_{\sun} \la 10^{7}$. We have added the re-identified clouds in the SMC (Mizuno et al.\ 2001a) and the Warp region (Nakagawa et al.\ 2005) to Figure \ref{fig:mvl}, as in Section 3.3.1. The clouds in the SMC and the Warp region lie along the best fitting power law of the GMCs in the LMC. \subsubsection{Mass Spectrum} The frequency distribution of the cloud masses has an important impact not only on star formation but also on cloud formation and destruction. The mass spectrum of the clouds is well fitted by a power law, often presented as $dN/dM \propto M^{-(\alpha +1)}$ or $N_{\rm cloud}$ ($> M$) $\propto M^{-\alpha}$. Preliminary results on the mass spectrum from the second NANTEN survey of the LMC have already been presented and discussed in Fukui et al.\ (2001). They found that the mass spectrum derived from the CO luminosity has a slope with $\alpha = 0.9 \pm 0.1$ above the completeness limit of $8 \times 10^4 \, M_{\odot}$. In this section, we present the mass spectra of the mass, $M_{\rm CO}$, derived from the CO luminosity, $L_{\rm CO}$, and a conversion factor, $X_{\rm CO} = 7 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ (section 3.2 and section 4.2). Here, the CO luminosity, $L_{\rm CO}$, is less sensitive to beam dilution than the cloud size, so that we can consider that the $L_{\rm CO}$ of not only the Group A GMCs but also the Group B GMCs, for which we could not derive a size or $M_{\rm VIR}$, is determined well enough to obtain $M_{\rm CO}$ (Section 3.2 and Table 3).
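The maximum-likelihood power-law fitting used in this subsection reduces, for a cumulative spectrum $N(>M) \propto M^{-\alpha}$, to a closed-form estimator (Crawford et al.\ 1970). A minimal Python sketch follows; only the completeness limit of $5 \times 10^{4}\,M_{\odot}$ is taken from the text, and the function name is hypothetical.

```python
import numpy as np

# Closed-form maximum-likelihood estimator for the cumulative slope alpha
# in N(>M) ~ M**-alpha, following Crawford et al. (1970).  A sketch: only
# the 5e4 Msun completeness limit comes from the text.

def ml_slope(masses, m_min):
    """Return (alpha, sigma_alpha) for clouds above the completeness limit."""
    m = np.asarray(masses, float)
    m = m[m >= m_min]
    alpha = len(m) / np.sum(np.log(m / m_min))
    return alpha, alpha / np.sqrt(len(m))
```

Unlike a least-squares fit to binned counts, this estimator uses every cloud mass directly, so the result does not depend on a choice of binning.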
The mass spectrum of $M_{\rm CO}$, including both the Group A and B GMCs, is shown in Figure \ref{fig:ms}. The maximum likelihood method (Crawford et al.\ 1970) was applied to obtain the best-fitting power law above the completeness limit, $5 \times 10^4\,M_{\odot}$. The best fitting power law above the completeness limit is $N_{\rm cloud}$($\geq M_{\rm CO}$) = $6.6 \times 10^{5} M^{-0.75 \pm 0.06}-3.4$. The results indicate that the mass of the molecular gas in the LMC is concentrated in the massive clouds, since $\alpha < 2$. The slope of the mass spectrum, and thus the fact that the massive clouds dominate the galactic total mass, is consistent with what is presented by Fukui et al.\ (2001), although the current result shows a shallower slope. This difference in the index values of the best-fitting power law may be explained by the difference in completeness limit. Table 4 shows the index value, $\alpha$, of the power law fit to different mass ranges of the GMCs. The best fitting power law obtained for the clouds with $M_{\rm CO} \geq 3 \times 10^{5} M_{\sun}$ is shown in Figure \ref{fig:ms} as an example. Table 4 indicates that the slope of the mass spectrum becomes steeper if we fit only the massive clouds; e.g., $N_{\rm cloud}$($>M_{\rm CO}$) $\propto M_{\rm CO}^{-1.2 \pm 0.2}$ for $M_{\rm CO} \ge 3 \times 10^{5} M_{\sun}$, and the logarithmic slope of the mass spectrum becomes steeper at $\sim 3 \times 10^5 M_{\sun}$. \section{Discussion} \subsection{Comparison with the first Survey} The 2nd survey was carried out to cover the regions where molecular clouds were detected in the 1st survey. The signal-to-noise ratio of the present observations was higher by a factor of 2 than that of the 1st NANTEN survey (Fukui et al.\ 1999; Mizuno et al.\ 2001b; Yamaguchi et al.\ 2001c). This increase in sensitivity made it possible to increase the number of significant detections and identified clouds.
The cloud identification criteria used in the current study differ from those used in Fukui et al.\ (1999) and Mizuno et al.\ (2001b). Nevertheless, the number of clouds with more than two observing positions (``the large clouds'' in Mizuno et al.\ 2001b) is a factor of 3 larger in the present survey even if we use the same algorithm and criterion. Here we compare the line width-size relation and the virial mass-CO luminosity relation of the molecular clouds from the 1st and the current surveys. To compare the results, we re-calculated the sizes and the virial masses of the large clouds of the 1st survey by subtracting the NANTEN beam from the sizes in Table 1 of Mizuno et al.\ (2001b). Figure \ref{fig:rdv1st} is a plot of the line width and the size of the clouds derived from both the 1st survey (open circles) and the current survey (filled circles). The clouds from both the current and the 1st survey show a large scatter with a slight positive correlation between the line width and the size. An offset from the correlation of the inner Galaxy is also seen. The scatter is larger in the current survey. This may be explained by the difference in sensitivity; the high sensitivity of the current survey made it possible to decompose a cloud into several individual clouds with different velocities along the same line of sight, although these clouds may have been identified as a single entity by the 1st survey. Figure \ref{fig:mvlco1st} is a plot of the virial mass, $M_{\rm VIR}$, as a function of luminosity, $L_{\rm CO}$, of the Group A GMCs from the current survey (filled circles) and the 55 clouds from the 1st survey (open circles). The correlations between $M_{\rm VIR}$ and $L_{\rm CO}$ in the two surveys are consistent within the errors. Again, the scatter in the current survey is larger, but because the higher sensitivity of the current survey enlarges the dynamic range in $M_{\rm VIR}$ and $L_{\rm CO}$, the correlation coefficient remains as high as 0.85.
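The $M_{\rm VIR}$--$L_{\rm CO}$ power-law fits compared above are ordinary least-squares fits in log-log space. A minimal sketch follows (illustrative only; the synthetic data simply check that the procedure recovers its input coefficients, here chosen to match the relation $M_{\rm vir} = 26\,L_{\rm CO}^{1.1}$ quoted in Section 3.3.2):

```python
import numpy as np

# Ordinary least-squares power-law fit in log-log space, as used for the
# M_VIR - L_CO relations (illustrative sketch, not the paper's fitting code).

def powerlaw_fit(lco, mvir):
    """Fit M_vir = A * L_CO**b by linear least squares in log10-log10 space."""
    slope, intercept = np.polyfit(np.log10(lco), np.log10(mvir), 1)
    return 10.0**intercept, slope

# Synthetic check: data generated from the quoted relation M_vir = 26 L^1.1
# are recovered.
L = np.logspace(3, 6, 20)                 # K km/s pc^2
A, b = powerlaw_fit(L, 26.0 * L**1.1)     # A ~ 26, b ~ 1.1
```

Fitting in logarithmic space weights the fractional scatter equally across the four decades of luminosity, which is why the quoted exponents come from log-log regression rather than a direct fit.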
The ratio of $M_{\rm VIR}$ to $L_{\rm CO}$ is related to the conversion factor, the $X_{\rm CO}$-factor, from the CO luminosity to the hydrogen column density, $N$(H$_{\rm 2}$). The consistency of the $M_{\rm VIR}$--$L_{\rm CO}$ relation in both surveys suggests that the $X_{\rm CO}$-factors derived from the two surveys are also consistent. \subsection{CO to $N$($H_2$) Conversion Factor} In the following, we derive the $X_{\rm CO}$-factor from the current survey. The determination of the mass of H$_2$ in galaxies is fundamental for an understanding of the interstellar physics and star formation. The principal method for obtaining H$_2$ masses converts the intensity of the CO molecular line emission, $I_{\rm CO}$, into the column density of H$_2$ molecules. The conversion factor, the $X$-factor ($X \equiv N$(H$_{2}$) $/$ $I_{\rm CO} = M_{\rm H_2}/L_{\rm CO}$), has been derived for molecular clouds in the solar vicinity based on the assumption of virial equilibrium of individual clouds (e.g., Young \& Scoville 1991). This method has also been used to derive the conversion factors in nearby galaxies, such as the LMC, SMC, M31, and M33, where individual clouds are resolved (e.g., Mizuno et al.\ 2001a; Mizuno et al.\ 2001b; Wilson \& Scoville 1990). The plot of $M_{\rm VIR}$ against $L_{\rm CO}$ in section 3.3.2 (Figure \ref{fig:mvl}) from the current survey suggests that, for massive clouds with $M_{\rm CO} > 10^5 M_{\sun}$, it is reasonable to assume virial equilibrium. Here, we shall use the conventional method, applying the virial theorem to the clouds, to estimate the $X_{\rm CO}$-factor in the LMC. The average value of log($M_{\rm VIR} / L_{\rm CO}$) is $1.2 \pm 0.3$, corresponding to $X_{\rm CO} = (7 \pm 2) \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, for the Group A GMCs with $L_{\rm CO}$ higher than the completeness limit, $9 \times 10^{3}$ K km s$^{-1}$ pc$^{2}$, which is equivalent to $M_{\rm VIR} \geq 1.4 \times 10^{5} M_{\odot}$.
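The conversion from the mean log($M_{\rm VIR}/L_{\rm CO}$) $= 1.2$ to the quoted $X_{\rm CO}$ can be checked in a few lines. The only inputs beyond the text are standard physical constants; helium is included at the 36\% mass fraction adopted in Section 3.2 (i.e., $1.36 \times 2\,m_{\rm H}$ per H$_2$ molecule).

```python
import numpy as np

# Arithmetic check of the quoted X_CO: convert the mean
# log(M_VIR/L_CO) = 1.2 [Msun / (K km/s pc^2)] into cm^-2 (K km/s)^-1.
# Physical constants are standard; helium is included at the 36% mass
# fraction adopted for the cloud masses (1.36 * 2 m_H per H2 molecule).
M_SUN = 1.989e33   # g
PC = 3.086e18      # cm
M_H = 1.674e-24    # g

ratio = 10.0**1.2                      # M_VIR/L_CO ~ 16 Msun/(K km/s pc^2)
sigma_cgs = ratio * M_SUN / PC**2      # g cm^-2 per (K km/s)
x_co = sigma_cgs / (1.36 * 2.0 * M_H)  # H2 column per unit I_CO
# x_co ~ 7e20 cm^-2 (K km/s)^-1, matching the adopted value
```

The helium correction enters because the virial mass measures the total mass while $X_{\rm CO}$ is defined for the H$_2$ column only.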
Figure \ref{fig:mvlhist} is a frequency distribution of log($M_{\rm VIR} / L_{\rm CO}$) of the Group A GMCs, i.e., including all the clouds for which we derived virial masses. The geometric mean of $M_{\rm VIR} / L_{\rm CO}$, and thus the $X_{\rm CO}$-factor, does not differ from the value obtained by using only the clouds with $L_{\rm CO} \ge 9 \times 10^{3}$ K km s$^{-1}$ pc$^{2}$. This value is slightly less than what we obtained from the 1st survey, $X_{\rm CO} \sim (8 \pm 2) \times 10^{20}$cm$^{-2}$(K km s$^{-1}$)$^{-1}$, after taking into account the beam de-convolution, but is consistent within the errors. The $X_{\rm CO}$-factor in the inner Galaxy has been derived by using a correlation between the $\gamma$-ray intensity and the CO intensity along the Galactic plane (Bloemen et al.\ 1986). Bloemen et al.\ (1986) summarized the values of the $X$-factor, and the value for the inner Galaxy is $\sim$ (1--3)$\times 10^{20}$cm$^{-2}$(K km s$^{-1}$)$^{-1}$ on average. The $X$-factor obtained above is about twice as high as that of the clouds in the inner Galaxy. Bertoldi \& McKee (1992) argue that the gravitational energy, $W$, of an ellipsoidal cloud is given by $W = -(3/5)\,(GM^2 / R)\,[\arcsin(e)/e]$ (Eq.\ A9 of Bertoldi \& McKee 1992), where $G$, $M$, and $R$ are the gravitational constant, the mass, and the size of the cloud. Here, $e$ is the eccentricity of the cloud, $e = (1-y^2)^{1/2}$, where $y$ is the axial ratio of the cloud. The GMCs in the current sample are not really spherical, with the mode of the ratio of the major and minor axes at $\sim 1.7$ and an average of 2.5, as shown in section 3.3 (see also Figure 8). The shape-dependent factor, $a_{2} = (R_{m}/R)\,\arcsin(e)/e$ (Eq.\ A8 of Bertoldi \& McKee 1992), ranges from 1 to 0.88 for the current sample of the GMCs in the LMC, with $a_2 = 0.99$ for an axis ratio of 2.5.
This argument means that the true virial mass of the current sample of GMCs differs from the derived virial mass by at most about 15 \% due to the elliptical shapes of the clouds. The deviation of the estimated $M_{\rm VIR}$ from the true $M_{\rm VIR}$ affects the derived $X_{\rm CO}$-factor linearly. Thus the current estimate of the $X_{\rm CO}$-factor can be overestimated by $\sim15 \%$ at most due to the departure of the cloud shapes from spherical symmetry. Rubio et al.\ (1993) suggested a possible dependence of the $X_{\rm CO}$-factor on the cloud size from their observations of the clouds in the SMC by SEST. Figure \ref{fig:mvl_r} is a plot of $M_{\rm VIR}/L_{\rm CO}$ against the cloud size. There is no significant correlation for the LMC clouds, with a scatter as large as an order of magnitude in $M_{\rm VIR}/L_{\rm CO}$, although the range of the cloud sizes is the same as that of the SMC (Rubio et al.\ 1993). It has been claimed that the metallicity is quite uniform in the LMC (e.g., Dufour 1984), while it has been suggested that the metallicity of the outer part of a galaxy is lower than that of the inner region in the Galaxy as well as in some of the nearby galaxies (e.g., Nakagawa et al.\ 2005). Figure \ref{fig:dcen_lco} shows a plot of $M_{\rm VIR}/L_{\rm CO}$ against the distance from the center of the LMC derived from the HI distribution (Kim et al.\ 1998). The current result shows no clear correlation of $M_{\rm VIR}/L_{\rm CO}$ with the distance from the center, suggesting that the $X_{\rm CO}$-factor does not depend on the distance from the center of the LMC. This is consistent with the idea that the metallicity is quite uniform in the LMC.
\subsection{Mass Spectrum} The mass spectra of the Galactic clouds ($^{12}$CO or $^{13}$CO) are well fitted by power laws with index values of $\alpha$ $\sim$ 0.5 -- 1.0 (e.g., Solomon et al.\ 1987; Solomon \& Rivolo 1989; Casoli, Combes, \& Gerin 1984; Digel et al.\ 1996; Dobashi et al.\ 1996), of which the higher values are derived for the clouds with lower mass in the outer Galaxy (e.g., Heyer et al.\ 2001) or those derived by using data from $^{13}$CO observations (e.g., Yonekura et al.\ 1997; Kawamura et al.\ 1998). Not only the mass distribution of the Galactic clouds but also that of the clouds in nearby galaxies has been studied (Blitz et al.\ 2006 and the references therein). Most of the galaxies have similar mass distributions, $\alpha \sim 0.7$; an exception is M33, whose spectrum is steeper than the rest, but that dataset has a higher completeness limit (Rosolowsky et al.\ 2003; Rosolowsky et al.\ 2005). A number of numerical simulations have been conducted to obtain the mass spectrum of GMCs in galaxies (e.g., V\'{a}zquez-Semadeni et al.\ 1997; Wada et al.\ 2000, and references therein). Wada et al.\ (2000) carried out a simulation of the H {\sc i} and CO distributions of an LMC-type galaxy at a high spatial resolution ($\sim$ 7.8 pc) by incorporating fairly realistic star-formation processes and supernova rates. They show that the GMC mass spectrum in an LMC-like galaxy is expected to have a power law with an index value of $\sim 0.7$ if no star formation is taken into account, and that the index becomes steeper, around $1.0$, if the dissipation of clouds due to star formation is incorporated. The present mass spectrum appears to be consistent with these values, while the non-star-forming models may fit the present result slightly better. According to Wada \& Norman (2001), the absence of massive GMCs of $\sim 3 \times 10^{6} \, M_{\sun}$ is due to the well-mixed, turbulent interstellar medium that tends to form smaller GMCs.
The consistency of the mass spectrum among galaxies may suggest that the mechanisms of cloud formation and disruption show similar characteristics among the galaxies. Nevertheless, the truncation at very massive GMCs may suggest that the disruption of the molecular clouds is faster for the massive clouds. It may also suggest that cloud formation takes place inhomogeneously; the mass spectra in different regions of the galaxy may have different slopes, and the truncation of the slope might appear when we sum up all the mass spectra within the galaxy. The reason for the truncation is not yet known, but the current results present new information leading to a better knowledge of cloud formation and disruption. \section{Summary} \label{summary} A large-scale $^{12}$CO($J$ = 1--0) survey for molecular clouds was made toward the Large Magellanic Cloud by NANTEN. An area of $\sim$ 30 square degrees was covered, and significant $^{12}$CO emission ($\geq 0.07$ K) was detected at $\sim 1,300$ out of the 26,900 observed positions. We identified 272 molecular clouds, 230 of which were detected at more than two observed positions. The positions, mean velocities, and velocity dispersions of the 272 clouds, and the extents, position angles, and CO luminosities of the 230 GMCs are derived. Reliable sizes and virial masses are determined for the 164 well-resolved GMCs (Group A). The main results are summarized as follows. \begin{enumerate} \item The Group A GMCs have radii ranging from 10 to 220 pc, line widths between 1.6 and 20.2 km s$^{-1}$, CO luminosities between $1.4 \times 10^{3}$ and $7.1 \times 10^{5}$ K km s$^{-1}$ pc$^{2}$, masses derived from the CO luminosity from $2 \times 10^{4}$ to $7 \times 10^{6} M_{\sun}$, and virial masses from $9 \times 10^{3}$ to $9 \times 10^{6} M_{\odot}$. The maximum temperature ($T_{\rm R}^{*}$) of the CO line is as high as $\sim$ 2 K, detected toward the N 113 and N 159 regions.
\item The line width, $\Delta V$, and the radius, $R$, of the Group A GMCs appear to follow the power-law slope of the line width--size relation of the clouds in the Galaxy, but with an offset in the constant of proportionality. \item A least-squares fit to the virial mass vs.\ CO luminosity relation yields a power law, [$M_{\rm vir}$/$M_{\odot}$] = 26[$L_{\rm CO}$/(K km s$^{-1}$ pc$^{2}$)]$^{1.1\pm0.3}$, with a Spearman rank correlation coefficient of 0.8. This good correlation shows that the CO luminosity is a good tracer of the mass of molecular clouds in the LMC. \item The $I_{\rm CO}$--$N$(H$_{2}$) conversion factor is found to be $X_{\rm CO} \sim 7 \times 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ by assuming virial equilibrium for the Group A GMCs. \item The mass spectrum of the GMCs with $ 5 \times {10}^4 \le M_{\rm CO} \le {10}^7 M_{\odot}$ is well fitted by a power law, $N_{\rm CO}$($>M_{\rm CO}) \propto {(M_{\rm CO}/M_{\odot})}^{-0.75 \pm 0.06}$. This slope is consistent with the previous results obtained for the Galaxy and nearby galaxies. The slope of the mass spectrum becomes steeper if we fit only the massive clouds, e.g., $N_{\rm cloud}$($>M_{\rm CO}$) $\propto M_{\rm CO}^{-1.2 \pm 0.2}$ for $M_{\rm CO} \ge 3 \times 10^{5} M_{\odot}$, suggesting a mass truncation. \end{enumerate} \acknowledgments The NANTEN project is based on a mutual agreement between Nagoya University and the Carnegie Institution of Washington (CIW). We greatly appreciate the hospitality of all the staff members of the Las Campanas Observatory of CIW. We thank Drs.\ Blitz, Rosolowsky, and Leroy for helpful discussions and for providing us with their cloud-identification program. We are grateful to the many Japanese public donors and companies who contributed to the realization of the project.
This work is financially supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan (No.\ 15071203) and from JSPS (No.\ 14102003, core-to-core program 17004 and No.\ 18684003). TM is supported by the Japan Society for the Promotion of Science, and MR by the Chilean {\sl Center for Astrophysics} FONDAP No.\ 15010003. \clearpage
\section{Introduction}\hspace{5mm} As has been known for some time, among the various non-linear generalizations or deformations of the usual quantum harmonic oscillator there is a distinguished class of so-called Fibonacci oscillators \cite{Arik1} - the oscillators whose energy spectra satisfy the Fibonacci property (FP), implying: $E_{n+1}=\lambda E_n+\rho E_{n-1}$, with real constants\!\! \footnote {The famous Fibonacci numbers stem from the relation $F_{n+1}=F_{n}+ F_{n-1}$ where $F_0=F_1=1$.} $\lambda$ and $\rho$. As stated in \cite{Arik1}, the Fibonacci class is just the two-parameter deformed family of $p,\!q$-oscillators, introduced in \cite{CJ}. The family of $p,\!q$-oscillators is quite rich. In particular, it contains such an exotic one-parameter $q$-oscillator as the Tamm-Dancoff (TD) deformed oscillator \cite{Odaka}, \cite{Jagan}, which, besides the FP, possesses a set of nontrivial properties, as shown in \cite{GR1}. Moreover, a great variety of one-parameter deformed oscillators is contained in this family as particular cases. Most of them, except for the best known $q$-oscillators of Arik-Coon (AC) \cite{Arik2} and Biedenharn-Macfarlane (BM) \cite{BM}, are not well studied but nevertheless have some potential \cite{GR2} for possible applications. As for the $p,\!q$-deformed Fibonacci oscillators, they exhibit some rather unusual properties and have already found interesting physical applications; see \cite{Ch-J}-\!\cite{GR-UJP}. However, a natural question arises whether the family of $p,\!q$-oscillators exhausts the Fibonacci class. In this connection, we have recently shown in Ref. \cite{GKR} that certain 3-, 4-, and 5-parameter deformed extensions of the $p,\!q$-oscillator considered in \cite{Chung-PhL, Borzov, Mizrahi, Burban} also belong to the Fibonacci class, i.e., possess the FP.
In that same paper we studied a principally different, so-called $\mu$-deformed oscillator proposed earlier in \cite{Jann}, and showed that it {\it does not possess} the FP. For that reason, a new concept has been developed for this $\mu$-oscillator. Namely, it was demonstrated that the $\mu$-oscillator belongs to the more general (than Fibonacci) class of so-called "quasi-Fibonacci"\ oscillators \cite{GKR}. The goal of the present paper is to study yet other classes of nonlinear deformed oscillators which do not belong to the Fibonacci class. We treat, from the viewpoint of three possible ways of generalizing the FP, a class of polynomially deformed oscillators. It is proven, using the notion of the deformed oscillator structure function \cite{Mel,Man',Bona}, that these oscillators are intrinsically of non-Fibonacci nature. We then develop generalizations of the FP for these oscillators along three completely different paths: (i) as oscillators with a $k$-term generalized Fibonacci property; (ii) as oscillators obeying an inhomogeneous Fibonacci relation; (iii) as quasi-Fibonacci oscillators. Besides, we study a family of $(q;{\mu})$-oscillators which are, in a sense, a mix of the quadratic and the AC-type $q$-deformed oscillators, and demonstrate their Tribonacci property. This result is extended to a general $r$-th order polynomial in the AC-type $q$-oscillator bracket $[N]_q$, naturally leading to $k$-bonacci relations. In this respect, let us mention that similar $k$-bonacci relations were treated in \cite{Schork} in connection with generalized Heisenberg algebras \cite{Souza}. Likewise, for the $(q;\{\mu\})$-oscillators with $\{\mu\}\!=\!(\mu_1,\mu_2,..., \mu_r)$, combining the polynomial and the $q$-deformed AC features, the general statement on their $k$-bonacci property is proven. In a similar manner, the three-parameter $(p,q;\mu)$-deformed oscillators are treated as well and shown to obey their characteristic Pentanacci property.
Finally, for a certain four-parameter, or $(p,q;\mu_1,\mu_2)$-deformed, family of nonlinear oscillators we demonstrate the validity of the Nine-bonacci relation by finding explicitly the relevant nine coefficients $A_j(p,q)$. \section{Polynomially deformed or $\{\mu\}$- oscillators}\hspace{5mm} In the preceding work \cite{GKR} we have shown that the $\mu$-oscillator from \cite{Jann} does not satisfy the usual (with {\it two-term} RHS) linear, homogeneous Fibonacci relation (FR) \begin{equation} E_{n+1}=\lambda E_n+\rho E_{n-1}\ , \label{1} \end{equation} with $\lambda$ and $\rho$ some real, constant coefficients. To make the $\mu$-deformed oscillator from \cite{Jann} satisfy a relation like (\ref{1}), an important modification is needed: the coefficients should depend on $n$, i.e., $\lambda\!=\!\lambda(n)$, $\rho\!=\!\rho(n)$ are no longer constants. That is, this way of modifying the FP alters the coefficients, not the shape of the relation. As for the polynomially deformed oscillators to be studied here, we will demonstrate that they admit three different approaches for generalizing the FP. As in \cite{Mel,Man'}, we study the algebra of the deformed oscillator through its {\em structure function}: $a^{\dagger}a\!=\!\varphi(N)$ and $aa^{\dagger}\!=\!\varphi(N+1)$. Note that the same structure function determines both the basic commutation relation of $a, a^{\dagger}$ and the Hamiltonian and energy eigenvalues: \begin{equation} aa^{\dagger}-a^{\dagger}a=\varphi(N+1)-\varphi(N), \label{2} \end{equation} \begin{equation} H=\frac12\Bigl(\varphi(N+1)+\varphi(N)\Bigr), \hspace{18mm} E_n=\frac12\Bigl(\varphi(n)+\varphi(n+1)\Bigr)\ .\label{3} \end{equation} The latter formula presupposes the appropriately modified version of the Fock space, wherein (see e.g., \cite{Bona}) \begin{equation} a|0\rangle = 0, \hspace{5mm} N |n\rangle = n|n\rangle, \hspace{5mm} \varphi(N)|n\rangle = \varphi(n)|n\rangle, \hspace{5mm} H|n\rangle = E_n|n\rangle .
\label{4} \end{equation} In this paper we focus on the polynomially deformed oscillator. Its structure function \begin{equation} \varphi(N)=N+\sum_{i=1}^{r}\mu_iN^{i+1}, \hspace{18mm} \mu_i\geq 0\ , \label{5} \end{equation} involves the parameters $\{\mu\}\equiv(\mu_1, \mu_2,...,\mu_r)$, so these polynomial oscillators may also be termed the $\{\mu\}$-deformed ones. Note that the restriction on $\mu_i$ ensures positivity and monotonicity of the energies $E_n$ in (\ref{3}). It is worth remarking that for the $\{\mu\}$-deformed oscillator given by (\ref{5}), the basic relation can be presented, instead of (\ref{2}), also as \begin{equation} aa^{\dagger}-qa^{\dagger}a=f(N)=\sum_{l=0}^{r+1} \alpha_lN^l, \hspace{18mm} \alpha_l\in\mathbf{R}. \label{6} \end{equation} We can translate the form (\ref{2}) of the basic relation into the form (\ref{6}). Indeed, taking the $q$-commutator of $a$ and $a^{\dagger}$ for the deformed $\{\mu\}$-oscillator we have (we set $\mu_0=1$) \[ aa^{\dagger}-qa^{\dagger}a=\varphi(N+1)-q\varphi(N)= \] \[ =N+1+\sum_{j=1}^{r}\mu_j(N+1)^{j+1}- q\Bigl(N+\sum_{j=1}^{r}\mu_jN^{j+1}\Bigr)= \] \begin{equation} =\sum_{j=0}^{r}\mu_j\biggl(- q N^{j+1}+\sum_{s=0}^{j+1}\frac{(j+1)!}{s!(j+1-s)!}N^s\biggr). \label{7} \end{equation} The latter relation goes over into (\ref{6}) if \begin{equation} \alpha_0=\mu_0+\mu_1+\mu_2+...+\mu_{r}=\sum_{s=1}^{r+1}\mu_{s-1} \label{8} \end{equation} \begin{equation} \alpha_l=-q\mu_{l-1}+\sum_{s=l}^{r+1}\frac{s!}{l!(s-l)!}\mu_{s-1}, \hspace{12mm} 1\leq l\leq r+1 . \label{9} \end{equation} A form of the basic relation similar to (\ref{6}) was used in \cite{Chung-JMP} to treat the polynomial oscillators. \subsection{Non-Fibonacci nature of polynomial $\{\mu\}$-oscillators}\hspace{5mm} Let us first demonstrate that the polynomially deformed oscillators, see (\ref{5}), do not satisfy the relation (\ref{1}) if one insists on the constant nature of its coefficients.
The usual quantum harmonic oscillator, which has $\varphi(n)=n$ and the linear energy spectrum $E_n=\frac12(2n+1)$, is just the particular $r=0$ case of (\ref{5}). As is known, this oscillator with $\lambda=2$ and $\rho=-1$ satisfies the standard FR (\ref{1}). This property, however, fails if $r=1$: the quadratic deformation of the harmonic oscillator, with $\varphi(n)=n+\mu_1n^2$, cannot satisfy the standard FR (\ref{1}). The FR (\ref{1}) also fails for the cubic $r=2$ extension with $\varphi(n)=n+\mu_1 n^2+\mu_2 n^3$, for which the energy spectrum is $E_n=\frac{1}{2}(n+\mu_1 n^2+\mu_2 n^3+n+1+\mu_1(n+1)^2+\mu_2(n+1)^3)$. To show the failure, we insert the cubic $\varphi(n)$ into (\ref{1}) and deduce the system of equations ($\mu_1,\mu_2\ne 0$): \[n^3: \ \ \ \ \mu_2-\rho\mu_2-\lambda\mu_2 = 0\ ;\] \begin{equation} n^2: \ \ \ \ \hspace{-3mm}-\frac{3}{2}\lambda\mu_2-\rho\mu_1+\frac{9}{2}\mu_2+\mu_1+ \frac{3}{2}\rho\mu_2-\lambda\mu_1 = 0\ ; \label{10} \end{equation} \[n^1: \ \ \ \ 1+\rho\mu_1+3\mu_1-\frac{3}{2}\lambda\mu_2-\lambda-\lambda\mu_1+ \frac{15}{2}\mu_2-\frac{3}{2}\rho\mu_2-\rho = 0\ ;\] \[n^0: \ \ \ \ \frac{1}{2}\rho-\frac{1}{2}\rho\mu_1+\frac{5}{2}\rho+\frac{9}{2}\mu_2- \frac{1}{2}\lambda-\frac{1}{2}\lambda\mu_1- \frac{1}{2}\lambda\mu_2+\frac{1}{2}\rho\mu_2+\frac{3}{2}=0\ .\] \noindent The top two equations are solved by $\lambda=2$ and $\rho=-1$, but these values are incompatible with the remaining equations of the system, which proves the statement. One can prove, in the general situation, that the $r$-th order polynomially deformed oscillator (with structure function of order $r\geq2$) does not satisfy the standard FR (\ref{1}). Again, the equations obtained at the two highest powers of $n$ yield $\lambda=2$ and $\rho=-1$, but these values are incompatible with the rest of the equations in the system. Since the FP fails for polynomial $\varphi(n)$, we consider possible extensions of the FP.
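The incompatibility argument above can be illustrated numerically: fix $\lambda$ and $\rho$ from the lowest energy levels and check whether the relation closes at higher $n$. A minimal Python sketch follows; the value of the deformation parameter is an arbitrary illustrative choice.

```python
import numpy as np

def phi(n, mu1=0.0, mu2=0.0):
    """Structure function phi(n) = n + mu1*n^2 + mu2*n^3 (eq. (5), r <= 2)."""
    return n + mu1 * n**2 + mu2 * n**3

def energy(n, **mus):
    """E_n = (phi(n) + phi(n+1)) / 2, as in eq. (3)."""
    return 0.5 * (phi(n, **mus) + phi(n + 1, **mus))

def fit_fibonacci(mus):
    """Solve E_{n+1} = lam*E_n + rho*E_{n-1} from n = 1, 2,
    then return the residual of the same relation at n = 3."""
    A = np.array([[energy(1, **mus), energy(0, **mus)],
                  [energy(2, **mus), energy(1, **mus)]])
    b = np.array([energy(2, **mus), energy(3, **mus)])
    lam, rho = np.linalg.solve(A, b)
    residual = energy(4, **mus) - lam * energy(3, **mus) - rho * energy(2, **mus)
    return lam, rho, residual

# Undeformed oscillator: lam = 2, rho = -1 and the relation closes.
lam, rho, res = fit_fibonacci(dict(mu1=0.0))
print(lam, rho, res)   # close to 2, -1, 0

# Quadratic deformation: the two-term relation cannot close.
lam, rho, res = fit_fibonacci(dict(mu1=0.3))
print(res)             # nonzero residual
```

The same check with $\mu_2\neq 0$ reproduces the cubic case of the system (10).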
\vspace{3mm} \subsection{$k$-term extended ($k$-bonacci) oscillators}\hspace{5mm} We begin with the quadratic oscillator and extend the FR by adding one term: \begin{equation} E_{n+1}=\lambda_0 E_n+\lambda_1 E_{n-1}+\lambda_2 E_{n-2}\ . \label{11} \end{equation} This is the {\it three-step generalized} Fibonacci or Tribonacci (see e.g. \cite{Schork}) relation. As $\lambda_0$, $\lambda_1$, $\lambda_2$ are constants, and in view of (\ref{3}), it is sufficient to deal with the relation \begin{equation} \varphi_{n+1}=\lambda_0 \varphi_n+\lambda_1 \varphi_{n-1}+\lambda_2 \varphi_{n-2}\ , \hspace{18mm} \varphi_n\equiv\varphi(n). \label{12} \end{equation} Indeed, if (\ref{12}) is valid, the relation (\ref{11}) is valid too. So, we insert in (\ref{12}) the quadratic $\varphi(n)$, that is, the $r=1$ case of (\ref{5}). Solving the system of equations deduced similarly to (\ref{10}) we find $\lambda_0=3$, $\lambda_1=-3$, $\lambda_2=1$. With these coefficients we verify that the relation (\ref{11}) does hold. Now consider the general case of polynomially deformed oscillators given by the structure function (\ref{5}), with any $r\geq1$. Accordingly, consider the $k$-term extension of the FR, or $k$-bonacci relation, of the form ($n\geq k-1$) \vspace{-2mm} \begin{equation} E_{n+1}=\lambda_0 E_n+\lambda_1 E_{n-1}+\lambda_2 E_{n-2}+...+\lambda_{k-1}E_{n-k+1}=\sum_{i=0}^{k-1}\lambda_i E_{n-i}\ . \label{13} \end{equation} Then the following statement is true. {\bf Proposition 1.} The energy values $E_n$, given by (\ref{3}), of the polynomially deformed oscillator with structure function (\ref{5}) satisfy the $k$-generalized FR (\ref{13}) if $r=k-2$ and the $\lambda_i$ are given as\footnote{Since the set of coefficients $\lambda_i$ of (\ref{14}) obviously depends on the fixed $k$, we will indicate this explicitly.} \begin{equation} \lambda^{(k)}_i=(-1)^i\frac{k!}{(i+1)!(k-1-i)!}=(-1)^i\left( \begin{array}{c} \hspace{-2mm}k\hspace{-2mm}\\ \hspace{-2mm}i\!+\!1\hspace{-2mm}\\ \end{array} \right).
\label{14} \end{equation} {\it Proof.} Clearly, the $k$-term relation (\ref{13}) will be valid for the energy values if the structure function given in (\ref{5}) with $r=k-2$ satisfies the same equality, written as \begin{equation} n+1+\sum_{j=1}^{k-2}\mu_j(n+1)^{j+1}-\sum_{i=0}^{k-1} \lambda^{(k)}_i \biggl(n-i+\sum_{j=1}^{k-2}\mu_j(n-i)^{j+1}\biggr)=0\ . \label{15} \end{equation} The latter will be proven by induction. Supposing that the $(k-1)$-term relation \begin{equation} n+1+\sum_{j=1}^{k-3}\mu_j(n+1)^{j+1}-\sum_{i=0}^{k-2}\lambda_i^{(k-1)} \biggl(n-i+\sum_{j=1}^{k-3}\mu_j(n-i)^{j+1}\biggr)=0 \label{16} \end{equation} holds for the structure function $ \varphi(n)\!=\!n\!+\!\sum_{i=1}^{k-3}\mu_in^{i+1} $ with $\lambda_i^{(k-1)}$ as in (\ref{14}), we then prove that the structure function $\varphi(n)=n+\sum_{i=1}^{k-2}\mu_in^{i+1}$ satisfies the relation (\ref{15}) with $\lambda_i^{(k)}$ from (\ref{14}). But first let us check that (\ref{15}) along with (\ref{14}) is true for $k=2,3$. If $k=2$, which means the usual linear quantum oscillator, the relation is just the standard 2-term FR $E_{n+1}=\lambda_0E_n+\lambda_1E_{n-1}$: it does hold for $\lambda_0=2$ and $\lambda_1=-1$, since for these $\lambda_0,\ \lambda_1$ the following pair of relations \[ \varphi_{n+1}=\lambda_0\varphi_n+\lambda_1\varphi_{n-1}, \hspace{12mm} \varphi_{n}=\lambda_0\varphi_{n-1}+\lambda_1\varphi_{n-2} , \] is obviously true; written out, they read: \begin{equation} n+1-2n-(-1)(n-1)=0, \hspace{12mm} n-2(n-1)-(-1)(n-2)=0. \label{17} \end{equation} If $k=3$, i.e., for the quadratic $\varphi_n=n+\mu_1n^2$, we have the Tribonacci relation \begin{equation} \varphi_{n+1}=\lambda_0\varphi_n+\lambda_1\varphi_{n-1}+ \lambda_2\varphi_{n-2}. \label{18} \end{equation} It can be rewritten in the form (\ref{15}), that is, \[ n+1+\mu_1(n+1)^2=3(n+\mu_1 n^2)+(-3)(n-1+\mu_1(n-1)^2)+1(n-2+\mu_1(n-2)^2) \] where $\lambda_0=3$, $\lambda_1=-3$ and $\lambda_2=1$.
The latter relation, with account of both identities in (\ref{17}), reduces to \[ \mu_1(n+1)^2-3\mu_1n^2+3\mu_1(n-1)^2-\mu_1(n-2)^2=0 \] which involves only the full squares and, as is easily checked, holds identically. Similar reasoning applies to the case of a general polynomial $\varphi(n)$. Note first that the $\lambda_i^{(k)}$ in (\ref{14}) split as \begin{equation} \lambda_i^{(k)}=\lambda_i^{(k-1)}-\lambda_{i-1}^{(k-1)}. \label{19} \end{equation} Using this splitting in the LHS of the $k$-th order generalized Fibonacci ($k$-bonacci) relation (\ref{15}) we extract {\it twice} the (supposed to hold) $(k-1)$-term generalized FR (\ref{16}): first, in the form of the LHS of (\ref{16}), for fixed $n$, with $\lambda_i^{(k-1)}$ involved and, second, in the form of the LHS of (\ref{16}) rewritten for $n\rightarrow n-1$ and involving the set $(-1)\lambda_{i-1}^{(k-1)}$ from (\ref{19}). As a result, we get a relation consisting only of the highest, $(k-1)$-th order, terms (in $n+1$ or in $n-i$): \begin{equation} \mu_{k-2}(n+1)^{k-1}-\sum_{i=0}^{k-1}\lambda_i^{(k)} \bigl(\mu_{k-2}(n-i)^{k-1}\bigr)=0. \label{20} \end{equation} Since $\mu_{k-2}\neq 0$, the latter relation rewrites as \begin{equation} F_n(k,\lambda_i^{(k)})\equiv(n+1)^{k-1}-\sum_{i=0}^{k-1}\lambda_i^{(k)} \bigl(n-i\bigr)^{k-1}=0. \label{21} \end{equation} Then, to prove (\ref{21}), we expand the binomials and interchange the summation order: \[ F_n(k,\lambda_i^{(k)})=\sum_{s=0}^{k-1}\frac{(k-1)!}{s!(k-1-s)!}~n^{k-1-s}1^s- \] \[ -\sum_{i=0}^{k-1}(-1)^i\frac{k!}{(i+1)!(k-1-i)!} \sum_{s=0}^{k-1}\frac{(k-1)!}{s!(k-1-s)!}~n^{k-1-s}(-1)^si^s= \] \[ =\sum_{s=0}^{k-1}\frac{(k-1)!}{s!(k-1-s)!}~n^{k-1-s}\biggl(1-(-1)^sk! \sum_{i=0}^{k-1}(-1)^i\frac{i^s}{(i+1)!(k-1-i)!}\biggr)= \] \[ =\sum_{s=0}^{k-1}\frac{(k-1)!}{s!(k-1-s)!}n^{k-1-s}\biggl((-1)^{s+1} \sum_{-1\leq i\leq k-1}(-1)^{i}\frac{k!}{(i+1)!(k-1-i)!}~i^s\biggr) \] (note that the term $1$ is included in the sum as the additional $i=-1$ term).
Shifting the index $i$ as $i\to i-1$ we obtain \[ F_n(k,\lambda_i^{(k)})=\sum_{s=0}^{k-1}\frac{(k\!-\!1)!}{s!(k\!-\!1\!-\!s)!} n^{k-1-s}\biggl((-1)^{s+1}\!\sum_{-1\leq i-1\leq k-1}(-1)^{i-1}\frac{k!}{i!(k-i)!}(i\!-\!1)^s\biggr)\!= \] \[ =\sum_{s=0}^{k-1}\frac{(k-1)!}{s!(k-1-s)!}n^{k-1-s}\biggl((-1)^s \sum_{i=0}^{k}(-1)^{i}\frac{k!}{i!(k-i)!}(i-1)^s\biggr)=0\ \] where the final vanishing is due to the formula \[ \sum_{j=0}^{k}(-1)^j\frac{k!}{j!(k-j)!}(j-1)^m=0, \hspace{8mm} m=0,1,2,...,k-1\ , \] which can be proven analogously to the known formula \cite{Korn} \[ \sum_{j=0}^k(-1)^j\frac{k!}{j!(k-j)!}~j^m=0, \hspace{8mm} m=0,1,2,...,k-1\ . \] This completes the proof. {\bf Remark 1.} It is remarkable that the set (\ref{14}) of the coefficients $\lambda_i^{(k)}$\hspace{-1mm}, $i\!=\!0,1,2,...,k\!-\!1$, which provides the validity of the $k$-term Fibonacci relation (\ref{13}) for the polynomially deformed oscillators with the structure function $\varphi(n)=n+\sum_{i=1}^{k-2}\mu_in^{i+1}$, see (\ref{5}), is {\em totally independent} of the parameters $\mu_i$ of $\varphi(n)$. In particular, some of the $\mu_i$ (but not the "senior"\ one $\mu_{k-2}$) may be equal to zero. {\bf Remark 2.} The content of Proposition 1 can be extended to the cases $r\!<\!k\!-\!2$ or $r\!>\!k\!-\!2$. Namely, it can be demonstrated that the $k$-term Fibonacci relation is satisfied for all the polynomial oscillators for which $r\!<\!k\!-\!2$. Equivalently, the oscillator with $r$-th order polynomial structure function satisfies every $k$-term generalized FR such that $k>r+2$, with appropriate coefficients.
For instance, the quadratic oscillator, which obeys the 3-term or Tribonacci relation (\ref{18}) with fixed $\lambda_0$, $\lambda_1$ and $\lambda_2$ equal respectively to $3$, $-3$, $1$, obviously satisfies, with definite four coefficients, also the 4-term relation \[ \varphi_{n+1}=(\lambda_0-1)\varphi_n+(\lambda_0+\lambda_1)\varphi_{n-1}+ (\lambda_1+\lambda_2)\varphi_{n-2}+\lambda_2\varphi_{n-3}\ , \] and, with proper coefficients, also the higher order 5-term, 6-term, etc., $k$-bonacci relations. On the other hand, the oscillator with $r$-th order polynomial structure function does not satisfy any $k$-term generalized Fibonacci relation such that $k<r+2$. Accordingly, the $k$-term Fibonacci relation is not valid for those polynomial oscillators for which $r>k-2$. \vspace{3mm} \subsection{\hspace{-2mm} Polynomial $\{\mu\}$-oscillators: inhomogeneous FR}\hspace{5mm} Here we consider an alternative (though also linear in the energy eigenvalues) form of generalized FR which is valid for the polynomially deformed oscillators: \begin{equation} E_{n+1}=\lambda E_n +\rho E_{n-1} +\frac12\sum_{i=0}^{k-1}\alpha_{i}n^i, \label{22} \hspace{15mm} E_n=\frac12\bigl(\varphi(n)+\varphi(n+1)\bigr). \end{equation} For obvious reasons, and in analogy with \cite{Asveld}, we call such an extension of the FR the "inhomogeneous Fibonacci relation". Again it is sufficient to deal with the structure function (\ref{5}) of the deformed oscillator instead of the energy itself. So, consider the two relations \begin{equation} \varphi(n\!+\!1)\!=\!\lambda \varphi(n)+\rho\, \varphi(n\!-\!1) +\sum_{i=0}^{k-1}\tilde\alpha_{i}\,n^i, \hspace{8mm} \varphi(n+2)\!=\!\lambda \varphi(n+1) +\rho\, \varphi(n) +\sum_{i=0}^{k-1}\tilde{\tilde\alpha}_{i}\,n^i. \label{23} \end{equation} The validity of these two equations guarantees fulfillment of the inhomogeneous Fibonacci relation (\ref{22}) if in addition we require $\tilde\alpha_{i}+\tilde{\tilde\alpha}_{i}=\alpha_i$; the factor $\frac12$ in (\ref{22}) arises from averaging the two relations in (\ref{23}).
It can be shown that the ($k+2$)-term\footnote{We count all the terms in (\ref{22}), including the $\lambda$-term and the $\rho$-term.} inhomogeneous FR is satisfied for all the polynomial oscillators for which $r=k$. Conversely, the oscillator with $(r+1)$-th order polynomial structure function satisfies any $(k+2)$-term inhomogeneous FR such that \ \ $k\!>\!r$. On the other hand, the oscillator with $r$-th order polynomial structure function does not satisfy any $k$-term generalized inhomogeneous FR such that $k\!<\!r$. Accordingly, the $k$-term FR is not valid for all the polynomial oscillators for which $r\!>\!k$. Instead of proving the general statements of the latter paragraph we give only particular examples, with the necessary data collected in the Table. \begin{center} \begin{tabular}{|c|l|l|l|} \hline {} & {} & {} & {} \\ & Coefficients $\tilde{\alpha}_0, \tilde{\alpha}_1, ..., \tilde{\alpha}_r$ & Coefficients $\tilde{\tilde\alpha}_0, \tilde{\tilde\alpha}_1, ..., \tilde{\tilde\alpha}_r$ & Coefficients $\alpha_0, \alpha_1, ..., \alpha_r$ \\ {} & from (\ref{23}) & from (\ref{23}) & from (\ref{22}) \\ \hline {} & {} & {} & {} \\ $k=1$ & $\tilde{\alpha}_0=2\mu_1$ & $\tilde{\tilde\alpha}_0=2\mu_1$ & $\alpha_0=4\mu_1$ \\ \hline {} & {} & {} & {} \\ $k=2$ & $\tilde{\alpha}_0=2\mu_1$ & $\tilde{\tilde\alpha}_0=2\mu_1+6\mu_2$ & $\alpha_0=4\mu_1+6\mu_2$ \\ {} & $\tilde{\alpha}_1=6\mu_2$ & $\tilde{\tilde\alpha}_1=6\mu_2$ & $\alpha_1=12\mu_2$ \\ \hline {} & {} & {} & {} \\ $k=3$ & $\tilde{\alpha}_0=2\mu_1+2\mu_3$ & $\tilde{\tilde\alpha}_0=2\mu_1+6\mu_2+14\mu_3$ & $\alpha_0=4\mu_1+6\mu_2+16\mu_3$ \\ {} & $\tilde{\alpha}_1=6\mu_2$ & $\tilde{\tilde\alpha}_1=6\mu_2+24\mu_3$ & $\alpha_1=12\mu_2+24\mu_3$ \\ {} & $\tilde{\alpha}_2=12\mu_3$ & $\tilde{\tilde\alpha}_2=12\mu_3$ & $\alpha_2=24\mu_3$ \\ \hline {} & {} & {} & {} \\ $k=4$ & $\tilde{\alpha}_0=2\mu_1+2\mu_3$ & $\tilde{\tilde\alpha}_0=2\mu_1+6\mu_2$ & $\alpha_0=4\mu_1+6\mu_2$ \\ {} & {} & \ \ \ \ \ $+14\mu_3+30\mu_4$& \ \ 
\ \ \ {$+16\mu_3+30\mu_4$}\\ & $\tilde{\alpha}_1=6\mu_2+10\mu_4$ & $\tilde{\tilde\alpha}_1=6\mu_2+24\mu_3+70\mu_4$ & $\alpha_1=12\mu_2+24\mu_3+80\mu_4$ \\ {} & $\tilde{\alpha}_2=12\mu_3$ & $\tilde{\tilde\alpha}_2=12\mu_3+60\mu_4$ & $\alpha_2=24\mu_3+60\mu_4$ \\ {} & $\tilde{\alpha}_3=20\mu_4$ & $\tilde{\tilde\alpha}_3=20\mu_4$ & $\alpha_3=40\mu_4$ \\ \hline {} & {} & {} & {} \\ $k=5$ & $\tilde{\alpha}_0=2\mu_1+2\mu_3+2\mu_5$ & $\tilde{\tilde\alpha}_0=2\mu_1+6\mu_2$ & $\alpha_0=4\mu_1+6\mu_2$ \\ {} & {} & \ \ \ \ \ {$+14\mu_3+30\mu_4+62\mu_5$} & \ \ \ \ \ $+16\mu_3+30\mu_4+64\mu_5$\\ {} & $\tilde{\alpha}_1=6\mu_2+10\mu_4$ & $\tilde{\tilde\alpha}_1=6\mu_2+24\mu_3$ & $\alpha_1=12\mu_2+24\mu_3+80\mu_4$ \\ {} & {} & \ \ \ \ \ {$+70\mu_4+180\mu_5$} & \ \ \ \ \ {$+180\mu_5$} \\ {} & $\tilde{\alpha}_2=12\mu_3+30\mu_5$ & $\tilde{\tilde\alpha}_2=12\mu_3+60\mu_4+210\mu_5$ & $\alpha_2=24\mu_3+60\mu_4+240\mu_5$ \\ {} & $\tilde{\alpha}_3=20\mu_4$ & $\tilde{\tilde\alpha}_3=20\mu_4+120\mu_5$ & $\alpha_3=40\mu_4+120\mu_5$ \\ {} & $\tilde{\alpha}_4=30\mu_5$ & $\tilde{\tilde\alpha}_4=30\mu_5$ & $\alpha_4=60\mu_5$ \\ \hline \end{tabular} \end{center} The quadratically deformed oscillator, with $r=1$ or $\varphi(n)=n+\mu_1n^2$, does not satisfy the standard FR (\ref{1}), but it obeys the simplest inhomogeneous FR \begin{equation} E_{n+1}=\lambda E_n+\rho E_{n-1}+\frac12\,\alpha_0 \ , \hspace{10mm} \lambda=2, \hspace{4mm} \rho=-1 , \hspace{5mm} \alpha_0=4\mu_1, \label{24} \end{equation} see the $k=1$ row in the Table. Let us note that, whatever $k$ is (i.e., for any power in $n$ of the polynomial structure function), the coefficients $\lambda$, $\rho$ are always $\lambda=2$, $\rho=-1$. The set $\alpha_0, \alpha_1,\ldots$, however, differs for different $k$, as seen in the five rows of the Table.
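The tilde-coefficients in the Table can be spot-checked numerically at the level of the two relations (23). A short Python sketch for the $k=2$ (cubic) row follows; the values of the deformation parameters are illustrative.

```python
import numpy as np

mu1, mu2 = 0.4, 0.15    # illustrative deformation parameters

def phi(n):
    """Cubic structure function, the r = 2 case of eq. (5)."""
    return n + mu1 * n**2 + mu2 * n**3

# Table, k=2 row: coefficients of the two phi-level relations (23),
# with lam = 2 and rho = -1 throughout.
t_a0, t_a1 = 2 * mu1, 6 * mu2               # tilde alpha_i
tt_a0, tt_a1 = 2 * mu1 + 6 * mu2, 6 * mu2   # tilde-tilde alpha_i

for n in range(1, 8):
    lhs1 = phi(n + 1) - 2 * phi(n) + phi(n - 1)    # first relation in (23)
    lhs2 = phi(n + 2) - 2 * phi(n + 1) + phi(n)    # second relation in (23)
    assert np.isclose(lhs1, t_a0 + t_a1 * n)
    assert np.isclose(lhs2, tt_a0 + tt_a1 * n)
print("relations (23) hold for the k=2 row of the Table")
```

The other rows of the Table can be checked the same way by extending `phi` to higher powers of $n$.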
Remark that, contrary to the case of the $k$-bonacci relation, where all the coefficients $\lambda_i^{(k)}$ in (\ref{14}) are truly constant (independent of $n$ {\em and} $\{\mu\}$), here the $\alpha_i$ are functions of the $\mu_j$. \subsection{Polynomially deformed oscillators as quasi-Fibonacci ones} \hspace{3mm} In subsection 2.2 we assumed the coefficients $\lambda_i$, $i=0,...,k-1$, in the $k$-generalized Fibonacci (or $k$-bonacci) relation (\ref{13}) to be real constants. In this subsection we modify the initial two-term linear {\bf standard} FR (\ref{1}) by admitting an explicit dependence on the number $n$ of both $\lambda$ and $\rho$ entering the relation. That is, we now deal with the so-called {\it quasi-Fibonacci relation}\footnote{Below, for convenience, we denote $\lambda(n)$ and $\rho(n)$ also as $\lambda_n$ and $\rho_n$.}: \begin{equation} E_{n+1}=\lambda(n)E_n+\rho(n)E_{n-1}\ . \label{25} \end{equation} Let us note that for the $\mu$-oscillator from \cite{Jann}, which is non-Fibonacci, the quasi-Fibonacci properties have been described in detail in Ref. \cite{GKR}, where three different ways of deriving $\lambda_n$ and $\rho_n$ have been explored. Here, for the polynomially deformed or $\{\mu\}$-oscillators, only two of the three are considered. Following the first way, we deal with the system of equations related to (\ref{25}), namely \begin{equation} \begin{cases} \varphi(n+1)=\lambda_n \varphi(n)+\rho_n \varphi(n-1)\ ; \cr \varphi(n+2)=\lambda_n \varphi(n+1)+\rho_n \varphi(n)\ . \label{26} \end{cases} \end{equation} Their simultaneous validity guarantees fulfillment of (\ref{25}). Solving (\ref{26}) yields \begin{equation} \lambda_n=\frac{\varphi(n+1)-\rho_n\,\varphi(n-1)}{\varphi(n)}\ , \hspace{5mm} \rho_n=\frac{\varphi(n+2)\varphi(n)-\varphi^2(n+1)} {\varphi^2(n)-\varphi(n+1)\varphi(n-1)}\ .
\label{27} \end{equation} With account of the explicit form (\ref{5}) of the structure function we have (with $\mu_0=1$) \[ \rho_n=\frac{\sum_{i=0}^r\mu_in^{i+1}\sum_{j=0}^r\mu_j(n+2)^{j+1}- \sum_{i=0}^r\mu_i(n+1)^{i+1}\sum_{j=0}^r\mu_j(n+1)^{j+1}} {\sum_{i=0}^r\mu_in^{i+1}\sum_{j=0}^r\mu_jn^{j+1}- \sum_{i=0}^r\mu_i(n-1)^{i+1}\sum_{j=0}^r\mu_j(n+1)^{j+1}}. \] Plugging the obtained expression for $\rho_n$ into eq. (\ref{27}) also yields $\lambda_n$. To proceed in the second way, see \cite{GKR}, we put $ \rho_n=\lambda_{n-1}$ in (\ref{25}), which gives \[ E_{n+1}=\lambda_{n}E_{n}+\lambda_{n-1}E_{n-1} \] or \begin{equation} \lambda_{n+1}+\frac{E_n}{E_{n+1}}~\lambda_n=\frac{E_{n+2}}{E_{n+1}}\ , \hspace{5mm} n\geq 0 \ . \label{28} \end{equation} With the initial condition $\lambda_0=c$, we find by induction the formula \[ \lambda_n\equiv\lambda(n)=\frac{\sum_{j=2}^{n+1}(-1)^{n-j+1}E_j+(-1)^ncE_0}{E_n} \] which in terms of the structure function reads \[ \lambda_n=\frac{\sum_{j=2}^{n+1}(-1)^{n-j+1}\varphi(j)+(-1)^nc\varphi(0)}{\varphi(n)}\ . \] With account of (\ref{5}), the expressions for $\lambda_n$ and $\rho_n=\lambda_{n-1}$ which provide validity of the quasi-Fibonacci relation (\ref{25}) take the final explicit form \begin{equation} \lambda_n=\frac{\sum_{j=2}^{n+1}(-1)^{n-j+1}\sum_{i=0}^r\mu_ij^{i+1}} {\sum_{i=0}^r\mu_in^{i+1}}\ , \hspace{5mm} \rho_n=\frac{\sum_{j=2}^{n}(-1)^{n-j}\sum_{i=0}^r\mu_ij^{i+1}} {\sum_{i=0}^r\mu_i(n-1)^{i+1}}\ . \label{29} \end{equation} This completes our short quasi-Fibonacci treatment of the polynomially deformed $\{\mu\}$-oscillators, \ $\{\mu\}\equiv(\mu_1, \mu_2,...,\mu_r)$. \section{Deformed oscillators, polynomial in $q$- or $p,\!q$-brackets}\hspace{5mm} Here we examine classes of oscillators other than the $\{\mu\}$-deformed ones (with more canonical deformation parameters added), obeying Tribonacci and higher-order relations.
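As a numerical illustration of the Tribonacci behaviour discussed below, the following Python sketch checks that the quadratic $q$-deformed structure function $[n]_q+\mu[n]_q^2$ satisfies a three-term relation with the constant coefficients $[3]_q$, $-q[3]_q$, $q^3$; the values of $q$ and $\mu$ are illustrative.

```python
import numpy as np

q, mu = 0.7, 0.25    # illustrative parameter values

def qbracket(n):
    """[n]_q = (1 - q^n)/(1 - q), eq. (31)."""
    return (1 - q**n) / (1 - q)

def phi(n):
    """phi_n(q; mu) = [n] + mu [n]^2, eq. (30)."""
    b = qbracket(n)
    return b + mu * b**2

# Tribonacci coefficients of eq. (32): lam = [3], rho = -q[3], sig = q^3.
lam = qbracket(3)
rho = -q * qbracket(3)
sig = q**3

for n in range(2, 10):
    lhs = phi(n + 1)
    rhs = lam * phi(n) + rho * phi(n - 1) + sig * phi(n - 2)
    assert np.isclose(lhs, rhs)
print("Tribonacci relation holds for the (q; mu)-oscillator")
```

The check works for any $q>0$ and $\mu\geq0$, reflecting the $\mu$-independence of the coefficients.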
\subsection{ A class of $(q;\mu)$\,- and $(q;\{\mu\})$\,-deformed oscillators} Consider the $(q;\mu)$-deformed oscillator defined by the structure function \begin{equation} \varphi_{n}(q;\mu)\equiv\varphi(n; q,\mu)=[n]+\mu [n]^2=[n]\bigl(1+\mu[n]\bigr)\ , \label{30} \end{equation} \begin{equation} [n]\equiv[n]_q=\frac{1-q^n}{1-q}, \hspace{5mm} q>0 \ .\label{31} \end{equation} It can be proven that the structure function (\ref{30}) and hence the energy values $E_{n}$ of such oscillators obey the 3-term extended Fibonacci (= Tribonacci) relation \begin{equation} \varphi_{n+1}(q,\mu)= \lambda(q)\, \varphi_n(q,\mu) + \rho(q)\,\varphi_{n-1}(q,\mu) + \sigma(q)\,\varphi_{n-2}(q,\mu)\ , \label{32} \end{equation} \begin{equation} \hspace{9mm} E_{n+1}= \lambda(q) E_{n} + \rho(q) E_{n-1} + \sigma(q) E_{n-2}\ , \label{33} \end{equation} where $\lambda(q)$, $\rho(q)$, $\sigma(q)$ depend on the parameter $q$ as \[ \lambda(q) =[3] \ , \hspace{12mm} \rho(q) =-q[3] \ , \hspace{12mm} \sigma(q) = q^3 \ \] (compare with (\ref{12}) and its coefficients $\lambda_0=3,\ \lambda_1=-3,\ \lambda_2=1$). The result in (\ref{30}), (\ref{32})-(\ref{33}), with the above $\lambda(q)$, $\rho(q)$, $\sigma(q)$, generalizes to the following statement. \vspace{2mm} {\bf Proposition 2.} For the $(q;\{\mu\})$-deformed oscillators, their structure function \begin{equation} \varphi_n(q;\{\mu\})=[n]_q+\sum^r_{j=1}\mu_j\, ([n]_q)\!^{j+1} \ , \hspace{8mm} \{\mu\}\equiv(\mu_1,\mu_2,...,\mu_r), \label{34} \end{equation} and thus the energies $E_{n}$ obey the $k$-term extended Fibonacci (or "$k$-bonacci") relation \begin{equation} \varphi_{n+1}(q;\{\mu\})= \lambda_0\,\varphi_n(q;\{\mu\}) + \lambda_1\,\varphi_{n-1}(q;\{\mu\}) + ... + \lambda_{k-1}\,\varphi_{n-k+1}(q;\{\mu\})\ , \label{35} \end{equation} \begin{equation} \hspace{9mm} E_{n+1}= \lambda_0\,E_n(q;\{\mu\}) + \lambda_1\,E_{n-1}(q;\{\mu\}) + ... 
+ \lambda_{k-1}\,E_{n-k+1}(q;\{\mu\})\ , \label{36} \end{equation} if $r=k-2$ and the coefficients $\lambda_i=\lambda_i(q)$, $i=0,1,...,k-1$, are taken in the form \begin{equation} \lambda_i(q)=\lambda^{(k)}_i(q)=(-1)^i\, q^{i(i+1)/2}\frac{[k]!}{[i+1]![k-1-i]!}= (-1)^i\,q^{i(i+1)/2}\left( \begin{array}{c} \hspace{-2mm}k\hspace{-2mm}\\ \hspace{-2mm}i\!+\!1\hspace{-2mm}\\ \end{array} \right)_q\ . \label{37} \end{equation} The proof proceeds in analogy with that of Proposition 1. Note that in the limit $q\rightarrow 1$, formulas (\ref{34}) and (\ref{37}) reduce respectively to (\ref{5}) and (\ref{14}), which in this limit gives complete recovery of Proposition 1. As an interesting fact, let us stress the independence of $\lambda_i^{(k)}(q)$ of $\{\mu\}$, both in (\ref{37}) and in this limit. \subsection{A class of ($p,q;\mu$)-deformed nonlinear oscillators}\hspace{5mm} Let us recall that the $q$-deformed oscillator linear in the $q$-bracket (\ref{31}) possesses, on the one hand, the standard 2-term Fibonacci property (\ref{1}) while, on the other hand, it is the particular $p=1$ case of the $p,\!q$-deformed oscillator whose $p,\!q$-bracket is \begin{equation} [n]_{p\!,q}\equiv\frac{p^n-q^n}{p-q} \ . \label{38} \end{equation} One could expect that the 3-term (Tribonacci) relation holds also for the oscillators involving, besides $\mu$, the two deformation parameters $p$ and $q$, so that the structure function is \begin{equation} \varphi_{n}(p,q;\mu) =[n]_{p\!,q}+\mu [n]_{p\!,q}^2=[n]_{p\!,q}\bigl(1+\mu[n]_{p\!,q} \bigr)\ . \label{39} \end{equation} But it turns out that this fails. To prove this fact, suppose the opposite, namely that the deformed oscillator with the structure function (\ref{39}) obeys the Tribonacci relation \begin{equation} \varphi_{n+1}(p,q;\mu)= \lambda({p,q})~ \varphi_n(p,q;\mu) + \rho({p,q})~ \varphi_{n-1}(p,q;\mu) + \sigma({p,q})~ \varphi_{n-2}(p,q;\mu)\ .
\label{40} \end{equation} Inserting (\ref{39}) into the relation (\ref{40}) and equating the corresponding coefficients, we deduce the following set of equations: \vspace{3mm} $p^n$:\ \ \ \ \hspace{5mm} $p^2-pq=\lambda(p-q)+\rho(1-p^{-1}q)+\sigma(p^{-1}-p^{-2}q)$\ , \vspace{3mm} $q^n$:\ \ \ \ \hspace{5.5mm} $q^2-pq=\lambda(q-p)+\rho(1-q^{-1}p)+\sigma(q^{-1}-q^{-2}p)$\ , \vspace{3mm} $(pq)^n$:\ \ \ \ \ $pq=\lambda+\rho p^{-1}q^{-1}+\sigma p^{-2}q^{-2}$\ , \vspace{3mm} $p^{2n}$:\ \ \ \ \hspace{4mm} $p^2=\lambda+\rho p^{-2}+\sigma p^{-4}$\ , \vspace{3mm} $q^{2n}$:\ \ \ \ \hspace{4mm} $q^2=\lambda+\rho q^{-2}+\sigma q^{-4}$\ . \ \ \ \ \vspace{3mm}\noindent This system of equations is easily shown to be inconsistent (having no solutions). Therefore, the structure function (\ref{39}) does not satisfy the Tribonacci relation (\ref{40}). Analogously to this negative result, it can be proven that the deformed oscillator in question does not satisfy the 4-term (or Tetranacci) relation either. In the next subsection we will show how to properly treat the deformed oscillators defined by the structure function (\ref{39}) and alike, from the viewpoint of yet higher extensions of the Fibonacci (Tribonacci, Tetranacci) relations. \subsection{Deformed $(p,q;\mu)$-oscillators as Pentanacci oscillators} \hspace{3mm} One can prove that the following statement is true.
{\bf Proposition 3.} The family of $(p,\!q;\mu)$-oscillators with quadratic in $[n]_{p,q}$ structure function (\ref{39}) obeys the Pentanacci (5-term extended Fibonacci) relation \begin{equation} \varphi_{n+1}=\lambda(p,q)\varphi_{n}+\rho(p,q)\varphi_{n-1}+ \sigma(p,q)\varphi_{n-2}+\gamma(p,q)\varphi_{n-3}+\delta(p,q)\varphi_{n-4}\ \label{41} \end{equation} if the coefficients $\lambda(p,q), \rho(p,q), \sigma(p,q), \gamma(p,q), \delta(p,q)$ are\footnote{Note their $\mu$-independence.}: \[\lambda(p,q)=p^2+q^2+p+q+pq= [2]_{p,q}+[3]_{p,q},\] \[\hspace{-7mm}\rho(p,q)=-p^3q-p^3-2p^2q-p^2q^2-pq^3-2pq^2-pq-q^3=\] \[\hspace{5.5mm}=-\Bigl([4]_{p,q}+pq\bigl(1+[2]_{p,q}+[3]_{p,q}\bigr)\Bigr)= -\Bigl([3]_{p,q}\cdot[2]_{p,q}+pq(1+[3]_{p,q})\Bigr),\] \[\sigma(p,q)=2p^3q^2+2p^2q^3+p^4q+pq^4+p^3q+pq^3+p^2q^2+p^3q^3= pq\Bigl([3]_{p,q}([2]_{p,q}+1)+p^2q^2\Bigr), \] \[\hspace{-7mm}\gamma(p,q)=-\bigl(p^2q+p^2+pq^2+pq+q^2\bigr)p^2q^2= -p^2q^2\bigl([3]_{p,q}+pq[2]_{p,q}\bigr),\] \begin{equation} \hspace{-7mm}\delta(p,q)=p^4q^4. \label{42} \end{equation} The {\it proof} proceeds by direct verification. {\bf Remark 3.} One can show that these same oscillators also satisfy the respective $k$-term extended Fibonacci relation for any integer $k\geq5$. Let us illustrate this for $k=6$. We take one more copy of the relation (\ref{41}) in which the shift $n\to n-1$ is done, and subtract this copy, multiplied by some $\kappa$, from the initial relation (\ref{41}). Then the 6-term extended relation \[ \varphi_{n+1}=(\lambda-\kappa)\,\varphi_{n}+(\rho+\lambda\kappa)\,\varphi_{n-1}+ (\sigma+\rho\kappa)\,\varphi_{n-2}+(\gamma+\sigma\kappa)\,\varphi_{n-3}\,+ \] \[ \hspace{10mm} +\,(\delta+\gamma\kappa)\,\varphi_{n-4}+ \kappa\delta\,\varphi_{n-5} \] results and is valid for any real number $\kappa$, with the coefficients $\lambda, \rho, \sigma, \gamma, \delta$ taken from (\ref{42}).
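Since the proof is by direct verification, the Pentanacci relation (\ref{41}) with the coefficients (\ref{42}) is also easy to confirm in exact rational arithmetic. The following sketch is illustrative only (the test values of $p$, $q$, $\mu$ are arbitrary choices of ours):

```python
from fractions import Fraction

def bracket(n, p, q):
    """p,q-bracket [n]_{p,q} = (p^n - q^n)/(p - q), cf. (38)."""
    return (p ** n - q ** n) / (p - q)

def phi(n, p, q, mu):
    """Structure function (39), quadratic in the p,q-bracket."""
    b = bracket(n, p, q)
    return b * (1 + mu * b)

# Arbitrary exact test values of the deformation parameters.
p, q, mu = Fraction(3, 2), Fraction(2, 3), Fraction(1, 5)

# The mu-independent Pentanacci coefficients of (42).
lam = p**2 + q**2 + p + q + p*q
rho = -(p**3*q + p**3 + 2*p**2*q + p**2*q**2 + p*q**3 + 2*p*q**2 + p*q + q**3)
sig = (2*p**3*q**2 + 2*p**2*q**3 + p**4*q + p*q**4
       + p**3*q + p*q**3 + p**2*q**2 + p**3*q**3)
gam = -(p**2*q + p**2 + p*q**2 + p*q + q**2) * p**2 * q**2
delta = p**4 * q**4

# Check the 5-term relation (41) exactly over a range of n.
for n in range(4, 16):
    lhs = phi(n + 1, p, q, mu)
    rhs = (lam * phi(n, p, q, mu) + rho * phi(n - 1, p, q, mu)
           + sig * phi(n - 2, p, q, mu) + gam * phi(n - 3, p, q, mu)
           + delta * phi(n - 4, p, q, mu))
    assert lhs == rhs, n
print("Pentanacci relation (41)-(42) holds exactly")
```

Changing $\mu$ leaves the assertions intact, in line with the $\mu$-independence of the coefficients.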
Note that this same procedure can be applied any desired number of times, with the appropriate shifts $n\rightarrow n-j$ (clearly, $j<n$). {\bf Remark 4.} It is of interest to check the $p=1$ limit of (\ref{39}) and (\ref{41})-(\ref{42}). Contrary to the naive expectation that we should obtain the $k=5$ case of Proposition 2, with the coefficients $\lambda_i^{(5)}$ given in (\ref{37}), we find a kind of surprise: the coefficients that follow from (\ref{42}) are other than $\lambda_0^{(5)}, \lambda_1^{(5)}, \lambda_2^{(5)}, \lambda_3^{(5)}$ and $\lambda_4^{(5)}$. This "controversy"\ is rooted in the fact that the $(q;\mu)$-oscillator with $\varphi(n)=[n]_q+\mu[n]_q^2$ already respects the Tribonacci (3-term extended Fibonacci) relation while the usual FR fails. The situation with the $(p,q;\mu)$-oscillator is more involved: for it, neither the usual 2-term FR nor the 3- and 4-term extensions of the Fibonacci relation are valid; only the 5-term or Pentanacci relation does hold. \subsection{Nine-bonacci deformed $(p,q;\mu_1,\mu_2)$-oscillators} \hspace{3mm} Consider the cubic in the $p,\!q$-bracket $[n]_{p,q}$ structure function \begin{equation} \varphi_n=[n]_{p,q}+\mu_1[n]_{p,q}^2+\mu_2[n]_{p,q}^3\ , \hspace{8mm} [n]_{p,q}=\frac{p^n-q^n}{p-q}\ . \label{43} \end{equation} It can be proven that such an oscillator does not satisfy the standard FR, nor does it satisfy any $k$-term extended ($k$-bonacci) relation with $k\leq 8$. However, it does satisfy the 9-term extended version of FR ({\em "Nine-bonacci relation"}), as reflected in the next statement. {\bf Proposition 4}.
The $(p,q;\mu_1,\mu_2)$-oscillator given by the structure function $\varphi(n)$ in (\ref{43}) satisfies the $9$-term extension of FR or Nine-bonacci relation of the form \begin{equation} \varphi_{n+1}=\sum_{j=0}^8A_j\,\varphi_{n-j} \label{44} \end{equation} if the coefficients $A_j\equiv A_j(p,q)$ are given as ({\em note their} $\mu_1,\mu_2$\,-{\em independence}) \[ A_0(p,q)=[4]_{p,q}+[3]_{p,q}+[2]_{p,q}, \] \[ A_1(p,q)=-([6]_{p,q}+(1+pq)[5]_{p,q}+(1+pq)[4]_{p,q}+2pq[3]_{p,q}+\] \[\hspace{15mm}+pq(1+pq)[2]_{p,q}+pq(1+p^2q^2)), \] \[ A_2(p,q)=(1+pq)[7]_{p,q}+2pq[6]_{p,q}+pq(2+pq)[5]_{p,q}+pq(2+4pq+p^2q^2)[4]_{p,q}+\] \[\hspace{15mm}+pq(1+2pq+2p^2q^2)[3]_{p,q}+pq(pq+2p^2q^2)[2]_{p,q}+2p^3q^3, \] \[ A_3(p,q)=-pq\Bigl([8]_{p,q}+(1+pq)[7]_{p,q}+ (1+2pq+p^2q^2)[6]_{p,q}+pq(3+2pq)[5]_{p,q}+\] \[\hspace{22mm}+pq(1+4pq+p^2q^2)[4]_{p,q}+pq(1+2pq+3p^2q^2)[3]_{p,q}+ p^2q^2(2+2pq+p^2q^2)[2]_{p,q}+\] \[\hspace{15mm}+p^3q^3(2+p^2q^2)\Bigr), \] \begin{equation} A_4(p,q)=p^2q^2\Bigl([8]_{p,q}+(1+pq)[7]_{p,q}+ (1+2pq+p^2q^2)[6]_{p,q}\Bigr)+ \label{45} \end{equation} \[ \hspace{25mm}+p^3q^3\Bigl((2+3pq)[5]_{p,q}+(1+4pq+p^2q^2)[4]_{p,q}\Bigr) +p^4q^4\Bigl((3+2pq+p^2q^2)[3]_{p,q}+ \] \[\hspace{15mm}+(1+2pq+2p^2q^2)[2]_{p,q}+(1+2p^2q^2)\Bigr), \] \[ \hspace{10mm} A_5(p,q)=-p^3q^3\Bigl((1+pq)[7]_{p,q}+2pq[6]_{p,q}+pq(1+2pq)[5]_{p,q}+ pq(1+2pq+2p^2q^2)[4]_{p,q}+\] \[\hspace{15mm}+p^2q^2(2+2pq+p^2q^2)[3]_{p,q}+p^3q^3(2+pq)[2]_{p,q}+2p^4q^4\Bigr), \] \[ A_6(p,q)=p^7q^7\Bigl([6]_{p,q}+ 2[3]_{p,q}+(1+pq)[2]_{p,q}+(2+p^2q^2)\Bigr)\] \[\hspace{15mm}+p^5q^5(1+pq)[5]_{p,q}+p^6q^6(1+pq)[4]_{p,q}, \] \[ A_7(p,q)=-p^7q^7\Bigl([4]_{p,q}+pq[3]_{p,q}+p^2q^2[2]_{p,q}\Bigr), \] \[A_8(p,q)=p^{10}q^{10}. \] The proof is achieved by direct verification. 
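The direct verification behind Proposition 4 can likewise be delegated to exact arithmetic. Rather than transcribing (\ref{45}), the sketch below (ours; the rational test values and helper names are arbitrary) recovers the nine coefficients $A_j$ by solving a linear system built from nine consecutive instances of (\ref{44}), checks their $\mu_1,\mu_2$-independence, and spot-checks them against $A_0$ and $A_8$ of (\ref{45}):

```python
from fractions import Fraction

def bracket(n, p, q):
    """p,q-bracket [n]_{p,q} of (38)."""
    return (p ** n - q ** n) / (p - q)

def phi(n, p, q, mu1, mu2):
    """Cubic structure function (43)."""
    b = bracket(n, p, q)
    return b + mu1 * b ** 2 + mu2 * b ** 3

def solve(M, rhs):
    """Exact Gauss--Jordan elimination over the rationals."""
    n = len(rhs)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        pv = A[c][c]
        A[c] = [x / pv for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[r][n] for r in range(n)]

p, q = Fraction(5, 4), Fraction(3, 4)      # arbitrary generic test values

def nine_coeffs(mu1, mu2, n0=9):
    """Recover A_0..A_8 of (44) from nine consecutive instances of the relation."""
    rows = [[phi(n - j, p, q, mu1, mu2) for j in range(9)]
            for n in range(n0, n0 + 9)]
    rhs = [phi(n + 1, p, q, mu1, mu2) for n in range(n0, n0 + 9)]
    return solve(rows, rhs)

A1 = nine_coeffs(Fraction(1, 3), Fraction(1, 7))
A2 = nine_coeffs(Fraction(2, 5), Fraction(4, 9))
assert A1 == A2                                            # mu_1, mu_2 - independence
assert A1[0] == sum(bracket(m, p, q) for m in (2, 3, 4))   # A_0 of (45)
assert A1[8] == (p * q) ** 10                              # A_8 of (45)
mu1, mu2 = Fraction(1, 3), Fraction(1, 7)
for n in range(20, 26):                                    # the relation persists
    lhs = phi(n + 1, p, q, mu1, mu2)
    assert lhs == sum(A1[j] * phi(n - j, p, q, mu1, mu2) for j in range(9))
print("Nine-bonacci relation (44) confirmed numerically")
```

The $9\times 9$ system is nonsingular for generic $p$, $q$ because $\varphi_n$ is a combination of nine distinct exponentials $p^aq^b$ with $a+b\le 3$, so the recovered coefficients are unique.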
{\bf Remark 5.} If $p\rightarrow 1$, (\ref{43}) reduces to the $r=2$ case of (\ref{34}), and the coefficients (\ref{45}) turn into \[ A_0(q)=[4]_q+[3]_q+[2]_q, \] \[ A_1(q)=-2q^2([4]_q+[3]_q+2q)-q^2[2]_q, \] \[ A_2(q)=[8]_q+5q[6]_q+2q[5]_q+q[4]_q+6q^2[3]_q+q^3[2]_q+6q^3, \] \[ A_3(q)=-q(3[8]_q+6q[6]_q+2q[5]_q+5q^2[4]_q+6q^3[3]_q+8q^3[2]_q+2q^4), \] \begin{equation} A_4(q)=3q^2[8]_q+6q^3[6]_q+2q^4[5]_q+11q^4[4]_q+4q^5[2]_q+4q^6, \label{46} \end{equation} \[ A_5(q)=-q^3([8]_q+6q[6]_q+q[5]_q+4q^2[4]_q+3q^3[3]_q+3q^3[2]_q+4q^4), \] \[ A_6(q)=2q^5[6]_q+q^6[5]_q+q^6[4]_q+2q^7[3]_q+3q^8[2]_q+3q^9, \] \[ A_7(q)=-q^7([4]_q+q[3]_q+q^2[2]_q), \] \[A_8(q)=q^{10}. \] Here again we find a surprise: the $p\rightarrow 1$ limits of the coefficients $A_j$ do not merge with those stemming from the general formula (\ref{37}). This is rooted in the fact that the structure function (\ref{43}) goes over into $\varphi(n)=[n]_q+\mu_1[n]^2_q+\mu_2[n]_q^3$, for which already the Pentanacci relation is valid as follows from Proposition 2. To lift the controversy, let us show that it is possible to derive the 9-term extended FR (\ref{44}), just with the coefficients (\ref{46}), starting from the Pentanacci relation given by (\ref{41})-(\ref{42}) at $k=4$. Indeed, consider besides the Pentanacci relation (\ref{41}) at $k=4$, with fixed $n$, the four additional copies of it written according to the shifts $n\to n-1$, $n\to n-2$, $n\to n-3$, $n\to n-4$. Then multiply these four relations respectively by $t, x, y, z$ of the form \[t=-([2]+[3]),\] \[ x=-(2q^2+1)([4]+[3])-[2](q^2-1)-4q^3, \] \[ y=-[8]-5q[6]\!-\!2q[5]+[4](2q^2\!-\!q+2)+[3](-4q^2+2)+[2](-q^3+q^2\!-\!1)\!-\!2q^3, \] \[ z=3q[8]+[6](6q^2+5q)+2q[5](q+1)+[4]q(5q^2+1)+[3](6q^4+6q^2+4)+ \] \[ \hspace{3.5mm}+\,[2](8q^4+q^3+1)+2q^5+6q^3, \] and take their sum, term by term, with the first copy. That will lead to the above Nine-bonacci relation (\ref{44}), with exactly the coefficients given in (\ref{46}).
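The role of Proposition 2 in this discussion can itself be probed numerically. The sketch below (ours; the rational test values are arbitrary) verifies the $k$-bonacci relation (\ref{35}) with the coefficients (\ref{37}) in exact arithmetic for $k=3$ and $k=4$:

```python
from fractions import Fraction

def qbr(n, q):
    """q-bracket [n]_q of (31)."""
    return (1 - q ** n) / (1 - q)

def qfact(n, q):
    """q-factorial [n]! = [1][2]...[n]."""
    out = Fraction(1)
    for m in range(1, n + 1):
        out *= qbr(m, q)
    return out

def lam(i, k, q):
    """Coefficient lambda_i^{(k)}(q) of (37)."""
    return ((-1) ** i * q ** (i * (i + 1) // 2)
            * qfact(k, q) / (qfact(i + 1, q) * qfact(k - 1 - i, q)))

q = Fraction(4, 5)                       # arbitrary exact test value

def phi(n, mus):
    """Structure function (34): [n]_q plus higher powers weighted by mu_j."""
    b = qbr(n, q)
    return b + sum(mu * b ** (j + 2) for j, mu in enumerate(mus))

# k-bonacci relation (35) with coefficients (37), for r = k - 2 = 1 and 2.
for k, mus in [(3, [Fraction(1, 3)]), (4, [Fraction(1, 3), Fraction(2, 7)])]:
    for n in range(k, k + 8):
        lhs = phi(n + 1, mus)
        rhs = sum(lam(i, k, q) * phi(n - i, mus) for i in range(k))
        assert lhs == rhs, (k, n)
print("k-bonacci relation (35)-(37) verified exactly for k = 3, 4")
```

For $k=3$ the coefficients reproduce $\lambda(q)=[3]$, $\rho(q)=-q[3]$, $\sigma(q)=q^3$ quoted after (\ref{33}).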
We conclude this section with the {\em remark concerning the general situation} of deformed oscillators with a polynomial of any order in $[n]_{p,q}$ structure function, cf. (\ref{43}). It can be argued that the oscillator for which $\varphi(n)$ is a $(k+1)$-th order polynomial in $[n]_{p,q}$ obeys the $m$-bonacci relation where $m=\frac12(k+2)(k+3)-1$. It is however hardly possible to find the coefficients $\lambda_0, \lambda_1, \lambda_2,...,\lambda_{m-1}$ in explicit form {\it for arbitrary} $m$. \section{Conclusions}\hspace{3mm} In this paper we have shown, first, that deformed oscillators whose structure function is a polynomial in $n$, or $[n]_q$, or $[n]_{p,q}$, do not belong to the Fibonacci class. On the other hand, the $\{\mu\}$-oscillators (respectively $(q;\{\mu\})$-oscillators) for which $\varphi(n)$ is polynomial in $n$ (respectively polynomial in $[n]_q$) share the characteristic property: they satisfy the $k$-bonacci relation, with the coefficients (\ref{14}) or (\ref{37}), if the order of the polynomial is $r=k-2$. In particular, the Tribonacci relation occurs if the structure function is quadratic. It is worth mentioning the remarkable fact that in both these cases the coefficients (\ref{14}) and (\ref{37}) of the $k$-bonacci relation {\em do not depend on the set of numbers} $\{\mu_i\}$ involved in the polynomial, and depend only on its label (=subscript) and on the number $k$, linked as $r=k-2$ to the order of the polynomial. Independence of $\mu$ or $\mu_1$, $\mu_2$ is also seen in (\ref{42}) and (\ref{45}) for the Pentanacci and Nine-bonacci relations. Instead, the "initial values"\ $E_0$, $E_1$,..., $E_{k-1}$ for the $k$-bonacci relation {\it inevitably depend} on $\mu_1$, $\mu_2$,..., $\mu_{k-2}$, as given by the formula (\ref{3}) for $E_n$ combined with (\ref{5}); see also (\ref{35}), (\ref{36}).
Moreover, for deformed $\mu$-oscillators whose $\varphi(n)$ is given in (\ref{5}) we have studied the alternative possibility that the deformed oscillators of this family may also be considered as those obeying the inhomogeneous FR, see (\ref{22}). However, in this case, unlike the already mentioned property of $\lambda_i^{(k)}$ and $\lambda_i^{(k)}(q)$ in (\ref{14}) and (\ref{37}), the coefficients $\tilde{\lambda_i}$, $\tilde{\tilde{\lambda_i}}$ and $\lambda_i$ from (\ref{22}), (\ref{23}) {\it do depend} on the numbers $\{\mu_i\}\equiv(\mu_1,...,\mu_r)$ which determine the polynomial structure function (\ref{5}). This fact is manifest in the Table of subsection 2.3. Concerning the related class of deformed oscillators, polynomial in $[n]_q$, one can also consider them from the viewpoint of the inhomogeneous FR, but with an important change: the sum appearing in the RHS of the analogue of (\ref{23}) should now be taken over the powers of $q^n$ (instead of the powers of $n$). We have also shown that deformed oscillators of these two classes can be treated as quasi-Fibonacci ones, i.e. those obeying the relation with 2-term RHS where $\lambda=\lambda(n)$ and $\rho=\rho(n)$. Even more unusual is the situation with the class of deformed oscillators whose $\varphi(n)$ is polynomial in the $p,\!q$-bracket $[n]_{p,q}$. Indeed, the structure function quadratic in $[n]_{p,q}$ obeys the Pentanacci relation as the lowest valid one, while the FR, Tribonacci and Tetranacci relations all fail. Then, $\varphi(n)$ cubic in $[n]_{p,q}$ defines the family of oscillators which are 9-bonacci oscillators, and so on, according to the rule: the $m$-bonacci relation corresponds to the deformation with $(k+1)$-th order polynomial in $[n]_{p,q}$, where $m=\frac12(k+2)(k+3)-1$. Finally, let us note that both the $(q;\{\mu\})$- and $(p,q;\{\mu\})$-deformed oscillators can be viewed as quasi-Fibonacci oscillators, in complete analogy with our above treatment (in sec.
3.2) of $\{\mu\}$-oscillators, and in analogy with the content of our work \cite{GKR}. Our final remarks concern the physical aspects of the polynomially deformed non-Fibonacci oscillators, in the one-mode (non-covariant) case studied in this paper. We believe these nonlinear oscillators have good potential to find effective applications in a number of quantum-physics branches, from quantum optics \cite{Man',Alvarez,Sunil} to deformed field theory, say, in the spirit of \cite{Man'2,Curado}. Concerning special manifestations of just the non-Fibonacci nature of deformed oscillators, at present we can only mention our recent work \cite{GR_mu-B} on the application of $\mu$-oscillators (not Fibonacci but quasi-Fibonacci, see \cite{GKR}) to constructing the respective $\mu$-Bose gas model in analogy with the $p,q$-Bose gas model treated in \cite{AdGa}. Unlike the latter, for the $\mu$-Bose gas the evaluations of (intercepts of) 2- and 3-particle correlations are significantly more involved and do not yield closed expressions: only approximate formulas can be obtained. \subsection* {Acknowledgements} The authors are thankful to I.M. Burban and I.I. Kachurik for valuable discussions. This research was partially supported by the Grant 29.1/028 of the State Foundation of Fundamental Research of Ukraine, and by the Special Program of the Division of Physics and Astronomy of the NAS of Ukraine.
\section{Introduction} In Classical Mechanics, one of the most venerable equations on a (connected) Riemannian manifold $(M_0,g_0)$ is: \[ \hspace*{3.5cm}\frac{D\dot\gamma}{dt}(t)\ =\ - \nabla^{M_0} V (\gamma(t),t)\hspace*{3.5cm} (E_0) \] where $D/dt$ denotes the covariant derivative along $\gamma$ induced by the Levi--Civita connection of $g_0$ and $\dot\gamma$ represents the velocity field along $\gamma$, while $V:M_0\times\R \rightarrow \R$ is a smooth time--dependent potential. In fact, when $(M_0,g_0)$ is $\R^3$, this is just Newton's second law for forces that come from an external time-dependent potential. A basic property that its solutions may have is {\em completeness}, i.e., the extendability of their domain to all of $\R$. At the beginning of the seventies, some authors systematically studied this property (see, e.g., \cite{Eb,Go,WM} or also \cite[Theorem 3.7.15]{AM}) but, essentially, they focused only on the autonomous case, that is, when $V(x,t)\equiv V(x)$ ($V$ is independent of time). Very recently, the authors have considered the completeness of the trajectories not only for the general equation (E$_0$) but also for more general forces (see \cite{CRS2012}). Concretely, $- \nabla^{M_0} V$ was generalized to an arbitrary time-dependent vector field $X$, and forces depending linearly on the velocity, by means of an operator $F$, were also allowed. Nevertheless, it is especially interesting to understand and analyze accurately the differences between the autonomous and the non-autonomous case for a potential. Moreover, as pointed out in \cite{CFS} (see also \cite{CaSa}), the completeness for (E$_0$) is equivalent to the completeness for the geodesics of a class of relativistic spacetimes that generalizes the classical plane and pp--waves.
So, the aim of the present paper is, first, to analyze further the completeness in the non-autonomous case $X=- \nabla^{M_0} V$ (even admitting the linear dependence of the force with the operator $F$, see equation (E) below) and, then, to analyze the applications to generalized plane waves. This paper is organized as follows. In Section 2, we recall the framework for the completeness of Riemannian trajectories (Subsection 2.1), and give a new theorem on completeness (Subsection 2.2). The proofs of two results are provided. The first one is a technical comparison lemma that is commonly taken into account in the results on completeness (Lemma \ref{comparison}). The second one is a theorem on completeness (Theorem \ref{G01}), obtained by developing further the techniques in \cite{CRS2012}. In Section 3 we introduce plane wave type spacetimes (Subsection 3.1) and explain the relation between the problem of completeness of trajectories and the geodesic completeness of generalized plane waves (Subsection 3.2). Moreover, we give further results on geodesic completeness (Corollaries \ref{complete2}, \ref{complete3}) as a consequence of the previous result of completeness of trajectories. \section{Completeness of Riemannian trajectories} \subsection{Framework} Let $(\mo,g_0)$ be a (connected) smooth $n$--dimensional Riemannian manifold and $V:\mo\times\R \rightarrow \R$ a given smooth function. Taking $p\in \mo$ and $v\in T_p\mo$, there exists a unique inextensible smooth curve $\gamma : I \to \mo$, $0\in I$, solution of $(E_0)$ which satisfies the initial conditions \begin{equation}\label{initial} \gamma(0) = p,\quad \dot\gamma(0) = v. \end{equation} An inextensible solution of $(E_0)$ is {\sl complete} if it is defined on the whole real line. Note that equation $(E_0)$ in the trivial case $V\equiv 0$ is the equation of the geodesics in $(\mo,g_0)$. 
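To see why completeness can fail at all without further assumptions, it may help to recall the standard illustrative example (ours, not taken from the cited references) of $(\mo,g_0)=(\R,dx^2)$ with the autonomous potential $V(x)=-x^4$, which is not bounded from below: the trajectory with $\gamma(0)=\dot\gamma(0)=1$ escapes to infinity in finite time (one checks from conservation of energy that the escape time is below $1$). A minimal numerical sketch:

```python
def accel(x):
    # (E_0) on (R, dx^2) with V(x) = -x^4, so that gamma'' = -V'(x) = 4 x^3
    return 4.0 * x ** 3

def rk4_step(x, v, h):
    """One classical Runge--Kutta step for the system x' = v, v' = accel(x)."""
    k1x, k1v = v, accel(x)
    k2x, k2v = v + h / 2 * k1v, accel(x + h / 2 * k1x)
    k3x, k3v = v + h / 2 * k2v, accel(x + h / 2 * k2x)
    k4x, k4v = v + h * k3v, accel(x + h * k3x)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

x, v, t, h = 1.0, 1.0, 0.0, 1e-5
while t < 2.0 and abs(x) < 1e6:
    x, v = rk4_step(x, v, h)
    t += h

# The trajectory leaves every compact set well before t = 2, so the
# inextensible solution is defined only on a bounded interval.
assert t < 2.0 and abs(x) >= 1e6
```

Hypotheses such as the lower boundedness of $V$ recalled below rule out precisely this kind of behavior.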
Let us recall that a Riemannian manifold $(\mo,g_0)$ is {\sl geodesically complete} if any of its inextensible geodesics is defined on $\mathbb{R}$ or, equivalently, the metric distance induced by $g_0$ is complete. In \cite[Theorem 2.1]{Go} Gordon proved the completeness of the trajectories of $(E_0)$ if the potential $V$ is time--independent, bounded from below and, in addition, either $(\mo,g_0)$ is complete or $V$ is proper (i.e., $V^{-1}(K)$ is compact in $\mo$ for any compact $K\subset \R$). Other results in the autonomous case were given in \cite{Eb,WM} and \cite[Theorem 3.7.15]{AM}. Following \cite{CRS2012}, we generalize such results to the non--autonomous case by including also the action of a (1,1) tensor field $F$ along the natural projection $\pi : \mo \times \R \longrightarrow \mo$, i.e., we consider the second order differential equation \[ \hspace*{2.9cm}\frac{D\dot\gamma}{dt}(t)\ =\ F_{(\gamma(t),t)}\ \dot\gamma(t) - \nabla^{M_0} V (\gamma(t),t).\hspace*{2cm}(E) \] Let us remark that the existence and uniqueness result of inextensible solutions of $(E)$, under the same initial conditions \eqref{initial}, remains true, and, obviously, one has the notion of complete inextensible trajectory of $(E)$. Now, let us introduce some terminology in order to express natural conditions on $F$ and $V$. Notice that, in general, $F$ is neither self-adjoint nor skew-adjoint with respect to $g_0$; denote by $S$ the self--adjoint part of $F$. For each $t \in \R$, put \[ \| S(t) \|\ : = \ \max\big\{\big|S_{\sup}(t)\big|,\, \big|S_{\inf}(t)\big|\big\} \] where \[ S_{\sup}(t): = \sup_{\underset{\|v\|=1}{v\in T\mo}} g\left(v,S_{(p,t)} v\right) \quad \text{and} \quad S_{\inf}(t):= \inf_{\underset{\|v\|=1}{v\in T\mo}} g\left(v,S_{(p,t)} v\right). \] We say that $S$ is {\em bounded} (resp.
{\em upper bounded}, {\em lower bounded}) {\em along finite times} when, for each $T>0$, there exists a constant $N_T$ such that \begin{equation}\label{bf} \|S(t)\| \le N_T \; \hbox{(resp. $S_{\sup}(t) \le N_T$, $-S_{\inf}(t) \le N_T$)}\; \hbox{for all $t\in [-T,T]$.} \end{equation} Moreover, the potential $V$ is {\em bounded from below along finite times} if there exists a continuous function $\beta_0: \R\rightarrow \R$ such that \begin{equation}\label{bf1} V(p,t)\ \ge\ \beta_0(t)\quad \hbox{for all}\quad (p,t)\in \mo \times \R. \end{equation} In order to investigate the completeness of the inextensible solutions of equation $(E)$, let us recall that an integral curve $\rho$ of a vector field on a manifold, defined on some bounded interval $[a,b)$, $b<+\infty$, can be extended to $b$ (as an integral curve) if and only if there exists a sequence $\{t_n\}_n$, $t_n \to b^-$, such that $\{\rho(t_n)\}_n$ converges \cite[Lemma 1.56]{ON}. The following technical result follows directly from this fact and \cite[Lemma 3.1]{CRS2012}. \begin{lemma}\label{extend} Let $\gamma: [0,b) \to \mo$ be a solution of equation $(E)$ with $0<b<+\infty$. The curve $\gamma$ can be extended to $b$ as a solution of $(E)$ if and only if there exists a sequence $\{t_n\}_n \subset [0,b)$ such that $t_n \to b^-$ and the sequence of velocities $\{\dot\gamma(t_n)\}_n$ is convergent in the tangent bundle $T\mo$. \end{lemma} Furthermore, we need also the following result (compare with \cite[Example 2.2.H]{AM}). \begin{lemma}[{\bf Comparison Lemma}]\label{comparison} Let $\varphi :[a,+\infty) \to \R$ be a continuous monotone increasing function such that \begin{equation}\label{et1} \varphi(s) > 0\quad\ \hbox{for all $\ s\ge a$} \quad\hbox{and}\quad \int_a^{+\infty} \frac{ds}{\varphi(s)} = +\infty. 
\end{equation} If a $C^1$ function $v_0=v_0(t)$ satisfies the equation \begin{equation}\label{global} v_0'(t)\ =\ \varphi(v_0(t))\quad \hbox{with $v_0(0) \ge a$,} \end{equation} and it is inextensible, then it is defined for all $t \ge 0$. \vspace{1mm} Furthermore, if $v :[0,b) \to \R$ is a continuous function such that \begin{equation}\label{stime} \left\{\begin{array}{ll} \displaystyle a \le v(t) \le v(0) + \int_0^t \varphi(v(s)) d s &\hbox{for all \; $t \in [0,b)$,}\\[1mm] v(0) \le v_0(0),& \end{array}\right. \end{equation} then $v(t) \le v_0(t)$ for all $t \in [0,b)$. \end{lemma} \begin{proof} Even though this is a simple exercise, we prefer to give here a complete argument for the sake of completeness. If $v_0=v_0(t)$ is a $C^1$ inextensible solution of \eqref{global} in the interval $[0,\bar b)$, then \begin{equation}\label{et2} v_0(t) \ge v_0(0) \ge a\quad \hbox{for all $t\in [0,\bar b)$,} \end{equation} whence, for all $t\in [0,\bar b)$, $\varphi(v_0(t))(>0)$ is well defined and $v_0$ becomes strictly monotone increasing. Thus, dividing both the terms of \eqref{global} by $\varphi(v_0(t))$ and integrating in $[0,t]$, $0 < t < \bar b$, we have \[ \int_0^t\frac{v_0'(\tau)}{\varphi(v_0(\tau))}d\tau = t, \] hence, $v_0 = v_0(t)$ is the inverse of \begin{equation}\label{et} t(v_0)\ =\ \int_{v_0(0)}^{v_0} \frac{ds}{\varphi(s)}, \end{equation} with the maximum $\bar b$ equal to $\displaystyle\lim_{v_0\rightarrow +\infty}t(v_0)$ in \eqref{et}. From \eqref{et1} it follows that $\bar b = +\infty$. \vspace{1mm} Now, let $v=v(t)$, $t\in [0,b)$, be such that \eqref{stime} holds, and define \[ h(t) \ =\ v_0(0) +\ \int_0^t \varphi(v(s)) ds.
\] Clearly, $h$ is a $C^1$ function such that \[ h(0) = v_0(0)\quad\hbox{and}\quad h'(t) = \varphi(v(t)) \quad \hbox{for all $t\in [0,b)$.} \] Moreover, from \eqref{stime} it follows \begin{equation}\label{stima3} a\ \le\ v(t) \ \le\ h(t)\quad \hbox{for all $t\in [0,b)$,} \end{equation} whence the monotonicity of $\varphi$ implies \begin{equation}\label{stima2} h'(t)\ \le\ \varphi(h(t))\quad \hbox{for all $t\in [0,b)$.} \end{equation} Thus, from \eqref{et1}, \eqref{global} and \eqref{stima2} we have \[ \frac{h'(t)}{\varphi(h(t))} \ \le\ 1\ =\ \frac{v_0'(t)}{\varphi(v_0(t))} \quad \hbox{for all $t\in [0,b)$,} \] whence direct computations give \begin{equation}\label{et4} \int_{v_0(0)}^{h(t)}\frac{ds}{\varphi(s)} \ \le\ \int_{v_0(0)}^{v_0(t)}\frac{ds}{\varphi(s)} \quad \hbox{for all $t\in [0,b)$.} \end{equation} Now, assume that $\bar t \in (0,b)$ exists such that $h(\bar t) > v_0(\bar t)$. Hence, \eqref{et1} and \eqref{et2} imply \[ \int_{v_0(0)}^{h(\bar t)}\frac{ds}{\varphi(s)} \ >\ \int_{v_0(0)}^{v_0(\bar t)}\frac{ds}{\varphi(s)} \] in contradiction with \eqref{et4}. So, we have $h(t) \le v_0(t)$ for all $t \in [0,b)$ and the proof follows from \eqref{stima3}. \end{proof} \subsection{Our main result on the non--autonomous problem $(E)$} Now, we are ready to state our main result on the completeness of inextensible trajectories of the non--autonomous problem $(E)$. \begin{theorem}\label{G01} Let $(\mo,\g)$ be a complete Riemannian mani\-fold, $F$ a smooth time--dependent $(1,1)$ tensor field with self--adjoint component $S$ and $V:\mo\times\R \rightarrow \R$ a smooth potential. Assume that $\|S(t)\|$ is bounded along finite times, $V$ is bounded from below along finite times and there exists a continuous function $\alpha_0: \R\rightarrow \R$ such that \[ \left|\frac{\partial V}{\partial t}(p,t)\right|\ \le\ \alpha_0(t) (V(p,t) - \beta_0(t))\quad \hbox{for all \, $(p,t)\in \mo\times \R$} \] with $\beta_0$ as in \eqref{bf1}. 
Then, each inextensible solution of equation $(E)$ must be complete. \end{theorem} The proof of Theorem \ref{G01} is a direct consequence of the following more general result. \begin{proposition}\label{G011} Let $(\mo,\g)$ be a complete Riemannian manifold, $F$ a smooth time--dependent $(1,1)$ tensor field with self--adjoint component $S$ and $V:\mo\times\R \rightarrow \R$ a smooth potential bounded from below along finite times with $\beta_0$ as in \eqref{bf1}. \vspace{1mm} If $S_{\sup}(t)$ is upper bounded along finite times and a continuous function $\alpha_0: \R\rightarrow \R$ exists such that \[ \frac{\partial V}{\partial t}(p,t)\ \le\ \alpha_0(t) (V(p,t) - \beta_0(t))\quad \hbox{for all \, $(p,t)\in \mo\times \R$,} \] then each inextensible solution of equation $(E)$ must be forward complete. \vspace{1mm} Analogously, if $S_{\inf}(t)$ is lower bounded along finite times and a continuous function $\alpha_0: \R\rightarrow \R$ exists such that \[ - \frac{\partial V}{\partial t}(p,t)\ \le\ \alpha_0(t) (V(p,t) - \beta_0(t))\quad \hbox{for all \, $(p,t)\in \mo\times \R$,} \] then each inextensible solution of equation $(E)$ must be backward complete. \end{proposition} \begin{proof} Let $\gamma$ be a non--constant forward inextensible solution of equation $(E)$ defined on the interval $[0,b) \subset \R$. Arguing by contradiction, assume that $\gamma$ is not forward complete, i.e., $b<+\infty$, so a real positive constant $T > b$ can be fixed so that \eqref{bf} holds for $S_{\sup}(t)$ and, furthermore, \begin{equation}\label{stima5} V(p,t) - B_T \ge 1\quad \hbox{and}\quad \frac{\partial V}{\partial t}(p,t)\ \le\ A_T (V(p,t) - B_T) \end{equation} for all \, $(p,t)\in \mo\times [-T,T]$, with $A_T \ge \max \alpha_0([-T,T])$ and $B_T \le \min\beta_0([-T,T]) -1$. Now, for simplicity, denote \[ u(t) = g(\dot\gamma(t),\dot\gamma(t))\;\; \hbox{and}\;\; v(t)\ =\ \frac12\ u(t) + V(\gamma(t),t) - B_T, \quad t\in [0,b).
\] From \eqref{stima5} it follows \[ u(t)+1\ \le\ 2 v(t), \] hence if $v(t)$ is bounded in $[0,b)$ so is $u(t)$, that is, a constant $k > 0$ exists such that \begin{equation}\label{inequality3} u(t)\ \le\ k \quad\hbox{for all $t \in [0,b)$.} \end{equation} Note that this inequality is enough for contradicting that $b$ is finite. In fact, \eqref{inequality3} implies that $\dot\gamma([0,b))$ is bounded in $T\mo$ and, since $(\mo,\g)$ is complete, Lemma \ref{extend} is applicable. Hence, $\gamma$ can be extended to $b$ in contradiction with its maximality assumption. In order to prove that $v(t)$ is bounded in $[0,b)$, taking any $t\in [0,b)$ by using equation $(E)$ and estimates \eqref{bf} and \eqref{stima5} we have \[\begin{split} \frac{d v}{dt}(t)\ &=\ g\big(\frac{D\dot\gamma}{dt}(t),\dot\gamma(t)\big) + g\left(\nabla^{\mo}V(\gamma(t),t),\dot\gamma(t)\right) + \frac{\partial V}{\partial t}(\gamma(t),t)\\ &=\ g\big(F_{(\gamma(t),t)}\dot \gamma(t),\dot\gamma(t)\big)\, + \,\frac{\partial V}{\partial t}(\gamma(t),t)\\[1mm] &=\ g\big(S_{(\gamma(t),t)}\dot \gamma(t),\dot\gamma(t)\big)\, + \,\frac{\partial V}{\partial t}(\gamma(t),t)\\[1mm] &\le\ N_T\, u(t) + A_T \big(V(\gamma(t),t)-B_T\big). \end{split} \] Whence, $A_T^* \in \R$ exists such that \begin{equation}\label{G05} \frac{d v}{dt}(t)\ \le\ A_T^*\, v(t) \quad\hbox{for all $t \in [0,b)$.} \end{equation} On the other hand, consider the linear equation \begin{equation}\label{G06} w'(t)\ =\ A_T^*\ w(t), \end{equation} and let $v_0=v_0(t)$ be the unique (global) solution of \eqref{G06} satisfying the initial condition $v_0(0) = v(0)$, with $v(0) \ge 1$ from \eqref{stima5}. Thus, from \eqref{G05} and Lemma \ref{comparison} with $\varphi(s) = A_T^* s$ and $a=1$, we have that $v(t) \le v_0(t)$ for all $t\in [0,b)$, with $v_0(t)$ bounded in $[0,b]$; whence, $v(t)$ is bounded in $[0,b)$.
\vspace{1mm} Analogously, let us assume that $\gamma$ is not backward complete in $(-b,0]$ with $b < +\infty$; then we can consider $T>b$ and $\tilde \gamma(t) := \gamma(-t)$ in $[0,b)$. From the lower boundedness of $S_{\inf}(t)$ in $[-T,T]$ and the estimate on $-\frac{\partial V}{\partial t}$ along finite times, we have \[ \begin{split} \frac{dv}{dt}(-t)\ &=\ - g\left(S_{(\gamma(-t),-t)} \dot\gamma(-t),\dot\gamma(-t)\right) - \frac{\partial V}{\partial t}(\gamma(-t),-t)\\[1mm] &\le\ N_T\, u(-t) + A_T \big(V(\gamma(-t),-t) - B_T\big), \end{split} \] and we repeat the above argument for $\tilde\gamma(t)$. \end{proof} \begin{remark} Both in Theorem \ref{G01} and in Proposition \ref{G011} the assumption on the completeness of $(\mo,\g)$ can be replaced by the condition ``{\sl $V$ is proper}''. In fact, in the above proof, once we have proven that $v(t)$ is bounded in $[0,b)$, the properness of $V$ implies that $\dot \gamma ([0,b))$ lies in a compact subset of $T\mo$, so $\gamma$ can be extended to $b$. \end{remark} \begin{remark} As commented in the Introduction, other completeness results on the inextensible trajectories of equation $(E)$ as well as their comparison with Theorem \ref{G01} can be found in \cite{CRS2012}. \end{remark} \section{Geodesic completeness of GPW} \subsection{Plane waves and their generalizations} A {\em parallelly propagated wave} spacetime, or a {\em pp--wave} in brief, is a relativistic spacetime $(\R^4,ds^2)$ where the Lorentzian metric $ds^2$ has the form \[ ds^2 \ =\ dx^2+dy^2 + 2dudv + H(x,y,u) du^2, \] where $(x,y,u,v)$ are the natural coordinates of $\R^4$ and $H : \R^3 \to \R$ is a non--zero smooth function.
If the expression of $H$ is quadratic in $x, y$, i.e., \begin{equation} \label{ehpw} H(x,y,u)\ =\ f_1(u) x^2 - f_2(u) y^2 + 2 f(u) xy, \end{equation} for some smooth real functions $f_1$, $f_2$ and $f$, then the spacetime is called {\em plane wave}, and, in particular, an {\sl (exact plane fronted) gravitational wave} if $f_1 \equiv f_2$ (for example, see \cite{BEE}). Since the pioneering papers dealing with gravitational waves \cite{Br,ER}, these spacetimes have been widely studied by many authors (see \cite{CFS} and references therein or the summary in \cite{Yu}) not only for their geometric interest but above all for their physical interpretation. In fact, as explained in \cite{MTW}, a gravitational wave represents ripples in the shape of spacetime which propagate across spacetime, as water waves are small ripples in the shape of the ocean's surface propagating across the ocean. The source of a gravitational wave is the motion of massive particles; in order to be detectable, very massive objects under violent dynamics must be involved (binary stars, supernovas, gravitational collapses of stars...). With more generality, pp--waves may also take into account the propagation of non--gravitational effects such as electromagnetism. Here, we focus only on the property of geodesic completeness. In particular, we add further information to the study of the geometric properties of the family of generalized plane waves, already developed in \cite{CFS, CRS2012, FS_CQG, FS_JHEP}. The key fact is that the geodesic completeness of a pp--wave reduces to the completeness of the inextensible trajectories that are solutions of the second order differential equation (E$_0$) when $(M_0,g_0)$ is $\R^2$.
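For a plane wave, with $H$ as in (\ref{ehpw}) and $V=-\frac12 H$, equation (E$_0$) on $\R^2$ becomes the linear system $\ddot x = f_1(u)x + f(u)y$, $\ddot y = f(u)x - f_2(u)y$ with $u=t$, whose solutions are automatically complete. A minimal numerical sketch of such a trajectory (ours; the bounded profile functions are arbitrary samples):

```python
import math

# Sample plane-wave profile (ehpw): H(x,y,u) = f1(u) x^2 - f2(u) y^2 + 2 f(u) x y
f1 = lambda u: math.cos(u)
f2 = lambda u: 1.0 + math.sin(u) ** 2
f = lambda u: 0.5 * math.cos(2.0 * u)

def deriv(t, s):
    """Right-hand side of (E_0) with V = -H/2: gamma'' = (1/2) grad H, linear in (x, y)."""
    x, y, vx, vy = s
    return (vx, vy, f1(t) * x + f(t) * y, f(t) * x - f2(t) * y)

def rk4_step(t, s, h):
    """One classical Runge--Kutta step for the state (x, y, vx, vy)."""
    k1 = deriv(t, s)
    k2 = deriv(t + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
    k3 = deriv(t + h / 2, tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
    k4 = deriv(t + h, tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state, t, h = (1.0, -0.5, 0.3, 0.2), 0.0, 0.01
for _ in range(2000):            # integrate the trajectory up to t = 20
    state = rk4_step(t, state, h)
    t += h

# Linear ODE with bounded coefficients: at most exponential growth, no blow-up.
assert all(math.isfinite(si) for si in state)
```

This contrasts with the quartic-potential example of Section 2, where the trajectory blows up in finite time.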
However, this last restriction is not important and, following \cite{FS_CQG}, the classical notion of pp--wave can be generalized as follows: \begin{definition}\label{pfwave} {\rm A Lorentzian manifold $(\m,g)$ is called \emph{generalized plane wave}, briefly \emph{GPW}, if there exists a connected $n$--dimensional Riemannian manifold $(\mo,\g)$ such that $\m = \mo \times \R^{2}$ and \begin{equation}\label{wave} g\ =\ \g + 2dudv+ \h (x,u) du^{2}, \end{equation} where $x\in \mo$, the variables $(u,v)$ are the natural coordinates of $\R^{2}$ and the smooth function $\h: \mo\times \R\rightarrow \R$ is such that $\h\not\equiv 0 $.} \end{definition} \subsection{Application to geodesic completeness} In order to investigate the properties of geodesics in a GPW, it is enough to study the behavior of the Riemannian trajectories under a suitable potential $V$. In particular, the problem of geodesic completeness is fully reduced to a purely Riemannian problem: the completeness of the inextensible trajectories of particles moving under the potential $V(x,u) = - \frac{1}2\, \h(x,u)$, as the following result shows (see \cite[Theorem 3.2]{CFS} for more details). \begin{theorem} \label{traj} A GPW is geodesically complete if and only if $(\mo,\g) $ is a complete Riemannian manifold and the inextensible trajectories of \[ \hspace*{45mm}\frac{D\dot{\gamma}}{dt}\ =\ \frac{1}{2}\,\nabla^{\mo}\h(\gamma(t),t) \hspace*{35mm} (E_0^*) \] are complete. \end{theorem} Now, we can use Theorem \ref{G01} to obtain the completeness of the inextensible trajectories of equation $(E_0^*)$. Then, the following result on the geodesic completeness of GPW can be stated:
If there exist two continuous functions $\alpha_0$, $\beta_0: \R\rightarrow \R$ such that \[ \h(x,u) \ \le\ \beta_0(u)\quad \hbox{and} \quad \left|\frac{\partial \h}{\partial u}(x,u)\right|\ \le\ \alpha_0(u) \left(\beta_0(u) - \h(x,u)\right) \] for all $(x,u) \in \mo \times\R$, then $(\m,g)$ is geodesically complete. \end{corollary} We emphasize that other results on autonomous and non-autonomous potentials can be translated into results on the geodesic completeness of GPWs. So, as a consequence of \cite[Corollary 3.6]{CRS2012} we have: \begin{corollary} \label{complete3} A GPW with complete $(\mo,\g)$ is geodesically complete if $\nabla^{\mo}\h$ grows at most linearly in $\mo$ along finite times. \end{corollary} \begin{remark} The particular case of this corollary for pp--waves (i.e. its application for $(\mo,\g)=\R^2$) was discussed in \cite{CRS2012}, and it has a clear interpretation: not only are classical plane waves geodesically complete, but so is every pp--wave whose coefficient $\h$ behaves qualitatively like that of a plane wave. This can be understood as a result of stability of the completeness of plane waves in the class of all pp--waves. So, Corollary \ref{complete3} also ensures stability of completeness in the class of generalized plane waves. Even though the physical interpretation of Corollary \ref{complete2} is not so clear, it is logically independent of Corollary \ref{complete3} (a discussion like the one below Proposition 3.7 in \cite{CRS2012} also holds here). This shows that the application of these techniques is not exhausted and, under motivated assumptions, further results could be obtained. \end{remark}
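To illustrate how the hypotheses of Corollary \ref{complete2} can be checked in practice, here is a simple example of our own (not taken from the cited references), in which $\h$ is even unbounded from below:

```latex
On $\mo=\R^{n}$ with its Euclidean metric (which is complete), take
\[
\h(x,u)\ =\ -\,e^{u}\,|x|^{2}.
\]
Then $\h\le 0$, so $\beta_0\equiv 0$ is admissible, and
\[
\left|\frac{\partial \h}{\partial u}(x,u)\right|\ =\ e^{u}|x|^{2}
\ =\ \beta_0(u)-\h(x,u),
\]
so $\alpha_0\equiv 1$ is admissible as well. Hence the corresponding GPW
is geodesically complete, even though $\h(\cdot,u)$ is unbounded from
below for every $u$.
```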
\section{Introduction} The radio emission of Active Galactic Nuclei (AGNs) is synchrotron radiation generated in the relativistic jets that emerge from the nucleus of the galaxy, presumably along the rotational axis of a central supermassive black hole. Synchrotron radiation can be highly linearly polarized, up to $\simeq 75\%$ in the case of a uniform magnetic ({\bf B}) field (Pacholczyk 1970). Linear polarization observations are essential, as they give information about the orientation and degree of order of the {\bf B} field, as well as the distribution of thermal electrons and the {\bf B}-field geometry in the vicinity of the AGN. Many theorists have suggested that the magnetic fields of these sources are closely connected with the collimation of the jets, and could determine whether sources have prominent jets or not (e.g. Meier et al. 2001). Thus, information on the magnetic fields of these sources is crucial in helping us better understand various physical processes in AGN jets. VLBI polarization observations of BL~Lac objects have shown a tendency for the polarization {\bf E} vectors in the parsec-scale jets to be aligned with the local jet direction, which implies that the corresponding {\bf B} field is transverse to the jet, because the jet is optically thin (Gabuzda, Pushkarev \& Cawthorne 2000). It seems likely that many of these transverse {\bf B} fields represent the ordered toroidal component of the intrinsic {\bf B} fields of the jets, as discussed by Gabuzda et al. (2008) and references therein. Depending on the observer's viewing angle and the helix's pitch angle, helical jet {\bf B} fields can also give rise to a `spine-sheath' polarization structure in the frame of the observer, with a region of longitudinal polarization (transverse {\bf B}-vectors) along the central `spine' of the jet surrounded by regions of transverse polarization (longitudinal {\bf B}-vectors) near the edges of the jet.
The presence of transverse polarization near the edges of the jet could be a natural consequence of a helical jet {\bf B} field, although it has also been suggested to be due to interaction with the surrounding medium (Laing 1996; Lyutikov, Pariev \& Gabuzda 2005; Attridge, Roberts \& Wardle 1999; Pushkarev et al. 2005). Faraday Rotation studies can play a key role in determining the intrinsic {\bf B} field geometries associated with the jets. Faraday Rotation of the plane of linear polarization occurs during the passage of an electromagnetic wave through a region with free electrons and a magnetic field with a non-zero component along the line-of-sight. The amount of rotation is proportional to the square of the observing wavelength $\lambda^{2}$ and to the integral along the line of sight of the density of free electrons $n_{e}$ multiplied by the line-of-sight component of the {\bf B} field, together with various physical constants; the coefficient of $\lambda^{2}$ is called the Rotation Measure (RM): \begin{eqnarray} \Delta\chi\propto\lambda^{2}\int n_{e}\, {\bf B}\cdot d{\bf l}\equiv RM\lambda^{2} \end{eqnarray} The intrinsic polarization angle can be obtained from the relation: \begin{eqnarray} \chi_{obs} = \chi_0 + RM \lambda^{2} \end{eqnarray} where $\chi_{obs}$ is the observed polarization angle, $\chi_0$ is the intrinsic polarization angle in the absence of Faraday rotation and $\lambda$ is the observing wavelength. Simultaneous multifrequency observations thus potentially enable the determination of the RM, as well as identification of the intrinsic polarization angles. Systematic gradients in the Faraday Rotation Measure (RM) have been reported previously across the parsec-scale jets of several AGNs, interpreted as reflecting the systematic change in the line-of-sight component of a toroidal or helical jet {\bf B} field across the jet (Blandford 1993, Asada et al. 2002, Gabuzda, Murray \& Cronin 2004, Zavala \& Taylor 2005, Gabuzda et al. 2008, Asada et al. 2008a,b,2010, Mahmud, Gabuzda \& Bezrukovs 2009).
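The linear $\chi$--$\lambda^{2}$ relation above is what multifrequency RM fits exploit. As a minimal sketch (not the AIPS `RM' task; it assumes the $n\pi$ ambiguities in the angles have already been resolved), a weighted least-squares fit for the RM and intrinsic angle could look like:

```python
import math

C = 299792458.0  # speed of light, m/s

def fit_rm(freqs_ghz, chi_deg, sigma_deg):
    """Weighted least-squares fit of chi = chi0 + RM * lambda^2.

    freqs_ghz : observing frequencies in GHz
    chi_deg   : polarization angles in degrees (n*pi ambiguities
                assumed already resolved)
    sigma_deg : 1-sigma angle uncertainties in degrees
    Returns (RM [rad/m^2], chi0 [deg], sigma_RM [rad/m^2]).
    """
    lam2 = [(C / (f * 1e9)) ** 2 for f in freqs_ghz]       # lambda^2, m^2
    chi = [math.radians(a) for a in chi_deg]               # angles, rad
    w = [1.0 / math.radians(s) ** 2 for s in sigma_deg]    # weights 1/sigma^2
    S = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, lam2))
    Sy = sum(wi * y for wi, y in zip(w, chi))
    Sxx = sum(wi * x * x for wi, x in zip(w, lam2))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, lam2, chi))
    delta = S * Sxx - Sx ** 2
    rm = (S * Sxy - Sx * Sy) / delta        # slope: RM in rad/m^2
    chi0 = (Sxx * Sy - Sx * Sxy) / delta    # intercept: chi0 in rad
    return rm, math.degrees(chi0), math.sqrt(S / delta)

# Synthetic check with the six high-frequency observing frequencies:
# chi0 = 10 deg and RM = +50 rad/m^2 should be recovered exactly.
freqs = [4.612, 5.092, 7.916, 8.883, 12.939, 15.383]
chi_in = [10.0 + math.degrees(50.0 * (C / (f * 1e9)) ** 2) for f in freqs]
rm, chi0, sigma_rm = fit_rm(freqs, chi_in, [3.0] * len(freqs))
```

On noiseless synthetic angles the fit recovers the input values; with real data, the formal slope uncertainty comes from the same weighted fit.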
Such fields would come about in a natural way as a result of the `winding up' of an initial `seed' field by the differential rotation of the central accreting objects (e.g. Nakamura et al. 2001, Lovelace et al. 2002). We consider here two objects in which we have detected transverse RM gradients in both the core region and jet: 0716+714 and 1749+701. In both cases, there is a reversal of the direction of the RM gradient between these two regions. We discuss a possible explanation of this phenomenon based on magnetic-tower-type models for jet launching. Throughout we assume $H_{0}$ = 71 km/s/Mpc, {$\Omega_{\Lambda}$ = 0.73} and {$\Omega_{m}$ = 0.27}. \section{Faraday-Rotation Observations and Reduction} Very Long Baseline Array (VLBA) polarization observations of the sources included in this paper were carried out as part of two different studies of the same sample of BL Lac objects: one at 4.6--15.4~GHz and one at 1.36--1.67~GHz. The high-frequency observations of 0716+714 were on 22 March 2004 and of 1749+701 were on 22 August 2003; the low-frequency observations of 1749+701 were on 17 January 2004. In both cases, the distributions of $u$--$v$ points were virtually identical for the different frequencies observed during a single set of observations, with the baseline lengths scaled in accordance with the individual observing frequencies. Standard tasks in the NRAO AIPS package were used for the amplitude calibration and preliminary phase calibration. The instrumental polarizations (`D-Terms') were determined with the task `LPCAL', solving simultaneously for the source polarization. In all cases, the reference antenna used was Los Alamos.
The Electric Vector Position Angle (EVPA) calibration was done using integrated polarization observations of bright, compact sources, obtained with the Very Large Array (VLA) near in time to our VLBA observations, by rotating the EVPA for the total VLBI polarization of the source to match the EVPA for the integrated polarization of that source derived from VLA observations. \subsection{August 2003 and March 2004 Observations: 4.6--15.4~GHz} The observations were carried out at six frequencies: 4.612, 5.092, 7.916, 8.883, 12.939 and 15.383~GHz. Each source was observed for about 25--30 minutes at each frequency, in a `snap-shot' mode with 8--10 scans spread out over the observing time period. Presented in this paper are the results for 0716+714 (observed on 22 March 2004) and 1749+701 (observed on 22 August 2003). The instrumental polarizations (`D-Terms') were determined using observations of 1156+295 (22 August 2003) and 0235+164 (22 March 2004). The source of the integrated VLA polarizations for the EVPA calibration was the NRAO website (www.aoc.nrao.edu/~smyers/calibration/). The VLA observations were made at frequencies 5, 8.5, 22 and 43 GHz. We found these EVPA values to be consistent with a linear $\lambda^{2}$ law (Faraday Rotation) and were thus able to interpolate the corresponding values for our non-standard frequencies (see Mahmud et al. 2009). The sources used were 1803+784 and 2200+420. We refined our initial EVPA calibration by examining the resulting polarization images for several sources with simple structures and checking for consistency. This led to adjustments of $5-20^{\circ}$ for several of the EVPA corrections. This procedure improved the overall self-consistency of the polarization and RM maps for virtually all of the sources observed. We estimate that our overall EVPA calibration is accurate to within $3^{\circ}$. A summary of our final 4.6--15.4 GHz EVPA corrections is given in Mahmud et al. (2009).
\begin{table} \caption{EVPA calibrations for 17 January 2004} \centering \label{tab:EVPA_red} \begin{tabular}{ll} \hline Frequency (GHz) & $\Delta\chi$ (deg) \\\hline 1.358 & 128.1 \\ 1.430 & 111.1 \\ 1.493 & 100.4 \\ 1.665 & 82.5 \\ \hline \end{tabular} \end{table} To verify the accuracy of the overall flux calibration at 4.6--15.4 GHz, we determined the spectra of various optically thin regions in the jet of 1803+784, after taking into account the relative shifts between the images (see Mahmud et al. 2009). The observed 4.6--15.4~GHz fluxes are consistent with a power-law within the errors, corresponding to ``normal'' optically thin spectral indices of $\simeq$1. \subsection{January 2004 Observations: 1.36--1.67~GHz} Results for 1749+701 at epoch 17 January 2004 at four frequencies between 1.358 and 1.665 GHz are also included in this paper. The instrumental polarizations (`D-Terms') for these observations were determined using observations of 0851+202. The absolute calibration of the EVPAs was determined using VLA observations of 0851+202 obtained at 1.485 and 1.665~GHz on February 20, 2004. These observations were sufficiently close in time to the VLBA observations to be suitable for the EVPA calibration because the polarization is not rapidly variable at such low frequencies. A $\lambda^{2}$ fit was applied to the VLA polarization angles, yielding a rotation measure of +31.6~rad/m$^2$, in excellent agreement with the previously measured value of $+31 \pm 2$~rad/m$^2$ (Pushkarev 2001). We accordingly used the measured VLA polarization angles and rotation measure to determine the integrated polarization angles for our four VLBA frequencies, which were then used to calibrate the VLBA EVPAs. We estimate the errors in the resulting polarization angles to be no more than 2$^\circ$. For a list of the observing frequencies and their EVPA corrections, see Table~\ref{tab:EVPA_red}.
\subsection{Rotation Measure Determination} We made maps of the distribution of the total intensity $I$ and Stokes parameters $Q$ and $U$ at each of the frequencies, with matched cell sizes, image sizes, and resolutions, by convolving all of the final maps with the same beam. The distributions of the polarized flux ($p = \sqrt{Q^2 + U^2}$), as well as maps of the EVPA ($\chi = \frac{1}{2}\arctan \frac{U}{Q}$) and $\chi$ noise maps, were obtained from the $Q$ and $U$ maps using the task `COMB'. Although the $Q$ and $U$ maps at each frequency will be properly aligned with the $I$ map at that same frequency, the images at different frequencies can be appreciably shifted relative to each other. The physical origin of this effect is the fact that the mapping procedure effectively aligns the images roughly on the bright, compact VLBI core, whose position depends on the observing frequency: the core ($\tau = 1$ surface) appears further down the jet at lower frequencies (K\"onigl 1981). It is important to correct for this effect by properly aligning the $I$ images before making spectral-index maps; although the effect of modest misalignments between the $\chi$ maps at different frequencies is much smaller, it is nevertheless optimal to align these maps before constructing RM maps to maximize their reliability. We determined the relative shifts between the maps at each of our frequencies using the cross-correlation algorithm of Croke \& Gabuzda (2008), which essentially aligns the images based on their optically thin jet emission. This procedure yielded only negligible ``core-shifts'' for 0716+714 in our frequency range (less than a pixel), consistent with the results of Kovalev et al. (2008). The core-shift for 1749+701 between 4.6~GHz and 15.4~GHz is appreciable: approximately 0.8~mas in position angle $-63^{\circ}$, aligned with the VLBI jet direction.
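The idea behind the cross-correlation alignment can be conveyed with a toy example. The sketch below is purely illustrative (the actual Croke \& Gabuzda 2008 algorithm operates on the optically thin jet emission of full CLEAN images); it simply finds the integer pixel shift that maximizes the cross-correlation of two small images:

```python
def best_shift(ref, img, max_shift=3):
    """Brute-force search for the integer (dx, dy) shift of `img`
    relative to `ref` that maximizes their cross-correlation.
    A toy stand-in for the image-alignment step described in the text."""
    ny, nx = len(ref), len(ref[0])
    best, best_val = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            val = 0.0
            for y in range(ny):
                for x in range(nx):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < ny and 0 <= x2 < nx:
                        val += ref[y][x] * img[y2][x2]
            if val > best_val:
                best_val, best = val, (dx, dy)
    return best

# A "feature" at pixel (x=2, y=2) in `ref` appears at (x=4, y=3) in `img`,
# so the recovered shift should be (dx, dy) = (2, 1).
ref = [[0.0] * 8 for _ in range(8)]
img = [[0.0] * 8 for _ in range(8)]
ref[2][2] = 1.0
img[3][4] = 1.0
shift = best_shift(ref, img)
```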
The shifts between the other frequencies and 15.4~GHz were smaller and consistent with the expected scaling with frequency. We accordingly used these shifts to align the polarization-angle maps for 1749+701 before making the high-frequency RM map. We verified that the shifts between the images at the lower frequencies (1.36--1.67~GHz) were negligible for all sources observed in that experiment, including 1749+701; this is expected, since these frequencies do not cover a wide range; accordingly, no shifts were necessary for those images. Further, we constructed maps of the RM, using the AIPS task `RM', after first subtracting the effect of the integrated RM (Pushkarev 2001), presumed to arise in our Galaxy, from the observed polarization angles, so that any residual Faraday Rotation was due only to the thermal plasma in the vicinity of the AGN. We used a modified version of `RM' enabling simultaneous RM fitting using up to eight frequencies. We used the option in the task `RM' of blanking output pixels when the uncertainty in the RM exceeds a specified value, which was about 30~rad/m$^{2}$ for the high-frequency maps and about 10~rad/m$^2$ for the low-frequency maps. This uncertainty in the RM calculated for a given pixel by the `RM' task is based on a fit of $\chi$ vs. $\lambda^2$ weighted by the uncertainties in the polarization angles, which are, in turn, calculated using the noise levels on the Stokes $Q$ and $U$ maps. Thus, the resulting RM uncertainties for individual pixels are determined both by the uncertainties in the polarization angles and the quality of the linear $\lambda^2$ fit. The blanking levels we chose were determined empirically, as the maximum values that did not lead to any obviously spurious features in the RM maps, either at the location of the source or in the rest of the map.
The applied blanking essentially ensures that the retained RM values satisfy two basic reliability checks: co-location with the source emission region and agreement with a $\lambda^2$ law within the specified limit. \section{Estimation of the $\chi$ and RM Uncertainties} It has been usual to adopt the root-mean-square (rms) deviations in the residual map (or in the final CLEAN map far from any regions containing real flux) $\sigma_{rms}$ as an estimate of the total uncertainty in the measured flux in an individual pixel. Hovatta et al. (2012) have recently investigated this practice empirically using Monte Carlo simulations. They concluded that the uncertainties in $Q$ and $U$ fluxes in individual pixels are described well by the expression \begin{equation} \sigma = \sqrt{\sigma_{rms}^2 + \sigma_{Dterm}^2 + (1.5\sigma_{rms})^2} \end{equation} \noindent where $\sigma_{Dterm}$ is associated with the presence of residual instrumental polarizations in the data (see also Roberts, Wardle \& Brown 1994): \begin{equation} \sigma_{Dterm} \simeq \frac{\sigma_{\Delta}}{\sqrt{N_{ant} N_{IF} N_{scan}}}\sqrt{I^2 + (0.3\, I_{peak})^2} \end{equation} \noindent where $\sigma_{\Delta}$ is the estimated uncertainty in the individual D-terms, $N_{ant}$ the number of antennas in the VLB array (assuming all have altitude-azimuth mounts), $N_{IF}$ the number of IFs (sub-bands within the total observed band at a given frequency) used for the observations, $N_{scan}$ the number of scans with independent parallactic angles, $I$ the total intensity at the point in question, and $I_{peak}$ the total intensity at the map peak. The term containing $I_{peak}$ was added by Hovatta et al. (2012) to approximately take into account the fact that the residual D-term uncertainty tends to scatter polarized flux throughout the map.
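As a numerical sketch of this error budget (the numbers plugged in below are illustrative only, not values from our maps), the expressions above, together with the standard propagation to the polarization angle, can be evaluated as follows:

```python
import math

def sigma_dterm(sigma_delta, n_ant, n_if, n_scan, i_flux, i_peak):
    """Residual D-term contribution to the per-pixel Q/U uncertainty
    (the Hovatta et al. 2012 expression quoted in the text)."""
    return (sigma_delta / math.sqrt(n_ant * n_if * n_scan)) \
        * math.sqrt(i_flux ** 2 + (0.3 * i_peak) ** 2)

def sigma_qu(sigma_rms, sigma_d):
    """Total per-pixel Q/U flux uncertainty."""
    return math.sqrt(sigma_rms ** 2 + sigma_d ** 2 + (1.5 * sigma_rms) ** 2)

def sigma_chi(q, u, sq, su):
    """Propagation of Q/U uncertainties to the polarization angle
    chi = (1/2) arctan(U/Q)."""
    return 0.5 * math.sqrt((q * su) ** 2 + (u * sq) ** 2) / (q * q + u * u)

# Illustrative numbers only (fluxes in Jy/beam):
sd = sigma_dterm(0.005, n_ant=10, n_if=4, n_scan=8, i_flux=0.1, i_peak=1.3)
s = sigma_qu(0.0005, sd)   # dominated by the rms terms in this example
```

Note that even with a negligible D-term contribution, the total uncertainty is roughly $1.8\sigma_{rms}$, which is the point made in the text below.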
The expression for $\sigma$ above explicitly demonstrates that, even if the D-term error term is negligible, the uncertainty in fluxes in regions of source emission is somewhat higher than the map rms in regions far from source emission. In our case, $N_{ant} = 10$, $N_{IF} = 4$ for the 7.9--15.4-GHz observations and 2 for the 4.6~GHz, 5.1~GHz and 1.36--1.67~GHz observations, and $N_{scan}\simeq 8$. We estimate $\sigma_{\Delta}\simeq 0.005$ for all the experiments from the scatter in the D-terms. The largest value for $\sigma_{Dterm}$ will occur at the peaks of the maps; at the positions where we have determined the RM values below (Table~2), the resulting D-term uncertainties are no more than $\simeq 0.60\sigma_{rms}$ for 0716+714 and no more than $\simeq 0.40\sigma_{rms}$ for 1749+701, making $\sigma_{Dterm}$ small compared to the other terms contributing to $\sigma$. The $Q$ and $U$ uncertainties determined in this way can then be propagated to derive the corresponding uncertainties in the polarization angles, $\sigma_{\chi}$: \begin{eqnarray} \chi & = & \frac{1}{2}\arctan\left(\frac{U}{Q}\right)\\ \sigma^{2}_{\chi} & = & \frac{1}{4}\left[\left(\frac{Q}{Q^{2}+U^{2}}\right)^{2}\sigma_{U}^{2} +\left(\frac{U}{Q^{2}+U^{2}}\right)^{2}\sigma_{Q}^{2}\right] \end{eqnarray} \noindent The uncertainty in the EVPA calibration $\sigma_{EVPA}$ can then be added in quadrature: \begin{equation} \sigma^{2}_{\chi_{final}}=\sigma^{2}_{\chi}+\sigma^{2}_{EVPA} \end{equation} \noindent These $\chi$ uncertainties can then, in turn, be used to determine uncertainties in the fitted RM values, as is described by Hovatta et al. (2012). Note, however, that, as the same EVPA calibration is applied to each polarization angle at a given frequency, the uncertainty this introduces is {\em systematic}. One consequence of this is that, although the EVPA calibration uncertainties will increase the uncertainties in the fitted RM values, EVPA calibration uncertainties should not give rise to spurious RM gradients (e.g. Mahmud et al.
2009, Hovatta et al. 2012). The reason for this is essentially that any EVPA calibration error corresponds to a specific systematic offset that affects all EVPA measurements at all points of the maps at the corresponding frequency equally \emph{and in the same direction}, and so will not induce gradients between points. This was taken into account in our analysis in the same way as was done by Mahmud et al. (2009) and Hovatta et al. (2012): when RM values are derived specifically so that they can be compared to search for possible gradients, the value $\sigma_{\chi}$ without adding $\sigma_{EVPA}$ in quadrature was used to determine the RM uncertainty. \section{Results} The images in Fig.\ \ref{fig:pol_maps1} show the observed VLBI total-intensity and linear-polarization structures for both sources at 15.4, 7.9 and 4.6 GHz, and Fig.~\ref{fig:1749_pol_red} the total-intensity and linear-polarization structure of 1749+701 at 1.43~GHz, all corrected for integrated but not local Faraday rotation. The maps at 1.36, 1.49 and 1.67~GHz are very similar, and are not shown here. The convolving beams used in each case are indicated in the lower right-hand corner of the figures. The peaks and bottom contours are indicated in the figure captions and, in all cases, the contour levels increase in steps of a factor of two.
\begin{figure*} \begin{minipage}[t]{7.5cm} \begin{center} \includegraphics[width=5.5cm,clip]{0716_2cm2.eps} \end{center} \end{minipage} \begin{minipage}[t]{7.5cm} \begin{center} \includegraphics[width=7.0cm,clip]{1749_2cm2.eps} \end{center} \end{minipage} \begin{minipage}[t]{7.5cm} \begin{center} \includegraphics[width=5.5cm,clip]{0716_4cm1.eps} \end{center} \end{minipage} \begin{minipage}[t]{7.5cm} \begin{center} \includegraphics[width=7.0cm,clip]{1749_4cm1.eps} \end{center} \end{minipage} \begin{minipage}[t]{7.5cm} \begin{center} \includegraphics[width=5.5cm,clip]{0716_6cm1.eps} \end{center} \end{minipage} \begin{minipage}[t]{7.5cm} \begin{center} \includegraphics[width=7.0cm,clip]{1749_6cm1.eps} \end{center} \end{minipage} \caption[Short caption for figure 1]{\label{fig:pol_maps1} VLBA $I$ maps for 0716+714 (left) and 1749+701 (right) with polarization sticks superimposed, at 15.4~GHz (top), 7.9~GHz (middle) and 4.6 GHz (bottom), corrected for integrated Faraday Rotation. The maps of 0716+714 have peaks of 1.0, 1.3, and 1.5 Jy/beam and bottom contours of 0.7, 0.8 and 0.9 mJy/beam; the maps of 1749+701 have peaks of 0.5, 0.4 and 0.3~Jy/beam and bottom contours of 0.6, 0.5 and 1.7~mJy/beam. } \end{figure*} 0716+714 has a redshift of $z = 0.30$, corresponding to 4.52 pc/mas, and an integrated RM of $-30$~rad/m$^2$ (Pushkarev 2001). The jet of 0716+714 extends roughly to the North. The jet polarization {\bf E} vectors are aligned with the local jet direction, as is also shown by the 2cm MOJAVE images (http://www.physics.purdue.edu/MOJAVE/sourcepages/). 1749+701 has a redshift of $z = 0.77$, corresponding to 7.41~pc/mas, and an integrated RM of $+15$~rad/m$^2$ (Pushkarev 2001). The jet of 1749+701 initially emerges toward the Northwest, then turns toward the North, and further toward the East; this spiral-like path is evident in the 7.9 and 4.6-GHz maps (see also Gabuzda \& Lisakov 2009).
The polarization {\bf E} vectors are mostly aligned with the local jet direction; some regions of `spine-sheath' polarization structure or orthogonal polarization offset toward one side of the jet are visible in the maps. Although the core--jet structure is not directly distinguishable in the 1.43-GHz image for January 2004 in Fig.~\ref{fig:1749_pol_red}, the orientation of this structure is known from the higher-frequency images in Fig.~\ref{fig:pol_maps1}. The polarization {\bf E} vectors appear to be aligned with the jet direction. The weak emission to the southeast of the map center corresponds to a continuation of the emission in roughly this region in the 4.6-GHz image (Fig.~\ref{fig:pol_maps1}, bottom right). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{1749_B21_red_pol.eps} \caption[Short caption for figure]{\label{fig:1749_pol_red}VLBA $I$ map of 1749+701 with polarization sticks superimposed, at 1.43~GHz, corrected for integrated Faraday Rotation. The $I$ peak is 0.60~Jy/beam and the bottom contour is 1.5~mJy/beam.} \end{figure} The images in Figs.~\ref{fig:0716_RM} and \ref{fig:1749_RM} show the parsec-scale RM distributions for 0716+714 (4.6--15.4~GHz) and 1749+701 (1.36--1.67~GHz), superimposed on the corresponding $I$ contours. The RM distribution for 1749+701 for 4.6--15.4~GHz is subject to uncertainty due to the relatively large shifts required to align the $\chi$ images at the different frequencies, and we accordingly focus on the more reliable images shown in these two figures. In all cases, the $I$ contours increase in steps of a factor of two. The arrows show the direction of RM gradients visible by eye in the corresponding regions; in other words, the direction in which the value of the RM increases (from more negative to less negative, negative to positive, or less positive to more positive, as the case may be). The convolving beams used in each case are indicated in the lower right-hand corner of the figures.
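The angular-to-linear scales quoted above (4.52 pc/mas at $z = 0.30$ and 7.41 pc/mas at $z = 0.77$) follow from the assumed cosmology ($H_{0}$ = 71 km/s/Mpc, $\Omega_{m}$ = 0.27, flat). As a rough check (a sketch by direct Simpson integration of the comoving distance, not the exact calculator used for the quoted values), these scales can be reproduced to within a few per cent:

```python
import math

def pc_per_mas(z, h0=71.0, om=0.27, ol=0.73, steps=1000):
    """Proper transverse scale in pc per milliarcsecond for a flat
    LambdaCDM cosmology, via Simpson integration of the comoving
    distance. `steps` must be even."""
    c = 299792.458                      # km/s
    dz = z / steps
    e = lambda zz: math.sqrt(om * (1.0 + zz) ** 3 + ol)
    s = 1.0 / e(0.0) + 1.0 / e(z)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) / e(i * dz)
    d_c = (c / h0) * s * dz / 3.0       # comoving distance, Mpc
    d_a = d_c / (1.0 + z)               # angular diameter distance, Mpc
    mas = math.pi / (180.0 * 3600.0 * 1000.0)  # 1 mas in radians
    return d_a * 1e6 * mas              # pc per mas

scale_0716 = pc_per_mas(0.30)   # cf. 4.52 pc/mas quoted in the text
scale_1749 = pc_per_mas(0.77)   # cf. 7.41 pc/mas quoted in the text
```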
The beam used for the 4.6--15.4~GHz RM map for 0716+714 was $1.28$~mas~$\times 1.06$~mas in position angle $-0.84^{\circ}$, which corresponds to the resolution of the 7.9~GHz data; this beam was chosen in order to provide slightly higher resolution in the RM map at the expense of only a modest over-resolution of the lowest-frequency images. Accompanying panels show plots of polarization angle ($\chi$) vs. wavelength squared ($\lambda^2$) for the indicated regions; the uncertainties in the polarization angles shown here include the EVPA uncertainty added in quadrature. Slices of the RM across the gradients in the specified locations in the jets and core regions obtained with the AIPS task 'SLICE' are also shown; we do not include the (single-pixel-based) uncertainties on these slices on these plots, since they are meant only to be illustrative. Instead, we carry out below an analysis involving the RM values and their uncertainties for three regions across the jet (on either side and in the center), more in keeping with the limited resolution available in our observations. \section{Discussion} \subsection{Linear Polarization Structure} Previous polarization observations have demonstrated that `spine-sheath' polarization structures are not uncommon among blazars. Attridge et al. (1999) interpreted a `spine-sheath' polarization structure in the jet of the quasar 1055+018 as a result of a series of shocks compressing the field in the central region and shearing of the field induced by interaction with the surrounding medium at the jet edges. However, more recent studies (e.g. Lyutikov et al. 2005, Pushkarev et al. 2005) have discussed the possibility that `spine-sheath' polarization structures can come about naturally in the case of helical jet magnetic fields.
The BL~Lac objects 0716+714 and 1749+701 both show signs of `spine-sheath' polarization or transverse polarization offset toward one side of the jet at one or more wavelengths, consistent with the possibility that their jets carry helical magnetic fields. \subsection{Detection of Transverse RM Gradients across the Jets} Tentative transverse RM gradients across the jets of the two BL~Lac objects considered here are visible by eye in the colour versions of the RM distributions in Figs.~\ref{fig:0716_RM} and \ref{fig:1749_RM}. The first step in testing the reality of these gradients is estimating the uncertainties in the RM values on either end of the gradient, to determine at what level the two RM values differ. We have done this using the $Q$ and $U$ uncertainty relations of Hovatta et al. (2012), as described above. \begin{figure*} \begin{minipage}[t]{16.0cm} \begin{center} \includegraphics[width=16.0 cm,clip]{big_image_final_arr.eps} \end{center} \end{minipage} \caption{\label{fig:0716_RM} RM map of 0716+714 at 4.6--15.4~GHz. The accompanying panels show slices of the RM distributions across the jet and core, and polarization angle ($\chi$) vs. wavelength-squared ($\lambda^2$) plots for pixels on either side of the core and jet. The errors shown in the plots are 1$\sigma$, and include the estimated random errors and the EVPA uncertainties added in quadrature. The peak of the $I$ map is 1.3~Jy/beam and the bottom contour is 1.0~mJy/beam. The beam used to construct the $I$ and RM maps was $1.28 \times 1.06$~mas in position angle $-0.8^{\circ}$.} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.6\textwidth,angle=90]{1749_RM_low_new.ps} \caption{\label{fig:1749_RM} RM map of 1749+701 at 1.36--1.67~GHz. The accompanying panels show slices of the RM distribution across the core, and polarization angle ($\chi$) vs. wavelength-squared ($\lambda^2$) plots for pixels on either side of the core and jet.
Errors shown are 1$\sigma$, and include the estimated random errors and the EVPA uncertainties added in quadrature. The peak of the $I$ map is 0.6~Jy/beam; the bottom contour is 1.4~mJy/beam (January 2004). The beam used to construct the $I$ and RM maps was $9.16 \times 8.57$~mas in position angle $49^{\circ}$.} \end{figure*} Figs.~\ref{fig:0716_transdist} and \ref{fig:1749_transdist} show plots of the observed RMs at three points across the core-region and jet structures of 0716+714 and 1749+701 (on either side and near the center). Together with the slices shown in Figs.~\ref{fig:0716_RM} and \ref{fig:1749_RM}, these figures demonstrate the systematic, monotonic nature of these observed RM gradients. Table~2 summarizes the sets of RM values shown in Figs.~\ref{fig:0716_transdist} and \ref{fig:1749_transdist} together with their uncertainties, as well as the differences between the RM values on either side of the inferred transverse gradients and their uncertainties. The columns present (1) the figure to which the RM values refer, (2) the source name, (3) the point in the indicated figure to which the RM value corresponds, (4) the position where the RM value was measured, in milliarcseconds, relative to the phase centre, (5) the RM value at the indicated position, together with its uncertainty, (6) the difference between the two RM values on either side of the source structure and its uncertainty and (7) the significance of this difference in numbers of $\sigma$. The uncertainties listed in column (5) are based on $\chi$ uncertainties without the EVPA-calibration uncertainty added in quadrature, since this will affect all points in an RM image systematically in the same way. The last column of this table shows that the differences between the RM values detected on either side of the jet structures are at the level of $3-5\sigma$, indicating that these differences are statistically significant.
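The significance figures in column (7) follow from simple propagation of independent Gaussian uncertainties on the side-to-side RM difference; a minimal check using the 0716+714 core-region values from Table~2:

```python
import math

def rm_gradient_significance(rm_side1, sig1, rm_side2, sig2):
    """Side-to-side RM difference, its uncertainty, and its significance,
    treating the two RM uncertainties as independent and Gaussian."""
    diff = rm_side2 - rm_side1
    err = math.sqrt(sig1 ** 2 + sig2 ** 2)
    return diff, err, abs(diff) / err

# Core region of 0716+714 (Table 2): -256 +/- 53 vs +6 +/- 28 rad/m^2,
# which reproduces the tabulated +262 +/- 60 at ~4.4 sigma.
diff, err, nsig = rm_gradient_significance(-256.0, 53.0, 6.0, 28.0)
```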
\begin{figure*} \centering \includegraphics[width=0.33\textwidth,angle=-90]{0716+714_Core_Final_beam.ps} \includegraphics[width=0.33\textwidth,angle=-90]{0716+714_Jet_Final_beam.ps} \caption[Short caption for figure]{\label{fig:0716_transdist} Plots of observed RM as a function of distance from a reference point on one side of the source structure across the core-region (left) and jet (right) RM distributions of 0716+714 at 4.6--15.4~GHz. The positions of each point and the corresponding RM values and their errors are listed in Table~2. The horizontal bar shows the approximate size of the beam FWHM.} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.33\textwidth,angle=-90]{1749+701_Core.ps} \includegraphics[width=0.33\textwidth,angle=-90]{1749+701_Jet.ps} \caption[Short caption for figure]{\label{fig:1749_transdist} Plots of observed RM as a function of distance from a reference point on one side of the source structure across the core-region (left) and jet (right) RM distributions of 1749+701 at 1.36--1.67~GHz. The positions of each point and the corresponding RM values and their errors are listed in Table~2. The horizontal bar shows the approximate size of the beam FWHM.} \end{figure*} \begin{table*} \caption{RM measurements in Figs. 
\ref{fig:0716_transdist} and \ref{fig:1749_transdist}} \centering \begin{tabular}{llccccc} \hline Figure & Source & Point & Position & RM & Left--Right & RM Diff in $\sigma$\\ & & in plot & (mas) & rad/m$^2$ & RM Diff & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline \ref{fig:0716_transdist} (left)& 0716+714 & Left & $(+0.90, -1.00)$ & $-256\pm 53$& &\\ & (Core-region) & Middle & $(0.10, -0.80)$ & $-41\pm 44$ & $+262\pm60$ & $4.4\sigma$\\ & & Right & $(-0.70, -0.60)$ & $+6\pm 28$ & &\\ \ref{fig:0716_transdist} (right)& 0716+714& Left & $(+1.00, +1.00)$ & $+94\pm 37$ & & \\ & (Jet) & Middle & $(+0.30, +1.00)$ & $-31\pm 9$ & $-239\pm 47$& $5.1\sigma$\\ & & Right & $(-0.50, +1.10)$ & $-145\pm 29$& &\\ \ref{fig:1749_transdist} (left) & 1749+701& Left & $(+9.00, +1.50)$ & $-19\pm 8$ & & \\ & (Core-region) & Middle & $(+4.50, -3.00)$ & $+13\pm 4$ & $+39\pm 9$ & $4.3\sigma$\\ & & Right & $(-1.50, -7.50)$ & $+20\pm 2$ &&\\ \ref{fig:1749_transdist} (right)& 1749+701& Left & $(-4.50, +9.00)$ & $+17\pm 10$ &&\\ & (Jet) & Middle & $(-6.00, +3.00)$ & $+4\pm 6$ & $-38\pm 12$ & $3.2\sigma$\\ & & Right & $(-7.50, -3.00)$ & $-21\pm 6$ &&\\ \hline \end{tabular} \end{table*} \subsection{The Possibility of Detecting Transverse RM Gradients Across Narrow Jets} Taylor \& Zavala (2010) have recently proposed four criteria for the reliable detection of transverse Faraday rotation gradients, the most stringent of which is that the observed RM gradient span at least three ``resolution elements'' across the jet. This criterion reflects the desire to ensure that it is possible to distinguish properties between regions located on opposite sides of the jets. The criterion of three ``resolution elements'' has been taken to correspond to three beamwidths, and coincides with the general idea that structures separated by less than a beamwidth are not well resolved.
To test the validity of this criterion of Taylor \& Zavala (2010), we constructed core--jet-like sources with various intrinsic widths and with transverse RM gradients present across their structures, and carried out Monte Carlo simulations based on these model sources. A description of our Monte Carlo simulations and the results they yielded are presented in the Appendix. The transverse source widths for our model sources correspond to intrinsic widths of about 1/2, 1/3, 1/5, 1/10 and 1/20 of the beam full-width at half-maximum (FWHM) in the direction across the jet. The simulations show that the transverse RM gradients introduced into the model visibility data remain visible in the RM maps constructed from ``noisy'' data having the same distribution of $(u,v)$ points as our observations of 0716+714, even when the intrinsic width of the structure is much smaller than the beam width. Both uni-directional model RM gradients and model RM structure containing two oppositely directed transverse gradients in the core region and jet are visible for all the jet widths considered. The results of these new Monte Carlo simulations thus demonstrate that the three-beamwidth criterion of Taylor \& Zavala (2010) is overly restrictive, since the simulations directly show the possibility of detecting transverse RM gradients even when the intrinsic widths of the corresponding source structures are much less than the beamwidth, resulting in RM distributions that span only $1-1.5$ beamwidths. This demonstrates that the relatively modest widths spanned by the transverse RM gradients in 0716+714 and 1749+701 that we report here should not be taken by themselves as grounds to question the reliability of these gradients.
We note here that our Monte Carlo simulations are not intended to provide a physical model for our observations, or to reproduce our observed RM distributions in any detail; instead, they are intended solely to demonstrate the possibility of detecting a transverse RM gradient in real data, even if the intrinsic jet width is much smaller than the beam FWHM. Inspection of Fig.~30 of Hovatta et al. (2012) indicates that the fraction of ``false positives'', i.e., spurious RM gradients, that were obtained in their Monte Carlo simulations did not exceed $\simeq 1\%$ when a $3\sigma$ criterion was imposed for the RM gradient, even when the observed width of the RM gradient was less than 1.5 beamwidths. This suggests that there may be up to a $\simeq 1\%$ probability that the RM gradients we report here are spurious, due to their relatively limited widths, although we consider this to be unlikely, given that the RM differences involved correspond to as much as $5\sigma$. \subsection{The Remaining Criteria of Taylor \& Zavala (2010)} With regard to the other criteria for reliability of transverse RM gradients proposed by Taylor \& Zavala (2010), the criterion that the change in the RM across the jet be at least $3\sigma$ is satisfied by the RM images in Figs.~\ref{fig:0716_RM} and \ref{fig:1749_RM} (see also Table~2). The differences in the RMs across the core region and jet of 0716+714 (Fig.~\ref{fig:0716_transdist}) are approximately $4-5\sigma$; the differences in the RMs across the core region and jet of 1749+701 (Fig.~\ref{fig:1749_transdist}) are approximately $3-4\sigma$. The criterion that the change in the RM be monotonic and smooth within the errors is also satisfied by the gradients in both 0716+714 and in 1749+701. Although the gradients suggested by the slices displayed in Figs.~\ref{fig:0716_RM} and \ref{fig:1749_RM} are not constant (linear), they are nevertheless monotonic. 
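The significances quoted above and in Table~2 can be reproduced by differencing the RM values at the two ends of each slice and combining the quoted $1\sigma$ uncertainties in quadrature; a minimal sketch (the helper name is ours, not from any standard package):

```python
import math

def rm_gradient_significance(rm_left, err_left, rm_right, err_right):
    """Difference between the RM values (rad/m^2) at the two ends of a
    slice, with the quoted 1-sigma errors combined in quadrature."""
    diff = rm_right - rm_left
    sigma = math.hypot(err_left, err_right)
    return diff, sigma, abs(diff) / sigma

# Core region of 0716+714 (Table 2): -256 +/- 53 vs +6 +/- 28 rad/m^2
diff, sigma, nsig = rm_gradient_significance(-256.0, 53.0, 6.0, 28.0)
print(f"{diff:+.0f} +/- {sigma:.0f} rad/m^2 ({nsig:.1f} sigma)")  # +262 +/- 60 rad/m^2 (4.4 sigma)
```

The same arithmetic applied to the jet slice of 0716+714 recovers the $5.1\sigma$ entry of Table~2.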
It is interesting to note here that the simulated RM maps of Broderick \& McKinney (2010) typically do not show RM gradients with a constant slope all across the RM distribution after convolution, even though the intrinsic predicted gradients are monotonic (see, for example, the bottom right panels of their Fig.~8). The remaining criterion proposed by Taylor \& Zavala (2010) is that the spectrum be optically thin at the location of the observed RM gradient. This criterion is motivated by two factors: (i) the desire to avoid possible jumps in the observed polarization angles due to optically thick--thin transitions within the observed frequency range, and (ii) the fact that the fractional polarization can change rapidly with optical depth in the optically thick regime, leading to the possibility of wavelength-dependent polarization effects when regions having different optical depths at different frequencies are superposed, which could in principle lead to the fitting of spurious RM values in optically thick regions when these are inhomogeneous. This criterion is clearly satisfied by the gradients across the jets of 0716+714 and 1749+701, which are all optically thin. The core regions of these two objects are also predominantly, but not fully, optically thin. The core-region spectral indices and $\chi$ values provide no evidence for a transition between optically thick and optically thin in the frequency ranges considered, consistent with the fact that the observed Faraday rotations in the polarization angles are all no greater than a few tens of degrees. Thus, there is no reason to suspect that jumps in the observed polarization angles due to optical-depth transitions are contributing to the observed core-region RMs.
We cannot completely rule out the possibility that the polarization angles in the core region are subject to wavelength-dependent optical-depth effects; however, we consider this to be unlikely, for two reasons: (i) the degrees of polarization in the core regions are $m_{core}\simeq 3-4\%$ for 0716+714 and $m_{core}\simeq 7\%$ for 1749+701, indicating a substantial contribution from optically thin regions; and (ii) the quality of the $\lambda^2$ fits for the core regions is no worse than for the optically thin jet regions. Thus, our detection of the RM gradients across the jets can be considered firm, while the detection of the oppositely directed RM gradients across the core may be somewhat more tentative, due to the small possibility that the observed polarization could be affected by optical depth effects at some of the observed frequencies. This is much less likely to be the case for 1749+701, since the observed frequencies span the relatively narrow range 1.36--1.67~GHz. \subsection{Reversal of RM Gradients in the Core Region and Jet} In both 0716+714 and 1749+701, the tentative transverse RM gradients detected in the core region are opposite in direction to the RM gradients detected across the jets (Figs.~\ref{fig:0716_RM}--\ref{fig:1749_RM} and Figs.~\ref{fig:0716_transdist}--\ref{fig:1749_transdist}). In fact, a similar behaviour is shown by the parsec-scale RM distribution for 3C~120 presented by G\'omez et al. (2011): their RM map for January 1999 shows higher positive values on the Southern side of the jet at the distance of components L and K (about 4~mas from the core), but more negative values on the Southern side of the jet at the distance of component O (about 2~mas from the core).
At first, this seems difficult to understand, since the direction of an RM gradient associated with a helical {\bf B} field is essentially determined by the direction of the rotation of the central accretion disc and the direction of the poloidal field it winds up, both of which we would expect to be constant in time. We can offer several possible explanations for this result. We briefly discuss these below, and explain our reasoning for identifying the one that we think is the most likely (see also Mahmud et al. 2009). {\bf Torsional Oscillations of the Jet.} One possible interpretation of oppositely directed core and jet transverse RM gradients could be that the direction of the azimuthal {\bf B} field component changed as a result of torsional oscillations of the jet (Bisnovatyi-Kogan 2007). Such torsional oscillations, which may help stabilize the jets, could cause a flip of the azimuthal {\bf B} field from time to time, or equivalently with distance from the core, given the jet outflow. In this scenario, we expect that the direction of the observed transverse RM gradients may reverse from time to time when the direction of the torsional oscillation reverses; this reversal pattern would presumably then propagate outward with the jet. {\bf Reversal of the ``pole'' facing the Earth.} Another possible interpretation could be that the ``pole'' of the black hole facing the Earth reversed. One way to retain a transverse RM gradient in a helical magnetic field model but reverse the direction of this gradient is if the direction of rotation of the central black hole (i.e.~the direction in which the field threading the accretion disc is ``wound up'') remains constant, but the ``pole'' of the black hole facing the Earth changes from North to South, or vice versa. To our knowledge, it is currently not known whether such polarity reversals are possible for the central black hole of AGN, or on what time scale they could occur.
{\bf Nested-helix B-field structure.} A simpler and more likely explanation is a magnetic-tower-type model, with poloidal magnetic flux and poloidal current concentrated around the central axis (Lynden-Bell 1996; Nakamura et al. 2006). Fundamental physics dictates that the magnetic-field lines must close; in this picture, the magnetic field forms meridional loops that are anchored in the inner and outer parts of the accretion disc, which become twisted due to the differential rotation of the disc. This should essentially give rise to an ``inner'' helical B field near the jet axis and an ``outer'' helical field somewhat further from the jet axis. These two regions of helical field will be associated with oppositely directed RM gradients, and the total observed RM gradient will be determined by which region of helical field dominates the observed RMs. Thus, the presence of a change in the direction of the observed transverse RM gradient between the core/innermost jet and jet regions well resolved from the core could represent a transition from dominance of the inner to dominance of the outer helical {\bf B} fields in the total observed RM. This seems to provide the simplest explanation for the RM-gradient reversals we observe in these two objects. Typically, we would expect the direction of the RM gradients in the core and jet (i.e., the regions whose net RM is determined by the inner/outer helical fields) to remain constant in time, since they should be determined by the source geometry and viewing angle. Mahmud et al. (2009) discuss the possibility that this type of ``nested helical field'' structure could also occasionally give rise to changes in the direction of the observed RM gradients with time within a given source. 
\section{Conclusion} The polarization rotation-measure images for the two BL~Lac objects presented here provide new evidence in support of helical magnetic fields associated with the jets of these AGN, most importantly, the presence of transverse rotation measure gradients across the jets of both objects. There is also a dominance of transverse {\bf B} fields in the jets of 0716+714 and 1749+701, and signs of `spine-sheath' polarization structures or orthogonal polarization offset toward one side of the jet in both these sources, consistent with the possibility that these jets carry a helical magnetic-field component: this type of structure can also come about naturally in the case of a helical jet {\bf B} field (e.g. Lyutikov et al. 2005; Pushkarev et al. 2005). We interpret the observed transverse RM gradients as being due to the systematic variation of the toroidal component of a helical {\bf B} field across the jet (Blandford 1993). We note in this connection that the transverse RM gradients in both 0716+714 and 1749+701 have opposite signs on either side of the jet, making it impossible to explain the gradients as an effect of changing thermal-electron density alone (there must be a change in the direction of the line-of-sight magnetic field). We have also detected tentative transverse RM gradients in the region of the observed VLBI core in both BL~Lac objects, which can be interpreted as being associated with helical {\bf B} fields in the innermost jets of these sources. Further, we have found a striking new feature of the RM distributions in these objects: a reversal in the direction of the transverse RM gradients. Similar reversals can be seen in the RM images for 3C~120 presented by G\'omez et al. (2011). 
At first, this seems difficult to understand, since the direction of the RM gradient associated with a helical {\bf B} field is essentially determined by the direction of the rotation of the central accretion disc and the direction of the poloidal field it winds up. We suggest that the most likely explanation for these reversals is that we are dealing with a `nested-helical-field' structure such as that present in magnetic-tower models, in which poloidal field lines emerging from the inner accretion disc form meridional loops that close in the outer part of the disc, with both sides of the loops (which have oppositely directed poloidal field components) getting `wound up' by the disc rotation. Further observations and studies of the RM-gradient reversals observed in these objects can potentially provide key information about how the geometry of the magnetic fields in these AGN jets evolves, and may provide information on the jet dynamics and jet collimation. We are currently using a variety of multi-frequency polarization VLBA data to search for additional candidates for AGN jets displaying RM gradients and RM-gradient reversals on both parsec and decaparsec scales. \section{Acknowledgements} The research for this publication was supported by a Research Frontiers Programme grant from Science Foundation Ireland and the Irish Research Council for Science, Engineering and Technology (IRCSET). The National Radio Astronomy Observatory is operated by Associated Universities Inc. We thank R.~Zavala for kindly providing the modified version of the AIPS `RM' task used in this work. We are also grateful to the referee for his careful reading of the paper and thoughtful, competent and useful comments. \section{Appendix: Monte Carlo Simulations} We constructed a model source with a transverse RM gradient present across its jet, and carried out Monte Carlo simulations based on this model source.
The model source is cylindrical, with a fall-off in intensity on either side of the cylinder axis, and along the axis of the cylinder from a specified point located near one end of the cylinder (see Fig.~7). The resulting appearance of the model emission region is broadly speaking ``core--jet-like''. Model visibility data were generated for each of the six frequencies listed in Section~2.1 (4.6--15.4~GHz), including the effect of the transverse RM gradient in the $Q$ and $U$ visibility data, and these model visibility data were sampled at precisely the $(u,v)$ points at which 0716+714 was observed at each of the frequencies. Random thermal noise and the effect of uncertainties in the EVPA calibration by up to $3^{\circ}$ were added to the sampled model visibilities. The amount of thermal noise added was chosen to yield rms values in the simulated images that were comparable to those in our actual observations. Stokes $I$, $Q$ and $U$ images were constructed from these visibilities in CASA, using the same beam as was used in the observations of 0716+714 presented here ($1.28\times 1.06$~mas in $PA = -0.84^{\circ}$, where the dimensions given correspond to the full width at half maximum of the beam along its major and minor axes). The polarization of the model was chosen to yield a degree of polarization in the lower half of the convolved model image (the ``core'' region) of about 5\% and a degree of polarization in the upper half of the convolved model image of about 10\% -- similar to the observed values for 0716+714. The $Q$ and $U$ images were then used to construct the corresponding polarization angle (PANG) images at each frequency, which were, in turn, used to construct RM images in the usual way. Finally, Monte Carlo RM maps were constructed, based on 200 independent realizations of the thermal noise and EVPA calibration uncertainty. 
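The statistical ingredients of one such realization can be sketched in a simplified, image-free form (a toy version of ours: the real simulations perturb the $Q$ and $U$ visibilities and re-image in CASA, whereas this works directly on polarization angles; all names and numerical values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
C = 299792458.0  # speed of light, m/s

def noisy_rm_realization(freqs_hz, true_rm, chi0, chi_noise_rad, evpa_cal_deg=3.0):
    """One Monte Carlo realization: polarization angles chi0 + RM*lambda^2,
    plus Gaussian thermal noise and a random per-frequency EVPA calibration
    offset drawn uniformly from [-evpa_cal_deg, +evpa_cal_deg] degrees,
    followed by a linear re-fit of the RM."""
    lam2 = (C / freqs_hz) ** 2
    chi = chi0 + true_rm * lam2
    chi = chi + rng.normal(0.0, chi_noise_rad, size=lam2.size)
    chi = chi + np.radians(rng.uniform(-evpa_cal_deg, evpa_cal_deg, size=lam2.size))
    rm_fit, _ = np.polyfit(lam2, chi, 1)
    return rm_fit

# 200 independent realizations, as in the procedure described above
freqs = np.array([4.6e9, 5.1e9, 7.9e9, 8.9e9, 12.9e9, 15.4e9])
rms = [noisy_rm_realization(freqs, -100.0, 0.3, 0.02) for _ in range(200)]
print(np.mean(rms), np.std(rms))  # mean close to the input RM of -100
```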
In each case, the RM values were output to the RM map only in pixels in which the RM uncertainty indicated by the fitting was less than 80~rad/m$^2$; this value was chosen so that no spurious pixels were written to the output RM maps for any of the 200 realizations of the RM distribution. Finally, an average RM map was derived by averaging together all 200 individual realizations of the RM distribution. This procedure was carried out for a number of model sources, all with a length of 1~mas and with transverse widths of 0.50, 0.35, 0.20, 0.10 and 0.05~mas. A recent observation of 0716+714 with the \emph{RadioAstron} space antenna and the European VLBI Network has measured the size of a feature in the 6.2-cm core region to be 0.07~mas (Kardashev et al. 2013), and our narrowest jet was designed to have a width somewhat smaller than this. We considered two types of monotonic transverse RM gradients: uni-directional along the entire source structure, and oriented in one direction in the ``core'' region and in the opposite direction in the ``jet'' region, i.e., showing a reversal. These Monte Carlo simulations complement those carried out by Hovatta et al. (2012), in which simulated RM maps were made from model data that did not contain RM gradients, to determine the frequency of spurious transverse RM gradients appearing in the simulated RM maps. Examples of the total intensity maps of the model sources used in the simulation are shown in Fig.~7 in this Appendix, and the results of the RM Monte Carlo simulations are shown in Figs.~8--15 in this Appendix (we do not show the results for the jet width of 0.50~mas, since these are very similar to those for the 0.35-mas jet width). The panels in Figs.~8--15 show (i) the RM map obtained by putting data without added thermal noise through the imaging procedure (i.e., the intrinsic RM distribution, but subject to errors due to the CLEAN process and limited $uv$ coverage); (ii) two examples of the individual ``noisy'' RM maps obtained.
Note that the colour scales for the three maps in a corresponding set have been individually chosen to highlight the RM patterns present, and may differ somewhat in some cases. In all cases, the RM gradients that were introduced into the simulated data are visible in the ``noisy'' RM maps that were obtained, even when the intrinsic width of the jet is approximately 1/20 of the beam full-width at half-maximum (FWHM). This may seem surprising, but it is clearly demonstrated by the simulated data. The magnitude of the RM gradient is increasingly reduced by the convolution as the ratio of the beam size to the intrinsic jet width increases, but the RM gradients that were initially introduced into the simulated data remain visible. In the case of jet widths much less than the beam FWHM, the appearance of individual realizations can sometimes be fairly strongly distorted by noise; however, in all cases, averaging together all the individual realizations confirms the presence of the RM gradients in the simulated images. These results essentially indicate that it may not be necessary to impose a restriction on the width spanned by an observed RM gradient, \emph{provided that the difference between the RM values observed at opposite ends of the gradient is at least $3\sigma$}. This is consistent with the results of Murphy \& Gabuzda (2012), who investigated the effect of resolution on transverse RM profiles. It is also consistent with Fig.~30 of Hovatta et al. (2012), which shows that the fraction of ``false positives'', i.e., spurious RM gradients, that were obtained in their Monte Carlo simulations did not exceed $\simeq 1\%$ when a $3\sigma$ criterion was imposed for the RM gradient, even when the observed width of the RM gradient was less than 1.5 beamwidths.
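The blanking-and-averaging step described above can be sketched as follows (array names and the toy one-dimensional gradient are ours; the 80~rad/m$^2$ cut is the one quoted above):

```python
import numpy as np

def average_rm_maps(rm_stack, err_stack, err_cut=80.0):
    """Average a stack of RM-map realizations (shape: n_real x n_pix),
    blanking pixels whose fitted RM uncertainty exceeds err_cut, as in
    the Monte Carlo procedure described above."""
    blanked = np.where(err_stack < err_cut, rm_stack, np.nan)
    return np.nanmean(blanked, axis=0)

# Toy stack: 200 noisy realizations of a 1-D transverse RM gradient
rng = np.random.default_rng(1)
truth = np.linspace(-100.0, 100.0, 32)
stack = truth + rng.normal(0.0, 40.0, size=(200, 32))
errs = np.full((200, 32), 40.0)
mean_map = average_rm_maps(stack, errs)
# The gradient survives the averaging even when single realizations are noisy
print(mean_map[0] < mean_map[-1])  # True
```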
It becomes important to place some restriction on the width spanned by the gradient if the difference between the RM values being compared is less than $3\sigma$, as was also shown clearly by the Monte Carlo simulations of Hovatta et al. (2012). \begin{figure} \begin{minipage}[t]{8.7cm} \begin{center} \includegraphics[width=8.7cm,clip]{Intrinsic.eps} \end{center} \end{minipage} \begin{minipage}[t]{8.7cm} \begin{center} \includegraphics[width=8.7cm,clip]{MEH_40_NODATA.eps} \end{center} \end{minipage} \caption[Short caption for Figure 7]{\label{fig:MC_I} (Top) Intrinsic total intensity image of the model core--jet-like source with an intrinsic length of 1.0~mas (200~pixels) and an intrinsic width of 0.20~mas (40~pixels), used for the Monte Carlo simulations. (Bottom) One realization of a ``noisy'' intensity map produced during the simulations. The convolving beam is 1.28~mas$\times$1.06~mas in PA = $-0.84^{\circ}$ (shown in the upper left-hand corner of the convolved image). The peak of the unconvolved image is $5.62\times 10^{-4}$~Jy, and the contours are 5, 10, 20, 40, and 80\% of the peak. The peak of the convolved image is 1.11~Jy/beam, and the contours are $-0.125$, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32, and 64\% of the peak. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MNR66_RMNN.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR66_RM127.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR66_RM44.eps} \end{center} \caption[Short caption for Figure 8]{\label{fig:MNR66} Results of Monte Carlo simulations using model core--jet sources with uniformly directed transverse RM gradients. The intrinsic width of the jet (RM gradient) is 0.35~mas. The convolving beam (1.28~mas$\times$1.06~mas in PA = $-0.84^{\circ}$) is shown in the lower left-hand corner of each panel.
The top panel shows the RM image obtained by processing the model data as usual, but without adding random noise or EVPA calibration uncertainty; pixels with RM uncertainties exceeding 10~rad/m$^2$ were blanked. The remaining two panels show two examples of the 200 individual RM images obtained during the simulations; pixels with RM uncertainties exceeding 80~rad/m$^2$ were blanked. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MNR40_RMNN.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR40_RM129.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR40_RM58.eps} \end{center} \caption[Short caption for Figure 9]{\label{fig:MNR40} Same as Fig.~8 for a core--jet source with the same length but an intrinsic jet width of 0.20~mas. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MNR_NN20.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR20_RM50.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR20_RM146.eps} \end{center} \caption[Short caption for Figure 10]{\label{fig:MNR20} Same as Fig.~8 for a core--jet source with the same length but an intrinsic jet width of 0.10~mas. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MNR10NN_RM.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR10_93RM.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MNR10_79RM.eps} \end{center} \caption[Short caption for Figure 11]{\label{fig:MNR10} Same as Fig.~8 for a core--jet source with the same length but an intrinsic jet width of 0.05~mas. 
} \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG66NN_RM.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG66_RM180.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG66_RM175.eps} \end{center} \caption[Short caption for Figure 12]{\label{fig:MRCG100} Results of Monte Carlo simulations using model core--jet sources with oppositely directed transverse RM gradients in the core region and inner jet. The intrinsic width of the jet (RM gradient) is 0.35~mas. The convolving beam (1.28~mas$\times$1.06~mas in PA = $-0.84^{\circ}$) is shown in the lower left-hand corner of each panel. The top panel shows the RM image obtained by processing the model data as usual, but without adding random noise or EVPA calibration uncertainty; pixels with RM uncertainties exceeding 10~rad/m$^2$ were blanked. The remaining two panels show two examples of the 200 individual RM images obtained during the simulations; pixels with RM uncertainties exceeding 80~rad/m$^2$ were blanked. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG40NN_RM.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG40_RM123.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG40_RM144.eps} \end{center} \caption[Short caption for Figure 13]{\label{fig:MCG40} Same as Fig.~12 for a core--jet source with the same length but an intrinsic jet width of 0.20~mas. } \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG20NN_RM.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG20_RM158.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG20_RM105.eps} \end{center} \caption[Short caption for Figure 14]{\label{fig:MCG20} Same as Fig.~12 for a core--jet source with the same length but an intrinsic jet width of 0.10~mas. 
} \end{figure} \begin{figure} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG10NN_RM.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG10_RM106.eps} \end{center} \begin{center} \includegraphics[width=9.2cm,clip]{MRCG10_RM110.eps} \end{center} \caption[Short caption for Figure 15]{\label{fig:MCG10} Same as Fig.~12 for a core--jet source with the same length but an intrinsic jet width of 0.05~mas. } \end{figure}
\section{Introduction} Solitons play an important role in many areas of physics. As classical solutions of non-linear field theories, they are localised structures with finite energy, which are globally regular. In general, one can distinguish topological and non-topological solitons. While topological solitons \cite{ms} possess a conserved quantity, the topological charge, that stems (in most cases) from the spontaneous symmetry breaking of the theory, non-topological solitons \cite{fls,lp} have a conserved Noether charge that results from a symmetry of the Lagrangian. Standard examples of non-topological solitons are $Q$-balls \cite{coleman}, which are solutions of theories with self-interacting complex scalar fields. These objects are stationary with an explicitly time-dependent phase. The conserved Noether charge $Q$ is then related to the global phase invariance of the theory and is directly proportional to the frequency. $Q$ can e.g. be interpreted as particle number \cite{fls}. While in standard scalar field theories it was shown that a non-renormalisable $\Phi^6$-potential is necessary \cite{vw}, supersymmetric extensions of the Standard Model (SM) also possess $Q$-ball solutions \cite{kusenko}. In the latter case, several scalar fields interact via complicated potentials. It was shown that cubic interaction terms that result from Yukawa couplings in the superpotential and supersymmetry breaking terms lead to the existence of $Q$-balls with non-vanishing baryon or lepton number or electric charge. These supersymmetric $Q$-balls have been considered recently as possible candidates for baryonic dark matter \cite{dm} and their astrophysical implications have been discussed \cite{implications}. Two interacting scalar fields are also interesting from another point of view. Up until now, the number of explicit examples of stationary soliton-like solutions that involve two interacting global scalar fields is small.
An important example is given by superconducting strings, which are axially symmetric in $2+1$ dimensions extended trivially into the $z$-direction \cite{witten}. Axially symmetric generalisations in $3+1$ dimensions, so-called vortons, have been constructed in \cite{ls}. Note that all these solutions have been constructed in models which have a renormalisable $\Phi^4$-potential. Here, we study two interacting scalar fields in $3+1$ dimensions and construct explicit examples of stationary soliton-like axially symmetric solutions consisting of two global scalar fields. While vortons possess one scalar field with an unbroken U(1) symmetry (the condensate field) and a scalar field whose U(1) is spontaneously broken (the string field), we here consider two scalar fields with unbroken U(1) symmetries. One can thus see our model as the limit of vanishing vacuum expectation value for the second scalar field. Then, stationary soliton-like objects can be constructed explicitly. Note that the model in \cite{ls} contains a renormalisable $\Phi^4$-potential, while we need a non-renormalisable $\Phi^6$-potential here. However, as stated in \cite{ls}, the explicit construction of vortons was also done using a non-renormalisable potential which contains an interaction term of the form $\Phi_1^6 \Phi_2^2$. $Q$-ball solutions in $3+1$ dimensions were first studied in detail in \cite{vw}. It was realised that next to non-spinning $Q$-balls, which are spherically symmetric, spinning solutions exist. These are axially symmetric with energy density of toroidal shape and angular momentum $J=kQ$, where $Q$ is the Noether charge of the solution and $k\in \mathbb{Z}$ corresponds to the winding around the $z$-axis. Approximate solutions of the non-linear partial differential equations were constructed in \cite{vw} by means of a truncated series in the spherical harmonics to describe the angular part of the solutions.
The full partial differential equation was solved numerically in \cite{kk}. It was also realised in \cite{vw} that in each $k$-sector, parity-even ($P=+1$) and parity-odd ($P=-1$) solutions exist. Parity-even and parity-odd refer to the fact that the solution is symmetric and anti-symmetric, respectively, with respect to a reflection through the $x$-$y$-plane, i.e. under $\theta\rightarrow \pi-\theta$. These two types of solutions are closely related to the fact that the angular part of the solutions constructed in \cite{vw,kk} is connected to the spherical harmonic $Y_0^0(\theta,\varphi)$ for the spherically symmetric $Q$-ball, to the spherical harmonic $Y_1^{1}(\theta,\varphi)$ for the spinning parity-even ($P=+1$) solution and to the spherical harmonic $Y_2^{1}(\theta,\varphi)$ for the parity-odd ($P=-1$) solution, respectively. Radially excited solutions of the spherically symmetric, non-spinning solution were also obtained. These solutions are still spherically symmetric but the scalar field develops one or several nodes for $r\in ]0,\infty[$. In relation to the apparent connection of the angular part of the known solutions to the spherical harmonics, it is natural to investigate whether ``$\theta$-angular excitations'' of the $Q$-balls exist in correspondence to the whole family of spherical harmonics $Y_L^k(\theta,\varphi)$, $-L \leq k \leq L$. This can further be motivated by the fact that, in the small field limit where a linear approximation can be used, the field equation describing the $Q$-ball becomes a standard harmonic equation that can be solved by separation of variables and whose fundamental solutions are given in terms of spherical harmonics for the angular part. Of course, it has to be checked whether this correspondence, expected from the linear limit, still holds for the full, i.e. non-linear equation.
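The parity assignments quoted above follow from the reflection property $Y_L^k(\pi-\theta,\varphi) = (-1)^{L+k}\, Y_L^k(\theta,\varphi)$ of the spherical harmonics. A quick numerical check, with the relevant low-order harmonics written out explicitly (so no special-function library is needed):

```python
import numpy as np

def Y11(theta, phi):
    # Y_1^1 with the standard Condon-Shortley normalisation
    return -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * phi)

def Y21(theta, phi):
    # Y_2^1
    return -np.sqrt(15.0 / (8.0 * np.pi)) * np.sin(theta) * np.cos(theta) * np.exp(1j * phi)

theta, phi = 0.7, 0.3
# Y_1^1 is even under theta -> pi - theta  (the P = +1 spinning solution)
print(np.isclose(Y11(np.pi - theta, phi), Y11(theta, phi)))    # True
# Y_2^1 is odd under the same reflection   (the P = -1 solution)
print(np.isclose(Y21(np.pi - theta, phi), -Y21(theta, phi)))   # True
```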
In the present paper, we present strong numerical arguments that new angularly excited solutions of the non-linear field equations exist and that the correspondence between angular excitations of the $Q$-balls and spherical harmonics indeed holds. In addition to the solutions corresponding to $Y_k^{k}$ and $Y_k^{k-1}$ for $k=1,2,3$ presented in \cite{vw}, we have constructed solutions with angular dependence and symmetries corresponding to the spherical harmonics $Y_1^0$ and $Y_2^0$. These solutions are non-spinning but constitute axially symmetric excitations with respect to the angular coordinate $\theta$. As expected, these new solutions have higher energies and charges than the spherically symmetric solutions and we would thus expect them to be unstable. These solutions thus complete the already known spectrum of $Q$-ball solutions and show that not only radial excitations of fundamental soliton solutions, but also angular excitations exist. We also study two interacting $Q$-balls and put the emphasis on the interaction between a non-spinning and a spinning $Q$-ball. In particular, we investigate the dependence of the energy and the charges of the solution on the interaction parameter and the frequencies, respectively. Next to parity-even and parity-odd solutions, we also construct solutions that have no definite parity with respect to reflection through the $x$-$y$-plane. The explicit construction of solutions with two interacting complex scalar fields is surely of interest for the astrophysical implications of such objects, especially for the construction of such objects in supersymmetric theories. Moreover, it adds to the spectrum of soliton solutions that e.g. possess no definite parity. The differential equations describing both excited as well as interacting $Q$-balls are non-linear partial differential equations, which, to our knowledge, cannot be solved analytically. We thus solve these equations numerically using an appropriate PDE solver \cite{fidi}.
Our paper is organised as follows: in Section 2, we discuss the model and give the equations and boundary conditions. In Section 3, we discuss the new $Q$-ball solutions for $k=0$, while in Section 4, we present our results for two interacting $Q$-balls. Section 5 contains our conclusions. \section{The model} In the following, we study a scalar field model in $3+1$ dimensions describing two $Q$-balls interacting via a potential term. The Lagrangian reads: \begin{equation} \label{lag} {\cal L}=\frac{1}{2}\partial_{\mu} \Phi_1 \partial^{\mu} \Phi_1^*+ \frac{1}{2}\partial_{\mu} \Phi_2 \partial^{\mu} \Phi_2^* - U(\Phi_1,\Phi_2) \end{equation} where both $\Phi_1$ and $\Phi_2$ are complex scalar fields. The potential reads: \begin{equation} U(\Phi_1,\Phi_2)=\sum_{i=1}^2\left( \alpha_i \vert\Phi_i\vert^6 - \beta_i \vert\Phi_i\vert^4 + \gamma_i \vert\Phi_i\vert^2 \right) +\lambda \vert\Phi_1\vert^2 \vert\Phi_2\vert^2 \end{equation} where $\alpha_i$, $\beta_i$, $\gamma_i$, $i=1,2$ are the standard potential parameters for each $Q$-ball, while $\lambda$ denotes the interaction parameter. In \cite{vw} it was argued that a $\Phi^6$-potential is necessary in order to have classical $Q$-ball solutions. This is still necessary for the model we have defined here, since we want $\Phi_1=0$ and $\Phi_2=0$ to be a local minimum of the potential. A pure $\Phi^4$-potential which is bounded from below would not fulfil these criteria. The Lagrangian (\ref{lag}) is invariant under the two global U(1) transformations \begin{equation} \Phi_1 \rightarrow \Phi_1 e^{i\alpha_1} \ \ \ , \ \ \ \Phi_2 \rightarrow \Phi_2 e^{i\alpha_2} \end{equation} which can be applied separately or together. 
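To make the role of the sextic term concrete, the single-field slice of the potential can be inspected numerically. The following Python sketch (our own illustration, using the parameter values $\alpha=1$, $\beta=2$, $\gamma=1.1$ adopted in the numerical sections below) confirms that $\Phi=0$ is a local minimum of $U$ while $U/\vert\Phi\vert^2$ still drops below $\gamma$ at finite field values, which is the combination of properties that opens up a frequency window for $Q$-balls:

```python
import numpy as np

# Single-field slice of the potential, U(phi) = alpha*phi^6 - beta*phi^4 + gamma*phi^2,
# with the parameter values alpha = 1, beta = 2, gamma = 1.1 used later in the paper.
alpha, beta, gamma = 1.0, 2.0, 1.1

def U(phi):
    return alpha * phi**6 - beta * phi**4 + gamma * phi**2

# phi = 0 is a local minimum: U''(0) = 2*gamma > 0 (checked by finite differences).
h = 1e-4
U2_at_0 = (U(h) - 2.0 * U(0.0) + U(-h)) / h**2
print(U2_at_0)  # ~2.2

# Q-balls require min_phi [U/phi^2] < omega^2 < gamma (cf. Section 2.2);
# here U/phi^2 = phi^4 - 2*phi^2 + 1.1 is minimised at phi^2 = 1 with value 0.1.
phi = np.linspace(1e-3, 2.0, 20001)
ratio = U(phi) / phi**2
print(ratio.min(), phi[ratio.argmin()])  # ~0.1 at phi ~ 1.0
```

A pure quartic potential bounded from below would instead give a $U/\vert\Phi\vert^2$ that is minimal at the origin with value $\gamma$, so the frequency window closes; the sketch above is simply this statement in numerical form.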
The total conserved Noether current $j^{\mu}_{(tot)}$, $\mu=0,1,2,3$, associated to these two symmetries is then just the sum of the two individually conserved currents $j^{\mu}_{1}$ and $j^{\mu}_2$: \begin{equation} j^{\mu}_{(tot)}= j^{\mu}_1 +j^{\mu}_2 = -i \left(\Phi_1^* \partial^{\mu} \Phi_1 - \Phi_1 \partial^{\mu} \Phi_1^*\right) -i \left(\Phi_2^* \partial^{\mu} \Phi_2 - \Phi_2 \partial^{\mu} \Phi_2^*\right)\ \ , \end{equation} where $\partial_{\mu} j^{\mu}_{1}=0$, $\partial_{\mu} j^{\mu}_{2}=0$ and $\partial_{\mu} j^{\mu}_{(tot)}=0$. The total Noether charge $Q_{(tot)}$ of the system is then the sum of the two individual Noether charges $Q_1$ and $Q_2$: \begin{equation} Q_{(tot)}=Q_1+Q_2= \int j_1^0 d^3 x +\int j_2^0 d^3 x \ . \end{equation} Finally, the energy-momentum tensor reads: \begin{equation} T_{\mu\nu}=\sum_{i=1}^2 \left(\partial_{\mu} \Phi_i \partial_{\nu} \Phi_i^* +\partial_{\nu} \Phi_i \partial_{\mu} \Phi_i^*\right) -g_{\mu\nu} {\cal L} \end{equation} \subsection{Ansatz} We choose as Ansatz for the fields in spherical coordinates: \begin{equation} \label{ansatz1} \Phi_i(t,r,\theta,\varphi)=e^{i\omega_i t+ik_i\varphi} \phi_i(r,\theta) \ \ , \ i=1,2 \end{equation} where the $\omega_i$ and the $k_i$ are constants. Since we require $\Phi_i(\varphi)=\Phi_i(\varphi+2\pi)$, $i=1,2$, we have that $k_i\in \mathbb{Z}$. It was moreover demonstrated in \cite{vw,kk} that $Q$-balls exist only in a specific parameter range $\omega_{min} < \omega < \omega_{max}$ and that the charge $Q$ tends to infinity when either $\omega \rightarrow \omega_{min}$ or $\omega \rightarrow \omega_{max}$. We discuss these limits in the 2-$Q$-ball system in the following section. 
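As a quick consistency check of this ansatz, the charge density $j^0$ can be evaluated numerically at a sample point (a self-contained sketch; the values of $\omega$, $k$ and of the profile $\phi$ are arbitrary sample numbers, and we take $\partial^0=\partial_t$):

```python
import cmath

# Stationary ansatz Phi = exp(i(omega*t + k*varphi)) * phi(r,theta); the profile
# value p below stands in for phi(r,theta) at one fixed point in space.
omega, k, p = 0.8, 1, 0.37

def Phi(t, varphi):
    return cmath.exp(1j * (omega * t + k * varphi)) * p

# j^0 = -i (Phi^* dPhi/dt - Phi dPhi^*/dt), with the time derivative taken numerically
t, varphi, h = 0.4, 1.3, 1e-6
dPhi = (Phi(t + h, varphi) - Phi(t - h, varphi)) / (2 * h)
j0 = -1j * (Phi(t, varphi).conjugate() * dPhi - Phi(t, varphi) * dPhi.conjugate())

print(j0.real, 2 * omega * p**2)  # both ~0.219: j^0 = 2*omega*phi^2
```

The charge density is thus time- and phase-independent and equal to $2\omega\phi^2$, which upon integration over space reproduces the Noether charges given in the next section.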
The Noether charges of the solution then read: \begin{equation} Q_i = 2\omega_i \int \vert \Phi_i \vert^2 \ d^3 x = 4\pi \omega_i \int_{0}^{\pi} \int_0^{\infty} r^2 \sin\theta \ dr \ d\theta \ \phi_i^2 \ \ \ , \ \ i=1,2 \end{equation} while the energy is given by the volume integral of the $tt$-component of the energy-momentum tensor: \begin{eqnarray} E=\int T_{00} \ d^3 x &=&2\pi \int\limits_0^{\pi}\int\limits_0^{\infty} r^2 \ \sin\theta \ dr \ d\theta \left[\sum_{i=1}^2\left(\omega_i^2 \phi_i^2 + (\phi_i')^2 + \frac{(\dot{\phi}_i)^2 }{r^2} \right.\right. \nonumber \\ &+& \left. \left. \frac{k_i^2\phi_i^2}{r^2\sin^2\theta} + \alpha_i \phi_i^6 -\beta_i \phi_i^4 + \gamma_i \phi_i^2 \right) + \lambda \phi_1^2 \phi_2^2 \right] \end{eqnarray} where the prime and dot denote the derivative with respect to $r$ and $\theta$, respectively. For $k_i\neq 0$, the solutions have non-vanishing angular momentum that is quantised. The total angular momentum $J$ is the sum of the angular momenta of the two individual $Q$-balls: \begin{equation} J=\int T_{0\varphi} d^3 x =J_1+J_2= k_1 Q_1 + k_2 Q_2 \ . \end{equation} We will thus in the following refer to solutions with $k_i=0$ as non-spinning and to solutions with $k_i\neq 0$ as spinning. The Euler-Lagrange equations read: \begin{equation} \label{eom} \phi_i''+ \frac{2}{r} \phi_i' + \frac{1}{r^2} \ddot{\phi}_i + \frac{1}{r^2} \cot\theta \dot{\phi}_i - \frac{k_i^2}{r^2\sin^2\theta}\phi_i + \omega_i^2 \phi_i = 3\alpha_i \phi_i^5 - 2\beta_i \phi_i^3 + \gamma_i \phi_i + \lambda \phi_i \phi_j^2 \end{equation} with $i=1,2$ and $j\neq i$. The boundary conditions, which result from requirements of regularity, finiteness of the energy and the symmetry of the solution, are: \begin{equation} \label{bc1} \partial_r\phi_i(r=0,\theta)= 0 \ , \ \phi_i(r=\infty,\theta)=0 \ , \ \partial_{\theta}\phi_i(r,\theta=0,\pi)=0 \ , \ i=1,2 \ . 
\end{equation} for non-spinning solutions with $k_i=0$ and \begin{equation} \label{bc2} \phi_i(r=0,\theta)= 0 \ , \ \phi_i(r=\infty,\theta)=0 \ , \ \phi_i(r,\theta=0,\pi)=0 \ , \ i=1,2 \ . \end{equation} for spinning solutions with $k_i\neq 0$. \subsection{Bounds on $\omega_1$ and $\omega_2$ in the 2-$Q$-ball system} In \cite{vw,kk} the bounds on the frequency $\omega$ have been discussed in the case of one $Q$-ball. Here, we note that these bounds have to be modified if one considers two interacting $Q$-balls. The set of equations (\ref{eom}) can be interpreted as the mechanical equations describing the frictional motion of a particle in two dimensions. The effective potential in this case reads: \begin{equation} V(\phi_1,\phi_2)= \frac{1}{2} (\omega_1^2\phi_1^2 + \omega_2^2\phi_2^2) - \frac{1}{2} U(\phi_1,\phi_2) \ . \end{equation} $Q$-ball solutions exist provided the configuration $(\phi_1=0,\phi_2=0)$ corresponds to a local maximum of the effective potential and provided the effective potential has positive values in any radial direction from the origin in the $\phi_1$-$\phi_2$-plane. This leads to non-trivial bounds for the parameters $\omega_1$ and $\omega_2$. The former condition leads to the requirement that \begin{equation} \omega_1^2 < \omega_{1,max}^2 = \gamma_1 \ \ , \ \ \omega_2^2 < \omega_{2,max}^2 = \gamma_2 \ \ . \end{equation} The latter condition leads to a more complicated domain of existence in the $\omega_1$-$\omega_2$-plane. To describe this condition, we introduce the polar decomposition of $\phi_1$ and $\phi_2$ as follows: \begin{equation} \phi_1=\rho\cos\chi \ \ , \ \ \phi_2=\rho\sin\chi \ , \end{equation} where $0 \le \chi < 2\pi$ and $0 \le \rho < \infty$. 
The condition on the frequencies $\omega_1$ and $\omega_2$ then reads: \begin{equation} \omega_1^2 \cos^2\chi + \omega_2^2 \sin^2\chi > {\rm min}_{\rho}\left[U(\rho,\chi)/\rho^2\right] \ \ , \ \ \forall \ \chi \ . \end{equation} In the particular case that we have studied throughout this paper, namely $\alpha_1=\alpha_2=1$, $\beta_1=\beta_2=2$ and $\gamma_1=\gamma_2=1.1$, this inequality takes the form: \begin{eqnarray} \omega_1^2 \cos^2\chi + \omega_2^2 \sin^2\chi & > & \left[-5\lambda^2 \cos^4\chi\sin^4\chi+ 20\lambda\cos^2\chi\sin^2\chi (\cos^4\chi + \sin^4\chi) \nonumber \right. \\ &+& \left. 2(\cos^8\chi + \sin^8\chi + 11 \cos^6\chi\sin^2\chi + 11 \sin^6\chi\cos^2\chi \nonumber \right. \\ &-& \left. 20 \cos^4\chi\sin^4\chi)\right]/[20(\cos^4\chi +\sin^4\chi -\cos^2\chi\sin^2\chi)] \end{eqnarray} For $\chi= n \pi/2$, $n=0,1,2,...$, we recover the results of the one-$Q$-ball system discussed in \cite{vw,kk}. For all other values of $\chi$, the limiting values for $\omega_1$ and $\omega_2$ will depend on the value of the interaction coupling $\lambda$. E.g. for $\phi_1=\phi_2$, i.e. $\chi=\pi/4$, we find: \begin{equation} \omega_1^2 + \omega_2^2 > 1/5 + \lambda - 1/8 \lambda^2 \ . \end{equation} Thus, for small positive $\lambda$, the lower bound on the value of $\omega_1^2 + \omega_2^2$ will be larger than in the non-interacting limit. \section{New non-spinning $Q$-ball solutions for $\alpha_2=\beta_2=\gamma_2=\lambda=0$} In order to be able to understand the structure of a system of two $Q$-balls, we have reconsidered the one-$Q$-ball system. We set all quantities with index ``2'' to zero in the following and omit the index ``1'' for the remaining quantities. In this section, we would like to point out that more solutions exist than those previously discussed in the literature. To see this, we first consider the equation for one $Q$-ball with vanishing potential. 
This reads: \begin{equation} \phi''+ \frac{2}{r}\phi' + \frac{1}{r^2} \ddot{\phi} + \frac{1}{r^2} \cot\theta \dot{\phi} - \frac{k^2}{r^2\sin^2\theta}\phi + \omega^2 \phi=0 \end{equation} Although the solutions of the above equation are well known, it will be useful for the following to recall their properties. Using the standard separation of variables, the solutions read: \begin{equation} \phi(r,\theta,\varphi)\propto \frac{J_{L+1/2}(\omega r)}{\sqrt{r}} Y_L^k(\theta,\varphi) \end{equation} where $J_{L+1/2}$ denotes the Bessel function of the first kind, while $Y_L^k$ are the standard spherical harmonics with $-L \le k \le L$. One may hope that solutions of the full equations exist with the discrete symmetries corresponding to those of the spherical harmonics. Of course, the non-linear potential interaction will deform the radial part of the solutions of the linear equation in a highly non-trivial manner. The solutions of the full equation constructed so far for $k=0$ have been spherically symmetric. By the above arguments, axially symmetric solutions should equally exist with an angular dependence of the form $Y_L^0$; e.g. for $L=1$, the angular dependence should be of the form $\cos\theta$. In the following, we will denote the solutions of the full non-linear equations with angular symmetries corresponding to the symmetries of the spherical harmonic $Y_L^k$ by $\phi_L^k$. \subsection{Numerical results} \begin{figure} \includegraphics[width=8cm]{newsol1a.eps} \includegraphics[width=8cm]{newsol1b.eps} \caption{\label{fig1a} The profile of the function $\phi_1^0$ is shown for $\omega = 0.8$, $\alpha=1$, $\beta=2$, $\gamma=1.1$ (left). The corresponding energy density $T_{00}$ is also given (right).} \end{figure} The partial differential equation has been solved numerically subject to the boundary conditions (\ref{bc1}) or (\ref{bc2}) using the finite difference solver FIDISOL \cite{fidi}. 
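As a sanity check on this linear limit, one can verify numerically that the radial part indeed satisfies the free equation once $Y_L^k$ is separated off. Writing $J_{L+1/2}(\omega r)/\sqrt{r}$ in terms of the spherical Bessel function $j_L(\omega r)$, a short sketch with scipy (our own illustration, here for $L=1$):

```python
import numpy as np
from scipy.special import spherical_jn

# Radial equation after separating off Y_L^k from the free (vanishing-potential)
# equation:
#   f'' + (2/r) f' + (omega^2 - L(L+1)/r^2) f = 0,
# solved by f(r) = j_L(omega*r), proportional to J_{L+1/2}(omega*r)/sqrt(r).
L, omega = 1, 0.8
f = lambda r: spherical_jn(L, omega * r)

r = np.linspace(0.5, 20.0, 200)
h = 1e-4
f1 = (f(r + h) - f(r - h)) / (2 * h)          # central first derivative
f2 = (f(r + h) - 2 * f(r) + f(r - h)) / h**2  # central second derivative

residual = f2 + (2 / r) * f1 + (omega**2 - L * (L + 1) / r**2) * f(r)
print(np.abs(residual).max())  # ~0 up to finite-difference error
```

The residual vanishes to finite-difference accuracy, confirming the separable form quoted above; the non-linear terms of the full equation then deform only the radial profile, not the angular symmetry type.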
We have mapped the infinite interval of the $r$ coordinate $[0:\infty]$ to the finite compact interval $[0:1]$ using the new coordinate $z:=r/(r+1)$. We have typically used grid sizes of $150$ points in $r$-direction and $50$ points in $\theta$-direction. The solutions have relative errors of $10^{-3}$ or smaller. Throughout this section, we choose $\alpha_1\equiv\alpha=1$, $\beta_1\equiv\beta=2$, $\gamma_1\equiv\gamma=1.1$. In Fig.\ref{fig1a} (left), we show the profile of a new solution that we obtained for $k=0$ and $\omega=0.8$. This solution looks like a deformation of the spherical harmonic $Y_1^0$ with the appropriate symmetry with respect to $\theta=\pi/2$ and is clearly axially symmetric. In particular it fulfils $\phi_1^0(r,\pi/2)=0$. The field $\phi_1^0(r,\theta)$ is maximal at a finite distance from the origin on the positive $z$-axis. Moreover, the configuration is anti-symmetric under reflection through the $x$-$y$-plane, i.e. under $\theta \rightarrow \pi-\theta$. Thus the solution is parity-odd: $P=-1$. Note that we have only plotted the function for $\theta\in [0:\pi/2]$, but that we have verified the symmetry of the solution. \begin{figure}[!htb] \centering \leavevmode\epsfxsize=11.0cm \epsfbox{new_fig.eps}\\ \caption{\label{fignew} $\phi_1^0$ (solid), $\phi_2^0$ (short dashed) and $\phi_3^0$ (long dashed) are shown as functions of $\theta$ for fixed values of $r$. We have chosen $r\sim 5 $ for $\phi_1^0$, $r\sim 2$ for $\phi_2^0$ and $r\sim 6 $ for $\phi_3^0$. The corresponding spherical harmonics $Y_1^0\propto \cos\theta$, $Y_2^0\propto 3\cos^2\theta-1$ and $Y_3^0\propto 5\cos^3\theta-3\cos\theta$ (with an appropriate normalisation) are also shown. } \end{figure} We also present the corresponding energy density $T_{00}$ in Fig.\ref{fig1a} (right). 
It shows that the energy density of the solution is mainly concentrated within two small ``balls'' situated on the positive $z$-axis (at $z\approx 2.4$ and $z\approx 7.6$) and separated by a minimum (at $z\approx 5$). The position of this minimum coincides with the maximum of the scalar field, $(\phi_1^0)_{max} \approx \phi_1^0(5,0) \approx 1.2$. It can be checked that this value corresponds roughly to a local minimum of the potential, while the partial derivatives are evidently small in this region, explaining the occurrence of a minimal value of the energy density at $(x,y,z) \approx (0,0,5)$. Of course, due to the anti-symmetry of the solution this pattern is repeated on the negative $z$-axis. \begin{figure}[!htb] \centering \leavevmode\epsfxsize=10.0cm \epsfbox{newsol2.eps}\\ \caption{\label{fig2} The profile of the function $\phi_2^0$ for $\omega = 0.8 $, $\alpha=1$, $\beta=2$, $\gamma=1.1$. } \end{figure} The classical energy and charge of this new solution are higher than those of the spherically symmetric $k=0$ solution (see Table 1 and Table 2 below), but lower than those of the $k=1$ spinning $Q$-ball. In order to investigate further our idea of constructing new solutions as deformations of the spherical harmonics, we have also investigated solutions with higher values of $L$ and we managed to construct solutions $\phi_2^0$ and $\phi_3^0$ corresponding in their angular symmetries to those of the spherical harmonics $Y_2^0 \propto 3\cos^2\theta -1$ and $Y_3^0 \propto 5\cos^3\theta -3\cos\theta$, respectively. In Fig.\ref{fignew}, we plot $\phi_1^0$, $\phi_2^0$ and $\phi_3^0$ as functions of $\theta$ for a fixed value of $r$ together with the corresponding spherical harmonics $Y_1^0$, $Y_2^0$ and $Y_3^0$. Here, we have chosen $r\sim 5$ for $\phi_1^0$, $r\sim 2$ for $\phi_2^0$ and $r\sim 6$ for $\phi_3^0$. 
The first thing to notice is that the symmetries of the solutions $\phi_1^0$, $\phi_2^0$ and $\phi_3^0$ with respect to reflection at $\theta=\pi/2$ are exactly equal to those of the corresponding spherical harmonics. The actual solutions are, of course, deformed with respect to the spherical harmonics, but the correspondence is apparent. E.g. the solution $\phi_2^0$ has $\partial_\theta \phi_2^0(r, \pi/2)=0$ (in contrast to the solution $\phi_1^0$ which has $\phi_1^0(r, \pi/2)=0$). We do not show the energy density of $\phi_2^0$ and $\phi_3^0$ here, since it resembles that shown in Fig. 1. We believe that the correspondence also holds for higher spherical harmonics. \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline\hline & $\phi_0^0$ & $\phi_1^0$ & $\phi_2^0$ & $\phi_1^1$\\ \hline $E$ & $73.6$ & $141.6$ & $170.9$ & $223.5$\\ $Q/\omega$ & $75.2$ & $146.9$ & $176.1$ & $220.4$ \\ $P$ & $+1$ & $-1$ & $+1$ & $+1$ \\ symmetry & spherical & axial & axial & axial \\ \hline \end{tabular} \caption{The energy $E$, the charge per frequency $Q/\omega$, the parity $P$ and the symmetry of the first few $Q$-ball solutions are given for $\omega=0.8$. } \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline\hline & $\phi_0^0$ & $\phi_1^0$ & $\phi_2^0$ & $\phi_1^1$\\ \hline $E$ & $61.2$ & $115.8$ & $179$ & $195.6$\\ $Q/\omega$ & $60.0$ & $114.7$ & $192.1$ & $186.3$ \\ $P$ & $+1$ & $-1$ & $+1$ & $+1$ \\ symmetry & spherical & axial & axial & axial \\ \hline \end{tabular} \caption{The energy $E$, the charge per frequency $Q/\omega$, the parity $P$ and the symmetry of the first few $Q$-ball solutions are given for $\omega=0.84$. } \end{center} \end{table} Since we have presented strong numerical evidence that the correspondence with the spherical harmonics holds, it is justified to label the different solutions of the field equation by means of the quantum numbers of the corresponding spherical harmonic, i.e. 
by $L$ and $k$ referring to $Y_L^k$, with $L,k$ integers and $-L \le k \le L $. Needless to say, the numerical construction becomes more involved when the difference $L-|k|$ increases. Adopting these notations and fixing the potential according to $\alpha=1$, $\beta=2$, $\gamma=1.1$, we find for the solutions corresponding to $\omega = 0.8$ and $\omega=0.84$ the values for the energy $E$ and charge per frequency $\frac{Q}{\omega}$ given in Table 1 and Table 2, respectively. The first three solutions $\phi_L^0$, $L=0,1,2$ in this list are static (i.e. non-spinning) while the last, $\phi_1^1$, is stationary (i.e. spinning). For all the solutions we constructed, the energy of the non-spinning solutions is lower than the energy of the spinning ones. \begin{figure}[!htb] \centering \leavevmode\epsfxsize=12.0cm \epsfbox{t00.eps}\\ \caption{\label{t00} The energy density $T_{00}$ of the 2-$Q$-ball solution consisting of a spherically symmetric, non-spinning $Q$-ball ($k_1=0$) and a spinning $Q$-ball ($k_2=1$) is shown for $\omega_1 = \omega_2 = 0.8$, $\alpha_1=\alpha_2=1$, $\beta_1=\beta_2=2$, $\gamma_1=\gamma_2=1.1$ and for three different values of $\lambda = 0$, $1$, $-0.5$. } \end{figure} \section{Interacting $Q$-balls} Since $Q$-balls that result from the interaction of several scalar fields exist in supersymmetric extensions of the Standard Model, we investigate here the interaction of two classical $Q$-balls as a toy model for these systems. For two spherically symmetric $Q$-balls ($k_1=k_2=0$) in interaction, the 2-$Q$-ball solution is still spherically symmetric and the domain of existence in the $\omega_1$-$\omega_2$-plane can be determined by using the reasoning given in Section 2.2. Here, we put the emphasis on solutions where the two $Q$-balls have different symmetries and study the effect of the direct interaction parametrised by the coupling constant $\lambda$. 
We believe that a particularly interesting case is the interaction between a spherically symmetric, non-spinning $Q$-ball ($k_1=0$) and a spinning $Q$-ball ($k_2\neq 0$). We have thus restricted our analysis to this case and set $k_1=0$ and $k_2=1$ in the following. Note that we will index all quantities related to the spherical $Q$-ball in the following with ``1'', while all quantities related to the axially symmetric $Q$-ball will be indexed with ``2''. For later use, we define the ``binding energy'' of the solution according to \begin{equation} \Delta E = E- E_{k_1=0}- E_{k_2=1} \ . \end{equation} It represents the difference between the energy $E$ of the 2-$Q$-ball configuration and the sum of the energies of the two single (i.e. non-interacting) $Q$-balls $E_{k_1=0}$, $E_{k_2=1}$ with the same frequency. We expect those solutions with $\Delta E <0$ to be stable, while those with $\Delta E > 0$ would be unstable. \subsection{Numerical results} We have solved the two coupled partial differential equations using the solver FIDISOL \cite{fidi} for several values of $\omega_1$, $\omega_2$ and $\lambda$, fixing $\alpha_1=\alpha_2=1$, $\beta_1=\beta_2=2$ and $\gamma_1=\gamma_2=1.1$. As starting profiles, we have used the corresponding non-interacting $Q$-ball solutions. For $\lambda=0$, these solve the two decoupled partial differential equations. We have then slowly increased the parameter $\lambda$ to obtain the interacting solutions. \subsubsection{$\omega_1=\omega_2$} In order to understand the influence of the interaction parameter $\lambda$, we show the energy density $T_{00}$ for $\omega_1=\omega_2=0.8$ and three different values of $\lambda$ in Fig.\ref{t00}. For $\lambda=0$, the two $Q$-balls are non-interacting and the energy density is just a simple superposition of the energy densities of the two individual $Q$-balls. For $\lambda \neq 0$ the $Q$-balls interact. 
For $\lambda > 0$, it is energetically favourable to have the two $Q$-balls' cores in different regions of space. As seen in Fig.\ref{t00} for $\lambda=1$, the spinning $Q$-ball seems to be ``pushed away'' from the non-spinning, spherically symmetric one. For $\lambda < 0$, it is energetically favourable to have the two $Q$-balls sitting ``on top of each other''. This is shown in Fig.\ref{t00} for $\lambda=-0.5$, where the two $Q$-balls seem to be localised at the same place. We have also studied the dependence of the energy $E$, the binding energy $\Delta E$ and the two charges $Q_1$ and $Q_2$ on the interaction parameter $\lambda$. The results are shown in Fig.\ref{lambdavary} for $\omega_1=\omega_2=0.8$. All quantities increase with increasing $\lambda$; in particular, it is evident that the 2-$Q$-ball configuration is energetically favoured for $\lambda <0$ as compared to $\lambda > 0$. We would thus expect the solution to be stable for $\lambda <0$ and unstable for $\lambda > 0$. Following our discussion in Section 2.2, we have also studied the dependence of the energy $E$ and of the charges $Q_1$ and $Q_2$ on the frequencies $\omega_1$ and $\omega_2$. Our results for $\omega_1=\omega_2$ are shown in Fig.\ref{fig_qq2} for $\lambda=-0.5$, $0$ and $0.5$, respectively. \begin{figure}[!htb] \centering \leavevmode\epsfxsize=10.0cm \epsfbox{del_mix.eps}\\ \caption{\label{lambdavary} The quantities $E$, $Q_1$, $Q_2$ and $\Delta E$ are shown as functions of the interaction parameter $\lambda$ for $\omega_1= \omega_2= 0.8$, $\alpha_1=\alpha_2=1$, $\beta_1=\beta_2=2$, $\gamma_1=\gamma_2=1.1$. } \end{figure} As expected, the energy $E$ for a given frequency $\omega_1=\omega_2$ is higher (resp. lower) than in the non-interacting case for positive (resp. negative) values of $\lambda$. 
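The sign dependence can be made plausible with a simple one-dimensional toy computation: the energy density contains the term $\lambda\phi_1^2\phi_2^2$, whose overlap integral is positive, so separating the cores lowers the interaction energy when $\lambda>0$, while $\lambda<0$ favours overlapping cores. A sketch with Gaussian toy profiles (our own illustrative choice, not the actual $Q$-ball profiles):

```python
import numpy as np

# Overlap integral entering the interaction energy, int dz phi_1^2 phi_2^2,
# for two toy Gaussian cores a distance d apart along the z-axis.
z = np.linspace(-20.0, 20.0, 4001)
dz = z[1] - z[0]

def overlap(d):
    phi1 = np.exp(-z**2)          # core of the first Q-ball at z = 0
    phi2 = np.exp(-(z - d)**2)    # core of the second Q-ball displaced by d
    return np.sum(phi1**2 * phi2**2) * dz

# The overlap is positive and shrinks as the cores separate, so the interaction
# energy lambda * overlap is reduced by separation for lambda > 0 and by
# superposition for lambda < 0.
print(overlap(0.0), overlap(3.0))  # ~0.886 and ~1.1e-4
```

This is only a heuristic for the observed core displacement; the actual configurations also rearrange their profiles, which is what the full numerics capture.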
\begin{figure} \includegraphics[width=8cm]{eqq_om12a.eps} \includegraphics[width=8cm]{eqq_om12b.eps} \caption{\label{fig_qq2} The energy $E$ (left) and the charges $Q_1$ and $Q_2$ (right) are shown as functions of the frequency $\omega_1=\omega_2$ for $\lambda= -0.5,0,0.5$.} \end{figure} As before, we find that the solutions exist in a given interval of the frequency: $\omega_{1,min}(\lambda) \leq \omega_1 \leq \omega_{1,max}(\lambda)$ (and equally for $\omega_2$ since $\omega_1=\omega_2$). We have determined the bounds on $\omega_1$ and $\omega_2$ in Section 2.2 for two spherically symmetric $Q$-balls. Here, we would expect these values to change slightly since we have a system of one spherically symmetric and one axially symmetric $Q$-ball. However, we see that the qualitative results are similar. We observe that for $\lambda \ge 0$, the values of the energy $E$ and of the charges $Q_1$, $Q_2$ diverge at $\omega_1=\omega_{1,min}$ and $\omega_1=\omega_{1,max}$. Following the discussion of Section 2.2 we find that the maximal value of $\omega_1$ is independent of $\lambda$. This can be clearly seen in Fig. \ref{fig_qq2}, where the energy $E$ and the charges $Q_1$ and $Q_2$ diverge at $\omega_1=\omega_{1,max}\approx 1.035$ for all three values of $\lambda$. Note that this maximal value is only slightly lower than the bound given in Section 2.2: $\omega^2_{1,max}=1.1$. The reason why the bound is not saturated is that here we are dealing with an axially symmetric solution interacting with a spherically symmetric one. Analytic arguments of the type given in Section 2.2 are, however, only possible if the Euler-Lagrange equations are ordinary differential equations, i.e. only in the case where the solutions are spherically symmetric. So, it is not surprising that the analytic values differ from the numerical ones. On the other hand, the minimal value of $\omega_1$ is $\lambda$-dependent. This can be seen in Fig. \ref{fig_qq2}. 
We have given our results only for $\omega \ge 0.6$ in this figure since the construction of solutions becomes increasingly difficult for $\omega < 0.6$. However, it can be clearly seen that the energy $E$ and $Q_2$ diverge at different values of $\omega_1=\omega_{1,min}$. In agreement with Section 2.2, we find that $\omega_{1,min}$ increases with increasing (small) $\lambda$. For $\lambda < 0$ the behaviour at the lower bound of $\omega_1$ changes. We observe that $Q_1$ corresponding to the spherically symmetric field $\phi_1$ decreases when $\omega_1$ decreases. The analysis of the profile of the solution reveals that the field $\phi_1$ deviates only slightly from the spherically symmetric configuration for frequencies close to $\omega_{1,max}$. However, it gets more and more deformed in the equatorial plane when $\omega_1$ decreases. At the same time, the field $\phi_2$ increases in the equatorial plane. This phenomenon is illustrated in Fig.\ref{coupe} for $\lambda = - 0.5$, $\omega = 0.6$ and $\omega= \omega_{1,max}$, respectively. In this figure, the fields $\phi_1$ and $\phi_2$ as well as the energy density $T_{00}$ are shown as functions of $r$ for two angles $\theta= 0$ and $\theta = \pi/2$, respectively. \begin{figure}[!htb] \centering \leavevmode\epsfxsize=10.0cm \epsfbox{coupe.eps}\\ \caption{\label{coupe} The profiles of $\phi_1$, $\phi_2$ and the energy density $T_{00}$ are shown for $\theta=0$ and $\theta=\pi/2$, respectively. Here $\lambda= -0.5$. 
The upper figure is for $\omega=0.6$, the lower for $\omega=\omega_{1,max}\approx 1.035$.} \end{figure} \subsubsection{Solutions with $\omega_1 \neq \omega_2$} \begin{figure}[!htb] \centering \leavevmode\epsfxsize=10.0cm \epsfbox{t00bis.eps}\\ \caption{\label{t00bis} The energy density $T_{00}$ of the 2-$Q$-ball solution consisting of a spherically symmetric, non-spinning $Q$-ball ($k_1=0$) and a spinning $Q$-ball ($k_2=1$) is shown for $\omega_1 = 0.65$, $\omega_2 = 1$, $\alpha_1=\alpha_2=1$, $\beta_1=\beta_2=2$, $\gamma_1=\gamma_2=1.1$ and for four different values of $\lambda = 0.5$, $0$, $-0.2$ and $-0.5$.} \end{figure} We have also constructed 2-$Q$-ball solutions for $\omega_1 \neq \omega_2$. The energy density $T_{00}$ of a 2-$Q$-ball solution corresponding to $\omega_1=0.65$ and $\omega_2=1$ is shown in Fig.\ref{t00bis} for four different values of $\lambda$. The result is qualitatively similar to the case $\omega_1=\omega_2$. This figure, however, suggests very clearly that for $\lambda < 0$ the $k_2=1$ $Q$-ball has a tendency to disappear from the 2-$Q$-ball system. For instance, the maximal value of the $\phi_2$ field, $\vert \phi_{2,max}\vert$, decreases for decreasing $\lambda$. We have also studied the dependence of the solution's conserved quantities on $\omega_2 = \omega_1/0.65$ for $\lambda = \pm 0.5$. The dependence of the energy $E$ and the charges $Q_1$, $Q_2$ is shown in Fig.\ref{eqq_mix}. These results strongly suggest that for $\lambda<0$ and in the region of the parameter space chosen, the field $\phi_2$ corresponding to the $k_2=1$ $Q$-ball tends uniformly to zero at a critical value of $\omega_2=\omega_2^{(cr)}$ such that $Q_2\rightarrow 0$ for $\omega_2\rightarrow \omega_2^{(cr)}$. Only the field $\phi_1$ remains non-trivial when $\omega_2\leq \omega_2^{(cr)}$. This effect can also be observed in Fig.\ref{t00bis}, where the solution for $\lambda=-0.5$ has nearly lost all its axially symmetric character. 
We observe the inverse phenomenon for $\omega_1=c \omega_2$ with a constant $c > 1$. We do not present our detailed results here since they are qualitatively equivalent to the case discussed above. We find that $Q_1\rightarrow 0$ for $\omega_1\rightarrow \omega_1^{(cr)}$. Thus, the spherically symmetric solution disappears from the system, while $\phi_2$ remains non-trivial for $\omega_1 \leq \omega_1^{(cr)}$. Summarising, while in the case $\omega_1=\omega_2$ and $\lambda < 0$ it is the charge $Q_1$ associated to the spherical $Q$-ball that tends to zero for $\omega_1\rightarrow \omega_1^{(cr)}$, for $\omega_1\neq \omega_2$ and $\lambda < 0$ it is the charge $Q_i$ of the $Q$-ball with the higher frequency that tends to zero for $\omega_i\rightarrow \omega_i^{(cr)}$, $i=1,2$. Note that nothing similar is observed when $\lambda \ge 0$. \begin{figure}[!htb] \centering \leavevmode\epsfxsize=12.0cm \epsfbox{eqq_mix.eps}\\ \caption{\label{eqq_mix} The energy $E$ and charges $Q_1$, $Q_2$ are shown as functions of $\omega_2$ for $\omega_1 = 0.65 \omega_2$ and $\lambda=\pm 0.5$.} \end{figure} While the 1-$Q$-ball solutions known so far are always either parity-even or parity-odd with respect to $\theta\rightarrow \pi-\theta$, we have constructed several examples of 2-$Q$-ball solutions that do not have a defined parity. One such solution is shown in Fig.\ref{asym} (lower part) together with a parity-even solution (upper part). These solutions exist for exactly the same values of the coupling constants. Both functions $\phi_1, \phi_2$ are clearly neither parity-even nor parity-odd and the field $\phi_2$ in addition possesses nodes in the radial direction. This solution is thus an asymmetric, radially excited 2-$Q$-ball solution. As expected, we observe that this asymmetric solution has much higher energy and charges than the corresponding parity-even solution. 
The investigation of solutions of this type and their possible bifurcation into branches of solutions with defined parity is currently underway. \begin{figure}[!htb] \centering \leavevmode\epsfxsize=12.0cm \epsfbox{contour1.eps}\\ \caption{\label{asym} The contour plots for $\phi_1$ and $\phi_2$ of a parity-even 2-$Q$-ball solution (upper part) and of an asymmetric 2-$Q$-ball solution (lower part) are shown for $\lambda = -0.5$, $\omega_1=0.585$ and $\omega_2=0.9$.} \end{figure} \section{Concluding remarks} In this paper, we have presented numerical evidence that $Q$-ball solutions admit several types of excitations labelled by integers. So far, it was known that the static, spherically symmetric solution is the ``ground state'' of a series of radially excited solutions. Families of spinning solutions are also known; they are axially symmetric and can be labelled according to the winding $k$ around the axis of symmetry. Here, we present evidence that excitations with respect to $\theta$ can be constructed as well. Generally, the previous results and the present analysis suggest that families of elementary solutions of the field equations exist and are labelled by $n$, $L$, $k$, where $n$ refers to the number of nodes in radial direction, while $L$, $k$ refer to the ``quantum numbers'' related to the spherical harmonics. At the moment, the only analytic argument we have for this property is its analogy to the linearised version (i.e. small field limit) of the equation, where this result holds true by standard harmonic analysis. It is likely that these qualitative properties of the solutions persist in the case of the full non-linear equations. We have also studied a system of two interacting $Q$-balls and have constructed several examples of axially symmetric, stationary solutions that carry conserved currents and charges. 
We observe that the 2-$Q$-ball solutions exist in a finite range of the frequency $\omega_{i,min}\le \omega_i \le \omega_{i,max}$, $i=1,2$, where $\omega_{i,max}$ is independent of the interaction coupling, while $\omega_{i,min}$ depends on the interaction coupling in a highly non-trivial manner. We find that the charges $Q_i$, $i=1,2$ of the two interacting $Q$-balls tend to infinity when $\omega_i\rightarrow \omega_{i,max}$ or $\omega_i\rightarrow \omega_{i,min}$ as long as $\lambda \ge 0$. For $\lambda < 0$, however, we observe that the charge $Q_i$ associated to the $Q$-ball with the higher frequency $\omega_i$ tends to zero for $\omega_i\rightarrow \omega_{i}^{(cr)} < \omega_{i,max}$. For $\omega_{i,min} \le \omega_i \le \omega_{i}^{(cr)}$ only the remaining field $\phi_j$, $j\neq i$ is non-zero. In a future publication, we intend to construct solutions with the more realistic potential available from supersymmetry \cite{kusenko} and put the emphasis on the possibility of constructing $Q$-balls and their excited and/or spinning versions with potentials involving only quartic terms in the scalar fields. \\ \\ \\ {\bf Acknowledgments} We thank Y. Verbin for discussions at the first stages of this paper. Y.B. thanks the Belgian FNRS for financial support.
\titlespacing*{\subsubsection}{0pt}{3pt plus 1pt minus 1pt}{3pt plus 1pt minus 1pt} \let\OLDthebibliography\thebibliography \renewcommand\thebibliography[1]{ \OLDthebibliography{#1} \setlength{\parskip}{0pt} \setlength{\itemsep}{0pt plus 0.3ex} } \titleformat{\section}{\normalfont\fontsize{14}{16}\bfseries}{\thesection}{1em}{} \renewcommand\theadset{\bfseries}% \title{\vspace{-1cm}\OPENRBC: A Fast Simulator of Red Blood Cells at Protein Resolution} \author[*1]{Yu-Hang Tang} \author[*1]{Lu Lu} \author[1]{He Li} \author[2]{Constantinos Evangelinos} \author[2]{Leopold Grinberg} \author[2]{Vipin Sachdeva} \author[1]{George Em Karniadakis} \affil[1]{Division of Applied Mathematics, Brown University, Rhode Island, USA 02912} \affil[2]{International Business Machines Corporation} \affil[*]{Equally credited} \date{} \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{mygray}{rgb}{0.5,0.5,0.5} \definecolor{mymauve}{rgb}{0.58,0,0.82} \lstset{ % backgroundcolor=\color[rgb]{1,1,0.9}, basicstyle=\footnotesize\ttfamily, breakatwhitespace=false, breaklines=true, captionpos=b, deletekeywords={...}, escapeinside={\%*}{*)}, frame=none, keepspaces=true, keywordstyle=\color[rgb]{0.2,0.4,0.85}, commentstyle=\color[rgb]{0.6,0.6,0.6}, otherkeywords={*,...}, rulecolor=\color{black}, showspaces=false, showstringspaces=false, showtabs=false, stepnumber=2, stringstyle=\color{mymauve}, tabsize=2, title=\lstname, belowskip=-1.70 \baselineskip, aboveskip= 0.25 \baselineskip, } \begin{document} \maketitle \abstract{We present \OpenRBC\footnote{The source code is available at {\color[rgb]{0,0.3,0.8} \texttt{https://github.com/yhtang/OpenRBC} }.}, a coarse-grained molecular dynamics code, which is capable of performing an unprecedented \emph{in silico} experiment --- simulating an entire mammalian red blood cell lipid bilayer and cytoskeleton as modeled by 4 million mesoscopic particles --- using a single shared memory commodity workstation.
To achieve this, we invented an adaptive spatial-searching algorithm to accelerate the computation of short-range pairwise interactions in an extremely sparse 3D space. The algorithm is based on a Voronoi partitioning of the point cloud of coarse-grained particles, and is continuously updated over the course of the simulation. The algorithm enables the construction of the key spatial searching data structure in our code, \textit{i.e.} a lattice-free cell list, with a time and space cost linearly proportional to the number of particles in the system. The position and shape of the cells also adapt automatically to the local density and curvature. The code implements OpenMP parallelization and scales to hundreds of hardware threads. It outperforms a legacy simulator by almost an order of magnitude in time-to-solution and more than 40 times in problem size, thus providing a new platform for probing the biomechanics of red blood cells.} \keywords{coarse-grained molecular dynamics, bilayer, cytoskeleton, membrane fluctuation, vesiculation, high-performance computing} \section*{Introduction}\label{introduction} The red blood cell (RBC) is one of the simplest, yet most important cells in the circulatory system due to its indispensable role in oxygen transport. An average RBC assumes a biconcave shape with a diameter of 8 \(\mu\)m and a thickness of 2 \(\mu\)m. Without any intracellular organelles, it is supported by a cytoskeleton of a triangular spectrin network anchored by junctions on the inner side of the membrane. Therefore, the mechanical properties of an RBC can be strongly influenced by molecular level structural details that alter the cytoskeleton and lipid bilayer properties.
Both continuum models~\cite{evans1974bending, feng2006finite, powers2002fluid, helfrich1973elastic} and particle-based models~\cite{feller2000molecular, saiz2002towards, tieleman1997computer, tu1998constant} have been developed with the aim of uncovering the correlation between RBC membrane structure and properties. Continuum models are computationally efficient, but require \textit{a priori} knowledge of cellular mechanical properties such as the bending and shear moduli. Particle models are useful for extracting RBC properties from low-level descriptions of the membrane structure and defects. However, it is computationally demanding, if not prohibitive, to simulate the large number of particles required for modeling the membrane of an entire RBC. To the best of our knowledge, a bottom-up simulation of the RBC membrane at the cellular scale using particle methods remains absent. Recently, a two-component coarse-grained molecular dynamics (CGMD) RBC membrane model, which explicitly accounts for both the cytoskeleton and the lipid bilayer, was proposed~\cite{li2014erythrocyte}. The model could potentially be used for particle-based whole-cell RBC modeling because its coarse-grained nature can drastically reduce computational workload while still preserving necessary details from the molecular level. However, due to the orders of magnitude difference in the length scale between a cell and a single protein, a total of 4 million particles are still needed to represent an entire RBC. In addition, the implicit treatment of the plasma in this model eliminates the overhead for tracking the solvent particles, but also exposes a notable spatial density heterogeneity because all CG particles are exclusively located on the surface of a biconcave shell. The inside and outside of the RBC are empty.
This density imbalance imposes a serious challenge on the efficient evaluation of the pairwise force using conventional algorithms and data structures, such as the cell list and the Verlet list, which typically assume a uniform spatial density and a bounded rectilinear simulation box. In this paper we present \OpenRBC, a new simulator tailored for simulations of an entire RBC using the two-component CGMD model on multicore CPUs. As illustrated in Figure~\ref{fig:overview}, the simulator can take as input a triangular mesh of the cytoskeleton of an RBC and reconstruct a CGMD model at protein resolution with explicit representations of both the cytoskeleton and the lipid bilayer. This type of whole-cell simulation of RBCs can thus realize an array of \emph{in silico} measurements and explorations of: \begin{itemize}[label=$\cdot$,leftmargin=*] \setlength\itemsep{-0.5em} \item RBC shear and bending modulus, \item membrane loss through vesiculation in spherocytosis and elliptocytosis~\cite{li2015vesiculation}, \item anomalous diffusion of membrane proteins~\cite{li2016modeling}, \item interaction between sickle hemoglobin fibers and RBC membrane in sickle cell disease~\cite{lei2012predicting,li2016patient}, \item uncoupling between the lipid bilayer and cytoskeleton~\cite{peng2013lipid}, \item adenosine triphosphate (ATP) release due to deformation~\cite{sprague1998deformation}, \item nitric oxide (NO) modulated mechanical property change~\cite{wood2013circulating}, \item cellular uptake of elastic nanoparticles~\cite{zhao2011interaction}.
\end{itemize} \begin{figure*} \begin{minipage}[c]{0.67\textwidth} \includegraphics[width=\textwidth]{overview-2.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.3\textwidth} \caption{ \textbf{(A)} A canonical hexagonal triangular mesh of a biconcave surface representing the cytoskeleton network \label{fig:overview} is used together with \textbf{(B)} the two-component CGMD RBC membrane model to reconstruct \textbf{(C)} a full-scale virtual RBC which allows for a wide range of computational experiments.\label{fig:cgmd-rbc-scheme} } \end{minipage} \end{figure*} \section*{Software Overview}\label{simulator-design} \OpenRBC is written in C++ using features from the C++11 standard. To maximize portability and allow easy integration into other software systems~\cite{tang2015multiscale}, the project is organized as a header-only library with no external dependencies. The software implements SIMD vectorization~\cite{abraham2015gromacs} and OpenMP shared memory parallelization, and was specifically optimized toward making efficient use of large numbers of simultaneous hardware threads. As shown in Figure~\ref{fig:design}A, the main body of the simulator is a time-stepping loop, where the force and torque acting on each particle are solved for and used to iteratively update its position and orientation according to the equations of motion. The time distribution of each task in a typical simulation is given in Figure~\ref{fig:design}B. The majority of time is spent in force evaluation, which is compute-bound. This makes the code highly efficient in utilizing the high thread count of modern CPUs with the shared memory programming paradigm. \begin{figure}[t!]
\centering \includegraphics[width=\columnwidth]{flowchart.pdf} \caption{The A) flow chart and B) typical wall time distribution of \OpenRBC.\label{fig:design}} \end{figure} \section*{Initial structure generation} As shown in Figure~\ref{fig:cgmd-rbc-scheme}, a two-component CGMD RBC system can be generated from a triangular mesh which resembles the biconcave shape of an RBC at equilibrium. Note that the geometry may alternatively be sourced from experimental data using techniques such as optical image reconstruction, because the algorithm itself is general enough to adapt to an arbitrary triangular mesh. This feature can be useful for simulating RBCs with morphological anomalies. Actin and glycophorin protein particles are placed on the vertices of the mesh, while spectrin and immobile band-3 particles are generated along the edges. The band-3--spectrin connections and actin--spectrin connections can be modified to simulate RBCs with structural defects. Lipid and mobile band-3 particles are randomly placed on each triangular face by uniformly sampling the triangle defined by its three vertices~\cite{osada2002shape}. A minimum inter-particle distance is enforced to prevent overlap between protein and lipid particles. The system is then optimized using a velocity quenching algorithm to remove collisions between the particles. \section*{Spatial Searching Algorithm}\label{spatial-search-algorithm} \begin{figure} \centering \includegraphics[width=\columnwidth]{cell-list-2.pdf} \caption{\textbf{Left:} Only cells in dark gray are populated by CG particles in a cell list on a rectilinear lattice. This results in a waste of storage and memory bandwidth. \textbf{Right:} All cells are evenly populated by CG particles in a cell list based on the Voronoi diagram generated from centroids located on the RBC membrane.
\label{fig:cell-list-vacant}} \end{figure} Pairwise force evaluation accounts for more than 70\% of the computation time in \OpenRBC as well as in other molecular dynamics software~\cite{tang2014accelerating,Rossinelli2015GB15}. To efficiently simulate the reconstructed RBC model, we invented a lattice-free spatial partitioning algorithm that is inspired by the concept of the Voronoi diagram. At a high level, the algorithm can be described as \begin{lstlisting} 1. Group particles into a number of `adaptive' clusters 2. Compute interactions between neighboring clusters 3. Update cluster composition after particle movement 4. Repeat from step 2 \end{lstlisting} As illustrated in Figure~\ref{fig:cell-list-vacant}, the algorithm adaptively partitions a particle system into a number of Voronoi cells that are approximately equally populated. In contrast, a lattice-based cell list leaves many cells vacant due to the density heterogeneity. Thus, the algorithm can provide very good performance in partitioning the system, maintaining data locality and searching for pairwise neighbors in a sparse 3D space. It is implemented in our software using a \textit{k}-means clustering algorithm, which is, in turn, enabled by a highly optimized implementation of the \textit{k}-d tree searching algorithm, as explained below. A Voronoi tessellation~\cite{edelsbrunner2014voronoi} is a partitioning of an $n$-dimensional space into regions based on distance to a set of points called the centroids. Each point in the space is attributed to the closest centroid (usually in the $L_2$-norm sense). An example of a Voronoi diagram generated by 12 centroids on a 2D rectangle is given in Figure~\ref{fig:voronoi-kmeans}A. The \textit{k}-means clustering~\cite{hartigan1979algorithm} is a method of data partitioning that aims to divide a given set of $n$ vectors into $k$ clusters in which each vector belongs to the cluster whose center is closest to it.
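As an aside, the clustering step can be sketched in a few lines of plain Python (Lloyd's iterative heuristic; this is an illustrative serial sketch with function names of our own choosing, not the optimized C++ implementation in \OpenRBC):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's heuristic: alternate Voronoi assignment (E-step) and
    centroid update (M-step). Returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # E-step: assign each point to the Voronoi cell of its nearest centroid
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (p[d] - centroids[c][d]) ** 2 for d in range(len(p))))
        # M-step: move each centroid to the mean of the points it owns
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = tuple(sum(m[d] for m in members) / len(members)
                                     for d in range(len(members[0])))
    return centroids, labels
```

Each sweep of the E-step is a nearest-centroid query, which is exactly the operation accelerated by the \textit{k}-d tree explained below.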
The result is a partition of the vector space into a Voronoi tessellation generated by the cluster centers as shown in Figure~\ref{fig:voronoi-kmeans}B. Searching for the optimal clustering which minimizes the within-cluster sum of squared distances is NP-hard, but efficient iterative heuristics based on \textit{e.g.} the expectation-maximization algorithm \cite{dempster1977maximum} can be used to quickly find a local minimum. \begin{figure} \centering \includegraphics[width=\columnwidth]{kmeans+voronoi.pdf} \caption{(A) A Voronoi partitioning of a square as generated by centroids marked by the blue dots. (B) A \textit{k}-means (\textit{k}=3) clustering of a number of points on a 2D plane. (C) A vesicle of 32,673 CG particles. \label{fig:voronoi-kmeans}} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{patch-2D.pdf} \caption{Thanks to the spatial locality ensured by reordering particles along the Morton curve (dashed line), we can simply divide the cells between two threads by their index into two patches each containing five consecutive Voronoi cells. The force between cells from the same patch is computed only once using Newton's 3rd law, while the force between cells from different patches is computed twice, once on each side. \label{fig:patch-force}} \end{figure} A \textit{k}-d tree is a spatial partitioning data structure for organizing points in a \textit{k}-dimensional space~\cite{bentley1975multidimensional}. It is essentially a binary tree that recursively bisects the points owned by each node into two disjoint sets separated by an axis-parallel hyperplane. It can be used for the efficient searching of the nearest neighbors of a given query point in $O(\log N)$ time, where $N$ is the total number of points, by pruning out a large portion of the search space using cheap overlap checking between bounding boxes.
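The pruned nearest-neighbor search can be illustrated with a minimal sketch (again plain Python with our own naming, not the heavily optimized C++ version used in \OpenRBC):

```python
def build_kdtree(points, depth=0):
    """Recursively bisect the points with axis-parallel hyperplanes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Return (squared distance, point) of the nearest neighbor of `query`."""
    if node is None:
        return best
    d2 = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if best is None or d2 < best[0]:
        best = (d2, node["point"])
    delta = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if delta < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    # Prune: descend into the far half-space only if the splitting
    # hyperplane is closer than the best distance found so far.
    if delta * delta < best[0]:
        best = nearest(far, query, best)
    return best
```

The pruning test on the last recursion is what discards entire subtrees and yields the $O(\log N)$ average query cost quoted above.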
The \textit{k}-means/Voronoi partitioning of a point cloud adapts automatically to the local density and curvature of the points. As such, we exploit this property to create a generalization of the cell list algorithm using the Voronoi diagram. The algorithm can be described as a two-step procedure: 1) clustering all the particles in the system using \textit{k}-means, followed by an online expectation-maximization algorithm that continuously updates the centroid locations and particle ownership of the system's Voronoi cells; 2) sorting the centroids and particles with a two-level data reordering scheme, where we first order the Voronoi centroids along a space-filling curve (a Morton curve, specifically) and then reorder the particles according to the Voronoi cell that they belong to. The pseudocode for the algorithm can be found in SI Algorithm~\ref{SI-alg:voronoi-cell-list}. The reordering step in updating the Voronoi cells ensures that neighboring particles in physical space are also statistically close to each other in memory. This locality can speed up the \textit{k}-d tree nearest-neighbor search by using the closest centroid of the last particle as the initial guess for the current particle. The heuristic helps to further prune out most of the \textit{k}-d tree search space and essentially reduces the complexity of a nearest-neighbor query from \(O(\log N)\) to \(O(1)\). In practice, this yields a roughly 100-fold acceleration when searching through 200,000 centroids. As shown by Figure~\ref{fig:voronoi-kmeans}C, the Voronoi cells generated from a \textit{k}-means clustering of the CG particles are uniformly distributed on the surface of the lipid membrane. \section*{Force Evaluation}\label{force-evaluation} Lipid particles account for 80\% of the population in the whole-cell CGMD system.
The Voronoi cells can be used directly for efficient pairwise force computation between lipids with a quad loop that ranges over all Voronoi cells \(v_i\), all neighboring cells \(v_j\) of \(v_i\), all particles in \(v_i\), and all particles in \(v_j\) as shown in SI Algorithm~\ref{SI-alg:pairwise}. Since the cytoskeleton of a healthy RBC is always attached to the lipid bilayer, its protein particles are also distributed following the local curvature of the lipid particles. This means that we can reuse the Voronoi cells of the lipid particles, but with a wider searching cutoff, to compute both the lipid--protein and protein--protein pairwise interactions. For diseased RBCs with a fully or partially detached cytoskeleton, a separate set of Voronoi cells can be set up for the cytoskeleton proteins to compute the force. A list of bonds between proteins is maintained and used for computing the forces between proteins that are physically linked to each other. A commonly used technique in serial programs to speed up the force computation is to take advantage of Newton's 3rd law of action and reaction. Thus, the force between each pair of interacting particles is only computed once and added to both particles. However, this generates a race condition in a parallel context because two threads may end up simultaneously computing the force on a particle shared by two or more pairwise interactions. Our solution takes advantage of the strong spatial locality of the particles as maintained by the two-level reordering algorithm, and decomposes the workload both spatially and linearly in memory into patches by splitting the linear range of cell indices among OpenMP threads. Each thread then calculates the forces acting on the particles within its own patch.
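In serial form, the quad loop over Voronoi cells can be sketched as follows (an illustrative Python sketch with a data layout of our own choosing; here Newton's 3rd law is emulated by simply enumerating each pair once, whereas the parallel code restricts its use to intra-patch pairs):

```python
def pairs_within_cutoff(particles, cells, neighbors, rc):
    """Enumerate unique particle pairs closer than rc using a Voronoi cell list.

    particles: list of (x, y, z) coordinates
    cells:     cells[ci] = list of particle indices in Voronoi cell ci
    neighbors: neighbors[ci] = indices of cells adjacent to ci (including ci)
    """
    rc2 = rc * rc
    found = set()
    for ci, cell_i in enumerate(cells):        # all Voronoi cells v_i
        for cj in neighbors[ci]:               # all neighboring cells v_j
            for a in cell_i:                   # all particles in v_i
                for b in cells[cj]:            # all particles in v_j
                    if a < b:                  # count each pair once (Newton's 3rd law)
                        d2 = sum((particles[a][d] - particles[b][d]) ** 2
                                 for d in range(3))
                        if d2 < rc2:
                            found.add((a, b))
    return found
```

Because each cell only consults its geometric neighbors, the cost stays proportional to the number of particles rather than to the volume of the enclosing bounding box.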
As shown in Figure~\ref{fig:patch-force}, force accumulation without triggering race conditions can be realized by exploiting Newton's 3rd law only for pairwise interactions where both Voronoi cells belong to a thread's own patch. Interactions involving a pair of particles from different patches are calculated twice, once by each thread. The strong particle locality minimizes the shared contour length between two patches and hence also minimizes the amount of inter-patch interactions. \section*{Validation and Benchmark}\label{benchmark} In this section we present a validation of our software by comparing simulation and experimental data. We also compare the program performance against that of a legacy CGMD RBC simulator used in Ref.~\cite{li2014erythrocyte}. The legacy simulator, which performs reasonably well for a small number of particles in a periodic rectangular box, was written in C and parallelized with the message passing interface (MPI) using a rectilinear domain decomposition scheme and a distributed memory model. Two computer systems were used in the benchmark, each equipped with a different mainstream CPU microarchitecture, \textit{i.e.} the AMD Piledriver and the IBM Power8~\cite{starke2015cache}. The specifications of the machines are given in Table~\ref{table:computer-specs}. \begin{table*}[htbp!]
\centering \caption{A summary of capability and design highlights of \OpenRBC and the specifications of the computer systems used in the benchmarks.\label{table:computer-specs}} \begin{tabularx}{\textwidth}{llllXlllll} \toprule \multicolumn{4}{c}{\textbf{Capability \& Design}} && \multicolumn{5}{l}{\textbf{Performance - time steps / day}} \\ \midrule & \textbf{\OpenRBC} & \textbf{Legacy} & \textbf{Improvement} && \textbf{Cores} & \textbf{Particles} & \textbf{\OpenRBC} & \textbf{Legacy} & \textbf{Speedup} \\ \textbf{Max system size (\#particles)} & $> 8 \times 10^6$ & $2 \times 10^5$ & $>$ 40 times && 20 & $8.34 \times 10^6$ & $3.90\times 10^5$ & \textit{-} & - \\ \textbf{Lines of code} & 4,677 & 7,424 & 37\% less && 4 & $1.88 \times 10^5$ & $3.86\times 10^6$ & $0.42\times 10^6$ & 9.2 \\ \bottomrule \end{tabularx} \begin{tabularx}{\textwidth}{lllllllllll} \toprule \thead[tl]{CPU} & \thead{Architecture} & \thead[tl]{Instruction\\Set} & \thead[tl]{Freq.\\(GHz)} & \thead[tl]{Physical\\Cores} & \thead[tl]{Hardware\\Threads} & \thead[tl]{Total\\threads} & \thead[tl]{Last Level\\Cache (MB)} & \thead[tl]{GFLOPS\\(SP)} & \thead[tl]{Achieved FLOPS\\by \OpenRBC} & \thead[tl]{Memory\\Bandwidth} \\ \midrule IBM POWER 8 `Minsky' & Power8 & Power & 3.5 & 10 $\times$ 2 & 8 & 160 & 80 $\times$ 2 & 560.0 & 8.7\% & 230 GB/s \\ AMD Opteron 6378 & Piledriver & x86-64 & 2.4 & 16 $\times$ 4 & 1 & 64 & 16 $\times$ 4 & 614.4 & 4.5\% & 204 GB/s \\ \bottomrule \end{tabularx} \end{table*} To compare performance between \OpenRBC and the legacy simulator, the membrane vesiculation process of a miniaturized RBC-like sphere with a surface area of $2.8\ \mu m^2$ was simulated. The evolution of the dynamic process is visualized from the simulation trajectory and shown in Figure~\ref{fig:validation}A. \OpenRBC achieves almost an order of magnitude speedup over the legacy solver in this case on both computer systems, as shown in Table~\ref{table:computer-specs}.
\begin{figure} \centering \includegraphics[width=\columnwidth]{validation.pdf} \caption{(A) The vesiculation procedure of a miniature RBC. (B) The instantaneous fluctuation of a full-size RBC in \OpenRBC compared to that from experiments~\cite{evans2008fluctuations,park2008refractive,betz2009atp}. Microscopy image reprinted with permission from Ref.~\cite{park2008refractive}. \label{fig:validation}} \end{figure} Furthermore, \OpenRBC can efficiently simulate an entire RBC modeled by 3.2 million particles and correctly reproduce the fluctuation and stiffness of the membrane as shown in Figure~\ref{fig:validation}B. The legacy solver was not able to launch this simulation due to memory constraints. The simulation was carried out by following the experimental protocol of Ref.~\cite{park2008refractive}, which measures the instantaneous vertical fluctuation $\Delta h(x,y)$ along the upper rim of a fixed RBC. In addition, a harmonic volume constraint is applied to maintain the correct surface-to-volume ratio of the RBC. We measured a membrane root-mean-square displacement of 33.5 nm, while previous experimental observations and simulation results range from 23.6 nm to 58.8 nm~\cite{evans2008fluctuations,park2008refractive,betz2009atp,fedosov2011multiscale}. \begin{figure}[b!] \centering \includegraphics[width=\columnwidth]{whole-rbc-scaling-2.pdf} \caption{Scaling of \OpenRBC across physical cores and NUMA domains when simulating an RBC of 3.2 million particles.\label{fig:scaling}} \end{figure} Scaling benchmarks for the whole-cell simulation on the two computer systems are given in Figure~\ref{fig:scaling}. It can be seen that compute-bound tasks such as pairwise force evaluation scale linearly across physical cores.
Memory-bound tasks benefit less from hardware threading, as expected; however, thanks to thread pinning and a consistent workload decomposition between threads, there is no performance degradation due to side effects such as cache and bandwidth contention. It is also worth noting that Fu \textit{et al.} recently published an implementation of a related RBC model in LAMMPS~\cite{fu2017lennard}, which can simulate $1.15 \times 10^6$ particles for $10^5$ time steps on 864 CPU cores in 2761 seconds. However, the use of explicit solvent particles in their model makes it difficult to establish a fair performance comparison between their implementation and \OpenRBC. Nevertheless, as a rough estimate and assuming perfect scaling, their timing result can be translated into simulating $8.34 \times 10^6$ particles for $0.41 \times 10^5$ time steps per day on 864 cores. \OpenRBC can perform roughly the same number of time steps on 20 CPU cores. We do recognize that the explicit solvent model carries more computational workload, and that implementing non-rectilinear partitioning schemes may not be straightforward within the current software framework of LAMMPS. This comparison serves more as a demonstration of the usage of the shared-memory programming paradigm on \textit{fat} compute nodes with large numbers of powerful cores and large amounts of memory. \section*{Summary}\label{summary} We presented a from-scratch development of a coarse-grained molecular dynamics software package, \OpenRBC, which exhibits exceptional efficiency when simulating systems with large density discrepancies. This capability is supported by a key algorithmic innovation for computing an adaptive partitioning of the particles using a Voronoi diagram.
The program is parallelized with OpenMP and SIMD vector instructions, and implements threading affinity control, consistent loop partitioning, kernel fusion, and atomics-free pairwise force evaluation to increase the utilization of simultaneous hardware threads and to maximize memory performance across multiple NUMA domains. The software achieves an order of magnitude speedup in terms of time-to-solution over a legacy simulator, and can handle systems that are almost two orders of magnitude larger in particle count. The software enables, for the first time, simulations of an entire RBC with a resolution down to single proteins, and opens up the possibility of conducting many \textit{in silico} experiments concerning RBC cytomechanics and related blood disorders~\cite{li2016computational}. \titleformat{\section}{\normalfont\fontsize{11}{13}\bfseries}{\thesection}{1em}{} \scriptsize \section*{Author Contribution} \noindent YHT designed the algorithm. YHT and LL implemented the software. YHT and CE carried out the performance benchmarks. HL developed the CGMD model. HL and YHT performed validation and verification. YHT, LL and HL wrote the manuscript. LG, CE, and VS provided algorithm consultation and technical support. GK supervised the work. \section*{Acknowledgment} \noindent This work was supported by National Institutes of Health (NIH) grants U01HL114476 and U01HL116323 and partially by the Department of Energy (DOE) Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4). YHT acknowledges partial financial support from an IBM Ph.D. Scholarship Award. Part of the simulations were carried out at the Oak Ridge Leadership Computing Facility through the Innovative and Novel Computational Impact on Theory and Experiment program at Oak Ridge National Laboratory under project BIP118.
\section{Introduction} The local properties of space-time in general relativity are described by the metric tensor $g_{\mu\nu}$. In cosmology, the metric tensor is expected to be close to the Friedmann-Lema\^itre-Robertson-Walker (FLRW) form that describes a perfectly homogeneous and isotropic universe, with small fluctuations. A key goal for observational cosmology is to constrain the properties of these fluctuations. Small fluctuations can be decomposed into scalar, vector and tensor degrees of freedom and, at the level of linear perturbation theory, these degrees of freedom do not mix. Scalar metric perturbations are linked to density perturbations and gradient velocity fields and describe gravitational clustering. Vector perturbations are related to the vorticity of the velocity field and frame dragging, while tensor perturbations describe gravitational waves and tensor anisotropies of spacetime. The formation of structure in the Universe thus predominantly creates scalar metric perturbations, which is why observational cosmology has focused primarily on these. Additionally, the dominant paradigm for generating the initial perturbations in the Universe, inflation, is generically based on a scalar field and creates only scalar and tensor perturbations. Moreover, both vector and tensor perturbations redshift away if they are not sourced continuously by anisotropic stresses, which, in concordance cosmology, are very small in the late Universe. However, once structure formation becomes non-linear, the separation into scalars, vectors and tensors breaks down, so that at late times and on small scales vector perturbations are necessarily generated. Even though vorticity is conserved in a perfect fluid, dark matter is free-streaming, and gravitational clustering leads to shell crossing, which induces velocity dispersion and therefore vorticity~\cite{Aviles:2015osc,Piattella:2015nda,Cusin:2016zvu}.
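For reference, the scalar--vector--tensor split referred to above can be made explicit. In one common convention (a generic sketch; the symbols here are not the ones used in the rest of this paper), the metric perturbations are written as
\begin{align*}
\delta g_{00} &= -2a^2 A\,, \qquad \delta g_{0i} = a^2\left(\partial_i B + B_i\right),\\
\delta g_{ij} &= a^2\left(2C\,\delta_{ij} + 2\partial_i\partial_j E + \partial_i E_j + \partial_j E_i + h_{ij}\right),
\end{align*}
with $\partial_i B^i = \partial_i E^i = 0$ and $\partial_i h^{ij} = h^{i}_{\ i} = 0$, so that $A$, $B$, $C$ and $E$ are the scalar, $B_i$ and $E_i$ the vector, and $h_{ij}$ the tensor degrees of freedom.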
Observations of coherent angular velocity out to scales of up to 20 Mpc at redshift $z=1$ have recently been reported~\cite{Taylor:2016rsd}. It is not clear whether such large coherence scales can be reached for vector perturbations generated by shell crossing within standard $\Lambda$CDM. Other mechanisms such as topological defects~\cite{Durrer:2001cg,Daverio:2015nva,Lizarraga:2016onn}, magnetic fields~\cite{Durrer:1998ya}, inflation with vector fields~\cite{Ford:1989me,Golovnev:2008cf} or vector-field-based models of modified gravity~\cite{Jacobson:2000xp,Zlosnik:2006zu,Heisenberg:2014rta,Tasinato:2014eka} can generate vector perturbations throughout the history of the Universe and on a wide range of scales. If they persist until late times, or are even generated there, the presence of such vector perturbations could `pollute' the measurement of the scalar degrees of freedom and spoil precision cosmology with future large surveys if they are not properly taken into account. On the other hand, if they are measured and characterized, they can turn into a signal instead of a source of systematic uncertainty, and improve our understanding of the Universe. A large part of the effort of measuring vector-type deviations of the metric has been focused on the Cosmic Microwave Background (CMB). The approaches can be grouped into three categories: (i) introducing dynamical vector degrees of freedom in the early universe, but maintaining isotropy and homogeneity of the background. One can then either maintain statistical isotropy and homogeneity of the perturbations or allow for statistically anisotropic perturbations. In that case, there is a new contribution to scalar and tensor fluctuations, but symmetry does not allow for the generation of a strong vector signal \cite{Lim:2004js}. Nonetheless, an effect could in principle be measured through B-modes in the CMB polarisation \cite{Nakashima:2011fu}.
Alternatively (ii), one can deform the isotropy of the cosmological background and thereby constrain its anisotropy, while keeping the matter content standard and ensuring that this anisotropy decays with time \cite{Saadeh:2016bmp}. Finally (iii), one can posit a mechanism to introduce an anisotropy directly in the primordial power spectrum (through some interactions in the early universe, e.g.\ \cite{Ackerman:2007nb}). One then tries to look for `anomalies' in the CMB, as in, for example,~\cite{Ade:2015hxq}. One can also look for this primordial signal in galaxy surveys~\cite{2010JCAP...05..027P,Jeong:2012df,Shiraishi:2016wec,Sugiyama:2017ggb}. In this paper we will focus on the impact of the vector modes in the peculiar velocity field of galaxies, irrespective of their origin, and thus on the redshift-space distortions (RSD) observed in galaxy surveys. In Section~\ref{sec:vec_gen} we describe the vector contribution, and how we model it. Section \ref{sec:rsd} computes the impact of vector perturbations on redshift-space distortions, providing the general expression for vector RSD and showing that they also contribute to the monopole, quadrupole and hexadecapole of the galaxy correlation function, adding to the effect of the scalar fluctuations. We compute estimates of the detectability of these contributions with galaxy surveys in Section \ref{sec:fisher}, before presenting our conclusions. \section{Vector Contribution to Galaxy Velocities} \label{sec:vec_gen} \subsection{Effect of Vector Contribution\label{sec:basics}} We assume that our Universe is described by a perturbed FLRW metric, which we gauge-fix so that \begin{align} \mathrm{d}s^2 = a^2(\tau)&\Big[-(1+2\Psi)\mathrm{d}\tau^2 - \Sigma_i \mathrm{d}\tau\mathrm{d}x^i + \label{eq:metric}\\ &\,+(1-2\Phi)\delta_{ij}\mathrm{d}x^i\mathrm{d}x^j\Big] \,.
\notag \end{align} Here $\Phi$ and $\Psi$ are the standard Newtonian scalar potentials, and $\Sigma_i$ is a pure vector fluctuation, $\partial_i\Sigma^i=0$, related to frame dragging.% \footnote{We have fixed the gauge such that the $0i$ component of the metric has no scalar perturbations and so that the vector part of the $ij$ component vanishes. We also neglect gravitational waves (tensor perturbations).} % We define $\mathcal{H}\equiv a'/a$ to be the conformal Hubble parameter. The vector field $\Sigma_i$ is characterised by an amplitude and a direction. If the vector is purely time-dependent, it can be reabsorbed by a change of coordinates. We will assume that the vector field is described by a fluctuating amplitude drawn from a homogeneous and isotropic distribution, with the statistics of the direction described by a tensor $W_{ij}$, defined below. We are interested in the imprint of the vector field on the two-point correlation function of galaxies in redshift space. The general velocity field for galaxies located at position $\boldsymbol{r}$ at conformal time $\tau$, $v^i(\boldsymbol{r},\tau)$, can be decomposed into a scalar (potential) part, $v$, and a pure vector part, $\Omega^i$, with $\partial_i\Omega^i =0$, \begin{equation} v^{i}\equiv \partial^{i}v+\Omega^{i} \,.\label{eq:vel} \end{equation} The gauge-invariant relativistic vorticity~\cite{Rbook} is actually $a(\Omega_i-\Sigma_i)$. This quantity is obtained by lowering the index of $\Omega^i$ with the perturbed metric. The relativistic vorticity is often denoted $\Omega_i$ (e.g.\ in~\cite{Lu:2008ju,Rbook}) and it is an additional rotational velocity over and above the frame-dragging effect. Here, however, we will focus on $\Omega^i$, since it is the velocity with an upper index that is relevant for us. Note that this difference is only relevant on large scales. Well inside the horizon, the contribution from $\Sigma_i$ in concordance cosmology can be neglected whenever $\Omega_i$ does not vanish.
The galaxies are typically assumed to move on timelike geodesics of the metric, i.e.\ to obey Euler's equation. To first order in perturbation theory the Euler equation for perfect fluids implies \begin{equation} \dot{\Omega}_i - \dot{\Sigma}_i + \mathcal{H} (\Omega_i-\Sigma_i) = 0 \,. \label{eq:vecgeo} \end{equation} This means that geodesic motion will cause the vorticity to redshift away, with only the frame-dragging effect remaining if it is sourced through gravity. Indeed, the vector part of the first-order Einstein equations is given by \begin{align} \Delta \Sigma_i &= 16\pi G_\text{N}a^2 \delta T^{\,0}_{(V)i}\, , \\ \dot{\Sigma}_{(i,j)}&+2\mathcal{H} \Sigma_{(i,j)}=-8\pi G_\text{N}a^2 \delta T^{\,i}_{(V)j}\, , \end{align} where $\delta T^{\,\alpha}_{(V)\beta}$ is the vector part of the perturbation of the energy-momentum tensor, which depends in general on the velocity, the anisotropic stress and the metric perturbation itself. The relativistic vorticity is conserved for a perfect fluid also within General Relativity, at all orders~\cite{Lu:2008ju}. For a perfect fluid, therefore, if vector perturbations vanish initially, there is no vorticity generation and $\Omega_i-\Sigma_i=0$ at all times. In the perfect-fluid approximation of concordance cosmology, there are no vector degrees of freedom, or sources of anisotropic stress, which would have a significant effect on the gravitational field and therefore on peculiar velocities. In the real Universe, however, dark matter particles (or galaxies) are not truly a perfect fluid. They are free-streaming, i.e.\ they move on geodesics, and as soon as shell crossing occurs, velocity dispersion can no longer be neglected and vorticity is generated for the fluid of the averaged dark matter particles (or galaxies). In this paper, we are interested in understanding how galaxy correlations can be used independently of other probes to constrain the existence of any vector fluctuations at late times.
We will discuss in detail what the current expectations for vorticity within the $\Lambda$CDM paradigm are and to what extent it is possible to measure it. \subsection{Statistical Properties of Vector Fluctuations} In order to compute the two-point correlation function of galaxies (2PCF), we need a model for the two-point correlation of the vector velocity, $\corr{\Omega_i\Omega_j}$, and its cross-correlation with the dark matter overdensity, $\corr{\delta_\mathrm{m}\Omega_i}$. We will characterise their structure in Fourier space, with our Fourier transform convention defined by \[ f(\boldsymbol{k}) = \int \mathrm{d}^3 x\, f(\boldsymbol{x})\, e^{-i \boldsymbol{k} \cdot \boldsymbol{x}} \, . \] We assume that the power spectrum of the amplitude of the fluctuations obeys statistical isotropy and homogeneity, i.e.\ that it depends only on the magnitude of the wave number $k\equiv |\mathbf{k}|$. \begin{enumerate} \item The auto-correlation of the vector field takes the form \begin{align} \big\langle\Omega_{i}(\boldsymbol{k})\Omega_{j}(\boldsymbol{k}')\big\rangle = (2\pi)^{3}\delta^{(3)}(\boldsymbol{k}+\boldsymbol{k}')\times \nonumber\\ \left[W_{ij}P_{\Omega}(k) +i\alpha_{ij}P_A(k)\right] \,, \end{align} where $P_{\Omega}(k)$ and $P_A(k)$ contain information about the amplitude of the vector field, and $W_{ij}$ and $\alpha_{ij}$ are, respectively, symmetric and anti-symmetric tensors that encode the dependence on direction. Since $\Omega_i$ is a pure vector field, $W_{ij}$ and $\alpha_{ij}$ must satisfy $k^i W_{ij}=k^j W_{ij}=k^i\alpha_{ij}=k^j \alpha_{ij}=0$. The $P_A$-term is parity odd while the $P_\Omega$-term is parity even. If no parity violating processes occur in the Universe we may set $P_A=0$.
The most general form for $W_{ij}$ is \begin{align} \hspace{0.7cm}W_{ij}=~&\frac{\omega}{2}\left(\delta_{ij}-\hat{k}_{i}\hat{k}_{j}\right)+ \label{eq:wij}\\ &+\bar{\omega}_{ij}-\bar{\omega}_{il}\hat{k}^{l}\hat{k}_{j}-\bar{\omega}_{lj}\hat{k}^{l}\hat{k}_{i}+ \bar{\omega}_{lm}\hat{k}^{l}\hat{k}^{m}\hat{k}_{i}\hat{k}_{j} \,,\nonumber \end{align} with an \emph{arbitrary} constant symmetric tensor, which we have already decomposed into its trace $\omega$ and its trace-free part $\bar{\omega}_{ij}$, satisfying $\bar{\omega}^i_i=0$. As usual $\hat{\boldsymbol{k}}$ denotes the unit vector in the direction of the vector $\boldsymbol{k}$. The tensorial form for $\alpha_{ij}$ is completely fixed by anti-symmetry and transversality, \begin{equation}\label{eq:aij} \hspace{0.3cm}\alpha_{ij}= \alpha\epsilon_{ijm}\hat k^m \,. \end{equation} The first line of~\eqref{eq:wij} respects statistical homogeneity and isotropy, whereas the second line is non-zero only when isotropy is violated. In what follows, we absorb the trace $\omega$ into the normalisation of the power spectrum $P_\Omega$. The only possible parity odd term, given in \eqref{eq:aij}, is statistically isotropic. In general, since $\bar\omega_{ij}$ is symmetric, it can be decomposed into a sum of three tensor products of its orthonormal eigenvectors $\bar\omega_i^{I}$, \[ \bar\omega_{ij}=\sum_{I=1}^{3}\lambda_{I}\bar\omega_{i}^{I}\bar\omega_{j}^{I}\, , \] with the sum of eigenvalues $\sum_I\lambda_{I}=0$. \item The cross-correlation with dark matter can be non-zero only if statistical isotropy is violated. Assuming that the vector field is fluctuating in some fixed direction $\hat{\boldsymbol{\omega}}$, the cross-correlation takes the form \begin{align} \hspace{0.5cm}\corr{\delta_\mathrm{m}(\boldsymbol{k})\Omega_{i}(\boldsymbol{k}')} &= (2\pi)^{3} W_i P_{\delta\Omega}(k)\delta^{(3)}(\boldsymbol{k} +\boldsymbol{k}') \,, \\ \mbox{with} \quad W_i &\equiv \hat\omega_i - \hat{k}_i\hat{k}_j\hat\omega^j \,.
\notag \end{align} This form follows from the fact that $\Omega_i$ is a pure vector field, i.e.\ divergence-free. A non-vanishing $\corr{\delta_\mathrm{m}\Omega_{i}}$ always defines a preferred spatial direction $\hat\omega_i$ and therefore violates statistical isotropy. \end{enumerate} \begin{figure*}[t] \includegraphics[width=\columnwidth]{figs/Pw}\hspace{0.5cm}\includegraphics[width=\columnwidth]{figs/Pom} \caption{\label{f:spectrum}{\it Left panel}: The power spectrum $P_w(k,z=0)$ in units of $({\rm Mpc}/h)^3$ normalised by $(\mathcal{H}_0f_0)^2$, as in Fig.\ 4 of~\cite{Pueblas:2008uv} (see text for more detail). The black solid line corresponds to the shape defined in Eq.~\eqref{Pom1343}, the green dotted line to Eq.~\eqref{Pom243}, the blue dot-dashed line to Eq.~\eqref{Pom1335} and the red dashed line to Eq.~\eqref{Pom235}. {\it Right panel}: The corresponding power spectrum $P_\Omega(k,z=0)$ in units of $({\rm Mpc}/h)^3$, with the same color coding as in the left panel.} \end{figure*} \subsection{Simple models of vector perturbations} \label{sec:models} Since sources of vector perturbations are not the main topic of this paper, we will use simple parameterisations of two basic scenarios: vector perturbations generated from non-linear structure formation (vorticity), and vector perturbations generated by topological defects (frame dragging). In these models there are usually no parity violating terms, so we shall set $P_A=0$ for this study. \subsubsection{Vorticity} Ref.~\cite{Pueblas:2008uv} and, more recently,~\cite{Zhu:2017vtj} studied the generation of vorticity from non-linear structure formation using numerical simulations. Here we assume that the averaged distribution of galaxies follows the same trajectories as the dark matter distribution, so that the galaxy velocity field exhibits the same vorticity as the dark matter velocity field analysed in these numerical simulations.
We use the vorticity power spectrum plotted in Fig.\ 4 of~\cite{Pueblas:2008uv} to construct the following fit for $P_\Omega$, \[ P_\Omega (k,z=0)=A_V \frac{(k/k_*)^{n_\ell}}{\left[ 1+ (k/k_*) \right]^{n_\ell+n_s}}\quad ({\rm Mpc}/h)^3\, , \label{Pom1343} \] where the power at large scales is given by $n_\ell=1.3$, the power at small scales by $n_s=4.3$ and the transition scale by $k_*=0.7\,h/$Mpc. From Fig.\ 4 of~\cite{Pueblas:2008uv} we find that the predicted amplitude for $P_\Omega$ is $A_V=10^{-5}$. In Fig.~\ref{f:spectrum} we plot $P_\Omega(k)$ (right panel, black solid line). In the left panel, we show the quantity calculated in numerical simulations, $P_w(k)$, where $\boldsymbol{w}=\boldsymbol{\nabla}\times\boldsymbol{v}$, so that $P_w(k)$ is related to $P_\Omega$ by a factor $k^2$.% \footnote{Note that in Fig.~\ref{f:spectrum} and in Fig.\ 4 of~\cite{Pueblas:2008uv} $P_w$ is normalised by $(\mathcal{H}_0f_0)^2$ so that it has the same dimensions as $P_\Omega$. Note also that the convention for the power spectrum used in~\cite{Pueblas:2008uv} differs from our convention by a factor $1/(2\pi)^3$, so that $P_\Omega=(2\pi)^3 P_w(k)/k^2$.} % According to the numerical simulations of Ref.~\cite{Pueblas:2008uv}, the vorticity power spectrum evolves approximately as $\mathcal{H}(z)^2 f(z)^2 D_1(z)^7$ at large scales. At small scales, the evolution has an additional scale-dependence, leading to a suppression of power at small scales at late times, see Fig.\ 4 of~\cite{Pueblas:2008uv}. In the following we will ignore this small-scale dependence and assume that the power spectrum at redshift $z$ is given by~\footnote{Note that the constraints obtained in this way are conservative, because we underestimate the vorticity power spectrum at small scales at high redshift.} \[ P_\Omega (k)=P_\Omega (k,z=0)\left(\frac{\mathcal{H}(z)f(z)}{\mathcal{H}_0f(z=0)}\right)^2\left(\frac{D_1(z)}{D_1(z=0)}\right)^7\, .
\label{Pevol} \] In~\cite{Cusin:2016zvu}, the vorticity generation from large-scale structure was calculated by including velocity dispersion using a perturbative approach. At large scales, this analytical result gives a behaviour slightly different from the numerical one of~\cite{Pueblas:2008uv}, scaling with $n_\ell=2$ instead of $n_\ell=1.3$. This also follows from a simple causality argument: in standard structure formation, vorticity vanishes initially and only builds up over time by shell crossing. This is a causal process and we therefore expect the vorticity correlation function to be a function with compact support, which vanishes outside the horizon. Therefore, its Fourier transform, the power spectrum, is analytic. Due to the non-analytic pre-factor $(\delta_{ij} -\hat k_i\hat k_j)$ this requires that for small $k$, $P_\Omega (k) \propto k^n$, where $n$ is an even integer. If there is no additional conservation law which forbids this, we expect $n=2$. The value $n=1.3$ is therefore not possible. Of course a numerical determination of this power law on large scales is difficult, and it is not very surprising that N-body simulations find a somewhat different behaviour. In addition, the analytical calculation of~\cite{Cusin:2016zvu} finds that the vorticity grows as $D_1(z)$ instead of $\mathcal{H}(z)^2f(z)^2D_1(z)^7$. Finally, let us note that the numerical simulations of~\cite{Zhu:2017vtj} find a shape for the vorticity power spectrum (see Figure 13) which is in broad agreement with~\cite{Pueblas:2008uv}. There are, however, some differences: first, the slope of~\cite{Zhu:2017vtj} at small $k$ is slightly less steep than the one of~\cite{Pueblas:2008uv}. Second, the turnover happens at smaller $k$ in~\cite{Zhu:2017vtj}. And finally, the amplitude of the power spectrum is roughly 50-100 times larger in~\cite{Zhu:2017vtj} than in~\cite{Pueblas:2008uv}, i.e.\ $A_V\sim 10^{-3}$ instead of $A_V\sim 10^{-5}$.
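To make the fitting function concrete, here is a minimal numerical sketch of Eqs.~\eqref{Pom1343} and~\eqref{Pevol}. The function and variable names are our own illustrative choices, and the cosmology-dependent factors $\mathcal{H}$, $f$ and $D_1$ are assumed to be supplied as precomputed numbers rather than derived from a Boltzmann code:

```python
# Sketch of the vorticity power spectrum fit, Eq. (Pom1343), in (Mpc/h)^3.
# Fiducial parameters from the fit to Fig. 4 of Pueblas & Scoccimarro.
def P_Omega_z0(k, A_V=1e-5, k_star=0.7, n_l=1.3, n_s=4.3):
    u = k / k_star
    return A_V * u**n_l / (1.0 + u)**(n_l + n_s)

# Redshift scaling of Eq. (Pevol); H, f, D1 (and their z = 0 values H0,
# f0, D1_0) must come from a separate background-cosmology computation.
def P_Omega(k, H, f, D1, H0, f0, D1_0):
    return P_Omega_z0(k) * (H * f / (H0 * f0))**2 * (D1 / D1_0)**7

# Consistency check: P_w ~ k^2 P_Omega peaks where
# (2 + n_l)(1 + u) = (n_l + n_s) u, i.e. u = (2 + n_l)/(n_s - 2),
# giving k_peak ~ 1.0 h/Mpc for the fiducial parameters, as quoted above.
k_peak = 0.7 * (2 + 1.3) / (4.3 - 2)
```

At $k=k_*$ the fit evaluates to $A_V/2^{n_\ell+n_s}$, about $2\times10^{-7}\,({\rm Mpc}/h)^3$ for the fiducial parameters.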
In the following, we will study how the constraints on vorticity change when we vary the parameters in Eqs.~\eqref{Pom1343} and~\eqref{Pevol}, i.e.\ $n_\ell$, $n_s$, $k_*$ and the evolution with redshift. More precisely, we study three additional fits for $P_\Omega$ \begin{align} &P_\Omega (k,z=0)=A_V \frac{1.45(k/k_*)^{n_\ell}}{\left[ 1+ (k/k_*) \right]^{n_\ell+n_s}}\quad ({\rm Mpc}/h)^3\, ,\nonumber\\ &{\rm with}\quad n_\ell=2, n_s=4.3\,\, {\rm and}\,\, k_*=0.7\,h/{\rm Mpc}\, .\label{Pom243}\\ &P_\Omega (k,z=0)=A_V \frac{0.88(k/k_*)^{n_\ell}}{\left[1+ (k/k_*) \right]^{n_\ell+n_s}}\quad ({\rm Mpc}/h)^3\, ,\nonumber\\ &{\rm with}\quad n_\ell=1.3, n_s=3.5\,\, {\rm and}\,\, k_*=0.5\,h/{\rm Mpc}\, .\label{Pom1335}\\ &P_\Omega (k,z=0)=A_V \frac{1.76(k/k_*)^{n_\ell}}{\left[1+ (k/k_*) \right]^{n_\ell+n_s}}\quad ({\rm Mpc}/h)^3\, ,\nonumber\\ &{\rm with}\quad n_\ell=2, n_s=3.5\,\, {\rm and}\,\, k_*=0.4\,h/{\rm Mpc}\, .\label{Pom235} \end{align} Note that we have adjusted $k_*$ and the amplitude so that $P_w$ always peaks around $k=1\,h/$Mpc with an amplitude of $0.01(\mathcal{H}_0f_0)^2\, ({\rm Mpc}/h)^3$ when $A_V=10^{-5}$. The different curves are plotted in Fig.~\ref{f:spectrum}. In addition to these three fits, we also consider model~\eqref{Pom1343} where we vary $k_*$ to $k_*=0.6\,h$/Mpc (corresponding to a peak around 0.85\,$h/$Mpc) and $k_*=0.8\,h$/Mpc (corresponding to a peak around 1.15\,$h/$Mpc), keeping the amplitude $0.01(\mathcal{H}_0f_0)^2\, ({\rm Mpc}/h)^3$ (for $A_V=10^{-5}$) fixed. \subsubsection{Topological defects} For topological defects we use that $\Omega_i \rightarrow \Sigma_i$, as discussed in Section~\ref{sec:basics}, i.e.\ the galaxy velocities are not vortical but experience the frame-dragging effect $\Sigma_i \propto \delta T_{i(V)}^0 / k^2$.
Based on large numerical field-theory simulations of cosmic strings, Ref.~\cite{Daverio:2015nva} derived fitting functions for the various unequal-time correlators of the defect energy-momentum tensor. As discussed in the erratum of~\cite{Daverio:2015nva}, the power spectrum of $T_{0i}$ of a causal source is generically proportional to $k^2$ on large scales. Numerically they find that the power spectrum for $\delta T^0_i$ turns over slightly inside the horizon, $k_* \tau \approx 12$, and then decays as $k^{-1.14}$. As defects are scaling, it is best to express all quantities, except for a dimensionful prefactor, in terms of $k \tau \equiv x$, so that finally the dimensionless vector velocity power spectrum for cosmic strings is given by \[ k^3 P_\Omega (k,\tau) \approx 14 (G\mu)^2 \left( \frac{x/12}{1 +(x/12)^{3.14}} \right) \, . \label{POmdef} \] Here $G\mu$ is a dimensionless number that is linked to the symmetry-breaking scale (e.g.\ \cite{Urrestilla:2007sf}). For defects formed in a phase transition at the Grand Unified Theory (GUT) scale, it is of the order of $10^{-6}$. Observational constraints from the CMB limit it to be smaller than about $10^{-7}$ to $10^{-6}$, depending on the model \cite{Ade:2015xua,Lizarraga:2016onn}. Of course, in scenarios with topological defects non-linearities also eventually lead to shell crossing and to the additional vorticity generation discussed above. Here, however, we consider the new effect specific to topological defects, which generates frame dragging already at the level of linear perturbation theory, where we may set $\Omega_i-\Sigma_i=0$.
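As an illustration, Eq.~\eqref{POmdef} is straightforward to evaluate numerically; the following sketch is our own, with the string tension $G\mu$ left as a free parameter:

```python
# Sketch of the dimensionless cosmic-string vector spectrum, Eq. (POmdef),
# as a function of x = k * tau. It rises linearly in x on large scales,
# turns over around k tau ~ 12, and falls off as x^{-2.14} at large x.
def k3_P_Omega_defects(x, G_mu=1e-7):
    u = x / 12.0
    return 14.0 * G_mu**2 * u / (1.0 + u**3.14)
```

At $x=12$ the bracket in Eq.~\eqref{POmdef} equals $1/2$, so the spectrum takes the value $7(G\mu)^2$ there.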
\section{The Kaiser formula in the presence of vectors\label{sec:rsd}} The observed galaxy number counts are those in redshift space, with the leading correction arising from the Kaiser term \cite{Kaiser:1987qv} \[ \Delta(\boldsymbol{r})=\delta_\mathrm{g}(\boldsymbol{r})-\frac{1}{\mathcal{H}}n^{i}\partial_{i}(n^{j}v_{j}(\boldsymbol{r}))\,, \] with the galaxy velocity $v_i$ and where we define the line-of-sight direction $\boldsymbol{n}$ as \[ \boldsymbol{n}\equiv\frac{\boldsymbol{r}}{r} \] i.e.\ the unit vector in the direction of the galaxy lying at $\boldsymbol{r}$, with the observer located at $\boldsymbol{r}=0$. Splitting the velocity into the scalar and vector parts, as in Eq.~\eqref{eq:vel}, we have \begin{equation} \Delta(\boldsymbol{r})=\delta_\mathrm{g}(\boldsymbol{r})-\frac{1}{\mathcal{H}}n^{i}n^{j}\big(\partial_{i}\partial_{j}v(\boldsymbol{r})+\partial_{i}\Omega_{j}(\boldsymbol{r})\big) \, .\label{eq:deltaz} \end{equation} The effects of vector perturbations on the general relativistic number counts were studied in~\cite{Durrer:2016jzq}, where it was found that the dominant effect is on the RSD. Since the RSD cannot easily be extracted from the relativistic $C_\ell(z_1,z_2)$'s, we study here the impact of the vector field on the two-point correlation function of galaxies. In this study we neglect both the sub-dominant vector relativistic corrections from~\cite{Durrer:2016jzq} and the scalar relativistic corrections derived in~\cite{Yoo:2009au,Bonvin:2011bg,Challinor:2011bk}.% \footnote{Note that depending on the model responsible for the vector field, the scalar relativistic corrections to the correlation function may be of the same order of magnitude as the dominant vector contributions. If this is the case, the scalar relativistic corrections should be included in the modelling of the two-point correlation function.
As we will see in Section~\ref{sec:vorticity}, in the case of vorticity, the vector contribution dominates at small scales, where the scalar relativistic corrections are negligible. Note also that our forecasts do not depend on the importance of the relativistic corrections, since in any case the covariance matrix is strongly dominated by density and RSD.} % The two-point correlation function is given by \[ \xi(\boldsymbol{r}_1,\boldsymbol{r}_2,z_1,z_2)=\langle \Delta(\boldsymbol{r}_1,z_1)\Delta(\boldsymbol{r}_2,z_2) \rangle\, . \] Without redshift space distortion, the correlation function is isotropic and depends therefore only on the galaxy separation \[ x \equiv |\boldsymbol{r}_1 - \boldsymbol{r}_2|\, , \] and on the mean distance of the pair from the observer $\bar r$, or equivalently its mean redshift $\bar z$ \[ \bar r =\frac{1}{2}(r_1 + r_2), \quad \bar z=\frac{1}{2}(z_1 + z_2)\, . \] Redshift space distortions break the isotropy of the correlation function, which consequently depends also on the orientation of the pair with respect to the line-of-sight. In the flat-sky approximation, $\boldsymbol{n}_1=\boldsymbol{n}_2=\boldsymbol{n}$, neglecting evolution between $z_1$ and $z_2$, the scalar correlation function including redshift space distortion can be written as \begin{align} \xi^\text{scalar}(\bar z, x,\boldsymbol{n}\cdot\hat{\boldsymbol{x}}) =& \left(b^2+\frac{2b}{3}f+\frac{1}{5}f^2\right)C_{0}(x) \label{xiscalar}\\ &-\left(\frac{4b}{3}f+\frac{4}{7}f^{2}\right)C_{2}(x)\mathcal{P}_{2}(\boldsymbol{n}\cdot\hat{\boldsymbol{x}}) + \notag\\ & +\frac{8}{35}f^{2}C_{4}(x)\mathcal{P}_{4}(\boldsymbol{n}\cdot\hat{\boldsymbol{x}}) \,, \notag \end{align} where $\mathcal{P}_n$ is the Legendre polynomial of degree $n$, $f$ is the growth rate $f\equiv d\ln D_1/d\ln a$ and \begin{equation} C_{n}(x)=\frac{1}{2\pi^{2}}\int\mathrm{d}k\,k^{2}P_{\delta\delta}(\bar z, k)j_{n}(kx) \,. 
\end{equation} Here $j_n$ is the $n$\textsuperscript{th} spherical Bessel function and $P_{\delta\delta}(\bar z, k)$ is the matter power spectrum at the mean redshift $\bar z$. We have made the standard assumption that the galaxy bias $b$ is deterministic and, like the growth-rate $f$ in $\Lambda$CDM, it is scale independent. The new vector contribution comprises three terms: \begin{enumerate} \item Cross-correlation with the density:\newline We have \begin{align} \hspace{1cm}\xi_{\delta\Omega} = -i&\int\!\frac{\mathrm{d}^{3}k}{(2\pi)^{3}} \Bigg[\frac{b(z_1)}{\mathcal{H}(z_2)}(\boldsymbol{n}_1\cdot\boldsymbol{k})(\boldsymbol{n}_1\cdot\boldsymbol{w})\\ &-\frac{b(z_2)}{\mathcal{H}(z_1)}(\boldsymbol{n}_2\cdot\boldsymbol{k})(\boldsymbol{n}_2\cdot\boldsymbol{w})\Bigg]P_{\delta\Omega}(\bar z,k)e^{i\boldsymbol{k}\cdot\boldsymbol{x}}\, .\label{eq:Dipole?} \notag \end{align} Hence in the flat-sky limit, $\boldsymbol{n}_1=\boldsymbol{n}_2$, if we neglect evolution, the cross-correlation exactly vanishes, even if $P_{\delta\Omega}\neq0$ and isotropy is violated. We thus neglect this contribution here. Note, however, that the cross-correlation would provide a dipole contribution in the case where we correlate two different populations of galaxies. \item Cross-correlation with the scalar velocity:\newline We have \begin{align} \hspace{0.8cm}&\xi_{v\Omega}= i\int\!\frac{\mathrm{d}^{3}k}{(2\pi)^{3}} (\boldsymbol{n}_1\cdot\hat\boldsymbol{k})(\boldsymbol{n}_2\cdot\hat{\boldsymbol{k}}) P_{\delta\Omega}(k)e^{i\boldsymbol{k}\cdot\boldsymbol{x}} \\ &\times \left[\frac{f(z_1)}{\mathcal{H}(z_1)} (\boldsymbol{n}_1\!\cdot\!\boldsymbol{k})(\boldsymbol{n}_2\!\cdot\! \boldsymbol{w})-\frac{f(z_2)}{\mathcal{H}(z_2)}(\boldsymbol{n}_2\!\cdot\!\boldsymbol{k})(\boldsymbol{n}_1\!\cdot\! \boldsymbol{w}) \right]\,. \notag \end{align} Also this contribution vanishes in the flat-sky limit, if we neglect evolution, even in the presence of anisotropy. 
\item Auto-correlation:\newline We obtain \begin{align} \hspace{0.8cm}\xi_{\Omega\Omega} = &\frac{1}{\mathcal{H}(z_1)\mathcal{H}(z_2)}\int\!\frac{\mathrm{d}^{3}k}{(2\pi)^{3}} k^2(\boldsymbol{n}_1\cdot\hat\boldsymbol{k})(\boldsymbol{n}_2\cdot\hat\boldsymbol{k})\\ &\times n_1^{i}W_{ij}(\hat\boldsymbol{k})n_2^{j}P_{\Omega}(k)e^{i\boldsymbol{k}\cdot\boldsymbol{x}}\, . \nonumber \end{align} The auto-correlation has a complicated tensor structure. In the following we will restrict ourselves to the case of statistical isotropy, $W_{ij}=\delta_{ij}-\hat{k}_i\hat{k}_j$. In the flat-sky approximation, and neglecting evolution, we obtain \begin{align} \hspace{0.8cm}\xi_{\Omega\Omega} = &\frac{1}{\mathcal{H}^2}\int\!\frac{\mathrm{d}^{3}k}{(2\pi)^{3}} k^2(\boldsymbol{n}\cdot\hat\boldsymbol{k})^2\big(1+(\boldsymbol{n}\cdot\hat\boldsymbol{k})^2\big)\\ &\times P_{\Omega}(k)e^{i\boldsymbol{k}\cdot\boldsymbol{x}}\, . \nonumber \end{align} Rewriting the $\boldsymbol{n}\cdot\hat\boldsymbol{k}$ contributions in terms of Legendre polynomials and integrating over the direction of $\boldsymbol{k}$, we obtain for the isotropic contribution \begin{align} \hspace{0.8cm}\xi^\text{iso}&(\bar z, x,\boldsymbol{n}\cdot\hat{\boldsymbol{x}}) =\frac{2}{15}\mathcal{P}_{0}(\boldsymbol{n}\cdot\hat{\boldsymbol{x}})C_{0}^{\Omega}(x)\label{xivec}\\ &-\frac{2}{21}\mathcal{P}_{2}(\boldsymbol{n}\cdot\hat{\boldsymbol{x}})C_{2}^{\Omega}(x) -\frac{8}{35}\mathcal{P}_{4}(\boldsymbol{n}\cdot\hat{\boldsymbol{x}})C_{4}^{\Omega}(x)\,, \notag \end{align} with \begin{equation} C_{n}^{\Omega}(x)=\frac{1}{2\pi^{2}}\frac{1}{\mathcal{H}^2}\int\mathrm{d}k\,k^{4}P_{\Omega}(k)j_{n}(kx) \,.\label{COmega} \end{equation} The unit vector $\hat{\boldsymbol{x}}$ indicates the direction of the vector connecting the two galaxies. Notice the extra $k^2$ factor multiplying the power spectrum, which is absorbed in the scalar case when the velocity power spectrum is re-expressed in terms of the density power spectrum.
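The relative weight of the scalar and vector multipoles discussed below can be read off by comparing the numerical prefactors of Eqs.~\eqref{xiscalar} and~\eqref{xivec}. The following sketch (our own illustration, which sets aside the transfer functions $C_n$ and $C_n^\Omega$) makes the comparison explicit for $b=1$, $f=0.5$:

```python
# Multipole prefactors of the scalar Kaiser correlation function,
# Eq. (xiscalar), and of the isotropic vector contribution, Eq. (xivec).
def scalar_prefactors(b, f):
    return {0: b**2 + 2*b*f/3 + f**2/5,   # monopole
            2: -(4*b*f/3 + 4*f**2/7),     # quadrupole
            4: 8*f**2/35}                 # hexadecapole

VECTOR_PREFACTORS = {0: 2/15, 2: -2/21, 4: -8/35}

b, f = 1.0, 0.5
s = scalar_prefactors(b, f)
# |scalar/vector| is about 10 for the monopole and 8.5 for the quadrupole,
# while for the hexadecapole the ratio is exactly f^2: the two share the
# prefactor 8/35 up to the factor f^2 on the scalar side.
ratios = {ell: abs(s[ell] / VECTOR_PREFACTORS[ell]) for ell in (0, 2, 4)}
```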
\end{enumerate} The vector fluctuations contribute to the same multipoles as the scalar fluctuations, while the functional dependence remains limited to that on the galaxy separation $x$ and the orientation of the pair with respect to the line of sight. On the other hand, the relative contributions to the three multipoles differ, and therefore it should in principle be possible to simultaneously measure the amplitude of the vector velocity power spectrum together with the growth rate and the bias. Comparing Eqs.~\eqref{xiscalar} and~\eqref{xivec} we see that when $b\sim 1$ and $f\sim 0.5$ the monopole and quadrupole from vorticity are suppressed by about a factor of 10 with respect to the scalar monopole and quadrupole, whereas the hexadecapole from vorticity and from scalar perturbations have the same pre-factor. \section{Constraints on Vector Fluctuations\label{sec:fisher}} We now forecast the constraints on vector fluctuations as expected from future redshift surveys. We study the two cases presented in Section~\ref{sec:models}, namely vector perturbations generated from non-linear structure formation, and vector perturbations generated by topological defects. In both cases we assume that we know the shape of the power spectrum and we forecast the constraints on its amplitude: $A_V$ for the vorticity and $(G \mu)^2$ for topological defects. We consider a $\Lambda$CDM universe and we fix the cosmological parameters to the fiducial values of~\cite{2014MNRAS.441...24A}: $\Omega_m = 0.274, h = 0.7, \Omega_b h^2 = 0.0224, n_s = 0.95$ and $\sigma_8 = 0.8$. \subsection{Constraints on vorticity} \label{sec:vorticity} We first calculate the constraints on vorticity expected from a survey like the future Dark Energy Spectroscopic Instrument (DESI)~\cite{Aghamousa:2016zmz}. The DESI Bright Galaxy Survey will observe 10 million galaxies over 14,000 square degrees at redshift $z\leq 0.3$ with spectroscopic redshift accuracy.
We split the sample into three thin redshift bins, $0.05 < z < 0.1$, $0.1 < z < 0.2$ and $0.2 < z < 0.3$, that we assume to be uncorrelated. We assume a mean bias of $b = 1.17$ over the whole sample, similar to that of the main SDSS sample~\cite{Percival:2006gt}. In each redshift bin, we measure the amplitude of the monopole, quadrupole and hexadecapole in bins of separation $x_i$. The Fisher matrix for the amplitude associated with the multipoles $\ell=0,2,4$ is then given by \[ \mathcal{F}^\ell_{A_V}(\bar z)=\sum_{ij}\frac{\partial \xi_\ell}{\partial A_V}(\bar z, x_i)\big({\rm cov_\ell^{-1}} \big)(\bar z, x_i,x_j)\frac{\partial \xi_\ell}{\partial A_V}(\bar z, x_j)\, .\label{FAV} \] Here $\xi_\ell$ denotes the amplitude of the monopole, quadrupole and hexadecapole of the correlation function. Since the standard correlation function~\eqref{xiscalar} is independent of the vorticity amplitude $A_V$, and the vector part depends linearly on $A_V$ through the $C_{\ell}^{\Omega}(x_i)$ (see Eqs.~\eqref{COmega} and~\eqref{Pom1343}), the partial derivatives can easily be performed. The integrand in~\eqref{COmega} scales as $k^{4-n_s}$ at large $k$ and oscillates very rapidly. For $n_s=4.3$ it converges very slowly, whereas for $n_s=3.5$ it even diverges. This divergence is, however, not physical: we are using galaxies as our probes of the velocity field, and thus we are insensitive to any modes on scales smaller than the typical size of a galaxy. To account for this effect, we introduce a window function $W(k)=\exp\big[-\left(k/k_c \right)^2 \big]$ in~\eqref{COmega} which removes scales above $k_c$. We choose $k_c=10$\,Mpc$^{-1}$, corresponding to a length of $0.1$\,Mpc, i.e.\ the typical size of a galaxy. We have checked that increasing $k_c$ to $100$\,Mpc$^{-1}$ changes the constraints on $A_V$ by less than 2\%. The matrix ${\rm cov}_\ell$ denotes the covariance matrix of the multipole $\ell$.
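Since the vector part of each multipole is linear in $A_V$, the Fisher forecast of Eq.~\eqref{FAV} reduces to a quadratic form in the derivative vectors. A minimal sketch of this assembly follows (our own illustration, with a \emph{hypothetical} diagonal covariance standing in for the full ${\rm cov}_\ell$ of Appendix~\ref{sec:covariance}):

```python
# One-parameter Fisher forecast, Eq. (FAV): because xi_ell depends linearly
# on A_V, d xi_ell / d A_V is the vector multipole evaluated at A_V = 1.
# For illustration we take a diagonal covariance; the full cov_ell couples
# different separation bins and would require a matrix inversion.
def sigma_AV(derivs, variances):
    # derivs[ell][i]    = d xi_ell / d A_V in separation bin x_i
    # variances[ell][i] = diagonal element of cov_ell for that bin
    fisher = sum(d * d / v
                 for ell in derivs
                 for d, v in zip(derivs[ell], variances[ell]))
    return fisher ** -0.5
```

Multipoles and redshift bins combine by adding their Fisher contributions, which is how the cumulative constraints quoted below are obtained.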
The covariance contains contributions from Poisson noise and from cosmic variance. Since the scalar correlation function is expected to be much larger than the vector correlation function, we neglect the covariance due to the latter and calculate only the cosmic variance of the scalar part, which strongly dominates the error. We follow the method developed in~\cite{Bonvin:2015kuc,Hall:2016bmm}. The detailed expression for the covariance is given in Appendix~\ref{sec:covariance}. In Eq.~\eqref{FAV}, the sum runs over all pixel separations. We choose a pixel size of 2\,Mpc$/h$, and use pixel separations that are multiples of 2\,Mpc$/h$. Since the signal from vorticity quickly decreases with separation, the constraints strongly depend on the minimum separation that we include in our forecasts. Using Eq.~\eqref{Pom235} to model the shape of the power spectrum, we find a precision on $A_V$ of $\sigma_{A_V}=6\times 10^{-6}$, combining all three multipoles and using a minimum separation $x_{\rm min}=2$\,Mpc$/h$. This degrades to $\sigma_{A_V}=4.6\times 10^{-4}$ if we increase the minimum separation to $x_{\rm min}=10$\,Mpc$/h$. In both cases we use a maximum separation of 100\,Mpc$/h$. The constraints from individual multipoles are summarised in Table~\ref{tab:DESI}. In both cases the constraints come mainly from the hexadecapole, which is significantly more sensitive than the monopole and quadrupole to the presence of vorticity. At small separations, the vector part of the hexadecapole is indeed 20 to 50 times larger than the vector part of the monopole and 5 to 30 times larger than the vector part of the quadrupole. This is due to the spherical Bessel function $j_4$ in Eq.~\eqref{COmega}, which seems to have a better overlap with the form of the vorticity power spectrum than $j_0$ and $j_2$.
Since the covariance of the hexadecapole is of the same order as the covariance of the monopole and of the quadrupole, this results in stronger constraints on vorticity from the hexadecapole.% \footnote{One would naively expect that since the hexadecapole from scalars is significantly smaller than the monopole and quadrupole from scalars, the covariance of the hexadecapole would also be smaller than the covariance of the monopole and of the quadrupole, resulting in even stronger constraints from the hexadecapole. This is, however, not the case, since the covariance of the hexadecapole is affected by the modes contributing to the monopole and quadrupole. As a consequence, the covariances of the three multipoles are of the same order of magnitude.} % We also study the dependence of the constraints on the shape of the power spectrum, using models~\eqref{Pom1343}, \eqref{Pom243} and~\eqref{Pom1335}, instead of \eqref{Pom235}. We find that the constraints depend only mildly on the shape. Choosing $x_{\rm min}=2$\,Mpc$/h$, they change by 14\,\%: the best constraints are for models~\eqref{Pom243} to~\eqref{Pom235} and the worst for model~\eqref{Pom1343}. Using $x_{\rm min}=10$\,Mpc$/h$, the constraints change by 37\,\% when varying the shape: the best constraints are in this case for models~\eqref{Pom1335} and~\eqref{Pom235} and the worst for model~\eqref{Pom243}. We then change the position at which the power spectrum peaks, keeping the amplitude fixed. In model~\eqref{Pom1343} we vary $k_*=0.7\,h/$Mpc to $k_*=0.8\,h/$Mpc and $k_*=0.6\,h/$Mpc. We find that the constraints change by 29\,\% when $x_{\rm min}=2$\,Mpc$/h$ and by 38\,\% when $x_{\rm min}=10$\,Mpc$/h$. Our forecasts are therefore quite robust with respect to small changes in the shape of the power spectrum. Finally, we vary the evolution of the power spectrum with redshift, using the analytical evolution $D_1(z)$ instead of the numerical evolution of~\eqref{Pevol}, $\mathcal{H}^2(z)f^2(z)D_1^7(z)$.
We find that the constraints improve by 40\,\% with the analytical evolution. The numerical simulations of~\cite{Pueblas:2008uv} find an amplitude for the vorticity $A_V\sim 10^{-5}$, whereas the simulations of~\cite{Zhu:2017vtj} find an amplitude of the order of $A_V\sim 10^{-3}$. Our results show that in the first case, the presence of vorticity will be measurable only at small separations $x_i < 10$\,Mpc$/h$. On the other hand, if the amplitude is a few $10^{-4}$, vorticity will leave an observable impact on scales of 10\,Mpc$/h$ and slightly above. \begin{figure*}[th] \includegraphics[width=9.5cm]{figs/sigmaAV_zmax.pdf} \caption{\label{f:zmax} Constraints for model~\eqref{Pom235} on the amplitude $A_V$ from the combined monopole, quadrupole and hexadecapole in the SKA, using a minimum separation $x_{\rm min}=2$\,Mpc$/h$. The constraints are plotted as a function of the maximum redshift bin included in the forecast. The blue dots assume that the vorticity power spectrum evolves as $\mathcal{H}^2(z)f^2(z)D_1^7(z)$, as found in numerical simulations~\cite{Pueblas:2008uv}, whereas the red diamonds assume that the vorticity evolves as $D_1(z)$, as found in the analytical derivation of~\cite{Cusin:2016zvu}.} \end{figure*} \begin{figure*}[th] \includegraphics[width=8.5cm]{figs/monopole.pdf}\hspace{0.5cm}\includegraphics[width=8.5cm]{figs/quadrupole.pdf} \caption{\label{f:monoquad} The monopole (left panel) and quadrupole (right panel) from SKA at $\bar z=0.35$, multiplied by $x^2$ and plotted as a function of separation. The red solid line shows the non-linear scalar multipoles (using halo-fit), with error bars obtained from Eqs.~\eqref{cov0} to~\eqref{cov4}. The black dashed line shows the sum of the non-linear scalar multipoles and the vector contributions with $A_V=5\times 10^{-3}$.
Note that the vector contribution is negative for $x\leq 14$\,Mpc/$h$ and positive at larger separations.} \end{figure*} \begin{figure*}[th] \includegraphics[width=8.5cm]{figs/hexadecapole.pdf}\hspace{0.5cm}\includegraphics[width=8.5cm]{figs/hexadecapole_LNL.pdf} \caption{\label{f:hexa} The hexadecapole from SKA at $\bar z=0.35$, multiplied by $x^2$ and plotted as a function of separation. {\it Left panel}: the red solid line shows the non-linear scalar contribution to the hexadecapole (using halo-fit), with error bars obtained from Eqs.~\eqref{cov0} to~\eqref{cov4}. The black lines show the sum of the non-linear scalar hexadecapole and the vector contribution with $A_V=3\times 10^{-5}$ (dotted line), $A_V=10^{-4}$ (dashed line) and $A_V=10^{-3}$ (dot-dashed line). {\it Right panel}: the red solid line shows the non-linear scalar contribution to the hexadecapole (using halo-fit), the black dot-dashed line shows the sum of the non-linear scalar hexadecapole and the vector contribution with $A_V=10^{-3}$ and the blue dashed line shows the linear scalar hexadecapole. Note that the vector contribution to the hexadecapole is always negative.} \end{figure*} \vspace{1em} Let us also forecast the constraints obtained from a survey like the SKA. In its second phase of operation the SKA HI (21cm) galaxy survey will detect galaxies spectroscopically from redshift 0.1 to 2 over 30,000 square degrees. We split the redshift range into bins of width $\Delta z=0.1$ and we forecast the constraints on $A_V$ in each of the bins. We use the number density and bias specifications from~\cite{Bull:2015lja} (see Table 3). We then calculate the cumulative constraint from all bins, assuming that they are independent. Again we choose model~\eqref{Pom235}, since it follows the analytical shape of~\cite{Cusin:2016zvu} at large scales and gives similar constraints as model~\eqref{Pom1343} which fits well the shape found in~\cite{Pueblas:2008uv}. 
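To make the structure of these forecasts concrete, here is a minimal numerical sketch of a single-bin Fisher estimate of $\sigma_{A_V}$, summing the Fisher information of the three multipoles over pixel separations in the spirit of Eq.~\eqref{FAV}. The derivative templates and covariance matrices below are made-up placeholders, not the paper's actual $\xi_\ell$ or ${\rm cov}_\ell$.

```python
import numpy as np

# Toy single-bin Fisher forecast for the vorticity amplitude A_V:
#   F = sum_ij (dxi_l/dA_V)_i [cov_l^-1]_ij (dxi_l/dA_V)_j,
# summed over multipoles, then sigma_AV = 1/sqrt(F).
# The derivative template and covariance are hypothetical placeholders.

def fisher_one_multipole(dxi_dA, cov):
    """Fisher information of one multipole; dxi_dA: (n_sep,), cov: (n_sep, n_sep)."""
    return dxi_dA @ np.linalg.solve(cov, dxi_dA)

rng = np.random.default_rng(0)
x = np.arange(2.0, 102.0, 2.0)          # pixel separations in Mpc/h
fisher_total = 0.0
for ell in (0, 2, 4):                   # monopole, quadrupole, hexadecapole
    dxi_dA = np.exp(-x / 10.0) / (1 + ell)          # made-up decaying template
    A = rng.normal(size=(x.size, x.size))
    cov = A @ A.T / x.size + 1e-3 * np.eye(x.size)  # SPD toy covariance
    fisher_total += fisher_one_multipole(dxi_dA, cov)

sigma_AV = 1.0 / np.sqrt(fisher_total)
print(sigma_AV)
```

A real forecast would replace the template with $\partial\xi_\ell/\partial A_V$ computed from the modelled vector multipoles and use the covariance of Appendix~\ref{sec:covariance}.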
In Table~\ref{tab:SKA} we summarise the constraints on $A_V$ from the individual multipoles as well as the total. We choose different minimum separations: 2\,Mpc$/h$, 10\,Mpc$/h$ and 20\,Mpc$/h$. For all cases the maximum separation is 40\,Mpc$/h$. We have checked that including larger separations does not improve the constraints. \begin{table}[t] \centering \begin{tabular}{ | c | c | c | c | c | } \hline $x_{\rm min}$ $[{\rm Mpc}/h]$ & mono & quad & hexa & total \\ \hline 2 & $2.5\times 10^{-4}$ & $2.9\times 10^{-5}$ & $6.2\times 10^{-6}$ & $6\times 10^{-6}$ \\ \hline 10 & $6.4\times 10^{-3}$ & $1.3\times 10^{-2}$ & $4.6\times 10^{-4}$ & $4.6\times 10^{-4}$ \\ \hline \end{tabular} \caption{Constraints for the model given in Eq.~\eqref{Pom235} with growth $\propto \mathcal{H}(z)^2f(z)^2D_1(z)^7$ on the amplitude $A_V$ from the monopole, quadrupole, hexadecapole, and combined multipoles in DESI. We use separations $x_{\rm min}\leq x_i\leq 100$\,Mpc$/h$ with two values for $x_{\rm min}$.} \label{tab:DESI} \end{table} \begin{table}[t] \centering \begin{tabular}{ | c | c | c | c | c | } \hline $x_{\rm min}$ $[{\rm Mpc}/h]$ & mono & quad & hexa & total \\ \hline 2 & $3.7\times 10^{-5}$ & $4.2\times 10^{-6}$ & $8.7\times 10^{-7}$ & $8.5\times 10^{-7}$ \\ \hline 10 & $9.4\times 10^{-4}$ & $2\times 10^{-3}$ & $7.1\times 10^{-5}$ & $7.1\times 10^{-5}$ \\ \hline 20 & $7.2\times 10^{-2}$ & $4.6\times 10^{-2}$ & $1.6\times 10^{-3}$ & $1.6\times 10^{-3}$ \\ \hline \end{tabular} \caption{Constraints from model~\eqref{Pom235} with growth $\propto \mathcal{H}(z)^2f(z)^2D_1(z)^7$ on the amplitude $A_V$ from the monopole, quadrupole, hexadecapole, and combined multipoles in the SKA. We use separations $x_{\rm min}\leq x_i\leq 40$\,Mpc$/h$ with three values for $x_{\rm min}$. 
\label{tab:SKA}} \end{table} \begin{table}[t] \centering \begin{tabular}{ | c | c | c | c | c | } \hline $x_{\rm min}$ $[{\rm Mpc}/h]$ & mono & quad & hexa & total \\ \hline 2 & $1.2\times 10^{-5}$ & $1.6\times 10^{-6}$ & $3.5\times 10^{-7}$ & $3.4\times 10^{-7}$ \\ \hline 10 & $3.2\times 10^{-4}$ & $6.6\times 10^{-4}$ & $2.4\times 10^{-5}$ & $2.4\times 10^{-5}$ \\ \hline 20 & $2.2\times 10^{-2}$ & $1.4\times 10^{-2}$ & $5.2\times 10^{-4}$ & $5.2\times 10^{-4}$ \\ \hline \end{tabular} \caption{Constraints from model~\eqref{Pom235} combined with the analytical prediction for the growth $\propto D_1(z)$, on the amplitude $A_V$ from the monopole, quadrupole, hexadecapole, and combined multipoles in the SKA. We use separations $x_{\rm min}\leq x_i\leq 40$\,Mpc$/h$ with three values for $x_{\rm min}$. \label{tab:SKAanalytic}} \end{table} From Table~\ref{tab:SKA} we see that if the amplitude of the vorticity power spectrum is of the order of $10^{-5}$ as suggested by the numerical simulations of Ref.~\cite{Pueblas:2008uv}, it should leave an observational impact on both the quadrupole and the hexadecapole at small separations $x_i < 10$\,Mpc$/h$. An amplitude of $10^{-4}$ would leave an impact on scales of the order of 10\,Mpc$/h$ and an amplitude of $10^{-3}$, as suggested by the simulations of Ref.~\cite{Zhu:2017vtj}, on scales as large as 20\,Mpc$/h$. In Table~\ref{tab:SKAanalytic}, we repeat the constraints, but this time assuming that vorticity grows linearly with $D_1(z)$, as predicted by the analytical calculation of~\cite{Cusin:2016zvu}. In this case the constraints improve by a factor 2.5 to 3 with respect to those in Table~\ref{tab:SKA}. Tables~\ref{tab:SKA} and~\ref{tab:SKAanalytic} show the cumulative constraints obtained from combining all redshift bins. 
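Since the redshift bins are treated as independent, their Fisher information simply adds, so per-bin errors combine in inverse variance. A short sketch (with illustrative per-bin errors, not the actual SKA numbers):

```python
import numpy as np

# Independent bins combine through their Fisher information:
#   sigma_tot = ( sum_b 1/sigma_b^2 )^(-1/2).
# The per-bin sigmas below are illustrative, not the actual forecasts.

def combine_bins(sigmas):
    sigmas = np.asarray(sigmas, dtype=float)
    return 1.0 / np.sqrt(np.sum(1.0 / sigmas**2))

per_bin = [2.0e-6, 1.5e-6, 3.0e-6, 8.0e-6]
print(combine_bins(per_bin))   # dominated by the tightest bins
```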
Comparing the individual constraints from each bin, we find that using the redshift evolution from numerical simulations, $\propto \mathcal{H}^2(z)f^2(z)D_1^7(z)$, the strongest constraint comes from the bin $0.3\leq z\leq 0.4$, whereas using a linear evolution $\propto D_1(z)$ it comes from the bin $0.4\leq z\leq 0.5$. In Figure~\ref{f:zmax} we plot the cumulative constraints as a function of the maximum redshift bin $z_{\rm max}$, for both the numerical and analytical case. We see that in both cases, including bins above $\bar z\simeq1$ no longer improves the constraints. This can be understood by noting that the signal decreases with redshift. On the other hand, the volume increases with redshift, leading to a decrease of the cosmic variance. Finally, the number density decreases with redshift (see Table 3 of~\cite{Bull:2015lja}), leading to an increase of the Poisson noise. Since the constraints come mainly from small separations, the increase of the Poisson noise dominates the error budget at large redshift, degrading the constraints. In Figure~\ref{f:monoquad} we plot the monopole and quadrupole at $z=0.35$. We compare the scalar non-linear multipoles, calculated with halo-fit, with the contribution generated by vorticity. At small scales, vorticity leaves an imprint which is larger than the error bars. As the separation increases, the vorticity contribution quickly decays and the signal is completely dominated by the scalar contribution. In Figure~\ref{f:hexa} we show the hexadecapole at $z=0.35$. We compare the hexadecapole from the scalar non-linear contribution (using halo-fit) with the contribution generated by vorticity, for three different values of $A_V$. The impact of vorticity on the hexadecapole is significantly stronger than on the other multipoles. The hexadecapole is therefore ideal to detect the presence of vorticity. In the right panel, we also show the linear scalar contribution.
We see that for $A_V=10^{-3}$ (as predicted by~\cite{Zhu:2017vtj}), the contribution from vorticity is similar to the difference between the halo-fit and linear scalar contributions at small scales. At large scales however, the vorticity signal decreases faster than the non-linear scalar contribution. Note that in all these plots we have used halo-fit to calculate the non-linear scalar contributions to the multipoles.% \footnote{More precisely we use the linear continuity equation to relate the velocity to the density and we then calculate the density power spectrum with halo-fit.} % This does not provide a very accurate description of the scalar velocity in the non-linear regime. More reliable models have been developed to account for the Fingers of God and for the smoothing of the BAO scales, see e.g.~\cite{2013MNRAS.431.2834X}. In this paper we are however not interested in the scalar non-linear signal, but rather in the vector non-linear signal. It is therefore enough for us to use halo-fit to estimate the amplitude of the scalar signal. Note also that our forecasts depend on the form of the scalar contribution only through the covariance matrix, for which halo-fit is sufficiently precise. \subsection{Constraints on topological defects} We now turn to forecast the constraints on topological defects. We use the power spectrum~\eqref{POmdef} to calculate the contribution from the defects to the multipoles and we forecast the constraints expected on the amplitude $(G\mu)^2$. The Fisher matrix for $(G\mu)^2$ is given by \begin{align} &\mathcal{F}^\ell_{(G\mu)^2}(\bar z)=\label{Fdefect}\\ &\sum_{ij}\frac{\partial \xi_\ell}{\partial (G\mu)^2}(\bar z, x_i)\big({\rm cov_\ell^{-1}} \big)(\bar z, x_i,x_j)\frac{\partial \xi_\ell}{\partial(G\mu)^2}(\bar z, x_j)\, .\nonumber \end{align} Contrary to the vorticity from non-linearities, the $C_n^\Omega$ in Eq.~\eqref{COmega} do not diverge for topological defects, since the power spectrum scales as $k^{-5.14}$ at large $k$. 
It is therefore not necessary to introduce a window function in this case. We find that the constraints are always much weaker than those obtained from the CMB. We use an optimal survey observing the whole sky between $z=0$ and $z=2$, with a bias evolution similar to that of the SKA. We assume a high number density, so that the Poisson noise can be completely neglected and only cosmic variance contributes to the covariance matrix (first term in Eqs.~\eqref{cov0} to~\eqref{cov4}). We use a pixel size of 2\,Mpc$/h$ and we include separations between 2 and 1000\,Mpc$/h$ in the Fisher matrix~\eqref{Fdefect}. We find that even in this very optimistic setting, the constraint on the amplitude is \[ \sigma_{(G\mu)^2}=1.3\times 10^{-7}\, , \] which is 6 to 7 orders of magnitude larger than the constraint from the CMB, $(G\mu)^2 < 4 \times 10^{-14}$ \cite{Lizarraga:2016onn,Ade:2015xua}. Redshift-space distortions are therefore not competitive for detecting topological defects. This is mainly due to the fact that topological defects leave a very clean imprint on very large scales and at early times, which are better probed by the CMB than by large-scale structure. In this analysis, we have only used linear perturbation theory predictions, based on the result of Eq.\ (\ref{eq:vecgeo}) that the vorticity redshifts away and the velocity field follows the frame being dragged by metric perturbations (measured in numerical simulations of cosmic strings). In general, we expect non-linear effects like wake formation behind a cosmic string passing through matter. To properly assess the impact of such non-linear effects we would need to combine numerical string simulations with $N$-body simulations, which goes well beyond the scope of this work. However, it appears unlikely that such effects could increase the vorticity in galaxy velocities by over six orders of magnitude.
\section{Conclusions and Implications} In this paper, we have shown that the vector contribution to the peculiar velocity of galaxies induces a particular set of corrections to the redshift-space galaxy two-point correlation function. These vector modes arise either from vorticity in the galaxy velocity field or from frame dragging, both affecting the galaxy correlation function. Even when this contribution is isotropic, it is different from the standard scalar one. Even in concordance cosmology, vorticity modes exist since they are produced during structure formation, through shell crossing in the cold dark matter. We have performed an initial study of the feasibility of detecting this signal and have shown that next generation surveys such as DESI and SKA will be sensitive enough to detect it. The theoretical uncertainty on the amplitude of the vorticity power spectrum is still significant, but even at the level of the most pessimistic estimate, a contribution from the vector signal is significant enough to at least become a new systematic for the next generation spectroscopic surveys and should be included in analyses of the correlation function. We have found that the hexadecapole is most affected by vorticity. At small separations, the vector part of the hexadecapole is indeed 20 to 50 times larger than the vector part of the monopole and 5 to 30 times larger than the vector part of the quadrupole. Unfortunately, modelling the hexadecapole is well known to be difficult since the effect of the non-linear scalar contribution is relatively larger than for the lower multipoles. Since, for CDM vorticity, the signal is strongest at the smallest separations $x<10$~Mpc$/h$, this is likely to continue being a challenge. Indeed, deep in the non-linear regime, once perturbation theory has broken down completely, one would naturally expect the signal to be equipartitioned between the fully non-linear scalar and vector modes.
A realistic extraction of the vector signal would require a good model for the relationship between the scalar velocity flows and the non-linear matter-density power spectrum, most likely from simulations, with any constraints further degraded by marginalization over nuisance parameters introduced in the models. Nonetheless, the signal is there in the standard model of cosmology, at a level which will affect amplitudes of the correlation function. An exciting possibility is that the signal is actually higher than expected as a result of a non-standard model of dark matter or new degrees of freedom being active in the late universe, or even our inability to properly capture small-scale physics in simulations. For example, in~\cite{Cusin:2016zvu} it was shown that dispersion in the dark matter distribution (i.e.\ if the DM is not completely cold) can create large amplitudes for the vorticity. It is thus in principle possible to use a measurement of the vector modes to put new constraints on the model of dark matter. Any extended model will come with its own prediction for the shape of the vector power spectrum. We have shown that upcoming surveys are broadly insensitive to changes to the precise shape and peak position of the vector power spectrum, provided that it is roughly located in the transition between the linear and non-linear regimes. However, the constraints on power spectra peaked at horizon scales, such as for topological defects, are much weaker, mostly as a result of the few galaxy pairs available at such separations and the much larger cosmic variance. Finally let us mention that vector modes allow one to include the effects of local anisotropy appearing at late times. We have neglected such physics in this work, focussing on isotropic vector fluctuations. 
The appearance of a preferred direction would have a very specific effect on the correlation function, and would be much better probed using new observables rather than the monopole, quadrupole and hexadecapole of the correlation function into which the full data are currently compressed. We leave the full analysis for future work. \begin{acknowledgments} It is a pleasure to thank Mark Hindmarsh for useful discussions. C.B., R.D. and M.K.~acknowledge funding by the Swiss National Science Foundation. I.S. is supported by the European Regional Development Fund and the Czech Ministry of Education, Youth and Sports (MŠMT) (Project CoGraDS — CZ.02.1.01/0.0/0.0/15\_003/0000437). \end{acknowledgments}
\section{Introduction} In 2005, two complementary and independent methods were discovered that allowed numerical relativists to completely solve the black-hole binary problem in full strong-field gravity~\cite{Pretorius:2005gq, Campanelli:2005dd, Baker:2005vv}. At the same time, there are currently major experimental and theoretical efforts underway to measure these gravitational wave signals. Therefore, one of the most important tasks of numerical relativity (NR) is to assist gravitational wave observatories in detecting gravitational waves and extracting the physical parameters of the sources. Given the demanding resources required to generate these black-hole binary simulations, it is necessary to develop various techniques in order to model arbitrary binary configurations based on numerical simulations in combination with post-Newtonian (PN) and perturbative (e.g. black-hole perturbation) calculations. In this paper, we compare the NR and PN waveforms for the challenging problem of a generic black-hole binary, i.e., a binary with unequal masses and unequal, non-aligned, and precessing spins. Comparisons of numerical simulations with post-Newtonian ones have several benefits aside from the theoretical verification of PN. From a practical point of view, one can directly propose a phenomenological description and thus make predictions in regions of the parameter space still not explored by numerical simulations. From the theoretical point of view, an important application is to have a calibration of the post-Newtonian error in the last stages of the binary merger. The paper is organized as follows. In Sec. II we present our method to derive the PN gravitational waveforms from generic black-hole binaries, and in Sec. III we compare the NR and PN waveforms. Finally, in Sec. IV we summarize this paper and discuss remaining problems. The detailed numerical method and PN calculation presented here are given in \cite{Campanelli:2008nk}.
\section{Gravitational waveforms in the PN approach} In order to calculate PN gravitational waveforms, we need to calculate the orbital motion of binaries in the post-Newtonian approach. Here we use the ADM-TT gauge, which is the closest to our quasi-isotropic numerical initial data coordinates. In this paper, we use the PN equations of motion (EOM) based on~\cite{Buonanno:2005xu,Damour:2007nc,Steinhoff:2007mb}. The Hamiltonian is given in~\cite{Buonanno:2005xu}, with the additional terms, i.e., the next-to-leading order gravitational spin-orbit and spin-spin couplings provided in~\cite{Damour:2007nc,Steinhoff:2007mb}, and the radiation-reaction force given in~\cite{Buonanno:2005xu}. The Hamiltonian which we used here is given by \begin{eqnarray} H &=& H_{\rm O,Newt} + H_{\rm O,1PN} + H_{\rm O,2PN} + H_{\rm O,3PN} \nonumber \\ && + H_{\rm SO,1.5PN} + H_{\rm SO,2.5PN} + H_{\rm SS,2PN} + H_{\rm S_1S_2,3PN} \,, \label{eq:H} \end{eqnarray} where the subscript O, SO and SS denote the pure orbital (non-spinning) part, spin-orbit coupling and spin-spin coupling, respectively, and Newt, 1PN, 1.5PN, etc., refer to the perturbative order in the post-Newtonian approach. From this Hamiltonian, the conservative part of the orbital and spin EOM is derived using the standard techniques of the Hamiltonian formulation. For the dissipative part, we use the non-spinning radiation reaction results up to 3.5PN (which contributes to the orbital EOM at 6PN order), as well as the leading spin-orbit and spin-spin coupling to the radiation reaction~\cite{Buonanno:2005xu}. The above PN evolution is used both to produce very low eccentricity orbital parameters at $r\approx11M$ from an initial orbital separation of $50M$, and to evolve the orbit from $r\approx11M$. We use these same parameters at $r\approx11M$ to generate the initial data for our numerical simulations. 
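As an illustration of how the orbital EOM follow from such a Hamiltonian, the toy sketch below integrates Hamilton's equations for the leading (Newtonian) term $H_{\rm O,Newt}$ only, in units $G=c=M=1$; all PN corrections, spin couplings and the radiation-reaction force used in the paper are omitted, and the numbers are illustrative.

```python
import numpy as np

# Toy sketch: orbital EOM from a Hamiltonian via Hamilton's equations,
#   dq/dt = dH/dp,  dp/dt = -dH/dq,
# keeping only H_O,Newt = p^2/(2 mu) - mu M / r.
MU, M = 0.25, 1.0        # mu roughly the reduced mass for q = 0.8

def rhs(state):
    q, p = state[:2], state[2:]
    r = np.linalg.norm(q)
    return np.concatenate([p / MU,               # dH/dp
                           -MU * M * q / r**3])  # -dH/dq

def rk4_step(f, y, h):
    k1 = f(y); k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

r0 = 11.0                                        # separation r = 11 M
state = np.array([r0, 0.0, 0.0, MU * np.sqrt(M / r0)])  # circular orbit
for _ in range(2000):
    state = rk4_step(rhs, state, 0.5)
print(np.linalg.norm(state[:2]))                 # stays near r0 (no dissipation)
```

Adding the dissipative radiation-reaction force to `rhs` would make the separation shrink, as in the inspirals evolved in the paper.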
The initial binary configuration at $r=50M$ had the mass ratio $q=m_1/m_2 = 0.8$, $\vec S_1/m_1^2 = (-0.2, -0.14,0.32)$, and $\vec S_2/m_2^2 =(-0.09, 0.48, 0.35)$. We then construct a hybrid waveform from the orbital motion by using the following procedure. First we use the 1PN accurate waveforms derived by Wagoner and Will~\cite{Wagoner:1976am} (WW waveforms) for a generic orbit. By using these waveforms, we can introduce effects due to the black-hole spins, including the precession of the orbital plane. On the other hand, Blanchet {\it et al}.~\cite{Blanchet:2008je} recently obtained the 3PN waveforms (B waveforms) for non-spinning circular orbits. We combine these two waveforms to produce a hybrid waveform. In order to combine the WW and B waveforms, we need to take into account differences in the definitions of polarization states and the angular coordinates. The WW waveforms use the standard definition of GW polarization states, which are the same as those derived from the Weyl scalar, but the B waveforms use an alternate definition. The angular coordinates in the B waveforms are derived from circular orbits in the equatorial (xy) plane. To directly compare the NR and PN waveforms, we must add a time dependent inclination to the B waveforms because in the generic case the orbital planes are inclined with respect to the xy plane. We note that since there is no gauge ambiguity for combining the two waveforms, the combination of the WW and B waveforms is unique. Also, it should be noted that we calculate the spin contribution to the waveforms through its effect on the orbital motion directly in the WW waveforms and indirectly in B waveforms through the inclination of the orbital plane. For the NR simulations we calculate the Weyl scalar $\psi_4$ and then convert the $(\ell,m)$ modes of $\psi_4$ into $(\ell,m)$ modes of $h = h_{+} - i h_{\times}$. 
\section{Comparison of the NR and PN waveforms} To compare PN and numerical waveforms, we need to determine the time translation $\delta t$ between the numerical time and the corresponding point on the PN trajectory, that is to say, the time it takes for the signal to reach the extraction sphere ($r=100M$ in our numerical simulation). We determine this by finding the time translation near $\delta t=100M$ that maximizes the agreement of the early time waveforms in the $(\ell=2,m=\pm2)$, $(\ell=2,m=\pm1)$, and $(\ell=3,m=\pm3)$ modes simultaneously. We find $\delta t \sim 112M$, in good agreement with the expectation for our observer at $r=100M$. Since our PN waveforms are given uniquely by a binary configuration, i.e., an actual location of the PN particle, we do not have any time shift or phase modification other than this retardation of the signal. Note that other methods, which are not based on the particle locations, have the freedom to choose a phase factor. In the left panel of Fig.~\ref{fig:G3.5}, we show the real part of the $(\ell=2, m=2)$ mode of the strain $h$ with this time translation. (The other modes are shown in~\cite{Campanelli:2008nk}.) We note the reasonable agreement of the numerical and PN waveforms for the first $700M$. \begin{figure}[ht] \center \includegraphics[width=2.9in]{G3.5_PN_NUM_2_2.eps} \includegraphics[width=2.9in]{G3.5_PN_NUM_AMP_2_1.eps} \caption{ {\bf Left:} The real part of the $(\ell=2, m=2)$ mode of $h$ from the numerical and 3.5PN simulations. Here 3.5PN predicts an early merger and has a higher frequency than the numerical waveform. {\bf Right:} The amplitude of the $(\ell=2, m=1)$ mode of $h$ from the numerical and 3.5PN simulations. The secular oscillation in the numerical amplitude occurs at roughly the precessional frequency. The timescale is of order $1000M$. Here the shorter-timescale oscillations correspond roughly to the orbital period.
} \label{fig:G3.5} \end{figure} From the analysis of the amplitudes of each mode, we see that the precession and eccentricity of the orbit impart signatures on the modes of the waveform at the orbital frequency. However, the long-time oscillations in the amplitudes, here apparent only in the ($\ell = 2,\,m = \pm 1$) modes, seem to be due purely to precession, and occur at the precessional frequency. In the right panel of Fig.~\ref{fig:G3.5}, we show the amplitudes of the $(\ell=2,m=1)$ mode of $h$. Next, in order to quantitatively compare the modes of the PN waveforms with the numerical waveforms we define the overlap, or matching criterion, for the real and imaginary parts of each mode as \begin{eqnarray} \label{eq:match} M_{\ell m}^\Re = \frac{<R^{\rm Num}_{\ell m}, R^{\rm PN}_{\ell m}>} {\sqrt{<R^{\rm Num}_{\ell m},R^{\rm Num}_{\ell m}> <R^{\rm PN}_{\ell m},R^{\rm PN}_{\ell m}>}} \,, \quad M_{\ell m}^\Im = \frac{<I^{\rm Num}_{\ell m}, I^{\rm PN}_{\ell m}>}{\sqrt{<I^{\rm Num}_{\ell m}, I^{\rm Num}_{\ell m}><I^{\rm PN}_{\ell m},I^{\rm PN}_{\ell m}>}} \,, \end{eqnarray} where $R_{\ell m}$ and $I_{\ell m}$ are defined by the real and imaginary parts of the waveform mode $h_{\ell m}$, respectively, and the inner product is calculated by $ <f,g> = \int_{t_1}^{t_2} f(t) g(t) dt $. Hence, $M_{\ell m}^\Re = M_{\ell m}^\Im = 1$ indicates that the given PN and numerical mode agree. The results of these matching studies are summarized in Table~\ref{tab:G3.5match}. 
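A discretised version of the matching criterion in Eq.~\eqref{eq:match} is straightforward to implement; the sketch below uses toy chirping signals rather than actual waveform modes, approximates the inner product $<f,g>=\int f g\, dt$ by a Riemann sum, and also illustrates the scan over time translations $\delta t$ described above.

```python
import numpy as np

# Normalised overlap M = <f,g> / sqrt(<f,f><g,g>) on a uniform time grid.
# The "waveforms" are toy chirps, not the actual numerical or PN modes.

def overlap(f, g, dt):
    inner = lambda a, b: float(np.dot(a, b)) * dt
    return inner(f, g) / np.sqrt(inner(f, f) * inner(g, g))

dt = 0.1
t = np.arange(0.0, 600.0, dt)
phase = lambda tt: 0.5 * tt + 5e-4 * tt**2
f = np.cos(phase(t))              # reference mode
g = np.cos(phase(t - 2.0))        # same mode delayed by 2.0 time units

# scan integer-sample time translations and keep the best match
best = max(range(50), key=lambda s: overlap(f[:len(f) - s], g[s:], dt))
print(best * dt)                  # recovers the 2.0 time-unit delay
```

As in the text, a false maximum one wavelength away would show up in a single mode; scanning several modes simultaneously removes that ambiguity.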
\begin{table}[ht] \caption{The match of the real and imaginary parts of the modes of $h$ of the G3.5 configuration for the 3.5 PN waveforms and the numerical waveforms with the time translation $\delta t = 112.5$.} \label{tab:G3.5match} \center \renewcommand{\arraystretch}{1.2} \begin{tabular}{l||c|c|c} Integration Time & 600 & 800 & 1000 \\ \hline $M_{22}^\Re$ & 0.986 & 0.964 & 0.895 \\ $M_{22}^\Im$ & 0.987 & 0.962 & 0.900 \\ $M_{2-2}^\Re$ & 0.986 & 0.964 & 0.895 \\ $M_{2-2}^\Im$ & 0.987 & 0.962 & 0.901 \\ $M_{21}^\Re$ & 0.904 & 0.912 & 0.843 \\ $M_{21}^\Im$ & 0.916 & 0.901 & 0.820 \\ $M_{2-1}^\Re$ & 0.920 & 0.908 & 0.833 \\ $M_{2-1}^\Im$ & 0.917 & 0.903 & 0.816 \\ $M_{33}^\Re$ & 0.938 & 0.891 & 0.738 \\ $M_{33}^\Im$ & 0.919 & 0.868 & 0.721 \\ $M_{3-3}^\Re$ & 0.931 & 0.880 & 0.733 \\ $M_{3-3}^\Im$ & 0.906 & 0.857 & 0.721 \\ \end{tabular} \end{table} We also determine an alternate time translation, one wavelength in the $(\ell=2,m=2)$ mode, that increases the matching of the $(\ell=2,m=2)$ mode over longer integration periods. On the other hand, this new time translation, $\delta t = 233$, causes the $(\ell=3)$ modes to be out of phase, leading to negative overlaps. Thus, by looking at the $(\ell=2)$ and $(\ell=3)$ modes simultaneously, we can reject this false match. \section{Conclusion and discussion} We analyzed the first long-term generic waveform produced by the merger of unequal-mass, unequal-spin, precessing black holes. We find good initial agreement of the waveforms for the first six cycles, with overlaps of over $98\%$ for the $(\ell=2, m=\pm2)$ modes, over $90\%$ for the $(\ell=2, m=\pm1)$ modes, and over $90\%$ for the $(\ell=3, m=\pm3)$ modes. This agreement degrades as we approach the more dynamical region of the late merger and plunge. There are some remaining problems. The PN gravitational waveforms used here do not include direct spin effects. We considered the spin contribution to the waveform only through its effect on the orbital motion.
Recently, the direct spin effects have been discussed in~\cite{Arun:2008kb}. Also, in the PN approach the waveforms are derived for binaries in which each body is treated as a point particle. The finite-size effects of the bodies are also important in the late-inspiral region. Furthermore, we will need higher-order post-Newtonian calculations of both spin-orbit and spin-spin terms, especially for the phase evolution of gravitational waves. There is one more important issue. In order to detect gravitational waves from binaries, it is necessary to study the data analysis (for example, the Numerical INJection Analysis (NINJA) project~\cite{ninja}). Here, we must treat a very large space of intrinsic black-hole-binary parameters, and effective GW templates covering the whole history of the binary, i.e., the inspiral, merger and ringdown, must be developed. \subsection*{Acknowledgments} We would like to thank H.~Tagoshi and R.~Fujita for useful discussions.
\chapter{Supplemental Material} \label{appendix:UQ-ARMED} \input{appendix_UQ-ARMED.tex} \section{Coefficient covariate estimation using $\frac{\partial y}{\partial \Bar{x}}$} As described in Sec.~\ref{sec:UQ-ARMED:methods:covariate coef}, there are two reasonable approaches to the covariate coefficient calculation for the non-linear models. \clearpage \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/sup_covariate_coefficient_xbar_1.PNG} \end{figure} \clearpage \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/sup_covariate_coefficient_xbar_2.PNG} \end{figure} \clearpage \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/sup_covariate_coefficient_xbar_3.PNG} \caption[Coefficient covariate estimation and calculated uncertainty for all models using $\frac{\partial y}{\partial \Bar{x}}$.]{ Coefficient covariate estimation and calculated uncertainty for all models using $\frac{\partial y}{\partial \Bar{x}}$. Each plot shows the mean and 95CI for each estimated covariate coefficient. The specific model is indicated at the top of each plot. Organized in the same order as the performance plots in Fig. \ref{fig:UQ-ARMED:performance}.
} \label{sup:fig:UQ-ARMED:covariate coef dy/dxbar} \end{figure} \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/SupPredictionConfidence1.PNG} \end{figure} \clearpage \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/SupPredictionConfidence2.PNG} \end{figure} \clearpage \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/SupPredictionConfidence3.PNG} \caption[Box plots comparing the prediction confidence of correctly and incorrectly predicted subjects per model, for all models]{ Box plots comparing the prediction confidence of correctly and incorrectly predicted subjects per model, for all models with high predictive performance. Confidence is calculated as described in Sec.~\ref{sec:UQ-ARMED:methods:confidence}. The x-axis groups the prediction confidence by correctly and incorrectly predicted samples. Columns show the distributions on the training, seen-site test, and unseen-site test data, left to right, for each model.} \label{sup:UQ-ARMED:confidence} \end{figure} \chapter*{Abstract} This work demonstrates the ability to produce readily interpretable statistical metrics for model fit, fixed effects covariate coefficients, and prediction confidence. Importantly, this work compares four suitable and commonly applied epistemic UQ approaches (BNN, SWAG, MC dropout, and ensembles) in their ability to calculate these statistical metrics for the ARMED MEDL models. In our experiment, not only do the UQ methods provide these benefits, but several UQ methods maintain the high performance of the original ARMED method, and some even provide a modest (but not statistically significant) performance improvement.
The ensemble models, especially the ensemble method with 90\% subsampling, performed well across all metrics we tested: (1) high performance comparable to the non-UQ ARMED model, (2) proper deweighting of the confound probes, assigning them statistically insignificant p-values, and (3) relatively high calibration of the output prediction confidence. The MC dropout models showed the lowest performance and failed to provide statistically insignificant fixed effects covariate coefficients for the probes. The SWAG models' performance was dependent on the learning rate. Specifically, the models with a low learning rate underestimated the fixed effects covariate coefficient uncertainty, yielding very small standard errors and hence highly significant p-values for all covariate coefficients, including the synthetic probes. Lastly, the BNNs also performed reasonably well, showing good model performance and nearly providing statistically insignificant p-values for the synthetic probe covariate coefficients (depending on the chosen cut-off), but they did not perform as well as the ensemble approaches for either covariate coefficient statistical significance or model prediction confidence. The largest potential downside of the ensemble approaches is the increased training time; however, as discussed in the results, a balance between wall-clock time and available computational resources can be achieved through parallelization. Additionally, in many instances a model's inference time is more important than its training time, and here the ensemble approaches are tied as the fastest of the UQ methods. Based on these results, the ensemble approaches, especially with 90\% subsampling, provided the best all-round performance for prediction and uncertainty estimation, and achieved our goals of providing statistical significance for model fit, statistically characterized covariate coefficients, and confidence in prediction, while maintaining the baseline performance of MEDL using ARMED.
\chapter{Introduction} Linear models and standard deep learning approaches assume the data are independent and identically distributed (\emph{iid}). In biomedical data, however, there is often a clustering of the input data: samples are frequently correlated, e.g. samples from the same subject, recorded by the same observer, or obtained from the same clinic. In the latter case, the clustering can be due to data acquired with slightly different protocols or medical devices (e.g. brand of magnetic resonance imaging device). This clustering causes the data to be non-\emph{iid}, as subjects from the same grouping are typically more correlated than data from other groupings. As illustrated by Simpson's paradox \cite{Wagner.1982}, these confounds can lead to Type 1 and Type 2 errors. That is, covariates can be found to have an association with the target where none exists (false positive) or vice versa (false negative). In the statistics community, such clustering is accounted for through the use of linear mixed effects (LME) models. The LME is defined as $y_s = \beta_0+\beta_1x_s+\mu_{0,j}+\mu_{1,j}x_s+\epsilon_s$, where $x_s$ is the covariate of the $s$-th subject, $y_s$ is the prediction target (dependent variable) for the $s$-th subject, $\beta_0$ and $\beta_1$ are the fixed effects intercept and slope which apply to the whole population, $\mu_{0,j}$ and $\mu_{1,j}$ are the random effects intercept and slope for the $j$-th cluster, respectively, and $\epsilon_s$ is the error. LME models explicitly separate and quantify the fixed effects, i.e. population effects, using $\beta$, from the random effects, i.e. cluster-specific effects, using $\mu$. Modeling the fixed and random effects separately helps mitigate confounds and decrease Type 1 and Type 2 errors.
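For intuition, the bias that random intercepts induce in a naive pooled fit can be demonstrated with a small simulation (a hypothetical sketch, not this chapter's experiment). Cluster intercepts are deliberately anti-correlated with the clusters' covariate means, so the pooled slope even flips sign, as in Simpson's paradox, while a cluster-aware (demeaned) fit recovers the true fixed effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_per = 5, 40
beta0, beta1 = 1.0, 2.0             # fixed effects: true population slope is 2
mu0 = -3.0 * np.arange(n_clusters)  # random intercepts, anti-correlated with cluster means

x, y = [], []
for j in range(n_clusters):
    xj = rng.normal(j, 0.5, n_per)  # cluster j has covariate mean j
    yj = beta0 + beta1 * xj + mu0[j] + rng.normal(0, 0.2, n_per)
    x.append(xj)
    y.append(yj)

# naive pooled OLS slope, ignoring the cluster structure
xa, ya = np.concatenate(x), np.concatenate(y)
pooled = np.polyfit(xa, ya, 1)[0]

# within-cluster slope: per-cluster demeaning removes the random intercepts
xd = np.concatenate([xj - xj.mean() for xj in x])
yd = np.concatenate([yj - yj.mean() for yj in y])
within = (xd @ yd) / (xd @ xd)

print(f"pooled slope: {pooled:.2f}")  # negative: Simpson's paradox
print(f"within slope: {within:.2f}")  # close to the true slope of 2
```

The demeaning trick recovers only the slope; a full LME additionally estimates the random effects themselves, which is what the mixed effects machinery discussed below provides.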
Additionally, LME models report results with statistically meaningful characterization, including the statistical significance of each covariate coefficient and of the overall multivariate model fit. Importantly, using standard cut-offs (e.g. alpha criterion $\alpha=0.01$), this statistical characterization allows for a principled separation of significant and insignificant covariates (e.g. for the general population, using the fixed effects covariate coefficient's significance). A primary limitation of LME models is that they are incapable of capturing non-linear relationships between the covariates and target. While nonlinear mixed effects (NLME) models have been proposed, they are not data driven and typically require prior specification of the nonlinear relations, often in the form of differential equations describing an underlying physical phenomenon. In medical applications, such prior knowledge is typically unavailable. Deep learning (DL) predictive models have achieved successes across all areas of the life sciences. They are universal approximators which can learn any non-linear association between predictors and target in a data driven way \cite{Hornik.1989}. Accordingly, to provide a data driven non-linear mixed effects solution, mixed effects deep learning (MEDL) models have been proposed to combine the strengths of DL with the confound mitigation of LMEs \cite{Nguyen.2022b, Simchoni.2021, Xiong.2019}. Notably, the Adversarially-Regularized MEDL (ARMED) framework \cite{Nguyen.2022b} was found to perform well relative to other MEDL approaches across many tasks. The ARMED framework employs a multi-module architecture to explicitly model both the fixed and random effects, quantify them separately, and combine them to provide a mixed effects prediction. Briefly, the ARMED framework consists of three elements.
First, it adds an adversarial network module that tests the conventional model's features for their ability to predict cluster membership. It acts as negative feedback, encouraging the conventional network to identify fixed effects that apply to the whole dataset. Second, it adds a random effects network to explicitly capture the random effects (RE), including the option to model the RE intercept and a linear or non-linear RE slope. Third, it adds a cluster membership predictor (Z-predictor), allowing the full mixed effects model to be used for prediction on data from clusters unseen during training. The ARMED framework thus attains, in a data driven way, a non-linear mixed effects association from covariates to targets. This is a critical advance over both the traditional statistical LME model and the statistical non-linear mixed effects models that require known partial or ordinary differential equations to govern the form of the non-linear mixed effects model. While the ARMED framework empowers deep learning for mixed effects modeling, yielding improved generalization, mitigating confounds, and increasing model interpretability, several substantial limitations curtail its use in practical situations, such as medical applications. First, ARMED does not provide a statistically meaningful measure of feature importance for each covariate. Second, it does not provide a statistically meaningful measure of the overall model significance. Finally, it provides only a point estimate of the ME prediction, without any associated prediction confidence. We note that these limitations are common to other proposed MEDL solutions \cite{Simchoni.2021, Xiong.2019}.
Characterizing covariates and the overall model in a statistically meaningful fashion is essential for neural networks to become widely trusted among life scientists and clinicians accustomed to characterizations such as p-values and 95\% confidence intervals (CIs). Additionally, the characterization of model prediction confidence is critical so that users know when to trust the model. The need to address these key MEDL limitations motivates this work. Uncertainty quantification (UQ) methods learn a distribution of models rather than a single model, which is achieved by learning a distribution or multiple sets of model weights. Formally, a UQ DL model estimates a probability density function for the weights given the data, $p(w|D)$, where $D = \{(x_1,y_1),(x_2,y_2),\ldots,(x_n,y_n)\}$ and $n$ is the number of samples in the data. Specifically, this captures epistemic (model) uncertainty. There are multiple methods to estimate model uncertainty for deep learning. In this work we evaluate the suitability of 4 of the most successful approaches. (1) \textit{Bayesian neural networks} (BNNs) directly estimate a distribution for each weight to approximate the posterior of the weights given the data, $p(w|D)$ \cite{Haubmann.2020}, unlike standard networks that use only a point estimate. (2) \textit{Stochastic weight averaging Gaussian} (SWAG) approximates Bayesian inference by fitting a Gaussian to the stochastic gradient descent (SGD) iterates. The distribution of the model's weights is defined as a multivariate Gaussian, which can then be sampled from to estimate the model's posterior \cite{Maddox.2019}. (3) \textit{Ensemble methods} train multiple conventional models, each learning a single point estimate for each weight in the neural network. Each model is trained with a different perturbation of the training dataset or weight initialization during model training.
The set of models is then used to approximate a posterior distribution at inference time \cite{Wilson.2020}. (4) Lastly, \textit{Monte Carlo (MC) dropout}, which has traditionally been used as a regularization method, can also be used at test time to mimic an ensemble method. For the Bayesian approximation, each network neuron is optionally selected for temporary exclusion from the model by sampling from a Bernoulli distribution. The exclusion of neurons occurs both during training and inference to provide a prediction uncertainty \cite{Srivastava.2014, Gal.2016, Gal.2016b}. For all of these UQ methods, the probability distribution of the weights can then be employed to efficiently quantify the model uncertainty. To estimate the overall significance of a model, the traditional approach is permutation testing \cite{Combrisson.2015}. However, permutation testing often requires retraining the complete model up to 10,000 times, depending on the desired alpha criterion, which is computationally intractable for many problems. UQ via the methods described above mitigates this issue by allowing the significance of the overall multivariate model fit to be estimated directly from the learned distribution of models. By combining UQ with the state-of-the-art mixed effects deep learning framework, ARMED \cite{Nguyen.2022b}, this work brings 3 new key capabilities to mixed effects deep learning. In particular, it allows the development of MEDL models that not only capture fixed and random effects to mitigate Type 1 and 2 errors but also: 1) produce a true probabilistic confidence in the model's ME prediction, 2) calculate the statistical significance of the overall model (did it truly learn an association between covariates and target, or was it just happenstance?), and 3) estimate the effect size of each FE covariate, its statistical significance, and its 95\% CI.
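The core idea shared by all of these methods can be illustrated with a toy sketch (hypothetical data and models, not the ARMED implementation): a bootstrap ensemble of simple models yields a distribution of predictions whose spread quantifies epistemic uncertainty, growing where the data provide little constraint:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 60)
y = np.sin(2 * x) + rng.normal(0, 0.1, 60)   # toy ground truth with noise

# "posterior" of models: 30 cubic fits, each on a bootstrap resample
coefs = []
for _ in range(30):
    idx = rng.integers(0, len(x), len(x))
    coefs.append(np.polyfit(x[idx], y[idx], 3))

def predict(xq):
    """Predictive mean and spread over the 30 sampled models."""
    preds = np.array([np.polyval(c, xq) for c in coefs])
    return preds.mean(), preds.std()

m_in, s_in = predict(0.0)    # inside the training range: models agree
m_out, s_out = predict(3.0)  # far outside the range: models disagree
print(f"x=0.0: mean={m_in:.2f}, spread={s_in:.3f}")
print(f"x=3.0: mean={m_out:.2f}, spread={s_out:.3f}")
```

In the ARMED setting the "models" are full networks and the samples are posterior weight draws, but the mechanics of deriving a predictive distribution, and from it confidence and significance, are the same.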
We evaluate 4 different UQ methods for their suitability for the ARMED framework to achieve these aims, and make a concrete recommendation of which UQ method best attains the above objectives while maintaining or improving the benefits of the ARMED framework. To demonstrate this, the UQ-ARMED models are trained and evaluated on classifying whether subjects with mild cognitive impairment (MCI) convert to Alzheimer's disease (AD) or remain stable within 2 years of their baseline visit. The Alzheimer's Disease Neuroimaging Initiative data is used due to its large number of longitudinally tracked subjects and useful covariates. The non-linearity of complex medical data, combined with the clustering of subjects within each site, makes this a suitable dataset to test mixed effects deep learning. \chapter{Methods} Several methods have been proposed for mixed effects deep learning (MEDL). Recently it has been shown that ARMED outperforms comparable methods including MeNet and LMMNN, as well as alternative strategies including meta-learning, domain adversarial models, and models that take the cluster membership as an additional covariate \cite{Nguyen.2022b}. Therefore, we have chosen ARMED as our base mixed effects deep learning framework, which we will enhance through the application of uncertainty quantification (UQ) methods. ARMED models the output as a point estimate, and as such does not intrinsically provide a principled uncertainty quantification in its predictions. While the output from a softmax activation, a commonly used classification activation function, can be interpreted as a confidence, it has been shown to be inaccurate and in particular overestimates the confidence in the category with the highest evidence \cite{Monarch.2021}. Additionally, ARMED does not provide covariate coefficient significance.
There are numerous methods for integrating model (a.k.a. epistemic) uncertainty, including: (1) Bayesian neural networks (BNNs) trained via Laplace approximation, (2) BNNs trained via Markov chain Monte Carlo, (3) BNNs trained via variational inference, (4) Monte Carlo (MC) dropout, (5) ensemble approaches, and (6) Stochastic Weight Averaging Gaussian (SWAG) \cite{Maddox.2019}. Laplace approximation requires calculating the Hessian, and for complex architectures, which often contain millions of parameters, this is in practice computationally expensive, non-trivial, and error prone. Markov chain Monte Carlo (MCMC) requires integrating over the model weights to estimate the prior and posterior distributions. This requires many model samples, many of which are discarded as burn-in samples. These factors often make Laplace approximation and MCMC intractable for large models \cite{Mena.2022, Abdullah.2022}. These methods were therefore excluded from this work. Instead, this work focuses on 4 UQ approaches: three practical and commonly employed approaches \cite{Abdullah.2022}, namely BNNs trained via variational inference (henceforth referred to simply as BNNs), MC dropout, and ensemble models, plus SWAG, which is a newer but efficient approach \cite{Maddox.2019}. Collectively, the approaches considered are representative of the wide variety of methods proposed for UQ, enabling a comprehensive evaluation of the suitability of UQ for MEDL, specifically ARMED.
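Of the four retained approaches, MC dropout is the simplest to sketch. The toy example below (a hypothetical two-layer network with untrained random weights, not the ARMED architecture) shows only the mechanics: dropout masks stay active at inference time, and repeated stochastic forward passes yield a predictive distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 2-layer network; weights would normally be pretrained, here random
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def mc_forward(x, p=0.3):
    """One stochastic pass: dropout remains ON at inference time."""
    h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
    mask = rng.random(h.shape) > p               # Bernoulli keep-mask
    h = h * mask / (1.0 - p)                     # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

x = rng.normal(size=(1, 4))
samples = np.array([mc_forward(x) for _ in range(30)])  # 30 MC samples
print(f"predictive mean {samples.mean():.3f} +/- {samples.std():.3f}")
```

The standard deviation over the 30 passes is the epistemic uncertainty estimate for this input; the other three methods differ only in how the 30 weight configurations are obtained.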
\section{Uncertainty quantification ARMED models} This work integrates UQ into the ARMED framework in order to achieve 3 objectives: (1) produce an accurate probabilistic confidence in the model's ME prediction, allowing the user to better determine when to trust the model's prediction, (2) calculate the statistical significance of the overall model, to determine whether the model learned a genuine covariate-to-target association, and (3) provide statistically meaningful characterizations of each covariate, including the effect size, its statistical significance, and the 95\% confidence interval (CI). For the overall model, hyperparameters are held consistent with the original ARMED network \cite{Nguyen.2022b}; however, each UQ method has a different hyperparameter to optimize (e.g. for the MC dropout model, the dropout rate). To ensure that each of the 4 UQ methods is not hindered by a specific selection of this hyperparameter, 3-5 reasonable values are tested for each model, as described in sections \ref{sec:UQ-ARMED:methods:BNN}, \ref{sec:UQ-ARMED:methods:SWAG}, \ref{sec:UQ-ARMED:methods:Ensemble}, and \ref{sec:UQ-ARMED:methods:MC dropout}, and reported individually in the results. To provide a distribution of weights, the weight posterior is sampled 30 times for each UQ method. For the ensemble methods, this requires 30 models to be trained; for the other UQ methods, which allow direct sampling from the posterior, a single model is trained. Note that in adding UQ to ARMED we do not expect performance to increase; rather, we aim to achieve the above 3 practical goals \textit{while maintaining the performance} of the original ARMED approach.
\subsection{Bayesian Neural Networks for uncertainty quantification in mixed effects deep learning} \label{sec:UQ-ARMED:methods:BNN} For conventional networks with multiple layers, making any single layer Bayesian causes the training to learn a distribution of model weights, which is sufficient to calculate the model uncertainty. However, since we do not know a priori which subset of the layers to make Bayesian, this is treated as a hyperparameter in our experiments. To keep the number of models reasonable, comparisons are made between models that have the first, last, or all layers Bayesian. Each layer selected to be Bayesian replaces each point-estimate weight (scalar) with two values, a mean and a variance, that define a Gaussian probability distribution for that weight. Other probability distributions are possible; however, Gaussian distributions are typically used unless there is prior knowledge suggesting that a different distribution would perform better for the dataset. As mentioned above, these models are trained via variational inference to maximize the Evidence Lower Bound (ELBO), $\mathrm{ELBO} = \mathbb{E}_{q(W)}[\log p(X|W)] - D_{\mathrm{KL}}(q(W)\,\|\,p(W))$, which consists of the expected log likelihood and the KL divergence between the surrogate posterior $q(W)$, which approximates the true posterior, and the prior distribution. As our weights are modeled as Gaussians, our prior distribution is also a Gaussian, with zero mean and unit variance. \subsection{SWAG for uncertainty quantification in MEDL} \label{sec:UQ-ARMED:methods:SWAG} Similar to BNNs, SWAG constructs a distribution of model weights from training a single model, and thus also produces a posterior distribution of model weights that can be sampled from. However, rather than estimating each weight as a distribution like BNNs, SWAG estimates the variance/covariance matrix of the model's weights.
To achieve this, after training until model convergence, SWAG samples the local minimum by continuing to train with SGD at a constant learning rate, estimating the variance/covariance of the weights within the local minimum. There are two primary SWAG variations: the first, the full SWAG approach, estimates the complete variance-covariance matrix, while the second estimates only the diagonal variance matrix. For clarity, we refer to the approach which estimates the full variance-covariance matrix as SWAG-full, and the latter approach as SWAG-diag. We note that the SWAG-full approach explicitly estimates off-diagonal covariance, which is not attained by other UQ methods such as BNNs, which motivated us to compare these UQ methods. For SWAG-full, the variance and covariance matrices are scaled by one half prior to summing, as suggested in the original manuscript. This essentially averages the estimated covariance matrix and the diagonal used in SWAG-diag. Also, as described in the SWAG manuscript, the model is trained until convergence prior to estimating the weight variance and covariance matrices. For a fair comparison to the other methods, the model is trained to convergence using the same optimizer. An additional 30 training iterations with a constant learning rate are then used to estimate the variance and covariance matrices. For each SWAG model, the estimated variance and covariance matrices depend on the learning rate, as described in the original manuscript \cite{Maddox.2019}. If the learning rate is too small, not enough variance will be captured while sampling for the variance/covariance matrix. If the learning rate is too large, the model may diverge from the local minimum it is sampling from. Therefore, we test 4 different learning rates for comparison, 1e-1, 1e-2, 1e-3, and 1e-4, for each of the SWAG UQ methods.
This set of learning rates was chosen to span multiple orders of magnitude commonly seen for learning rates, while simultaneously limiting the number of models to train. \subsection{Monte Carlo dropout for uncertainty quantification in mixed effects deep learning} \label{sec:UQ-ARMED:methods:MC dropout} MC dropout is a simpler approach to estimating model uncertainty than BNNs and SWAG, but has also been shown to be an effective UQ method \cite{Srivastava.2014, Gal.2016, Gal.2016b}. MC dropout randomly sets a layer's input activations to zero, and is often used as a regularization method while training. To estimate the model uncertainty, multiple inferences are taken for each input, each time with different activations set to zero. MC dropout can be seen as a Bayesian estimate of the model weights, with the posterior modeled as a binomial distribution. The most important hyperparameter for MC dropout is the dropout rate, which determines the fraction of input activations set to zero for each layer. If the fraction is too large, the model does not have enough information to use for prediction; if it is too small, the model may underestimate the uncertainty. For our experiments we tested dropout rates from 0.1 to 0.5 in increments of 0.1. This selection covers a large range of possible dropout rates while maintaining a reasonable number of models to train and compare. \subsection{Ensemble approaches for uncertainty quantification in mixed effects deep learning} \label{sec:UQ-ARMED:methods:Ensemble} Unlike the preceding UQ methods, which train only a single model to obtain a distribution of learned models, the ensemble UQ approach trains a set of deterministic models from different randomly chosen subsets of the data or from different random initializations.
After training, the distribution of conventional models is used to make a distribution of predictions (one per conventional model), from which the model uncertainty can be estimated. To ensure that the models are trained differently, some perturbation of the model training is required. To provide this perturbation, we used (1) different model initializations, and (2) sub-sampling of the training data, with 70, 80, or 90\% of the data available for each model to train on. This allows us to compare different ensemble methods (i.e. initializations and sub-sampling). The probability distribution of the model may depend on the amount of sub-sampling, but we did not sample less than 70\% of the data so as not to affect the final model's performance. To enable fair comparison with the other UQ methods, a total of 30 conventional models are trained for each variety of ensemble method. \subsection{Network architecture and initialization} For each experiment, all networks are trained consistently, e.g. with the same optimizer and learning rate. The model configurations (a.k.a. hyperparameters) are kept as consistent as possible, modified only as necessary for the specific UQ method within the ranges defined in the above sections: the BNNs have additional weights to model the variance of each weight, SWAG trains for an additional 30 epochs post-convergence with different learning rates, ensemble methods use different training perturbations, and MC dropout includes different dropout rates. Specific hyperparameters are included in supplemental table \ref{sup:UQ-ARMED:HP} and are identical to the original ARMED manuscript \cite{Nguyen.2022b}. Adversarial networks, including the adversarial network in ARMED, can be hard to train and sometimes diverge \cite{Nowozin.2016}.
Therefore, to provide the best comparison between the ARMED and UQ-ARMED models, we initialize the ARMED and all UQ-ARMED models with an equivalent set of weights for each fold, chosen because it was found to converge well for the ARMED model. Some modifications are required for compatibility with each UQ method. For the BNN, the initial mean of each weight distribution is set to the initial deterministic weight from the ARMED model. For the SWAG, MC dropout, and sub-sampling ensemble models, the initial weights are the same as for the ARMED model. However, for the random-initialization ensemble it is not possible to set all initial weights to be the same, so each model is independently initialized. \section{Data partitioning} \label{sec:UQ-ARMED:meth:kfold} To provide an estimate of generalizable performance for deep learning models, data splitting for training and evaluation is essential. For the MCI conversion experiment, the training data and seen-site test data consist of the largest 20 study sites from ADNI, and the remaining 34 sites are held out as the unseen-site test data. 10-fold cross validation was used to partition the 20 sites into training and seen-test sets. To allow a fair comparison against the original ARMED \cite{Nguyen.2022b}, the same data partitioning strategy is used. \section{Probe generation} \label{sec:UQ-ARMED:methods:probe gen} A fundamental goal of ARMED is to mitigate confounds. To test that this is being achieved, artificial covariates are added, as in the original ARMED manuscript. 5 probes are generated that are non-linearly associated with each site and the probability of conversion, but have no biological relevance. These probes are added to the covariates to evaluate the ability of each network to assign statistically insignificant covariate coefficients to the 5 probes.
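The exact probe construction follows the original ARMED manuscript; the sketch below is only a hypothetical illustration of the idea of site-linked, non-linear, biologically meaningless probes (the functional forms and site labels here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 719
site = rng.integers(0, 54, n)         # hypothetical site membership per subject
p_convert = rng.uniform(0.1, 0.6, n)  # hypothetical per-subject conversion probability

# 5 synthetic probes: nonlinear functions of site and conversion probability,
# plus noise -- confounded with the target but carrying no biological signal
probes = np.stack([
    np.sin((k + 1) * site / 54 * np.pi) + p_convert ** (k + 1)
    + rng.normal(0, 0.1, n)
    for k in range(5)
], axis=1)

print(probes.shape)  # (719, 5); appended to the real covariates
```

A well-behaved mixed effects model should assign these probe columns statistically insignificant fixed effects coefficients despite their correlation with site and target.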
\section{Performance metrics and methods for comparison between UQ methods} \label{sec:UQ-ARMED:methods:comparison metrics} To compare the ability of the UQ methods to achieve our goals, the following subsections describe the metrics used to assess: predictive power equivalent to ARMED and significance of model fit, the ability to assign statistically insignificant fixed effects covariate coefficients to the probes, and the calibration of the model prediction confidence. \subsection{Model Prediction Performance} Standard performance metrics are used to compare overall predictive power between the standard ARMED and the UQ methods. These performance metrics include: balanced accuracy (at the Youden point), AUROC, F1 score, and specificity and sensitivity at the Youden point. \subsection{Calculation of statistical significance for model fit} \label{sec:UQ-ARMED:meth:modelfit} At inference time, the model's weights are sampled 30 times to produce a distribution of model balanced accuracy. Because balanced accuracy is limited to the range 0--1, the logit transform $\log(\frac{p}{1-p})$ is used to map it onto $(-\infty, +\infty)$, which is preferable for statistical testing. To provide a statistical significance for model fit, this transformed distribution is then compared to the correspondingly scaled chance performance ($\log(\frac{0.5}{1-0.5})=0$) using a one-sided t-test where the null hypothesis is $H_0$: model performance $\leq$ chance performance. We use a one-sided test as we are specifically interested in determining whether the model's performance is larger than chance. Rejecting this null hypothesis means that the model has learned a mapping from the input covariates to the target which is statistically significant for a given alpha criterion (false positive rate), where we define $\alpha = 0.05$.
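This test can be sketched as follows (toy balanced-accuracy values; a normal approximation of the t reference distribution is used to stay within the standard library, which is adequate at 29 degrees of freedom):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
# 30 balanced accuracies, one per posterior weight sample (toy numbers)
bal_acc = rng.uniform(0.60, 0.75, 30)

z = np.log(bal_acc / (1.0 - bal_acc))  # logit maps (0, 1) onto (-inf, +inf)
# one-sided one-sample t-test against chance: logit(0.5) = 0
t = z.mean() / (z.std(ddof=1) / np.sqrt(len(z)))
p = 1.0 - NormalDist().cdf(t)          # normal approx. of the t distribution

print(f"t = {t:.2f}, one-sided p = {p:.1e}")
```

With accuracies well above 0.5, as here, the resulting p-value falls far below the $\alpha = 0.05$ criterion.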
While k-fold cross validation is a vital method in deep learning to minimize the generalization error, one wrinkle is that it yields multiple models, each with its own statistical metrics. To remedy this, pooled statistical values across the models are calculated using the Satterthwaite approximation, which requires no assumption about the equality of the variances across folds. The Satterthwaite approximation provides an estimate of the standard error and degrees of freedom such that $SE = \sqrt{\sum_i^k{\frac{s_i^2}{n_i}}}$, in contrast to the un-pooled standard error $SE=\sqrt{\frac{s^2}{n}}$, where $s_i^2$ is the variance and $n_i$ the number of samples in the $i$-th fold. The Satterthwaite estimate of the degrees of freedom is $df = \frac{(\sum_i^k\frac{s_i^2}{n_i})^2}{\sum_i^k(\frac{1}{n_i-1})(\frac{s_i^2}{n_i})^2}$. The pooled $SE$ and $df$ can then be used to calculate a pooled t statistic and corresponding p-value. \subsection{Calculation and comparison of statistical significance for covariate coefficients} \label{sec:UQ-ARMED:methods:covariate coef} The ARMED model is by definition a non-linear model; therefore, estimating the covariate coefficients is non-trivial. Gradient methods are a common approach to estimating covariate effects in deep learning models. ARMED employs a similar approach to estimate the fixed, mixed, and random effects covariate coefficient values, using $\frac{\partial y_F}{\partial x}$, $\frac{\partial y_M}{\partial x}$, and $\frac{\partial h_R}{\partial x}$, respectively. As each input matrix $X$ provides one estimate of each covariate coefficient per sample, averaging is used to summarize these estimates into a single covariate measure. There are two reasonable approaches we could use: (1) take the gradient for each input vector and then average, e.g.
$ave(\frac{\partial y}{\partial x})$, or (2) average the input vectors and then take the gradient, i.e. $\frac{\partial y}{\partial \Bar{x}}$. In the results we focus on (1), but also provide (2) in the supplemental material. The benefits and limitations of each method are further discussed in the discussion section. Sampling from the model's weight posterior allows us to calculate a distribution of the covariate coefficients. As with the model fit statistical significance, this allows us to compute a statistical significance for each covariate coefficient. Analogous to the linear mixed effects model, our null hypothesis is $H_0: \beta = 0$ for each covariate coefficient, and a two-sided t-test is used as we also want to account for covariate directionality. As in section \ref{sec:UQ-ARMED:meth:modelfit}, we pool these statistics across the k folds via the Satterthwaite approximation. \subsection{Model confidence} \label{sec:UQ-ARMED:methods:confidence} As the UQ methods provide a distribution of predictions, a confidence in each prediction can be calculated. To determine the validity of this confidence, we compare the confidence of the correctly and incorrectly predicted subjects. Well-calibrated prediction confidence will show lower confidence on the subjects that are incorrectly predicted; the larger the difference between the confidence of the correctly and incorrectly predicted subjects, the better calibrated the model confidence. The prediction confidence of the model is defined as the integral of the prediction probability with respect to the prediction, taken on either side of the decision boundary (e.g. 0.5 for a 2-class classifier). The maximum value of the integral between the decision boundaries is then used as the confidence. Sampling from the posterior provides a discrete distribution of predictions, so a sum takes the place of the integral, normalized by the total number of samples.
For example, if after producing 30 predictions for a binary classification problem, 3 predictions are for class 0 and 27 for class 1, then the confidence would be max(3/30, 27/30), or 90\% confidence for a prediction of class 1. To pool across folds for the seen sites, no pooling is actually needed: each UQ method provides a single confidence for each subject, so a simple concatenation provides a confidence score for each subject in the seen test data. To pool across folds for the unseen sites, each method provides k (where k is the number of folds) confidences per subject. To eliminate the possibility of averaging confidences across models that make different predictions, a joint distribution is calculated across the k folds and a final confidence is calculated from that joint probability distribution. \subsection{Train and inference time comparison} A final practical consideration for these methods is the amount of computational resources they use. Therefore, the training and inference times for these models are recorded and compared. Both training and inference time are measured across the different ARMED methods and summarized across the k folds. \section{Experimental Materials} \subsection{Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset} \label{sec:UQ-ARMED:materials:ADNI} The ADNI dataset is used to provide a real-world application on which to test and compare the UQ-ARMED methods. As described in section \ref{sec:UQ-ARMED:results:MCI}, the UQ-ARMED methods are compared on predicting whether a subject with mild cognitive impairment (MCI) will convert to Alzheimer's disease (AD) within 24 months. The data included in this study are MCI subjects with baseline demographic information, cognitive scores, tabular summary metrics from neuroimaging measurements, and biomarker measurements, with a follow-up diagnosis at 24 months. This provides a total of 42 features.
The follow-up diagnosis provides the target for each subject. Subjects that convert to AD within 24 months are labeled as progressive MCI (pMCI), and the remaining subjects that do not convert are labeled as stable MCI (sMCI). Correlation among subjects from the same site can be caused by differences in imaging protocols and inter-rater variability across sites. To explicitly account for the site confound, it is modeled as the random effect. A total of 719 subjects from 54 sites match our requirements: 392 subjects from the 20 sites with the largest number of subjects are included in the training and seen test sets (with k-fold cross-validation applied), and the unseen test set consists of the remaining 327 subjects from 34 sites. 27.041\% of the training and seen test subjects are pMCI, and 37.615\% of the unseen test subjects are pMCI. \chapter{Experimental Results} \section{Application of UQ-ARMED to distinguish stable versus progressive MCI} \label{sec:UQ-ARMED:results:MCI} A mild cognitive impairment (MCI) patient is a subject with a cognitive disorder, including memory and executive function issues, that does not yet impair daily activities. This experiment compares non-UQ ARMED (the original ARMED approach) and ARMED with 4 different UQ methods on the two-category classification task of predicting whether an MCI patient has stable (sMCI) or progressive (pMCI) MCI. A pMCI patient is a subject whose symptoms worsen within 24 months from baseline to full Alzheimer's disease (AD), in which cognitive issues impair daily activities. sMCI patients are subjects who remain stable in their MCI diagnosis at the 24-month follow-up. The primary random effect in this cross-sectional dataset is the correlation among samples from the same site. There are 54 different sites in the data. Subjects from 20 sites are used for training (partitioned into training and test data from seen sites), and the remaining sites are held out as test data from unseen sites.
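This site-level hold-out split can be sketched as follows; the subject records and site counts here are synthetic stand-ins (the real split keeps the 20 largest of the 54 ADNI sites):

```python
from collections import Counter

def split_by_site(subjects, n_train_sites=20):
    """Hold out entire sites: the n_train_sites sites with the most
    subjects form the training/seen-site pool; all remaining sites
    form the unseen-site test set."""
    site_counts = Counter(s["site"] for s in subjects)
    train_sites = {site for site, _ in site_counts.most_common(n_train_sites)}
    seen = [s for s in subjects if s["site"] in train_sites]
    unseen = [s for s in subjects if s["site"] not in train_sites]
    return seen, unseen

# synthetic example with 5 sites; keep the 2 largest for training/seen test
subjects = [{"id": i, "site": site} for i, site in enumerate("AAAABBBCCDE")]
seen, unseen = split_by_site(subjects, n_train_sites=2)
print(len(seen), len(unseen))  # -> 7 4
```

Holding out whole sites (rather than random subjects) is what makes the unseen-site evaluation a genuine out-of-distribution test of the random-effects modeling.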
As summarized in materials section \ref{sec:UQ-ARMED:materials:ADNI}, the tabular summary data provided by ADNI (i.e. `ADNIMERGE') are the 42 input covariates. A dense feed-forward neural network (also known as a multilayer perceptron (MLP)) was chosen as the conventional model, as this architecture is most suitable for learning from such tabular data. Comparable UQ-ARMED models are constructed by adding the appropriate modification, as described in methods sections \ref{sec:UQ-ARMED:methods:BNN}, \ref{sec:UQ-ARMED:methods:SWAG}, \ref{sec:UQ-ARMED:methods:MC dropout}, and \ref{sec:UQ-ARMED:methods:Ensemble}, and evaluated as described in section \ref{sec:UQ-ARMED:methods:comparison metrics}, with hyperparameters defined in supplement \ref{sup:UQ-ARMED:HP}. Across the models, we compare predictive performance, statistical significance of the model fit, statistical significance of the covariates including the synthetic probes, and the calibration of the prediction confidence. \section{Predictive Performance Comparison} \label{sec:UQ-MABO:results:performance} As stated in the introduction, the goal of this work is to provide UQ for the models while maintaining the high performance achieved by the non-UQ ARMED model. For this experiment, the non-UQ ARMED model achieved an AUROC of 0.879 and 0.799 on the seen and unseen test data, respectively. Fig. \ref{fig:UQ-ARMED:performance} provides a comparison of AUROC across the models, and table \ref{tab:UQ-ARMED:performance} provides all performance metrics for each model, including the 95\% CI for the seen and unseen test data and the overall model fit calculated as described in \ref{sec:UQ-ARMED:meth:modelfit}. Based on the 95\% confidence interval (CI) of model performance, many UQ-ARMED methods (bolded in table \ref{tab:UQ-ARMED:performance}) achieve the same (not statistically different) or better performance compared to non-UQ ARMED, fulfilling the performance objective.
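The CI-based comparison against the non-UQ baseline can be sketched as below. This is a simplification: the actual analysis pools standard errors across the k folds via the Satterthwaite approximation (sec \ref{sec:UQ-ARMED:meth:modelfit}), and the draw values here are illustrative.

```python
import statistics

def ci95(samples):
    """Normal-approximation 95% CI for the mean of a per-draw metric
    (the full pipeline additionally pools across the k folds via the
    Satterthwaite approximation; this sketch treats one fold only)."""
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return m - 1.96 * se, m + 1.96 * se

def matches_or_exceeds(ci, baseline):
    """Bolding rule used in the table: a UQ model matches or exceeds
    the baseline if the baseline falls inside or below its 95% CI."""
    lo, hi = ci
    return baseline <= hi

# illustrative: 30 AUROC draws for one UQ model vs. the non-UQ ARMED
# baseline of 0.879 on the seen-site test data
draws = [0.87, 0.88, 0.89, 0.875, 0.885, 0.88] * 5
print(matches_or_exceeds(ci95(draws), 0.879))  # -> True
```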
Among the UQ methods, we observe that all MC dropout models decreased performance significantly. Our remaining analysis therefore focuses on the higher performing models, specifically BNN All, Ensemble sampling with a 0.9 fraction, and SWAG-diag with a learning rate of 0.1. Results for the lower performing UQ methods are included in the supplementary material \ref{appendix:UQ-ARMED}. \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/MainPerformance.PNG} \caption[Comparison of AUROC performance of the tested UQ-ARMED methods.]{ Comparison of AUROC performance of the tested UQ-ARMED methods, with error bars indicating the pooled standard error across the 30 draws from the model distribution. Chance performance is displayed as a red dashed line. All models perform significantly better than chance. Exact AUROC values and 95\% CI are reported in table \ref{tab:UQ-ARMED:performance} under the AUROC column. \textbf{A.} AUROC on the seen-site test data. \textbf{B.} AUROC on the unseen-site test data. Models highlighted in green text are the focus of subsequent main results figures; figures for the other models can be found in the supplementary materials. } \label{fig:UQ-ARMED:performance} \end{figure} \begin{sidewaystable}[] \scriptsize \begin{tabular}{llllllllllllll} & p & \multicolumn{2}{l}{AUROC} & \multicolumn{2}{l}{Accuracy} & \multicolumn{2}{l}{Spec. at Youden} & \multicolumn{2}{l}{Sens. at Youden} & \multicolumn{2}{l}{Sens. at 80\% Spec.} & \multicolumn{2}{l}{Sens.
at 90\% Spec.} \\ \cline{2-14} Model & & mean & 95\% CI & mean & 95\% CI & mean & 95\% CI & mean & 95\% CI & mean & 95\% CI & mean & 95\% CI \\ \hline \multicolumn{1}{c}{Seen Site Performance} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ Non UQ ARMED & & \textbf{0.878} & nan - nan & 0.795 & nan - nan & 0.846 & nan - nan & 0.745 & nan - nan & 0.780 & nan - nan & 0.593 & nan - nan \\ BNN First & \textless{}.0001 & 0.869 & 0.862 - 0.877 & \textbf{0.795} & 0.780 - 0.810 & \textbf{0.817} & 0.787 - 0.846 & \textbf{0.773} & 0.750 - 0.796 & \textbf{0.777} & 0.748 - 0.805 & \textbf{0.540} & 0.477 - 0.602 \\ BNN Last & \textless{}.0001 & 0.873 & 0.870 - 0.877 & \textbf{0.806} & 0.786 - 0.825 & \textbf{0.832} & 0.817 - 0.848 & \textbf{0.779} & 0.737 - 0.820 & 0.754 & 0.738 - 0.771 & \textbf{0.561} & 0.525 - 0.597 \\ BNN All & \textless{}.0001 & \textbf{0.874} & 0.869 - 0.879 & \textbf{0.793} & 0.768 - 0.818 & 0.817 & 0.79 - 0.845 & \textbf{0.769} & 0.719 - 0.818 & \textbf{0.759} & 0.730 - 0.787 & \textbf{0.583} & 0.539 - 0.627 \\ SWAG-diag lr=0.1 & \textless{}.0001 & \textbf{0.895} & 0.888 - 0.903 & \textbf{0.801} & 0.763 - 0.838 & \textbf{0.849} & 0.793 - 0.904 & \textbf{0.753} & 0.663 - 0.844 & \textbf{0.818} & 0.766 - 0.869 & \textbf{0.586} & 0.540 - 0.633 \\ SWAG-diag lr=0.01 & 0.0008 & \textbf{0.837} & 0.696 - 0.978 & \textbf{0.769} & 0.685 - 0.852 & \textbf{0.843} & 0.693 - 0.992 & \textbf{0.695} & 0.505 - 0.885 & \textbf{0.721} & 0.543 - 0.900 & \textbf{0.537} & 0.360 - 0.714 \\ SWAG-diag lr=0.001 & \textless{}.0001 & \textbf{0.874} & 0.873 - 0.876 & \textbf{0.791} & 0.776 - 0.805 & \textbf{0.841} & 0.823 - 0.859 & \textbf{0.740} & 0.701 - 0.779 & \textbf{0.763} & 0.734 - 0.792 & \textbf{0.568} & 0.554 - 0.582 \\ 
SWAG-diag lr=0.0001 & \textless{}.0001 & \textbf{0.878} & 0.878 - 0.878 & \textbf{0.791} & 0.791 - 0.791 & \textbf{0.846} & 0.846 - 0.846 & \textbf{0.735} & 0.735 - 0.735 & \textbf{0.780} & 0.780 - 0.780 & \textbf{0.593} & 0.593 - 0.593 \\ SWAG-full lr=0.1 & 0.5082 & \textbf{0.763} & 0.560 - 0.967 & \textbf{0.663} & 0.508 - 0.818 & \textbf{0.687} & 0.354 - 1.020 & \textbf{0.638} & 0.250 - 1.027 & \textbf{0.568} & 0.255 - 0.882 & \textbf{0.356} & 0.094 - 0.618 \\ SWAG-full lr=0.01 & 0.3556 & \textbf{0.763} & 0.567 - 0.959 & \textbf{0.668} & 0.548 - 0.789 & \textbf{0.733} & 0.426 - 1.040 & \textbf{0.603} & 0.251 - 0.955 & \textbf{0.552} & 0.281 - 0.823 & \textbf{0.390} & 0.142 - 0.639 \\ SWAG-full lr=0.001 & 0.0092 & \textbf{0.827} & 0.721 - 0.934 & \textbf{0.738} & 0.648 - 0.829 & \textbf{0.767} & 0.562 - 0.973 & \textbf{0.709} & 0.505 - 0.914 & \textbf{0.677} & 0.504 - 0.850 & \textbf{0.504} & 0.335 - 0.672 \\ SWAG-full lr=0.0001 & 0.0059 & \textbf{0.831} & 0.728 - 0.934 & \textbf{0.745} & 0.656 - 0.835 & \textbf{0.772} & 0.584 - 0.961 & \textbf{0.719} & 0.532 - 0.905 & \textbf{0.693} & 0.526 - 0.861 & \textbf{0.509} & 0.352 - 0.666 \\ Ensemble Random Initializations & \textless{}.0001 & \textbf{0.892} & 0.858 - 0.927 & \textbf{0.788} & 0.736 - 0.841 & \textbf{0.836} & 0.767 - 0.905 & \textbf{0.741} & 0.629 - 0.853 & \textbf{0.783} & 0.678 - 0.888 & \textbf{0.640} & 0.516 - 0.763 \\ Ensemble MC sample 70\% & \textless{}.0001 & \textbf{0.877} & 0.846 - 0.909 & \textbf{0.781} & 0.727 - 0.835 & \textbf{0.836} & 0.767 - 0.904 & \textbf{0.727} & 0.605 - 0.849 & \textbf{0.764} & 0.664 - 0.865 & \textbf{0.571} & 0.440 - 0.702 \\ Ensemble MC sample 80\% & \textless{}.0001 & \textbf{0.888} & 0.859 - 0.918 & \textbf{0.792} & 0.743 - 0.842 & \textbf{0.846} & 0.781 - 0.912 & \textbf{0.739} & 0.626 - 0.852 & \textbf{0.789} & 0.689 - 0.889 & \textbf{0.590} & 0.468 - 0.713 \\ Ensemble MC sample 90\% & \textless{}.0001 & \textbf{0.892} & 0.868 - 0.916 & \textbf{0.800} & 0.752 - 
0.849 & \textbf{0.852} & 0.794 - 0.909 & \textbf{0.749} & 0.644 - 0.855 & \textbf{0.798} & 0.714 - 0.882 & \textbf{0.606} & 0.496 - 0.715 \\ MC Dropout 10\% & \textless{}.0001 & \textbf{0.791} & 0.754 - 0.828 & 0.726 & 0.683 - 0.770 & \textbf{0.750} & 0.684 - 0.816 & \textbf{0.703} & 0.606 - 0.800 & \textbf{0.640} & 0.544 - 0.736 & \textbf{0.413} & 0.306 - 0.520 \\ MC Dropout 20\% & 0.0018 & 0.744 & 0.687 - 0.801 & \textbf{0.703} & 0.653 - 0.753 & \textbf{0.770} & 0.698 - 0.841 & \textbf{0.637} & 0.535 - 0.738 & \textbf{0.577} & 0.453 - 0.701 & 0.365 & 0.249 - 0.481 \\ MC Dropout 30\% & 0.001 & \textbf{0.773} & 0.714 - 0.831 & \textbf{0.724} & 0.667 - 0.782 & \textbf{0.751} & 0.674 - 0.829 & \textbf{0.697} & 0.579 - 0.816 & \textbf{0.594} & 0.456 - 0.732 & \textbf{0.394} & 0.283 - 0.505 \\ MC Dropout 40\% & 0.1116 & \textbf{0.705} & 0.636 - 0.774 & 0.668 & 0.606 - 0.73 & \textbf{0.695} & 0.556 - 0.834 & \textbf{0.641} & 0.498 - 0.784 & \textbf{0.520} & 0.381 - 0.658 & \textbf{0.333} & 0.214 - 0.451 \\ MC Dropout 50\% & 0.0526 & \textbf{0.709} & 0.633 - 0.786 & \textbf{0.681} & 0.618 - 0.745 & \textbf{0.727} & 0.649 - 0.805 & \textbf{0.636} & 0.517 - 0.755 & \textbf{0.515} & 0.365 - 0.664 & \textbf{0.310} & 0.170 - 0.450 \\ \hline \multicolumn{1}{c}{Unseen Site Performance} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ Non UQ ARMED & & 0.799 & nan - nan & 0.713 & nan - nan & 0.606 & nan - nan & 0.820 & nan - nan & 0.574 & nan - nan & 0.398 & nan - nan \\ BNN First & \textless{}.0001 & \textbf{0.796} & 0.785 - 0.806 & \textbf{0.706} & 0.695 - 0.717 & 0.574 & 0.552 - 0.596 & \textbf{0.838} & 0.814 - 0.862 & \textbf{0.578} & 0.550 - 0.605 & 0.395 & 0.363 - 0.427 \\ BNN Last & \textless{}.0001 & 0.791 & 0.786 - 0.796 & 
\textbf{0.704} & 0.695 - 0.714 & \textbf{0.570} & 0.548 - 0.591 & \textbf{0.839} & 0.821 - 0.858 & \textbf{0.580} & 0.568 - 0.593 & 0.406 & 0.385 - 0.426 \\ BNN All & \textless{}.0001 & \textbf{0.800} & 0.793 - 0.806 & \textbf{0.707} & 0.699 - 0.716 & 0.548 & 0.530 - 0.566 & \textbf{0.867} & 0.851 - 0.884 & \textbf{0.586} & 0.567 - 0.605 & 0.412 & 0.386 - 0.437 \\ SWAG-diag lr=0.1 & \textless{}.0001 & \textbf{0.829} & 0.818 - 0.840 & \textbf{0.691} & 0.665 - 0.718 & \textbf{0.666} & 0.591 - 0.740 & 0.717 & 0.632 - 0.802 & \textbf{0.610} & 0.545 - 0.675 & 0.454 & 0.397 - 0.511 \\ SWAG-diag lr=0.01 & 0.0127 & \textbf{0.773} & 0.651 - 0.894 & \textbf{0.693} & 0.633 - 0.752 & \textbf{0.630} & 0.454 - 0.806 & \textbf{0.755} & 0.550 - 0.959 & \textbf{0.568} & 0.426 - 0.711 & 0.392 & 0.291 - 0.494 \\ SWAG-diag lr=0.001 & \textless{}.0001 & \textbf{0.799} & 0.798 - 0.799 & \textbf{0.715} & 0.711 - 0.719 & \textbf{0.624} & 0.608 - 0.640 & \textbf{0.806} & 0.793 - 0.819 & \textbf{0.576} & 0.572 - 0.580 & 0.402 & 0.395 - 0.409 \\ SWAG-diag lr=0.0001 & \textless{}.0001 & \textbf{0.799} & 0.799 - 0.799 & \textbf{0.714} & 0.714 - 0.714 & \textbf{0.609} & 0.609 - 0.609 & \textbf{0.819} & 0.819 - 0.819 & \textbf{0.576} & 0.576 - 0.576 & 0.398 & 0.398 - 0.398 \\ SWAG-full lr=0.1 & 1.3973 & \textbf{0.724} & 0.546 - 0.901 & \textbf{0.588} & 0.477 - 0.698 & \textbf{0.591} & 0.201 - 0.980 & \textbf{0.585} & 0.136 - 1.033 & \textbf{0.394} & 0.112 - 0.677 & 0.258 & 0.041 - 0.475 \\ SWAG-full lr=0.01 & 0.9274 & \textbf{0.729} & 0.573 - 0.886 & \textbf{0.622} & 0.522 - 0.722 & \textbf{0.547} & 0.208 - 0.886 & \textbf{0.697} & 0.358 - 1.036 & \textbf{0.428} & 0.196 - 0.660 & 0.291 & 0.119 - 0.463 \\ SWAG-full lr=0.001 & 0.1587 & \textbf{0.761} & 0.667 - 0.854 & \textbf{0.667} & 0.599 - 0.735 & \textbf{0.551} & 0.295 - 0.807 & \textbf{0.783} & 0.568 - 0.997 & \textbf{0.521} & 0.367 - 0.675 & 0.354 & 0.230 - 0.479 \\ SWAG-full lr=0.0001 & 0.1146 & \textbf{0.766} & 0.683 - 0.848 & 
\textbf{0.672} & 0.606 - 0.738 & \textbf{0.553} & 0.316 - 0.790 & \textbf{0.790} & 0.589 - 0.991 & \textbf{0.531} & 0.400 - 0.663 & 0.364 & 0.251 - 0.477 \\ Ensemble Random Initializations & 0.411 & \textbf{0.790} & 0.720 - 0.859 & \textbf{0.653} & 0.572 - 0.734 & \textbf{0.508} & 0.212 - 0.804 & \textbf{0.798} & 0.503 - 1.094 & \textbf{0.588} & 0.465 - 0.712 & 0.393 & 0.280 - 0.507 \\ Ensemble MC sample 70\% & 0.0128 & \textbf{0.797} & 0.760 - 0.833 & \textbf{0.692} & 0.636 - 0.748 & \textbf{0.565} & 0.409 - 0.722 & \textbf{0.819} & 0.675 - 0.963 & \textbf{0.592} & 0.505 - 0.678 & 0.413 & 0.330 - 0.495 \\ Ensemble MC sample 80\% & 0.0003 & \textbf{0.806} & 0.775 - 0.837 & \textbf{0.708} & 0.663 - 0.754 & \textbf{0.591} & 0.467 - 0.715 & \textbf{0.826} & 0.692 - 0.960 & \textbf{0.607} & 0.533 - 0.682 & 0.428 & 0.357 - 0.500 \\ Ensemble MC sample 90\% & 0.0002 & \textbf{0.805} & 0.779 - 0.832 & \textbf{0.706} & 0.663 - 0.749 & \textbf{0.605} & 0.491 - 0.718 & \textbf{0.808} & 0.682 - 0.934 & \textbf{0.606} & 0.534 - 0.677 & 0.426 & 0.355 - 0.498 \\ MC Dropout 10\% & 0.0173 & \textbf{0.732} & 0.717 - 0.746 & \textbf{0.643} & 0.623 - 0.663 & \textbf{0.533} & 0.485 - 0.582 & \textbf{0.753} & 0.697 - 0.808 & \textbf{0.470} & 0.431 - 0.509 & 0.309 & 0.271 - 0.347 \\ MC Dropout 20\% & 1.1758 & \textbf{0.677} & 0.657 - 0.697 & \textbf{0.616} & 0.596 - 0.636 & \textbf{0.569} & 0.523 - 0.615 & 0.664 & 0.612 - 0.716 & \textbf{0.406} & 0.368 - 0.444 & 0.238 & 0.200 - 0.275 \\ MC Dropout 30\% & 0.9628 & \textbf{0.695} & 0.674 - 0.716 & \textbf{0.620} & 0.597 - 0.642 & \textbf{0.547} & 0.494 - 0.600 & \textbf{0.693} & 0.634 - 0.752 & \textbf{0.418} & 0.376 - 0.460 & 0.267 & 0.227 - 0.306 \\ MC Dropout 40\% & 1.9743 & \textbf{0.664} & 0.641 - 0.686 & \textbf{0.584} & 0.552 - 0.616 & \textbf{0.539} & 0.453 - 0.626 & 0.629 & 0.512 - 0.745 & \textbf{0.351} & 0.309 - 0.393 & 0.185 & 0.146 - 0.225 \\ MC Dropout 50\% & 1.9964 & \textbf{0.636} & 0.613 - 0.660 & \textbf{0.587} & 0.567 - 
0.608 & \textbf{0.540} & 0.492 - 0.587 & \textbf{0.635} & 0.580 - 0.689 & \textbf{0.349} & 0.314 - 0.384 & 0.203 & 0.160 - 0.245 \end{tabular} \caption[Performance comparison for the prediction of stable vs. progressive mild cognitive impairment.]{ Performance comparison for the prediction of stable vs. progressive mild cognitive impairment. P-values and confidence intervals were computed using pooled performance metrics (sec \ref{sec:UQ-ARMED:meth:modelfit}). Models whose 95\% CI performance included or exceeded the non-UQ ARMED model's are bolded. } \label{tab:UQ-ARMED:performance} \end{sidewaystable} \section{Statistical significance for covariate coefficient} \label{sec:UQ-MABO:results:covariate} The previous section focused on performance and performance uncertainty; this section focuses on the statistical significance of the estimated covariate coefficients for the fixed effects model. Table \ref{tab:UQ-ARMED:covariate summary} provides summary values for each of the models. These summary metrics show the p-value, absolute value, and rank of the fixed effect covariate coefficient for the most statistically significant and highest ranked synthetic probes. As the probes are designed to act as random effects, high performing models should not assign them statistically significant p-values. Overall, the ensemble methods successfully deweighted the probes. Specifically, for the ensemble model with a 0.9 sampling fraction, the highest ranked probe was only the 25th most important feature, and the probes were assigned p-values that were neither significant (p-value > 0.05) nor trending (p-value > 0.1) (table \ref{tab:UQ-ARMED:covariate summary}). The BNN methods also deweighted the probes; however, one of them (20\% of the probes) was assigned a statistically trending p-value (0.0669). Some SWAG models successfully deweighted the probes and assigned them insignificant p-values, e.g. SWAG-diag lr=0.01, while other SWAG models did not deweight the probes and assigned them statistically significant p-values.
Lastly, the MC dropout models failed to deweight the probes and assigned them statistically significant p-values. \begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/MainCovariateCoef.PNG} \caption[Coefficient covariate estimation and calculated uncertainty for 3 models with high predictive performance.]{ Coefficient covariate estimation and calculated uncertainty for 3 models with high predictive performance. Each plot shows the mean and 95\% CI for each estimated covariate coefficient. Statistical significance is provided on the left and color coded as green: highly significant (p $<$ .01); orange: significant (0.01 $<$ p $<$ 0.05); blue: trending to significance (0.05 $<$ p $<$ 0.1); and black: not significant (p $>$ 0.1). Synthetic probes, defined in section \ref{sec:UQ-ARMED:methods:probe gen}, are displayed in red, and all other biological covariates in black. \textbf{A.} Absolute value of the covariate coefficient for the Ensemble model with a 90\% sampling of the data. \textbf{B.} Absolute value of the covariate coefficient for the BNN model with all layers Bayesian. \textbf{C.} Absolute value of the covariate coefficient for the SWAG-diag model with a 0.1 learning rate. Models shown here are highlighted in green in table \ref{tab:UQ-ARMED:covariate summary}. Coefficient covariate estimation for all other models can be found in \ref{sup:UQ-ARMED:covariate coef}.
} \label{fig:UQ-ARMED:covariate coef} \end{figure} \begin{sidewaystable}[] \centering \begin{tabular}{lllllllll} & \multicolumn{4}{c}{Probe with smallest p} & \multicolumn{4}{c}{Probe with largest coefficient} \\ MEDL method & Probe \# & p & Coef & rank & Probe \# & p & Coef & rank \\ \hline Non UQ ARMED & NA & NA & NA & & 2 & NA & 0.0033 & 14 \\ BNN First & 4 & 0.0055 & -0.0018 & 17 & 4 & 0.0055 & -0.0018 & 17 \\ BNN Last & 4 & \textless{}.0001 & -0.0027 & 13 & 4 & \textless{}.0001 & -0.0027 & 13 \\ \textcolor{ForestGreen}{\textbf{BNN All}} & 4 & 0.0669 & -0.0003 & 22 & 4 & 0.0669 & -0.0003 & 22 \\ \textcolor{ForestGreen}{SWAG-diag lr=0.1} & 3 & \textless{}.0001 & 0.0072 & 24 & 2 & \textless{}.0001 & 0.012 & 12 \\ \textbf{SWAG-diag lr=0.01} & 3 & 0.1981 & 0.0029 & 25 & 3 & 0.1981 & 0.0029 & 25 \\ \textbf{SWAG-diag lr=0.001} & 3 & \textless{}.0001 & 0.0029 & 17 & 3 & \textless{}.0001 & 0.0029 & 17 \\ \textbf{SWAG-diag lr=0.0001} & 4 & \textless{}.0001 & -0.0027 & 21 & 2 & \textless{}.0001 & 0.0033 & 15 \\ \textbf{SWAG-full lr=0.1} & 2 & 0.5774 & 0.0068 & 11 & 2 & 0.5774 & 0.0068 & 11 \\ \textbf{SWAG-full lr=0.01} & 4 & 0.5852 & -0.0041 & 12 & 2 & 0.6352 & 0.0044 & 9 \\ \textbf{SWAG-full lr=0.001} & 4 & 0.5737 & -0.0025 & 17 & 4 & 0.5737 & -0.0025 & 17 \\ SWAG-full lr=0.0001 & 4 & 0.4814 & -0.0026 & 11 & 4 & 0.4814 & -0.0026 & 11 \\ \textbf{Ensemble Random Initializations} & 2 & 0.7474 & 0.0018 & 23 & 2 & 0.7474 & 0.0018 & 23 \\ \textbf{Ensemble MC sample 70\%} & 2 & 0.1847 & 0.0031 & 22 & 2 & 0.1847 & 0.0031 & 22 \\ \textbf{Ensemble MC sample 80\%} & 2 & 0.1853 & 0.0031 & 25 & 2 & 0.1853 & 0.0031 & 25 \\ \textcolor{ForestGreen}{\textbf{Ensemble MC sample 90\%}} & 2 & 0.2458 & 0.0029 & 25 & 2 & 0.2458 & 0.0029 & 25 \\ MC Dropout 10\% & 4 & \textless{}.0001 & -0.0024 & 11 & 4 & \textless{}.0001 & -0.0024 & 11 \\ MC Dropout 20\% & 5 & 0.0003 & -0.0021 & 9 & 5 & 0.0003 & -0.0021 & 9 \\ \textbf{MC Dropout 30\%} & 5 & 0.0119 & -0.0016 & 8 & 5 & 0.0119 & -0.0016 & 8 \\ MC 
Dropout 40\% & 5 & 0.0503 & -0.0016 & 8 & 5 & 0.0503 & -0.0016 & 8 \\ MC Dropout 50\% & 4 & 0.2498 & -0.001 & 8 & 5 & 0.2928 & -0.0011 & 7 \end{tabular} \caption[Summary of synthetic probe fixed effects covariate coefficient.]{ Summary of the synthetic probe fixed effects covariate coefficients. This table provides the statistical metrics for the fixed effects covariate coefficient of the probe with the smallest p-value and the probe with the largest coefficient. Rank is the position of the probe when features are ranked by the absolute value of the coefficient. Coef and p are the covariate coefficient and p-value of the probe, respectively, as described in \ref{sec:UQ-ARMED:methods:covariate coef}. Bolded UQ methods provided non-statistically significant p-values for all probes and ranked them as low as, or lower than, the non-UQ ARMED model. NA = not applicable. The models in green are displayed in fig \ref{fig:UQ-ARMED:covariate coef}. } \label{tab:UQ-ARMED:covariate summary} \end{sidewaystable} \section{Prediction confidence comparison} Results sections \ref{sec:UQ-MABO:results:performance} and \ref{sec:UQ-MABO:results:covariate} both focus on generating model statistics comparable to those of the linear mixed effects model. However, the UQ methods also provide a prediction probability distribution, allowing a prediction confidence to be calculated as described in section \ref{sec:UQ-ARMED:methods:confidence}. To compare the validity of the prediction confidence for each Bayesian deep learning method, we compare the confidence calibration scores. In particular, if a model's prediction confidence is well calibrated, we expect its confidence to be higher for its correct predictions (correctly classifying a subject as sMCI vs. pMCI) than for its incorrect predictions. Calibration of the prediction confidence is calculated and compared for each model as described in section \ref{sec:UQ-ARMED:methods:confidence}.
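This calibration check can be sketched as follows, given per-subject confidences split by prediction correctness. The confidence values are illustrative, and a normal approximation stands in for the exact t distribution used in the reported tables.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for the difference in mean confidence between
    correctly (a) and incorrectly (b) predicted subjects."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

def two_sided_p(t):
    """Two-sided p-value using a normal approximation to the t
    distribution (adequate at the sample sizes used here)."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

# illustrative per-subject confidences, split by prediction correctness
correct = [0.96, 0.95, 0.97, 0.94, 0.96, 0.95, 0.97, 0.96]
incorrect = [0.88, 0.90, 0.87, 0.89, 0.91, 0.88, 0.90, 0.89]

diff = statistics.mean(correct) - statistics.mean(incorrect)
p = two_sided_p(welch_t(correct, incorrect))
print(diff > 0, p < 0.05)  # -> True True
```

A well-calibrated model yields a positive `diff` with a small p-value; a poorly calibrated model shows no significant separation.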
The prediction confidence for the 3 top performing models is provided in fig \ref{fig:UQ-ARMED:prediction_confidence}, the remaining models in the supplemental \ref{sup:UQ-ARMED:confidence}, and a summary in table \ref{tab:UQ-ARMED:confidence}. Among BNN All, SWAG-diag lr=0.1, and the ensemble method with 90\% subsampling, the ensemble method provided the best prediction confidence calibration, with a mean separation between the confidence of correctly and incorrectly predicted subjects of 6 percentage points for the seen test data and 3.6 percentage points for the unseen site data, and a highly statistically significant difference between the confidence of the incorrectly and correctly predicted subjects. The Bayesian methods showed a smaller, but still statistically significant, difference between the correctly and incorrectly predicted confidences, with a mean difference of 1.1 and 0.7 percentage points for the seen and unseen test data, respectively. Some of the SWAG methods also provided a statistically significant difference between the confidence of the correctly and incorrectly predicted subjects; for example, SWAG-diag with lr=0.1 showed a statistically significant difference in confidence of 2.9 and 2.0 percentage points for the seen and unseen test data, respectively. Additionally, for many models, including the ensemble model with 90\% sampling, the confidence on the unseen test data is lower for both the correctly and incorrectly predicted subjects when compared to the seen test data. This is a favorable characteristic, as we expect models to be less confident on out-of-distribution data, which, by definition, unseen-site test data is (it is not \emph{iid} with the training data).
\begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/MainPredictionConfidence.PNG} \caption[Box plots for comparison of the prediction confidence of correctly and incorrectly predicted subjects per model, for 3 models with high predictive performance.]{ Box plots for comparison of the prediction confidence of correctly and incorrectly predicted subjects per model, for 3 models with high predictive performance. Models shown are consistent with Fig. \ref{fig:UQ-ARMED:covariate coef}. Confidence is calculated as described in sec \ref{sec:UQ-ARMED:methods:confidence}. The x-axis groups the prediction confidence based on the correctly and incorrectly predicted samples. Columns show the distributions on the training, seen-site test data, and unseen-site test data, left to right, for each model. \textbf{A.} Prediction confidence for the Ensemble model with a 90\% sampling of the data. \textbf{B.} Prediction confidence for the BNN model with all layers Bayesian. \textbf{C.} Prediction confidence for the SWAG-diag model with a 0.1 learning rate. Models shown here are highlighted in green in table \ref{tab:UQ-ARMED:confidence}.
Model confidence for all other models can be found in \ref{sup:UQ-ARMED:confidence} } \label{fig:UQ-ARMED:prediction_confidence} \end{figure} \begin{sidewaystable}[] \footnotesize \centering \begin{tabular}{l|llll|llll|llll} & \multicolumn{4}{l}{Train} & \multicolumn{4}{l}{Seen Site} & \multicolumn{4}{l}{Unseen site} \\ Model & Correct & Incorrect & Difference & p & Correct & Incorrect & Difference & p & Correct & Incorrect & Difference & p \\ \hline Non UQ ARMED & 1.000 & 1.000 & 0.000 & & 1.000 & 1.000 & 0.000 & & 1.000 & 1.000 & 0.000 & \\ BNN First & 0.997 & 0.984 & 0.013 & \textless{}.0001 & 0.998 & 0.979 & 0.020 & 0.0003 & 0.989 & 0.981 & 0.008 & 0.0004 \\ BNN Last & 0.996 & 0.980 & 0.017 & \textless{}.0001 & 0.996 & 0.984 & 0.012 & 0.0325 & 0.993 & 0.990 & 0.002 & 0.168 \\ \textcolor{ForestGreen}{BNN All}& 0.995 & 0.979 & 0.016 & \textless{}.0001 & 0.992 & 0.981 & 0.011 & 0.108 & 0.993 & 0.986 & 0.007 & 0.0002 \\ \textcolor{ForestGreen}{SWAG-diag lr=0.1} & 0.989 & 0.928 & 0.061 & \textless{}.0001 & 0.987 & 0.958 & 0.029 & 0.0007 & 0.971 & 0.951 & 0.020 & \textless{}.0001 \\ SWAG-diag lr=0.01 & 0.965 & 0.925 & 0.040 & \textless{}.0001 & 0.950 & 0.928 & 0.022 & 0.1521 & 0.938 & 0.911 & 0.027 & \textless{}.0001 \\ SWAG-diag lr=0.001 & 0.999 & 0.996 & 0.003 & 0.0023 & 1.000 & 0.996 & 0.004 & 0.019 & 0.998 & 0.997 & 0.001 & 0.4277 \\ SWAG-diag lr=0.0001 & 1.000 & 1.000 & 0.000 & 1.000 & 1.000 & 1.000 & 0.000 & 1.000 & 1.000 & 1.000 & 0.000 & 1.000 \\ SWAG-full lr=0.1 & 0.829 & 0.686 & 0.142 & \textless{}.0001 & 0.817 & 0.694 & 0.123 & \textless{}.0001 & 0.742 & 0.673 & 0.069 & \textless{}.0001 \\ SWAG-full lr=0.01 & 0.870 & 0.699 & 0.171 & \textless{}.0001 & 0.861 & 0.701 & 0.160 & \textless{}.0001 & 0.803 & 0.783 & 0.019 & 0.0022 \\ SWAG-full lr=0.001 & 0.920 & 0.818 & 0.102 & \textless{}.0001 & 0.911 & 0.830 & 0.080 & \textless{}.0001 & 0.877 & 0.858 & 0.019 & 0.0021 \\ SWAG-full lr=0.0001 & 0.926 & 0.844 & 0.082 & \textless{}.0001 & 0.914 & 0.853 & 0.061 & 
0.0007 & 0.879 & 0.870 & 0.009 & 0.1475 \\ Ensemble Random Initializations & 0.947 & 0.788 & 0.159 & \textless{}.0001 & 0.927 & 0.789 & 0.138 & \textless{}.0001 & 0.808 & 0.761 & 0.047 & \textless{}.0001 \\ Ensemble MC sample 70\% & 0.942 & 0.780 & 0.162 & \textless{}.0001 & 0.936 & 0.870 & 0.065 & \textless{}.0001 & 0.887 & 0.864 & 0.023 & \textless{}.0001 \\ Ensemble MC sample 80\% & 0.953 & 0.827 & 0.127 & \textless{}.0001 & 0.945 & 0.878 & 0.067 & \textless{}.0001 & 0.909 & 0.873 & 0.036 & \textless{}.0001 \\ \textcolor{ForestGreen}{Ensemble MC sample 90\%} & 0.964 & 0.866 & 0.097 & \textless{}.0001 & 0.954 & 0.894 & 0.060 & 0.0001 & 0.916 & 0.881 & 0.036 & \textless{}.0001 \\ MC Dropout 10\% & 0.965 & 0.897 & 0.068 & \textless{}.0001 & 0.953 & 0.896 & 0.057 & \textless{}.0001 & 0.938 & 0.924 & 0.014 & 0.0015 \\ MC Dropout 20\% & 0.938 & 0.854 & 0.084 & \textless{}.0001 & 0.920 & 0.891 & 0.029 & 0.0642 & 0.918 & 0.908 & 0.010 & 0.0336 \\ MC Dropout 30\% & 0.938 & 0.852 & 0.086 & \textless{}.0001 & 0.914 & 0.870 & 0.044 & 0.0041 & 0.898 & 0.882 & 0.017 & 0.0006 \\ MC Dropout 40\% & 0.924 & 0.862 & 0.062 & \textless{}.0001 & 0.898 & 0.892 & 0.006 & 0.6528 & 0.912 & 0.898 & 0.014 & 0.0005 \\ MC Dropout 50\% & 0.929 & 0.871 & 0.058 & \textless{}.0001 & 0.917 & 0.892 & 0.025 & 0.0172 & 0.920 & 0.900 & 0.020 & \textless{}.0001 \end{tabular} \caption[Mean prediction confidence for each model, for the correctly (correct) and incorrectly (incorrect) predicted MCI conversion.]{ Each model's mean prediction confidence for the correctly (correct) and incorrectly (incorrect) predicted MCI conversion, as calculated in sec \ref{sec:UQ-ARMED:methods:confidence}. The difference and p columns are the difference in confidence between the correctly and incorrectly predicted subjects and the associated p-value calculated using a two-sided Student's t-test, respectively. The larger the difference, the better calibrated the prediction confidence. The models in green are displayed in Fig.
\ref{fig:UQ-ARMED:prediction_confidence}. } \label{tab:UQ-ARMED:confidence} \end{sidewaystable} \section{Time Comparison} Overall, the ensemble methods take about 30 times longer to train than the other models, which is expected given that the ensembles require training 30 independent models (Fig. \ref{fig:UQ-ARMED:TrainingTime}). However, training an ensemble is an embarrassingly parallel problem: the models can be trained independently in parallel, and as many ML practitioners have access to such parallel compute infrastructure, this may not be a barrier. The inference time for all Bayesian deep learning models, excluding the SWAG models, is about 30 times longer than for the regular ARMED models. This is also expected, as each model must produce 30 predictions to estimate the posterior distribution. The SWAG models required about 50 times the inference time; the additional time is due to the sampling of the covariance matrices. However, after the UQ models have been trained, each inference for all UQ models is independent and could likewise be parallelized, effectively reducing the inference time by 30x. For all models except SWAG this would make the inference time equivalent to that of the non-UQ ARMED model, given enough compute. Additionally, with a minor update to SWAG, the sampling of the variance/covariance weight matrix could be completed once and saved, making SWAG's inference time equivalent to the other UQ methods. While it would also be possible to decrease the inference time of the non-UQ ARMED model with \emph{faster} compute, inference through a single model is sequential and not readily parallelizable; all timing comparisons therefore remain relative, and additional parallel compute would largely compensate for the additional training and inference time of the UQ methods.
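The embarrassingly parallel ensemble training can be sketched as below, with a placeholder routine standing in for fitting one ARMED member (`train_member` and its return value are illustrative; real training would dispatch to separate processes, GPUs, or cluster jobs rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def train_member(seed):
    """Placeholder for training one ensemble member; in practice this
    would fit a full ARMED model on a sub-sample of the training data."""
    # deterministic stand-in for a fitted model's weights
    return {"seed": seed, "weight": (seed * 2654435761) % 1000 / 1000}

def train_ensemble(n_members=30, max_workers=4):
    # Embarrassingly parallel: members are independent, so they can be
    # dispatched to separate workers and trained concurrently.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(train_member, range(n_members)))

ensemble = train_ensemble(n_members=30)
print(len(ensemble))  # -> 30
```

With enough workers, the wall-clock training time approaches that of a single member, which is the basis for the claim that the 30x overhead may not be a practical barrier.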
\begin{figure}[] \centering \includegraphics[width=\textwidth,keepaspectratio]{figs/UQ-ARMED/MainTime.PNG} \caption[Training and inference time for the non-UQ ARMED and the different UQ-ARMED models.]{ Training and inference time for the non-UQ ARMED and the different UQ-ARMED models. \textbf{A.} Training time for each model. \textbf{B.} Inference time for each model, with 30 samples from the model's weights. } \label{fig:UQ-ARMED:TrainingTime} \end{figure} \begin{table}[] \centering \begin{tabular}{lll} Model & Training Time (s) & Inference Unseen Site Test (s) \\ \hline BNN All & 8.23 & 1.92 \\ BNN First & 5.73 & 1.91 \\ BNN Last & 6.06 & 6.70 \\ Ensemble MC sample 70\% & 118.01 & 1.89 \\ Ensemble MC sample 80\% & 119.47 & 1.91 \\ Ensemble MC sample 90\% & 119.57 & 1.92 \\ Ensemble Random Initializations & 114.76 & 1.92 \\ MC Dropout 10\% & 4.34 & 1.89 \\ MC Dropout 20\% & 4.14 & 1.88 \\ MC Dropout 30\% & 4.32 & 1.90 \\ MC Dropout 40\% & 4.14 & 1.93 \\ MC Dropout 50\% & 4.19 & 1.90 \\ Non UQ ARMED & 4.94 & 0.06 \\ SWAG-diag lr=0.0001 & 10.76 & 2.91 \\ SWAG-diag lr=0.001 & 11.15 & 2.93 \\ SWAG-diag lr=0.01 & 10.59 & 2.92 \\ SWAG-diag lr=0.1 & 10.56 & 2.89 \\ SWAG-full lr=0.0001 & 11.52 & 3.01 \\ SWAG-full lr=0.001 & 11.61 & 2.91 \\ SWAG-full lr=0.01 & 11.59 & 2.96 \\ SWAG-full lr=0.1 & 11.47 & 2.97 \end{tabular} \caption[Mean training and inference time in seconds for each model.]{ Mean training and inference time in seconds for each model. } \label{tab:UQ-ARMED:time} \end{table} \chapter{Discussion} Based on these results, we recommend the ensemble UQ methods. Ensemble UQ performed well across multiple metrics, including prediction performance, probe deweighting and covariate significance estimation, and confidence calibration. The ensembles were also the easiest to configure, as they proved less sensitive to changes in their hyperparameters compared to the other methods.
For example, all ensemble methods using sub-sampling performed well compared to the non-UQ ARMED models and provided non-statistically significant FE covariate coefficients for the synthetic probes, whereas the results for the SWAG models were highly dependent on the learning rate. The largest downside of the ensemble models is the training time, as they took about 30 times longer to train than the other UQ methods tested in this work. Furthermore, the additional training time scales linearly with the number of posterior samples required; e.g., for 100 samples, the ensemble methods would take about 100x longer to train than the other approaches, whereas the training time for the BNN, SWAG, and MC dropout models does not depend on the number of posterior samples. Additionally, the ensemble methods require selecting the number of posterior samples a priori, before training; all other methods allow sampling from the posterior a theoretically infinite number of times without any additional training. Many of the Bayesian networks performed similarly to the standard ARMED model (Table \ref{tab:UQ-ARMED:performance}), including the ensemble method using sub-sampling, the BNNs, and some of the SWAG models. One notable decrease in performance is the balanced accuracy of the ensemble approach with random initializations. One possibility is that, because each member started from a different random initialization, each found a different local minimum, a small subset of which were less optimal, causing the large variance in performance; the ensemble methods that used the same initialization likely sampled from relatively more similar minima (interestingly, similar to SWAG). However, this drop in performance was successfully captured by the p-value of the model fit, Table \ref{tab:UQ-ARMED:performance}.
Based on the 95\% CI, the MC dropout models provided the worst performance and were the only method where all models performed statistically significantly worse than the non-UQ ARMED model. As per the experimental design, the hyperparameters of the conventional models were kept consistent to provide the fairest comparison to the standard ARMED model. However, as dropout essentially removes model weights, MC dropout models may benefit from increasing the number of neurons per layer in proportion to the dropout rate. This would, however, require additional optimization, introduce complexity, be computationally costly, and obscure these results. When comparing the sensitivity of the models' UQ to the tested hyperparameters, the subsampling ensemble methods seemed most robust to changes in the tested hyperparameter, which is a desirable trait. However, with a smaller dataset, subsampling may become problematic, as the ensemble models may not have enough data to train on; further experimentation would be required to better quantify this effect. For the SWAG models, every metric measured in this work (performance, statistical significance of model fit and covariate coefficients, and prediction confidence) varied relatively widely, compared to the other UQ models, based on the learning rate used by SWAG while estimating the covariance matrix. This suggests that it is critical to carefully optimize the learning rate used when sampling the weights with SWAG. This is expected, as the distance traveled in the loss space is directly proportional to the learning rate, in turn directly affecting the sampled weights that moderate the estimated variance/covariance matrix, as also described in the original manuscript \cite{Maddox.2019}. For the MC dropout models we saw a general trend of decreasing performance as the dropout rate increases. In this work we provide two reasonable approaches to estimate the covariate coefficient based on the model gradient.
The $ave(\frac{\partial y}{\partial x})$ approach samples the covariate coefficient across the feature space rather than at a single point, which may lead to a better approximation. However, it is perhaps more susceptible to outliers, or to biasing the covariate coefficient toward subjects that lie in low-density areas of covariate space. The $\frac{\partial y}{\partial \Bar{x}}$ approach provides the gradient at the center of the feature space, where there is most support, but is less flexible in accounting for large variations of the gradient over the input space. We acknowledge the advantages of both approaches and provide the results for $\frac{\partial y}{\partial \Bar{x}}$ in the supplemental material. Lastly, our work has the following limitations. To provide uncertainty quantification, we employ methods that model epistemic uncertainty, i.e., the uncertainty in the model weights given the data $p(w|D)$, rather than aleatoric uncertainty. Future work that models both epistemic and aleatoric uncertainty may increase the accuracy of the uncertainty quantification. Additionally, this work does not estimate uncertainty from sources of variation not included in the experiment (e.g., additional unseen sites), and thus the uncertainties are lower bounds. However, since our AD experiment includes 34 unseen sites, we would argue that the random effect is sufficiently sampled that the upper and lower bounds on uncertainty are likely similar. This may not be true for datasets with significantly fewer groupings in the random effects. Another limitation of this work is the application to a single dataset. While ADNI provides an ideal dataset to test the UQ-ARMED models on due to the large number of sites, future work should aim to compare these results on other datasets with known random effects.
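The two gradient-based covariate-coefficient estimators discussed above can be sketched with central finite differences. This is an illustrative sketch only: \texttt{predict\_fn} is a hypothetical model mapping an $(n, d)$ feature array to $(n,)$ predictions, not our ARMED implementation.

```python
import numpy as np

def covariate_coefficients(predict_fn, X, eps=1e-4):
    """Finite-difference sketch of the two estimators:
    ave(dy/dx), averaged over all subjects, and dy/d(x-bar),
    evaluated once at the centre of feature space."""
    n, d = X.shape
    grads = np.zeros((n, d))
    grad_at_mean = np.zeros(d)
    xbar = X.mean(axis=0, keepdims=True)
    for j in range(d):
        step = np.zeros(d)
        step[j] = eps
        # gradient at every subject, later averaged over the sample
        grads[:, j] = (predict_fn(X + step) - predict_fn(X - step)) / (2 * eps)
        # gradient at the mean of the features
        grad_at_mean[j] = (predict_fn(xbar + step)
                           - predict_fn(xbar - step))[0] / (2 * eps)
    return grads.mean(axis=0), grad_at_mean
```

For a linear model the two estimators coincide with the true coefficients; they differ when the gradient varies over the input space, which is exactly the trade-off discussed above.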
\chapter{Conclusions} This work demonstrates the ability to produce readily interpretable statistical metrics for model fit, fixed effects covariate coefficients, and prediction confidence. Importantly, this work compares four suitable and commonly applied epistemic UQ approaches (BNN, SWAG, MC dropout, and ensemble approaches) in their ability to calculate these statistical metrics for the ARMED MEDL models. In our experiment, not only do the UQ methods provide these benefits, but several UQ methods maintain the high performance of the original ARMED method, and some even provide a modest (but not statistically significant) performance improvement. The ensemble models, especially the ensemble method with 90\% subsampling, performed well across all metrics we tested, with (1) high performance comparable to the non-UQ ARMED model, (2) proper deweighting of the confound probes, assigning them statistically insignificant p-values, and (3) relatively high calibration of the output prediction confidence. The MC dropout models showed the lowest performance and failed to provide non-statistically significant fixed effects covariate coefficients. The SWAG models' performance was dependent on the learning rate; specifically, the models with a low learning rate underestimated the fixed effect covariate coefficient uncertainty, providing very small standard errors and hence highly significant p-values for all covariate coefficients, including the synthetic probes. Lastly, the BNNs also performed reasonably well, showing good model performance and nearly providing statistically insignificant p-values for the synthetic probe covariate coefficients (depending on the chosen cutoff), but they did not perform as well as the ensemble approaches for either covariate coefficient statistical significance or model prediction confidence.
The largest potential downside to the ensemble approaches is the increased training time; however, as discussed in the results, a balance between wall-clock time and available computational resources could be achieved through parallelization. Additionally, in many instances a model's inference time is more important than its training time, and here the ensemble approaches are tied as the fastest of the UQ methods. Based on these results, the ensemble approaches, especially with a subsampling of 90\%, provided the best all-round performance for prediction and uncertainty estimation, and achieved our goals of providing statistical significance for model fit, statistical significance for covariate coefficients, and confidence in prediction, while maintaining the baseline performance of MEDL using ARMED.
\section{Introduction} \label{sec:intro} Studies have shown that corrosion costs amount to 3.4\% of global GDP \cite{Koch2017}. Common corrosion control measures, such as permanent anti-corrosion coatings or more accurate corrosion detection methods, could reduce the overall cost of corrosion. There are several methods for manipulating carbon steel samples to improve corrosion resistance. These methods include: using different alloys, applying organic coatings, powder coating the metal, and electrogalvanizing. Hot-dip galvanizing involves immersing iron or steel in a bath of molten zinc to produce a corrosion-resistant, multilayer coating of zinc-iron alloy and zinc metal. While the steel is immersed in the zinc, a metallurgical reaction occurs between the iron in the steel and the molten zinc. The zinc coating thus produced protects the steel in several ways: \begin{itemize}[noitemsep,topsep=0pt] \item The zinc coating deposited acts as a sacrificial anode. In normal environments that are corrosive to iron, the anode is attacked first, rather than the iron, which acts as a cathode. \item The zinc coating forms a highly resistant barrier that separates the iron from harmful, corrosive environments. \item When the zinc layer corrodes, it forms a protective layer of zinc carbonate. This additional layer covers the sample with a strongly adherent and mechanically resistant layer. \end{itemize} There are several methods to analyze the corrosion resistance of zinc coatings: Electrochemical Impedance Spectroscopy (EIS)\cite{Wijesinghe2017}, Scanning Vibrating Electrode Technique (SVET)\cite{Wijesinghe2017}, Fourier Transform Infrared Spectroscopy (FTIR)\cite{Kasperek1998}, Atomic Force Microscopy (AFM)\cite{Klassen2001} and Scanning Electron Microscopy (SEM)\cite{Klassen2001}. However, these methods are limited to single-point measurements (FTIR, SVET), are destructive (EIS), or are suitable only for laboratory use (AFM, SEM).
Visual spectrum imaging \cite{Miyachi2021, DeKerf2021} mitigates the above drawbacks, but since the corrosion products of zinc usually have a white hue, it is difficult to achieve satisfactory detection accuracy. Hyperspectral imaging allows us to combine a chemical measurement approach with a non-contact measurement and image a large field of view. Identifying the corrosion minerals present in the sample can provide valuable information about the corrosion degradation process \cite{DeKerf2022}. This nondestructive evaluation approach could provide a rapid and noninvasive way to further investigate the corrosion process in electrogalvanized coatings. \section{Materials and Methods} \label{sec:matmet} \subsection{Sample preparation} Ten carbon steel samples with a protective electroplated zinc coating (DX51D-Z275, according to the standard BS EN 10346:2015) were exposed to a salt spray test chamber to accelerate corrosion growth. The samples have dimensions of 150 mm x 50 mm x 1 mm. A 2 mm x 20 mm slot was milled into these samples to expose the bare metal. The salt spray test was performed according to the ISO 9227 standard and each sample was subjected to a different exposure time: 24, 48, 72, 96, 168, 240, 336, 408, 504 and 672 hours. Each sample was rinsed with demineralised water and air dried before the hyperspectral and FTIR measurements. Samples one, five and ten can be seen in Figure \ref{fig:rgbsamples}.
\begin{figure}[h!tpb] \begin{minipage}{0.32\linewidth} \centering \includegraphics[width=\linewidth,height = 8cm]{images/1.jpg} \label{fig:s1_rgb} \centerline{(a) Sample one} \end{minipage}\hfill \begin{minipage}{0.32\linewidth} \centering \includegraphics[width=\linewidth,height = 8cm]{images/5.jpg} \label{fig:s5_rgb} \centerline{(b) Sample five} \end{minipage}\hfill \begin{minipage}{0.32\linewidth} \centering \includegraphics[width=\linewidth,height = 8cm]{images/10.jpg} \centerline{(c) Sample ten} \end{minipage} \caption{RGB images of three different samples shown with different salt spray exposure times: (a) Sample one: 24 hours, (b) Sample five: 168 hours and (c) Sample ten: 672 hours} \label{fig:rgbsamples} \end{figure} \subsection{Microscope FTIR measurements} FTIR measurements were performed using a BRUKER Lumos FTIR microscope. Reflectance spectra were recorded in the range from 600 $cm^{-1}$ to 4000 $cm^{-1}$. For each measurement, 64 spectra were recorded and averaged. The resolution of the obtained spectra is 4 $cm^{-1}$. For each location, a cluster of four to six points in close proximity to each other was measured individually. The motorized translation stage records the location of each measurement, and an RGB image was taken with the built-in microscope (8x optical zoom). The instrument is calibrated with a stainless steel reference sample. A total of 188 individual points in 36 clusters were measured. The spectra were smoothed and baseline correction was performed using the Spectragryph software \cite{Spectragryph}. \subsection{Hyperspectral measurements} Hyperspectral measurements were performed using a push-broom shortwave infrared hyperspectral imaging system. This setup consists of a SWIR camera (SPECIM FX17) mounted above a translation stage (SPECIM LabScanner) with adjustable scanning speed. During acquisition, the samples move while the camera is stationary.
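The conversion from raw sensor counts to reflectance with a white (Spectralon) and dark reference is the standard flat-field correction; a minimal numpy sketch is given below. The array shapes are an assumption for illustration (lines $\times$ pixels $\times$ bands for the scan, pixels $\times$ bands for the averaged reference frames).

```python
import numpy as np

def calibrate_reflectance(raw, white, dark):
    """Standard flat-field correction for a push-broom line scanner:
    R = (raw - dark) / (white - dark), per pixel and per band.
    raw: (lines, pixels, bands); white, dark: (pixels, bands)."""
    denom = np.clip(white.astype(float) - dark, 1e-9, None)
    refl = (raw.astype(float) - dark) / denom
    return np.clip(refl, 0.0, 1.0)
```

Per-spectrum smoothing, as described in the classification section, can then be applied to the calibrated cube, e.g. with a Savitzky--Golay filter such as \texttt{scipy.signal.savgol\_filter(spectrum, 25, 2)}.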
The camera can acquire a maximum of 224 bands in the range of 900 nm to 1700 nm for 640 pixels simultaneously. A spatial resolution of 0.17 mm/pixel was obtained. For each measurement, a white reference (with a Spectralon tile) and a dark reference (with the shutter closed) were recorded. Using these reference measurements, we can calculate the calibrated hyperspectral reflectance image. \subsection{Hyperspectral classification} To correlate the hyperspectral and FTIR measurements, we use a two-step process. First, from the FTIR measurements, we identify the most abundant mineral based on spectral features as described in the literature \cite{Kasperek1998,Lebrini2009,Zhu2001,Winiarski2018}. These corrosion minerals are identified as \ch{ZnO} (zincite/zinc oxide), \ch{Zn_{5}(OH)_{8}Cl_{2}.H_{2}O} (simonkolleite), \ch{ZnCO_{3}} (smithsonite) and \ch{Zn_{5}(CO_{3})_{2}(OH)_{6}} (marionite/hydrozincite). This yields locations on the sample where each identified mineral is present. Second, using these locations, we can obtain the ground truth spectra in the SWIR range. The SWIR spectra are then used to classify the entire image. A Savitzky--Golay filter with a window size of 25 points and a second-order polynomial is applied to each spectrum to smooth out irregularities. To minimize noise, the first and last 10 bands were excluded from the analysis, as the sensor sensitivity is lower in that range. To create a classification map of the entire sample surface, the spectral angle mapping (SAM)\cite{kruse1993spectral} algorithm is used. This algorithm measures how closely the measured spectra correlate with the reference spectra. Each pixel is compared to the various reference spectra in the five categories (four minerals and the spectrum of uncorroded galvanized steel), resulting in a spectral angle for each of the categories for a single measured spectrum.
When applied to the entire image, this results in a classification map in which each pixel is labeled. If none of the spectral angles for a pixel falls below a certain threshold, the pixel is classified as Unknown. \section{Results} \label{sec:pagestyle} \subsection{Identifying corrosion minerals using microscopy FTIR} The spectra found for each corrosion mineral are shown in Figure \ref{fig:FTIR_spectra}. These spectra are identified by comparing the distinct peaks in the reference spectra with the measured FTIR spectra. Several spectra showed a mixture of two or more minerals; this is evident from the appearance of features of different minerals in a single measured spectrum. These mixed spectra were discarded, and only the spectra showing pure minerals were used. \begin{figure}[h!tpb] \begin{minipage}{\linewidth} \centering \includegraphics[width=\linewidth]{images/FTIRplot.png} \label{fig:FTIR_spectra} \centerline{(a)} \end{minipage}\hfill \begin{minipage}{\linewidth} \centering \includegraphics[width=\linewidth]{images/FTIR_rgb.png} \label{fig:ftirrgb} \centerline{(b)} \end{minipage}\hfill % \caption{(a) FTIR spectra of the different corrosion minerals (b) Zoomed RGB image of the microscope FTIR of sample 10. The red dots indicate a single cluster with six separate measurements. The average spectrum of these six points is identified as marionite.} \label{fig:FTIRcombo} \end{figure} \subsection{Hyperspectral classification} From the previous section, we determined the locations of the pure minerals using the FTIR measurements on the microscope. The SWIR spectra of the various corrosion minerals can be seen in Figure \ref{fig:hsiSpec}. Several observations can be made from this graph. First, the spectra of marionite and uncorroded electrogalvanized steel are very similar. This can lead to difficulties in distinguishing between these two categories.
Second, the spectrum of zincite shows a very distinct low point in the 1440 nm region, while the other spectra do not show this feature. Third, both smithsonite and simonkolleite show very few distinct spectral features. However, their slopes and total reflectance values are different. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{images/HSIspectra.png} \caption{The SWIR spectra of the different corrosion minerals.} \label{fig:hsiSpec} \end{figure} Looking at the classification maps for samples one, five, and ten, seen in Figure \ref{fig:amapsples}, we see a gradual increase in the number of mineral types present. In sample one, there are only two major components: uncorroded galvanized steel and marionite. The mineral marionite is the first corrosion product to have formed, especially around the center cut and around the edges. Sample five still contains a large amount of uncorroded electrogalvanized steel, but the amount of marionite is greater and, in addition, simonkolleite has formed around the edges. Sample ten appears to be completely corroded, as there is little evidence of uncorroded galvanized steel. Simonkolleite has formed around the edges and the central cut area. The central portion contains mainly marionite, with small patches of zincite occurring at the interface between marionite and simonkolleite. Small amounts of zinc oxide also form, especially around the areas containing zincite. The other seven samples were also processed, and a summary of the classified mineral compositions can be found in Table \ref{tab:results}. Note that the position of the sample in the salt spray chamber or small impurities in the zinc coating could have an effect on the formation of the various corrosion minerals. Therefore, the mineral abundances do not always increase linearly. However, it is possible to see the overall trend for each mineral.
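The SAM classification described in the Methods can be sketched in a few lines of numpy. This is an illustrative sketch, not our processing code; one common convention is used for the Unknown class, namely labelling a pixel Unknown when even its best-matching reference exceeds the angle threshold.

```python
import numpy as np

def spectral_angles(image, refs):
    """image: (h, w, bands); refs: (k, bands).
    Returns the (h, w, k) array of spectral angles in radians."""
    flat = image.reshape(-1, image.shape[-1]).astype(float)
    num = flat @ refs.T  # dot product of each pixel with each reference
    denom = (np.linalg.norm(flat, axis=1, keepdims=True)
             * np.linalg.norm(refs, axis=1))
    cos = np.clip(num / np.clip(denom, 1e-12, None), -1.0, 1.0)
    return np.arccos(cos).reshape(*image.shape[:-1], refs.shape[0])

def classify_sam(image, refs, threshold=0.15):
    """Label each pixel with the reference of smallest angle;
    -1 (Unknown) when no angle is below the threshold."""
    angles = spectral_angles(image, refs)
    labels = angles.argmin(axis=-1)
    labels[angles.min(axis=-1) > threshold] = -1
    return labels
```

Because the spectral angle depends only on the direction of the spectrum, not its magnitude, SAM is relatively insensitive to illumination differences across the sample, which suits the push-broom geometry used here.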
\begin{figure}[h!tpb] \begin{minipage}{\linewidth} \centering \includegraphics[width=\linewidth]{images/legend.png} \end{minipage} \begin{minipage}{0.32\linewidth} \centering \includegraphics[width=\linewidth,height = 8cm]{images/cropamapS1.png} \label{fig:amaps1} \centerline{(a)} \end{minipage}\hfill \begin{minipage}{0.32\linewidth} \centering \includegraphics[width=\linewidth,height = 8cm]{images/cropamapS5.png} \label{fig:amaps5} \centerline{(b)} \end{minipage}\hfill \begin{minipage}{0.32\linewidth} \centering \includegraphics[width=\linewidth,height = 8cm]{images/cropamapS10.png} \label{fig:amaps10} \centerline{(c)} \end{minipage}% \caption{Classification maps of three samples with different salt spray exposure times: (a) Sample one: 24 hours, (b) Sample five: 168 hours and (c) Sample ten: 672 hours} \label{fig:amapsples} \end{figure} \begin{table*}[h!tpb] \centering \begin{tabular}{cccccc} \multicolumn{1}{l}{} & \multicolumn{5}{c}{\textbf{Mineral classification (\%)}} \\ \textbf{\begin{tabular}[c]{@{}c@{}}Sample \\ number\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Uncorroded \\ Galvanised \\ Steel\end{tabular} & Simonkolleite & Smithsonite & Zincite & Marionite \\ 1 & 91.08 & 0.02 & 0.01 & 0.05 & 8.84 \\ 2 & 80.76 & 0.04 & 0.19 & 0.02 & 18.97 \\ 3 & 76.55 & 0.29 & 0.29 & 0.05 & 22.80 \\ 4 & 81.23 & 0.20 & 0.20 & 0.05 & 18.31 \\ 5 & 64.24 & 0.77 & 0.69 & 0.18 & 34.09 \\ 6 & 48.79 & 7.76 & 0.58 & 0.29 & 41.27 \\ 7 & 29.51 & 12.20 & 1.35 & 1.10 & 53.66 \\ 8 & 9.83 & 20.32 & 4.58 & 1.49 & 63.58 \\ 9 & 2.05 & 35.80 & 6.05 & 1.88 & 54.13 \\ 10 & 1.85 & 35.69 & 14.41 & 2.41 & 45.26 \end{tabular} \caption{The classification results, shown for each category and per sample.} \label{tab:results} \end{table*} \section{Conclusion} This article presents an alternative chemical imaging method for the detection of corrosion products on zinc electroplated steel by using hyperspectral imaging.
An FTIR microscope was used to identify the locations of the pure corrosion minerals (through their spectra). Based on these locations, the spectra of the specific minerals can be found in the SWIR region. Using the SAM algorithm, we calculate classification maps for each sample with the identified corrosion minerals. This method for identifying corrosion on zinc electroplated steel is faster than other chemical techniques such as FTIR, XRD or SEM. Due to the line scan method, we obtain 640 spectra at once, a remarkable speed-up compared to the roughly 20 seconds required for a single FTIR measurement. This technology is also applicable outside laboratory conditions. Compared to RGB cameras, this method removes the visual ambiguity between corrosion minerals that appear quite similar in the visible spectrum but are more distinct in the SWIR spectrum. Future work could include the use of other techniques, such as XRD measurements or SEM, to validate the results obtained in this work. \section{Acknowledgements} We thank Yanou Fishel for assistance with microscopy FTIR measurements, and Dries Van Hoegaerden for assistance in creating samples and HSI measurements. \vfill \pagebreak \bibliographystyle{IEEEbib}
\section{Introduction} \label{sec:introduction} The theory of motivic derived algebraic geometry is an enhancement of derived algebraic geometry in the direction of $\mathbb{A}^1$-homotopy theory. In this theory, we define motivic versions of the $\infty$-categories, $\infty$-topoi and classifying $\infty$-topoi defined by Lurie in \cite{HT} and \cite{DAG5}. We explain the theory of motivic derived algebraic geometry in Section~\ref{sec:MDAG}, and prove an analogue of the existence of the spectrum functor (see \cite[Theorem 2.1.1]{DAG5}). In this paper, we introduce the theory of motivic model categories in order to establish the theory of motivic derived algebraic geometry. For any left proper combinatorial simplicial model category $\mathbf{M}$, the motivic model category $\mathrm{Mot}(\mathbf{M})$ is defined as a generalization of the model category of motivic spaces $\mathbf{MS}$ introduced by Morel--Voevodsky~\cite{MV}. In the case that $\mathbf{M}$ is the category of simplicial sets with the Kan--Quillen model structure~\cite{QuillenHomotopy}, the motivic model category coincides with the model category of motivic spaces. It is known by Jardine~\cite{Jardine} that the model category $\mathbf{MS}$ is a left proper combinatorial simplicial symmetric monoidal model category. By Dugger's representation theorem~\cite[Theorem 1.1]{DRep} and the theory of $\infty$-categories~\cite{HT}, we can prove that the $2$-category $\mathrm{Model}^{\rm lpc}_{\Delta}$ of left proper combinatorial simplicial model categories has a model structure induced by the monoidal structure on the model category $\mathrm{Cat}_{\Delta}$ of simplicial categories. Here the model structure on $\mathrm{Cat}_{\Delta}$ is the Dwyer--Kan model structure introduced by Bergner~\cite{MR2276611}.
Furthermore, $\mathrm{Model}^{\rm lpc}_{\Delta}$ has a monoidal structure which is compatible with the monoidal structure on $\mathrm{Cat}_{\Delta}$. From this viewpoint, motivic model categories can be characterized as $\mathbf{MS}$-module objects of $\mathrm{Model}^{\rm lpc}_{\Delta}$ (see Theorem~\ref{Motuniv}). By using the theory of motivic model categories, we define motivic versions of the $\infty$-categories, $\infty$-bicategories and $\infty$-topoi introduced by Lurie~\cite{HT} and \cite{LG}. Roughly, these theories are obtained by replacing Kan complexes with motivic spaces. In Section~\ref{sec:MStUnst}, we introduce a motivic version of the theory of classifying $\infty$-topoi by using a motivic version of scaled straightening and unstraightening~\cite[p.114, Section 3.5]{LG} (Theorem~\ref{MStUnst}). The main theorem of this paper is Theorem~\ref{spec}, which asserts the existence of a motivic version of the spectrum functor $\mathrm{Spec}$. By using Theorem~\ref{spec}, we can formulate motivic versions of the spectral schemes and spectral Deligne--Mumford stacks of~\cite{DAG5} and \cite{DAG7}. This paper is organized as follows: In Section~\ref{sec:model}, we explain that the $2$-category of left proper combinatorial simplicial model categories has a canonical model structure. In Section~\ref{sec:Infbicat}, following~\cite{LG}, we recall the definition of $\infty$-bicategories and the scaled straightening and unstraightening theorem. These theories are applied to the construction of the theory of motivic derived algebraic geometry in Section~\ref{sec:MDAG}. In Section~\ref{sec:MMC}, we introduce the theory of motivic model categories, which is a generalization of the theory of motivic spaces introduced by Morel--Voevodsky~\cite{MV}.
In Section~\ref{sec:MDAG}, we introduce the theory of motivic derived algebraic geometry by combining the theory of motivic spaces (see \cite{MV} and \cite{Jardine}) with the theory of derived algebraic geometry of \cite{DAG5} and \cite{DAG7}: We define motivic $\infty$-spaces, motivic $\infty$-categories, motivic $\infty$-topoi and motivic classifying $\infty$-topoi. In the final part of this paper, we prove the main theorem. \section{The model structure on the $2$-category of left proper combinatorial simplicial model categories.} \subsection{The definition of combinatorial model categories.} \label{sec:model} In this section, our main objects are left proper combinatorial simplicial model categories. We recall the definition of combinatorial model categories. Dugger~\cite{DRep} proved that any combinatorial model category $\mathbf{M}$ has a small presentation $\mathrm{Rep}(\mathbf{M})$, which is a left proper combinatorial simplicial model category with a left Quillen equivalence $Re:\mathrm{Rep}(\mathbf{M}) \to \mathbf{M}$. By using the theory of $\infty$-categories \cite{Joyal} and \cite{HT}, we obtain a model structure on the $2$-category of left proper combinatorial model categories. First, we recall the definition of model categories. In this paper, we assume that every model category is locally presentable. \begin{definition} \label{Model} A {\it model category} is a locally presentable category $\mathbf{M}$ with a triple of subcategories $(\mathbf{W}_\mathbf{M},\,\mathbf{C}_\mathbf{M},\,\mathbf{F}_\mathbf{M})$, which satisfies the following axioms: \begin{itemize} \item[MC1] The category $\mathbf{M}$ is stable under all small limits and colimits. \item[MC2] The class $\mathbf{W}_\mathbf{M}$ has the $2$-out-of-$3$ property. \item[MC3] The three classes $\mathbf{W}_\mathbf{M},\,\mathbf{C}_\mathbf{M}$ and $\mathbf{F}_\mathbf{M}$ of morphisms contain all isomorphisms and are closed under all retracts.
\item[MC4] The class $\mathbf{F}_\mathbf{M}$ has the right lifting property with respect to all morphisms in the class $\mathbf{C}_\mathbf{M} \cap \mathbf{W}_\mathbf{M}$, and the class $\mathbf{F}_\mathbf{M} \cap \mathbf{W}_\mathbf{M}$ has the right lifting property with respect to all morphisms in the class $\mathbf{C}_\mathbf{M}$. \item[MC5] The couples $(\mathbf{C}_\mathbf{M}\cap \mathbf{W}_\mathbf{M} ,\, \mathbf{F}_\mathbf{M})$ and $(\mathbf{C}_\mathbf{M} ,\, \mathbf{F}_\mathbf{M} \cap \mathbf{W}_\mathbf{M})$ are functorial factorization systems. \end{itemize} A morphism in $\mathbf{W}_\mathbf{M}$, $\mathbf{C}_\mathbf{M}$ and $\mathbf{F}_\mathbf{M}$ is called a {\it weak equivalence}, a {\it cofibration} and a {\it fibration}, respectively. In addition, a morphism in the class $\mathbf{C}_\mathbf{M} \cap \mathbf{W}_\mathbf{M}$ and $\mathbf{F}_\mathbf{M} \cap \mathbf{W}_\mathbf{M}$ is called a {\it trivial cofibration} and a {\it trivial fibration}, respectively. \end{definition} Let $S$ be a collection of morphisms in a locally presentable category $\mathbf{M}$. Let ${}^{\boxslash}S$ denote the set of morphisms in $\mathbf{M}$ that have the right lifting property with respect to all morphisms of $S$. Similarly, we let $S^\boxslash$ denote the set of morphisms in $\mathbf{M}$ that have the left lifting property with respect to all morphisms of $S$. We say that the set $({}^\boxslash S)^\boxslash$ is the {\it weakly saturated class} of morphisms generated by $S$. \begin{definition} Let $\mathbf{M}$ be a model category. Let $\mathbf{W}_\mathbf{M}$ be the class of weak equivalences in $\mathbf{M}$ and $\mathbf{C}_\mathbf{M}$ the class of cofibrations in $\mathbf{M}$. We say that $\mathbf{M}$ is {\it combinatorial} if $\mathbf{M}$ has two sets $I$ and $J$ such that $\mathbf{C}_\mathbf{M}$ is the weakly saturated class of morphisms generated by $I$ and $\mathbf{C}_\mathbf{M} \cap \mathbf{W}_\mathbf{M}$ is the weakly saturated class of morphisms generated by $J$.
We say that a combinatorial model category $\mathbf{M}$ is {\it tractable} if the morphisms of $I$ can be chosen to have cofibrant domains. \end{definition} If $\mathbf{M}$ is a model category with the property that every object is cofibrant, then $\mathbf{M}$ is tractable. \begin{definition} Let $F:\mathbf{M} \rightleftarrows \mathbf{N} :G$ be an adjunction between model categories. The adjunction $F:\mathbf{M} \rightleftarrows \mathbf{N} :G$ is called a {\it Quillen adjunction} if $F$ preserves cofibrations and trivial cofibrations, or equivalently, if $G$ preserves fibrations and trivial fibrations. Then $F$ and $G$ are called a {\it left Quillen functor} and a {\it right Quillen functor}, respectively. Moreover, if the Quillen adjunction induces a categorical equivalence between the homotopy categories (see \cite[Chapter 1]{QuillenHomotopy}) of the model categories, then the Quillen adjunction is called a {\it Quillen equivalence}. Similarly, $F$ and $G$ are then called a {\it left Quillen equivalence} and a {\it right Quillen equivalence}, respectively. \end{definition} \subsection{The model structure on the $2$-category of left proper combinatorial model categories.} Dugger proved that any combinatorial model category has a small presentation: \begin{theorem}[\cite{DRep} Theorem 1.1] \label{Drep} Let $\mathbf{M}$ be a combinatorial model category. Then there exists a small category $\mathcal{C}$ and a left Quillen functor $R: \mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}} \to \mathbf{M}$ such that $\mathbf{M}$ is Quillen equivalent to a Bousfield localization of $\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}}$, where the model structure on $\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}}$ is the projective model structure. \qed \end{theorem} In the proof of \cite[Theorem 1.1]{DRep}, the small category $\mathcal{C}$ is the full subcategory $\mathbf{M}_\lambda^{\rm cof}$ of $\lambda$-compact cofibrant objects of $\mathbf{M}$ for some regular cardinal $\lambda$ (see \cite[Sections 5 and 6]{DRep}).
Hence every combinatorial model category is Quillen equivalent to some left proper simplicial combinatorial model category. Furthermore, if the model category $\mathbf{M}$ is symmetric monoidal, then the small presentation of $\mathbf{M}$ is also symmetric monoidal and the Bousfield localization is symmetric monoidal. Let $\mathfrak{C}: \mathrm{Set}_{\Delta} \rightleftarrows \mathrm{Cat}_{\Delta}: N_\Delta$ denote the Quillen adjunction which is a Quillen equivalence between left proper combinatorial model categories (see \cite[p.89, Theorem 2.2.5.1]{HT}). The model structure on the category $\mathrm{Set}_{\Delta}$ of simplicial sets is the Joyal~\cite{Joyal} model structure whose fibrant objects are $\infty$-categories. The model structure on $\mathrm{Cat}_{\Delta}$ is the Dwyer--Kan model structure introduced by Bergner~\cite{MR2276611}. Let $\mathbf{M}$ be a simplicial model category and $\mathbf{M}^\circ$ the full subcategory spanned by the fibrant-cofibrant objects. Then $\mathbf{M}^\circ$ is a fibrant object of the model category $\mathrm{Cat}_{\Delta}$. Therefore the simplicial model category $\mathbf{M}$ determines an $\infty$-category $N_\Delta ( \mathbf{M}^\circ)$. We call the $\infty$-category $N_\Delta(\mathbf{M}^{\circ})$ the {\it underlying $\infty$-category} of $\mathbf{M}$. The following lemma gives a correspondence between locally presentable $\infty$-categories and left proper combinatorial simplicial model categories. For any left proper combinatorial model category $\mathbf{M}$, we let $\mathrm{Rep}(\mathbf{M})$ denote the small presentation obtained by applying Theorem~\ref{Drep} to $\mathbf{M}$. \begin{lemma}[\rm{cf.}\cite{HT} p.906, Remark A.3.7.7] \label{Lemma} Let $F: \mathbf{M} \to \mathbf{N}$ be a left Quillen functor of left proper combinatorial simplicial model categories with the right adjoint $G: \mathbf{N} \to \mathbf{M}$.
Then $F$ is a left Quillen equivalence if and only if it induces an equivalence $N_\Delta(\mathbf{M}^\circ) \to N_\Delta(\mathbf{N}^{\circ })$ of $\infty$-categories. \end{lemma} \begin{proof} Let $\lambda$ be a regular cardinal such that the cofibrations and trivial cofibrations of the model categories $\mathbf{M}$ and $\mathbf{N}$ are generated by $\lambda$-filtered colimits. Let $\mathbf{M}_\lambda^{\rm cof}$ denote the full subcategory of $\mathbf{M}$ whose objects are $\lambda$-compact cofibrant objects. Write $\mathcal{C}=\mathbf{M}_\lambda^{\rm cof}$. Then there exists a homotopically surjective left Quillen functor $R:\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}} \to \mathbf{M}$. Let $\mathrm{Rep}(\mathbf{M})$ denote the Bousfield localization of $\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}}$ which is Quillen equivalent to $\mathbf{M}$, and let $Re: \mathrm{Rep}(\mathbf{M}) \to \mathbf{M}$ denote the induced left Quillen equivalence. Suppose that $F$ is a left Quillen equivalence. Then $F \circ Re$ and $Re$ are left Quillen equivalences from $\mathrm{Rep}(\mathbf{M})$ to $\mathbf{N}$ and $\mathbf{M}$, respectively. Note that $\mathrm{Rep}(\mathbf{M})$ satisfies the condition of \cite[p.849, Proposition 3.1.10]{HT}. Hence we have equivalences of $\infty$-categories $N_\Delta(\mathrm{Rep}(\mathbf{M})^{\circ}) \simeq N_\Delta ( \mathbf{M}^{\circ} )$ and $ N_\Delta(\mathrm{Rep}(\mathbf{M})^{\circ}) \simeq N_\Delta ( \mathbf{N}^{\circ} )$ by~\cite[p.849, Proposition 3.1.10]{HT}. Combining these, $F$ induces an equivalence $N_\Delta(\mathbf{M}^{\circ}) \simeq N_\Delta(\mathbf{N}^{\circ})$. Conversely, assume that $F$ induces an equivalence $ N_\Delta (\mathbf{M}^{\circ} ) \simeq N_\Delta ( \mathbf{N}^{\circ})$ of $\infty$-categories. Then $F \circ Re$ induces an equivalence $N_\Delta(\mathrm{Rep}( \mathbf{M})^{\circ}) \simeq N_\Delta ( \mathbf{N}^{\circ} )$ of $\infty$-categories. By the converse implication of \cite[p.849, Proposition 3.1.10]{HT}, $F \circ Re$ is a left Quillen equivalence. Hence the left Quillen functor $F: \mathbf{M} \to \mathbf{N}$ is homotopically surjective.
By the constructions of the Bousfield localization $\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}} \to \mathrm{Rep}(\mathbf{M})$ and the Quillen equivalence $Re :\mathrm{Rep}(\mathbf{M}) \to \mathbf{M}$, we obtain that $F$ is a left Quillen equivalence. \end{proof} \begin{definition} Consider an adjunction between locally presentable categories \[ F: \mathbf{M} \rightleftarrows \mathbf{C} : G \] where $\mathbf{M}$ is a model category. We define a model structure on $\mathbf{C}$ as follows: \begin{itemize} \item[(F)] A morphism $f:X \to Y$ in $\mathbf{C}$ is a fibration if $G(f):G(X ) \to G(Y) $ is a fibration in the model category $\mathbf{M}$. \item[(W)] A morphism $f:X \to Y$ in $\mathbf{C}$ is a weak equivalence if $G(f) : G(X) \to G(Y)$ is a weak equivalence in the model category $\mathbf{M}$. \item[(WF)] A morphism $f:X \to Y$ in $\mathbf{C}$ is a trivial fibration if $G(f) : G(X) \to G(Y)$ is a trivial fibration in the model category $\mathbf{M}$. \item[(C)] A morphism $f:X \to Y$ in $\mathbf{C}$ is a cofibration if it has the left lifting property with respect to all trivial fibrations. \end{itemize} It is easily checked that $\mathbf{C}$ is a model category. We say that the model structure on $\mathbf{C}$ is the {\it projective model structure} induced by $F$. \end{definition} \begin{example} Let $\mathcal{C}$ be a category and $\mathbf{M}$ a model category. The diagonal functor $D: \mathbf{M} \to \mathbf{M}^{\mathcal{C}}$ has a right adjoint $\prod_{\mathcal{C}}$. The resulting model structure on $\mathbf{M}^{\mathcal{C}}$ is said to be induced by $\mathbf{M}$. \end{example} The $2$-category $\mathrm{Model}^{\rm lpc}_{\Delta}$ has a monoidal structure (it is not a monoidal model structure) which is compatible with the monoidal structure on $\mathrm{Pr}^{\rm L}_{\Delta}$.
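For instance, a cofibration in $\mathbf{C}$ is characterized by a lifting property against trivial fibrations: a morphism $f: A \to B$ is a cofibration exactly when every commutative square
\[
\xymatrix{
A \ar[d]_{f} \ar[r] & X \ar[d]^{p} \\
B \ar[r] \ar@{-->}[ur] & Y
}
\]
in which $p$ is a trivial fibration (that is, $G(p)$ is a trivial fibration in $\mathbf{M}$) admits a dashed diagonal lift.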
The mapping objects of this monoidal structure are defined as follows: For any two left proper simplicial combinatorial model categories $\mathbf{M}$ and $\mathbf{N}$, the mapping model category $\mathrm{Map}_{\mathrm{Model}^{\rm lpc}_{\Delta}}(\mathbf{M},\,\mathbf{N})$ is the category $\mathbf{N}^{\mathbf{M}}$ with the projective model structure induced by $\mathbf{N}$. By \cite[p.829, Proposition A.2.8.2 and p.831 Remark A.2.8.4]{HT}, it is known that $\mathbf{N}^{\mathbf{M}}$ is left proper combinatorial. Let $-\otimes-$ denote the monoidal structure on $\mathrm{Model}^{\rm lpc}_{\Delta}$. Then the model category $\mathrm{Set}_{\Delta}$ is the unit object of $\mathrm{Model}^{\rm lpc}_{\Delta}$, where the model structure on $\mathrm{Set}_{\Delta}$ is the Kan--Quillen model structure~\cite{QuillenHomotopy}. Dugger~\cite{UnivHomotopy} proved that every functor $F: \mathcal{C} \to \mathbf{M} $ from a category to a model category factors through the model category $\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}}$, whose model structure is the projective model structure induced by $\mathrm{Set}_{\Delta}$. Furthermore, the left Kan extension $F^+:\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}} \to \mathbf{M}$ of $F$ is a left Quillen functor. Moreover, if $\mathbf{M}$ is simplicial then the left Quillen functor $F^+$ is simplicial. Write $U(\mathcal{C})=\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}}$. Then $U(\mathcal{C})$ is called the {\it universal model category} of $\mathcal{C}$ and $U: \Cat{} \to \mathrm{Model}^{\rm lpc}_{\Delta}$ is called the {\it universal model category functor}. The class of left Quillen equivalences of left proper combinatorial simplicial model categories coincides with the class of colimit preserving functors which induce equivalences of the underlying presentable $\infty$-categories.
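Dugger's factorization can be pictured as a triangle, commutative up to weak equivalence:
\[
\xymatrix{
\mathcal{C} \ar[r] \ar[dr]_{F} & U(\mathcal{C})=\mathrm{Set}_{\Delta}^{\mathcal{C}^{\rm op}} \ar[d]^{F^+} \\
& \mathbf{M}
}
\]
where the horizontal functor sends an object $c$ of $\mathcal{C}$ to the representable presheaf on $c$, regarded as a presheaf of discrete simplicial sets.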
Lurie's theory of presentable $\infty$-categories and Dugger's representable theorem yield the following adjunction: \begin{theorem} \label{Dugger--Lurie} Let $\mathrm{Cat}_{\Delta}$ denote the category of simplicial categories and $\mathrm{Model}^{\rm lpc}_{\Delta}$ the category of left proper combinatorial simplicial model categories. Then the universal model category functor $U: \mathrm{Cat}_{\Delta} \to \mathrm{Model}^{\rm lpc}_{\Delta}$ induces an adjunction \[ \overline{U}: \mathrm{Pr}^{\rm L}_{\Delta} \rightleftarrows \mathrm{Model}^{\rm lpc}_{\Delta} :G, \] where $\mathrm{Pr}^{\rm L}_{\Delta}$ is the subcategory of $\mathrm{Cat}_{\Delta}$ of locally presentable simplicial categories whose functors are colimit preserving functors and $G$ sends simplicial model categories to the underlying simplicial categories. Then $\overline{U}$ induces a model structure on $\mathrm{Model}^{\rm lpc}_{\Delta}$ which is the projective model structure induced by $U$. Furthermore, $\overline{U}$ is a left Quillen equivalence. \end{theorem} \begin{proof} By using Theorem~\ref{Drep} and Lemma~\ref{Lemma}, we have that the left Quillen functor $\overline{U}:\mathrm{Pr}^{\rm L}_{\Delta} \to \mathrm{Model}^{\rm lpc}_{\Delta} $ is a left Quillen equivalence. \end{proof} \section{The definition of $\infty$-bicategories.} \label{sec:Infbicat} In this section, following~\cite{LG}, we explain the definition of $\infty$-bicategories by using the theory of scaled simplicial sets and $\mathrm{Set}_{\Delta}^+$-enriched categories. To establish motivic derived algebraic geometry, we use the $\infty$-bicategorical straightening and unstraightening theorem. \subsection{Scaled simplicial sets and $\mathrm{Set}_{\Delta}^+$-enriched categories.} \begin{definition}[\cite{LG} p.66, Definition 3.1.1] A {\it scaled simplicial set} $\overline{X}=(X,\,T)$ is a pair, where $X$ is a simplicial set and $T$ is a set of $2$-simplices of $X$ which contains all degenerate $2$-simplices of $X$.
We call the simplicial set $X$ the underlying simplicial set and call the elements of $T$ {\it thin}. Let $(X,\,T)$ and $(X',\,T')$ be scaled simplicial sets. A morphism from $(X,\,T)$ to $(X',\,T')$ is a map $f: X \to X'$ of simplicial sets which carries $T$ into $T'$. We let $\mathrm{Set}_{\Delta}^{\rm sc}$ denote the category of scaled simplicial sets. \end{definition} Let $\mathrm{Set}_{\Delta}^+$ denote the simplicial model category of marked simplicial sets. The model structure on $\mathrm{Set}_{\Delta}^+$ is the Cartesian model structure. Furthermore, $\mathrm{Set}_{\Delta}^+$ is a monoidal model category. Let $\Cat{\Delta}^+$ denote the category of $\mathrm{Set}_{\Delta}^+$-enriched categories. Then the category $\mathrm{Cat}_{\Delta}^+$ has a model structure which is induced by the Cartesian model structure on $\mathrm{Set}_{\Delta}^+$. We explain the induced model structure on $\mathrm{Cat}_{\Delta}^+$: Let $X$ be a marked simplicial set. Then we define a new $\mathrm{Set}_{\Delta}^+$-enriched category $[1]_X$ as follows: \begin{itemize} \item The category $[1]_X$ has exactly two objects $0$ and $1$. \item The marked simplicial set of morphisms $\mathrm{Hom}_{[1]_X}(x,\,y)$ is defined by the formula: \[ \mathrm{Hom}_{[1]_X}(x,\,y)= \begin{cases} \Delta^0_\flat \quad &(x=y), \\ X \quad &(x=0,\,y=1), \\ \emptyset \quad &(x=1,\,y=0). \end{cases} \] \end{itemize} \begin{definition}[\rm{c.f.} \cite{HT} p.856, Definition A.3.2.1] \label{MCat} Let $F:\mathcal{C} \to \mathcal{D}$ be a functor of $\mathrm{Set}_{\Delta}^+$-enriched categories. We say that the functor $F:\mathcal{C} \to \mathcal{D}$ is a weak equivalence if the following conditions are satisfied: \begin{enumerate} \item For any $X,\,Y \in \mathcal{C}$, the induced map \[ \mathrm{Map}_{\mathcal{C}} (X,\,Y) \to \mathrm{Map}_{\mathcal{D}}(F(X),\,F(Y)) \] is a Cartesian equivalence of marked simplicial sets. \item The functor $F$ induces an essentially surjective functor between their homotopy categories.
\end{enumerate} \end{definition} \begin{definition}[\cite{HT} p.857, Definition A.3.2.4] \label{DCat} The category $\mathrm{Cat}_{\Delta}^+$ is a model category defined by the following: \begin{itemize} \item[(W)] Weak equivalences are functors satisfying the conditions in Definition~\ref{MCat}. \item[(C)] Cofibrations are morphisms in the smallest weakly saturated class~\cite[p.783, Definition A.1.2.2]{HT} of morphisms containing the following collection of morphisms: \begin{itemize} \item The inclusion $\emptyset \to [0]$, where $\emptyset$ is the empty category and $[0]$ is the $\mathrm{Set}_{\Delta}^+$-enriched category with a single object whose endomorphism object is $\Delta^0_{\flat}$. \item The induced map $[1]_X \to [1]_Y$, where $X \to Y$ is a morphism which belongs to the weakly saturated class of cofibrations in $\mathrm{Set}_{\Delta}^+$. \end{itemize} \item[(F)] Fibrations are morphisms which have the right lifting property with respect to all morphisms satisfying both conditions (W) and (C). \end{itemize} \end{definition} Following~\cite[pp.69--70, Definition 3.1.10]{LG}, we define a functor $N^{\rm sc}: \mathrm{Cat}_{\Delta}^+ \to \mathrm{Set}_{\Delta}^{\rm sc}$ as follows: \begin{itemize} \item For $\mathcal{C} \in \mathrm{Cat}_{\Delta}^+$, the underlying simplicial set is the simplicial nerve $N_\Delta(\mathcal{C})$ of $\mathcal{C}$. \item Given a $2$-simplex $\sigma$ of $N_\Delta(\mathcal{C})$ corresponding to a (not necessarily commutative) diagram \[ \xymatrix@1{ & Y \ar[rd]^g & \\ X \ar[ur]^f \ar[rr]_h & & Z } \] in $\mathcal{C}$ together with an edge $\alpha: h \mapsto g \circ f$ of the marked simplicial set $\mathrm{Map}_\mathcal{C}(X,\,Z )$, we say that $\sigma$ is thin if $\alpha$ is a marked edge. \end{itemize} We call the functor $N^{\rm sc}: \mathrm{Cat}_{\Delta}^+ \to \mathrm{Set}_{\Delta}^{\rm sc}$ the {\it scaled nerve functor}.
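Throughout, the decorations $\flat$ and $\sharp$ have their standard meanings for markings: for a simplicial set $X$,
\[
X_\flat=(X,\,s_0(X_0)) \qquad \text{and} \qquad X_\sharp=(X,\,X_1)
\]
denote the marked simplicial sets in which only the degenerate edges, respectively all edges, are marked.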
A left adjoint functor $\mathfrak{C}^{\rm sc}:\mathrm{Set}_{\Delta}^{\rm sc} \to \mathrm{Cat}_{\Delta}^+$ of the scaled nerve functor is defined as follows: \begin{itemize} \item For any scaled simplicial set $\overline{S}=(S,\,T)$, the underlying simplicial category of $\mathfrak{C}^{\rm sc}[\overline{S} ]$ is the simplicial category $\mathfrak{C}[S]$. \item Given $x ,\,y \in S$, an edge $\alpha$ of the simplicial set $\mathrm{Map}_{\mathfrak{C}^{\rm sc}[\overline{S}]} (x,\,y)$ is a marked edge if there exist a sequence $x=x_0 \to x_1 \to \cdots \to x_n=y$ of vertices and a sequence of thin $2$-simplices \[ \xymatrix@1{ & y_i \ar[rd]^{g_i} & \\ x_{i-1} \ar[rr]_{h_i} \ar[ur]^{f_i} & & x_i} \] of $S$ such that $\alpha=\alpha_n \circ \cdots \circ \alpha_1$, where $\alpha_i : h_i \mapsto g_i \circ f_i$ is an edge of the simplicial set $\mathrm{Map}_{\mathfrak{C}[S]}(x_{i-1},\,x_i)$ for each $1 \le i \le n$. \end{itemize} Then the pair $(\mathfrak{C}^{\rm sc},\, N^{\rm sc})$ of functors determines an adjunction \[ \mathfrak{C}^{\rm sc}: \mathrm{Set}_{\Delta}^{\rm sc} \rightleftarrows \mathrm{Cat}_{\Delta}^+ :N^{\rm sc}. \] Moreover, it is known that the adjunction is a Quillen adjunction by \cite[p.71, Proposition 3.1.13]{LG}. The model category $\mathrm{Set}_{\Delta}^{\rm sc}$ of scaled simplicial sets is left proper combinatorial. By Dugger's representable theorem~\cite{DRep}, there exists a left proper combinatorial simplicial model category $\mathrm{Rep}({\mathrm{Set}_{\Delta}^{\rm sc}})$ such that the realization functor $Re:\mathrm{Rep}({\mathrm{Set}_{\Delta}^{\rm sc}}) \to \mathrm{Set}_{\Delta}^{\rm sc} $ is a left Quillen equivalence. \subsection{Scaled straightening and unstraightening.} We explain the definitions of the scaled straightening and unstraightening functors, following~\cite[Section 3]{LG}. Let $\overline{S}=(S,\,T)$ be a scaled simplicial set and $\mathcal{C}$ a $\mathrm{Set}_{\Delta}^+$-enriched category.
Given a functor $\phi: \mathfrak{C}^{\rm sc}[\overline{S}] \to \mathcal{C}$, we define the scaled straightening functor $\mathrm{St}_\phi^{\rm sc}:\Set{\Delta /\overline{S}}^+ \to (\mathrm{Set}_{\Delta}^+)^ \mathcal{C}$. \begin{definition}[\cite{LG} p.114, Definition 3.5.1] Let $\overline{X}=(X,\,M)$ be a marked simplicial set. Let $T$ be the collection of all $2$-simplices $\sigma$ of $X \times \Delta^1$ with the following properties: \begin{itemize} \item The image of $\sigma$ under the projection $X \times \Delta^1 \to X$ is a degenerate $2$-simplex of $X$. \item If the composite $\pi:\Delta^2 \overset{\sigma}\to X \times \Delta^1 \to \Delta^1 $ satisfies $\pi^{-1}(\{0\}) = \Delta^{0,\,1}$, then the image of the restriction $\sigma |_{\Delta^{0,\,1}}$ under the projection to $X$ is a marked edge of $X$. \end{itemize} We define a scaled simplicial set $C(\overline{X})$ by the following formula: \[ C(\overline{X}) = (X \times \Delta^1,\,T) \coprod_{ (X \times \{0\})_\flat} \{ v \}_\flat. \] We call $C( \overline{X})$ the {\it scaled cone} of $\overline{X}$. More generally, for any scaled simplicial set $\overline{S}$ and $\overline{X} \in (\mathrm{Set}_{\Delta}^{\rm sc})_{/\overline{S}}$, we set $C_{\overline{S}} (\overline{X}) = C(\overline{X}) \coprod_{(X \times \{1\})_\flat} \overline{S}$. We say that $C_{\overline{S}}(\overline{X} )$ is the {\it scaled cone} of $\overline{X}$ over $\overline{S}$. \end{definition} \begin{definition}[\cite{LG} p.115, Definition 3.5.4] Let $\overline{S}$ be a scaled simplicial set, $\mathcal{C}$ a $\mathrm{Set}_{\Delta}^+$-enriched category and $\phi: \mathfrak{C}^{\rm sc}[\overline{S}] \to \mathcal{C} $ a functor of $\mathrm{Set}_{\Delta}^+$-enriched categories.
We define the {\it scaled straightening functor} $\mathrm{St}_\phi^{\rm sc}: \Set{\Delta / \overline{S}}^+ \to (\mathrm{Set}_{\Delta}^+)^\mathcal{C}$ associated to $\phi$ by the formula \[ (\mathrm{St}^{\rm sc}_\phi(\overline{X})) (C)= \mathrm{Map}_{ \mathfrak{C}^{\rm sc}[C_{\overline{S}}(\overline{X})] \coprod_{\mathfrak{C}^{\rm sc}[\overline{S}]} \mathcal{C} } (v,\, C) \] for any $C \in \mathcal{C}$. \end{definition} \begin{remark} The straightening functor $\mathrm{St}^{\rm sc}_\phi:\Set{\Delta / \overline{S}}^+ \to (\mathrm{Set}_{\Delta}^+)^\mathcal{C}$ is defined as a $\mathrm{Set}_{\Delta}^+$-enriched categorical colimit. Let $f: X \to S$ be a marked simplicial set over $S$. Then the straightening $\mathrm{St}^{\rm sc}_\phi(\overline{X})$ is equivalent to the colimit of the diagram \[ j^{\rm op}\circ \phi \circ \mathfrak{C}^{\rm sc}[F]: \mathfrak{C}^{\rm sc} [ (X \times \Delta^1,\,T )\coprod_{X \times \{1\} }\overline{S} ] \to \mathfrak{C}^{\rm sc}[\overline{S}] \to \mathcal{C} \to (\mathrm{Set}_{\Delta}^+)^\mathcal{C}, \] where $F$ is the map induced by $f$ and $j: \mathcal{C} \to (\mathrm{Set}_{\Delta}^{+})^{\mathcal{C}^{\rm op}}$ denotes the enriched Yoneda embedding. \end{remark} Since the scaled straightening functor $\mathrm{St}^{\rm sc}_\phi$ preserves all small colimits, it admits a right adjoint by the adjoint functor theorem for locally presentable categories; we denote this right adjoint by $\mathrm{Un}^{\rm sc}_\phi$. We call the right adjoint $\mathrm{Un}_\phi^{\rm sc}$ the {\it scaled unstraightening functor} associated to $\phi$. It is known that the adjunction $(\mathrm{St}^{\rm sc}_\phi,\, \mathrm{Un}_\phi^{\rm sc} )$ is a Quillen adjunction. Here the model structure on $ \Set{\Delta / \overline{S}}^+$ is the locally coCartesian model structure~\cite[p.74, Example 3.2.9]{LG} and the model structure on $(\mathrm{Set}_{\Delta}^+)^{\mathcal{C}}$ is the projective model structure~\cite[pp.823--824, Definition A.2.8.1 and Proposition A.2.8.2]{HT}.
We recall the scaled straightening and unstraightening theorem: \begin{theorem}[\cite{LG} p.128, Theorem 3.8.1] \label{StUnst} Let $\overline{S}$ be a scaled simplicial set, $\mathcal{C}$ a $\mathrm{Set}_{\Delta}^+$-enriched category and $\phi: \mathfrak{C}^{\rm sc}[\overline{S}] \to \mathcal{C}$ a weak equivalence of $\mathrm{Set}_{\Delta}^+$-enriched categories. Then the Quillen adjunction \[ \mathrm{St}_\phi^{\rm sc}: \Set{\Delta / \overline{S}}^+ \rightleftarrows (\mathrm{Set}_{\Delta}^+)^{\mathcal{C}}:\mathrm{Un}_\phi^{\rm sc} \] is a Quillen equivalence. \qed \end{theorem} \subsection{Definition of $\infty$-bicategories.} We explain the definition of a model structure on $\mathrm{Set}_{\Delta}^{\rm sc}$ by using the model structure on $\mathrm{Cat}_{\Delta}^+$. \begin{definition}[\cite{LG} p.115, Definition 3.5.6] \label{bicat-eq} Let $f:\overline{X} \to \overline{Y}$ be a morphism of scaled simplicial sets. We say that the morphism $f:\overline{X} \to \overline{Y} $ is a {\it bicategorical equivalence} if the induced functor $\mathfrak{C}^{\rm sc}[f]: \mathfrak{C}^{\rm sc}[\overline{X}] \to \mathfrak{C}^{\rm sc}[\overline{Y}]$ is a weak equivalence of $\mathrm{Set}_{\Delta}^+$-enriched categories. \end{definition} \begin{theorem}[\cite{LG} p.143, Theorem 4.2.7] \label{Ibi} Let $\mathrm{Set}_{\Delta}^{\rm sc}$ denote the category of scaled simplicial sets. Then $\mathrm{Set}_{\Delta}^{\rm sc}$ has a left proper combinatorial model structure defined by the following: \begin{itemize} \item[(W)] The weak equivalences in $\mathrm{Set}_{\Delta}^{\rm sc}$ are bicategorical equivalences. \item[(C)] The cofibrations in $\mathrm{Set}_{\Delta}^{\rm sc}$ are monomorphisms. \item[(F)] The fibrations in $\mathrm{Set}_{\Delta}^{\rm sc}$ are morphisms which have the right lifting property with respect to all morphisms satisfying (W) and (C). 
\end{itemize} \qed \end{theorem} We say that the model structure on $\mathrm{Set}_{\Delta}^{\rm sc}$ in Theorem~\ref{Ibi} is the {\it scaled model structure}. \begin{definition}[\cite{LG} p.145, Definition 4.2.8] An $\infty$-bicategory is a fibrant object of $\mathrm{Set}_{\Delta}^{\rm sc}$ with respect to the scaled model structure. \end{definition} \begin{remark} The model category $\mathrm{Set}_{\Delta}^{\rm sc}$ is Cartesian closed, but I do not know whether it admits a simplicial model structure. Since $\mathrm{Set}_{\Delta}^{\rm sc}$ is left proper combinatorial and Cartesian closed, we have a canonical left Quillen equivalence $Re: \mathrm{Rep}( \mathrm{Set}_{\Delta}^{\rm sc}) \to \mathrm{Set}_{\Delta}^{\rm sc}$ such that $ \mathrm{Rep}( \mathrm{Set}_{\Delta}^{\rm sc}) $ is a left proper combinatorial simplicial monoidal model category. On the other hand, Lurie~\cite{LG} proved that there is a right Quillen equivalence $F: (\mathrm{Set}_{\Delta}^+)_{/N(\Delta^{\rm op})} \to \mathrm{Set}_{\Delta}^{\rm sc}$, where the model structure of $(\mathrm{Set}_{\Delta}^+)_{/N(\Delta^{\rm op})}$ is the complete Segal model structure, which is simplicial and monoidal. \end{remark} \section{Motivic model categories.} \label{sec:MMC} In this section, we introduce the theory of motivic model categories for left proper combinatorial simplicial model categories. \subsection{Definition of the motivic model structure of a left proper combinatorial simplicial model category.} \label{MSS} Let $S$ be a regular Noetherian separated scheme of finite dimension. Let $\mathbf{M}$ be a left proper combinatorial simplicial model category. Let $\mathbf{M}^{(\mathbf{Sm}_S)^{\rm op}_{\rm Nis}}$ be the category of $\mathbf{M}$-valued presheaves on the Nisnevich site $(\mathbf{Sm}_S)_{\rm Nis}$. We write $\mathrm{Mot}({\mathbf{M}})=\mathbf{M}^{(\mathbf{Sm}_S)^{\rm op}_{\rm Nis}}$. We define a new model structure on $\mathrm{Mot}(\mathbf{M})$ as follows: Let $f:X \to Y$ be a map of objects of $\mathrm{Mot}(\mathbf{M})$.
We say that $f$ is a {\it stalk-wise weak equivalence} if the induced morphism on stalks $f_x: X_x \to Y_x$ is a weak equivalence of the model category $\mathbf{M}$ for each point $x$ of $S$. A {\it cofibration} is a pointwise cofibration of $\mathbf{M}$. A {\it trivial stalk-wise cofibration} is a map of objects of $\mathrm{Mot}(\mathbf{M})$ which is both a stalk-wise weak equivalence and a cofibration. A {\it global fibration} is a map in $\mathrm{Mot}(\mathbf{M})$ which has the right lifting property with respect to all trivial stalk-wise cofibrations. If $X \to *$ is a global fibration, then we say that $X$ is {\it globally fibrant}. Let $X$ be an object of $\mathrm{Mot}(\mathbf{M})$. Then $X$ is {\it $\mathbb{A}^1$-local} if the map $X(U) \to X(U \times \mathbb{A}^1)$ induced by the projection is a weak equivalence in $\mathbf{M}$ for any smooth scheme $U$ over $S$. An object is {\it motivic fibrant} if it is globally fibrant and $\mathbb{A}^1$-local. A map $f:X \to Y$ in $\mathrm{Mot}({\mathbf{M}})$ is a {\it motivic $\mathbf{M}$-equivalence} if the induced map \[ f^*:\mathrm{Hom}_{\mathrm{Mot}({\mathbf{M}})} (Y,\,Z) \to \mathrm{Hom}_{\mathrm{Mot}({\mathbf{M}})} (X,\,Z) \] is a weak homotopy equivalence of simplicial sets for each motivic fibrant object $Z$ of $\mathrm{Mot}({\mathbf{M}})$. We call the model category $\mathrm{Mot}(\mathbf{M})$ the {\it motivic model category} of $\mathbf{M}$. \begin{theorem} Let $\mathbf{M}$ be a left proper combinatorial simplicial model category. There is a left proper combinatorial simplicial model structure on $\mathrm{Mot}(\mathbf{M})$ defined by the following: \begin{itemize} \item[(C)] Cofibrations are pointwise cofibrations. \item[(W)] Weak equivalences are motivic $\mathbf{M}$-equivalences. \item[(F)] Fibrations are morphisms which have the right lifting property with respect to all morphisms which are both cofibrations and motivic $\mathbf{M}$-equivalences.
\end{itemize} Furthermore, if $\mathbf{M}$ is symmetric monoidal, then the model category $\mathrm{Mot}(\mathbf{M})$ is also symmetric monoidal. \end{theorem} \begin{proof} By \cite[p.56, Corollary 4.55]{MR2771591}, the model category $\mathbf{M}^{(\mathbf{Sm}_S)^{\rm op}_{\rm Nis}}$ is a left proper combinatorial simplicial monoidal model category, whose model structure is the projective model structure. The category $\mathrm{Mot}(\mathbf{M})$ is a Bousfield localization of this monoidal model category. Hence $\mathrm{Mot}(\mathbf{M})$ is also a left proper combinatorial simplicial model category. By \cite[p.54 Proposition 4.47]{MR2771591}, the Bousfield localization $\mathrm{Loc}_{\mathbb{A}^1}: \mathbf{M}^{(\mathbf{Sm}_S)^{\rm op}_{\rm Nis}} \to \mathrm{Mot}(\mathbf{M})$ is a symmetric monoidal localization. Hence $\mathrm{Mot}(\mathbf{M})$ is also a symmetric monoidal model category. \end{proof} If $\mathbf{M}=\mathrm{Set}_{\Delta}$ with the Kan--Quillen model structure, we call a fibrant object of $\mathrm{Mot}(\mathbf{M})$ a {\it motivic space} and the $\infty$-category $N_\Delta( \mathrm{Mot}(\mathrm{Set}_{\Delta} )^\circ)$ the $\infty$-category of {\it motivic spaces}. If $\mathbf{M}=\mathrm{Set}_{\Delta}^+$ with the Cartesian model structure, then we say that $N_\Delta(\mathrm{Mot}(\mathrm{Set}_{\Delta}^{+})^{\circ})$ is the $\infty$-category of {\it motivic $\infty$-categories}. Moreover, we say that the $\infty$-bicategory $N^{\rm sc}(\mathrm{Mot}(\mathrm{Set}_{\Delta}^{+})^{\circ})$ is the $\infty$-bicategory of motivic $\infty$-categories. We write $\mathbf{MS}_\infty=N_\Delta(\mathrm{Mot}(\mathrm{Set}_{\Delta})^{\circ}),\, \mathrm{MCat}_{\infty} = N_\Delta(\mathrm{Mot}(\mathrm{Set}_{\Delta}^{+})^{\circ})$ and $\mathbf{MCat}_{\infty}=N^{\rm sc}(\mathrm{Mot}(\mathrm{Set}_{\Delta}^{+})^{\circ})$.
The symmetric monoidal structure on $\mathrm{Set}_{\Delta}^+$ induces a symmetric monoidal structure on $\mathrm{MCat}_{\infty} = N_\Delta(\mathrm{Mot}(\mathrm{Set}_{\Delta}^{+})^{\circ})$. Hence $\mathrm{MCat}_{\infty}$ is a symmetric monoidal $\infty$-category. \subsection{The universal property of $\mathrm{Mot}(\mathbf{M})$.} Let $\mathbf{M}$ be a left proper combinatorial simplicial model category. The motivic model category $\mathrm{Mot}(\mathbf{M})$ has the following universal property. \begin{theorem} \label{Motuniv} Let $S$ be a regular Noetherian separated scheme of finite dimension. Let $\mathrm{Model}^{\rm lpc}_{\Delta}$ denote the $2$-category of left proper combinatorial simplicial model categories, whose functors are left Quillen functors. Let $\mathbf{M}$ be a left proper combinatorial simplicial model category. Write $\mathbf{MS}=\mathrm{Mot}(\mathrm{Set}_{\Delta})$. Then the left Quillen functor $D \otimes \mathrm{Mot}(\mathbf{1}): \mathbf{M} \otimes \mathbf{MS} \to \mathrm{Mot}( \mathbf{M})$ is a left Quillen equivalence, where $D: \mathbf{M} \to \mathrm{Mot}( \mathbf{M}) $ denotes the diagonal functor. \end{theorem} \begin{proof} By Theorem~\ref{Drep}, it is sufficient to prove the theorem in the case $\mathbf{M}= \mathrm{Set}_{\Delta}^{\mathcal{C}}$ with a simplicial category $\mathcal{C}$. The $2$-category $\mathrm{Model}^{\rm lpc}_{\Delta}$ has a Cartesian closed monoidal structure with the unit object $\mathrm{Set}_{\Delta}$. The unit map $\bf{1}:\mathrm{Set}_{\Delta} \to \mathrm{Set}_{\Delta}^{\mathcal{C}}$ induces the left Quillen functor $\mathrm{Mot}(\bf{1}): \mathbf{MS} \to \mathrm{Mot}( \mathrm{Set}_{\Delta}^{\mathcal{C}} )$. Here the unit map $ \bf{1}:\mathrm{Set}_{\Delta} \to \mathrm{Set}_{\Delta}^{\mathcal{C}} $ is the diagonal functor. Furthermore, there is a canonical equivalence $\mathrm{Mot}( \mathrm{Set}_{\Delta}^{\mathcal{C}} ) \simeq \mathrm{Mot}( \mathrm{Set}_{\Delta} )^{\mathcal{C}}= \mathbf{MS}^{\mathcal{C}}$.
By Theorem~\ref{Dugger--Lurie}, we have a chain of equivalences $\mathbf{MS}_\infty \otimes N_\Delta(\mathrm{Set}_{\Delta}^{\mathcal{C}})\simeq \mathrm{Fun}^{\rm L}(\mathbf{MS}_\infty^{\rm op},\,\mathcal{S})^{N_\Delta(\mathcal{C})}\simeq \mathbf{MS}_\infty^{N_\Delta( \mathcal{C})} \simeq N_\Delta (\mathrm{Mot}( \mathrm{Set}_{\Delta} )^{\mathcal{C}}).$ Hence the canonical functor $\mathbf{M} \otimes \mathbf{MS} \to \mathrm{Mot}(\mathbf{M})$ is a left Quillen equivalence. \end{proof} \begin{corollary} A left proper combinatorial simplicial model category $\mathbf{M}$ is a motivic model category if and only if the underlying $\infty$-category is a $\mathbf{MS}_\infty$-module object of $\mathrm{Pr}^{\rm L}$. Here $\mathrm{Pr}^{\rm L}$ denotes the symmetric monoidal $\infty$-category of presentable $\infty$-categories whose functors are colimit preserving functors. \qed \end{corollary} \section{Motivic derived algebraic geometry.} \label{sec:MDAG} \subsection{The motivic $\infty$-category of motivic spaces and the motivic $\infty$-category of motivic $\infty$-categories.} \label{sec:otimes1} Let $\mathbf{M}$ be a left proper combinatorial simplicial monoidal model category. If $\mathbf{M}$ has a Cartesian closed symmetric monoidal model structure, then the model category $\mathrm{Mot}(\mathbf{M})$ is an $\mathbf{M}$-enriched model category. Therefore $\mathrm{Mot}(\mathbf{M})$ is tensored over $\mathbf{M}$, and we have a left Quillen bifunctor \[ - \otimes - : \mathbf{M} \otimes \mathrm{Mot}(\mathbf{M}) \to \mathrm{Mot}(\mathbf{M}). \] Let $\mathbf{1}$ be the unit element of the monoidal model category $\mathrm{Mot}(\mathbf{M})$. Then the left Quillen bifunctor $- \otimes -$ induces a Quillen adjunction \[ - \otimes \mathbf{1} : \mathbf{M} \rightleftarrows \mathrm{Mot}(\mathbf{M}): \mathrm{Hom}_{\mathrm{Mot}(\mathbf{M} ) } (\mathbf{1},\, - ).
\] By using the above Quillen adjunction, we define the motivic $\infty$-category of motivic spaces, the motivic $\infty$-category of motivic $\infty$-categories and the motivic $\infty$-category of motivic $\infty$-topoi. Let $\mathbf{1}$ be the unit element of the symmetric monoidal $\infty$-category $ \mathrm{MCat}_\infty$. Let $\mathcal{S}$ denote the $\infty$-category of spaces, $\Cat{\infty}$ the $\infty$-category of $\infty$-categories and ${}^{\mathrm{L}}\mathfrak{Top}$ the $\infty$-category of $\infty$-topoi. By using Theorem~\ref{Motuniv}, we have $\mathbf{MS}_\infty = \mathcal{S}\otimes \mathbf{1}$ and $\mathbf{MCat}_\infty= \Cat{\infty} \otimes \mathbf{1}$. We say that $\mathbf{MS}_\infty$ is the motivic $\infty$-category of motivic spaces and $\mathbf{MCat}_\infty$ is the motivic $\infty$-category of motivic $\infty$-categories. We set ${}^{\mathrm{L}}\mathbf{MTop}= {}^{\mathrm{L}}\mathfrak{Top} \otimes \mathbf{1}$, and we say that ${}^{\mathrm{L}}\mathbf{MTop}$ is the motivic $\infty$-category of {\it motivic $\infty$-topoi} and an object of $ {}^{\mathrm{L}}\mathbf{MTop}$ is a motivic $\infty$-topos. \subsection{Definition of motivic $\infty$-bicategories.} The model category of scaled simplicial sets $\mathrm{Set}_{\Delta}^{\rm sc}$ is left proper combinatorial. However, it is neither simplicial nor monoidal as a model category. In order to formulate the motivic model category of $\mathrm{Set}_{\Delta}^{\rm sc}$, we use the model category $(\mathrm{Set}_{\Delta}^+)_{/N(\Delta^{\rm op})}$, which is a left proper combinatorial simplicial symmetric monoidal model category. The model structure on $(\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}}$ is the Bousfield localization of the coCartesian model structure induced by the complete Segal model structure on $( \mathrm{Set}_{\Delta}^+)^{\Delta^{\rm op}}$~\cite[p.34, Proposition 1.5.7]{LG}.
There is a left Quillen functor $\mathrm{sd}^+: \mathrm{Set}_{\Delta}^{\rm sc} \to (\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}}$ which is called the {\it subdivision} functor~\cite[p.145, Definition 4.3.1]{LG}. By \cite[p.150, Theorem 4.3.1.13]{LG}, the subdivision functor $\mathrm{sd}^+:\mathrm{Set}_{\Delta}^{\rm sc} \to (\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}}$ is a left Quillen equivalence. \begin{proposition} Let $\mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc})$ denote the functor category $(\mathrm{Set}_{\Delta}^{\rm sc})^{(\mathbf{Sm}/S)^{\rm op}}$ and $ \mathrm{Mot}(\mathrm{sd}^+): \mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc}) \rightleftarrows \mathrm{Mot}( (\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}} ): \mathrm{Mot}(F)$ the adjunction induced by the Quillen equivalence $\mathrm{sd}^+: \mathrm{Set}_{\Delta}^{\rm sc} \rightleftarrows (\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}} :F$. We define a model structure on $\mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc})$ as follows: \begin{itemize} \item[(C)] A morphism $f:\overline{X} \to \overline{Y} $ is a cofibration if and only if it is a pointwise cofibration. \item[(W)] A morphism $f:\overline{X} \to \overline{Y}$ is a weak equivalence if and only if its image under $\mathrm{Mot}(\mathrm{sd}^+)$ is a motivic $(\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}}$-equivalence. \item[(F)] A morphism $f:\overline{X} \to \overline{Y}$ is a fibration if and only if it has the right lifting property with respect to all morphisms which satisfy conditions $(C)$ and $(W)$. \end{itemize} Then the Quillen adjunction $\mathrm{Mot}(\mathrm{sd}^+ ):\mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc}) \rightleftarrows \mathrm{Mot}( (\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}} ): \mathrm{Mot}(F)$ is a Quillen equivalence. \end{proposition} \begin{proof} By the definition of the model structure on $\mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc})$, the induced functor $\mathrm{Mot}(\mathrm{sd}^+)$ is a left Quillen functor.
Note that the functor $(-)^{ (\mathbf{Sm}/S)^{\rm op}} $ preserves left Quillen equivalences between left proper combinatorial model categories, where $(-)^{ (\mathbf{Sm}/S)^{\rm op}} $ induces the projective model structure. The model structure on $\mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc})$ is just a Bousfield localization of $(\mathrm{Set}_{\Delta}^{\rm sc})^{(\mathbf{Sm}/S)^{\rm op}}$, so that the Quillen adjunction $\mathrm{Mot}(\mathrm{sd}^+ ):\mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc}) \rightleftarrows \mathrm{Mot}( (\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}} ): \mathrm{Mot}(F)$ is a Quillen equivalence. \end{proof} \begin{definition} We say that a fibrant object of the model category $\mathrm{Mot}(\mathrm{Set}_{\Delta}^{\rm sc}) $ is a {\it motivic $\infty$-bicategory}. \end{definition} \subsection{The motivic scaled straightening and unstraightening.} \label{sec:MStUnst} Let $\mathbf{M}$ be a left proper combinatorial simplicial monoidal model category and $Re:\mathrm{Rep}(\mathbf{M}) \rightleftarrows \mathbf{M}:G $ a small presentation. Let $X$ be an object of $\mathbf{M}$. Then the small presentation induces an adjunction \[ Re_{/ X}: \mathrm{Rep}(\mathbf{M})_{/G(X)} \rightleftarrows \mathbf{M}_{/X} :G_{/X}, \] where the left-hand side is a left proper combinatorial model category whose model structure is the projective model structure induced by the covariant model structure~\cite[p.68, Definition 2.1.4.5]{HT} under the equivalence $(\mathrm{Set}_{\Delta}^{\mathbf{M}_\lambda^{\rm op}} )_{/G(X)}= \int_{c \in \mathbf{M}_\lambda^{\rm op}} (\mathrm{Set}_{\Delta})_{/G(X)(c)}$ of model categories. We have a model structure on $\mathbf{M}_{/X}$ which is the projective model structure induced by $\mathrm{Rep}(\mathbf{M})_{/G(X)}$. We say that the model structure on $\mathbf{M}_{/X} $ is the {\it covariant model structure}. \begin{lemma} \label{Lemma2} Let $\mathbf{M}$ be a left proper combinatorial simplicial monoidal model category.
Let $X$ be an object of $\mathbf{M}$ and $\mathcal{C}$ a simplicial category. Then $-\otimes \bf{1} :\mathbf{M} \to \mathrm{Mot}(\mathbf{M})$ induces left Quillen equivalences \[ \mathrm{Mot}(\mathbf{M}_{/X}) \to \mathrm{Mot}(\mathbf{M})_{/X \otimes \bf{1}}, \ \mathrm{Mot}(\mathbf{M}^{\mathcal{C}}) \to \mathrm{Mot}(\mathbf{M})^{\mathcal{C}}, \] where the model structures on $\mathbf{M}_{/X}$ and $\mathrm{Mot}(\mathbf{M})_{/X \otimes \bf{1}}$ are covariant model structures and the model structures on $ \mathbf{M}^{\mathcal{C}}$ and $\mathrm{Mot}(\mathbf{M})^{\mathcal{C}}$ are projective model structures. \end{lemma} \begin{proof} By Lemma~\ref{Lemma}, it is sufficient to prove that we have weak equivalences $N_\Delta(\mathrm{Mot}(\mathbf{M}_{/X})^\circ ) \to N_\Delta (\mathrm{Mot}((\mathbf{M})_{/ X \otimes \bf{1}} )^\circ)$ and $ N_\Delta ((\mathrm{Mot}(\mathbf{M}^{\mathcal{C}}) )^\circ) \to N_\Delta( ( \mathrm{Mot}(\mathbf{M})^{\mathcal{C}})^\circ )$ of $\infty$-categories. It is clear that the second induced functor is a weak equivalence. Since the $\infty$-categories $N_\Delta(\mathrm{Mot}(\mathbf{M}_{/X})^\circ )$ and $N_\Delta (\mathrm{Mot}((\mathbf{M})_{/X \otimes \bf{1}} )^\circ)$ are both equivalent to the $\infty$-category $\mathrm{Fun}(X^{\rm op},N_\Delta(\mathrm{Mot}(\mathbf{M} ) ^{\circ}))$, the first induced functor is a weak equivalence. \end{proof} Let $\overline{X}$ be a fibrant scaled simplicial set. Since the subdivision functor $\mathrm{sd}^+:\mathrm{Set}_{\Delta}^{\rm sc} \to (\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}}$ is a left Quillen equivalence, there exists a fibrant object $X^+$ of $(\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}}$ such that $\overline{X}$ is weakly equivalent to $F(X^+)$. Let $-\otimes \mathbf{1}:(\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}} \to \mathrm{Mot}((\mathrm{Set}_{\Delta}^+)_{/N(\Delta)^{\rm op}})$ denote the functor defined in Section~\ref{sec:otimes1}, and write $\overline{X} \otimes \mathbf{1}= F(X^+ \otimes \mathbf{1})$.
Then we have a motivic version of the straightening and unstraightening: \begin{theorem} \label{MStUnst} Let $\mathcal{C}$ be a $\mathrm{Set}_{\Delta}^+$-enriched category. Then the induced Quillen adjunction, the motivic scaled straightening and unstraightening, \[ \mathrm{Mot}(\mathrm{St}_{\mathcal{C}}^{\rm sc}): \mathrm{Mot}(\mathrm{Set}_{\Delta}^{+})_{ / N^{\rm sc}(\mathcal{C}) \otimes \bf{1}} \rightleftarrows \mathrm{Mot}(\mathrm{Set}_{\Delta}^{+})^{ \mathcal{C}^{\rm op}} :\mathrm{Mot}(\mathrm{Un}_{\mathcal{C}}^{\rm sc}) \] is a Quillen equivalence. \end{theorem} \begin{proof} This follows directly from Lemma~\ref{Lemma2} and the scaled straightening and unstraightening (Theorem~\ref{StUnst}). \end{proof} \subsection{Motivic $\infty$-topoi and classifying motivic $\infty$-topoi.} Let $\mathcal{X}$ be a motivic $\infty$-category. We say that $\mathcal{X}$ is a motivic $\infty$-topos if there exists an $\infty$-topos $\mathcal{X}_0$ such that $\mathcal{X} \simeq \mathcal{X}_0 \otimes \mathbf{MS}_\infty$. Equivalently, a motivic $\infty$-topos is an $\mathbf{MS}_\infty$-module object of the symmetric monoidal $\infty$-category of $\infty$-topoi. \begin{proposition} \label{topos} Let $\mathcal{X}$ be a motivic $\infty$-topos. Then $\mathcal{X}$ is also an $\infty$-topos. \end{proposition} \begin{proof} Let $\mathcal{C}$ be a small motivic $\infty$-category. Then $\mathcal{C}$ is also an $\infty$-category, and there exists a simplicial category $\mathcal{D}$ and a weak equivalence $\mathfrak{C}[\mathcal{C} \times \mathbf{MS}^{\omega,\,\rm op} ] \to \mathcal{D} $ of simplicial categories such that $\mathcal{X}$ is an accessible left exact localization of $N_\Delta(( \Set{\Delta}^{\mathcal{D}})^\circ)$. Since $\mathbf{MS}^\omega$ and $\mathcal{C}$ are small $\infty$-categories, the $\infty$-category $N_\Delta((\Set{\Delta}^{\mathcal{D}})^\circ)$ is an $\infty$-topos. Hence the accessible left exact localization $\mathcal{X}$ is also an $\infty$-topos.
\end{proof} Let $\widehat{\BMCat{\infty}}$ denote the $\infty$-bicategory of (not necessarily small) motivic $\infty$-categories. Let ${}^{\mathrm{L}}\mathbf{MTop}$ be a subcategory of $\widehat{\BMCat{\infty}}$ whose objects are motivic $\infty$-topoi and whose morphisms are left exact colimit-preserving functors. We say that ${}^{\mathrm{L}}\mathbf{MTop}$ is the $\infty$-bicategory of motivic $\infty$-topoi, and a left exact colimit-preserving functor between motivic $\infty$-topoi is a {\it geometric morphism}. \begin{definition}[{\rm cf.}~\cite{HT} p.369, Definition 5.2.8.8 (Joyal)] Let $\mathcal{C}$ be a motivic $\infty$-category. A factorization system $(S_L, \, S_R)$ is a pair of collections of morphisms of $\mathcal{C}$ which satisfy the following axioms: \begin{enumerate} \item The collections $S_L$ and $S_R$ are closed under retracts. \item The collection $S_L$ is left orthogonal to $S_R$. \item For any morphism $h:X \to Z$ in $\mathcal{C}$, there exist an object $Y$ of $\mathcal{C}$ and morphisms $f: X \to Y$ and $g: Y \to Z$ such that $h= g \circ f$, $f \in S_L$ and $g \in S_R$. \end{enumerate} \end{definition} Let $\mathcal{X}$ and $\mathcal{Y}$ be motivic $\infty$-topoi. Let $\mathrm{Fun}^*(\mathcal{X},\,\mathcal{Y})$ denote the full subcategory of $\mathrm{Fun}(\mathcal{X},\,\mathcal{Y})$ spanned by those functors $f: \mathcal{X} \to \mathcal{Y}$ which admit geometric left adjoints. \begin{definition}[\cite{DAG5} p.27, Definition 1.4.3] \label{crtop} Let $\mathcal{K}$ be a motivic $\infty$-topos. A geometric structure on $\mathcal{K}$ is a factorization system $(S_L^\mathcal{X},\, S_R^\mathcal{X})$ on $\mathrm{Fun}^*(\mathcal{K},\,\mathcal{X})$, which depends functorially on $\mathcal{X}$. We say that $\mathcal{K}$ is a {\it classifying motivic $\infty$-topos} and a morphism in $S^{\mathcal{X}}_R$ is a {\it local morphism}.
For any classifying motivic $\infty$-topos $\mathcal{K}$ and motivic $\infty$-topos $\mathcal{X}$, we let $\mathrm{Str}^{\rm loc}_{\mathcal{K}}(\mathcal{X})$ denote the subcategory of $\mathrm{Fun}^*(\mathcal{K},\,\mathcal{X})$ whose objects are all the objects of $\mathrm{Fun}^*(\mathcal{K},\,\mathcal{X})$ and whose morphisms are the local morphisms. We say that an object of $\mathrm{Str}^{\rm loc}_{\mathcal{K}} (\mathcal{X}) $ is a {\it $\mathcal{K}$-structured sheaf} on $\mathcal{X}$. If a geometric morphism $f: \mathcal{K} \to \mathcal{K}'$ of classifying motivic $\infty$-topoi carries all local morphisms on $\mathrm{Fun}^*(\mathcal{K},\,\mathcal{X})$ to local morphisms on $\mathrm{Fun}^*(\mathcal{K}',\,\mathcal{X})$ for any motivic $\infty$-topos $\mathcal{X}$, we say that {\it $f$ is compatible with the geometric structures.} \end{definition} Let $\mathcal{K}$ be a classifying motivic $\infty$-topos. By the motivic scaled straightening and unstraightening (Theorem~\ref{MStUnst}), we have a Quillen equivalence \[ \mathrm{Mot}(\mathrm{St}^{\rm sc}): \mathrm{Mot}(\Set{\Delta}^+)_{/ {}^{\mathrm{L}}\mathbf{MTop}^{\rm op} } \rightleftarrows \mathrm{Mot}(\Set{\Delta}^+)^{\mathfrak{C}^{\rm sc}\left[ {}^{\mathrm{L}}\mathbf{MTop} \right]} :\mathrm{Mot}(\mathrm{Un}^{\rm sc}). \] Under this Quillen equivalence, the functor $\mathrm{Str}^{\rm loc}_\mathcal{K}:{}^{\mathrm{L}}\mathbf{MTop}\to \widehat{\MCat{\infty}}$ determines an $\infty$-category ${}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K})$ and a locally coCartesian fibration $p:{}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}) \to {}^{\mathrm{L}}\mathbf{MTop}$. We call the $\infty$-category ${}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K})$ the $\infty$-category of {\it $\mathcal{K}$-structured motivic $\infty$-topoi}.
Furthermore, the motivic $\infty$-categorical Yoneda functor $\mathrm{Fun}^*(\mathcal{K},\,- ): {}^{\mathrm{L}}\mathbf{MTop} \to \widehat{\MCat{\infty}}$ classifies an $\infty$-category ${}^{\mathrm{L}}\mathfrak{MTop}_{\mathcal{K}/ }$ and a locally coCartesian fibration $q: {}^{\mathrm{L}}\mathfrak{MTop}_{\mathcal{K}/ } \to {}^{\mathrm{L}}\mathbf{MTop}$. By an argument similar to the proof of \cite[p.610, Proposition 6.3.4.6]{HT}, the $\infty$-category ${}^{\mathrm{R}}\mathfrak{MTop}$ admits pullbacks. In other words, for any geometric morphism $f:\mathcal{K} \to \mathcal{K}'$, the forgetful functor $f_* : {}^{\mathrm{R}}\mathfrak{MTop}_{/\mathcal{K}} \to {}^{\mathrm{R}}\mathfrak{MTop}_{/\mathcal{K}'}$ admits a right adjoint. Note that for any motivic $\infty$-topos $\mathcal{X}$, the $\infty$-category ${}^{\mathrm{L}}\mathfrak{MTop}_{\mathcal{X}/}$ is weakly equivalent to $({}^{\mathrm{R}}\mathfrak{MTop}_{/\mathcal{X}})^{\rm op}$. Consider the case where $f: \mathcal{K} \to \mathcal{K}'$ is a geometric morphism of classifying motivic $\infty$-topoi such that $f$ is compatible with the geometric structures. Then we have the (homotopically) commutative diagram of $\infty$-categories: \[ \xymatrix@1{ {}^{\mathrm{L}}\mathfrak{MTop}_{ \mathcal{K} / } \ar[r]<0.5mm>^{f_*} & {}^{\mathrm{L}}\mathfrak{MTop}_{\mathcal{K}'/ } \ar[l]<0.5mm>^{f^{-1}} \\ {}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}) \ar[u] & {}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}') \ar[l]^{f^{-1}} \ar[u] }. \] We will prove that the lower horizontal functor has a left adjoint: \begin{theorem} \label{spec} Let $f:\mathcal{K} \to \mathcal{K}'$ be a geometric morphism of classifying motivic $\infty$-topoi such that $f$ is compatible with geometric structures.
Consider the commutative diagram \[ \xymatrix@1{ {}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}) \ar[dr]_p & & \ar[ll]^{f^{-1}} {}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}') \ar[dl]_q \\ & {}^{\mathrm{L}}\mathbf{MTop} & } \] where $f^{-1}$ is the functor induced by $f$, and $p$ and $q$ are locally coCartesian fibrations. Then $f^{-1}:{}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}') \to {}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K})$ admits a left adjoint relative to ${}^{\mathrm{L}}\mathbf{MTop}$. \end{theorem} \begin{proof} Let $\pi: \mathcal{X} \to \mathcal{Y}$ be a geometric morphism between motivic $\infty$-topoi. Then the functor $f^{-1}:{}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}') \to {}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K})$ carries locally $q$-coCartesian edges to locally $p$-coCartesian edges over ${}^{\mathrm{L}}\mathbf{MTop}$. In fact, a locally $q$-coCartesian edge of ${}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K}')$ has the form $\alpha: \mathcal{O}_\mathcal{X} \to \pi_* \mathcal{O}_\mathcal{X}$ for $\mathcal{O}_\mathcal{X} \in \mathrm{Str}^{\rm loc}_{\mathcal{K}'} (\mathcal{X})$. Since $\alpha$ is $q$-coCartesian, we have a chain of equivalences \[ (\pi_* \mathcal{O}_\mathcal{X}) \circ f \simeq (\pi \circ \mathcal{O}_\mathcal{X}) \circ f \simeq \pi \circ (\mathcal{O}_\mathcal{X} \circ f) \simeq \pi _* (\mathcal{O}_\mathcal{X} \circ f). \] Hence $f^{-1}(\alpha):\mathcal{O}_\mathcal{X} \circ f \to (\pi _* \mathcal{O}_\mathcal{X}) \circ f$ is equivalent to a locally $p$-coCartesian edge $\mathcal{O}_\mathcal{X} \circ f \to \pi _* (\mathcal{O}_\mathcal{X} \circ f)$. By \cite[Proposition 7.3.2.6]{HA}, it is sufficient to prove that the functor \[ f^{-1}_{\mathcal{X}}: \mathrm{Str}^{\rm loc}_{\mathcal{K}'} (\mathcal{X}) \to \mathrm{Str}^{\rm loc}_{\mathcal{K}} (\mathcal{X}) \] admits a left adjoint. Let $\mathcal{O}_\mathcal{X}$ be an object of $\mathrm{Fun}^*(\mathcal{K},\,\mathcal{X})$ and $\mathcal{O}'_\mathcal{X}$ an object of $\mathrm{Fun}^*(\mathcal{K}',\,\mathcal{X})$.
Let $\phi: \mathcal{O}_\mathcal{X} \to \mathcal{O}'_\mathcal{X} \circ f$ be a local morphism in $\mathrm{Fun}^*(\mathcal{K},\,\mathcal{X})$. We can obtain a left Kan extension $f_* \mathcal{O}_\mathcal{X}: \mathcal{K}' \to \mathcal{X}$ along $f$. The transformation $ \phi_*: f_*( \mathcal{O}_\mathcal{X}) \to \mathcal{O}'_\mathcal{X} $ induced by $\phi$ gives a functorial factorization \[ f_*(\mathcal{O}_\mathcal{X}) \to \mathrm{MSpc}^{\mathcal{K}}_{\mathcal{K}'} (\mathcal{O}_\mathcal{X}) \overset{\mathrm{MSpc}(\phi)}\to \mathcal{O}'_\mathcal{X} \] where $\mathrm{MSpc} (\phi)$ is local. Hence we obtain a functor $\mathrm{MSpc}^{\mathcal{K}'}_{\mathcal{K},\,\mathcal{X}}$ which is left adjoint to $f^{-1}_\mathcal{X}$. \end{proof} \begin{remark} In this paper, the motivic $\infty$-category ${}^{\mathrm{L}}\mathfrak{MTop}(\mathcal{K})$ is constructed following \cite[Remark 1.4.17]{DAG5}. \end{remark} \bibliographystyle{amsplain} \begin{bibdiv} \begin{biblist} \bib{MR2771591}{article}{ author={Barwick, C.}, title={On left and right model categories and left and right Bousfield localizations}, journal={Homology, Homotopy Appl.}, volume={12}, date={2010}, number={2}, pages={245--320}, } \bib{MR2276611}{article}{ author={Bergner, J. E.}, title={A model category structure on the category of simplicial categories}, journal={Trans. Amer. Math. Soc.}, volume={359}, date={2007}, number={5}, pages={2043--2058}, } \bib{UnivHomotopy}{article}{ author={Dugger, D.}, title={Universal homotopy theories}, journal={Adv. Math.}, volume={164}, date={2001}, number={1}, pages={144--176}, } \bib{DRep}{article}{ author={Dugger, D.}, title={Combinatorial model categories have presentations}, journal={Adv.
Math.}, volume={164}, date={2001}, number={1}, pages={177--201}, } \bib{GeSn}{article}{ author={Gepner, D.}, author={Snaith, V.}, title={On the motivic spectra representing algebraic cobordism and algebraic {$K$}-theory}, journal={Documenta Mathematica}, volume={14}, date={2009}, pages={359--396}, } \bib{Joyal}{article}{ author={Joyal, A.}, title={Quasi-categories and Kan complexes}, note={Special volume celebrating the 70th birthday of Professor Max Kelly}, journal={J. Pure Appl. Algebra}, volume={175}, date={2002}, number={1-3}, pages={207--222}, } \bib{HT}{book}{ author={Lurie, J.}, title={Higher topos theory}, series={Annals of Mathematics studies}, volume={170}, publisher={Princeton University Press}, date={2009}, pages={xv+925}, } \bib{LG}{article}{ author={Lurie, J.}, title={$(\infty,\,2)$-categories and the Goodwillie calculus I}, journal={Preprint, available at www.math.harvard.edu/lurie}, date={2009}, } \bib{HA}{article}{ author={Lurie, J.}, title={Higher algebra}, journal={Preprint, available at www.math.harvard.edu/lurie}, date={2014}, } \bib{DAG5}{article}{ author={Lurie, J.}, title={Structured spaces}, journal={Preprint, available at www.math.harvard.edu/lurie}, date={2011}, } \bib{DAG7}{article}{ author={Lurie, J.}, title={Spectral Schemes}, journal={Preprint, available at www.math.harvard.edu/lurie}, date={2011}, } \bib{Jardine}{article}{ author={Jardine, J. F.}, title={Motivic symmetric spectra}, journal={Documenta Mathematica}, volume={5}, date={2000}, pages={445--553 (electronic)}, } \bib{MV}{article}{ author={Morel, F.}, author={Voevodsky, V.}, title={{${\bf A}\sp 1$}-homotopy theory of schemes}, journal={Institut des Hautes \'Etudes Scientifiques. Publications Math\'ematiques}, volume={90}, date={1999}, pages={45--143 (2001)}, } \bib{QuillenHomotopy}{book}{ author={Quillen, D. G.}, title={Homotopical algebra}, series={Lecture Notes in Mathematics, No. 43}, publisher={Springer-Verlag, Berlin-New York}, date={1967}, pages={iv+156 pp.
(not consecutively paged)}, } \bib{V}{article}{ author={Voevodsky, V.}, title={{$\bold A\sp 1$}-homotopy theory}, journal={Documenta Mathematica}, volume={Extra Vol. I}, date={1998}, pages={579--604 (electronic)} } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Stellar dynamics, a well-established discipline, continues to be an active field of research due to its complex dynamics. Newly developed sophisticated tools of nonlinear dynamics and ever-increasing numerical capacities are readily applied to N-body gravitating systems. The problem of relaxation of gravitating systems is among the key ones in stellar dynamics, since it can be directly constrained by observations, especially of well-relaxed globular clusters and elliptical galaxies. Historically, plasma methods were among the first to be applied to gravitating systems (\cite{Chandra}), neglecting, however, a drastic difference between plasma and long-range gravity. In retrospect, it can seem strange how easily the challenges in theory and observations, and later also in numerical simulations, were ignored: (a) the Coulomb logarithmic cut-off, and hence the canceling of the N-body effects, was applied in the absence of Debye screening; (b) the result contradicted observations, i.e., the plasma two-body relaxation timescale exceeds the age of elliptical galaxies by several orders of magnitude; (c) the two-body timescale could never be identified in numerical studies. Then came the epoch of realization of the importance of chaos, occasionally in unexpected forms, for nonlinear systems. Chaos caused by small perturbations appears crucial even in nearly integrable problems, such as the dynamics of the planetary system (see \cite{Laskar,Morb} and refs therein); for non-integrable N-body systems the situation is far more complex. The Fermi-Pasta-Ulam problem is another example of an apparently simple, but still not well understood system (\cite{FPU}). Do chaos, chance and randomness have a significant role also in the evolution of stellar systems, or is the two-body plasma approach the whole story?
Although it has been generally agreed that chaos must affect N-body dynamics (see \cite{GP,Cont,Reg}), proper treatments require well-founded approaches\footnote{Ruelle mentions the appearance of numerous incorrect papers about chaos when chaos became a fashion (\cite{R}).}. Application of ergodic theory methods enabled us to prove a notable result: spherical systems are exponentially unstable, and chaoticity drives their dynamics (\cite{GS}). An exponential law defines an intrinsic timescale, the collective relaxation time, which for real stellar systems has a value intermediate between the dynamical (crossing) and the two-body timescales. This was obtained by estimating the divergence of the trajectories of the system in a Riemannian space defined by the potential of the interaction (Maupertuis principle), i.e., by a method known in the theory of dynamical systems (\cite{Arnold}). Importantly, the derived collective (N-body) relaxation timescale fits the observational data (\cite{Vesp}). The three timescales, i.e., the dynamical $\tau_{dyn}$, collective $\tau_{cr}$, and two-body $\tau_{b}$ timescales, correspond to the three distance scales of the system, namely, its size $D$, mean inter-particle distance $d$, and radius of gravitational influence of particles $r_h$; in particular (\cite{GS}) (see also \cite{Lang}), \begin{equation} \label{RelTime} \tau_{cr}\sim \frac{D}{d} \tau_{dyn}\sim\tau_{dyn}N^{1/3}. \end{equation} When the role of complex N-body dynamics was finally recognized, another confusion appeared, namely, the assignment of the dynamical time as the relaxation timescale, i.e., for reaching fine-grained equilibrium.
However, this again contradicts observations: globular clusters would then have already disappeared because of the evaporation of stars (\cite{Amb}) within 100 crossing times, i.e., within around 100 million years.\footnote{One of the motivations for concluding that the dynamical time equals the relaxation time was the numerical experiments on the apparent divergence in real space first observed in (\cite{Miller}) for systems of between N=8 and 32 particles, for which obviously no relaxation process has any sense. Properly performed numerical experiments (\cite{EZ}) confirmed the timescale obtained in (\cite{GS}) via the Maupertuis principle.} The dynamical timescale is responsible for reaching a coarse-grained state in non-stationary systems (violent relaxation) (\cite{LB}). Numerical studies possess their own difficulties, from the choice of descriptors up to the interpretation of the results; e.g., as shown in (\cite{GK_L}), the computer image of the Lyapunov exponents is not equivalent to their definition. The importance of further searches for rigorous methods to study chaos in stellar systems is therefore beyond doubt. We revisit the problem of N-body relaxation and its timescale using both the geometric (\cite{AK}) and the Van Kampen stochastic differential equation (\cite{VK}) approaches. The results confirm those of the Maupertuis principle for the collective relaxation timescale of collisionless spherical systems\footnote{For a study of systems with rotational momentum (spiral galaxies), which have quite different dynamics, see e.g., (\cite{GK88}).}. \section{Stochastic instability} We consider an $N$-body system described by the Lagrangian \begin{equation} L(\mx{x},\mx{v})=\frac{1}{2}\sum_{a=1}^Nm_a|\mx{v}_a|^2-V(\mx{x}), \end{equation} where \begin{equation} V(\mx{x})=-G\sum_{a<b}\frac{m_am_b}{|\mx{x}_a-\mx{x}_b|}.
\end{equation} Hereafter we use units $G=1$ and $m_a=1$, and $$ \mx{x}=(\mx{x}_1,\dots,\mx{x}_N),\quad \mx{v}=(\mx{v}_1,\dots,\mx{v}_N), $$ are the coordinates and velocities of the stars, respectively, with $$ \mx{x}_a=(\mx{x}_a^1,\mx{x}_a^2,\mx{x}_a^3),\quad \mx{v}_a=(\mx{v}_a^1,\mx{v}_a^2,\mx{v}_a^3), $$ where $a=1,\dots,N$. According to the theory of dynamical systems, the statistical properties of a system can be studied from the behaviour of close trajectories (\cite{Arnold}). It is shown in (\cite{AK}) that the evolution of the distance between two nearby trajectories, denoted by $\ell$, is described by a generalized Jacobi equation, which can be written in the form \begin{equation} \ddot{\ell}+\mathcal{B}(\mx{x},\mx{v})\ell=0, \end{equation} where \bea \mathcal{B}(\mx{x},\mx{v})&=&-\frac{1}{3N-1}\left[\Delta V(\mx{x}) +\frac{\nabla_{\mx{v}}^2V(\mx{x})}{2(E-V(\mx{x}))}\right]\\ &+&\frac{3}{3N-1}\left[\frac{\sum\limits_{a=1}^N|F_a(\mx{x})|^2}{2(E-V(\mx{x}))} -\frac{\left(\sum\limits_{a=1}^NF_a(\mx{x})\cdot\mx{v}_a\right)^2}{4(E-V(\mx{x}))^2}\right], \end{eqnarray*} $E$ is the total energy of the system \begin{equation} E=\frac{1}{2}\sum\limits_{a=1}^N\mx{v}_a^2+V(\mx{x}), \end{equation} and \begin{equation} F_a(\mx{x})=\mathop{\sum\limits_{b=1}^N}_{b\ne a}\frac{\mx{x}_b-\mx{x}_a}{|\mx{x}_b-\mx{x}_a|^3}. \end{equation} As in \cite{GS}, we assume that the system is collisionless\footnote{By collisionless systems, we understand, as usual, systems in which the direct impact of two stars has no role in their dynamics, and not the neglect of gravitational encounters (scattering) of two stars. In this sense, even the dense cores of star clusters and galaxies are collisionless.}.
We then have $\Delta V(\mx{x})=0$ and one can substitute \begin{equation} \frac{\mx{v}_a^i\mx{v}_b^j}{2(E-V(\mx{x}))}=\frac{1}{3N}\delta_{ab}\delta^{ij}, \end{equation} to obtain \begin{equation} \label{ell} \ddot{\ell}+\omega(\mx{x})\ell=0, \end{equation} where \begin{equation} \omega(\mx{x})=\langle\mathcal{B}(\mx{x},\mx{v})\rangle =\frac{1}{2N(E-V(\mx{x}))}\sum\limits_{a=1}^N|F_a(\mx{x})|^2>0. \end{equation} One observes that $E-V(\mx{x})$ is the total kinetic energy of the system \begin{equation} E-V(\mx{x})=\frac{1}{2}\sum\limits_{a=1}^N|\mx{v}_a|^2\sim\frac{1}{2}N\langle v^2\rangle. \end{equation} Thus, \begin{equation} \omega(\mx{x})\sim\frac{2}{N^2\langle v^2\rangle}\sum\limits_{a=1}^N|F_a(\mx{x})|^2. \end{equation} For spherically symmetric systems, we can replace $|F_a(\mx{x})|^2$ with $|F(\mx{x})|^2$ (see \cite{Chandra}, \cite{Cohen}), where \begin{equation} F(\mx{x})=\sum\limits_{a=1}^N\frac{\mx{x}_a}{|\mx{x}_a|^3}. \end{equation} We then replace $\omega$ with a stochastic process (cf. \cite{CPC},~\cite{Chandra_st}). Let $X_1,\dots, X_N$ be a sequence of $N$ independent and identically distributed (i.i.d.) random variables, each having finite expectation $\mu$ and variance $\sigma^2 > 0$. The central limit theorem states that, as $N$ increases, the distribution of the sum \begin{equation} S_N = X_1 + \dots + X_N \end{equation} approaches a normal distribution; thus, at large $N$, $S_N$ behaves like a Gaussian random variable. This can be written as \begin{equation} \frac{S_N-N\mu}{\sigma\sqrt{N}}\sim\mathfrak{n}(0,1), \end{equation} where $\mathfrak{n}(0,1)$ denotes the standard normal distribution. In our case, we have $X=|F(\mx{x})|^2$, \begin{equation} \mu=\langle X \rangle = \langle |F(\mx{x})|^2 \rangle,\quad \sigma^2=\mbox{Var}(X)=\langle|F(\mx{x})|^4\rangle-\mu^2.
\end{equation} Thus, \begin{equation} \omega\sim\omega_0+\omega_1\mathfrak{n}(0,1), \end{equation} where \begin{equation} \omega_0\sim\frac{\mu}{N\langle v^2\rangle},\quad \omega_1\sim\frac{\sigma}{N^{3/2}\langle v^2\rangle}. \end{equation} We investigate Eq.\Eq{ell} by means of a technique developed by Van Kampen (\cite{VK}) (cf. \cite{CPC}). One can derive the second moments of $\ell$ by rewriting Eq.\Eq{ell} in the following form \begin{equation} \label{Variance} \frac{d}{dt} \begin{pmatrix} \langle\ell^2(t)\rangle\\ \langle\dot{\ell}^2(t)\rangle\\ \langle\ell(t)\dot{\ell}(t)\rangle \end{pmatrix} =\begin{pmatrix} 0 & 0 & 2\\ 2\hat{\tau}\omega_1^2 & 0 & -2\omega_0\\ -\omega_0 & 1 &0 \end{pmatrix} \begin{pmatrix} \langle\ell^2(t)\rangle\\ \langle\dot{\ell}^2(t)\rangle\\ \langle\ell(t)\dot{\ell}(t)\rangle \end{pmatrix}, \end{equation} where (\cite{CPC}) \begin{equation} \hat{\tau}=\frac{1}{2}\cdot \frac{\pi\sqrt{\omega_0}}{2\sqrt{\omega_0(\omega_0+\omega_1)}+\pi\omega_1}. \end{equation} The system given by Eq.\Eq{Variance} has a positive Lyapunov exponent $\chi$ defined by \begin{equation} \chi =\frac{1}{2}\left(q-\frac{4\omega_0}{3q}\right), \end{equation} where \begin{equation} q=\left(2\hat{\tau}\omega_1^2 +\sqrt{(2\hat{\tau}\omega_1^2)^2+(4\omega_0/3)^3}\right)^{1/3}. \end{equation} We now estimate $\mu$ and $\sigma$ in order to calculate $\omega_0$, $\omega_1$, and then $\hat{\tau}$ and $\chi$. One has (\cite{Cohen}) \begin{equation} \mu=\bigg\langle|F(\mx{x})|^2\bigg\rangle\sim n\langle v^2\rangle, \end{equation} where $n$ is the mean concentration of stars in the system, and \begin{equation} \sigma^2=\langle|F(\mx{x})|^4\rangle-\mu^2\sim n^2N^2\langle v^2\rangle^2. \end{equation} Therefore, \begin{equation} \omega_0\sim\frac{n}{N},\quad \omega_1\sim\frac{n}{\sqrt{N}},\quad \hat{\tau}\sim\frac{1}{2\sqrt{n}}\sim\tfrac{1}{2}\tau_{dyn}.
\end{equation} Finally, we derive the relaxation time \begin{equation} \tau_{cr}\sim\chi^{-1}\sim \tau_{dyn} N^{1/3}, \end{equation} confirming the result of Eq.\Eq{RelTime} derived in (\cite{GS}). \section{Conclusion} The stochastic equation approach used above complements the probabilistic approach of Chandrasekhar and von Neumann (1943). Both are supported by the decay of the time correlation function due to the exponential instability and by the Holtsmark distribution of the fluctuating force.\footnote{At distances of the order of the radius of gravitational influence $r_h = 1/\langle v^2\rangle$, the Holtsmark distribution diverges and should be cut off, as done in (\cite{Chandra_st}, \cite{GS}). For real stellar systems, however, the results are insensitive to the precise value of $r_h$, since for them $r_h$ is far smaller than the mean interstellar distance $d$, and hence the Holtsmark law vanishes far earlier.} Thus, the stochastic equation method confirms the purely geometric derivation of the collective relaxation time given by Eq.(\ref{RelTime}). Although that formula is also supported by alternative numerical analyses (\cite{antoni}), the present derivation avoids the approximations inherent in numerical simulations. Chaotic effects could also be useful for observationally constraining modified gravity theories in the Solar system and galaxies (see \cite{Cap}).
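The closed-form expressions for $\hat{\tau}$, $q$, and $\chi$ make the scaling $\chi^{-1}\sim\tau_{dyn}N^{1/3}$ easy to check numerically. The minimal Python sketch below sets the $\mathcal{O}(1)$ prefactors in $\omega_0\sim n/N$ and $\omega_1\sim n/\sqrt{N}$ to unity; this is an illustrative assumption, since the derivation fixes only the scalings, not the prefactors.

```python
import math

def chi(n, N):
    """Lyapunov exponent chi of the Van Kampen moment system Eq. (Variance).

    Uses the scalings omega0 ~ n/N and omega1 ~ n/sqrt(N) derived in the
    text, with the O(1) prefactors set to unity (an assumption: only the
    scalings are fixed by the derivation).
    """
    w0 = n / N
    w1 = n / math.sqrt(N)
    # hat-tau = (1/2) * pi*sqrt(w0) / (2*sqrt(w0*(w0 + w1)) + pi*w1)
    tau_hat = 0.5 * math.pi * math.sqrt(w0) / (2.0 * math.sqrt(w0 * (w0 + w1)) + math.pi * w1)
    s = 2.0 * tau_hat * w1 ** 2
    q = (s + math.sqrt(s ** 2 + (4.0 * w0 / 3.0) ** 3)) ** (1.0 / 3.0)
    return 0.5 * (q - 4.0 * w0 / (3.0 * q))

# check that chi^{-1} * sqrt(n) / N^{1/3} is roughly constant in N,
# i.e. tau_cr ~ chi^{-1} ~ tau_dyn * N^{1/3} with tau_dyn ~ 1/sqrt(n)
n = 1.0
for N in (1e6, 1e7, 1e8, 1e9):
    ratio = (1.0 / chi(n, N)) * math.sqrt(n) / N ** (1.0 / 3.0)
    print(f"N = {N:.0e}: chi^-1 sqrt(n) / N^(1/3) = {ratio:.3f}")
```

For $n=1$ the combination $\chi^{-1}\sqrt{n}/N^{1/3}$ stays nearly constant (within a few percent) as $N$ ranges over three orders of magnitude, consistent with $\tau_{cr}\sim\tau_{dyn}N^{1/3}$.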
\section{Introduction} The most prolific sources of gravitational waves (GWs) in the mHz band are galactic ultra compact binaries (UCBs), primarily comprised of two white dwarf stars. Ref.~\cite{Korol:2017qcx} describes a contemporary prediction for the population of UCBs detectable by the Laser Interferometer Space Antenna (LISA)~\cite{LISA}. GWs from UCBs are continuous sources for LISA, several thousands of which will be individually resolvable. The remaining binaries blend together to form a confusion-limited foreground that is expected to be the dominant ``noise'' contribution to the LISA data stream at frequencies below ${\sim}3$ mHz, with the extent depending on the population of binaries and the observing time of LISA~\cite{Cornish:2017vip}. Of the thousands of resolvable binaries, the best-measured systems will serve as laboratories for studying the dynamical evolution of the binaries. Encoded within the orbital dynamics are relativistic effects, the internal structure of WD stars, and effects of mass transfer~\cite{Taam1980,Savonije1986,Willems2008,Nelemans2010,Littenberg_2019, Piro_2019}. The observable population of UCBs will depend on astrophysical processes undergone by binary stars that are currently not well understood, including the formation of the compact objects themselves, binary evolution, and the end result for such binaries~\citep{Webbink1984}. UCBs are detectable anywhere in the galaxy because the GW signals are unobscured by intervening material in the Galactic plane, providing an unbiased sample from which to infer the large-scale structure of the Milky Way~\cite{Adams:2012qw,Korol2018}. While LISA will dramatically increase our understanding of UCBs in the galaxy, there is an ever-increasing number of systems discovered by electromagnetic (EM) observations that will be easily detectable by LISA~\cite{Kupfer_2018, Burdge_2019,Burdge_2019b, Brown_2020}.
Thus UCBs are guaranteed multimessenger sources, and the joint EM+GW observations provide physical constraints on masses, radii, and orbital dynamics far beyond what independent EM or GW observations can achieve alone~\cite{Shah2014a, Littenberg_2019b}. The optimal detection, characterization, and removal of UCBs from the data stream has long been recognized as a fundamentally important and challenging aspect of the broader LISA analysis. Over-fitting the galaxy will result in a large contamination fraction in the catalog of detected sources, while under-fitting the UCB population will degrade the analyses of extragalactic sources in the data due to the excess residual. In this paper we describe a modern implementation of a UCB analysis pipeline which is a direct descendant of the trailblazing algorithms designed in response to the original Mock LISA Data Challenges (MLDCs)~\cite{Babak_2008,Babak_2010}, and of similar methods developed for astrophysical transients and non-Gaussian detector noise currently in use for ground-based GW observations~\cite{Cornish:2014kda,Littenberg:2015}. \begin{figure*}[htp] \includegraphics[width=0.45\textwidth]{figures/highf_waveform.pdf} \includegraphics[width=0.45\textwidth]{figures/highf_corner.pdf} \caption{\label{fig:money_plot} Demonstration of the algorithm on a single, isolated, high frequency source. The top left panel shows the power spectrum of the data (black) after 1 year of observations, the posterior distribution of the residual (light blue), and the inferred noise level (light green). The residual and noise level are plotted as the median with 50\% and 90\% credible intervals. The bottom left panel shows the reconstructed signal waveform posterior (green) identified by the median frequency of the posterior distribution, $f_0^{\rm med} = 0.0183131182\ \rm{Hz}$.
The right panel is a corner plot showing the marginalized posterior distributions of select parameters likely of most interest to the research community, including the frequency $f_0$, frequency derivative $\dot{f}$, amplitude $\mathcal{A}$, and sky location ($\theta,\phi$).} \end{figure*} \section{Previous work} Compared to other GW sources, UCBs are simple to model. When in the LISA band, the binary is widely separated and the stars' velocities are small compared to the speed of light $c$. Therefore the waveforms are well predicted using only leading order terms for the orbital dynamics of the binary~\cite{Peters_1963} and appear as nearly monochromatic (constant frequency) sources. Accurate template waveforms are computed at low computational cost using a fast/slow decomposition of the waveform convolved with the instrument response~\cite{Cornish:2007if}. The UCB population is nevertheless a challenging source for LISA analysis due to the sheer number of sources expected to be in the measurement band, rather than the complication of detecting and characterizing individual systems. Each source is well-modeled by $\mathcal{O}(10)$ parameters and over $10^4$ sources are expected to be individually resolvable by LISA, resulting in a ${\sim}10^5$ parameter model and thus ruling out any brute-force grid-based method. Compounding the challenge is the fact that the GW signals, though narrow-band, are densely packed within the LISA measurement band to the extent that sources are overlapping. As a consequence, a hierarchical/iterative scheme where bright sources are removed and the data are reanalyzed produces biased parameter estimation and poorer detection efficiency: each iteration leaves behind some residual due to imperfect subtraction, and after enough iterations the residuals build up to the point where they limit further analysis~\cite{gClean}.
It was determined in the early 2000s that stochastic sampling algorithms performing a global fit to the resolvable binaries, while simultaneously fitting a model for the residual confusion or instrument noise and using Bayesian model selection to optimize the number of detectable sources, provided an effective approach. The first full-scale demonstration of a galactic binary analysis was put forward by Crowder and Cornish~\cite{Cornish:2005qw,Crowder:2006eu} with the Blocked Annealed Metropolis (BAM) Algorithm. The BAM Algorithm started from the full multi-year data set provided by the Mock LISA Data Challenges (MLDCs)~\cite{Babak_2008}. Because the sources are narrow-band compared to the full measurement band of the detector, the search was conducted independently on sub-regions in frequency. The analysis region in each segment was buffered by additional frequency bins that overlapped with neighboring segments. The noise spectrum was artificially increased over the buffer frequencies to suppress signal power from sources in neighboring bands which spread into the analysis window. The template waveforms were computed in the time domain, Fourier transformed, and tested against the frequency domain data. In accordance with the MLDC simulations, the waveform model did not include the intrinsic frequency evolution of the binaries, and the frequency-dependent detector noise level was assumed to be known \emph{a priori}. The BAM analysis was a quasi-Bayesian approach, using a generalized multi-source $\mathcal{F}$ statistic likelihood that maximized, rather than marginalized, over four of the extrinsic parameters of each waveform. Model parameters used flat priors except for the sky location, for which the prior was derived from an analytic model for the spatial distribution of binaries in the galaxy, projected onto the sky as viewed by LISA.
To improve the convergence of the algorithm, particularly for high-$\rm{SNR}$ signals, the sampler used simulated annealing~\cite{Kirkpatrick671} during the burn-in phase. To sample from the likelihood function, BAM employed a custom Markov Chain Monte Carlo (MCMC) algorithm with a mixture of proposal distributions including uniform draws from the prior, jumps along eigenvectors of the Fisher information matrix for a given source, and localized uniform jumps over a range scaled by the estimated parameter errors. The BAM Algorithm made use of domain knowledge by explicitly proposing jumps by the modulation frequency $f \rightarrow f \pm 1/{\rm yr}$ to explore sidebands of the signal imparted by LISA's orbital motion. To determine the number of detectable sources, BAM employed an approximate Bayesian model selection criterion, where models of increasing dimension (i.e., number of detectable sources) were hierarchically evaluated, starting with a single source in each analysis segment and progressively adding additional sources to the fit. Models of different dimension were ranked using the Laplace approximation to the Bayesian evidence~\cite{Jeffreys61}. The stopping criterion for the model exploration was met when the approximated model evidence reached a maximum. In response to the next generation of MLDCs, Littenberg~\cite{Littenberg:2011zg} extended the BAM Algorithm in several key ways, but maintained the original concept of analyzing independent segments with attention paid to the segment boundaries to avoid edge effects. The primary advancement of this generation of the search pipeline was the use of replica exchange between chains of different temperatures (parallel tempering)~\cite{PhysRevLett.57.2607} and marginalizing over the number of sources in the data (as opposed to hierarchically stepping through models) using a Reversible Jump MCMC (RJMCMC)~\cite{doi:10.1093/biomet/82.4.711} to identify the range of plausible models.
To guard against potentially poor mixing of the RJMCMC, a dedicated fixed-dimension follow-up analysis with Bayesian evidence computed via thermodynamic integration~\cite{Goggans_2004} was used for the final model selection determination. The algorithm continued using the $\mathcal{F}$ statistic likelihood and simulated annealing during burn-in (the ``search phase'') but switched to the full likelihood, sampling over all model parameters, during the parameter estimation and model selection phase of the analysis. The algorithm additionally made use of the burn-in by building proposal distributions from the biased samples derived during the non-Markovian search phase using a naive binning of the model parameters. The algorithm included a parameterized noise model, fitting coefficients to the expected noise power spectral density (proportional to the variance of the noise). The waveform model included frequency evolution, and was computed directly in the Fourier domain using the fast-slow decomposition described in~\cite{CornLitt07}. Experience gained from the noise modeling and trans-dimensional algorithms originally applied to the LISA galactic binary problem carried over into analyses of ground-based GW data from the LIGO-Virgo detectors. For spectral estimation the \texttt{BayesLine} algorithm uses a two-component phenomenological model to measure the frequency-dependent variance of the detector noise~\cite{Littenberg:2015}, while the \texttt{BayesWave} algorithm uses a linear combination of wavelets to fit short-duration non-Gaussian features in the data~\cite{Cornish:2014kda}. The wavelet model in each detector is independent when fitting noise transients, and is coherent across the network when fitting GW models. The Bayes factor between the coherent and incoherent models is used as a detection statistic as part of a hierarchical search pipeline~\cite{PhysRevD.93.022002}.
The number of wavelets, and components to the noise model, are all determined with an RJMCMC algorithm. The large volume of data, number of event candidates, and thorough measurement of search backgrounds motivated the development of global proposal distributions to improve convergence times of the samplers. The \texttt{BayesWave} and \texttt{BayesLine} models were both inspired by the previous work on the galactic binary problem, with the wavelets substituting for the UCB waveforms and the \texttt{BayesLine} model replacing the confusion noise fits. Completing the feedback loop, lessons learned from the development and deployment of the methods on the LIGO-Virgo data have formed part of the foundation in this work, particularly through global proposal distributions, numerical methods for reducing computational time of likelihood evaluations, and infrastructure for deploying the pipeline on distributed computing resources. \section{A New Hope} The new UCB algorithm incorporates many of the features from the earlier efforts, but improves on them in several ways. The biggest change is the adoption of a time-evolving strategy, which reflects the reality of the data collection. Analyzing the data as it is acquired also eliminates dedicated algorithm tuning choices for dealing with very loud sources. When new data are acquired the analysis starts on the residual after the bright sources identified previously are removed from the data. In each analysis segment, the removed sources are added back into the data before the RJMCMC begins sampling. This eliminates the problem of power leakage between analysis segments, and the resulting noise-model manipulation needed to prevent the fit from being biased by edge effects in each segment. The time-evolving analysis is also naturally ``annealed'' as the $\rm{SNR}$ of sources builds slowly over time.
Other significant changes include improvements to the RJMCMC implementation with the addition of global proposal distributions, which eliminate the need for a separate, non-Markovian, search phase or the fixed-dimension follow-up analysis for evidence calculation--the model selection is now robustly handled by the RJMCMC itself, as originally intended. For the first time in the context of our UCB work, we have also considered how to distill the unwieldy output from the RJMCMC into more readily usable, higher-level, data products, which are how the majority of the research community will interact with the LISA observations. The code described in this work is open source and available under the GNU/GPL v2 license~\cite{littenberg_tyson_2020_3756199}. \section{Model and Implementation} Bayesian inference requires the specification of a likelihood function and prior probability distributions for the model components. The implementation of the analysis employs stochastic sampling techniques, in our case the trans-dimensional Reversible Jump Markov Chain Monte Carlo (RJMCMC)~\cite{doi:10.1093/biomet/82.4.711} algorithm with replica exchange~\cite{PhysRevLett.57.2607}, to approximate the high dimensional integrals that define the marginalized posterior distributions. As with all MCMC algorithms, the choice of proposal distributions is critical to the performance. Here we detail the model and the implementation, hopefully in sufficient detail for the analysis to be repeated by others. \subsection{Likelihood function} The LISA science analysis can be carried out using any complete collection of Time Delay Interferometry (TDI) channels~\cite{Prince:2002hp, Adams:2010vc}. For example, we could use the set of Michelson-type channels $I=\{X,Y,Z\}$, or any linear combination thereof.
Schematically we can write ${\bf d}_I = {\bf h}_I + {\bf n}_I$, where ${\bf h}_I$ is the response of the $I^{\rm th}$ channel to all the gravitational wave signals in the Universe, and ${\bf n}_I$ is the combination of all the noise sources impacting that channel. Here the ``noise'' will include gravitational wave signals that are individually too quiet to extract from the data. The goal of the analysis is to reconstruct the detectable gravitational wave signal using a signal model ${\bf h}_I$ such that the residual ${\bf r}_I = {\bf d}_I - {\bf h}_I$ is consistent with the noise model. For Gaussian noise the likelihood is written as: \begin{equation} p({\bf d} | {\bf h}) = \frac{1}{(2\pi \, \det{\bf C})^{1/2}} \, e^{- \frac{1}{2}(d_{Ik} - h_{Ik}) C^{-1}_{(I k)(J m)} (d_{Jm} - h_{Jm})}\, , \end{equation} where ${\bf C}$ is the noise correlation matrix, and the implicit sum over indices spans the TDI channels $I=\{X,Y,Z\}$ and the data samples $k$. If the data are stationary, then the noise correlation matrix is partially diagonalized by moving to the frequency domain: $C_{(I k)(J m)} = S_{IJ}(f_k) \delta_{km}$, where $S_{IJ}(f)$ is the cross-power spectral density between channels $I,J$~\cite{Adams:2010vc}. The cross-spectral density matrix is diagonalized by performing a linear transformation in the space of TDI variables. If the noise levels are equal on each spacecraft, this leads to the $I'=\{A,E,T\}$ variables~\cite{Prince:2002hp} via the mapping \begin{equation} \left[ \begin{array}{c} A \\ \\ E \\ \\ T \end{array} \right] = \begin{bmatrix} \frac{2}{3} & -\frac{1}{3} & -\frac{1}{3} \\ && \\ 0 & - \frac{1}{\sqrt{3}}& \frac{1}{\sqrt{3}} \\ && \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{bmatrix} \left[ \begin{array}{c} X \\ \\Y \\ \\Z \end{array} \right] \end{equation} In practice, the noise levels in each spacecraft will not be equal, and the $\{A,E,T\}$ variables will not diagonalize the noise correlation matrix~\cite{Adams:2010vc}.
However, $\{A,E,T\}$ serve another purpose as they diagonalize the gravitational wave polarization response of the detector for signals with frequencies $f < f_* = 1/(2 \pi L) \simeq 19.1 \; {\rm mHz}$, such that $A \sim h_+$, $E \sim h_\times$ and $T \sim h_\odot$. Since the breathing mode $h_\odot$ vanishes in general relativity, the gravitational wave response of the $T$ channel is highly suppressed for $f < f_*$, making the $T$ channel particularly valuable for noise characterization and the detection of stochastic backgrounds~\cite{Tinto:2001ii,Hogan:2001jn} and un-modeled signals~\cite{travis4}. Full expressions for the instrument noise contributions to the cross spectra $S_{IJ}(f)$ are given in Ref.~\cite{Adams:2010vc}. Added to these expressions will be contributions from the ``confusion noise'' from the millions of signals that are too quiet to detect individually. The confusion noise will add to the overall noise as well as introduce off-diagonal terms in the frequency domain noise correlation matrix ${\bf C}$, as the confusion noise is inherently non-stationary with periodic amplitude modulations imparted by LISA's orbital motion~\cite{PhysRevD.69.123005}. For now we have made a number of simplifying assumptions that will be relaxed in future work: We ignore the non-stationarity of the noise and assume that the noise correlation matrix is diagonal in the frequency domain; In addition, since we are mostly interested in signals with frequencies well below the transfer frequency $f_* \simeq 19.1 \; {\rm mHz}$, we only use the $A$ and $E$ data combinations in the analysis, and we assume that the noise in these channels is uncorrelated; Rather than working with a component level model for the noise, as was done in Ref.~\cite{Adams:2010vc}, we break the analysis up into narrow frequency bands $[f_i, f_i+\Delta f]$ and approximate the noise in each band as an undetermined constant $S_i$. 
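Under these simplifying assumptions the likelihood factorizes over frequency bins and channels, and the $X,Y,Z \rightarrow A,E,T$ mapping quoted above is a fixed linear transformation. The following sketch illustrates both pieces; the function names and the normalization convention (one-sided PSD, up to additive constants) are ours, not the pipeline's:

```python
import numpy as np

# X,Y,Z -> A,E,T mapping from the text (equal-noise approximation)
M_AET = np.array([[2/3, -1/3, -1/3],
                  [0.0, -1/np.sqrt(3), 1/np.sqrt(3)],
                  [1/3,  1/3,  1/3]])

def aet_from_xyz(x, y, z):
    """Apply the linear TDI transformation to (possibly complex) data arrays."""
    a, e, t = M_AET @ np.stack([x, y, z])
    return a, e, t

def log_likelihood(d, h, S, df):
    """Gaussian log-likelihood for one channel with a diagonal frequency-domain
    covariance C_km = S(f_k) delta_km, up to constants:
    -2 df sum |d - h|^2 / S  -  sum ln S."""
    r = d - h
    return float(-2.0 * df * np.sum(np.abs(r)**2 / S) - np.sum(np.log(S)))
```

With a piecewise-constant noise model, `S` is simply filled with the single parameter $S_i$ of the band being analyzed; a residual consistent with the noise maximizes this expression on average.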
The noise level in each band becomes a parameter to be explored by the RJMCMC algorithm, resulting in a piecewise fit to the instrument noise over the full analysis band. The signal model ${\mathbf h}(\mathbf\Lambda)$ is the superposition of the individual UCB signals in the model, each parameterized by $\params$: \begin{equation} {\mathbf h}_I(\mathbf\Lambda) = \sum_{a=1}^{N_{\rm GW}} {\boldsymbol h}_I(\params_a) \end{equation} where ${\boldsymbol h}_I(\params_a)$ denotes the detector response of the $I^{\rm th}$ data channel to the signal from a galactic binary with parameters $\params_a$. Note that the number of detectable systems, $N_{\rm GW}$, is {\it a priori} unknown, and has to be determined from the analysis. Indeed, we will arrive at a probability distribution for $N_{\rm GW}$, which implies that there will be no single definitive source catalog. The individual binary systems are modeled as isolated point masses on slowly evolving quasi-circular orbits, neglecting the possibility of orbital eccentricity~\cite{Seto:2001pg}, tides~\cite{2012MNRAS.421..426F} or third bodies~\cite{Robson:2018svj}. The signals are modeled using leading order post-Newtonian waveforms. The instrument response includes finite arm-length effects of the LISA constellation and arbitrary spacecraft orbits, but the TDI prescription currently implemented makes the simplifying assumption that the arm lengths are equal and unchanging with time. Adopting more realistic instrument response functions increases the computational cost but does not change the complexity of the analysis. To compute the waveforms, a fast/slow decomposition is employed that allows the waveforms to be modeled efficiently in the frequency domain~\cite{Cornish:2007if}.
The basic idea is to use trigonometric identities to re-write the detector response to the signal in the form $h(t) = a(t) \cos(2 \pi f_k t)$ where $f_k = n_k/T_{\rm obs}$, $n_k = {\rm int}[ f_0 T_{\rm obs}]$, and $f_0$ is the gravitational wave frequency of the signal (twice the orbital frequency) at some fiducial reference time. The Fourier transform of $h(t)$ is then $\tilde h(f) =\frac{1}{2} ( \tilde a(f-f_k) + \tilde a(f+f_k))$. Since $a(t)$,which includes the orbital evolution and time-varying detector response, varies much more slowly than the carrier signal $\tilde h(f) =\frac{1}{2} \tilde a(f-f_k)$, the Fourier transform of $a(t)$ is computed numerically using a lower sample cadence than needed to cover the carrier. A sample cadence of days is usually sufficient. Note that in the original implementation~\cite{Cornish:2007if} the signal was written as $h(t) = a(t) \cos(2 \pi f_0 t)$, which was less efficient as it required the convolution $\tilde{h} * \tilde{a}$. By mapping the carrier frequency to a multiple of the inverse observation time the Fourier transform of the carrier becomes a pair of delta functions and the convolution becomes the sum of just two terms, one of which effectively vanishes. Each binary is parameterized by $N_{\rm P}$ parameters. $N_{\rm P}$ is typically eight, with $\params \rightarrow (\mathcal{A}, f_0, \dot{f}, \varphi_0, \iota, \psi, \theta, \phi)$, where $\mathcal{A}$ is the amplitude, $f_0$ is the initial frequency, $\dot{f}$ is the (constant) time derivative of the, $\varphi_0$ is the initial phase, $\iota$ is the inclination of the orbit, $\psi$ the polarization angle and $\theta,\phi$ are the sky location in an ecliptic coordinate system. 
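The bin-snapped carrier described above makes the convolution trivial: the positive-frequency content of $h(t)=a(t)\cos(2\pi f_k t)$ is just the coarse FFT of the envelope, shifted to bin $n_k$. A minimal toy illustration of this idea (our own sketch, not the pipeline's waveform code):

```python
import numpy as np

def fastslow_fft(a_slow, f0, T_obs, n_fast):
    """Sketch of the fast/slow trick: FFT the slowly varying envelope a(t),
    sampled coarsely over T_obs, and shift it to the carrier bin
    n_k = int(f0 * T_obs).  Returns the positive-frequency Fourier
    coefficients of a(t) cos(2 pi f_k t); the negative-carrier term
    (the one that 'effectively vanishes') is dropped."""
    n_k = int(f0 * T_obs)                  # carrier snapped to a Fourier bin
    n_slow = len(a_slow)
    A = np.fft.fft(a_slow) / n_slow        # coarse Fourier coefficients of a(t)
    h = np.zeros(n_fast, dtype=complex)
    for m in range(-n_slow // 2, n_slow // 2):
        h[(n_k + m) % n_fast] += 0.5 * A[m % n_slow]
    return h
```

Because the envelope is band-limited, the coarse FFT (a cadence of days in the real analysis) already contains all of its spectral content, so this agrees with a direct FFT of the finely sampled signal near the carrier bin.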
If the evolution of the binary were purely driven by gravitational wave emission we could replace the parameters $\left\{\mathcal{A}, \dot{f}\right\}$ by the chirp mass ${\cal M}$ and luminosity distance $D_L$ via the mapping \begin{eqnarray}\label{MD} \dot{f} &=& \frac{96}{5} \pi^{8/3} {\cal M}^{5/3} f_0^{11/3} \nonumber \\ \mathcal{A} &=& \frac{2 {\cal M}^{5/3} \pi^{2/3} f_0^{2/3}}{D_L} \, . \end{eqnarray} We prefer the $\left\{\mathcal{A},\dot{f}\right\}$ parameterization as it is flexible enough to fit systems with non-GW contributions to the orbital dynamics, e.g. mass transferring systems, and it is better suited to modeling systems where $\dot{f}$ is poorly constrained (it is better to have just one parameter filling its prior range than two). For binaries with unambiguously positive $\dot{f}$, and assuming GW-dominated evolution of the orbit, we resample the posteriors to ${\cal M}$ and $D_L$ in post-processing~\cite{Littenberg_2019}. We also have optional settings to increase $N_{\rm P}$ by including the second derivative of the frequency~\cite{Littenberg_2019}, in which case the frequency derivative is no longer constant, so the parameter $\dot{f}\rightarrow\dot{f}_0$ is referenced to the same fiducial time as $f_0$ and $\varphi_0$. Additional optional changes to the source parameterization include holding an arbitrary number of parameters fixed at input values determined, for example, by EM observations~\cite{Littenberg_2019b}, or including parameters which use the UCBs as phase/amplitude standards for self-calibration of the data~\cite{Littenberg_2018}. \subsection{Prior distributions} The model parameters are given by the $N_n$ noise levels for each frequency band $S_i$ and the collection of $N_{\rm GW}\times N_{\rm P}$ signal parameters $\Lambda$. The number of noise parameters $N_n$ is fixed by our choice of bandwidth $\Delta f$ and the frequency range we wish to cover in the analysis.
In the current configuration of the pipeline we use analysis windows with $\Delta f \sim\mathcal{O}(\mu\rm{Hz})$ in width, resulting in $N_n=\mathcal{O}(10^4)$ noise parameters to cover the full measurement band of the mission. We use a uniform prior range $S_i \in [10^{-1} S_I(f_i), 10^{2} S_I(f_i)]$ where $S_I(f_i)$ is the theoretical value for the noise level of data channel $I$ used to generate the data. In practice the prior ranges on the noise will be set using information from the commissioning phase of the mission. The total number of detectable signals $N_{\rm GW}$ per frequency band is unknown. We use a uniform prior covering the range $N_{\rm GW} \in U[0,30]$. For the individual source parameters we use uniform priors on the initial phase $\varphi_0 \in [0,2\pi]$ and polarization angle $\psi \in [0,\pi]$, and a uniform prior on the cosine of the inclination $\cos \iota \in [-1,1]$. In each analysis window the initial frequency $f_0$ is taken to have a uniform prior covering the range $[f_i, f_i+\Delta f]$. The allowed range of the frequency derivative is informed by population synthesis models which provide information on the mass and frequency distribution of galactic binaries~\cite{Toonen_2012}. While the expression for the frequency derivative is only valid for isolated point masses, the balancing of accretion torques and gravitational wave emission in mass-transferring AM CVn type systems is thought to lead to a similar magnitude for the frequency derivative, but with the sign reversed~\cite{Nelemans2010}. Using these considerations as input, we adopt a uniform prior on $\dot{f}$ in each frequency band that covers the range $\dot{f} =[ - 5\times 10^{-6} f_i^{13/3}, 8\times 10^{8} f_i^{11/3}]$. \begin{figure}[htp] \includegraphics[width=0.5\textwidth]{figures/galaxy_prior.pdf} \caption{\label{fig:skyprior} The sky prior plotted in ecliptic coordinates.
The color scale is logarithmic prior density $\ln p(\theta,\phi)$.} \end{figure} For RJMCMC algorithms with scale parameters--in our case the amplitude--the choice of prior influences both the recovery of those parameters and the model posterior. For example, a simple uniform prior $U[0,\mathcal{A}_{\rm max}]$ will support including low-amplitude sources in the model. Adding a source to the model with $\rm{SNR}\sim0$ will not degrade the likelihood, and the remaining model parameters will sample their prior such that the so-called ``Occam penalty'' from including extra (unconstrained) parameters is small. The need to derive an amplitude prior that results in model posteriors as we intuitively expect--namely that templates are included in the model predominantly when there is a detectable source for them to fit--and does not bias the recovery of the amplitude parameter was addressed in the \texttt{BayesWave} algorithm~\cite{Cornish:2014kda}. There the prior on the amplitudes had to be chosen to prevent large numbers of low-amplitude wavelets from saturating the model. The solution was to evaluate the prior not on the amplitudes themselves, but on the {\rm{SNR}} of the wavelet. The prior was tuned to go to 0 at low \rm{SNR}, peak in the regime where most wavelets were expected to appear in the model (near the ``detection'' threshold), and taper off at high \rm{SNR}. We adopt that approach for the UCB model as follows. Up to geometrical factors of order unity, the $\rm{SNR}$ of a galactic binary $\rho$ is related to the amplitude via the linear mapping \begin{equation} \rho = \frac{\mathcal{A}}{2} \left( \frac{T_{\rm obs}\sin^2(f_0/f_*)}{S_A(f_0)}\right)^{1/2} \, . \end{equation} The prior on the amplitude is then mapped from a prior on $\rho$ of the form \begin{equation} p(\rho) = \frac{3 \rho}{4 {\rho}_*^2 (1 + {\rho}/(4{\rho}_*))^5} \end{equation} which peaks at $\rho=\rho_*$ and falls off as $\rho^{-4}$ for large $\rho$.
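This density is exactly normalized on $\rho\in[0,\infty)$, and setting $\mathrm{d}p/\mathrm{d}\rho = 0$ gives $1+\rho/(4\rho_*) = 5\rho/(4\rho_*)$, i.e. a mode at $\rho=\rho_*$. Both properties are easy to check numerically (illustrative code, not the pipeline's):

```python
import numpy as np

def snr_prior(rho, rho_star=10.0):
    """SNR prior from the text: peaks at rho_star, falls off as rho**-4."""
    return 3.0 * rho / (4.0 * rho_star**2 * (1.0 + rho / (4.0 * rho_star))**5)

# midpoint-rule check of normalization and mode location
drho = 0.005
rho = np.arange(0.0, 1.0e4, drho) + drho / 2.0
p = snr_prior(rho)
# np.sum(p) * drho is ~1 (up to the truncated tail beyond rho = 1e4),
# and rho[np.argmax(p)] lands at ~rho_star
```

In the sampler this prior is evaluated on the amplitude through the linear $\mathcal{A}\leftrightarrow\rho$ mapping quoted above, so the Jacobian of the change of variables is a constant.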
Because most detections will be close to the detection threshold we set $\rho_* = 10$. For bright sources the likelihood, which scales as $e^{\rho^2}$, overwhelms the prior, and our choice of prior has little influence on the recovered amplitudes. For the sky location the pipeline has support for two options: either a uniform prior on the sky, or a prior weighted towards sources being distributed in the galaxy according to an analytic model for its overall shape. As currently implemented we use a simple bulge-plus-disk model for the stellar distribution of the form \begin{equation} \varrho = \varrho_0 \left[ \alpha e^{-r^2/R_b^2} + (1-\alpha) e^{-u/R_d} \rm{sech}^2\left(\frac{z}{Z_d}\right)\right]. \end{equation} Here $r^2 = x^2+y^2 +z^2$ and $u^2=x^2+y^2$, and $x,y,z$ are a set of Cartesian coordinates with origin at the center of the galaxy and the $z$ axis orthogonal to the galactic plane. The parameters are the overall density scaling $\varrho_0$, bulge fraction $\alpha$, bulge radius $R_b$, disk radius $R_d$ and disk scale height $Z_d$. Ideally we would make these quantities hyper-parameters in a hierarchical Bayesian scheme~\cite{Adams:2012qw}, but for now we have fixed them to the fiducial values $\alpha=0.25$, $R_b=0.8$ kpc, $R_d=2.5$ kpc, and $Z_d=0.4$ kpc, with $\varrho_0$ determined by numerically normalizing the distribution. LISA views the galaxy from a location that is offset from the galactic center by an amount $R_G$ in the $x$-direction, and we use ecliptic coordinates to define the sky locations. This necessitates that we apply a translation and rotation to the original galactic coordinates. We then compute the density $\varrho(\theta,\phi)$ in the new coordinate system and normalize the density on the sky to unity for use as a prior. In order to ensure full sky coverage we rescale the normalized density by a factor of $(1-\beta)$ and add to it a uniform sky distribution that has total probability $\beta$.
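The construction just described, integrating the bulge-plus-disk density along each line of sight from the observer's position and mixing in a uniform floor of weight $\beta$, can be sketched as follows. The grid resolution, the line-of-sight quadrature, and the offset $R_G = 8$ kpc are illustrative assumptions; for brevity we stay in galactic coordinates (skipping the rotation to ecliptic) and normalize per pixel rather than per steradian:

```python
import numpy as np

# fiducial galaxy-model parameters from the text (kpc); R_G is an assumed value
ALPHA, R_B, R_D, Z_D, R_G = 0.25, 0.8, 2.5, 0.4, 8.0

def density(x, y, z):
    """Unnormalized bulge-plus-disk stellar density."""
    r2 = x * x + y * y + z * z
    u = np.sqrt(x * x + y * y)
    return ALPHA * np.exp(-r2 / R_B**2) \
        + (1.0 - ALPHA) * np.exp(-u / R_D) / np.cosh(z / Z_D)**2

def sky_prior(n_th=60, n_ph=120, beta=0.1, s_max=30.0, n_s=300):
    """Integrate density * s^2 along each line of sight from (R_G, 0, 0),
    normalize over pixels, and mix with a uniform floor of weight beta."""
    th = (np.arange(n_th) + 0.5) * np.pi / n_th            # colatitude
    ph = (np.arange(n_ph) + 0.5) * 2.0 * np.pi / n_ph
    T, P = np.meshgrid(th, ph, indexing="ij")
    p = np.zeros_like(T)
    ds = s_max / n_s
    for s in (np.arange(n_s) + 0.5) * ds:                  # midpoint quadrature
        x = R_G + s * np.sin(T) * np.cos(P)
        y = s * np.sin(T) * np.sin(P)
        z = s * np.cos(T)
        p += density(x, y, z) * s * s * ds                 # volume element s^2 ds
    p /= p.sum()
    return (1.0 - beta) * p + beta / (n_th * n_ph)
```

As expected, the most probable pixel lies toward the galactic center as seen from the offset position, i.e. near $\theta \approx \pi/2$, $\phi \approx \pi$ on this grid.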
Figure~\ref{fig:skyprior} shows the sky prior for the choice $\beta = 0.1$. \subsection{Trans-dimensional MCMC} Trans-dimensional modeling is a powerful technique that simultaneously explores the range of plausible models for the data as well as the parameters of each candidate model. The trans-dimensional approach is particularly valuable in situations where it is unclear how many components should be included in the model and there is a danger of either over- or under-fitting the data. Trans-dimensional modeling allows us to explore a wide class of models in keeping with our motto ``model everything and let the data sort it out''~\cite{Cornish:2014kda}. While fixed dimension (signal model) sampling techniques have thus far proven sufficient for LIGO-Virgo analyses of isolated events, we see no alternative to using trans-dimensional algorithms for the multi-source fitting required for LISA data analysis. Trans-dimensional MCMC algorithms are really no different from ordinary MCMC algorithms. They simply operate on an extended parameter space that is written in terms of a model indicator parameter $k$ and the associated parameter vector $\vec{\theta}_k$. It is worth noting that the number of models can be vast. For example, suppose we were addressing the full LISA data analysis problem using a model that included up to $N_{\rm UCB}\sim 10^5$ galactic binaries, $N_{\rm BH} \sim 10^3$ supermassive black holes, $N_{\rm EMRI}\sim 10^3$ extreme mass ratio inspirals and $N_{\rm n}\sim 10^3$ parameters in the noise model. Since the number of parameters for each model component is not fixed, the total number of possible models is the {\em product}, not the sum, of the number of possible sub-components, resulting in $\sim 10^{14}$ possible models in this instance. The advantage of the RJMCMC method is that it is not necessary to enumerate or sample from all possible models but, rather, to have the {\em possibility} of visiting the complete range of models.
This is in contrast to the product space approach~\cite{10.2307/2346151}, which requires that all models be enumerated and explored, with the result that most of the computing effort is spent exploring models that have little or no support. Just as an ordinary MCMC spends the majority of its time exploring the regions of parameter space with high posterior density, the RJMCMC algorithm spends most of the time exploring the most favorable models. Our goal is to compute the joint posterior of model $k$ and parameters $\gparams_k$ \begin{equation} p(k, \gparams_k | {\bf d}) = \frac{ p({\bf d} | k, \gparams_k) p(k, \gparams_k)}{ p({\bf d})} \end{equation} which is factored as \begin{equation} p(k, \gparams_k | {\bf d}) = p(k | {\bf d}) p(\gparams_k | k, {\bf d}) \, , \end{equation} where $p(k|{\bf d})$ is the posterior on the model probabilities and $p(\gparams_k | k, {\bf d})$ is the usual parameter posterior distribution for model $k$. The quantity $O_{ij} = p(i | {\bf d})/p(j | {\bf d})$ is the odds ratio between models $i,j$. The RJMCMC algorithm generates samples from the joint posterior distribution $p(k, \vec{\theta}_k | {\bf d})$ by developing a Markov chain that proposes transitions from state $\{l,\gparams_l\}$ to state $\{k, \gparams_k\}$ using a proposal distribution $q(\{l,\gparams_l\}, \{k, \gparams_k\})$. Transitions are accepted with probability $\alpha = \min \left\{ 1, H_{l\rightarrow k} \right\}$ with the Hastings ratio \begin{equation}\label{rjmcmc} H_{l\rightarrow k} = \frac{p({\bf d} | k, \gparams_k)}{p({\bf d} | l, \gparams_l)} \, \frac{p(k, \gparams_k)}{p(l, \gparams_l)} \, \frac{q(\{k, \gparams_k\}, \{l,\gparams_l\})}{q(\{l,\gparams_l\}, \{k, \gparams_k\} )}. \end{equation} Proposals are usually separated into within-model moves, where $k=l$ and only the model parameters are updated, and between-model moves where both the model indicator and the model parameters are updated.
Written in the form of Eq.~\ref{rjmcmc} the RJMCMC algorithm is no different from the usual Metropolis-Hastings algorithm. In practice the implementation is complicated by the need to match dimensions between the model states, which introduces a Jacobian determinant of the mapping function~\cite{doi:10.1093/biomet/82.4.711}. This can all become very confusing, and may explain the slow adoption of trans-dimensional modeling in the gravitational wave community. Thankfully the models we consider are {\em nested}, such that the transition from state $k$ to $l$ involves the addition or removal of a model component. In the case of nested models the mapping function is a linear addition or subtraction of parameters, and the Jacobian is simply the ratio of the prior volumes~\cite{doi:10.1111/j.1365-246X.2006.03155.x}. For example, the Hastings ratio for adding a single UCB source with parameters $\params_{k+1}$ to the current state of the model already using $k$ templates (with joint parameters $\mathbf\Lambda_k$) is \begin{equation} H_{k\rightarrow k+1} = \frac{p({\bf d} |\mathbf\Lambda_k, \params_{k+1}) p(\params_{k+1}) } {p({\bf d} |\mathbf\Lambda_k) q(\params_{k+1})} \end{equation} where $q(\params_{k+1})$ is the proposal distribution that generated the new source parameters, and we assume for the reverse move ($k+1\rightarrow k$) that existing sources are selected for removal with uniform probability. The efficiency of any MCMC algorithm depends critically on the choice of proposal distributions. The necessity for finding good proposal distributions is even more acute for the trans-dimensional moves of an RJMCMC algorithm. In the UCB pipeline, an increase in dimension comes about when a new waveform template is added to the solution. For such a move to succeed, the parameters for the new source must land sufficiently close to the true parameters of some signal for the transition to be accepted.
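The accept/reject step implied by this ratio is compact enough to sketch directly. The following Python fragment is an illustration only, not part of the pipeline, and all names are hypothetical; it works with log densities to avoid numerical underflow:

```python
import math

def accept_birth(log_like_new, log_like_old, log_prior_new, log_prop_new, u):
    """Metropolis-Hastings test for a nested 'birth' move (k -> k+1 templates).

    For nested models the Jacobian is unity, so the log Hastings ratio is the
    log-likelihood ratio plus the log prior density minus the log proposal
    density of the new source's parameters.  `u` is a uniform [0,1) variate.
    """
    log_h = (log_like_new - log_like_old) + log_prior_new - log_prop_new
    return math.log(u) < log_h
```

In the sampler, `u` would be a fresh uniform draw for every proposed birth; passing it explicitly here makes the test deterministic.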
Arbitrarily choosing the $N_{\rm P}$ parameters that define a signal has a low probability of improving the likelihood enough for the transition to be accepted. The strategy we have adopted to improve the efficiency, which is explicitly detailed in the following section, is to identify promising regions of parameter space in pre-processing, in effect producing coarse global maps of the likelihood function, and to use these maps as proposal distributions. The global proposals are also effective at promoting exploration of the multiple posterior modes that are a common feature of GW parameter spaces for single sources. To further aid in mixing we use replica exchange (also known as parallel tempering). Parallel tempering uses a collection of chains to explore models with the modified likelihood $p({\bf d} | \mathbf\Lambda, \beta) = p({\bf d} | \mathbf\Lambda)^{\beta}$, where $\beta\in[0,1]$ is an inverse ``temperature''. Chains with high temperatures (low $\beta$) explore a flattened likelihood landscape and move more easily between posterior modes, while chains with lower temperatures sample the likelihood around candidate sources and map out the peaks in more detail. Only those chains with $\beta=1$ provide samples from the target posterior. A collection of chains at different temperatures are run in parallel, and information is passed up and down the temperature ladder by proposing parameter swaps, which are accepted with probability $\alpha = \min\left\{ 1,H_{i\leftrightarrow j}\right\}$ and \begin{equation}\label{ptmcmc} H_{i\leftrightarrow j} = \frac{ p({\bf d} | i, \mathbf\Lambda_i,\beta_j) \, p({\bf d} | j, \mathbf\Lambda_j,\beta_i)}{ p({\bf d} | i, \mathbf\Lambda_i,\beta_i) \, p({\bf d} | j, \mathbf\Lambda_j,\beta_j) } \, . \end{equation} Here we are proposing to swap the parameters of the model $\{i, \mathbf\Lambda_i\}$ at inverse temperature $\beta_i$ with the model $\{j, \mathbf\Lambda_j\}$ at inverse temperature $\beta_j$.
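In log form the exchange test reduces to $\log H_{i\leftrightarrow j} = (\beta_i - \beta_j)(\ln L_j - \ln L_i)$, which makes the limiting behaviors easy to see in code. The sketch below is illustrative only; names are hypothetical:

```python
import math

def swap_accept(log_like_i, log_like_j, beta_i, beta_j, u):
    """Exchange test for chains at inverse temperatures beta_i and beta_j.

    log H = (beta_i - beta_j) * (ln L_j - ln L_i): swaps between equal
    temperatures are always accepted, while a large likelihood gap between
    widely separated temperatures suppresses the swap.
    `u` is a uniform [0,1) variate.
    """
    log_h = (beta_i - beta_j) * (log_like_j - log_like_i)
    return math.log(u) < log_h
```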
Note that if $\beta_i=\beta_j$ the swap is always accepted. Models with higher temperatures typically have lower likelihoods. If the likelihoods of the two models are very different, the Hastings ratio $H_{i\leftrightarrow j}$ will be small. We only propose exchanges between chains that are near one another in temperature. Choosing the temperature ladder so that chain swaps are readily accepted is a challenge. The situation we need to avoid is a break in the chain, where a collection of hotter chains decouples from the colder chains such that no transitions occur between the two groups. When that happens the effort spent evolving the hot chains is wasted, as their findings are never communicated down the temperature ladder to the $\beta=1$ chain(s) that accumulate the posterior samples. It is generally more effective to run a large number of chains that are closely spaced in temperature for a few iterations than it is to run fewer chains for longer. We adopt the scheme described in Ref.~\cite{Vousden_2015}, in which the temperature spacing between chains is adjusted based on the acceptance rates of chain swaps, with the size of the adjustments asymptotically approaching zero as the number of chain iterations increases. Thus the temperature spacing adjusts dynamically to the rapidly changing model while the sampler is ``burning in,'' but settles into a steady state once the sampler is exploring the posterior. \subsection{Proposal Distributions} As mentioned previously, the efficiency of an MCMC algorithm is heavily dependent on the design of the proposal distributions. This ``tuning'' requirement for an efficient MCMC has led to the development of samplers designed to be more agnostic to the parameter space such as ensemble samplers (e.g.~\cite{Foreman_Mackey_2013}), Hamiltonian Monte Carlo~\cite{betancourt2017conceptual}, etc.
However, there has been less development of alternatives for sampling trans-dimensional posteriors, and the scale of the LISA UCB problem may make brute-force evaluation of many competing models prohibitive. It is our view that continued innovation in development of custom proposal distributions that leverage the hard-earned domain knowledge is worth the investment. To that end, we observe that the posterior is the ideal proposal distribution--setting $q(\{i,\mathbf\Lambda_i\}, \{j, \mathbf\Lambda_j\} ) = p(j, \mathbf\Lambda_j | {\bf d})$ we have $H_{i\rightarrow j}=1$, so every proposed move is accepted and the correlation between successive samples can be made arbitrarily small. Of course, if we could produce independent samples from the posterior in advance there would be no need to perform the MCMC, but this observation provides guidance in the design of effective proposal distributions--we seek distributions that are computationally efficient approximations to the posterior distribution, which usually amounts to finding good approximations to the likelihood function. Consider the log likelihood for model $k$ describing $N_k$ galactic binaries, which is written as \begin{eqnarray} \label{ll} \ln p({\bf d} | k, \mathbf\Lambda_k ) &=& \sum_{i=1}^{N_k} \ln p({\bf d} | \params_i) + \frac{1}{2}(N_k-1)\innerproduct{{\bf d}}{{\bf d}} \nonumber \\ &-&\sum_{i>j} \innerproduct{{\boldsymbol h}(\params_i)}{{\boldsymbol h}(\params_j)} \, , \end{eqnarray} where \begin{equation} \innerproduct{a}{b} \equiv a_{Im} C^{-1}_{(I m)(J n)}(\vec{\kappa}) b_{Jn} \end{equation} and we are neglecting terms from the noise parameters. The first term in the expression for the log likelihood in Eq.~\ref{ll} is the sum of the individual likelihoods for each source, while the final term describes the correlations between the sources.
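For intuition, the inner product and the normalized overlap used below to compare sources can be sketched with a diagonal noise covariance (a one-sided PSD) standing in for the full $C^{-1}_{(Im)(Jn)}$. This is a toy simplification, not the pipeline implementation:

```python
import numpy as np

def inner(a, b, psd):
    """Noise-weighted inner product with a diagonal approximation to the
    covariance: sum over frequency bins of Re(a * conj(b)) / PSD."""
    return np.sum(np.real(a * np.conj(b)) / psd)

def match(h_i, h_j, psd):
    """Normalized overlap between two frequency-domain templates."""
    return inner(h_i, h_j, psd) / np.sqrt(
        inner(h_i, h_i, psd) * inner(h_j, h_j, psd))
```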
While accounting for these correlations is crucial to the global analysis, the correlation between any pair of sources is typically quite small, and we ignore these correlations in the interest of finding a computationally efficient approximation to the likelihood to use as a proposal. Figure~\ref{fig:overlap} shows the maximum match between pairs of sources with $\rm{SNR} > 7$, using a simulated galactic population and assuming 1, 2, and 4 year observation periods. Here the match, or overlap, is defined as: \begin{equation}\label{eq:match} M_{ij} \equiv \frac{\innerproduct{{\boldsymbol h}(\params_i)}{{\boldsymbol h}(\params_j)}}{\sqrt{ \innerproduct{{\boldsymbol h}(\params_i)}{{\boldsymbol h}(\params_i)} \innerproduct{{\boldsymbol h}(\params_j)}{{\boldsymbol h}(\params_j)} }}\, , \end{equation} and we are using the $A,E$ TDI data channels. Less than 1\% of sources have overlaps greater than 50\%, and the fraction diminishes with increased observing time. Thus we will develop proposals for individual sources and propose updates to their parameters independently of other sources in the model. The MCMC still marginalizes over the broader parameter space, including the rare but non-zero case of non-negligible covariances between sources, in effect executing a blocked Gibbs sampler where the blocks are individual sources' parameters. \begin{figure}[htp] \includegraphics[clip=true,angle=0,width=0.5\textwidth]{figures/match.pdf} \caption{\label{fig:overlap} Survival function of the maximum match between any pair of detectable sources computed using a simulated galactic population of UCBs. For 1 year of observing (green) $\lesssim1\%$ of sources have overlaps greater than 50\%.
That fraction is reduced to $0.1\%$ after 2 years (orange), and $0$ after 4 years (purple) as the resolving power of LISA increases.} \end{figure} \subsubsection{$\mathcal{F}$ statistic Proposal} We construct a global proposal density using the single source $\mathcal{F}$ statistic to compute the individual likelihoods $\ln p({\bf d} | \params_i)$ maximized over the extrinsic parameters $\mathcal{A}, \varphi_0, \iota, \psi$. Up to constants that depend on the noise parameters, the maximized log likelihood is equal to \begin{equation} {\cal F}(f_0,\theta, \phi) = \frac{1}{2} \innerproduct{{\bf d}}{{\bf g}_i} \innerproduct{{\bf g}_i}{{\bf g}_j}^{-1} \innerproduct{{\bf d}}{{\bf g}_j} \end{equation} where the four filters ${\bf g}_i$ are found by computing waveforms with parameters $f_0, \dot{f}=0, \theta, \phi$, $\mathcal{A}=2$ and \begin{eqnarray} \label{filters} && {\bf g}_1 = {\boldsymbol h}\left(\varphi_0=0, \iota=\frac{\pi}{2},\psi=0\right) \\ && {\bf g}_2 = {\boldsymbol h}\left(\varphi_0=\pi, \iota=\frac{\pi}{2},\psi=\frac{\pi}{4}\right) \\ && {\bf g}_3 = {\boldsymbol h}\left(\varphi_0=\frac{3\pi}{2}, \iota=\frac{\pi}{2},\psi=0\right) \\ && {\bf g}_4 = {\boldsymbol h}\left(\varphi_0=\frac{\pi}{2}, \iota=\frac{\pi}{2},\psi=\frac{\pi}{4}\right) \, . \end{eqnarray} The $\mathcal{F}$ statistic proposal is the three dimensional histogram precomputed from the data using a grid in $f_0, \theta, \phi$. We use a fixed grid spacing governed by what is needed for the best-resolved sources, which are found in the ecliptic plane (maximizing the Doppler modulations imparted by LISA's orbital motion) and at the highest frequencies covered by the analysis. The probability density of a cell $(a,b,c)$ of the three-dimensional histogram is ${\cal F}_{a,b,c}$ normalized by the sum of ${\cal F}$ over all cells, and the parameter volume of the cell.
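The bookkeeping for such a histogram proposal, normalizing a grid of $\mathcal{F}$ values into a density and drawing cells with probability proportional to $\mathcal{F}$, can be sketched as follows. This is illustrative only; the pipeline's grid construction is more involved:

```python
import numpy as np

def fstat_proposal(F, cell_volumes, rng):
    """Build a proposal from a flattened grid of F-statistic values.

    Returns the probability density of each cell (cell weight divided by its
    parameter volume) and a sampler that draws flat cell indices with
    probability proportional to F.
    """
    weights = F / F.sum()
    density = weights / cell_volumes
    return density, lambda: int(rng.choice(F.size, p=weights))
```

A drawn flat index would then be mapped back to a cell in $(f_0, \theta, \phi)$, with parameters drawn uniformly within the cell.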
The optimal spacing of the grid can be estimated from the reduced Fisher information matrix $\gamma_{ij}$, which is found by projecting out the parameters $\mathcal{A}, \varphi_0, \iota, \psi$ from the full Fisher information matrix $\Gamma_{ij} = \innerproduct{\partial{\boldsymbol h}/\partial\params_i}{\partial{\boldsymbol h}/\partial\params_j}$~\cite{Cornish:2005qw}. The reduced Fisher matrix is not constant across the parameter space and will naturally reduce the grid size as $f_0$ gets larger, and for sky locations near the ecliptic equator compared to those near the poles. The grid spacing will also become finer as the observation time grows. These modifications, as well as extending to a 4D grid including $\dot{f}$, will further improve the efficiency of the proposal and are left for future development. \begin{figure}[htp] \begin{center} \includegraphics[clip=true,width=0.5\textwidth]{figures/fstat_sky.pdf} \end{center} \vspace*{-0.2in} \caption{\label{fig:fstat-proposal} Frequency slices of the multidimensional $\mathcal{F}$ statistic proposal for the same segment of data shown in Fig.~\ref{fig:money_plot}. The color scale is linear in the proposal density, and each panel is on the same scale. The proposal promotes frequencies and sky locations consistent with the signal in the data (top right and bottom left panels) and returns a low-density and diffuse distribution at frequencies consistent with random noise (top left and bottom right panels).} \end{figure} \subsubsection{Multi-modal Proposals} Due to the parameterization of the gravitational wave signals, and the instrument response to those signals, there are known exact or near degeneracies which appear as distinct modes in the likelihood/posterior distribution. While MCMC algorithms are not generically efficient at sampling from multimodal distributions, we have developed dedicated proposal distributions to exploit the predictable multi-modality and improve the chain convergence time. 
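As a concrete illustration, the sketch below implements two such degeneracy-tailored moves described in this subsection: a hop between sideband modes separated by the orbital modulation frequency $f_m = 1/{\rm year}$, and a correlated shift in the $\psi$--$\varphi_0$ plane. The Python names are hypothetical and the fragment is not part of the pipeline:

```python
import math
import random

F_M = 1.0 / (365.25 * 24 * 3600)  # orbital modulation frequency, 1/year in Hz

def sideband_hop(f0, rng):
    """Jump between sideband modes: f0 -> f0 + n * f_m,
    with n a unit-normal draw rounded to the nearest integer."""
    return f0 + round(rng.gauss(0.0, 1.0)) * F_M

def psi_phi_shift(psi, phi0, rng):
    """Shift along the degenerate psi/phi0 direction with a random sign,
    wrapping back onto the priors psi in [0, pi) and phi0 in [0, 2*pi)."""
    delta = rng.uniform(0.0, 2.0 * math.pi)
    sign = rng.choice((-1.0, 1.0))
    return ((psi + sign * delta / 2.0) % math.pi,
            (phi0 + sign * delta) % (2.0 * math.pi))
```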
Due to the annual orbital motion of the LISA constellation, continuous monochromatic sources will have non-zero sidebands at the modulation frequency $f_m = 1/{\rm year}$. Sources that are detectable at low $\rm{SNR}$ after several years of observation can have likelihood support at multiple modes separated by $f_m$, while for high $\rm{SNR}$ sources the secondary modes are subdominant local maxima that are challenging for generic MCMC sampling algorithms. We have adopted a dedicated proposal that updates the UCB initial frequency by $f_0 \rightarrow f_0 + n f_m$ where $n$ is drawn from $N[0,1]$ and rounded to the nearest integer. The sky location of the source correlates with the frequency through the Doppler modulations imparted by the detector's orbital motion, so the proposal alternates between updates to the extrinsic parameters using the Fisher matrix proposal, F-statistic proposal, and draws from the prior. A similar proposal was deployed and demonstrated in Refs.~\cite{Crowder:2006eu,Littenberg:2011zg}. We also take advantage of a linear correlation between the gravitational wave phase $\varphi_0$ and polarization angle $\psi$, and a perfectly degenerate pair of modes over the prior $\psi \in [0,\pi]$ and $\varphi_0 \in [0,2\pi]$ by proposing $\{\psi,\varphi_0\}\rightarrow \{\psi \pm \delta/2,\varphi_0 \pm \delta\}$ where $\delta \in U[0,2\pi]$ and the sign of the shift in the parameters is random, as the sign of the $\psi/\varphi_0$ correlation depends on the sign of $\cos{\iota}$, i.e., whether the stars are orbiting clockwise or counterclockwise as viewed by the observer. \subsubsection{Posterior-Based Proposals} The UCBs are continuous sources for LISA and will be detectable from the beginning of operations throughout the lifetime of the mission. Our knowledge of the gravitational wave signal from the galaxy will therefore build gradually over time.
We have designed a proposal distribution to leverage this steady accumulation of information about the galaxy by analyzing the data as it is acquired, and building proposal functions for the MCMC from the archived posterior distributions inferred at each epoch of the analysis. For a particular narrow-band segment of data, the full posterior is a complicated distribution due to the probabilistically determined number of sources in the data, and the potentially multimodal structure of each source's posterior. The posterior is known to us only through the discrete set of samples returned by the MCMC but for use as a proposal must be a continuous function over all of parameter space (as we must be able to evaluate the proposal anywhere in order to maintain detailed balance in the Markov chain). Therefore some simplifications must be made to convert the discrete samples of the chain into a continuous function. In the release of the pipeline accompanying this paper, we select chain samples from the maximum marginalized likelihood (i.e. highest evidence) model at the current epoch to build the proposals used in the subsequent analysis when more data are available. We post-process the chain samples to cluster those that are fitting discretely identified sources, and to filter out samples from the prior or from weaker candidate sources that do not meet our threshold for inclusion in the source catalog. The post-production analysis is described in Sec.~\ref{sec:catalog}. Each source $i$ identified in the post-production step will have at least two modes, because of the degeneracy in the $\psi-\varphi_0$ plane. For each mode $n$, we compute the vector of parameter means $\bar\params_{i,n}$ from the one-dimensional marginalized posteriors, the full $N_{\rm P}\times N_{\rm P}$ covariance matrix $\mathbf{C}_{i,n}$ from the chain samples, and the relative weighting $\alpha_{i,n}$, which is the number of samples in the mode normalized by the total number of samples used to build the proposal.
The proposal is evaluated for arbitrary parameters $\params$ as \begin{equation} p(\params) = \sum_{i=0}^{i<I} \sum_{n=0}^{n<2} \alpha_{i,n} \frac{ e^{-\frac{1}{2} (\params-\bar\params_{i,n}) \mathbf{C}_{i,n}^{-1} (\params-\bar\params_{i,n})} }{\left((2\pi)^{N_{\rm P}} \det\mathbf{C}_{i,n}\right)^{1/2}} . \end{equation} To draw new samples from this distribution, we first select which mode by rejection sampling on $\alpha_{i,n}$, and then draw new parameters $\params$ via: \begin{equation} \params = \bar\params_{i,n} + \mathbf{L}_{i,n}\mathbf{n} \end{equation} where $\mathbf{n}$ is an $N_{\rm P}$-dimensional vector of draws from a zero-mean unit-variance Gaussian, and $\mathbf{L}_{i,n}$ is the Cholesky factor of $\mathbf{C}_{i,n}$. Fig.~\ref{fig:cov-proposal} shows the 1 and 2$\sigma$ contours of the set of covariance matrices computed from a 6 month observation of simulated LISA data around 4 mHz in two projections of the full posterior: the $f_0-\mathcal{A}$ plane (top) and sky location (bottom). Shown in gray is the scatter plot of all chain samples before being filtered by the catalog production step described in the next section. The color scheme is consistent between the two panels. Note that for well localized (e.g. high amplitude) sources the covariance matrix is a good representation of the posterior, as should be the case since the posterior should trend towards a Gaussian distribution with increased $\rm{SNR}$, and will therefore serve as an efficient proposal when new data are acquired. \begin{figure}[htp] \begin{center} \includegraphics[clip=true,angle=0,width=0.5\textwidth]{figures/covariance_f-A.pdf} \includegraphics[clip=true,angle=0,width=0.5\textwidth]{figures/covariance_sky.pdf} \end{center} \vspace*{-0.2in} \caption{\label{fig:cov-proposal} Two-dimensional projections of the multi-source covariance matrix proposal produced after analyzing 6 months of simulated data around 4 mHz.
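A compact numpy sketch of this mixture proposal, with `means`, `covs`, and `weights` standing in for $\bar\params_{i,n}$, $\mathbf{C}_{i,n}$, and $\alpha_{i,n}$ (names are illustrative, not the pipeline implementation):

```python
import numpy as np

def draw_mode(mean, cov, rng):
    """Draw new parameters theta = mean + L n, with L a matrix square root
    of the covariance (Cholesky factor) and n unit-normal deviates."""
    return mean + np.linalg.cholesky(cov) @ rng.standard_normal(mean.size)

def proposal_logpdf(theta, means, covs, weights):
    """Log density of the weighted Gaussian mixture, evaluable anywhere
    in parameter space (required to maintain detailed balance)."""
    terms = []
    for m, c, w in zip(means, covs, weights):
        r = theta - m
        _, logdet = np.linalg.slogdet(c)
        maha = r @ np.linalg.solve(c, r)  # Mahalanobis distance squared
        terms.append(np.log(w) - 0.5 * (maha + logdet + m.size * np.log(2.0 * np.pi)))
    return np.logaddexp.reduce(terms)
```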
The gray scatter plots show all of the chain samples from the analysis, which are then filtered and clustered into discrete sources by the catalog production step. The mean parameter values and covariance matrix for each discrete source are computed from the chain samples and used as a proposal for the next step of the analysis after more data are acquired. Parameter combinations shown are the frequency-amplitude plane (top panel) and sky location (bottom panel). Ellipses enclose the 1 and $2\sigma$ contours of the covariance matrices, and sources are colored consistently in the top and bottom panels.} \end{figure} Fig.~\ref{fig:logL} shows the log-likelihood of the model as a function of chain step for observations of increasing duration $T$ with (teal) and without (orange) using the covariance matrix proposal built from each intermediate analysis. This demonstration used the same data as Fig.~\ref{fig:money_plot}, containing the type of high-$f_0$ and high-$\rm{SNR}$ source that proved challenging for the previous RJMCMC algorithm~\cite{Littenberg:2011zg}. With the covariance matrix proposal the chain convergence time is orders of magnitude shorter than using the naive sampler, to the point where the $T=24$ month run without the proposal failed to converge in the number of samples it took the analysis with the covariance matrix proposal to finish. \begin{figure}[htp] \begin{center} \includegraphics[width=0.5\textwidth]{figures/highf_logL.png} \vspace*{-0.2in} \caption{\label{fig:logL} Log-likelihood chains from analyses of the same data as shown in Fig.~\ref{fig:money_plot} run with (teal) and without (orange) the covariance matrix proposal.
As the observing time increases, the chain sampling efficiency gained by including the proposal built from previous analyses becomes more significant.} \end{center} \end{figure} Using the customized proposals described in this section allows the sampler to robustly mix over model space and explore the parameters of each model supported by the data. The pipeline dependably converges without the need for non-Markovian maximization steps as were used in the ``burn in'' phase of our previously published UCB pipelines, and reliably produces results for model selection and parameter estimation analyses simultaneously. \subsection{Data selection} While the UCB pipeline is pursuing a global analysis of the data, we leverage the narrow-band nature of the sources to parallelize the processing. Sources separated by more than their bandwidth--typically less than a few hundred frequency bins--are uncorrelated and can therefore be analyzed independently of one another. As was done in previous UCB algorithms~\cite{Crowder:2006eu,Littenberg:2011zg}, we divide the full Fourier domain data stream into adjacent segments and process each in parallel, without any exchange of information between segments during the analysis. To prevent edge effects from templates trying to fit sources outside of the analysis window, each segment is padded on either side in frequency with data amounting to the typical bandwidth of a source, thus overlapping the neighboring segments. The MCMC is free to explore the data in the padded region, but during post-production only samples fitting sources in the original analysis window are kept, preventing the same source from being included in the catalog twice. Meanwhile, sources within the target analysis region but close to the boundary will not have part of their signal cut off in the likelihood integral.
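The segmentation bookkeeping amounts to a few lines. The sketch below (function name and bin counts are assumptions for illustration) returns, for each window, the padded range of frequency bins that is loaded and the interior range whose sources are kept in post-production:

```python
def padded_segments(n_bins, seg_width, pad):
    """Split a band of n_bins frequency bins into adjacent analysis windows,
    each padded by `pad` bins of overlap with its neighbors.

    Returns a list of (lo, hi, keep_lo, keep_hi): [lo, hi) is the data
    loaded for the segment; [keep_lo, keep_hi) is the window whose sources
    survive the catalog production step.
    """
    segments = []
    for start in range(0, n_bins, seg_width):
        keep_lo, keep_hi = start, min(start + seg_width, n_bins)
        lo, hi = max(0, keep_lo - pad), min(n_bins, keep_hi + pad)
        segments.append((lo, hi, keep_lo, keep_hi))
    return segments
```

The interior windows tile the band without overlap, while the padded ranges overlap their neighbors, matching the scheme described above.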
Unlike in Refs.~\cite{Crowder:2006eu,Littenberg:2011zg}, there is no manipulation of the likelihood or noise model to prevent loud sources outside of the analysis region from corrupting the fit. Instead, we leverage the time-evolving analysis by ingesting the list of detections from previous epochs of the catalog, forward modeling the sources as they would appear in the current data set and subtracting them from the data. This will be an imperfect subtraction but is adequate to suppress the signal power in the tails of the source, which extend into the adjacent segments and, due to the padding, does not alter the data in the target analysis region. In the event that an imperfect subtraction leaves a detectable residual, it will not corrupt the final catalog of detected sources because templates fitting that residual will be in the padded region of the segment and removed in post-processing. The downside is merely in the computational efficiency, as a poorly subtracted loud signal whose central frequency is out of band for the analysis will require several templates co-adding to mitigate the excess power, wasting computing cycles and increasing the burden on the MCMC to produce converged samples. The effectiveness of the subtraction will improve as the duration of observing time between analyses decreases, and is an area to explore when optimizing the overall cost of the multi-year analysis. The strategy for mitigating edge effects is prone to failure if the posterior distribution of a source straddles the boundary. The frequency is precisely constrained for any UCB detection, so having a source so precariously located is unlikely, but it nonetheless needs to be guarded against.
While not yet implemented, we envision checks for sources near the boundaries in post-production to see if posterior samples from different windows should be combined, and/or adaptively choosing where to place the segment boundaries based on the current understanding of source locations from previous epochs of the analysis. There is no requirement on the size or number of analysis windows except that they are much larger than the typical source bandwidth, and the segment boundaries do not need to remain consistent between iterations of the analysis as more data are added. \begin{figure}[htp] \begin{center} \includegraphics[width=0.5\textwidth]{figures/padding.pdf} \vspace*{-0.2in} \caption{\label{fig:padding} Demonstration of the data selection and padding procedure. The top panel shows the power spectrum of an example analysis segment in black and the reconstructed waveforms from the analysis in various colors. The vertical dashed lines mark the region of the analysis segment where sources will be selected for the catalog. Gray reconstructions are from the analyses of the adjacent segments. The bottom panel shows the same frequency interval in the $\{f_0,\mathcal{A}\}$ plane with injected signals marked as gray circles and a scatter plot of the MCMC samples in green. Note that the chain samples extend into the padded region and fit sources there, but those waveforms are not included in the top-panel's reconstructions.} \end{center} \end{figure} Fig.~\ref{fig:padding} demonstrates the data selection and padding procedure by displaying results from the center analysis region of three adjacent windows processed with the time-evolving RJMCMC algorithm. The top and bottom panels show the reconstructed waveforms and posterior samples, respectively. The posterior samples extend outside of the analysis region (marked by vertical dashed lines) to fit loud signals in neighboring frequency bins, but are rejected during the catalog production step.
The frequency padding ensures that the waveform templates of sources inside of the analysis region are not truncated at the boundary. Sources recovered from the neighboring analyses are marked in gray. Note that there is no conflict between the fits near the boundaries despite there being overlapping sources in this example at the upper frequency boundary. \section{Catalog Production}\label{sec:catalog} The output of the RJMCMC algorithm is thousands of random draws from the variable dimension posterior, with each sample containing an equally likely set of parameters \emph{and} number of sources in the model. Going from the raw chain samples to inferences about individual detected sources is subtle, as a model using $N_{\rm GW}$ templates does not necessarily contain $N_{\rm GW}$ discrete sources. For example, the model may be mixing between states where the $N_{\rm GW}^{\rm th}$ template is fitting one (or several) weak sources, or sampling from the prior, and such a model could be on similar footing with the $N_{\rm GW}-1$ or $N_{\rm GW}+1$ template models purely on the grounds of the evidence calculation. How then to answer the questions ``How many sources were detected?'' or ``What are the parameters of the detected sources?'' in a way that is robust to the more nuanced cases where the data supports a broad set of models containing several ambiguous candidates? \subsection{Filtering and Clustering Posterior Samples} In Ref.~\cite{Littenberg:2011zg}, for the sake of responding to the Mock LISA Data Challenge, post-processing the chains went only as far as selecting the maximum likelihood chain sample from the maximum likelihood model. Condensing the rich information in the posterior samples down to a single point estimate defeats the purpose of all the MCMC machinery in the first place.
Furthermore, due to the large number of sources being fit simultaneously and the finite number of samples, the maximum likelihood sample within a particular dimension model does not necessarily correspond to the maximum likelihood parameters for each of the many sources in the analysis should they have been fit by the model in isolation. It was therefore necessary that we begin to seriously consider how to post-process the raw chain samples into a more manageable data product for the sake of producing source catalogs that are easily ingested by end users of the LISA observations, but are not overly reduced to the point of being prohibitively incomplete or misleading. We originally explored using standard ``off the shelf'' clustering algorithms to take the $N_{\rm GW}\times N_{\rm P}$ samples from the chain and group them into the discrete sources being fit by the model. Although not an exhaustive effort, this proved challenging due to the large dimension of parameter space, different sources located close to one another in parameter space, and the multi-modal posteriors. A more robust approach was to group the parameters of the model by using the match between the waveforms as defined in Eq.~\ref{eq:match} and applying a match threshold $M^*$ that must be exceeded for the parameter sets to be interpreted as fitting the same source. Since the waveforms are fundamentally what is being fit to the data, whereas the model parameters are merely how we map from the template space to the data, clustering chain samples based on the waveform match, rather than the parameters, is naturally more effective. The catalog production algorithm goes as follows: Beginning with the first sample of the chain, we compute the waveform from the parameters, produce a new \emph{Entry} to the catalog (i.e., a new discrete detection candidate), and store the chain sample in that Entry. The parameters and corresponding waveform become the \emph{Reference Sample} for the Entry.
For each subsequent chain sample we again compute the waveform and check it against each catalog Entry. If the GW frequency of the chain sample is within 10 frequency bins of the Reference Sample, we compute the match $M_{ij}$ and, if $M_{ij} > M^*$, the sample is appended to the Entry, effectively filtering all chain samples but those associated with the discrete feature in the data corresponding to the Entry. The check on how close the two samples are in frequency is to avoid wasteful inner-product calculations that will obviously result in $M_{ij}\sim0$. If a chain sample has been checked against all current Entries without exceeding the match threshold $M^*$, it becomes the Reference Sample for a new Entry in the catalog. Once the entire chain has been processed, the Catalog will contain many more candidate Entries than actual sources in the data (imagine a chain that has templates in the model occasionally sampling from the prior). However, the total number of chain samples in an Entry is proportional to the evidence $p({\bf d}) = \int p({\bf d}|\params)\, p(\params)\, d\params$ for that candidate source. Thus each Entry has an associated evidence that is used to further filter insignificant features. The default match threshold is $M^*=0.5$ but is easily adjustable by the user. For each Entry, additional post-processing is then done to produce data products of varying degrees of detail depending on the needs of the end user. We select a point estimate as the sample containing the median of the marginalized posterior on $f_0$, and store the $\rm{SNR}$, based on the reasoning that $f_0$ is by far the best constrained parameter and likely the most robust way of labeling/tracking the sources. We also compute the full multimodal $N_{\rm P}\times N_{\rm P}$ covariance matrix $C_{ij}$ as a condensed representation of the measurement uncertainty, and for use as a proposal when more data are acquired.
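The Entry-building loop can be sketched directly, with `match_fn` standing in for the waveform match of Eq.~\ref{eq:match} and the frequency pre-check omitted for brevity (an illustration only; names are hypothetical):

```python
def build_catalog(samples, match_fn, threshold=0.5):
    """Cluster chain samples into catalog Entries by waveform match.

    samples: iterable of parameter sets; match_fn(a, b) returns the overlap
    between the waveforms of two samples.  The first sample appended to an
    Entry serves as its Reference Sample.
    """
    entries = []  # each entry is a list of samples; entry[0] is the reference
    for s in samples:
        for entry in entries:
            if match_fn(entry[0], s) > threshold:
                entry.append(s)
                break
        else:
            entries.append([s])  # no Entry matched: start a new candidate
    return entries
```

In this sketch, as in the pipeline, the number of samples accumulated in each Entry is what later serves as the evidence-based filter on candidate significance.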
From the ensemble of waveforms for each Sample in the Entry, we also compute the posterior on the reconstructed waveform. Finally, metadata about the Catalog is stored, including the total number of above-threshold Entries, the full set of posterior samples, and the model evidence. A block diagram for the data products and how they are organized is shown in Fig.~\ref{fig:catalog}. \begin{figure}[htp] \begin{center} \includegraphics[width=0.5\textwidth]{figures/CatalogBlockDiagram.pdf} \vspace*{-0.2in} \caption{\label{fig:catalog} Proposed scheme for packaging chain output into higher level data products for publication in source catalogs. Raw chain output and evidences are available, as well as the posterior samples after having been filtered and clustered into discrete detected sources. Each discrete source candidate will have its own detection confidence (evidence), chain samples, point estimate, and covariance matrix error estimates so that the user can choose the most appropriate level of detail for their application of the catalog, along with metadata including the source name and history (for continuity over catalog releases), etc.} \end{center} \end{figure} \subsection{Catalog Continuity} As the observing time grows, the UCB catalog will evolve. New sources will become resolvable, marginal candidates may fade into the instrument noise, and overlapping binaries which may have been previously fit with a single template will be resolved as separate sources with similar orbital periods. Our scheme of identifying the binaries by their median value of $f_0$ will also evolve between releases of the catalog. While the association for a particular source from one catalog to the next is obvious upon inspection, the sheer number of sources requires an automated way of generating and storing the ancestry of a catalog entry in metadata.
To ensure continuity of the catalog between releases, we construct the ``family tree'' of sources in the catalog after each incremental analysis is performed. A source's ``parent'' is determined by again using the waveform match criteria, now comparing the new entry to sources in the previous catalog computed using the previous run's observing time. In other words, we are taking Entries found in the current step of the analysis and ``backward modeling'' the waveforms as they would have appeared during the production of the previous catalog. The waveforms are compared to the recovered waveforms from the previous epoch to identify which sources are associated across catalogs, tracing a source's identification over the entire mission lifetime, and making it easy to quickly identify new sources at each release of the catalog. \section{Demonstration} To demonstrate the algorithm performance we have selected two stress-tests using data simulated for the LISA Data Challenge \emph{Radler} dataset~\footnote{\url{https://lisa-ldc.lal.in2p3.fr/ldc}}. The first is a high-frequency, high-$\rm{SNR}$ isolated source that challenges the convergence of the pipeline due to the many sub-dominant local maxima in the likelihood function. As shown in Figs.~\ref{fig:money_plot} and~\ref{fig:logL}, new features in the algorithm have the desired effect of improving the convergence time. We have also tested the pipeline on data at lower frequencies where the number of detectable sources is high, focusing on a ${\sim}140\ \mu$Hz wide segment starting at 3.98 mHz. The segment is subdivided into three regions to test the performance at analysis boundaries, and processed after 1.5, 3, 6, 12, and 24 months of observing. For the 24 month analysis, the full bandwidth was further divided into six regions to complete the analysis more quickly.
Fig.~\ref{fig:4mHz-model} shows a heat map of the posterior distribution function on the model dimension for the six adjacent frequency segments analyzed to cover the ${\sim}140\ \mu$Hz test segment. The maximum likelihood model is selected for post-processing to generate a resolved source catalog. In the event that models of different dimension have equal likelihood, the lower-dimensional model is selected. Fig.~\ref{fig:4mHz-waveforms} shows the data, residual, and noise model (top panel) and the posterior distributions on the reconstructed waveforms that met the criteria for inclusion into the detected source catalog after 24 months of observing (bottom panel). The waveforms, residuals, and noise reconstructions are plotted with 50\% and 90\% credible intervals, though the constraints are sufficiently tight that the widths of the intervals are small on this scale. The reconstructed waveforms are shown over a narrower-band region than the full analysis segment, containing the middle two of the six adjacent analysis windows. \begin{figure}[htp] \begin{center} \includegraphics[clip=true,angle=0,width=0.5\textwidth]{figures/model_posterior.pdf} \end{center} \vspace*{-0.2in} \caption{\label{fig:4mHz-model} Heat map of the posterior distribution function as a function of frequency segment and number of signals in the model.} \end{figure} \begin{figure*}[htp] \begin{center} \includegraphics[clip=true,angle=0,width=1\textwidth]{figures/24mo_full_waveform_v2.pdf} \end{center} \vspace*{-0.2in} \caption{\label{fig:4mHz-waveforms} Top panel: Power spectrum for 24 months of simulated TDI-$A$ channel used to test the algorithm performance on multi-source data, with inferred residual (light blue) and noise level (green) posteriors, showing 50 and 90\% credible intervals.
Bottom panel: Reconstructed waveform posteriors (using the same credible intervals) discretely identified after the 24 month analysis and post-processing, zoomed in to a narrower bandwidth of the top panel, including two adjacent analysis windows.} \end{figure*} The recovered source parameters are tested against the true values used in the data simulation and we find that our inferences about the data correspond to the simulated signals that we would expect to be detected. Fig.~\ref{fig:4mHz-posteriors} shows the 1- and 2-sigma contours of the marginalized 2D posteriors for the frequency-amplitude plane (top) and sky location (bottom) with gray circles marking the true parameter values. These results come from a single analysis window, as plotting the full test region all together would be visually overwhelming. \begin{figure}[htp] \begin{center} \includegraphics[angle=0,width=0.5\textwidth]{figures/4mHz_f-A.pdf} \\ \includegraphics[angle=0,width=0.5\textwidth]{figures/4mHz_sky.pdf} \\ \end{center} \vspace*{-0.2in} \caption{\label{fig:4mHz-posteriors} Two-dimensional marginalized posteriors for a single analysis window of the full test segment of simulated data around 4 mHz after 12 months of observing time by LISA. The analysis was built up from 1.5, 3, and 6 month observations. Gray circles mark the parameter values of the injected sources. The top panel shows the frequency-amplitude plane, and the bottom panel shows the sky location in ecliptic coordinates. Contours enclose the 1 and $2\sigma$ posterior probability regions for each discrete source found in the catalog production, and the color scheme is consistent with Fig.~\ref{fig:4mHz-waveforms}.} \end{figure} \begin{figure*}[htp] \begin{center} \includegraphics[width=1.0\textwidth]{figures/ucb_catalog_tree.pdf} \vspace*{-0.2in} \caption{\label{fig:continuity} Demonstration of how the catalog evolves as more data are acquired.
White entries have clear ``parentage'', green are new sources in the catalog, and blue are split from a single parent. Each entry's ``genealogy'' is stored as metadata in the catalog.} \end{center} \end{figure*} Fig.~\ref{fig:continuity} is a graphical representation of the family tree concept for tracking how the source catalog evolves over time. From this diagram one can trace the genealogy of a source in the current catalog through the previous releases. The diagram is color-coded such that new sources are displayed in green, sources unambiguously associated with an entry from the previous catalog in white, and sources that share a ``parent'' with another source are in blue. Based on the encouraging results of the narrow-band analysis shown here, we will begin the analysis of the full data set. A thorough study of the pipeline's detection efficiency, the robustness of the parameter estimation, and optimization of MCMC and post-production settings will be presented with the culmination of the full analysis. \section{Future Directions} The algorithm presented here is a first step towards a fully functional prototype pipeline for LISA analysis. We envision continuous development as the LISA mission design becomes more detailed, and as our understanding of the source population, both within and beyond the galaxy, matures. The main areas in need of further work are: (1) Combining the galactic binary analysis with analyses for other types of sources; (2) Better noise modeling, including non-stationarity on long and short timescales; (3) Handling gaps in the data; (4) More realistic instrument response modeling and TDI generation; (5) Further improvements to the convergence time of the pipeline. \begin{figure}[htp] \begin{center} \includegraphics[width=0.5\textwidth]{figures/cycle.png} \vspace*{-0.2in} \caption{\label{fig:global} The UCB search as one component of a global fit.
The residuals from each source analysis block are passed along to the next analysis in a sequence of Gibbs updates. New data is incorporated into the fit during the mission. The noise model and instrument models are updated on a regular basis.} \end{center} \end{figure} Figure \ref{fig:global} shows one possible approach for incorporating the galactic analysis as part of a global fit. In this scheme, the analyses for each source type, such as supermassive black hole binaries (SMBH), stellar origin (LIGO-Virgo) binaries (SOBH), un-modeled gravitational waves (UGW), extreme mass ratio inspirals (EMRI), and stochastic signals (Stochastic), are cycled through, with each analysis block passing updated residuals (i.e., the data minus the current global fit) along to the next analysis block. New data is added to the analysis as it arrives. The noise model and the instrument model (e.g., spacecraft orbital parameters, calibration parameters, etc.) are regularly updated. This blocked Gibbs scheme has the advantage of allowing compartmentalized development, and should be fairly efficient given that the overlap between different signal types is small. A more revolutionary change to the algorithm is on the near horizon, where we will switch to computing the waveforms and the likelihood using a discrete time-frequency wavelet representation. A fast wavelet domain waveform and likelihood have already been developed~\cite{Cornish:2020}. This change of basis allows us to properly model the non-stationary noise from the unresolved signals which are modulated by the LISA orbital motion, as well as any long-term non-stationarity in the instrument noise. Rectangular grids in the time-frequency plane are possible using wavelet wave packets~\cite{Klimenko_2008}, which make it easy to add new data as observations continue, instead of needing the new data samples to fit into a particular choice for the wavelet time-frequency scaling, e.g. being $2^n$ or a product of primes.
Wavelets are also ideal for handling gaps in the data as they have built-in windowing that suppresses spectral leakage with minimal loss of information. The time-frequency likelihood~\cite{Cornish:2020} also enables smooth modeling of the dynamic noise power spectrum $S(f , t)$ using \texttt{BayesLine} type methods extended to two dimensions. Convergence of the sampler will be improved by including directed jumps in the extrinsic parameters when using the $\mathcal{F}$ statistic proposal (as opposed to the uniform draws that are currently used). The effectiveness of the posterior-based proposals can be improved by including inter-source correlations in the proposal distributions. This would be prohibitively expensive if applied to all parameters as the full correlation matrix is $D \times D$, where $D=N_{\rm GW}\times N_{\rm P} \sim 10^4$. However, if the sources are ordered by frequency, the $D \times D$ correlation matrix of source parameters will be band diagonal. We can therefore focus only on parameters that are significantly correlated, and only between sources that are close together in parameter space, while explicitly setting to zero most of the off-diagonal elements of the full correlation matrix. There may also be some correlations with the noise model parameters, but we do not expect these to be significant. In a similar vein, we will include correlations between sources in the Fisher matrix proposals. This will only be necessary for sources with high overlaps~\cite{Crowder:2004ca}, which will be identified adaptively within the sampler. The Fisher matrix is then computed using the parameter set $\params = \{\params_1, \params_2\}$ and waveform model ${\bf h}(\params) = {\bf h}_1(\params_1) + {\bf h}_2(\params_2)$. There is a large parameter space of analysis settings to explore when optimizing the computational cost of the full analysis, as well as the ``wall'' time for processing new data.
The first round of tuning the deployment strategy for the pipeline will come from studying the optimal segmenting of the full measurement band, and the cadence for reprocessing the data as the observing time increases. We will extend the waveform model to allow for more complicated signals including eccentric white dwarf binaries, hierarchical systems, and stellar mass binary black holes which are the progenitors of the merging systems observed by ground-based interferometers~\cite{Sesana_2016}, and develop infrastructure to jointly analyze multimessenger sources simultaneously observable by both LISA and EM observatories~\cite{Korol:2017qcx,Kupfer_2018, Burdge_2019, Littenberg_2019b}. \section*{Acknowledgments} We thank Q. Baghi, J. Baker, C. Cutler, J. Slutsky, and J. I. Thorpe for insightful discussions during the development of this pipeline, particularly related to the catalog development. We also thank the LISA Consortium's LDC working group for curating and supporting the simulated data used in this study. TL acknowledges the support of NASA grant NNH15ZDA001N-APRA and the NASA LISA Study Office. TR and NJC appreciate the support of the NASA LISA Preparatory Science grant 80NSSC19K0320. KL's research was supported by an appointment to the NASA Postdoctoral Program at the NASA Marshall Space Flight Center, administered by Universities Space Research Association under contract with NASA. NJC expresses his appreciation to Nelson Christensen, Director of Artemis at the Observatoire de la C\^{o}te d'Azur, for being a wonderful sabbatical host.
\section{Introduction} It has been recently shown by the authors of this paper \cite{ourpaper1} that the Hawking temperatures \cite{Hawking_rad} of black holes can be found using purely topological methods. This has led to a deeper understanding of the topological nature of the Hawking temperature of not only individual black hole event horizons, but also multihorizon systems, as well as cosmological horizons such as that found in pure de Sitter spacetime \cite{ourpaper2,ovgun}. The foundation of the method rests on a familiar topological invariant known as the Euler characteristic $\chi$, which can be defined for Euclideanised black hole spacetimes \cite{Chern1,Chern2,Gibbons}. Using this invariant, plus some extra basic information about black hole spacetimes, such as their curvature scalars and Killing horizon structure, allows for Hawking temperatures to be easily calculated. This has been shown to be reliable for a wide variety of black hole systems in both two and four dimensions and is independent of the choice of coordinate system \cite{ourpaper1,ourpaper2}. For our four-dimensional world, the most physically relevant targets of study are four-dimensional black holes. It has recently been shown that both two- and four-dimensional topological approaches can be used to study a large collection of four-dimensional black holes, giving completely equivalent results in every case investigated so far \cite{ourpaper1,ourpaper2}. In cases where a four-dimensional topological study is impeded by technical problems (for example by a vanishing Euler characteristic), a dimensional reduction to two dimensions works astonishingly well, even for black holes with highly nontrivial topologies, suggesting that two-dimensional descriptions may be sufficient for describing the thermodynamical properties of all black holes.
This thermodynamical information storage in two dimensions is reminiscent of the well-known area scaling law of black hole entropy, as well as more speculative research on the holographic nature of the Universe as a whole \cite{Bekenstein,Susskind,Susskind2}. The present work demonstrates that due to the Hawking-Page phase transition for a black hole in four-dimensional anti-de Sitter ($\rm{AdS_{4}}$) spacetime, two- and four-dimensional topological approaches give conflicting Hawking temperature results, signifying that these black holes constitute a special case deserving of careful study. We show that a topological analysis of the Hawking temperature of a Schwarzschild-AdS (S-$\rm{AdS_{4}}$) black hole {\em demands} a two-dimensional approach, despite the S-$\rm{AdS}_{4}$ black hole being a four-dimensional object. Our analysis shows that this deviation is caused by the Hawking-Page phase transition, manifesting as a topological instability. This instability is, however, a blessing in disguise, as it sheds more light on the intrinsic two-dimensional character of black holes. A simple dimensional reduction of the S-$\rm{AdS_{4}}$ metric proves sufficient in yielding the correct Hawking temperature. The reduction succeeds because it removes all information concerning the Hawking-Page phase transition, in effect stabilising the system's topology. The plan of the paper is as follows. In Section \ref{sec:Sec1} we reintroduce our two- and four-dimensional topological methods for calculating Hawking temperatures. These two approaches have in the past given equivalent results for black hole spacetimes \cite{ourpaper1,ourpaper2}; however, we show in Section \ref{sec:Sec2} that this is not the case in an anti-de Sitter background. In Section \ref{sec:Sec3} we prove using a minimisation procedure that the reason for this discrepancy is the presence of the Hawking-Page phase transition, which introduces a topological instability.
Reducing the description of the S-$\rm{AdS}_{4}$ black hole down to two dimensions is shown to eliminate any information regarding the phase transition and hence stabilise the system, accounting for the correct Hawking temperature yielded by the two-dimensional approach. Finally, an Appendix treats other members of the AdS family of black holes. \section{Calculating the Hawking temperature of black holes in two and four dimensions} \label{sec:Sec1} We reproduce here the formula for the Hawking temperature of a two-dimensional (or four-dimensional, after reduction) black hole with a time-independent Ricci scalar $R$: \begin{equation} \label{eq:2d_form} T_{\mathrm{H}}=\frac{\hbar c}{4\pi \chi k_{\mathrm{B}}}\sum_{j \leq \chi}\int_{r_{\mathrm{H_{j}}}}\sqrt{g}\,R\,dr, \end{equation} where $k_{\rm B}$ is Boltzmann's constant, $c$ is the speed of light in vacuum, $\hbar$ the reduced Planck constant, $R(r)$ the Ricci scalar depending only on the `spatial' variable $r$, $r_{\rm H_{j}}$ the location of the $j$-th Killing horizon, $g$ the (Euclidean) metric determinant, and $\chi$ the Euler characteristic of the black hole's Euclideanised spacetime. The symbol $\sum_{j \leq \chi}$ is a sum over all Killing horizons, where one should pay attention to the sign of each term in the sum, as it can be positive or negative depending on the specific features of each horizon -- the overall temperature result is of course always positive \cite{ourpaper1}. A four-dimensional version of the above Hawking temperature expression cannot be found in such simple and closed form. This is because the value of $\chi$ in four dimensions is not simply equal to the number of fixed points of a Killing vector field on the manifold (as holds in two dimensions), leading to complications in how to sum correctly over horizons in any potential temperature expression \cite{Hawking}. A closed expression for a four-dimensional black hole temperature would therefore be much more complicated, if indeed possible.
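To illustrate formula (\ref{eq:2d_form}) in its simplest setting, the following worked example (our own illustration, not taken verbatim from the cited derivations; natural units and the discarding of the non-horizon boundary term are assumptions of the sketch) applies it to a generic static two-dimensional metric with a single Killing horizon:

```latex
% Illustration: formula (\ref{eq:2d_form}) applied to the static metric
% ds^2 = f(r)\,dt^2 + f(r)^{-1}\,dr^2 with a single Killing horizon at
% r = r_h, for which \chi = 1. Here \sqrt{g} = 1 and R = -f''(r), so the
% horizon endpoint of the radial integral contributes (in units with
% \hbar = c = k_B = 1)
\begin{equation*}
T_{\mathrm{H}}
  = \frac{1}{4\pi\chi}\int_{r_{h}}\sqrt{g}\,R\,dr
  = \frac{f'(r_{h})}{4\pi}\, ,
\end{equation*}
% the familiar surface-gravity result. As a sanity check, Schwarzschild
% has f(r) = 1 - 2M/r, so f'(r_h) = 1/(2M) at r_h = 2M and
% T_H = 1/(8\pi M), the standard Hawking temperature.
```

The single-horizon case fixes the sign of the lone term in the sum to give a positive temperature, consistent with the sign convention discussed above.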
This lack of a general expression is not a problem for practical calculations. The Euler characteristic of a four-dimensional manifold is \begin{equation} \label{eq:4d_Euler} \chi=\frac{1}{32\pi^2}\int d^4 x \sqrt{g}\left( K_{1} - 4R_{ab}R^{ab} + R^2 \right), \end{equation} where $R_{ab}$ is the Ricci tensor, $K_{1}$ the Kretschmann invariant, and $g$ the Euclidean metric determinant. Using the above expression, a Hawking temperature formula can be found directly, on an ad hoc basis, for each four-dimensional spacetime \cite{ourpaper2}. A detailed treatment of this last point, along with a full derivation of formula (\ref{eq:2d_form}) and the necessary topological background, can be found in Refs. \cite{ourpaper1,ourpaper2,Chern1,Chern2,Gibbons,Hawking,Eguchi,Morgan}. \section{Two- and four-dimensional topology of black holes} \label{sec:Sec2} In this work, we show for the first time that the Hawking temperature of a black hole in $\rm{AdS_{4}}$ spacetime cannot be found using a four-dimensional topological approach due to a topological instability seeded by the Hawking-Page phase transition. This phase transition causes S-$\rm{AdS_{4}}$ black holes to be thermodynamically unfavoured below a certain critical temperature \cite{Page,Ong}. We show that a very simple dimensional reduction down to two dimensions suppresses the phase transition. Therefore, any information pertaining to the thermodynamical, and therefore topological, instability of the system is removed after this reduction, allowing for a two-dimensional topological calculation of the black hole's Hawking temperature using formula (\ref{eq:2d_form}). In this work we also show that dimensional reduction allows topological Hawking temperature calculations to be performed for other, more exotic, members of the $\rm{AdS_{4}}$ black hole family, including those with toral and hyperbolic horizons. These results are presented in the Appendix.
As a first step, let us directly calculate an S-$\rm{AdS_{4}}$ black hole's Hawking temperature using formula (\ref{eq:2d_form}). The Euclideanised S-$\rm{AdS_{4}}$ metric is given by \cite{Charm} \begin{widetext} \begin{equation} \label{S-AdS4} ds^2 = \left( 1+\frac{r^2}{L^2} - \frac{2M}{r} \right) dt^2 + \left( 1+\frac{r^2}{L^2} - \frac{2M}{r} \right)^{-1} dr^2 + r^2 d\Omega^2, \end{equation} \end{widetext} where $L$ is the AdS curvature length scale, related (in four dimensions) to the cosmological constant by $\Lambda=-3/L^2$; $M$ is the hole's mass. As its name suggests, the metric for the S-$\rm{AdS_{4}}$ black hole reverts to that of the Schwarzschild black hole in the $\Lambda \rightarrow 0$ limit. The S-$\rm{AdS_{4}}$ black holes we study in the main text have spherical horizons, and in the Appendix we analyse their flat and hyperbolic cousins. Dimensionally reducing metric (\ref{S-AdS4}) in the crudest way possible, by excising the two-sphere section from the line element, remarkably lets one use formula (\ref{eq:2d_form}), yielding the correct Hawking temperature result \cite{Ong}: \begin{equation} \label{correct_temp} T_{\rm{H}}=\frac{L^2 + 3r_{h}^2}{4\pi r_{h}L^2}, \end{equation} where $r_{h}$ is the AdS black hole radius. Using a four-dimensional topological formula, which can be derived directly from the Euler characteristic expression (\ref{eq:4d_Euler}), produces a Hawking temperature result that differs from the correct temperature (\ref{correct_temp}) \cite{ourpaper2}. From metric (\ref{S-AdS4}), the geometric quantities required to evaluate $\chi$ from (\ref{eq:4d_Euler}) can be immediately found, i.e. $\sqrt{g}=r^2 \sin\theta$ and $K_{1} - 4R_{ab}R^{ab} + R^2=24\left( 1/L^4 + 2M^2/r^6 \right)$.
Substituting these values into (\ref{eq:4d_Euler}) and integrating over the two-sphere gives \cite{Gibbons} $\chi=(3/\pi)\int_{0}^{\beta_{2}} dt \int_{r_{h}} dr \left( r^2/L^4 + 2M^2/r^4 \right)$, where $\beta_{2}$ is the inverse of the black hole's Hawking temperature. One can then integrate over time and space, leading to $\chi T_{\rm{H}} = \left( -r_{h}^6/L^4 + 2M^2 \right)/(r_{h}^3 \pi)$. Metric (\ref{S-AdS4}) has a time isometry, with an associated Killing vector, and has one real-valued fixed-point two-surface at $r_{h}$; this single ``bolt'' gives a value for this space of $\chi=2$ \cite{Hawking}. At the event horizon, the $g_{tt}$ component of (\ref{S-AdS4}) vanishes, providing an expression for the black hole's mass: $M=r_{h}\left( L^2 + r_{h}^2 \right)/(2L^2)$. Substituting the mass and $\chi$ values into our previous expression for $\chi T_{\rm{H}}$ finally gives \begin{equation} \label{eq:wrong1} T_{\rm{H}}=\frac{L^2 + 2r_{h}^2 - r_{h}^4/L^2}{4\pi r_{h}L^2}; \end{equation} this is different from the correct value (\ref{correct_temp}). The reason for this discrepancy is that S-$\rm{AdS_{4}}$ black holes are known to suffer from a thermodynamic instability via the Hawking-Page phase transition \cite{Page}. This phase transition leads to a violent topology change in the spacetime manifold, with the Euler characteristic's value transitioning from non-zero to zero below a critical temperature. The vanishing of $\chi$ is something our topological methods are unable to deal with due to $\chi$'s presence in the denominator of all temperature formulas. The Hawking-Page phase transition demonstrates that S-$\rm{AdS_{4}}$ black holes below a critical temperature become thermodynamically unfavoured, with radiation in thermal equilibrium in $\rm{AdS_{4}}$ space favoured \cite{Page,Ong}.
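For contrast, the dimensionally reduced route can be checked explicitly. The following worked substitution is our own illustration: writing $f(r) = 1+r^2/L^2-2M/r$ for the reduced metric, with $\chi=1$, a single horizon, and only the horizon contribution of the radial integral in (\ref{eq:2d_form}) retained, one has $T_{\rm{H}} = f'(r_{h})/(4\pi)$ in natural units, and the mass relation at the horizon gives

```latex
% Worked substitution (our illustration): with f(r) = 1 + r^2/L^2 - 2M/r
% and the horizon mass relation M = r_h(L^2 + r_h^2)/(2L^2),
%   f'(r_h) = 2r_h/L^2 + 2M/r_h^2
%           = 2r_h/L^2 + (L^2 + r_h^2)/(L^2 r_h)
%           = (L^2 + 3r_h^2)/(L^2 r_h),
% so that
\begin{equation*}
T_{\rm{H}} = \frac{f'(r_{h})}{4\pi}
           = \frac{L^2 + 3r_{h}^2}{4\pi r_{h}L^2},
\end{equation*}
% in agreement with (\ref{correct_temp}) and in contrast with the
% four-dimensional result (\ref{eq:wrong1}).
```

This makes the disagreement between the two routes explicit term by term.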
Crucially, Euclideanised $\rm{AdS_{4}}$ space containing only radiation has an Euler characteristic of $\chi=0$, whereas Euclideanised S-$\rm{AdS_{4}}$ space has $\chi=2$; for further details on each Euclidean space's topology, see \cite{Witten}. Therefore, below the critical temperature, a manifold with $\chi=0$ is thermodynamically favoured over that with $\chi=2$. Whenever $\chi=0$ one would expect our finite $\chi$-based temperature method to break down and, indeed, this is the case. Let us now plot the temperature form (\ref{eq:wrong1}) along with the correct, known result (\ref{correct_temp}) to see how they differ, i.e. to pinpoint where formula (\ref{eq:wrong1}) breaks down and where the two- and four-dimensional approaches diverge. \begin{figure} \centering \includegraphics[width=80mm]{Temp_plots.eps} \caption{The orange line shows the incorrect S-$\rm{AdS_{4}}$ black hole Hawking temperature as a function of horizon radius predicted by (\ref{eq:wrong1}). The blue line is the correct, known form verified by (\ref{correct_temp}). The green line is at $T_{crit}$, the critical temperature below which S-$\rm{AdS_{4}}$ black holes become unfavourable thermodynamically. The AdS scale $L$ has been set to one for simplicity and all units are arbitrary.} \label{fig:wrong1} \end{figure} As can be clearly seen in Fig. \ref{fig:wrong1}, (\ref{correct_temp}) and (\ref{eq:wrong1}) match very closely at temperatures above the critical temperature (marked by the green line) on the small $r_{h}$ branch. At the critical temperature the Hawking-Page phase transition occurs and below it the spacetime tends to thermal AdS with $\chi=0$, a region in which our formulas are no longer applicable, evinced by the diverging plots. One may ask: why does the two-dimensional formula (\ref{eq:2d_form}) give the correct S-$\rm{AdS_{4}}$ Hawking temperature despite the presence of $\chi$ in its denominator? 
Surely $\chi$ will vanish at temperatures below the Hawking-Page phase transition, creating a divergence? The reason formula (\ref{eq:2d_form}) works is that a dimensional reduction down to two dimensions removes all information connected with the phase transition, stabilising the thermodynamics, and hence the topology, so that $\chi$ always takes a finite, non-zero value. In the next section we prove this. \section{The Hawking-Page phase transition} \label{sec:Sec3} The free energy of a Euclideanised spacetime is known to be \cite{Ong} \begin{equation} \label{free_energy} F=-T\log Z \approx TS, \end{equation} where $Z$ is the partition function, and $TS$ the product of temperature and the Euclidean action. Using (\ref{free_energy}) one can see that for two Euclideanised spacetimes {\em at the same temperature}, the one with the lower Euclidean action will have a lower free energy and hence will be thermodynamically favoured. We now derive the Hawking-Page phase transition in four dimensions by comparing the Euclidean actions of two spacetimes: $\rm{AdS_{4}}$ space with and without a black hole \cite{Page,Ong}. This standard method will then be applied to a dimensionally-reduced spacetime, explaining the efficacy of our topological temperature formula (\ref{eq:2d_form}). The Einstein-Hilbert action has both Lorentzian and Euclidean versions, differing from each other by a global minus sign. The four-dimensional Euclidean action is given by \begin{equation} \label{Eucl_act} S_{E}=-\frac{1}{16\pi}\int d^4 x \sqrt{-g}\left( R-2\Lambda \right). \end{equation} Using (\ref{Eucl_act}), the Euclidean actions of S-$\rm{AdS_{4}}$ and $\rm{AdS_{4}}$ spacetimes can be compared, identifying the Hawking-Page phase transition which occurs when these two actions become equal \cite{Page}. The Euclidean actions can be derived straight from Lorentzian metrics.
Firstly, $\rm{AdS_{4}}$ spacetime can be described by \cite{Ong} \begin{equation} ds^2 = -\left( 1+\frac{r^2}{L^2} \right) dt^2 + \left( 1+\frac{r^2}{L^2} \right)^{-1} dr^2 + r^2 d\Omega^2, \end{equation} which, using (\ref{Eucl_act}), gives a Euclidean action of \begin{equation} S_{\mathrm{EAdS}}=-\frac{\Lambda}{8\pi}\int d^4 x \sqrt{-g} = -\frac{\Lambda}{6}\beta_{1}\xi^3, \end{equation} where the radial coordinate $r$ was integrated from zero to a cutoff $\xi$ to ensure finiteness, and time $t$ was integrated from zero to an arbitrary period $\beta_{1}$, the value of which can later be fixed by an asymptotic condition. The S-$\rm{AdS_{4}}$ black hole spacetime, described by \begin{widetext} \begin{equation} \label{SAdS_Lorentz} ds^2 = -\left( 1+\frac{r^2}{L^2} - \frac{2M}{r} \right) dt^2 + \left( 1+\frac{r^2}{L^2} - \frac{2M}{r} \right)^{-1} dr^2 + r^2 d\Omega^2, \end{equation} \end{widetext} has Euclidean action \begin{equation} S_{\mathrm{EAdS-Sch}} = -\frac{\Lambda}{6}\beta_{2}\left( \xi^3 - r_{h}^3 \right), \end{equation} where the radial integral was taken between $r_{h}$ and the cutoff $\xi$, and over time period $\beta_{2}$ (the inverse of the S-$\rm{AdS_{4}}$ black hole's temperature). The Hawking-Page phase transition can now be derived by finding where the two actions defined above become equal, at fixed temperature. The metric tensors must match at the cutoff $\xi$, whose limit will be taken as $\xi \rightarrow \infty$, enforcing the same asymptotic geometry for the two metrics \cite{Page,Ong}. This asymptotic constraint fixes the time period $\beta_{1}$ by \begin{equation} \beta_{1}\sqrt{1+\frac{\xi^2}{L^2}}=\beta_{2}\sqrt{1-\frac{2M}{\xi}+\frac{\xi^2}{L^2}}. \end{equation} The difference between the two actions in the asymptotic limit is \begin{equation} \lim_{\xi \rightarrow \infty} \left( S_{\mathrm{EAdS-Sch}} - S_{\mathrm{EAdS}} \right) = -\frac{\beta_{2}\left( r_{h}^3 - ML^2 \right)}{2L^2}.
\end{equation} Clearly, when this difference in actions is positive, thermal $\rm{AdS_{4}}$ space becomes thermodynamically favoured over the existence of a black hole. This occurs when $r_{h}^3 < ML^2$. Substituting in the S-$\rm{AdS_{4}}$ black hole mass defined earlier leads to the constraint $r_{h}^2 < L^2$. It is at the radius $r_{h}=L$ that the black hole's temperature equals $T_{\mathrm{H}}=T_{crit}=1/(\pi L)$; this is where the Hawking-Page phase transition occurs. Interestingly, unlike the Schwarzschild black hole, the S-$\rm{AdS_{4}}$ black hole has a minimum temperature $T_{min}=\sqrt{3}/(2\pi L)$; it is between temperatures $T_{crit}$ and $T_{min}$ that pure radiation in $\rm{AdS_{4}}$ space is thermodynamically favoured over the existence of a black hole. We now adapt the above proof to a dimensionally-reduced two-dimensional action, showing that the phase transition vanishes. This explains the predictive power of equation (\ref{eq:2d_form}). The S-$\rm{AdS_{4}}$ metric is dimensionally reduced by simply removing the angular section of metric (\ref{SAdS_Lorentz}) to obtain \begin{equation} \label{eq:trunc} ds^2 = -\left( 1+\frac{r^2}{L^2} - \frac{2M}{r} \right) dt^2 + \left( 1+\frac{r^2}{L^2} - \frac{2M}{r} \right)^{-1} dr^2. \end{equation} To perform the new thermodynamical analysis, the two-dimensional Euclidean Einstein-Hilbert action must be used: \begin{equation} \label{eq:2d_EH} S_{E}=-\int d^2 x \sqrt{-g}R. \end{equation} The Ricci scalar $R$ in the above action is defined using metric (\ref{eq:trunc}). The Euclidean action of dimensionally-reduced S-$\rm{AdS_{4}}$ spacetime can now be found from (\ref{eq:trunc}) and (\ref{eq:2d_EH}): \begin{equation} S_{\mathrm{EAdS-Sch}}=-\int_{0}^{\beta_{2}}dt \int_{r_{h}}^{\xi}dr R, \end{equation} giving \begin{equation} \label{eq_comp1} S_{\mathrm{EAdS-Sch}} = 2\beta_{2}\left( -\frac{r_{h}}{L^2} - \frac{M}{r_{h}^2} +\frac{M}{\xi^2} + \frac{\xi}{L^2} \right).
\end{equation} The Euclidean action of the reduced $\rm{AdS_{4}}$ spacetime, described by metric (\ref{eq:trunc}) at $M=0$, is \begin{equation} \label{eq_comp2} S_{\mathrm{EAdS}} = -\int_{0}^{\beta_{1}}dt \int_{0}^{\xi}dr R = \frac{2\beta_{1}\xi}{L^2}, \end{equation} and, as before, we must constrain the time periods using \begin{equation} \beta_{1}\sqrt{1+\frac{\xi^2}{L^2}}=\beta_{2}\sqrt{1-\frac{2M}{\xi}+\frac{\xi^2}{L^2}} \end{equation} in order to compare the free energies of the two spacetimes consistently. The difference in actions in the asymptotic limit, using (\ref{eq_comp1}) and (\ref{eq_comp2}), is \begin{equation} \lim_{\xi \to \infty} \left( S_{\mathrm{EAdS-Sch}} - S_{\mathrm{EAdS}} \right) = -\frac{2\beta_{2}}{L^2 r_{h}^2}\left( r_{h}^3 + ML^2 \right). \end{equation} This is a new result, showing that the S-$\rm{AdS_{4}}$ black hole in the dimensionally-reduced description is thermodynamically unfavoured only when the following holds: $r_{h}^3 < -ML^2$. As this can clearly never be satisfied, the black hole is completely stable, thermodynamically and hence topologically, in the dimensionally-reduced description. In this description, the Euclideanised spacetime has $\chi=1$ for any black hole temperature above its minimum value. This provides the reason why our two-dimensional formula (\ref{eq:2d_form}) gives the correct Hawking temperature after reduction whereas an attempt at a full four-dimensional description falters. The applicability of temperature formula (\ref{eq:2d_form}) to other types of AdS black holes is demonstrated in the Appendix. \section{Conclusions} In this work we have shown that the topological method of determining Hawking temperatures is only applicable for S-$\rm{AdS_{4}}$ black holes after a dimensional reduction down to two dimensions. This is due to the Hawking-Page phase transition.
This result, along with our previous work, suggests that the Hawking temperatures of a large number of, and most probably all, four-dimensional black holes can be studied using a two-dimensional topological approach. This characteristic of black hole systems hints at a previously unrecognized lower-dimensional encoding of horizon temperature information, reminiscent of the known area law of entropy and the holographic principle. A thermodynamical analysis of the dimensionally-reduced S-$\rm{AdS_{4}}$ black hole spacetime was carried out, showing that the Hawking-Page phase transition is no longer present in this description. The phase transition in four dimensions destabilises the Euclideanised topology, making a manifold with vanishing Euler characteristic thermodynamically favoured. In the reduced description, the Euler characteristic is always non-zero, allowing a topological two-dimensional Hawking temperature formula to be employed. It would be interesting to investigate the topological description of black hole thermodynamics further in future work, perhaps in more exotic and extreme spacetimes. \section*{Acknowledgments} L.D.M.V. acknowledges support from EPSRC (UK, Grant No. EP/L015110/1) under the auspices of the Scottish Centre for Doctoral Training in Condensed Matter Physics. F.B. acknowledges funding from the German Max Planck Society for the Advancement of Science (MPG), in the framework of the International Max Planck Partnership (IMPP) between Scottish Universities and MPG. \section*{Appendix} In the main text of this paper we have looked at only one type of $\rm{AdS_{4}}$ black hole, namely the S-$\rm{AdS_{4}}$ black hole with spherical event horizon. In fact, a whole family of black holes exists in $\rm{AdS_{4}}$ spacetime with a wide variety of features. This family can be divided into three classes: black holes with spherical, hyperbolic, or flat event horizons.
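Before turning to these examples, we note that the dimensionally-reduced action computation of the main text can be reproduced symbolically. The sketch below (using sympy; the symbol names are ours, not the paper's) recomputes the two-dimensional Ricci scalar, the two Euclidean actions, the period matching at the cutoff, and the asymptotic difference in actions:

```python
import sympy as sp

# Symbols mirror the text: r_h the horizon radius, xi the radial cutoff,
# beta_2 the Euclidean time period of the black-hole spacetime.
r, rh, xi, L, M, beta2 = sp.symbols('r r_h xi L M beta_2', positive=True)

f = 1 + r**2/L**2 - 2*M/r        # metric function of the reduced S-AdS4 metric
R = -sp.diff(f, r, 2)            # 2D Ricci scalar of ds^2 = -f dt^2 + f^{-1} dr^2

# Euclidean actions S = -int dt int dr R, as in the 2D Einstein-Hilbert action
S_bh = -beta2 * sp.integrate(R, (r, rh, xi))
beta1 = beta2 * sp.sqrt(f.subs(r, xi) / (1 + xi**2/L**2))  # period matching at r = xi
S_ads = -beta1 * sp.integrate(-2/L**2, (r, 0, xi))         # pure AdS2: R = -2/L^2

# Asymptotic difference in actions; equals -2*beta_2*(r_h^3 + M*L^2)/(L^2*r_h^2)
diff_actions = sp.limit(sp.simplify(S_bh - S_ads), xi, sp.oo)
```

The intermediate expression for `S_bh` reproduces (\ref{eq_comp1}) term by term, and the limit reproduces the stability result quoted above.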
In this Appendix we will study, using topology, the Hawking temperature of two interesting anti-de Sitter black hole solutions: those with torus-shaped flat event horizons and those with negatively curved, hyperbolic horizons. One black hole from each class of the $\rm{AdS_{4}}$ family has therefore been studied with our topological method in this work, in each case giving the correct temperature. The toral black hole in $\rm{AdS_{4}}$ spacetime has vanishing Euler characteristic owing to its horizon's compactness and its genus value of $g=1$. A zero value of $\chi$ precludes a topological Hawking temperature study, as discussed in the main text, but the metric can be dimensionally reduced by simply removing the toral section, after which formula (\ref{eq:2d_form}) can be used. The toral AdS black hole has a four-dimensional Euclidean metric \cite{Ong2}: \begin{equation} \begin{gathered} ds^2 = \left( \frac{r^2}{L^2} - \frac{2M}{\pi K^2 r} \right) dt^2 + \left( \frac{r^2}{L^2} - \frac{2M}{\pi K^2 r} \right)^{-1} dr^2 \\ + r^2 \left( d\zeta^2 + d\xi^2 \right), \end{gathered} \end{equation} where $K$ is a compactification parameter acting on the torus. As the torus has genus $g=1$, the Euler characteristic of the full manifold is zero; crucially, however, dimensionally reducing to two dimensions gives \begin{equation} \label{toral_reduc} ds^2 = \left( \frac{r^2}{L^2} - \frac{2M}{\pi K^2 r} \right) dt^2 + \left( \frac{r^2}{L^2} - \frac{2M}{\pi K^2 r} \right)^{-1} dr^2, \end{equation} changing the topology to one with $\chi=1$. Inserting metric (\ref{toral_reduc}) into our temperature formula (\ref{eq:2d_form}) gives the correct result for the toral black hole \cite{Birm} \begin{equation} T_{\mathrm{H}}=\frac{3r_{h}}{4\pi L^2}, \end{equation} where $r_{h}=\left( 2ML^2/\pi K^2 \right)^{1/3}$. Black holes with hyperbolic horizons are also known to exist in $\rm{AdS_{4}}$ spacetime.
These seem rather unphysical (their mass can reach negative values), but for completeness we treat them here. We look at one particular example, treated in \cite{Ong2}, of an $\rm{AdS_{4}}$ black hole with a compact hyperbolic horizon of genus $g=2$. This horizon leads to a negative Euler characteristic: $\chi=2-2g=-2$. The spacetime is quite complicated, with the number of horizons depending on certain parameter values \cite{Vanzo}. Here, for simplicity, we choose a black hole with parameter values fixing the number of horizons to one and with a positive mass $M>0$. The hyperbolic $\rm{AdS_{4}}$ black hole metric, after a dimensional reduction leaving only the $g_{rr}$ and $g_{tt}$ sections of the line element, is \begin{equation} \label{eq:hyperb} ds^2 = \left( -1+\frac{r^2}{L^2} - \frac{2M}{r} \right) dt^2 + \left( -1+\frac{r^2}{L^2} - \frac{2M}{r} \right)^{-1} dr^2. \end{equation} Dimensionally reducing from the full four-dimensional metric to (\ref{eq:hyperb}) alters the Euler characteristic from $\chi=-2$ to $\chi=1$. Again, a Ricci scalar $R$ can be found for this geometry, which, when inserted into (\ref{eq:2d_form}), leads to the known Hawking temperature \begin{equation} T_{\mathrm{H}}=\frac{3r_{h}^2 - L^2}{4\pi L^2 r_{h}}. \end{equation}
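Both temperatures quoted in this Appendix also follow from the standard surface-gravity relation $T_{\mathrm{H}}=f'(r_h)/4\pi$ for a metric function $f(r)$, which provides an independent check on formula (\ref{eq:2d_form}). A minimal sympy sketch (the on-shell masses are obtained by solving $f(r_h)=0$):

```python
import sympy as sp

r, rh, L, M, K = sp.symbols('r r_h L M K', positive=True)

def hawking_T(f):
    """Surface-gravity temperature T = f'(r_h)/(4*pi) for ds^2 = -f dt^2 + f^{-1} dr^2."""
    return sp.diff(f, r).subs(r, rh) / (4*sp.pi)

# Toral AdS4 black hole, evaluated on shell with r_h = (2 M L^2 / (pi K^2))^(1/3)
f_toral = r**2/L**2 - 2*M/(sp.pi*K**2*r)
M_toral = sp.pi*K**2*rh**3/(2*L**2)       # from solving f_toral(r_h) = 0 for M
T_toral = sp.simplify(hawking_T(f_toral).subs(M, M_toral))  # 3 r_h / (4 pi L^2)

# Hyperbolic (g = 2) AdS4 black hole
f_hyp = -1 + r**2/L**2 - 2*M/r
M_hyp = rh*(rh**2/L**2 - 1)/2             # from solving f_hyp(r_h) = 0 for M
T_hyp = sp.simplify(hawking_T(f_hyp).subs(M, M_hyp))  # (3 r_h^2 - L^2)/(4 pi L^2 r_h)
```

Both outputs reproduce the temperatures above.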
\section{Introduction}\label{sec:intro} Quasisymmetric stellarators\cite{nuhren1988,boozer1981,rodriguez2020} (sometimes referred to as helically symmetric\cite{tessarotto1995,tessarotto1996}) constitute an attractive choice for magnetic confinement fusion. Theoretically, such designs exhibit transport properties analogous to those of axisymmetric devices\cite{boozer1983} while possessing greater three-dimensional freedom. This freedom allows some of the inherent limitations of tokamaks to be avoided. The quasisymmetric stellarator achieves this by possessing a hidden symmetry: the magnitude of the magnetic field $|\mathbf{B}|$ is symmetric, while the full vector $\mathbf{B}$ is not. \par The concept of quasisymmetry (QS) is elegant, but it appears to have a significant theoretical limitation. It was soon realized\cite{garrenboozer1991b} that constructing a configuration with exact QS is not possible. Although there is no definitive proof of this impossibility, work on near-axis expansions\cite{garrenboozer1991b,landreman2018a} supports this point of view. The governing system of equations is overdetermined: there are more constraints than degrees of freedom. This limitation does not, however, prevent designs that exhibit behavior close to QS in a volume.\cite{bader2019,henneberg2019} \par Recently, studies that explore the concept of QS more deeply have appeared.\cite{rodriguez2020,burby2021} The main idea has been to separate the concept of QS from that of macroscopic force balance. In these studies, QS is defined as a property of the configuration that confers on the single-particle dynamics an approximately conserved quantity, without making any statement about the form of macroscopic equilibrium. This perspective differs significantly from the standard approach.
Prior to [\onlinecite{rodriguez2020}] and [\onlinecite{burby2019}] (and with very few exceptions\cite{tessarotto1996,burby2013}), the concept of QS was framed in the context of magnetohydrostatic (MHS) equilibria satisfying $\mathbf{j}\times\mathbf{B}=\nabla p$, where $\mathbf{j}=\nabla\times\mathbf{B}$ is the current density and $p$ is the scalar pressure. As MHS is not intrinsic to QS, it is important to define QS without reference to a particular form of equilibrium. Separating QS from equilibrium allows us to investigate its meaning and limitations more deeply. \par Abandoning the convenient form of MHS equilibrium, although conceptually appropriate, comes at a cost. The magnetic field no longer needs to satisfy the condition $\mathbf{j}\cdot\nabla p=0$, implicitly assumed in most of the literature\cite{nuhren1988,boozer1983,tessarotto1995}. Hence, one cannot guarantee the existence of Boozer coordinates\cite{boozer1981} as presently understood, even if magnetic flux surfaces exist. Boozer coordinates are particularly convenient for studying QS, as they present the symmetry in an explicit, simple form. This motivates the search for an analogous, convenient, but more general coordinate system for quasisymmetric configurations. \par In this paper, we construct such a coordinate system, which we call \textit{generalized Boozer coordinates} (GBC). This coordinate system was used to formulate the near-axis expansion in [\onlinecite{rodriguez2020i}], in which a QS equilibrium with anisotropic pressure was shown to avoid the conventional problem of overdetermination. The present paper is organized as follows. We first introduce, develop and discuss this coordinate system systematically and rigorously, starting with a constructive proof for GBC and the class of fields for which it exists.
We then present the set of equations describing a quasisymmetric magnetic field in this coordinate system. This gives us the opportunity to gain an alternative\cite{burby2019,rodriguez2020} perspective on the distinction between the so-called weak and strong forms of quasisymmetry, as well as a comparison to axisymmetry. We close with a summary of the equations linking the equilibrium and the quasisymmetric field, and with concluding remarks. \section{Generalized Boozer coordinates} \subsection{Explicitly symmetric formulation of quasisymmetry} \label{sec:GBCQS} Let us start this section by introducing the notion of QS. We consider QS from the recent general perspective of single-particle motion.\cite{rodriguez2020,burby2021} QS (and in particular \textit{weak} QS) is the property of the fields that confers on the guiding-centre motion of single particles an approximately conserved quantity.\cite{rodriguez2020} For the particle dynamics to exhibit this conservation it is necessary for the magnetic field to have nested flux surfaces $\mathbf{B}\cdot\nabla\psi=0$ and satisfy, \begin{equation} \nabla\psi\times\nabla B\cdot\nabla\left(\mathbf{B}\cdot\nabla B\right)=0. \label{eqn:tripleQS} \end{equation} This form of quasisymmetry is most commonly referred to as the \textit{triple vector product formulation}. Although this is not the form that comes directly from the single-particle analysis (that form is the one used in Sec.~IIIA), it is a succinct way to impose QS on a magnetic field. \par {{} Given that this generalized concept of QS has a single-particle origin, no notion of macroscopic equilibrium is involved. Of course, from a practical point of view, any steady field of interest will be in some form of force balance. With the definition of QS in (\ref{eqn:tripleQS}), different forms of equilibria may be investigated to understand how they interact with QS.
One may generally refer to the macroscopic forces by $\mathbf{F}$, and we do not attempt to assess their origin in this paper. Instead, we focus on the requirements at a fluid level. A complete view of the problem would require an investigation of the kinetic basis of the forces, so as to link the fluid forces to microscopic behavior. This is particularly important as microscopic forces could break the symmetry (and with it, the QS-related conserved momenta). An important example is the electrostatic potential, whose symmetry is needed to the appropriate order and which imposes constraints even on the forces that arise from the plasma, such as the centrifugal force or anisotropic pressure. With this in mind, we shall focus on the macroscopic aspects.} \par Looking at the statement of QS in the form presented in Eq.~(\ref{eqn:tripleQS}), the existence of a symmetry in the problem is not apparent. In the usual context of $\mathbf{j}\times\mathbf{B}=\nabla p$, this absence of an obvious symmetry can be amended by using Boozer coordinates. In these coordinates, a field in magnetostatic equilibrium with well-defined flux surfaces is quasisymmetric (aside from quasipoloidal symmetry) iff, \begin{equation} B=B(\psi,\theta-\Tilde{\alpha}\phi), \label{eqn:BQSBooz} \end{equation} where $\{\psi,\theta,\phi\}$ are Boozer coordinates, $\psi$ the flux surface label, $\theta$ and $\phi$ poloidal and toroidal angles respectively, and $\Tilde{\alpha}=N/M$ with $M,N\in\mathbb{Z}$. In order to employ Boozer coordinates, the $\mathbf{j}\cdot\nabla\psi=0$ property of MHS equilibria is central. Boozer coordinates {{} have the standard straight field-line coordinate Jacobian \begin{equation} \mathcal{J}=\frac{B_\phi+\iota B_\theta}{B^2}, \label{eqn:jacCoord} \end{equation} where the covariant $B_\theta$ and $B_\phi$ are flux functions, \begin{equation*} \mathbf{B}=B_\theta(\psi)\nabla\theta+B_\phi(\psi)\nabla\phi+B_\psi\nabla\psi.
\end{equation*}} Boozer coordinates are widely used in stellarator theory and applications. They are a natural set that simplifies many of the governing equations, including that of QS. In particular, the construction of solutions by near-axis expansion of three-dimensional fields is most convenient in these coordinates\cite{garrenboozer1991a,garrenboozer1991b,landreman2018a,rodriguez2020i,rodriguez2020ii}. \par In the context of our general quasisymmetric $\mathbf{B}$, it is not necessary to assume that $\mathbf{j}\cdot\nabla\psi=0$. However, we demonstrate that any quasisymmetric field must satisfy $\oint (\mathrm{d}l/B)(\mathbf{j}\cdot\nabla\psi)=0$ (see Appendix A). The question that naturally arises is: can we construct an appropriate straight field line coordinate system that explicitly expresses the symmetry in QS in the Boozer fashion? \par The answer is yes. To see this, we write Eq.~(\ref{eqn:tripleQS}) in straight-field line coordinates $\{\psi,\theta,\phi\}$ using the contravariant representation $\mathbf{B}=\nabla\psi\times(\nabla\theta-\iota\nabla\phi)$, \begin{multline*} (\nabla\psi\times\nabla\theta\:\partial_\theta B+\nabla\psi\times\nabla\phi\:\partial_\phi B)\cdot\\ \cdot\nabla(\mathcal{J}^{-1}(\partial_\phi B+\iota \partial_\theta B))=0, \end{multline*} where $\mathcal{J}$ is the Jacobian associated with the chosen coordinate system. Now assume that we can construct a straight-field line coordinate system with a Jacobian of the form $\mathcal{J}=\mathcal{J}(\psi,B)$; then the Jacobian factor may be dropped from the equation above to obtain, \begin{gather*} (\nabla\psi\times\nabla\theta\:\partial_\theta B+\nabla\psi\times\nabla\phi\:\partial_\phi B)\cdot\nabla(\partial_\phi B+\iota \partial_\theta B)=0 \\ \text{i.e.}\quad \left[\partial_\phi-\left(\frac{\partial_\phi B}{\partial_\theta B}\right)\partial_\theta\right](\partial_\phi+\iota\partial_\theta)B=0 \end{gather*} assuming that $\partial_\theta B\neq0$.
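As a quick consistency check, magnitudes of the quasisymmetric form $B=B(\psi,\theta-\Tilde{\alpha}\phi)$ with constant $\Tilde{\alpha}$ indeed satisfy this last operator equation. A minimal sympy sketch on a single flux surface ($\psi$ dependence suppressed; the symbol names are ours):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
iota, alpha_t = sp.symbols('iota alpha_t')  # alpha_t stands in for the constant tilde-alpha
F = sp.Function('F')                        # arbitrary profile in the symmetry variable

B = F(theta - alpha_t*phi)                  # candidate quasisymmetric field magnitude

# (partial_phi + iota*partial_theta) B
inner = sp.diff(B, phi) + iota*sp.diff(B, theta)
# [partial_phi - (partial_phi B / partial_theta B) partial_theta] acting on the result
lhs = sp.diff(inner, phi) - (sp.diff(B, phi)/sp.diff(B, theta))*sp.diff(inner, theta)

lhs_simplified = sp.simplify(lhs)           # vanishes identically
```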
(This makes quasi-poloidally symmetric solutions, for which $\partial_\theta B=0$, a special case.) From the near-axis expansion we know that such solutions have a very restricted QS region\cite{rodriguez2020}. \par Commuting the operators, we obtain \begin{equation*} (\partial_\phi+\iota\partial_\theta)\left(\frac{\partial_\phi B}{\partial_\theta B}\right)\partial_\theta B=(\mathbf{b}\cdot\nabla)\left(\frac{\partial_\phi B}{\partial_\theta B}\right)\partial_\theta B=0 \end{equation*} which implies that $B=B(\psi,\theta-\Tilde{\alpha}\phi)$ or $B=B(\psi,\phi)$, where $\Tilde{\alpha}=-\partial_\phi B/\partial_\theta B$ is a flux function. The additional requirement that $\Tilde{\alpha}$ be rational, to avoid $B=B(\psi)$ on non-degenerate surfaces, requires $\Tilde{\alpha}$ to be constant.\cite{rodriguez2020} \par In summary, if we are able to construct a straight field line coordinate system whose Jacobian can be written as a function of $\psi$ and $B$ only, then a field is QS in the \textit{weak} sense if and only if $B$ depends on a single linear combination of the coordinate angles. Note that under this assumption, the reverse direction of the proof is straightforward. \subsection{Constructing generalized Boozer coordinates} \label{sec:GBCconstr} We define \textit{generalized Boozer coordinates} (GBC) as a set of straight field line coordinates whose Jacobian can be written in the form $\mathcal{J}=B_\alpha(\psi)/B^2$, where $B_\alpha$ is some flux function, without requiring $\mathbf{j}\cdot\nabla\psi$ to vanish identically. This choice is more general than what is permitted by Boozer coordinates, which separately require $B_\theta$ and $B_\phi$ to be flux functions. We shall now constructively explore the conditions under which such a coordinate system exists. \par Let us start with a given magnetic field, assuming that it lies on well-defined flux surfaces.
It then follows\cite{kruskuls1958,boozer1981,Helander2014} that there exists some initial straight field line coordinate system $\{\psi,\theta,\phi\}$, so that $\mathbf{B}=\nabla\psi\times\nabla\theta+\iota\nabla\phi\times\nabla\psi$. The definition of the angular straight field line coordinates is arbitrary up to a transformation of the form \begin{gather*} \theta=\theta'+\iota\omega, \\ \phi=\phi'+\omega. \end{gather*} The function $\omega=\omega(\psi,\theta,\phi)$ is a well-behaved periodic function that defines a family of transformations\cite{Helander2014}. The periodicity of $\omega$ preserves the toroidal and poloidal character of the two angular coordinates. \par Starting with some given straight-field line coordinate system, we want to understand how to construct a set with a Jacobian of the GBC form. Thus, we need to know how the coordinate Jacobian transforms under $\omega$. The transformed Jacobian reads \begin{equation*} \mathcal{J}'^{-1}=\nabla\psi\times\nabla(\theta-\iota\omega)\cdot\nabla(\phi-\omega). \label{jacTrans} \end{equation*} Here $\{\psi,\theta',\phi'\}$ represents the newly defined straight field line coordinates whose associated Jacobian is $\mathcal{J}'$. This equation may be recast into the form of a magnetic differential equation, \begin{equation} \mathbf{B}\cdot\nabla\omega=\frac{1}{\mathcal{J}}-\frac{1}{\mathcal{J}'}. \label{eqn:magDifEqOr} \end{equation} Now, let us require\footnote{A more general form $\mathcal{J}'=\mathcal{J}'(\psi,B)$ could have been demanded. However, one may show in that case that the system may always be cast into the form employed here.} \begin{equation} \mathcal{J}'=\frac{B_\alpha(\psi)}{B^2}. \label{eqn:GBCjac} \end{equation} In order for the magnetic differential equation Eq.~(\ref{eqn:magDifEqOr}) to have a single-valued solution for the transformation function $\omega$, its source must satisfy Newcomb's criterion\cite{newcomb1959}.
According to Newcomb, for a magnetic differential equation $\mathbf{B}\cdot\nabla f=s$ to have a single-valued solution for $f$, the source term $s$ must satisfy the line-integral condition along any closed field line, \begin{equation} \oint s\frac{\mathrm{d}l}{B}=0. \end{equation} For our problem, $s=1/\mathcal{J}-1/\mathcal{J}'$. Consider first \begin{equation*} \oint \frac{1}{\mathcal{J}}\frac{\mathrm{d}l}{B}= \oint\mathbf{B}\cdot\nabla\phi\frac{\mathrm{d}l}{B}=\oint \mathrm{d}\phi = 2\pi n. \end{equation*} Here we have used the definition of the Jacobian in terms of the magnetic field, $\mathbf{B}\cdot\nabla\phi=\nabla\psi\times\nabla\theta\cdot\nabla\phi=1/\mathcal{J}$, where $\phi$ and $\theta$ are, respectively, toroidal and poloidal angular coordinates that increase by $2\pi$ in going around the torus the long and short way. We also considered the rotational transform $\iota=n/m$, where $n,m\in\mathbb{Z}$ are coprime. \par Now, consider Newcomb's condition on the last term on the right-hand side of Eq.~(\ref{eqn:magDifEqOr}), and define the closed line integral $\mathcal{I}$ so that \begin{equation} \oint \frac{1}{\mathcal{J}'}\frac{\mathrm{d}l}{B}=\frac{\mathcal{I}}{B_\alpha(\psi)}. \end{equation} For Newcomb's condition on Eq.~(\ref{eqn:magDifEqOr}) to hold, the following must be true: \begin{equation} B_\alpha(\psi)=\frac{\mathcal{I}}{2\pi n}. \label{eqn:BaDef} \end{equation} With this choice, \begin{equation} \oint \mathbf{B}\cdot\nabla\omega\frac{\mathrm{d}l}{B}=0, \end{equation} and a single-valued solution $\omega$ must then exist that enacts the coordinate transformation into the set associated with the Jacobian (\ref{eqn:GBCjac}). \par For Eq.~(\ref{eqn:BaDef}) to hold, it is necessary for $\mathcal{I}$ to be a flux function.
This condition may be written in the form \begin{equation} \mathcal{I}(\psi)=\oint\mathbf{B}\cdot\mathrm{d}\mathbf{r}, \end{equation} \begin{figure} \includegraphics[width=5cm]{ribbonGBC.jpg} \caption{\textbf{Ribbon surface.} Ribbon defined by two closely lying rational magnetic field lines labelled by $C_1$ and $C_2$.} \label{fig:ribbonGBC} \end{figure} the integral being taken along a magnetic field line. We focus on the case of rational field lines, for which the condition is most stringent. To proceed further, consider a non-self-intersecting ribbon over a flux surface bounded by two adjacent magnetic field lines (see Fig.~\ref{fig:ribbonGBC}). Denote these two field lines by $C_1$ and $C_2$. Using Stokes' theorem, we may then write \begin{equation} \oint_{C_1}\mathbf{B}\cdot\mathrm{d}\mathbf{r}-\oint_{C_2}\mathbf{B}\cdot\mathrm{d}\mathbf{r}=\int_\mathrm{rib}\mathbf{j}\cdot\mathrm{d}\mathbf{S}, \label{eqn:ribbonInt} \end{equation} where the surface element is taken to be perpendicular to the flux surface. The integral over the surface may be written as\cite{Helander2014} \begin{equation*} \int_\mathrm{rib}\mathbf{j}\cdot\mathrm{d}\mathbf{S}=\int_{\alpha_0}^{\alpha_0+\delta\alpha}\mathrm{d}\alpha\oint\frac{\mathrm{d}l}{B}\mathbf{j}\cdot\nabla\psi. \end{equation*} Here $\alpha$ labels the field lines on the surface. Now, if $\mathcal{I}$ is to be truly a flux function, then following (\ref{eqn:ribbonInt}) the last surface integral must vanish, and it must do so for all field-line labels $\alpha_0$ and separations $\delta\alpha$. This gives the necessary and sufficient condition \begin{equation} \oint\frac{\mathrm{d}l}{B}\mathbf{j}\cdot\nabla\psi=0. \label{eqn:subclassGBC} \end{equation} This subclass of magnetic fields grants the required form of $\mathcal{I}$, and thus the single-valued solution to Eq.~(\ref{eqn:magDifEqOr}).
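The solvability argument can be illustrated numerically. On a single surface, multiplying Eq.~(\ref{eqn:magDifEqOr}) by $\mathcal{J}$ puts the magnetic differential equation in the form $(\partial_\phi+\iota\partial_\theta)\,\omega=g$ in straight-field-line angles, which inverts mode by mode in a Fourier basis. The sketch below uses a toy source $g$ standing in for $\mathcal{J}(1/\mathcal{J}-1/\mathcal{J}')$, chosen with vanishing average in line with Newcomb's condition, and an irrational $\iota$ so that no resonant denominators appear:

```python
import numpy as np

# Spectral solve of (d/dphi + iota d/dtheta) omega = g on one flux surface.
N = 64
iota = (np.sqrt(5) - 1) / 2                    # irrational rotational transform
theta = np.linspace(0, 2*np.pi, N, endpoint=False)
phi = np.linspace(0, 2*np.pi, N, endpoint=False)
TH, PH = np.meshgrid(theta, phi, indexing='ij')

g = np.cos(TH - 2*PH) + 0.3*np.sin(2*TH + PH)  # toy source with no (0,0) Fourier mode

G = np.fft.fft2(g)
m = np.fft.fftfreq(N, d=1.0/N)                 # poloidal mode numbers
n = np.fft.fftfreq(N, d=1.0/N)                 # toroidal mode numbers
Mm, Nn = np.meshgrid(m, n, indexing='ij')
denom = 1j*(Nn + iota*Mm)
denom[0, 0] = 1.0                              # dummy; the (0,0) source mode vanishes
W = G / denom
W[0, 0] = 0.0
omega = np.real(np.fft.ifft2(W))               # single-valued, periodic solution

# Residual of the magnetic differential equation, evaluated spectrally
res = np.real(np.fft.ifft2(1j*(Nn + iota*Mm)*W)) - g
```

The residual vanishes to machine precision, confirming that a single-valued $\omega$ exists once the zero-average (Newcomb) condition on the source holds.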
This subclass includes the stronger constraint $\mathbf{j}\cdot\nabla\psi=0$ (for which the coordinates reduce to Boozer coordinates), or more to the point for our purposes, the QS scenario (see Appendix A). Note that magnetic fields that possess nested surfaces have the property $\langle\mathbf{j}\cdot\nabla\psi\rangle=0$, which follows upon flux-surface averaging the identity $\mathbf{j}\cdot\nabla\psi=\nabla\cdot(\mathbf{B}\times\nabla\psi)$. However, the Newcomb condition (\ref{eqn:subclassGBC}) is more stringent.\cite{newcomb1959} \par Restricting ourselves to this subclass of fields, we can always choose $B_\alpha$ as in (\ref{eqn:BaDef}). This choice might appear artificial, especially given the presence of the discrete $n$. However, we may relate it to the surface average over irrational flux surfaces. To see this, we write \begin{multline} B_\alpha=\frac{1}{2\pi n}\oint B^2\frac{\mathrm{d}l}{B}=\frac{1}{4\pi^2 n}\int_0^{2\pi}\mathrm{d}\alpha\underbrace{\oint B^2\frac{\mathrm{d}l}{B}}_{n~\mathrm{turns}}=\\ =\frac{1}{4\pi^2}\int_0^{2\pi}\mathrm{d}\alpha\underbrace{\oint B^2\frac{\mathrm{d}l}{B}}_{1~\mathrm{turn}}=\frac{V'}{4\pi^2}\langle B^2\rangle, \label{eqn:BaIrrat} \end{multline} where $V'=\int_0^{2\pi}\mathrm{d}\alpha\oint\mathrm{d}l/B$ is the usual derivative of the volume with respect to $\psi$. $B_\alpha$ is, therefore, a well-behaved quantity on both rational and irrational surfaces. \par As it stands, with an appropriate choice of $B_\alpha$, and restricting ourselves to the subclass of fields satisfying Eq.~(\ref{eqn:subclassGBC}), one may perform a coordinate transformation that provides a Jacobian of the desired form (\ref{eqn:GBCjac}). It remains to be shown that this new coordinate system is well-behaved, by which we mean that the Jacobian neither vanishes nor diverges. To see this, it is most useful to rewrite $\mathcal{J}'$ in terms of (\ref{eqn:BaIrrat}), \begin{equation} \mathcal{J}'=\frac{B_\alpha}{B^2}=\frac{V'}{4\pi^2}\frac{\langle B^2\rangle}{B^2}.
\end{equation} It is clear that $\mathcal{J}'$ has a definite sign, given by the sign of $V'$. Thus, the Jacobian will neither vanish nor diverge, given that $B^2>0$. \subsection{Magnetic field in generalized Boozer coordinates} We have shown that, under the assumption that the magnetic field is quasisymmetric, there exists a straight-field-line coordinate system, GBC, with Jacobian $\mathcal{J}=B_\alpha(\psi)/B^2$. In this coordinate system, and following Sec.~\ref{sec:GBCQS}, a quasisymmetric field is one whose magnitude can be expressed in GBC as $B=B(\psi,\theta-\Tilde{\alpha}\phi)$. This form is analogous to the Boozer formulation of QS, but relies only on the configuration being QS, not on $\mathbf{j}\cdot\nabla\psi=0$. \par Before proceeding to analyze some of the properties of QS and other governing equations in GBC, we explicitly write the covariant and contravariant forms of $\mathbf{B}$. The covariant form is \begin{gather} \mathbf{B}=B_\theta\nabla\theta+(B_\alpha-\iota B_\theta)\nabla\phi+B_\psi\nabla\psi. \label{eqn:BinGBC} \end{gather} The usual covariant function $B_\phi$ has been replaced by $B_\alpha$, the flux function that appears in the Jacobian (\ref{eqn:GBCjac}). The simplicity of (\ref{eqn:BinGBC}) shows why it was convenient to choose the Jacobian of GBC to have the particular form in (\ref{eqn:GBCjac}). To obtain (\ref{eqn:BinGBC}), it is sufficient to take its scalar product with the contravariant representation, \begin{equation} \mathbf{B}=\nabla\psi\times\nabla\theta+\iota\nabla\phi\times\nabla\psi, \label{eqn:contra} \end{equation} and use the definition of $\mathcal{J}$. Compared to Boozer coordinates, the covariant function $B_\theta$ in GBC is not necessarily a flux function. Thus, GBC is an extension of Boozer coordinates.
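The consistency between the covariant form, the contravariant form and the GBC Jacobian can be checked in one line of algebra: projecting the covariant form onto $\mathbf{B}$, using $\mathbf{B}\cdot\nabla\theta=\iota/\mathcal{J}$, $\mathbf{B}\cdot\nabla\phi=1/\mathcal{J}$ and $\mathbf{B}\cdot\nabla\psi=0$, must return $B^2$. A minimal sympy sketch (symbols are ours):

```python
import sympy as sp

iota, J, B_alpha, B_theta, B = sp.symbols('iota J B_alpha B_theta B', positive=True)

# Project B_theta*grad(theta) + (B_alpha - iota*B_theta)*grad(phi) + B_psi*grad(psi)
# onto B, using B.grad(theta) = iota/J, B.grad(phi) = 1/J, B.grad(psi) = 0:
B_dot_B = B_theta*(iota/J) + (B_alpha - iota*B_theta)*(1/J)

# Demanding that this equal B^2 fixes the Jacobian to its GBC form B_alpha/B^2:
J_sol = sp.solve(sp.Eq(B_dot_B, B**2), J)
```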
When $\mathbf{j}\cdot\nabla\psi=0$, however, and as previously pointed out, GBC reduce to Boozer coordinates. So far, the forms in (\ref{eqn:BinGBC}) and (\ref{eqn:contra}) have only required the existence of GBC, and not QS per se ---other than to guarantee (\ref{eqn:subclassGBC}). To enforce the latter, one needs to specify $|\mathbf{B}|$ as a symmetric function.\cite{rodriguez2020i,rodriguez2020ii,sengupta2021} \section{Describing weak quasisymmetry in GBC} Having developed GBC, let us see what this coordinate system can teach us about \textit{weak} QS. We first write down the complete set of equations that describe a weakly quasisymmetric magnetic field. The first relevant equation equates the covariant (\ref{eqn:BinGBC}) and contravariant (\ref{eqn:contra}) forms of the magnetic field, \begin{multline} B_\theta\nabla\theta+(B_\alpha-\iota B_\theta)\nabla\phi+B_\psi\nabla\psi=\\ =\nabla\psi\times\nabla\theta+\iota\nabla\phi\times\nabla\psi. \label{eqn:contravariantEq} \end{multline} To specify that we are using GBC, and to introduce the quasisymmetric condition, we require \begin{equation} \nabla\psi\times\nabla\theta\cdot\nabla\phi=\frac{B_\alpha(\psi)}{B^2(\psi,\theta-\Tilde{\alpha}\phi)}. \label{eqn:jacobianEq} \end{equation} The set of equations (\ref{eqn:contravariantEq}) and (\ref{eqn:jacobianEq}) is entirely self-contained. It describes a general magnetic field (a vector field satisfying $\nabla\cdot\mathbf{B}=0=\mathbf{B}\cdot\nabla\psi$, without a particular form of equilibrium) that is quasisymmetric ---no more, no less. Such equations have recently been used in near-axis expansions with anisotropic pressure equilibria.\cite{rodriguez2020i,rodriguez2020ii} Equations (\ref{eqn:contravariantEq}) and (\ref{eqn:jacobianEq}), referred to as the \textit{co(ntra)variant} and \textit{Jacobian} equations respectively, were expanded systematically there.
A more thorough and complete exploration of the expansion of the magnetic equations and its implications will be presented in a separate publication. \par Equations~(\ref{eqn:contravariantEq}) and (\ref{eqn:jacobianEq}) apply beyond near-axis expansions. Alternatively, one could study the behavior of this system perturbatively around a surface rather than the axis.\cite{sengupta2021} This could shed some light on standard optimization approaches to QS\cite{henneberg2019,bader2019}. \par Beyond the set of equations (\ref{eqn:contravariantEq}) and (\ref{eqn:jacobianEq}), the formulation of weak QS in terms of GBC can be used to compare \textit{weak} QS to other (quasi)symmetric forms. This comparison to \textit{strong} QS and to axisymmetry will help frame the notion of \textit{weak} QS in the larger space of configurations. \subsection{Comparison to strong quasisymmetry} \label{sec:compweakstrong} \textit{Strong} QS is a more restrictive form of QS than its \textit{weak} counterpart\cite{burby2019,rodriguez2020}. Weak QS is the necessary and sufficient condition for the guiding-centre dynamics to have an \textit{approximately} conserved momentum to leading gyro-centre order.\cite{rodriguez2020} This condition can be written as in (\ref{eqn:tripleQS}), but is most naturally given as, \begin{gather} \mathbf{u}\cdot\nabla B=0, \label{eqn:u.dB}\\ \mathbf{B}\times\mathbf{u}=\nabla\Phi(\psi), \label{eqn:Bxu}\\ \nabla\cdot\mathbf{u}=0.\label{eqn:d.u} \end{gather} The vector field $\mathbf{u}$ is defined by these equations and points in the direction of symmetry.
In \textit{strong} QS the conserved momentum for the particle dynamics is exact for the first-order guiding centre Lagrangian.\cite{burby2019,rodriguez2020,tessarotto1996} In the notation that explicitly introduces the symmetry vector $\mathbf{u}$, strong QS is equivalent to weak QS ---i.e., Eqs.~(\ref{eqn:u.dB})-(\ref{eqn:d.u})--- plus the constraint \begin{equation} \mathbf{j}\times\mathbf{u}+\nabla C=0, \label{eqn:strongCond} \end{equation} where $C=\mathbf{B}\cdot\mathbf{u}$. Note that only the $\mathbf{b}$ component of this additional constraint is contained in the weak formulation of the problem. We now explore the significance of (\ref{eqn:strongCond}) in the context of GBC. \par To begin, we need to construct $\mathbf{u}$. From Eqs.~(\ref{eqn:u.dB})-(\ref{eqn:d.u}),\cite{rodriguez2020} \begin{align} \mathbf{u}=\Bar{\iota}\frac{\mathbf{\nabla}\psi\times \mathbf{\nabla}\chi}{\mathbf{B}\cdot \mathbf{\nabla}\chi}, \label{eqn:u_def} \end{align} where $\chi=\theta-\Tilde{\alpha}\phi$, $\Bar{\iota}=\iota-\Tilde{\alpha}$, and it is convenient to use $\chi$ as part of the coordinate triplet $\{\psi,\chi,\phi\}$. We have made the choice $\Phi'=\Bar{\iota}$ so that $\mathbf{u}\cdot\nabla\phi=1$, for simplicity. Rescaling the flux label in $\mathbf{u}$ leaves the weak QS conditions unchanged (see Appendix B for further discussion of the gauge and the particular choice made here). This form of $\mathbf{u}$ is equivalent to Eqs.~(\ref{eqn:u.dB})-(\ref{eqn:d.u}) only if we enforce $B=B(\psi,\chi)$ and the coordinate system satisfies $\mathbf{B}\cdot\nabla\chi=\Bar{\iota}\mathcal{J}^{-1}$ with the Jacobian in (\ref{eqn:GBCjac}).\par From (\ref{eqn:BinGBC}) and (\ref{eqn:u_def}), we find, \begin{equation} C=B_\alpha-\Bar{\iota}B_\theta.
\label{eqn:CandBtheta} \end{equation} This relation (\ref{eqn:CandBtheta}), together with (\ref{eqn:u_def}) and the definition of $C$, can be used to write the gauge-independent form \begin{equation} \mathbf{B}\cdot\nabla\psi\times\nabla B= \frac{B_\alpha-\Bar{\iota}B_\theta}{\Bar{\iota}}\mathbf{B}\cdot\nabla B. \label{eqn:prePaulForm} \end{equation} The magnetohydrostatic form of this equation has been used previously\cite{paul2020,Helander2014}. \par The contravariant form of $\mathbf{u}$ together with (\ref{eqn:contra}) gives simple expressions for directional derivatives in GBC. The differential operators can be written simply as partial derivatives, \begin{align} \mathbf{B}\cdot\mathbf{\nabla}= \mathcal{J}^{-1}\left(\Bar{\iota}\partial_\chi +\partial_\phi\right),\quad \mathbf{u}\cdot\mathbf{\nabla}=\partial_\phi. \label{eqn:BDel_uDel} \end{align} Since the Jacobian is quasisymmetric from (\ref{eqn:GBCjac}), the two operators $\mathbf{B}\cdot\mathbf{\nabla}$ and $\mathbf{u}\cdot\mathbf{\nabla}$ commute with each other\cite{burby2021}. This commutation property is made manifest in GBC. \par The symmetry field $\mathbf{u}$ can also be written in the following covariant form: \begin{align} \mathbf{u}= u_\psi \mathbf{\nabla}\psi +u_\chi \mathbf{\nabla}\chi +u_\phi \mathbf{\nabla}\phi. \label{eqn:u_deaf_covid} \end{align} Taking scalar products with $\mathbf{u}$ and $\mathbf{B}$, we obtain \begin{align} u_\phi =u^2 ,\quad \Bar{\iota}u_\chi + u_\phi = C\mathcal{J}. \label{eqn:u_covid_symptoms} \end{align} To complete the family of vectors required for the strong quasisymmetry condition (\ref{eqn:strongCond}), we need a closed form for $\mathbf{j}$ in GBC. From the curl of the covariant form of $\mathbf{B}$ in Eq.~(\ref{eqn:BinGBC}), we obtain \begin{equation} \mathbf{j}=\mathbf{\nabla}B_\psi\times \mathbf{\nabla}\psi + \mathbf{\nabla}B_\theta\times \mathbf{\nabla}\chi +\mathbf{\nabla}(B_\alpha-\Bar{\iota} B_\theta)\times \mathbf{\nabla}\phi.
\label{eqn:current_affairs} \end{equation} Using \eqref{eqn:current_affairs} and (\ref{eqn:CandBtheta}), we can show that \begin{multline} \mathbf{j}\times \mathbf{u}+\mathbf{\nabla} C = (\mathbf{u}\cdot\mathbf{\nabla}B_\psi) \mathbf{\nabla}\psi+(\mathbf{u}\cdot\mathbf{\nabla} B_\theta)\left( \mathbf{\nabla}\chi - \Bar{\iota}\mathbf{\nabla}\phi\right). \label{eqn:Jxu} \end{multline} The simplicity of (\ref{eqn:Jxu}) is due to the choice of GBC. \par Recall that strong QS requires the expression $ \mathbf{j}\times \mathbf{u}+\mathbf{\nabla} C$ to be identically zero. This means that all the terms on the right-hand side of \eqref{eqn:Jxu} need to vanish. That is to say, the covariant functions $(B_\psi, B_\theta)$ are required to be quasisymmetric. If $B_{\theta}$ is quasisymmetric, then $C$ is automatically so from \eqref{eqn:CandBtheta}. In an explicit coordinate representation, using (\ref{eqn:BDel_uDel}), we may write $B_\theta(\psi,\chi)$ and $B_\psi(\psi,\chi)$. \par Thus, the GBC representation provides an elegant way to formulate strong QS, which can now be understood as weak quasisymmetry plus the conditions that $B_\psi$ and $B_\theta$ are QS. In other words, not only is $B$ QS but so are $B_\theta$ and $B_\psi$. \par \paragraph{Implications for near-axis expansion.} We refer to [\onlinecite{rodriguez2020i}] and [\onlinecite{rodriguez2020ii}] for a detailed treatment using near-axis expansions of the weakly quasisymmetric problem. The procedure is based on expanding all governing equations describing the weak quasisymmetric field in some form of equilibrium in powers of the distance from the magnetic axis. To do so efficiently, both the field and equations --including Eqs.~(\ref{eqn:contravariantEq}) and (\ref{eqn:jacobianEq})-- are expressed in GBC. It was shown there that when equilibria with anisotropic pressure are considered, the common overdetermination problem\cite{garrenboozer1991b,landreman2018a} that limits the expansion is overcome.
The number of governing equations becomes the same as that of functions to be solved. \par Extending the expansion to strong QS is straightforward. From the discussion above, the only difference is that the covariant functions $B_\theta$ and $B_\psi$ are both quasisymmetric rather than general functions of space. In practical terms, this simply leads to more restricted Taylor-Fourier expansions of those functions; the coefficients that were functions of $\phi$ become constants. This restriction in freedom once again leads us back to the \textit{Garren-Boozer overdetermination} problem. In fact, it does so in the same way as in the case of MHS equilibrium. The restriction on the covariant functions imposes very severe constraints on the allowed geometry. The only way to escape this impasse is to assume axisymmetry ($\phi$ independence). Once again, consistent with what we observed in [\onlinecite{rodriguez2020i}], the asymmetry of the covariant representation of $\mathbf{B}$ appears to be vital to the construction of QS solutions. \par \subsection{Comparison to axisymmetry} We have seen that the strong formulation of QS is more constraining than its weak form. We would also like to compare QS to the limiting case of axisymmetry. We shall think of this case as a symmetry generated by rotation in space: the system is invariant under rotations about an axis. In Euclidean space, a rotation is an isometry, and it is generated by a vector field known as a Killing vector. Using the notion of a Killing vector, we want to explore how `far' the weak concept of QS is from this `true' symmetry. \par A measure of the departure of a symmetry generator from a Killing vector is the so-called deformation metric.\cite{burby2019} Taking $\mathbf{u}$ to represent the symmetry vector for QS, the idea is to see how far it is from being a Killing vector. A vector field $\mathbf{v}$ is Killing if and only if the deformation tensor vanishes, $\mathcal{L}_{\mathbf{v}}g=0$.
Here $\mathcal{L}_{\mathbf{u}}$ denotes the Lie derivative along $\mathbf{u}$ and $g$ is the Euclidean metric. In 3D, this may be written as, \begin{equation} \mathcal{L}_{\mathbf{u}}g=\nabla\mathbf{u}+(\nabla\mathbf{u})^T. \end{equation} Evaluating this tensor for a quasisymmetric configuration should then provide information regarding the closeness to an isometry. It is convenient\cite{burby2019} to evaluate this rank-2 tensor in a basis defined by $\{\mathbf{B},\mathbf{u},\nabla\psi\}$, a triplet that we shall take to be independent. Then, \begin{align} \left[\nabla\mathbf{u}+(\nabla\mathbf{u})^T\right]\cdot\mathbf{B}=&\mathbf{j}\times\mathbf{u}+\nabla C, \label{eqn:Lg1}\\ \left[\nabla\mathbf{u}+(\nabla\mathbf{u})^T\right]\cdot\mathbf{u}=&\nabla u^2-\mathbf{u}\times\nabla\times\mathbf{u}, \label{eqn:wDef}\\ \left[\nabla\mathbf{u}+(\nabla\mathbf{u})^T\right]\cdot\nabla\psi=&\nabla\psi\times\nabla\times\mathbf{u}+2\nabla\psi\cdot\nabla\mathbf{u}, \label{eqn:Lg3} \end{align} where the \textit{weak} quasisymmetric properties have been used where necessary. Equation~(\ref{eqn:wDef}) is what Burby et al. call $\mathbf{w}$.\cite{burby2021} We shall explore this vector $\mathbf{w}$ in more detail after obtaining an explicit form for $\mathcal{L}_{\mathbf{u}}g$. Using (\ref{eqn:Lg1})-(\ref{eqn:Lg3}), and projecting once again onto the non-orthogonal basis triplet, we obtain \begin{equation} \mathcal{L}_{\mathbf{u}}g=\begin{pmatrix} 0 & \mathbf{u}\cdot\nabla C & \nabla\psi\cdot(\mathbf{j}\times\mathbf{u}+\nabla C) \\ \dots & \mathbf{u}\cdot\mathbf{w} & \mathbf{w}\cdot\nabla\psi \\ \dots & \dots & -\mathbf{u}\cdot\nabla|\nabla\psi|^2 \end{pmatrix}. \end{equation} The matrix is symmetric by construction. The content of its elements can be made clearer using GBC explicitly. \par The top row, corresponding to (\ref{eqn:Lg1}), has already been dealt with, as it is precisely the piece corresponding to strong QS.
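As a minimal symbolic illustration of the Killing criterion used in this comparison (our own sketch, not part of the derivation), one can check in Cartesian components, where $\mathcal{L}_{\mathbf{v}}g$ reduces to $\partial_i v_j + \partial_j v_i$, that the generator of rotations about the $z$-axis has a vanishing deformation tensor while a generic stretching field does not:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

def deformation(v, coords=(x, y, z)):
    # Cartesian deformation tensor: (L_v g)_ij = d_i v_j + d_j v_i
    J = sp.Matrix([[sp.diff(vj, ci) for vj in v] for ci in coords])
    return sp.simplify(J + J.T)

# Rotation generator about the z-axis: an exact isometry (Killing vector)
assert deformation((-y, x, 0)) == sp.zeros(3, 3)

# A field that stretches space is not Killing
assert deformation((x, 0, 0)) != sp.zeros(3, 3)
```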
We made the observation that for this condition to be satisfied, (\ref{eqn:Jxu}) required $\mathbf{u}\cdot\nabla B_\theta=0=\mathbf{u}\cdot\nabla B_\psi$. \par For the other components, the key object is the vector field $\mathbf{w}= \mathbf{\omega_u}\times \mathbf{u} + \mathbf{\nabla} u^2$, where $\mathbf{\omega_u}=\mathbf{\nabla}\times \mathbf{u}$. Using the covariant form of $\mathbf{u}$ we obtain the curl of the vector $\mathbf{u}$ in the form \begin{align} \mathbf{\omega_u}=\mathbf{\nabla}u_\psi\times \mathbf{\nabla}\psi + \mathbf{\nabla}u_\chi\times \mathbf{\nabla}\chi +\mathbf{\nabla}u_\phi\times \mathbf{\nabla}\phi. \label{eqn:omega_u_def} \end{align} Taking the cross product with $\mathbf{u}$, using the orthogonality of $\mathbf{u}$ with $\mathbf{\nabla}\psi$ and $\mathbf{\nabla}\chi$, and (\ref{eqn:CandBtheta}) and (\ref{eqn:u_covid_symptoms}), we get \begin{multline} \mathbf{w} = (\mathbf{u}\cdot\mathbf{\nabla}u_\psi)\mathbf{\nabla}\psi + (\mathbf{u}\cdot\mathbf{\nabla}u^2)\left(\mathbf{\nabla}\phi - \frac{1}{\Bar{\iota}}\mathbf{\nabla}\chi\right)+\\ -\mathcal{J}(\mathbf{u}\cdot\mathbf{\nabla}B_\theta)\mathbf{\nabla}\chi, \label{eqn:w_useful_expression} \end{multline} which implies that \begin{align} \mathbf{B}\cdot \mathbf{w}=& -\Bar{\iota}\mathbf{u}\cdot\mathbf{\nabla}B_\theta, \label{eqn:BDotw_expression}\\ \mathbf{u}\cdot\mathbf{w}=&\mathbf{u}\cdot\nabla u^2. \end{align} Most importantly, a vanishing $\mathbf{w}$ implies that the covariant components of the symmetry vector as well as $B_\theta$ are quasisymmetric. \par To complete the simplification of the metric tensor, we invoke $B^2=(C^2+|\nabla\psi|^2)/u^2$, which follows from the definition of $\mathbf{u}$. This means, \begin{equation} \mathbf{u}\cdot\nabla|\nabla\psi|^2=B^2\mathbf{u}\cdot\nabla u^2-\mathbf{u}\cdot\nabla C^2. \end{equation} With this coordinate representation, the dependence of the various metric pieces is made explicit.
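The identity $B^2=(C^2+|\nabla\psi|^2)/u^2$ invoked above can also be verified numerically. The sketch below (an illustration of ours, not taken from the text) assumes the representation $\mathbf{u}=(C\mathbf{B}+\mathbf{B}\times\nabla\psi)/B^2$, which reproduces the defining properties $\mathbf{u}\cdot\mathbf{B}=C$ and $\mathbf{u}\cdot\nabla\psi=0$; the numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary magnetic field and a flux-surface gradient perpendicular to it
B = rng.normal(size=3)
gpsi = rng.normal(size=3)
gpsi -= (gpsi @ B) / (B @ B) * B        # enforce B . grad(psi) = 0
C = 1.7                                  # u . B, an arbitrary value here

B2 = B @ B
u = (C * B + np.cross(B, gpsi)) / B2     # assumed weak-QS symmetry vector

# Defining properties of u
assert np.isclose(u @ B, C)
assert np.isclose(u @ gpsi, 0.0)
# The identity used in the text: B^2 = (C^2 + |grad psi|^2) / u^2
assert np.isclose(B2, (C**2 + gpsi @ gpsi) / (u @ u))
```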
We may then schematically present the dependence of $\mathcal{L}_{\mathbf{u}}g$ as follows, \begin{equation} \mathcal{L}_{\mathbf{u}}g\sim\begin{pmatrix} 0 & \boxed{\partial_\phi B_\theta} & \boxed{\substack{\partial_\phi B_\theta,~\partial_\phi B_\psi}} \\ \dots & \boxed{\partial_\phi u^2} & \boxed{\substack{\partial_\phi u^2,~\partial_\phi u_\psi\\\partial_\phi B_\theta}} \\ \dots & \dots & \boxed{\substack{\partial_\phi B_\theta,~\partial_\phi u^2}} \end{pmatrix}. \label{eqn:LugSim} \end{equation} The boxed expressions are meant to indicate that the corresponding tensor component vanishes if the expressions there do. If the tensor (\ref{eqn:LugSim}) were to vanish, the symmetry vector would correspond to a rotation. This is not surprising if one looks at what it means for the components in (\ref{eqn:LugSim}) to vanish. Axisymmetry is reached when the covariant components of the magnetic field and the symmetry vector are themselves symmetric. The latter is intimately related to the geometry, as we may see when writing $\mathbf{u}\propto\partial_\phi \mathbf{x}|_{\chi,\psi}$. \par From (\ref{eqn:LugSim}), it follows that in some sense, weak QS is far from being an isometry. This is so because only one of the components of the tensor exactly vanishes. The $\phi$ dependence of the functions $B_\psi$, $B_\theta$, $u^2$, and $u_\psi$ takes the configuration away from axisymmetry. These four apparent degrees of freedom (especially those involving $\mathbf{u}$) may not be independent and involve highly non-linear combinations---they should ultimately be related through the quasisymmetric magnetic equations. In any case, the field-line dependence is key in distinguishing the weakly quasisymmetric form from, say, an axisymmetric tokamak. \par To make a comparative measurement of the departure from axisymmetry, consider now the case of strong QS. In this case, following (\ref{eqn:strongCond}), the first whole row (and thus also column) of (\ref{eqn:LugSim}) drops.
The remaining dependence also simplifies, and the system is precluded from being an isometry, a priori, through the $\phi$ dependence of $u^2$ and $u_\psi$ only, which is consistent with the work by Burby et al.\cite{burby2019} \par Imposing additional properties on the field may also affect the form of the deformation tensor. An example would be a particular form of force balance. We now explore how the magnetics and equilibria are linked. \par \section{Quasisymmetry and equilibria} Let us consider the force balance part of the problem. Generally, a magnetic equilibrium with some arbitrary force $\mathbf{F}$ reads, \begin{equation} \mathbf{j}\times\mathbf{B}=\mathbf{F}. \label{eqn:Fbal} \end{equation} As we argued in Sec.~II, we are concerned in this work with a general fluid force $\mathbf{F}$; its connection to the microphysics is not considered. Let us express the left-hand side of (\ref{eqn:Fbal}) in GBC. Using the contravariant form for the current (\ref{eqn:current_affairs}) together with (\ref{eqn:CandBtheta}), we obtain \begin{multline} \mathbf{j}\times \mathbf{B} = \left[\mathbf{B}\cdot\mathbf{\nabla}B_\psi-\mathcal{J}^{-1}\left( B_\alpha' -\Bar{\iota}' B_\theta\right)\right] \mathbf{\nabla}\psi+\\ +(\mathbf{B}\cdot\mathbf{\nabla} B_\theta) \left( \mathbf{\nabla}\chi -\Bar{\iota} \mathbf{\nabla}\phi\right), \label{eqn:JxB_simplified} \end{multline} which is an explicit coordinate representation of $\mathbf{j}\times\mathbf{B}$. The form of (\ref{eqn:JxB_simplified}) mirrors that of (\ref{eqn:Jxu}); in this case, the magnetic differential operators replace the directional derivatives along $\mathbf{u}$. We note that \eqref{eqn:JxB_simplified} does not have any component along $\mathbf{B}$, as can be checked by taking the dot product with $\mathbf{B}$. \par The form of (\ref{eqn:JxB_simplified}) puts constraints on the allowable forms for $\mathbf{F}$. As already noted, $\mathbf{F}\cdot\mathbf{B}=0$ must hold true.
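The statement that (\ref{eqn:JxB_simplified}) has no component along $\mathbf{B}$ -- and hence that $\mathbf{F}\cdot\mathbf{B}=0$ is forced -- can be checked numerically. The sketch below (our own illustration; the gradient vectors and the value of $\bar{\iota}$ are arbitrary) only assumes the contravariant relations $\mathbf{B}\cdot\nabla\psi=0$, $\mathbf{B}\cdot\nabla\chi=\bar{\iota}\mathcal{J}^{-1}$ and $\mathbf{B}\cdot\nabla\phi=\mathcal{J}^{-1}$, which imply that both $\nabla\psi$ and $\nabla\chi-\bar{\iota}\nabla\phi$ are perpendicular to $\mathbf{B}$:

```python
import numpy as np

rng = np.random.default_rng(1)
iota_bar = 0.83                          # an arbitrary value of iota - alpha~

# Random, non-degenerate coordinate gradients at a single point
gpsi, gchi, gphi = rng.normal(size=(3, 3))
Jinv = gpsi @ np.cross(gchi, gphi)       # 1/J from the triple product

# A field with B.grad(psi)=0, B.grad(chi)=iota_bar/J, B.grad(phi)=1/J
B = iota_bar * np.cross(gphi, gpsi) + np.cross(gpsi, gchi)
assert np.isclose(B @ gpsi, 0.0)
assert np.isclose(B @ gchi, iota_bar * Jinv)
assert np.isclose(B @ gphi, Jinv)

# The covariant direction grad(chi) - iota_bar grad(phi) is also
# perpendicular to B, so any force built from grad(psi) and this
# combination automatically satisfies F . B = 0
assert np.isclose(B @ (gchi - iota_bar * gphi), 0.0)
```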
Otherwise the system would be imbalanced, as the magnetic field is unable to exert forces along $\mathbf{B}$. Because of this reduction in the dimensionality of $\mathbf{F}$ and in view of (\ref{eqn:JxB_simplified}), it is convenient to write \begin{align} \mathbf{F}=F_\psi \nabla \psi + F_\alpha \left(\nabla \chi - \Bar{\iota}\nabla \phi\right). \label{eq:F_form} \end{align} An alternative form would be to use the contravariant form of $\mathbf{B}$ \eqref{eqn:contra} in \eqref{eqn:Fbal} to get \begin{align} \mathbf{F}= \left(\mathbf{j}\cdot\mathbf{\nabla}\chi-\Bar{\iota}\,\mathbf{j}\cdot\mathbf{\nabla}\phi \right)\nabla \psi -(\mathbf{j}\cdot\mathbf{\nabla}\psi) \left(\nabla \chi - \Bar{\iota}\nabla \phi\right). \label{eq:F_n_jdel} \end{align} Substituting \eqref{eqn:JxB_simplified} and \eqref{eq:F_form} into \eqref{eqn:Fbal} we get two magnetic differential equations \begin{subequations} \begin{align} &\mathbf{B}\cdot\mathbf{\nabla}B_\psi=F_\psi + \mathcal{J}^{-1}\left(B_\alpha'-\Bar{\iota}' B_\theta\right), \label{eqn:MDE_B_psi}\\ &\mathbf{B}\cdot\mathbf{\nabla} B_\theta=F_\alpha= -\mathbf{j}\cdot\mathbf{\nabla}\psi. \label{eqn:MDE_B_theta} \end{align} \label{eqn:MDE_B_psi_B_theta} \end{subequations} Therefore, the generalized force-balance condition is equivalent to two magnetic differential equations (MDEs) and $\mathbf{B\cdot F}=0$. If solutions to these equations can be found together with the magnetic equations (\ref{eqn:contravariantEq}) and (\ref{eqn:jacobianEq}), we will have obtained a quasisymmetric configuration in equilibrium. \par Let us describe in more detail the implications of these equations. Consider first the simpler Eq.~(\ref{eqn:MDE_B_theta}), which has two pieces to it. First, and regardless of the assumed form for $F_\alpha$, it follows from weak QS that $\mathbf{B}\cdot\nabla B_\theta=-\mathbf{j}\cdot\nabla\psi$ (see Appendix A as well). This imposes the condition of Sec.~\ref{sec:GBCconstr} on the field.
Secondly, the component $F_\alpha$ of the forcing $\mathbf{F}$ directly sets the off-surface current. This means that\cite{newcomb1959} \begin{equation} \oint \frac{d\ell}{B} F_\alpha =0. \label{eqn:Newcomb_F_alpha} \end{equation} We may not choose $F_\alpha$ arbitrarily. It must satisfy (\ref{eqn:Newcomb_F_alpha}) if the force is to be consistent with QS -- a condition analogous to (\ref{eqn:subclassGBC}). Then the magnetic differential equation can be satisfied, and one can directly relate $B_\theta$ and $F_\alpha$ up to a flux function. In Fourier representation, it is clear that the $\phi$ content of $B_\theta$ will be non-zero only if that of $F_\alpha$ is (and vice versa). In light of (\ref{eqn:LugSim}), choosing $\partial_\phi(\mathbf{j}\cdot\nabla\psi)=0$ brings the quasisymmetric configuration closer to an isometry. This freedom in the form of $B_\theta$ does not exist in the strong formulation of QS, for which $\mathbf{j}\cdot\nabla\psi$ is independent of the field line label. \par A similar analysis applies to (\ref{eqn:MDE_B_psi}). The appropriate Newcomb condition in this case is, \begin{align} \oint \frac{d\ell}{B} \left[F_\psi + \mathcal{J}^{-1}\left(B_\alpha'-\Bar{\iota}' B_\theta\right) \right]=0. \label{eqn:Newcomb_F_psi} \end{align} This condition may be understood as an averaged radial equilibrium equation. A similar solvability condition can be found, for the special case of MHS, in [\onlinecite{tessarotto1995}], where the notion of a QS field is presented as one that satisfies the Newcomb conditions. Given Eq.~(\ref{eqn:Newcomb_F_psi}), Eq.~(\ref{eqn:MDE_B_psi}) relates $B_\psi,~B_\theta$ (or $F_\alpha$) and $F_\psi$. Once again, we see the close relationship between the forcing, the magnetic covariant representation, and the deviation from axisymmetry. A $\phi$ dependence of $B_\psi$ will bring a finite deviation of $\mathbf{u}$ from being a Killing vector.
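The role of these Newcomb conditions as solvability requirements can be made concrete with a small spectral computation. In GBC, $\mathbf{B}\cdot\nabla$ acts on a harmonic $e^{i(m\chi+n\phi)}$ as multiplication by $i(\bar{\iota}m+n)\mathcal{J}^{-1}$, so a magnetic differential equation is solvable by Fourier division whenever the source carries no resonant content. The sketch below is our own illustration (a constant Jacobian, absorbed into the source, and an arbitrary non-resonant source are assumed) and solves $(\bar{\iota}\,\partial_\chi+\partial_\phi)f=g$:

```python
import numpy as np

M = 32
iota_bar = 0.618                          # irrational-like rotational transform
chi = 2 * np.pi * np.arange(M) / M
CHI, PHI = np.meshgrid(chi, chi, indexing='ij')

# Source with zero field-line average (no (m, n) = (0, 0) content):
# the discrete analogue of the Newcomb solvability condition
g = np.cos(CHI - PHI) + 0.5 * np.sin(2 * CHI)

m = np.fft.fftfreq(M, d=1.0 / M)          # integer mode numbers
mm, nn = np.meshgrid(m, m, indexing='ij')
denom = 1j * (iota_bar * mm + nn)         # symbol of iota_bar d_chi + d_phi

ghat = np.fft.fft2(g)
fhat = np.zeros_like(ghat)
mask = np.abs(denom) > 1e-10              # skip the resonant (0, 0) mode
fhat[mask] = ghat[mask] / denom[mask]
f = np.real(np.fft.ifft2(fhat))

# Verify that the magnetic differential equation is satisfied
residual = np.real(np.fft.ifft2(denom * fhat)) - g
assert np.max(np.abs(residual)) < 1e-10
```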
However, it will also force $F_\alpha$ and $F_\psi$ to have a $\phi$ dependence, which may require very particular shaping of the forces. This observation is consistent with the Constantin-Drivas-Ginsberg theorem\cite{constantin2021}, in which the forcing is seen to be intimately related to the deviation from an isometry. Here the asymmetric geometry, quasisymmetry, and the forcing are all intimately connected. \par When the magnetic differential equations imposing force balance are brought together with the magnetic equations from the previous section, it is not obvious how the system of equations is to be interpreted: what is to be taken as an input and what should be solved for. As an analogy, in the Grad-Shafranov equation, it is clear that $p$ and $F$ are inputs, and $\psi$ is the output. In the present problem, we have the construction of GBC in addition to the various magnetic covariant forms and the components of $\mathbf{F}$. Motivated by the treatment in [\onlinecite{rodriguez2020}] (which deals with a special case of the above), we propose, as a possible well-posed formulation, that (\ref{eqn:MDE_B_theta}) be solved for $B_\theta$ given $F_\alpha$, while $F_\psi$ is the output of (\ref{eqn:MDE_B_psi}), with the function $B_\psi$ specified by the magnetic equations. It is not a trivial matter to determine a convenient way in which to formulate the problem. A more elaborate discussion of this procedure and its implications for constructing solutions is left to future work. \subsection{Ideal MHD: $\mathbf{j}\times \mathbf{B} =\mathbf{\nabla}p(\psi)$} \label{sec:ideal_MHD_static} Let us now revisit the limit of ideal MHD without flows, $\mathbf{j}\times \mathbf{B}=\mathbf{\nabla} p$. More general forms will be discussed elsewhere, together with a more systematic treatment of the quasisymmetric system of equations. \par In ideal MHD, from $\mathbf{j}\cdot\nabla p(\psi)=0$ it follows that \begin{gather} \mathbf{B}\cdot\mathbf{\nabla} B_\theta=0.
\end{gather} Thus, taking $F_\alpha=0$ forces $B_\theta$ to be a flux function. Of course, this also means that $\mathbf{u}\cdot\mathbf{\nabla} B_\theta=0$. Furthermore, as $F_\psi=p'$ in static ideal MHD, \eqref{eqn:JxB_simplified} leads to the magnetic differential equation for $B_\psi$, \begin{align} \mathbf{B}\cdot\mathbf{\nabla} B_\psi = p'(\psi) + \mathcal{J}^{-1}\left(B_\alpha'-\Bar{\iota}' B_\theta\right). \label{eqn:MDE_Bpsi_MHD} \end{align} Since $B_\psi$ must be a single-valued function, the flux-surface average of \eqref{eqn:MDE_Bpsi_MHD} gives \begin{align} p'(\psi) +\frac{\langle B^2\rangle}{B_\alpha}\left(B_\alpha'-\Bar{\iota}' B_\theta\right)=0. \label{eqn:iota_from_MDE_Bpsi} \end{align} If we choose the forms of $B$, $p$, and $\iota$, this pins down the form of $B_\alpha$. Now, looking back to \eqref{eqn:MDE_Bpsi_MHD}, every term on the right-hand side is quasisymmetric. Therefore, $B_\psi$ must also be quasisymmetric if it is to satisfy the force-balance equation. Note that we have already recognized this constraining requirement on the form of $B_\psi$ as the origin of the Garren-Boozer overdetermination problem \cite{rodriguez2020i}. The Newcomb condition on this equation can be recognized as the condition to avoid Pfirsch-Schl\"{u}ter current singularities. \par The simplifications due to ideal static MHD lead to the vanishing of \eqref{eqn:Jxu}. Therefore, in this limit weak QS is identical to strong QS. \par One can further show using \eqref{eqn:current_affairs}, \eqref{eqn:BinGBC}, \eqref{eqn:u_def} and \eqref{eqn:MDE_Bpsi_MHD}, \begin{align} \mathbf{j}= -\frac{1}{\Bar{\iota}}\partial_\psi (B_\alpha-\Bar{\iota}B_\theta)\mathbf{B}-\frac{1}{\Bar{\iota}}p'(\psi) \mathbf{u}, \label{eq:JB_JQ_def} \end{align} where the gauge choice $\Phi'=\Bar{\iota}$ has been made for $\mathbf{u}$. The expression for $\mathbf{j}$ ought to be $\mathbf{u}$-gauge independent, as it is a physical quantity.
The $\mathbf{B}$ piece as written is gauge independent, but the $\mathbf{u}$ term is not. The $\Bar{\iota}$ factor in the latter is to be interpreted as $\Phi'$. This equation has been obtained previously\cite{burby2019,burby2021} using coordinate-free differential forms (see Appendix C). Two special cases of ideal MHD, a) vacuum ($\mathbf{j}=0$) and b) force-free ($\mathbf{j}=\lambda(\psi,\alpha)\mathbf{B}$), are worth highlighting for their importance in plasma physics. For both of these cases, $p'(\psi)=0$. From \eqref{eq:JB_JQ_def} we see that for the magnetic field to be curl-free (vacuum) and quasisymmetric, $C'(\psi)=0$; i.e., $C$ must be a constant. For quasisymmetric force-free fields, we must have $\lambda= -C'(\psi)$. Note that in strong QS these conclusions follow directly from the equation $\mathbf{j\times u}+\mathbf{\nabla}C=0$ with $\mathbf{j}=0$ and $\mathbf{j}=\lambda \mathbf{B}$. \section{Conclusions} In this paper, we have presented, defined, and discussed a straight field line coordinate system that is natural for the representation of general-equilibria quasisymmetric magnetic fields: \textit{generalized Boozer coordinates}. We proved the existence of this coordinate system for the subset of fields for which $\oint\mathbf{j}\cdot\nabla\psi\,\mathrm{d}l/B=0$, to which quasisymmetric fields belong. These coordinates reduce to Boozer coordinates when $\mathbf{j}\cdot\nabla\psi=0$. \par The explicit form of the symmetry in this coordinate representation enables a simple formulation of the quasisymmetric problem. We explicitly construct the governing equations, clearly setting the foundation for future investigations, including expansion\cite{rodriguez2020i,rodriguez2020ii} and global approaches. Exploiting GBC, we explicitly show the essential differences between the weak and strong formulations of QS and between quasisymmetry and axisymmetry.
Weak QS generally lies far from axisymmetry, which would require many of the functions describing the field and the symmetry vector to be themselves symmetric. \par To complete the treatment, we also included a set of simple magnetic differential equations that fully describe equilibrium with an arbitrary macroscopic force. The property of QS, together with the force-balance structure, imposes requirements on the forcing terms in the form of Newcomb conditions. In addition, the equations establish clear connections between QS, forcing, and departures from axisymmetry. \hfill \section*{Acknowledgements} We are grateful to J. Burby, N. Duigan, J. Meiss, and D. Ginsburg for stimulating discussions. This research is supported by grants from the Simons Foundation/SFARI (560651, AB) and DoE Contract No.~DE-AC02-09CH11466. \section*{Data availability} Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\section*{Methods Summary}\label{Data and model} We produced images for all 30 individual {\it Chandra}\ pointings (Figure~1; see Online Methods for details), and spectra were extracted over the 0.3--8.0\hbox{$\rm\thinspace keV$}\ energy band for each of the four lensed images in each observation (all energies are quoted in the observed frame unless stated otherwise). Previous studies\cite{ChartasKochanek2012quasar} have demonstrated that certain lensed images/epochs might suffer from a moderate level of pileup\cite{pileup2010}. As such, we exclude spectra that displayed any significant level of pileup in all further analysis (see Online Methods for details and Extended~Data~Figs.~7,8). The remaining spectra sample a period of $\sim8$~years, which allows for both a time-resolved and time-averaged analysis of {RX~J1131-1231}. We also analyse a deep {\it XMM-Newton}\ observation taken in July 2013, which provides an average spectrum of the four lensed images over the 0.3--10.0\,keV energy range.\\ \noindent\textbf{Acknowledgements~} R.R. thanks the Michigan Society of Fellows and NASA for support through the Einstein Fellowship Program, grant number PF1-120087. All authors thank the ESA {\it XMM-Newton}\ Project Scientist Norbert Schartel and the {\it XMM-Newton}\ planning team for carrying out the DDT observation. The scientific results reported in this article are based on data obtained from the {\it Chandra}\ Data Archive.\\ \noindent\textbf{Author Contributions~} R.R. performed the data reduction and analysis of all the data reported here. The {\it XMM-Newton}\ data was reduced by both R.R and M.R. The pileup study was carried out by R.R, J.M and M.R. The text was composed, and the paper synthesised by R.R, with help from D.W and M.R. The smoothed subpixel images were made by R.R and M.R.
All authors discussed the results and commented on the manuscript.\\ \noindent\textbf{Author Information~} Reprints and permissions information is available at www.nature.com/reprints. The authors declare that they have no competing financial interests. Correspondence and requests for materials should be addressed to R.~C.~Reis.~(email: rdosreis@umich.edu).\\ \vspace{1cm} \setcounter{firstbib}{0}
\section{Introduction} Large-scale networked dynamical systems play a crucial role in many emerging engineering systems such as the power grid \citep{fang2011smart}, autonomous vehicle platoons \citep{li2015overview}, and swarm robots \citep{morgan2014model}. Two salient features of such systems that lead to major challenges for controller design are scalability and information/communication constraints. \emph{First}, scalability is a significant challenge since the number of subsystems in large networked systems is growing exponentially for modern applications. As a result, designing local and distributed control algorithms is especially crucial. Extensive work has investigated scalable control algorithm design for large-scale systems \citep{wang2014localized,zheng2017scalable,sturz2020distributed,matni2016regularization,wang2018separable}. \emph{Second}, a consequence of the growing scale is that information and communication constraints, e.g., resulting from delay, impose significant structural constraints on controller design \citep{fardad2014design,han2003lmi}. Specifically, each subsystem only observes delayed partial state feedback as opposed to instantaneous global state feedback in large-scale networked systems. The structural constraints resulting from this are generally non-convex even for linear suboptimal controller design \citep{rotkowitz2005characterization} and the optimal controller design problem for general information constraints remains open \citep{witsenhausen1968counterexample}. Therefore, a large body of work has investigated the challenging problem of control design for communication constraints with convex reformulation under special cases \citep[\& references therein]{zheng2020equivalence}, with System Level Synthesis (SLS) \citep{anderson2019system} emerging as a promising approach. Control methods seeking to address the two challenges above heavily rely on the knowledge of the system model. 
However, as engineering systems become more complex with larger scales, it is restrictive to assume perfect knowledge of the underlying model. Therefore, learning techniques are required. Recently there has been growing interest in learning distributed controllers for unknown networked linear time-invariant (LTI) systems \citep{bu2019lqr, faradonbeh2022joint, furieri2020learning, ye2021sample, li2021distributed}. However, since most existing works port centralized learning-based control techniques over to the distributed case, they a priori assume that the underlying dynamics are stable, or that a stabilizing distributed controller is known. For a large-scale networked system, such assumptions are typically unrealistic, since designing stabilizing distributed controllers is itself a nontrivial task, as described above. Further, until now, scalability and information constraints have only been considered separately; no general approach exists. In this work, we address scalability and information constraints simultaneously for unknown networked systems and address the following fundamental question: \begin{center} \textit{Is it possible to scalably stabilize an unknown networked system \\ with communication delay under adversarial disturbances? } \end{center} We remark that stability is one of the central goals for the control of dynamical systems. Many engineering problems, such as set-point tracking in chemical processes or altitude maintenance in flight control, can be cast and solved as stabilization problems. Therefore, the primary goal of control design is stabilization. Learning a stabilizing controller when the dynamics are unknown has been shown to be challenging even in the centralized case, and many recent works focus on this issue alone, e.g., see \cite{zhang2021adversarially,lamperski2020computing, perdomo2021stabilizing}.
\paragraph{Contributions.} In this paper, we propose the first online algorithm that provably stabilizes a networked LTI system under adversarial disturbances without any prior knowledge of the true dynamics (Theorem \ref{thrm:main}). The proposed algorithm (Algorithm \ref{alg:main}) is fully distributed and handles a large class of information constraints, while also scaling favorably to the number of subsystems in the network. In particular, this work presents the first distributed stabilization technique for unknown systems under adversarial disturbances, and does not require identifying the underlying system. As demonstrated in Appendix \ref{sec:sysid}, system identification-based stabilization methods incur prohibitively large state norm due to adversarial disturbances. On the other hand, our approach significantly improves the system behavior because it does not require full excitation of the system. The proposed algorithm is built on a distributed version of the nested convex body Steiner point chasing algorithm for selecting dynamics parameters that are consistent with online observations at each subsystem locally. The selected consistent parameters are then used for local distributed controller synthesis where we adopt SLS as the control strategy. Each local SLS synthesis problem is a low-dimensional optimization problem that uses delayed information. Previous applications of SLS in learning-based control problems have only leveraged the SLS parameterization result \citep{boczar2018finite,dean2020sample,umenberger2020optimistic}. Our work is the first to explore the controller realization result for SLS, where it is instrumental to the analysis and the scalability of the algorithm. Therefore we shed new light on the usefulness of SLS for learning and control problems with information constraints. 
The main result of this paper is the following stability guarantee (\Cref{thrm:main}), \begin{align*} \max \{\|x(t)\|_\infty , \|u(t)\|_\infty \} &\leq O \left( e^{\text{poly}(\bar{n})\bar{d}} \cdot \left( e^{- t/H}\|x(0)\|_\infty + W \right)\right) \quad \text{for all } t \geq 0, \end{align*} where $\bar{d}$ and $\bar{n}$ are local constants depending on the network connectivity and can be much smaller than the global dimensions. This result provides a first quantification of the effect of communication delay in fully-distributed learning-based control problems. Further, along the way, we prove a set of technical lemmas that are of independent interest when SLS controllers are used for learning-based control. In particular, we derive a sensitivity result for SLS synthesis (\Cref{thrm:sensitivity}) that globally bounds the sensitivity of the optimal solution to the SLS synthesis problem with respect to the model, which is also applicable to a class of MPC problems. \paragraph{Related work.} This work contributes to a large and growing body of work on the topics related to learning-based control design, online control, and distributed control. We briefly review the literature most related to this work below. \emph{Stabilization of unknown systems.} The problem of stabilization for unknown linear systems has received considerable attention. Most works have developed methods either under the no-noise \citep{lamperski2020computing, talebi2021regularizability} or stochastic-noise models \citep{faradonbeh2018finite}. Many approaches are based on policy gradient and search over a stabilizing feedback gain matrix \citep{perdomo2021stabilizing,zhao2021learning}. In the adversarial noise setting, the only work that guarantees stabilization for an unknown LTI system is \cite{chen2021black}, where a model-based algorithm exponentially excites all directions of the state space in order to identify the underlying system with a small margin.
Recently, \cite{ho2021online} proposed an online nonlinear robust control method that guarantees finite mistakes under adversarial disturbances. Inspired by \cite{ho2021online}, we propose a novel stabilization technique under adversarial disturbances where no system identification is required. \emph{Learning distributed controllers.} Significant progress has been made on the design of learning-based centralized LQR controllers when the underlying dynamics are unknown \citep{dean2020sample,fazel2018global, simchowitz2020naive}. As a result, work has begun to explore the problem in the distributed setting. Much of this work has adopted a centralized learning or computational approach with the objective of regret minimization, e.g., \cite{fattahi2020efficient, bu2019lqr, ye2021sample, faradonbeh2022joint, furieri2020learning}. Among these, \cite{fattahi2020efficient} demonstrates a first use case of SLS theory in learning-based distributed controller design. However, most prior work that considers distributed learning and control schemes uses the stochastic-noise or no-noise model, assumes a known stabilizing distributed controller is given, and cannot handle general communication delay during learning, e.g., \cite{li2021distributed, alonso2021data, jing2021learning, alemzadeh2019distributed, talebi2021distributed, alemzadeh2021d3pi}. \cite{adaptivesls} presents an adaptive SLS controller but requires a known stabilizing controller and does not have guaranteed stability for large uncertainties. Here, we propose the first fully distributed learning-based control algorithm that handles general communication delay and adversarial disturbances. \paragraph{Notation.} Let $\|\cdot\|$ be the $\ell_2$ norm and $\|\cdot\|_F$ the Frobenius norm. We denote the $(i,j)^{\text{th}}$ position of a matrix $M$ as $M(i,j)$ and use $M(:,j), M(i,:)$ for the $j^{\text{th}}$ column and $i^{\text{th}}$ row of $M$ respectively.
We use $[N]$ for the set of positive integers up to $N$, and denote the set of positive integers by $\mathbb{N}$. Boldface lower-case letters are reserved for vector signals, i.e., infinite sequences of vectors of the form $\mathbf{x} := [x(0)^T,x(1)^T, \dots]^T$ with $x(t) \in \mathbb{R}^n$. A causal linear operator $\mathbf{K}$ can be represented as an infinite-dimensional lower-triangular Toeplitz matrix with components $K[0], K[1], \dots$, where each $K[k] \in \mathbb{R}^{n \times n}$ for $k \in \mathbb{N} \cup \{0\}$. In particular, we write column operator $\mathbf{K}(:,i)$ to mean the operator that maps a sequence of scalars $\{\alpha_i\}$ to a sequence of vectors with components $ \alpha_0 \cdot K[0](:,i), \alpha_1 \cdot K[1](:,i),\dots$, where $ K[k](:,i)$ is the $i^{\text{th}}$ column of $K[k]$. We write $\mathbf{u} = \mathbf{K}\mathbf{x}$ to mean that $u(t) = \sum_{k=0}^t K[k] x(t-k)$. Given any binary matrix $\mathcal{C} \in \{1,0\}^{N\times N}$, we say $M \in \mathcal{C}$ for a matrix $M\in \mathbb{R}^{N \times N}$ if the sparsity pattern of $M$ is $\mathcal{C}$. We use $\{e_j\}_{j=1}^n$ for the standard basis in $\mathbb{R}^n$. \section{Preliminaries and Problem Setup} \label{sec:dynamics} We consider the task of controlling an unknown networked system made up of $N$ interconnected, heterogeneous linear time-invariant (LTI) subsystems, illustrated in Figure \ref{fig:example}. For each subsystem $i \in [N]$, let $x^i(t) \in \mathbb{R}^{n_i}$, $u^i(t) \in \mathbb{R}^{m_i}$, $w^i(t) \in \mathbb{R}^{n_i}$ be the local state, control, and disturbance vectors respectively. Each subsystem $i$ has dynamics \begin{equation} \label{eq:local_sys} x^i(t+1) = \sum_{j\in \mathcal{N}(i)} \left( A^{ij}x^j(t) + B^{ij}u^j(t) \right) + w^i(t), \end{equation} where we write $j \in \mathcal{N}(i)$ if the states or control actions of subsystem $j$ affect those of subsystem $i$ through the open-loop network dynamics ($i \in \mathcal{N}(i)$).
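For intuition, the subsystem-level transition \eqref{eq:local_sys} can be simulated directly from the local blocks $A^{ij}$, $B^{ij}$. The following sketch is illustrative only: the helper \texttt{step} and the block-dictionary layout are hypothetical conveniences, not part of the paper's algorithm.

```python
import numpy as np

def step(x, u, A_blocks, B_blocks, neighbors, w):
    """One step of the subsystem dynamics (cf. eq. local_sys): for each i,
    x^i(t+1) = sum_{j in N(i)} (A^{ij} x^j(t) + B^{ij} u^j(t)) + w^i(t)."""
    x_next = []
    for i in range(len(x)):
        xi = np.array(w[i], dtype=float)          # start from the disturbance term
        for j in neighbors[i]:                    # sum over dynamic neighbors N(i)
            xi = xi + A_blocks[(i, j)] @ x[j] + B_blocks[(i, j)] @ u[j]
        x_next.append(xi)
    return x_next
```

Concatenating the per-subsystem outputs recovers the corresponding global state update.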
Concatenating all the subsystem dynamics, we can represent the global dynamics as \begin{equation} \label{eq:global_sys} x(t+1) = A x(t) + B u(t) + w(t), \end{equation} where $x(t) \in\mathbb{R}^{n_x}$, $u(t) \in \mathbb{R}^{n_u}$, $w(t) \in \mathbb{R}^{n_x}$, with $n_x = \sum_{i=1}^N n_i$ and $n_u = \sum_{i=1}^N m_i$. \begin{figure} \centering \subfigure[System with communication graph $\mathcal{G}^C$. ]{\label{fig:example}\includegraphics[scale = 0.4]{figures/example.pdf}}\hfill \subfigure[$A$, $B$ matrix with parameter $\Theta$.]{\label{fig:AB}\includegraphics[scale = 0.29]{figures/AB.pdf}} \hfill \subfigure[Communication matrix $\mathcal{C}$.]{\label{fig:C}\includegraphics[scale = 0.29]{figures/C.pdf}} \caption{Example networked LTI system with information constraints.} \end{figure} We assume that the dynamical connectivity among subsystems is known, \textit{i.e.}, the sets $\mathcal{N}(i)$ for $i \in [N]$ are known. However, the parameters of the dynamics (entries of the matrices $A^{ij}$, $B^{ij}$) are unknown. We denote the unknown parameters for $A$ and $B$ collectively as $\Theta:= \bigcup_{i \in [N]} \theta^i$, where $\theta^i$ are the local parameters accounting for the $A^{ij}$ and $B^{ij}$ components of the global model $A(\Theta)$, $B(\Theta)$. This is illustrated in Example \ref{ex:model}. Due to \eqref{eq:local_sys}, each subsystem $i$ has a disjoint set of local parameters $\theta^i$ that make up the global parameter $\Theta$, so $\theta^i \cap \theta^j = \emptyset$ for all $i \not =j$. We write $A(\Theta)$ and $B(\Theta)$ (equivalently $A^{ij}(\theta^i)$, $B^{ij}(\theta^i)$) to emphasize that $A$ and $B$ are matrices constructed with appropriate zeros according to the (known) network structure, and with nonzero entries determined by the (unknown) parameter $\Theta$. \begin{example} \label{ex:model} Consider the networked system in Figure \ref{fig:example} where each subsystem $i\in [6]$ has $x^i(t) \in \mathbb{R}$ and $u^i(t) \in \mathbb{R}$.
For each $i$, the set $\mathcal{N}(i)$ contains the subsystems that have a dashed arrow pointing towards $x^i$ in the figure. For example, $\mathcal{N}(6) = \{1,\, 3,\, 5\}$. Each $A^{ij}$ and $B^{ij}$ for $j \in \mathcal{N}(i)$ is a scalar. The stacked global dynamics has matrices $A$ and $B$ with the structure shown in Figure \ref{fig:AB}. The unknown global parameter $\Theta$ is a vector containing parameters corresponding to the $*$ entries in $A$ and $B$. The local parameter $\theta^i$ corresponds to the $*$ entries of the $i^{\text{th}}$ row of $A$ and $B$. Since each $\theta^i$ accounts for one row of $A$ and $B$, they are non-overlapping. \end{example} We now state the assumptions for the system. \begin{assumption}[Adversarial disturbance] \label{assump:noise} For the global dynamics \eqref{eq:global_sys}, $\left\|w(t)\right\|_{\infty} \leq W $. \end{assumption} \begin{assumption}[Compact Parameter Set] \label{assump:compact} The network structure $\mathcal{N}(i)$ for $i \in [N]$ is known. The true system parameter $\Theta^* := \bigcup_{i \in [N]} \theta^{i,*}$ is an element of a (potentially large) known compact convex set $\mathcal{P}_0 = \mathcal{P}^1_0 \times \dots \times \mathcal{P}^{N}_0 $, which is a product space of local parameter sets where $\theta^{i,*} \in \mathcal{P}_0^i$. The known parameter set is bounded such that $\norm{A\left(\Theta\right)}_F, \norm{B\left(\Theta\right)}_F \leq \kappa$ for all $\Theta \in \mathcal{P}_0$. \end{assumption} \begin{assumption}[Controllability] \label{assump:controllable} For all $\Theta \in \mathcal{P}_0$, $(A(\Theta),B(\Theta))$ is controllable. \end{assumption} Bounded adversarial disturbance is a common assumption in online learning problems \citep{agarwal2019online, hazan2020nonstochastic}. Since we make no assumptions on how large $W$ is, Assumption \ref{assump:noise} can model a variety of disturbance models, such as bounded stochastic noise or linearization errors for nonlinear dynamics \citep{tu2019sample}.
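To complement Example \ref{ex:model}, the map from local parameters $\theta^i$ and the known structure $\mathcal{N}(i)$ to the global matrix $A(\Theta)$ can be sketched as follows. The helper \texttt{assemble\_A} is hypothetical and assumes scalar states as in the example, with the entries of $\theta^i$ ordered by increasing neighbor index; $B(\Theta)$ is assembled identically.

```python
import numpy as np

def assemble_A(theta, neighbors):
    """Place the entries of each local parameter theta[i] into row i of A,
    at the columns j in N(i); all other entries are structural zeros.
    Assumes theta[i] is ordered to match sorted(neighbors[i])."""
    N = len(theta)
    A = np.zeros((N, N))
    for i in range(N):
        for val, j in zip(theta[i], sorted(neighbors[i])):
            A[i, j] = val
    return A
```

Since $\theta^i$ fills only the $i^{\text{th}}$ row, the local parameter sets are disjoint by construction, matching the discussion above.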
Assumption \ref{assump:compact} is standard in the learning-based control literature \citep{cohen2019learning, agarwal2019online}. Assumption \ref{assump:controllable} ensures the well-posedness of the learning-based control problem and is commonly employed \citep{ibrahimi2012efficient, abbasi2011regret}. If a parameter set $\mathcal{P}_0$ has a few singular points where $(A,B)$ loses controllability, such as when $B=0$, a simple heuristic is to ignore these points in the algorithm, since we assume the underlying system is controllable. We discuss the general case of nonconvex parameter sets in Appendix \ref{sec:convex}. For a variety of networked systems such as robotic manipulation \citep{everett2021certifiable} and power systems \citep{pal2006robust}, Assumptions \ref{assump:noise}, \ref{assump:compact}, and \ref{assump:controllable} are reasonable. \subsection{Communication Constraints} \label{sec:comm} A key feature of large-scale networked LTI systems is that information observed locally at each subsystem cannot be immediately available to the global network. Instead, information sharing among subsystems is constrained by communication limitations among subsystems. Such limitations lead to delayed partial observation and pose major challenges for learning and control. To formalize communication constraints, we define a communication graph $\mathcal{G}^C=(V^C, E^C)$ for \eqref{eq:global_sys}, where $V^C = [N]$ and $E^C$ is the set of directed communication links from one subsystem to another. Self-loops at all vertices are included in $\mathcal{G}^C$ and they represent zero delay. The communication graph is depicted by the solid blue lines in Figure \ref{fig:example}. We now present two representations of the communication constraints induced by $\mathcal{G}^C$.
\begin{definition}[Communication Matrix] \label{def:communication} A binary matrix $\mathcal{C} \in \{1,0\}^{N\times N}$ is called the communication matrix of a network with $N$ subsystems, where $\mathcal{C}(i,j) \not = 0$ if and only if $(j,i) \in E^C$. \end{definition} \begin{definition}[Communication Delay] \label{def:delay} The communication delay from subsystem $i$ to subsystem $j$ is defined to be the graph distance from $i$ to $j$ according to $\mathcal{G}^C$ and is denoted as $d(i\rightarrow j)$. \end{definition} The two definitions provide a global and a local view of the communication constraints. Globally, the matrix $\mathcal{C}^k$ for $k\in\mathbb{N}\cup \{0\}$ has a nonzero $(i,j)^{\text{th}}$ entry if subsystem $i$ gets $k$-delayed information from subsystem $j$. Locally, at each time step $t$, subsystem $i$ has access to subsystem $j$'s full information up to time $t-d(j \rightarrow i)$. Moreover, $d(j\rightarrow i)$ is the smallest integer such that $\mathcal{C}^{d(j\rightarrow i)} (i,j) \not =0$. With slight abuse of notation, we write $\mathcal{C}^k$ to mean the support of the matrix, so that $\mathcal{C}^k\in \{1,0\}^{N\times N}$. Given $\mathcal{G}^C$, we make a mild assumption on the communication constraints. This assumption ensures that the graph describing the global dynamics is a subgraph of the communication graph. Such an assumption is commonly referred to as information nestedness \citep{ho1972team} and is used frequently in the distributed control literature \citep{lamperski2015optimal,ye2021sample}. It holds for many modern engineering systems where communication operates at least as fast as the dynamical propagation. We discuss the communication constraints further in Appendix \ref{sec:info}. \begin{assumption}[Communication Topology] \label{assump:comm} $\mathcal{C}(i,j) = 1$ for all $j \in \mathcal{N}(i)$. \end{assumption} Finally, due to communication delay, each subsystem has access to asynchronous information about other subsystems.
Therefore, we define $\mathcal{I}(i,t)$ to be the information available to subsystem $i$ at time $t$. The set $\mathcal{I}(i,t)$ contains the sets $\mathcal{I}(j,t-d(j\rightarrow i))$ from every other subsystem $j$ with delay $d(j\rightarrow i)$. We end the section by returning to our example. \begin{example} Consider the system in Figure \ref{fig:example} where the solid blue lines denote the communication among subsystems. The communication matrix $\mathcal{C}$ is depicted in Figure \ref{fig:C}. Observe that $\mathcal{C}(1,3) =0 $ but $\mathcal{C}^2(1,3) \not = 0$. Therefore, the delay from subsystem 3 to subsystem 1 is $d(3 \rightarrow 1) = 2$. \end{example} \subsection{Locality for Scalable Implementation} \label{sec:scalability} Even though communication delay causes asynchronous partial information for each subsystem, eventually each subsystem can obtain the delayed global information. However, due to the scale of the global system, it can be prohibitively costly for subsystems to compute their local control actions using such delayed global information. Moreover, a larger delay between subsystems means, intuitively, that they are more dynamically decoupled due to Assumption \ref{assump:comm}. Therefore, by discarding information from far-away subsystems, each subsystem has a smaller and more up-to-date information set. Following the above intuition, the proposed algorithm requires each subsystem $i$ to only use delayed information from neighbors that are ``at most $\bar{d}$ away'' for a fixed $\bar{d} \in \mathbb{N}$. Specifically, we define three sets that capture the notion of $\bar{d}$-neighbors. Formally, we define the sets $\din{i}:=\{j\in[N]: d(j\rightarrow i )\leq \bar{d}\}$ and $\dout{i}:=\{j\in[N]: d(i\rightarrow j )\leq \bar{d}\}$ respectively as the $\bar{d}$-incoming and $\bar{d}$-outgoing neighbors of subsystem $i$ according to $\mathcal{G}^C$.
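Definition \ref{def:delay} suggests a direct computation of the delays: $d(j\rightarrow i)$ is the smallest $k$ with $\mathcal{C}^k(i,j)\neq 0$, which can be obtained by iterating the support of powers of $\mathcal{C}$. The sketch below (hypothetical helper \texttt{delays}, not part of Algorithm \ref{alg:main}) makes this concrete for a three-subsystem chain analogous to the example above.

```python
import numpy as np

def delays(C):
    """d(j -> i): the smallest k such that (C^k)(i, j) != 0, with C^0 = I.
    Entries stay inf when j never reaches i."""
    N = C.shape[0]
    d = np.full((N, N), np.inf)
    reach = np.eye(N)                              # support of C^0
    for k in range(N):                             # paths longer than N-1 add nothing
        d[(reach > 0) & np.isinf(d)] = k
        reach = reach @ (C != 0).astype(float)     # positive entries = support of C^{k+1}
    return d
```

Because self-loops give $\mathcal{C}$ ones on its diagonal, the reachable set only grows with $k$, so the first $k$ at which an entry becomes nonzero is exactly the graph distance.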
Further, we define a set $\M{i} = \left\{ \ell \in [N]: j \in \mathcal{N}(\ell) \text{ for some }j \in \dout{i} \right\}$. The intuition behind $\M{i}$ is that subsystem $j \in \dout{i}$ makes decisions based on information from $i$, and these decisions at $j$ influence $\ell$ through the dynamics because $j \in \mathcal{N}(\ell)$. Therefore, any local decisions at subsystem $i$ should take into account the information from the subsystems $\ell$ that will be indirectly influenced by $i$ in the future. Together, we call the subsystems in $\din{i}, \dout{i}, \M{i}$ the $\bar{d}$-neighbors of $i$. The choice of $\bar{d}$ is system-dependent. Given a communication graph, $\bar{d}$ can be treated as a tunable parameter for performance. This form of local control is common, and has been studied both in multi-agent reinforcement learning \citep{qu2020scalable,lin2020distributed,qu2020bscalable} and distributed control \citep{alonso2020distributed,wang2018separable} as a method for ensuring a scalable implementation of the control policy in large-scale networked systems. Our analysis relies on the following standard feasibility assumption regarding the communication and locality considerations. \begin{assumption}[Feasibility] \label{assump:feasibility} The communication graph $\mathcal{G}^C$ and $\bar{d}\in \mathbb{N}$ are chosen such that for all $\Theta \in \mathcal{P}_0$, there exists a conforming stabilizing distributed controller for $A(\Theta), B(\Theta)$. \end{assumption} A priori verification of this assumption can be performed because the dynamics connectivity and the initial parameter set are known per Assumption \ref{assump:compact}. Assumption \ref{assump:feasibility} can be seen as a strengthening of Assumption \ref{assump:controllable}: in addition to controllability, control design for all $A(\Theta),B(\Theta)$ with $\Theta \in \mathcal{P}_0$ is assumed to be feasible for the prescribed communication and locality constraints.
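For concreteness, the $\bar{d}$-neighbor sets $\din{i}$, $\dout{i}$, and $\M{i}$ can be read off a delay matrix $D$ with $D[i,j] = d(j\rightarrow i)$, assumed precomputed from $\mathcal{C}$. The helpers below are hypothetical illustrations of the definitions, not part of Algorithm \ref{alg:main}.

```python
import numpy as np

def d_neighbors(D, i, d_bar):
    """d_bar-incoming and d_bar-outgoing neighbors of subsystem i,
    given the delay matrix D with D[i, j] = d(j -> i)."""
    N = D.shape[0]
    d_in = {j for j in range(N) if D[i, j] <= d_bar}
    d_out = {j for j in range(N) if D[j, i] <= d_bar}
    return d_in, d_out

def M_set(d_out, neighbors):
    """M(i) = { l : N(l) intersects the d_bar-outgoing neighbors of i }."""
    return {l for l in range(len(neighbors)) if neighbors[l] & d_out}
```

On a chain where information flows from subsystem 2 to 1 to 0, subsystem 2 with $\bar{d}=1$ sees only itself as incoming, while its decisions reach subsystems 1 and 2, and $\M{2}$ collects every subsystem whose dynamics those decisions touch.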
Similar to Assumption \ref{assump:controllable}, if singularities exist in the parameter set $\mathcal{P}_0$ where Assumption \ref{assump:feasibility} does not hold, we exclude these points in the algorithm because the true underlying system is assumed to be feasible. The rest of the paper demonstrates and analyzes a fully-distributed algorithm that learns to stabilize \eqref{eq:global_sys} with unknown true parameter $\Theta^*$ subject to the communication and locality constraints under adversarial disturbances in the online setting (without system resets or offline data). We refer the reader to Appendix \ref{sec:stabilization} for a formal definition of stability. \section{Algorithm} We propose a novel online algorithm, presented in Algorithm \ref{alg:main}, that for the first time guarantees the stability of unknown interconnected LTI systems with information constraints under bounded adversarial disturbances. The algorithm combines ideas from an online learning method for nested convex body chasing \citep{bubeck2020chasing} and the SLS control literature \citep{anderson2019system}. Our approach is distinguished from prior learning-based distributed control policies in that it assumes no knowledge about the underlying true system (such as a known stabilizing controller) and does not perform system identification as part of the algorithm. The algorithm is scalable with respect to the number of subsystems in the network and handles a broad class of communication constraints. Algorithm \ref{alg:main} works as follows. After observing the latest dynamics transition (line \ref{algoline:observe}), each subsystem uses its locally available information set (line \ref{algoline:info}) to select a \textbf{local consistent parameter} (line \ref{algoline:consist}). It then computes its local control action in two steps (line \ref{algoline:sls}). First, the subsystem synthesizes a \textbf{local sub-controller} based on the consistent parameter selected in the previous step.
Next, a \textbf{local control action} is computed based on a global controller that is composed of the local sub-controllers synthesized in the previous step, together with the (delayed) sub-controllers from other subsystems. \LinesNumbered \begin{algorithm} \DontPrintSemicolon \KwIn{Parameter set $\mathcal{P}_0$} \KwInit{$t=0$, $u(0)=0$, $\mathcal{I}(i,0)=\emptyset$ for $i \in [N]$} \For{$t = 1,2,\dots$ }{ \For{ Subsystem $i = 1,2,\dots,N$}{ Observe $x^i(t)$ \label{algoline:observe} \\ Read the available information set $\mathcal{I}(i,t)=\mathcal{I}(i,t-1)\bigcup $ $ \left\{x^j\left(t-d(j\rightarrow i)\right),\,\,\theta^j_{t-d(j\rightarrow i)},\,\,\bm{\phi}^j_{t-d(j\rightarrow i)},\,\, u^j(t-d(j\rightarrow i)),\,\, \hat{w}^j(t-d(j\rightarrow i)) \right\}_{j \in \bar{d}\text{-neighbor}(i)} $ \label{algoline:info} \\ $\theta^i_t$ $\leftarrow$ CONSIST$\left(\mathcal{I}(i,t)\right)$ with Algorithm \ref{alg:consist} \tcp*{Select local consistent parameter } \label{algoline:consist} $u^i(t) \leftarrow \text{CONTROL}\left(\mathcal{I}(i,t),\, \theta^i_t\right)$ with Algorithm \ref{alg:sls} \tcp*{Compute local control action } \label{algoline:sls} } } \caption{Distributed parameter selection and model-based control} \label[algorithm]{alg:main} \end{algorithm} Though inspired by the approach in \cite{ho2021online}, our algorithm performs both the parameter selection and the model-based control design \textit{distributedly} for each local subsystem with \textit{delayed} information from other subsystems, whereas \cite{ho2021online} is a single-agent algorithm. In what follows, we describe the three main components of the proposed algorithm in more detail. For ease of exposition, we let subsystems have scalar states and fully actuated control actions ($n_x=n_u=N$) in order to minimize notation. The general vector case can be found in \Cref{sec:vector}.
\subsection{Local Consistent Parameter Selection} \label{sec:local_select} The first component of Algorithm \ref{alg:main} is for each subsystem $i$ to select a local parameter $\theta^i_t$ that is consistent with the locally available observations at time $t$. We name this subroutine CONSIST, shown in Algorithm \ref{alg:consist}. \begin{algorithm}[t] \DontPrintSemicolon \KwIn{Information set $\mathcal{I}(i,t)$} \KwOut{Local consistent parameter $\theta^i_t$} Compute local consistent parameter set $\mathcal{P}^i_t$ with \eqref{eq:local-chasing}\; Select local consistent parameter $\theta^i_t \leftarrow n_i \cdot \mathbb{E}_{v: \|v\|=1} \left[ v \cdot \max_{q\in \mathcal{P}^i_t }(v \cdot q) \right]$ \; \caption{$\text{CONSIST}(\cdot)$ for Subsystem $i$} \label[algorithm]{alg:consist} \end{algorithm} CONSIST first constructs the set of all $\theta^i$ such that $A^{ij}(\theta^i), B^{ij}(\theta^i)$ satisfy \eqref{eq:local_sys} with some admissible disturbances defined in Assumption \ref{assump:noise}. Specifically, each locally observed transition, namely, the transition from $\{x^j(t-1),u^j(t-1)\}_{j \in \mathcal{N}(i)}$ to $x^i(t)$, defines a linear constraint on $\theta^i$, and we construct the \textit{local consistent parameter set}, $\mathcal{P}^i_t$, as \begin{align} \label{eq:local-chasing} \mathcal{P}^i_t:= \left\{\theta^i \in \mathcal{P}^i_{t-1} \,|\, \left\|x^i(t) - \left(\sum_{j\in \mathcal{N}(i)} A^{ij}(\theta^i) x^j(t-1) + B^{ij}(\theta^i)u^j(t-1) \right) \right\|_\infty \leq W \right\}, \end{align} with $\mathcal{P}_0^i$ as the local initial parameter set defined in Assumption \ref{assump:compact}. Note that the local consistent parameter set $\mathcal{P}^i_t$ is always non-empty, convex, and recursively nested within $\mathcal{P}^i_{t-1}$. The nesting follows from the fact that the trajectory observed at time $t$ overlaps with the trajectory observed at time $t-1$ up to $x^i(t-1)$.
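Each constraint in \eqref{eq:local-chasing} is a simple interval test on the one-step prediction residual. In the scalar-state case used throughout this section, a membership check can be sketched as follows; the helper \texttt{is\_consistent} is a hypothetical illustration of one such constraint, not part of Algorithm \ref{alg:consist}.

```python
import numpy as np

def is_consistent(theta_i, x_next, x_prev, u_prev, W):
    """One linear constraint from the local consistent set, scalar-state case:
    theta_i = (a, b) stacks the rows of A^{ij}, B^{ij} over j in N(i);
    consistency means |x^i(t) - (a . x_prev + b . u_prev)| <= W."""
    a, b = theta_i
    residual = x_next - (np.dot(a, x_prev) + np.dot(b, u_prev))
    return abs(residual) <= W
```

The full set $\mathcal{P}^i_t$ is then the intersection of $\mathcal{P}^i_{t-1}$ with such a slab, one per observed transition, which is why it stays convex and nested.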
With $\mathcal{P}^i_t$, we want to select a local parameter $\theta^i_t \in \mathcal{P}^i_t$ in order to perform model-based control in later parts of Algorithm \ref{alg:main}. In particular, the longer a particular parameter $\theta^i_t$ stays consistent with future observations, the more likely the controller constructed based on the current parameter will perform well. This intuition motivates us to select a $\theta^i_t$ that could remain an element of the (yet unknown) future consistent parameter sets. This problem is an instance of the Nested Convex Body Chasing (NCBC) problem, where a selector is presented a sequence of nested convex bodies $\{\mathcal{K}_t\}_{t=1}^{T} \subset \mathbb{R}^n$ at each $t$ and asked to select a point $q_t \in \mathcal{K}_t$ to minimize the total movement $\sum_{t=0}^T \|q_{t+1} - q_{t}\|$. The movement criterion formalizes a measure of \textit{model consistency} in our case: the less total movement a selector incurs, the longer the selected points stay consistent overall. A known selector for the NCBC problem is the Steiner point selector $\text{St}(\mathcal{K}):= n \cdot \mathbb{E}_{v: \|v\|=1} \left[ v \cdot \max_{q\in\mathcal{K}}(v \cdot q) \right]$ \citep{bubeck2020chasing}, whose competitiveness is crucial for the stability proof of Algorithm \ref{alg:main}. Therefore, each subsystem $i$ selects the Steiner point of $\mathcal{P}^i_t$ as its local consistent parameter. We remark that there are efficient algorithms for approximating the Steiner point, e.g., \cite{argue2021chasing}. Treating the dynamics \eqref{eq:local_sys} that generate the observed state trajectory as a black box, CONSIST does not differentiate the \textit{true} parameter $\theta^{i,*}$ and true disturbances $\{w^*(k)\}_{k=0}^{t-1}$ from a \textit{consistent} parameter $\theta^i_t$ and the corresponding admissible disturbances.
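Since the Steiner point is defined by an expectation over the unit sphere, it can be approximated by sampling support-function evaluations. The sketch below illustrates the definition only, not an efficient implementation such as \cite{argue2021chasing}; the helper \texttt{steiner\_point} is hypothetical, and a box is used as $\mathcal{K}$ because its support function is explicit and, by symmetry, its Steiner point is its center.

```python
import numpy as np

def steiner_point(support, n, samples=20000, seed=0):
    """Monte Carlo estimate of St(K) = n * E_{||v||=1}[ v * h_K(v) ],
    where support(v) = max_{q in K} <v, q> is the support function of K."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(samples, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)     # uniform on the unit sphere
    h = np.array([support(vi) for vi in v])
    return n * (v * h[:, None]).mean(axis=0)

# For an axis-aligned box [lo, hi] the support function is separable.
lo, hi = np.array([0.0, -1.0]), np.array([2.0, 1.0])
box_support = lambda v: np.maximum(v, 0) @ hi + np.minimum(v, 0) @ lo
```

For the box above, the estimate concentrates near the center $(1, 0)$, consistent with the symmetry argument.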
The simple idea of \textit{consistency} in CONSIST turns out to be sufficient for the task of online stabilization under adversarial disturbances. We emphasize this point with a numerical example in Appendix \ref{sec:sysid} that demonstrates the power of consistency compared with system identification. \subsection{Local Control Action} \label{sec:subcontroller} After each subsystem $i$ selects a local consistent parameter $\theta^i_t$, it proceeds to compute a local control action based on $\theta^i_t$ and $\mathcal{I}(i,t)$. We name this subroutine CONTROL (Algorithm \ref{alg:sls}). At every time step, subsystem $i$ first assembles a local estimate of the ``global'' model using delayed information from other subsystems (line \ref{algoline:assemble}) and uses it to generate a local SLS sub-controller (line \ref{algoline:local-synth}). Then, subsystem $i$ assembles a local SLS controller from the local sub-controller $\bm{\phi}^i_t$ and the delayed sub-controllers from other subsystems (line \ref{algoline:assemble2}). Finally, the local control action is computed using the locally assembled SLS controller \eqref{eq:local-controller} (line \ref{algoline:control}). Below we present an overview of the two components of CONTROL and defer a detailed discussion of the algorithm to Appendix \ref{sec:sls-control}.
\begin{algorithm} \DontPrintSemicolon \KwIn{Information set $\mathcal{I}(i,t)$, Local consistent parameter $\theta^i_t$} \KwOut{Local control action $u^i(t)$} Assemble local estimate of the global model $A\left(\hat{\Theta}^i_t\right),B\left(\hat{\Theta}^i_t\right)$ with \eqref{eq:delayed-local-model} \label{algoline:assemble}\\ $\bm{\phi}^i_t$ $\leftarrow$ \eqref{eq:synth} \label{algoline:local-synth}\; Assemble delayed local sub-controllers $\bigcup_{j\in \din{i}} \bm{\phi}^j_{t-d(j\rightarrow i)}$ from subsystems in $\din{i}$ \label{algoline:assemble2}\; Compute local control action $u^i(t)$ with \eqref{eq:local-controller} \label{algoline:control} \caption{$\text{CONTROL}(\cdot)$ for Subsystem $i$} \label[algorithm]{alg:sls} \end{algorithm} \paragraph{Local Sub-controller Synthesis} The first step in the CONTROL subroutine is to synthesize a local sub-controller. Specifically, subsystem $i$ starts by accessing $\theta^j_{t-d(j\rightarrow i)}$ from the $\bar{d}$-neighbors of $i$, and assembles a local estimate of the ``global'' parameter $\hat{\Theta}^i_t$, \begin{equation} \label{eq:delayed-local-model} \hat{\Theta}^i_t := \mathop{\cup}_{j\in \M{i}} {\theta}^j_{t-d(j\rightarrow i)}, \end{equation} where we recall that the set $\M{i}$ is defined as $\left\{ \ell \in [N]: j \in \mathcal{N}(\ell) \text{ for some }j \in \dout{i} \right\}$. In particular, $\M{i}$ represents the smallest set of ``neighboring'' subsystems of $i$ whose model information is needed for sub-controller synthesis. We provide more intuition for $\M{i}$ in Appendix \ref{sec:sls-control}. Next, subsystem $i$ synthesizes a \textit{column} of the standard SLS controller \citep{anderson2019system} based on its locally estimated ``global'' parameter $\hat{\Theta}^i_t$. We call this column the \textit{sub-controller}. A standard SLS controller is made up of two causal linear operators $\bPX{}$ and $\bPU{}$ with components $\PX{}[k],\PU{}[k] \in \mathbb{R}^{N \times N}$ for $k\in\mathbb{N}\cup \{0\}$.
The operators $\bPX{}$ and $\bPU{}$ represent the mappings from the disturbance $\mathbf{w}$ to the state $\mathbf{x}$ and the control action $\mathbf{u}$, respectively, under some linear controller $\mathbf{K}$ such that $\mathbf{u} = \mathbf{K}\mathbf{x}$ for \eqref{eq:global_sys}. We provide background and details of SLS in Appendix \ref{sec:sls}. It is known that the communication and locality constraints are convex sparsity constraints on $\bPX{}$ and $\bPU{}$ \citep{wang2018separable}. Therefore, we follow the standard SLS synthesis procedure and synthesize \textit{the $i^{\text{th}}$ column of} $\bPX{}$ and $\bPU{}$, respectively denoted as $\bm{\phi}^{i,x}_t := \bPX{}(:,i)$, $\bm{\phi}^{i,u}_t :=\bPU{}(:,i)$ and collectively as $\bm{\phi}^i_t$, for subsystem $i$ as follows: \begin{subequations} \label{eq:synth} \begin{alignat}{3} &\min_{\bm{\phi}^i_t} && \quad \left\| \bm{\phi}^i_t \right\|_{\mathcal{H}_2}^2 \label{eq:sls-cost} \\ &\text{s.t.} && {\phi}^{i,x}_t[0] = e_i, \quad {\phi}^{i,x}_t[H]=0 \label{eq:characterization1}\\ & && {\phi}^{i,x}_t[k+1] = \A{\hat{\Theta}^i_t}{\phi}^{i,x}_t[k] + \B{\hat{\Theta}^i_t}{\phi}^{i,u}_t[k]\,, \quad \text{ for } k = 0,1,\dots,H-1 \label{eq:characterization2}\\ & && {\phi}^{i,x}_t[k],\,\, {\phi}^{i,u}_t[k] \,\in \,\mathcal{C}^k(:,i) \cap \mathcal{C}^{\bar{d}}(:,i)\,,\quad \text{ for } k = 0,1,\dots,H-1, \label{eq:comm-constraint} \end{alignat} \end{subequations} where \eqref{eq:characterization1} and \eqref{eq:characterization2} are the standard SLS closed-loop operator characterization, specialized to the $i^{\text{th}}$ column of $\bPX{}$ and $\bPU{}$. This characterization has been extensively used in the learning-based control literature \citep{dean2020sample, umenberger2020optimistic, bu2019lqr, xue2021data}. Constraint \eqref{eq:comm-constraint} encodes the communication and scalability constraints from Section \ref{sec:comm}, expressed equivalently in terms of the $i^{\text{th}}$ column of $\bPX{}, \bPU{}$.
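The column-wise characterization \eqref{eq:characterization1}--\eqref{eq:characterization2} is straightforward to verify numerically for a candidate column. The sketch below is illustrative only: the helper \texttt{check\_sls\_column} is hypothetical, and the feasible point shown is a deadbeat column for a scalar system, not the $\mathcal{H}_2$-optimal solution of \eqref{eq:synth}.

```python
import numpy as np

def check_sls_column(A, B, phi_x, phi_u, i, H, tol=1e-9):
    """Verify the column-wise SLS characterization:
    phi_x[0] = e_i, phi_x[H] = 0, and
    phi_x[k+1] = A phi_x[k] + B phi_u[k] for k = 0, ..., H-1."""
    e_i = np.eye(A.shape[0])[:, i]
    ok = np.allclose(phi_x[0], e_i, atol=tol) and np.allclose(phi_x[H], 0.0, atol=tol)
    for k in range(H):
        ok = ok and np.allclose(phi_x[k + 1], A @ phi_x[k] + B @ phi_u[k], atol=tol)
    return bool(ok)
```

For instance, with scalar $A = 0.9$, $B = 1$, and $H = 2$, the column $\phi^x = (1, 0, 0)$, $\phi^u = (-0.9, 0)$ cancels the state in one step and satisfies both constraints.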
This problem is always feasible due to Assumptions \ref{assump:controllable} and \ref{assump:feasibility}. Objective \eqref{eq:sls-cost} is chosen to minimize the $\mathcal{H}_2$ norm of the column operator $\bm{\phi}^i_t$. \paragraph{Local Control Action Computation} \label{sec:local-compute} The second and final step in CONTROL is to compute a local control action. To do so, each subsystem $i$ assembles a local SLS controller from the sub-controller synthesized in the previous step and the delayed sub-controllers $\bm{\phi}^j_{t-d(j\rightarrow i)}$ from its $\bar{d}$-neighbors $j$. This information is plugged into the following SLS controller \citep{wang2019system} to compute a local control action $u^i(t)$ as \begin{subequations} \label{eq:local-controller} \begin{align} \hat{w}^i(t) &= x^i(t) - \sum_{j \in \din{i}}\sum_{k = 1}^{H-1} {\phi}^{j,x}_{t-d(j\rightarrow i)}[k](i)\cdot \hat{w}^j(t-k) \label{eq:sls-1}\\ u^i(t) &= \sum_{j \in \din{i}}\sum_{k = 0}^{H-1} {\phi}^{j,u}_{t-d(j\rightarrow i)}[k](i) \cdot \hat{w}^j(t-k), \label{eq:sls-2} \end{align} \end{subequations} where $x^i(t), u^i(t), \hat{w}^i(t) \in \mathbb{R}$ are the local state, control action, and estimated disturbance respectively. The local controllers are initialized with $\hat{w}^i(0) = x^i(0)$. The intuition behind \eqref{eq:local-controller} is that each subsystem $i$ \textit{counterfactually} assumes that the global closed loop of \eqref{eq:global_sys} behaves exactly as the columns $\bm{\phi}^j_{t-d(j\rightarrow i)}$ prescribe. Recalling \eqref{eq:clm-relation}, the $i^{\text{th}}$ positions of ${\phi}^{j,x}_{t-d(j\rightarrow i)}$ and ${\phi}^{j,u}_{t-d(j\rightarrow i)}$ map the $j^{\text{th}}$ position of the disturbance vector to the $i^{\text{th}}$ positions of $\mathbf{x}$ and $\mathbf{u}$, \textit{i.e.,} the state and control action of subsystem $i$.
Therefore, \eqref{eq:sls-1} estimates the local disturbances by comparing the observed local state $x^i(t)$ with the counterfactual state computed from $\bm{\phi}^j_{t-d(j\rightarrow i)}$. Then \eqref{eq:sls-2} acts upon the computed disturbances. \section{Main Results} \label{sec:main} We now present the main result of this paper. This is the first stabilization result for a distributed policy (\Cref{alg:main}) in a setting with unknown dynamics and adversarial disturbances. \begin{theorem}[Stability] \label[theorem]{thrm:main} Under Assumptions \ref{assump:noise}-\ref{assump:feasibility}, Algorithm \ref{alg:main} guarantees stability of the closed loop of \eqref{eq:global_sys} with $$ \sup_t \{\|x(t)\|_\infty , \|u(t)\|_\infty\} \leq O \left( e^{\text{poly}(\bar{n})\bar{d}} \cdot \left( e^{- t/H} \|x(0)\|_\infty + W \right)\right), $$ where $x(0)$ is the initial condition, $W$ is the bound on the adversarial disturbance, and the local dimension $\bar{n} = \max\{\|\mathcal{C}^{\bar{d}} \|_1,\, \|\mathcal{C}^{\bar{d}} \|_\infty,\, \max_j \abs{\M{j}} \} $ represents the total state dimension in any $\bar{d}$-neighborhood specified by the communication matrix $\mathcal{C}$. $\bar{d}$ is the largest local delay each subsystem considers for the algorithm, and $H$ is the SLS controller horizon. \end{theorem} \Cref{thrm:main} makes explicit that communication delay adds an exponential factor of error to the state deviation from the desired steady state. Additionally, note that both the state dimension and the delay in the result are \textit{local} constants, which can remain small even when the number of subsystems in the network is large, due to sparse connections in large-scale systems \citep{yazdanian2014distributed, wang2016localized}. Observe that the effect of the initial condition $x(0)$ decays exponentially, and the state norm is bounded by a constant relating to the bound on the disturbance.
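To build intuition for how the controller \eqref{eq:local-controller} yields such a bound, consider the simplest special case: a single scalar subsystem with zero delay, an exactly consistent model, and a deadbeat column with $H=2$. The sketch below (hypothetical helper \texttt{rollout}) shows the state entering a $W$-ball after one step; it is an illustration only, not the general algorithm.

```python
def rollout(a, phi_x, phi_u, H, w, x0):
    """Scalar, zero-delay version of the SLS controller equations:
    w_hat(t) = x(t) - sum_{k=1}^{H-1} phi_x[k] * w_hat(t-k),
    u(t)     = sum_{k=0}^{H-1} phi_u[k] * w_hat(t-k),
    applied to the scalar dynamics x(t+1) = a x(t) + u(t) + w(t)."""
    T = len(w)
    x, w_hat = [x0], [0.0] * T
    for t in range(T):
        w_hat[t] = x[t] - sum(phi_x[k] * w_hat[t - k]
                              for k in range(1, H) if t - k >= 0)
        u = sum(phi_u[k] * w_hat[t - k] for k in range(H) if t - k >= 0)
        x.append(a * x[t] + u + w[t])
    return x
```

With the unstable plant $a = 2$ and the deadbeat column $\phi^x = (1, 0)$, $\phi^u = (-2, 0)$, the closed loop reduces to $x(t+1) = w(t)$: the initial condition is canceled in one step and the state stays inside the disturbance bound $W$ thereafter.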
Finally, the decay factor $e^{-t/H}$ corroborates the fact that $H$ quantifies the controllability of the parameter set $\mathcal{P}_0$: the smaller $H$ can be while keeping the SLS synthesis \eqref{eq:synth} feasible, the easier the systems in the set are to learn and control. We elaborate on this point in \Cref{sec:controllability}. \textbf{Proof Outline.} The remainder of this section gives an overview of the key lemmas that underlie our proof of \Cref{thrm:main}. The intuition behind our proof follows from a characterization of the closed-loop dynamics of \textit{any} algorithm that applies SLS controllers (\Cref{lem:closed-loop}). We then derive a sufficient condition for stability of the closed-loop dynamics under adversarial disturbances (\Cref{lem:sufficient}). Given communication delay, we need to carefully keep track of the errors caused by asynchronous information at different subsystems throughout the algorithm. We show that the errors caused by delay can be absorbed partly into the competitive ratio of CONSIST (\Cref{prop:bounded-error}) and partly into the sensitivity of the SLS synthesis procedure through a novel perturbation analysis (\Cref{thrm:sensitivity}). We defer formal proofs to \Cref{sec:proof}. To begin, we show that even though each subsystem in Algorithm \ref{alg:main} uses differently delayed information to compute its local parameter, sub-controller, and control actions, the closed loop of the global system under such a distributed policy admits a simple global representation, as follows.
\begin{lemma}[Closed-loop Dynamics] \label[lemma]{lem:closed-loop} The closed loop of \eqref{eq:global_sys} under Algorithm \ref{alg:main} is characterized as follows for all times $t \in \mathbb{N}$: \begin{subequations} \begin{align} x(t) &= \sum_{k=0}^{H-1} \PX{t}[k] \hat{w}(t-k), \quad u(t) = \sum_{k=0}^{H-1} \PU{t}[k] \hat{w}(t-k) \label{eq:cl-a-1}\\ \hat{w}(t) &= \sum_{k=1}^H \left( A_t \PX{t-1}[k-1] + B_t \PU{t-1}[k-1] - \PX{t}[k] \right)\hat{w}(t-k) + \tilde{w}(t-1), \label{eq:cl-a-2} \end{align} \end{subequations} where $u(t),\,\hat{w}(t)$ are the concatenated control actions and estimated disturbances from \eqref{eq:local-controller}. The vector $\tilde{w}(t)$ satisfies $\infnorm{\tilde{w}(t)}\leq W$ for all $t$. The matrices $A_t, B_t$ are the global consistent parameters obtained by concatenating the local consistent parameters $A^{ij}(\theta^i_t), B^{ij}(\theta^i_t)$. $\bPX{t}, \bPU{t}$ are shorthand for the global closed-loop operators when \eqref{eq:local-controller} is implemented, with $$\PX{t}[k](i,j) := {\phi}^{j,x}_{t-d(j\rightarrow i)}[k](i), \quad \PU{t}[k](i,j) := {\phi}^{j,u}_{t-d(j\rightarrow i)}[k](i) \,.$$ \end{lemma} This result strengthens the original SLS theorem (Theorem 2.1, part 2 in \citep{anderson2019system}): we characterize the closed-loop behavior of SLS controllers constructed from \textit{any} closed-loop operators, not necessarily ones satisfying the characterization in \Cref{thrm:SLS}. Therefore, \Cref{lem:closed-loop} subsumes Theorem 2.1, part 2 in \citep{anderson2019system}. \Cref{lem:closed-loop} gives an equivalent condition for stability of any closed loop under SLS controllers, namely the boundedness of $\hat{w}(t)$. In particular, the following result can be used in conjunction with \Cref{lem:closed-loop} to bound $\hat{w}(t)$. \begin{lemma}[Sufficient condition for $H$-convolution] \label[lemma]{lem:sufficient} Let $W\in\mathbb{R}_+$ and $H \in \mathbb{N}$.
For $k \in [H]$, let $\left\{a_{t}[k]\right\}_{t=1}^\infty$ be a positive sequence, and let $\{s_t\}_{t=0}^\infty$ be a positive sequence such that $s_t \leq \sum_{k=1}^H a_{t-1}[k]\cdot s_{t-k} + W \,.$ Then $\{s_t\}$ is bounded if $\sum_{t=0}^\infty \sum_{k=1}^H a_{t}[k] \leq L$ for some $L \in \mathbb{R}_+$. In particular, for all $t$, $$s_t \leq e^{-t/H} \cdot e^L s_0 + \frac{W \left( e^L + e -1 \right)}{e-1} \, .$$ \end{lemma} The above sufficient condition is well suited for analyzing dynamical evolution under adversarial inputs. Taking norms on both sides of \eqref{eq:cl-a-2}, \Cref{lem:sufficient} becomes immediately applicable with $s_t = \infnorm{\hat{w}(t)} $, $W$ the uniform bound on $\infnorm{\tilde{w}(t-1)}$, and \begin{equation} \label{eq:error-term} a_t[k] =\infnorm{ A_t \PX{t}[k-1] + B_t \PU{t}[k-1] - \PX{t+1}[k]} . \end{equation} Therefore, a sufficient condition for stability is the boundedness of \eqref{eq:error-term} summed over time $t$ and horizon index $k$. This quantity represents the error of the implemented closed-loop operators, synthesized from the learnt dynamics model and delayed information, with respect to the \textit{correct} closed-loop operators generated from the true dynamics without delay. The following lemma provides such a bound. \begin{lemma}[Bounded error for closed-loop operators] \label[lemma]{prop:bounded-error} Let $\bPX{t}, \bPU{t}$ denote the global closed-loop operators concatenated from the sub-controllers generated by Algorithm \ref{alg:sls}, where $\PX{t}[k](i,j) := {\phi}^{j,x}_{t-d(j\rightarrow i)}[k](i)$ and $\PU{t}[k](i,j) := {\phi}^{j,u}_{t-d(j\rightarrow i)}[k](i)$. Then we have \begin{align*} \sum_{t=1}^\infty \sum_{k=1}^H \infnorm{ A_t \PX{t-1}[k-1] + B_t \PU{t-1}[k-1] - \PX{t}[k]} \leq O(\text{poly}(\bar{n})\bar{d}) \,
\end{align*} \end{lemma} Note that \Cref{lem:closed-loop} and \Cref{lem:sufficient} essentially reduce the stability analysis of general SLS controllers to bounding the error term \eqref{eq:error-term}, which greatly simplifies the analysis. They are applicable to any SLS controllers of the form \eqref{eq:local-controller}, regardless of how the closed-loop column operators $\bm{\phi}^i_t$ are synthesized or whether they are implemented in a distributed or centralized fashion. Given the growing applications of SLS for learning and control \citep{dean2020robust, sun2021learning, lian2021system}, \Cref{lem:closed-loop} and \Cref{lem:sufficient} could be of independent interest. Our proof of \Cref{prop:bounded-error} requires the following sensitivity result for the SLS synthesis problem, whose formal statement and proof are presented in Appendix \ref{sec:sensitivity}. \begin{theorem}[Informal, Sensitivity bound] \label[theorem]{thrm:sensitivity} Let $\phi^*(A,B) := [x^{*,T}, u^{*,T}]^T$ denote the optimal solution to the following optimization problem \begin{alignat}{3} \label{eq:mpc} &\min_{x,u} && \,\,\, \sum_{t = 0}^H x(t)^TQx(t) + u(t)^TRu(t) \\ &\text{s.t.} && x(t+1) = Ax(t) + Bu(t), \quad x(0) = x_0, \quad x(H) = 0 \, , \nonumber \end{alignat} with $Q,R \succ 0$. Let $(A_1,B_1)$ and $(A_2,B_2)$ be two pairs of system matrices for which \eqref{eq:mpc} is feasible. Then the corresponding optimal solutions $\phi^*(A_1, B_1)$ and $\phi^*(A_2, B_2)$ satisfy \begin{align*} \|\phi^*(A_1, B_1) - \phi^*(A_2, B_2)\| \leq \frac{\max\{\sigma_{max}(Q),\sigma_{max}(R)\}}{\min\{\sigma_{min}(Q),\sigma_{min}(R)\}}\left(\Gamma_A\|A_1-A_2\|_F + \Gamma_B \|B_1-B_2\|_F\right) \end{align*} where $\Gamma_A,\Gamma_B$ involve system-theoretic quantities of $A_1, A_2, B_1, B_2$. \end{theorem} \Cref{thrm:sensitivity} enables the first sensitivity analysis for the SLS synthesis problem \eqref{eq:synth}, which we detail in \Cref{cor:sls-sensitivity} and Appendix \ref{sec:sls}.
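As a quick numerical sanity check of the bound in \Cref{lem:sufficient}, the sketch below runs the $H$-convolution recursion with synthetic, summable coefficients (chosen arbitrarily for illustration; missing terms for $t-k<0$ are treated as zero, which only decreases the sequence) and compares the resulting sequence against the stated bound.

```python
import math
import numpy as np

# Simulate a sequence satisfying s_t <= sum_{k=1}^H a_{t-1}[k] * s_{t-k} + W
# (taken with equality, the worst case) and compare against the bound
#   s_t <= e^{-t/H} e^L s_0 + W (e^L + e - 1)/(e - 1),
# where L >= sum_t sum_k a_t[k].  All coefficients are synthetic.

H, T, W = 3, 60, 0.5
rng = np.random.default_rng(1)
a = rng.uniform(0.0, 0.05, size=(T, H))   # a[t][k-1], summable by construction
L = a.sum()

s = np.zeros(T + 1)
s[0] = 10.0
for t in range(1, T + 1):
    conv = sum(a[t - 1][k - 1] * s[t - k] for k in range(1, H + 1) if t - k >= 0)
    s[t] = conv + W

bound = [math.exp(-t / H) * math.exp(L) * s[0]
         + W * (math.exp(L) + math.e - 1) / (math.e - 1)
         for t in range(T + 1)]
assert all(s[t] <= bound[t] + 1e-9 for t in range(T + 1))
```

The simulated sequence settles near the steady-state level dictated by $W$, well below the exponentially decaying envelope of the lemma.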
In particular, \Cref{thrm:sensitivity} can be used to analyze a class of model predictive control (MPC) problems of the form \eqref{eq:mpc}, where one can quantify the sensitivity of the optimal solution $x^*,u^*$ with respect to perturbations of the model matrices $A$ and $B$. \section{Concluding Remarks} We have proposed and analyzed the first learning-based algorithm that provably achieves online stabilization for networked LTI systems subject to communication delays under adversarial disturbances. Our approach is modular: one can replace the CONSIST component of Algorithm \ref{alg:main} with other model-selection subroutines targeted at different disturbance models, e.g., distributed system identification when disturbances are stochastic. Therefore, the proposed approach can serve as a template for generalizing centralized learning-to-control SLS-based algorithms to distributed settings with communication constraints. For example, a promising and challenging direction is to use our framework to generalize centralized SLS-based regret-optimal algorithms, such as the one in \cite{dean2018regret}, to the distributed setting. Many of the technical results in this paper can be leveraged to quantify the effect of delay in the analysis that would be needed for such an extension. \bibliographystyle{abbrvnat} \subsection{From $\cl{H}_2$-optimal control to Least Squares} To reduce notational overhead, we will drop time indices in the closed-loop operators $\PX{}[k]$, $\PU{}[k]$ and write $\Phi^x_k$, $\Phi^u_k$ instead, since this section presents results about general SLS synthesis.
\noindent Let $\Phi^x_k\in\mathbb{R}^{n \times n}$ and $\Phi^u_k\in\mathbb{R}^{m \times n}$ and consider the following canonical SLS synthesis problem for system matrices $[A,B]$ and weighting matrices $C \in \mathbb{R}^{n \times n}, D\in \mathbb{R}^{m \times m}$: \begin{align} \label{eq:obj} S = \min & \left\| \begin{bmatrix}C & 0\\0& D \end{bmatrix} \begin{bmatrix} \Phi^x_1 & \Phi^x_2 & \dots & \Phi^x_H \\ \Phi^u_1 & \Phi^u_2 & \dots & \Phi^u_H \end{bmatrix} \right\|^2_{F} \\ \notag \text{ s.t.: } & \Phi^x_{1} = I \\ \notag & \Phi^x_{k+1} = A \Phi^x_{k} + B \Phi^u_{k}, \quad \forall \: k: 1 \leq k \leq H \\ \notag & \Phi^x_{H+1} = 0 \end{align} The objective in \eqref{eq:obj} is a weighted $\mathcal{H}_2$ norm of the closed-loop operators $\bPX{}$ and $\bPU{}$. Denote by $\phi^{j,x}_{k}\in\mathbb{R}^n$, $\phi^{j,u}_{k}\in\mathbb{R}^m$ the $j$-th columns of $\Phi^{x}_{k} \in \mathbb{R}^{n \times n}$, $\Phi^{u}_{k} \in \mathbb{R}^{m \times n}$, and by $e_j$ the $j$-th standard basis vector. As described in \Cref{subsec:distributed}, the problem separates across columns, so we can equivalently restate \eqref{eq:obj} in terms of each column $\phi^{j,x}_k$ and $\phi^{j,u}_k$: \begin{align} \label{eq:objcol} S_j :=\min & \left\| \begin{bmatrix}C & 0 \\ 0 & D \end{bmatrix} \begin{bmatrix} \phi^{j,x}_1 & \phi^{j,x}_{2} & \dots & \phi^{j,x}_{H} \\ \phi^{j,u}_1 & \phi^{j,u}_{2} & \dots & \phi^{j,u}_{H} \end{bmatrix} \right\|^2_{F} \\ \notag \text{ s.t.: } & \phi^{j,x}_1 = e_j \\ \notag & \phi^{j,x}_{k+1} = A \phi^{j,x}_{k} + B \phi^{j,u}_{k}, \quad \forall \: 1 \leq k \leq H \\ \notag & \phi^{j,x}_{H+1} = 0 \end{align} We will now fix $j$, rewrite \eqref{eq:objcol} further, and introduce new variables to avoid tedious notation.
Define $u_k = \phi^{j,u}_{k}$ for all $1 \leq k \leq H$, $\bm{u} = [u^{\top}_1 ,\dots, u^{\top}_H]^\top$, the block-lower-triangular matrix $\bm{G}_u \in\mathbb{R}^{Hn \times Hm}$, the vector $\xi_j \in \mathbb{R}^{Hn}$, and the lifted weight matrices $\bm{C}$, $\bm{D}$ as \begin{align} &\bm{G}_u = \begin{bmatrix}B &0 & 0 & \dots & 0\\ AB &B & 0 & \dots & 0\\ A^2B &AB & B & \dots & 0\\ & \dots & \dots & & \\ A^{H-1}B & A^{H-2}B & A^{H-3}B & \dots & B\\ \end{bmatrix} & & \xi_j =\begin{bmatrix} -A e_j \\ -A^2 e_j \\ \dots \\ -A^H e_j \end{bmatrix} && \bm{C} = I_H \otimes C && \bm{D} = I_H \otimes D , \end{align} where $I_k$ is the identity matrix in $\mathbb{R}^{k \times k}$. Denote by $P_i$, $1 \leq i \leq H$, the $i$-th block-row of $\bm{G}_u$: \begin{align} P_i = [A^{i-1}B, A^{i-2}B, \dots, B, 0, \dots,0] \end{align} Notice that with these variables, any feasible $\phi^{j,u}_{k}$, $\phi^{j,x}_{k}$ satisfy, for all $1 \leq k \leq H$: $$\phi^{j,x}_{k+1} = -\xi_{j,k} + P_k \bm{u}$$ Now, separating out the state cost for the first time-step, we can rewrite the subproblem $S_j$ as \begin{align} \label{eq:Sj}S_j = \min\limits_{\bm{u}} \quad & \left\|\begin{bmatrix}\bm{C} \bm{G}_u\\\bm{D} \end{bmatrix} \bm{u} - \begin{bmatrix}\bm{C}\xi_j\\\bm{0}\end{bmatrix} \right\|^2_2 \quad + (C^{\top}C)_{jj} \\ \label{eq:cons}\text{ s.t.: } &\quad 0 = A^H e_j + P_H \bm{u} \end{align} For large systems consisting of many sparsely interconnected small subsystems, it is often the case that the overall system is $H$-controllable for some suitable choice of $H\ll n$. \subsection{Representation as a Least-Squares problem} \noindent From \eqref{eq:Sj}, define $\bm{u}^*_{c} := P^{\top}_H (P_H P^{\top}_H)^{-1}A^H e_j $, which is the solution to the optimization problem \begin{align} \min\limits_{\bm{u}} & \quad \|\bm{u}\|^2_2 \\ \text{ s.t. } & 0 = -A^H e_j + P_H \bm{u} \end{align} We can interpret $\bm{u}^*_{c}$ as the smallest control action, measured in $\ell_2$, that drives the system from the origin to $A^He_j$ in $H$ time-steps. This relates to controllability Gramians as described in \cite{dullerud2013course}. Using $M^+$ to denote the Moore-Penrose inverse of a matrix $M$, we can also write $\bm{u}^*_{c} := P^{+}_H A^H e_j = P^\top_H W^{-1}_H A^H e_j $, where $W_H = P_H P_H^\top$.\\ \noindent Ignoring the constant term $(C^TC)_{jj}$, we can reparametrize $\bm{u}= -\bm{u}^*_c + \bm{u}'$ with $\bm{u}'\in\mathrm{null}(P_H)$ and describe \eqref{eq:Sj} as the optimization problem: \begin{align}\label{eq:sj} S_j \quad &:= \min\limits_{\bm{u}' \in \mathrm{null}(P_H(A,B))} \Big\|\begin{bmatrix}\bm{C}& 0\\ 0 & \bm{D}\end{bmatrix}\begin{bmatrix}\bm{G}_u(A,B) \\ I \end{bmatrix}(\bm{u}'-\bm{u}_c^*(A,B))\Big\|^2_2 \end{align} \paragraph{The least-squares problem. } Let $H$ denote the FIR horizon of the problem and define the matrices \begin{align}\label{eq:gab} &\bm{G}_w(A) = \begin{bmatrix}I &0 & 0 & \dots & 0\\ A &I & 0 & \dots & 0\\ A^2 &A & I & \dots & 0\\ & \dots & \dots & & \\ A^{H-1} & A^{H-2} & A^{H-3} & \dots & I\\ \end{bmatrix}, &\bm{G}_u(A,B) = \begin{bmatrix}B &0 & 0 & \dots & 0\\ AB &B & 0 & \dots & 0\\ A^2B &AB & B & \dots & 0\\ & \dots & \dots & & \\ A^{H-1}B & A^{H-2}B & A^{H-3}B & \dots & B\\ \end{bmatrix}. \end{align} and denote by $P_i(A,B)$ the $i$-th block row of $\bm{G}_u(A,B)$: \begin{align}\label{eq:pdef} P_i(A,B) = [A^{i-1}B, A^{i-2}B, \dots, B, 0, \dots,0] \end{align} $\bm{G}_u(A,B)$ can be written as $\bm{G}_u(A,B) = \bm{G}_w(A)(I_H \otimes B)$, where $I_H$ is the identity matrix in $\mathbb{R}^{H \times H}$. Let $Z \in \mathbb{R}^{H \times H}$ be the nilpotent matrix \begin{align} Z = \begin{bmatrix} \bm{0}_{H-1 \times 1} & I_{H-1} \\ 0& \bm{0}_{1 \times H-1} \end{bmatrix}, \end{align} and note that its pseudo-inverse is $Z^+ = Z^\top$.
Using $Z$, it is easy to verify that $\bm{G}_w(A)$ can be expressed as: \begin{align} \bm{G}_w(A) = \left(I_{Hn} - Z^+ \otimes A\right)^{-1} \end{align} Recall our derivation of the reduced problem $S_j$ \eqref{eq:sj} for a fixed $j$: \begin{align} S_j \quad &:= \min\limits_{\bm{u} \in \mathrm{null}(P_H(A,B))} \Big\|\begin{bmatrix}\bm{C}& 0\\ 0 & \bm{D}\end{bmatrix}\begin{bmatrix}\bm{G}_u(A,B) \\ I \end{bmatrix}(\bm{u}-\bm{u}_c(A,B))\Big\|^2_2 \end{align} where $\bm{u}_{c}$ denotes $\bm{u}_{c} := P^{+}_H A^H e_j = P^\top_H W^{-1}_H A^H e_j$. Let $\bm{u}^*(A,B)$ be a minimizer of the above problem for fixed $A,B$. We are interested in the SLS solutions $$\phi^{*j}(A,B) := \begin{bmatrix}\phi^{*j}_x(A,B) \\ \phi^{*j}_u(A,B) \end{bmatrix} = \begin{bmatrix}\bm{G}_u(A,B)\\I \end{bmatrix}(\bm{u}^*(A,B)-\bm{u}_c(A,B))$$ and in how these solutions are perturbed by changes in $A,B$. For the rest of the discussion, we will omit the explicit dependence on $(A,B)$ to reduce the notational burden.
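The resolvent identity above and the factorization $\bm{G}_u(A,B) = \bm{G}_w(A)(I_H \otimes B)$ are easy to verify numerically; the following sketch does so for a small random system (dimensions chosen arbitrarily for illustration).

```python
import numpy as np

# Build G_w(A) explicitly (block lower-triangular with A^(i-j) blocks) and
# compare with (I - Z^T kron A)^{-1}; also check G_u = G_w (I_H kron B).
# Small random instance; n = state dim, m = input dim, H = FIR horizon.

rng = np.random.default_rng(2)
n, m, H = 3, 2, 4
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# explicit G_w: block (i, j) = A^(i-j) for i >= j (0-indexed)
Gw = np.zeros((H * n, H * n))
for i in range(H):
    for j in range(i + 1):
        Gw[i*n:(i+1)*n, j*n:(j+1)*n] = np.linalg.matrix_power(A, i - j)

Z = np.eye(H, k=1)                       # nilpotent shift, ones on superdiagonal
lhs = np.linalg.inv(np.eye(H * n) - np.kron(Z.T, A))
assert np.allclose(lhs, Gw)

Gu = Gw @ np.kron(np.eye(H), B)          # factorization G_u = G_w (I_H kron B)
# spot-check a sample block: block (2, 0) of G_u should equal A^2 B
assert np.allclose(Gu[2*n:3*n, 0:m], A @ A @ B)
```

Since $Z^\top \otimes A$ is nilpotent, the matrix $I_{Hn} - Z^\top \otimes A$ is always invertible, so the inverse above exists for any $A$.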
First, we (over-)parametrize $\bm{u}$ as $$\bm{u} = (I-P^+_H P_H)\bm{\eta}, $$ to cast the above problem into an unconstrained one: \begin{align} S_j \quad &:= \min\limits_{\bm{\eta}} \Big\|\underbrace{\begin{bmatrix}\bm{C}& 0\\ 0 & \bm{D}\end{bmatrix}\begin{bmatrix}\bm{G}_u(A,B) \\ I \end{bmatrix}(I-P^+_HP_H)}_{\bm{F}}\bm{\eta}- \underbrace{\begin{bmatrix}\bm{C}& 0\\ 0 & \bm{D}\end{bmatrix}\begin{bmatrix}\bm{G}_u(A,B) \\ I \end{bmatrix}\bm{u}_c(A,B)}_{\bm{g}}\Big\|^2_2 \end{align} The unique min-norm solution $\eta^*$ to the above problem is $\eta^* = \bm{F}^+\bm{g}$ and therefore the optimal solution $\phi^*$ takes the form \begin{align} \phi^* = \begin{bmatrix}\bm{C}^{-1}& 0\\ 0 & \bm{D}^{-1}\end{bmatrix}(\bm{F} \bm{F}^+\bm{g} - \bm{g}) = \begin{bmatrix}\bm{C}^{-1}& 0\\ 0 & \bm{D}^{-1}\end{bmatrix}\underbrace{(\bm{F} \bm{F}^+ - I)\bm{g}}_{\nu^*} =: \begin{bmatrix}\bm{C}^{-1}& 0\\ 0 & \bm{D}^{-1}\end{bmatrix} \nu^* \end{align} \subsection{Local Lipschitzness of $\cl{H}_2$-optimal closed-loop operators} \noindent Here, we perform a perturbation analysis of the term $\nu^* = (\bm{F} \bm{F}^+ - I)\bm{g}$. Throughout the discussion, we will make frequent use of the following identities: \begin{lemma}\label{lem:diffs} For arbitrary matrices $X,Y\in\mathbb{R}^{n \times m}$ and $A_1, A_2 \in \mathbb{R}^{n \times n}$, the following hold: \begin{enumerate}[i)] \item $A^k_1-A^k_2 = \sum^{k-1}_{j=0} A^{k-1-j}_1(A_1-A_2)A^j_2$ \item $XX^+-YY^+ = (I-XX^+)(X-Y)Y^+ + \left[(I-YY^+)(X-Y)X^+\right]^\top$ \item If $A_1$ and $A_2$ are invertible, then $A_1^{-1}-A_2^{-1}= A_1^{-1}(A_2-A_1)A_2^{-1}$. \end{enumerate} \end{lemma} The following is a corollary of Theorem 4.1 in \cite{wedin1973perturbation}: \begin{theorem}\label{thm:wedin} Let $X$ and $Y$ be matrices of equal rank, let $\|\:\cdot\:\|_2$ denote the induced 2-norm, and let $\|\:\cdot\:\|_F$ denote the Frobenius norm.
The following inequalities hold: \begin{align*} \|X^+-Y^+\|_2 &\leq \varphi \|X^+\|_2\|Y^+\|_2 \|X-Y\|_2\\ \|X^+-Y^+\|_F &\leq \sqrt{2} \|X^+\|_2\|Y^+\|_2 \|X-Y\|_F \end{align*} where $\varphi = \tfrac{1 + \sqrt{5}}{2}$ denotes the golden ratio. \end{theorem} \noindent Next we present the core theorem of the perturbation analysis: given two \textit{arbitrary} controllable systems $(A_1,B_1)$ and $(A_2,B_2)$, \thmref{thm:mainsensitivity} bounds the worst-case difference in solutions $\|\phi^{*j}_1-\phi^{*j}_2\|_2$ in terms of the differences in parameter space, $\|A_1-A_2\|_{F}$ and $\|B_1-B_2\|_F$, between the two systems. This result is the first global (considering arbitrary pairs $A_1,A_2$ and $B_1,B_2$) perturbation bound for $\cl{H}_2$-optimal control with SLS. \begin{theorem}\label{thm:mainsensitivity} Let $C,D \succ 0$, let $(A_1,B_1)$ and $(A_2,B_2)$ be two controllable pairs of system matrices with FIR horizon $H$, and let $\phi^{*j}_1$ and $\phi^{*j}_2$ be the corresponding SLS solutions of the subproblem $S_j$.
Then, it holds that: \begin{align} \|\phi^{j*}_1-\phi^{j*}_2\|_2 \leq \Gamma_A\|A_1-A_2\|_F + \Gamma_B \|B_1-B_2\|_F \end{align} where the Lipschitz constants $\Gamma_A,\Gamma_B$ are given by \begin{align*} \Gamma_A &= \kappa_{CD}\Gamma'_1 + \kappa_{CD}\Gamma'_2 \|B_1\|_2\|\bm{G}_w(A_1)\|_2\|\bm{G}_w(A_2)\|_2, &&\kappa_{CD} = \frac{\max\{\sigma_{max}(C),\sigma_{max}(D)\}}{\min\{\sigma_{min}(C),\sigma_{min}(D)\}} \\ \Gamma_B &= \kappa_{CD}\Gamma'_2 \|\bm{G}_w(A_2)\|_2 && \end{align*} and $\Gamma'_1$ and $\Gamma'_2$ are defined as: \begin{align*} \Gamma'_1 &= \alpha_{H,1} \alpha_{H,2}H(1+\left\|\bm{G}_{u,2}\right\|_2)\|P^+_{H,2}\|_2 \\ \Gamma'_2 &= \alpha_{H,1} \|P^+_{H,1}\|_2 \left(1 + \bm{\varphi} \|P^+_{H,2}\|_2 +\bm{\varphi} \|P^+_{H,2}\|_2\left\|\bm{G}_{u,2}\right\|_2 \right) + \|\bm{g}\|_2(\|\bm{F}^+_1\|_2 + \|\bm{F}^+_2\|_2) + \dots \\ \notag & \quad + \bm{\varphi}\|\bm{g}\|_2(\|\bm{F}^+_1\|_2 + \|\bm{F}^+_2\|_2)\|P^+_{H,1}\|_2 \|P^+_{H,2}\|_2(\|P_{H,1}\|_2 + \|P_{H,2}\|_2)(1+\|\bm{G}_{u,1}\|_2). \end{align*} and $\varphi=\tfrac{1+\sqrt{5}}{2}$ is the golden ratio. \end{theorem} \begin{proof} Recall the identities of \lemref{lem:diffs}. Write $\nu^*_1-\nu^*_2$ as \begin{align} \notag \nu^*_1-\nu^*_2 &= (\bm{F}_1\bm{F}^+_1-I)(\bm{g}_1-\bm{g}_2) + (\bm{F}_1\bm{F}^+_1 - \bm{F}_2\bm{F}^+_2)\bm{g}_2 \\ \label{eq:nu1bound} \|\nu^*_1-\nu^*_2\|_2&\leq \|\bm{g}_1-\bm{g}_2\|_2 + \|\bm{F}_1\bm{F}^+_1 - \bm{F}_2\bm{F}^+_2\|_2\|\bm{g}\|_2, \end{align} where we used the fact that $(\bm{F}_1\bm{F}^+_1-I)$ is a projection and therefore $\|\bm{F}_1\bm{F}^+_1-I\|_2 =1$. Rewrite $\bm{F}_1\bm{F}^+_1 - \bm{F}_2\bm{F}^+_2$ as $$ (I-\bm{F}_1\bm{F}^+_1)(\bm{F}_1-\bm{F}_2)\bm{F}^+_2 + \left[(I-\bm{F}_2\bm{F}^+_2)(\bm{F}_1-\bm{F}_2)\bm{F}^+_1\right]^\top $$ to conclude that \begin{align} \|\bm{F}_1\bm{F}^+_1 - \bm{F}_2\bm{F}^+_2\|_2 \leq \|\bm{F}_1-\bm{F}_2\|_2(\|\bm{F}^+_1\|_2 + \|\bm{F}^+_2\|_2).
\end{align} Substitution into \eqref{eq:nu1bound} yields: \begin{align} \label{eq:nu2bound} \|\nu^*_1-\nu^*_2\|_2 &\leq \|\bm{g}_1-\bm{g}_2\|_2 + \|\bm{F}_1-\bm{F}_2\|_2(\|\bm{F}^+_1\|_2 + \|\bm{F}^+_2\|_2)\|\bm{g}\|_2, \end{align} \begin{enumerate} \item \underline{Bounding $\|\bm{F}_1-\bm{F}_2\|_2$}: Rewrite $\bm{F}_1-\bm{F}_2$ as \begin{align} \begin{bmatrix}\bm{C}^{-1}& 0\\ 0 & \bm{D}^{-1}\end{bmatrix}(\bm{F}_1-\bm{F}_2) &= \begin{bmatrix}\bm{G}_{u,1} \\ I \end{bmatrix}(I-P^+_{H,1}P_{H,1}) -\begin{bmatrix}\bm{G}_{u,2} \\ I \end{bmatrix}(I-P^+_{H,2}P_{H,2}) \\ &= \begin{bmatrix}\bm{G}_{u,1} \\ I \end{bmatrix}(P^+_{H,2}P_{H,2}-P^+_{H,1}P_{H,1}) + \begin{bmatrix}\bm{G}_{u,1} - \bm{G}_{u,2} \\ 0 \end{bmatrix}(I-P^+_{H,2}P_{H,2}) \end{align} From the above we can derive the inequality: \begin{align}\label{eq:F12bound} \frac{\|\bm{F}_1-\bm{F}_2\|_2}{\max\{\|C\|_2,\|D\|_2\}} &\leq (1+\|\bm{G}_{u,1}\|_2)\|P^+_{H,2}-P^+_{H,1}\|_2(\|P_{H,1}\|_2 + \|P_{H,2}\|_2) + \|\bm{G}_{u,1}-\bm{G}_{u,2}\|_2 \end{align} Now we will use the result \thmref{thm:wedin} to bound $\|P^+_{H,2}-P^+_{H,1}\|_2$ as \begin{align} \|P^+_{H,2}-P^+_{H,1}\|_2 \leq \bm{\varphi} \|P^+_{H,1}\|_2 \|P^+_{H,2}\|_2 \|P_{H,2}-P_{H,1}\|_2 \end{align} Furthermore, noticing $P_{H,2}-P_{H,1} = [\bm{0},\dots,\bm{0},\bm{I}_n](\bm{G}_{u,2}-\bm{G}_{u,1})$ we can conclude \begin{align} \|P^+_{H,2}-P^+_{H,1}\|_2 \leq \bm{\varphi} \|P^+_{H,1}\|_2 \|P^+_{H,2}\|_2 \|\bm{G}_{u,2}-\bm{G}_{u,1}\|_2. 
\end{align} We combine this into \eqref{eq:F12bound} to obtain \begin{align}\label{eq:F12bound-2} \notag &\frac{\|\bm{F}_1-\bm{F}_2\|_2}{\max\{\|C\|_2,\|D\|_2\}}\\ \leq &\left( 1+ \bm{\varphi} \|P^+_{H,1}\|_2 \|P^+_{H,2}\|_2 (1+\|\bm{G}_{u,1}\|_2)(\|P_{H,1}\|_2 + \|P_{H,2}\|_2) \right)\|\bm{G}_{u,1}-\bm{G}_{u,2}\|_2 \end{align} \item \underline{Bounding $\|\bm{g}_1-\bm{g}_2\|_2$}: Introduce the constant $\alpha_H := \max_{0\leq k\leq H} \|A^k\|_2$ and observe that $\|A^H_1-A^H_2\|_2$ can be bounded as: \begin{align} \|A^H_1-A^H_2\|_2 = \|\sum^{H-1}_{j=0} A^{H-1-j}_1(A_1-A_2)A^j_2\| \leq H \alpha_{H,1} \alpha_{H,2} \|A_1-A_2\|_2 \end{align} We can rewrite $\bm{g}_1-\bm{g}_2$ as \begin{align} \begin{bmatrix}\bm{C}^{-1}& 0\\ 0 & \bm{D}^{-1}\end{bmatrix}(\bm{g}_1-\bm{g}_2) &= \begin{bmatrix}\bm{G}_{u,1} \\ I \end{bmatrix}P^+_{H,1}A^H_1e_j - \begin{bmatrix}\bm{G}_{u,2} \\ I \end{bmatrix}P^+_{H,2}A^H_2e_j \\ &=\begin{bmatrix}(\bm{G}_{u,1}-\bm{G}_{u,2}) \\ 0 \end{bmatrix}P^+_{H,1}A^H_1e_j + \begin{bmatrix}\bm{G}_{u,2} \\ I\end{bmatrix}(P^+_{H,1}-P^+_{H,2})A^H_1e_j \\ \notag&\quad \dots+ \begin{bmatrix}\bm{G}_{u,2} \\ I \end{bmatrix}P^+_{H,2}(A^H_1-A^H_2)e_j \end{align} and obtain the bound: \begin{align} \frac{\|\bm{g}_1-\bm{g}_2\|_2}{\max\{\|C\|_2,\|D\|_2\}} &\leq \alpha_{H,1}\left\| \bm{G}_{u,1}-\bm{G}_{u,2}\right\|_2 \|P^+_{H,1}\|_2+ \alpha_{H,1}(1+\left\|\bm{G}_{u,2}\right\|_2)\left\|P^+_{H,1}-P^+_{H,2}\right\|_2 \\ &\quad \dots + \alpha_{H,1} \alpha_{H,2}H(1+\left\|\bm{G}_{u,2}\right\|_2)\|P^+_{H,2}\|_2\|A_1-A_2\|_2 \\ &\leq \alpha_{H,1} \|P^+_{H,1}\|_2 \left(1 + \bm{\varphi} \|P^+_{H,2}\|_2 +\bm{\varphi} \|P^+_{H,2}\|_2\left\|\bm{G}_{u,2}\right\|_2 \right)\left\| \bm{G}_{u,1}-\bm{G}_{u,2}\right\|_2 \\ &\quad \dots + \alpha_{H,1} \alpha_{H,2}H(1+\left\|\bm{G}_{u,2}\right\|_2)\|P^+_{H,2}\|_2\|A_1-A_2\|_2 \end{align} \end{enumerate} We get the bound \begin{align} \frac{\|\nu^*_1-\nu^*_2\|_2}{\max\{\|C\|_2,\|D\|_2\}} \leq \Gamma'_1 \|A_1-A_2\|_2 + \Gamma'_2 
\|\bm{G}_{u,1}-\bm{G}_{u,2}\|_2 \end{align} where $\Gamma'_1$ and $\Gamma'_2$ are the constants: \begin{align} \Gamma'_1 &= \alpha_{H,1} \alpha_{H,2}H(1+\left\|\bm{G}_{u,2}\right\|_2)\|P^+_{H,2}\|_2 \\ \Gamma'_2 &= \alpha_{H,1} \|P^+_{H,1}\|_2 \left(1 + \bm{\varphi} \|P^+_{H,2}\|_2 +\bm{\varphi} \|P^+_{H,2}\|_2\left\|\bm{G}_{u,2}\right\|_2 \right) + \|\bm{g}\|_2(\|\bm{F}^+_1\|_2 + \|\bm{F}^+_2\|_2) + \dots \\ \notag & \quad + \bm{\varphi}\|\bm{g}\|_2(\|\bm{F}^+_1\|_2 + \|\bm{F}^+_2\|_2)\|P^+_{H,1}\|_2 \|P^+_{H,2}\|_2(\|P_{H,1}\|_2 + \|P_{H,2}\|_2)(1+\|\bm{G}_{u,1}\|_2) \end{align} Using \lemref{lem:Gwu}, we obtain the final bound: \begin{align}\label{eq:Gamma12} \|\phi^*_1 - \phi^*_2\|_2 \leq \kappa_{CD}\|\nu^*_1-\nu^*_2\|_2 \leq \Gamma_A \|A_1-A_2\|_2 + \Gamma_B \|B_1-B_2\|_2 \end{align} with the constants $\Gamma_A,\Gamma_B$ defined as: \begin{align}\label{eq:GammaAB} \Gamma_A &= \kappa_{CD}\Gamma'_1 + \kappa_{CD}\Gamma'_2 \|B_1\|_2\|\bm{G}_w(A_1)\|_2 \|\bm{G}_w(A_2)\|_2 \\ \Gamma_B &= \kappa_{CD}\Gamma'_2 \|\bm{G}_w(A_2)\|_2 \end{align} \end{proof} \subsection{Global Lipschitzness of $\cl{H}_2$-optimal closed-loop operators over compact sets $\cl{S}$} For the next sections, we will assume that we are given some fixed compact set $\cl{S}$ of controllable systems. Recall the basic implications of \lemref{lem:baseCtrl}; we define accordingly the FIR horizon $H$ and the constants $\overline{\sigma}_w$, $\underline{\sigma}_w$, $\overline{\sigma}_u$, $\underline{\sigma}_u$, and assume that these quantities are all known.\\ \noindent This section derives a global Lipschitz bound for $\cl{H}_2$-optimal SLS solutions over a compact set of controllable systems $\cl{S}$. As a starting point we consider the previous theorem, \thmref{thm:mainsensitivity}. Our main proof strategy is to derive global bounds on the constants $\Gamma_A$ and $\Gamma_B$. We proceed with a collection of lemmas bounding individual terms in the equations \eqref{eq:Gamma12}, \eqref{eq:GammaAB}.
\subsubsection{Auxiliary Lemmas} \begin{lemma}\label{lem:Gwu} For any pair of system matrices $(A_1,B_1)$ and $(A_2,B_2)$ (with compatible dimensions), it holds that \begin{align} \|\bm{G}_w(A_1)-\bm{G}_w(A_2)\|_2 & \leq \|\bm{G}_w(A_1)\|_2 \|\bm{G}_w(A_2)\|_2 \| A_1 - A_2\|_2 \\ \notag \|\bm{G}_u(A_1,B_1)-\bm{G}_u(A_2,B_2)\|_2& \leq \|B_1\|_2\|\bm{G}_w(A_1)\|_2 \|\bm{G}_w(A_2)\|_2 \| A_1 - A_2\|_2 + \|\bm{G}_w(A_2)\|_2\|B_1- B_2\|_2 \end{align} \end{lemma} \begin{proof} Using $\bm{G}_u(A,B) = \bm{G}_w(A)(I_H \otimes B)$ and \lemref{lem:diffs}, we can write $\bm{G}_{u,1}-\bm{G}_{u,2}$ as \begin{align} \bm{G}_{u,1}-\bm{G}_{u,2} &= \bm{G}_w(A_1)(I_H \otimes B_1) - \bm{G}_w(A_2)(I_H \otimes B_2) \\ &= \left(\bm{G}_w(A_1)-\bm{G}_w(A_2)\right)(I_H \otimes B_1) + \bm{G}_w(A_2)\left( I_H \otimes (B_1- B_2)\right) \end{align} It holds that \begin{align} \bm{G}_w(A_1)-\bm{G}_w(A_2) &= \bm{G}_w(A_1)(\bm{G}_w(A_2)^{-1}-\bm{G}_w(A_1)^{-1})\bm{G}_w(A_2) \\ &= \bm{G}_w(A_1)(Z^+ \otimes (A_1 - A_2))\bm{G}_w(A_2) \end{align} which leads to the bound \begin{align} \left\| \bm{G}_w(A_1)-\bm{G}_w(A_2) \right\|_2 \leq \|\bm{G}_w(A_1)\|_2 \|A_1-A_2\|_2 \|\bm{G}_w(A_2)\|_2 \end{align} Combining the two displays and using $\|I_H \otimes B_1\|_2 = \|B_1\|_2$ and $\|I_H \otimes (B_1-B_2)\|_2 = \|B_1-B_2\|_2$ yields the claimed bounds. \end{proof} In total, we need global bounds on the quantities $\|\bm{G}_{u}\|_2$, $\|\bm{G}_{w}\|_2$, $\|P^+_{H}\|_2$, $\|P_{H}\|_2$, $\|\bm{F}^+\|_2$, $\|\bm{g}\|_2$. \begin{lemma} Let $(A,B)$ be a pair of fixed system matrices, let $\bm{G}_{u}(A,B)$, $\bm{G}_w(A)$ be the matrices defined in \eqref{eq:gab}, and let $W^u_H = \sum^{H-1}_{i=0} A^iBB^\top A^{i\top}$, $W^w_H = \sum^{H-1}_{i=0} A^i A^{i\top}$ be the $H$-step controllability Gramians with respect to the input $u$ and the disturbance $w$, respectively.
Then it holds: \begin{align} &\|\bm{G}_{u}(A,B)\|_2 \leq \sqrt{H \sigma_{max}(W^u_H(A,B))} && \|\bm{G}_{w}(A)\|_2 \leq \sqrt{H \sigma_{max}(W^w_H(A))} \end{align} \end{lemma} \begin{proof} $\|\bm{G}_{u}\|_2$ is defined as $\|\bm{G}_{u}\|^2_2 := \max\limits_{\|u\|_2 = 1} \|\bm{G}_u \bm{u}\|^2_2$; by decomposing $\bm{u} = [u^\top_0,\dots,u^\top_{H-1}]^\top$ we can rewrite this as \begin{align} \|\bm{G}_{u}\|^2_2 &= \max\limits_{\|u\|_2 = 1} \left\| \begin{bmatrix} B u_0 \\ ABu_0 + Bu_1\\ \dots \\ A^{H-1}Bu_0 + \dots + Bu_{H-1} \end{bmatrix} \right\|^2_2 = \max\limits_{\|u\|_2 = 1} \sum^{H}_{k=1} \|P_k \bm{u}\|^2_2 \\ &\leq \sum^{H}_{k=1} \max\limits_{\|u\|_2 = 1} \|P_k \bm{u}\|^2_2 =\sum^{H}_{k=1} \|P_k\|^2_2 \leq H \|P_H\|^2_2 \leq H \|W^u_H\|_2 \end{align} where we used the facts that $\|P_k\|^2_2$ is increasing in $k$ and that $\|P_k\|^2_2$ equals the induced $2$-norm of the corresponding controllability Gramian $W^u_k = \sum^{k-1}_{i=0} A^iBB^\top A^{i\top}$. Thus, we obtain the bound $$\|\bm{G}_{u}(A,B)\|_2 \leq \sqrt{H \sigma_{max}(W^u_H(A,B))}, $$ and the bound on $\|\bm{G}_{w}(A)\|_2$ follows in the same way. \end{proof} \begin{lemma} Let $(A,B)$ be a pair of $H$-controllable fixed system matrices, let $P_H(A,B)$ be the matrix defined in \eqref{eq:pdef}, and let $W^u_H = \sum^{H-1}_{i=0} A^iBB^\top A^{i\top}$ be the $H$-step controllability Gramian with respect to the input $u$. Then, the induced $2$-norm of $P_H(A,B)$ and of its Moore-Penrose inverse $P^+_H(A,B)$ can be written as: \begin{align} &\|P_H(A,B)\|_2 = \left(\sigma_{max}(W^u_H(A,B))\right)^{\tfrac{1}{2}} && \|P^+_H(A,B)\|_2 = \left(\sigma_{min}(W^u_H(A,B))\right)^{-\tfrac{1}{2}} \end{align} \end{lemma} \begin{proof} Because we assume a sufficient degree of controllability, $P_H(A,B)$ has full row rank.
This implies that \begin{align} &\|P_H(A,B)\|_2 = \sqrt{\lambda_{max}(P_H(A,B)P^\top_H(A,B))} = \sqrt{ \sigma_{max}(W^u_H(A,B))}\\ &\left(\|P^+_H(A,B)\|_2\right)^{-1} = \sqrt{\lambda_{min}(P_H(A,B)P^\top_H(A,B))} = \sqrt{ \sigma_{min}(W^u_H(A,B))} \end{align} \end{proof} \begin{lemma} Let $(A,B)$ be a fixed pair of $H$-controllable system matrices, and let $\bm{F}(A,B)$ denote the matrix \begin{align} \bm{F}(A,B) = \begin{bmatrix}\bm{C}& 0\\ 0 & \bm{D}\end{bmatrix}\begin{bmatrix}\bm{G}_u(A,B) \\ I \end{bmatrix}(I-P^+_H(A,B)P_H(A,B)). \end{align} Then, $\|\bm{F}^+(A,B)\|_2 \leq \sigma^{-1}_{min}(D)$. \end{lemma} \begin{proof} For an arbitrary matrix $M$, $(\|M^+\|_2)^{-1}$ is equal to the smallest \underline{non-zero} singular value of $M$ (we will denote this quantity by $\sigma_{-1}(M)$). Thus, in order to bound $\|M^+\|_2$ from above, we have to bound $\sigma_{-1}(M)$ from below. Denote by $\bm{L}$ the matrix $$ \bm{L}:=\begin{bmatrix}\bm{C}& 0\\ 0 & \bm{D}\end{bmatrix}\begin{bmatrix}\bm{G}_u(A,B) \\ I \end{bmatrix} $$ and notice that it has full column rank $Hn_u$. The projection $\Pi_{\cl{N}(P_H)}:=(I-P^+_H(A,B)P_H(A,B))$ has rank $Hn_u-n_x$ due to the assumption of $H$-controllability. Hence, $\bm{F}=\bm{L}\Pi_{\cl{N}(P_H)}$ has rank $r_{\bm{F}} := Hn_u-n_x$ and a null space $\cl{N}(\bm{F})$ of dimension $n_x$. From these observations, $\sigma_{-1}(\bm{F})$ is the $r_{\bm{F}}$-th largest (or equivalently the $(n_x+1)$-th smallest) singular value of $\bm{F}$. Using the minimax principle, we can therefore write: \begin{align} \sigma^2_{-1}(\bm{F}) &= \max\limits_{\mathrm{proj. }\Pi, \text{ s.t.: } \mathrm{rank}(\Pi) = r_{\bm{F}}} \min\limits_{x\text{ s.t.: }\|\Pi x\|=1} x^\top \Pi \bm{F}^\top \bm{F} \Pi x \\ &= \max\limits_{\mathrm{proj. }\Pi, \text{ s.t.: } \mathrm{rank}(\Pi) = r_{\bm{F}}} \min\limits_{x\text{ s.t.: }\|\Pi x\|=1} x^\top \Pi \Pi_{\cl{N}(P_H)} \bm{L}^\top \bm{L} \Pi_{\cl{N}(P_H)} \Pi x \end{align} Now recall that $\Pi_{\cl{N}(P_H)}$ is of rank $r_{\bm{F}}$, hence it is a feasible choice for the variable $\Pi$ of the outer optimization problem. This leads to the bound \begin{align} \sigma^2_{-1}(\bm{F}) &\geq \min\limits_{x\text{ s.t.: }\|\Pi_{\cl{N}(P_H)} x\|=1} x^\top \Pi_{\cl{N}(P_H)} \bm{L}^\top \bm{L} \Pi_{\cl{N}(P_H)} x \\ &\geq \min\limits_{z\text{ s.t.: }\|z\|=1} z^\top \bm{L}^\top \bm{L} z = \sigma^2_{min}(\bm{L}) \end{align} We obtain a simple, but possibly conservative, lower bound on $\sigma_{min}(\bm{L})$ as follows: \begin{align*} &\sigma^2_{min}(\bm{L}) = \min\limits_{z\text{ s.t.: }\|z\|=1} \|\bm{L} z\|^2_2 = \min\limits_{z\text{ s.t.: }\|z\|=1} \|\bm{C}\bm{G}_u(A,B) z\|^2_2 + \|\bm{D}z\|^2_2 \geq \sigma^2_{min}(\bm{C}\bm{G}_u(A,B)) + \sigma^2_{min}(\bm{D})\\ &\implies \quad \sigma_{min}(\bm{L}) \geq \sigma_{min}(\bm{D}) \end{align*} Finally, this yields the result: $\|\bm{F}^+(A,B)\|_2 = \sigma^{-1}_{-1}(\bm{F}) \leq \sigma^{-1}_{min}(\bm{L}) \leq \sigma^{-1}_{min}(\bm{D})$. \end{proof} We obtain an upper bound for $\|\bm{g}\|_2$ as a corollary of the previous three lemmas: \begin{lemma} Let $(A,B)$ be a fixed pair of $H$-controllable system matrices. Let $\bm{g} = \bm{L}\bm{u}^*_c$, where $\bm{L}$ and $\bm{u}^*_{c}$ are defined as: \begin{align} & \bm{L}:=\begin{bmatrix}\bm{C}& 0\\ 0 & \bm{D}\end{bmatrix}\begin{bmatrix}\bm{G}_u(A,B) \\ I \end{bmatrix} & &\bm{u}^*_{c} := P^{+}_H A^H e_j = P^\top_H W^{-1}_H A^H e_j.
\end{align} Then it holds that $$\|\bm{g}\|_2 \leq \left(\|C\|_2 \sqrt{H}\sigma^{\tfrac{1}{2}}_{max}(W^u_H) + \|D\|_2\right)\sigma^{-\tfrac{1}{2}}_{min}(W^u_H)\alpha_H, $$ where $\alpha_H := \max_{0\leq k\leq H} \|A^k\|_2$. \end{lemma} \subsection{The final bound} With the results of the last section, we can now bound the constants $\Gamma_A$ and $\Gamma_B$ used in \thmref{thm:mainsensitivity}. Rather than writing out the explicit form of the constants, we shall only analyze how they scale with the system parameters. $\Gamma_A$, $\Gamma_B$ are defined as \begin{align*} \Gamma_A &= \kappa_{CD}\Gamma'_1 + \kappa_{CD}\Gamma'_2 \|B_1\|_2\|\bm{G}_w(A_1)\|_2 \|\bm{G}_w(A_2)\|_2 \\ \Gamma_B &= \kappa_{CD}\Gamma'_2 \|\bm{G}_w(A_2)\|_2, \end{align*} where $\Gamma'_1$, $\Gamma'_2$ are dominated by the terms: \begin{align*} \Gamma'_1 &\sim \cl{O}\left(\alpha_{H,1} \alpha_{H,2}H\left\|\bm{G}_{u,2}\right\|_2\|P^+_{H,2}\|_2\right)\\ \Gamma'_2 &\sim \cl{O}\left(\|\bm{g}\|_2(\|\bm{F}^+_1\|_2 + \|\bm{F}^+_2\|_2)\|P^+_{H,1}\|_2 \|P^+_{H,2}\|_2(\|P_{H,1}\|_2 + \|P_{H,2}\|_2)(1+\|\bm{G}_{u,1}\|_2)\right) \end{align*} Recall the FIR horizon $H$ and the constants $ \underline{\sigma}_u, \overline{\sigma}_u$, $\underline{\sigma}_w$, $\overline{\sigma}_w$, $\kappa_{CD}$, and revisit the collection of bounds we have derived: \begin{enumerate}[i)] \item $\|\bm{G}_{u}(A,B)\|_2 \leq \sqrt{H \sigma_{max}(W^u_H(A,B))}$, $\|\bm{G}_{w}(A)\|_2 \leq \sqrt{H \sigma_{max}(W^w_H(A))} $ \item $\|P_H(A,B)\|_2 = \left(\sigma_{max}(W^u_H(A,B))\right)^{\tfrac{1}{2}},\quad \|P^+_H(A,B)\|_2 = \left(\sigma_{min}(W^u_H(A,B))\right)^{-\tfrac{1}{2}}$ \item $\|\bm{F}^+(A,B)\|_2 \leq \sigma^{-1}_{min}(D)$ \item $\|\bm{g}\|_2 \leq \left(\|C\|_2 \sqrt{H}\sigma^{\tfrac{1}{2}}_{max}(W^u_H) + \|D\|_2\right)\sigma^{-\tfrac{1}{2}}_{min}(W^u_H)\alpha_H $ \item $\alpha_H := \max_{0\leq k\leq H} \|A^k\|_2$ \item $\kappa_{CD} = \frac{\max\{\sigma_{max}(C),\sigma_{max}(D)\}}{\min\{\sigma_{min}(C),\sigma_{min}(D)\}}$ \end{enumerate} Per
\lemref{lem:baseCtrl}, we can bound the singular values of the arbitrary Gramians using $ \underline{\sigma}_u, \overline{\sigma}_u$, $\underline{\sigma}_w$, $\overline{\sigma}_w$: \begin{align} \underline{\sigma}_u \leq &\sigma_{min}(W^u_H(A_1,B_1)), \sigma_{min}(W^u_H(A_2,B_2)),\\ &\sigma_{max}(W^u_H(A_1,B_1)), \sigma_{max}(W^u_H(A_2,B_2)) \leq \overline{\sigma}_u \\ \underline{\sigma}_w \leq &\sigma_{min}(W^w_H(A_1)), \sigma_{min}(W^w_H(A_2)),\\ &\sigma_{max}(W^w_H(A_1)), \sigma_{max}(W^w_H(A_2)) \leq \overline{\sigma}_w \end{align} Then, we obtain \begin{align} &\Gamma'_2 = \cl{O}\left( \alpha_H \:\kappa_{CD}\: H \:\left(\frac{\overline{\sigma}_u}{\underline{\sigma}_u}\right)^{\frac{3}{2}}\right) && \Gamma'_1 = \cl{O}\left( \alpha^2_H H^{\frac{3}{2}}\left(\frac{\overline{\sigma}_u}{\underline{\sigma}_u}\right)^{\frac{1}{2}} \right) \end{align} and finally \begin{align} &\Gamma_A = \cl{O}\left( \alpha^2_H \:\kappa^2_{CD}\:\|B_1\|_2\: H^2 \:\left(\frac{\overline{\sigma}_u}{\underline{\sigma}_u}\right)^{\frac{3}{2}} \overline{\sigma}_w\right) && \Gamma_B = \cl{O}\left( \alpha_H \:\kappa^2_{CD}\: H^{\frac{3}{2}} \:\left(\frac{\overline{\sigma}_u}{\underline{\sigma}_u}\right)^{\frac{3}{2}} \overline{\sigma}_w^{\frac{1}{2}}\right) \end{align} \begin{theorem}\label{thm:mainsensitivity_global} Let $C,D \succ 0$, and let $\cl{S}$ be a compact set of controllable systems with known FIR horizon $H$ and constants $ \underline{\sigma}_u, \overline{\sigma}_u$, $\underline{\sigma}_w$, $\overline{\sigma}_w$ as defined in \lemref{lem:baseCtrl}. Then there are fixed constants $\Gamma_A,\Gamma_B$, such that for any two pairs of system matrices $(A_1,B_1), (A_2,B_2) \in \cl{S}$ the corresponding $\cl{H}_2$ optimal SLS-solutions of problem $S_j$ ($j$ arbitrary), denoted $\phi^{j*}_1$ and $\phi^{j*}_2$, satisfy the following inequality: \begin{align} \|\phi^{j*}_1-\phi^{j*}_2\|_2 \leq \Gamma_A\|A_1-A_2\|_F + \Gamma_B \|B_1-B_2\|_F.
\end{align} Furthermore, $\Gamma_A$ and $\Gamma_B$ satisfy \begin{align} &\Gamma_A = \cl{O}\left( \alpha^2_H \:\kappa^2_{CD}\:\beta\: H^2 \:\left(\frac{\overline{\sigma}_u}{\underline{\sigma}_u}\right)^{\frac{3}{2}} \overline{\sigma}_w\right) && \Gamma_B = \cl{O}\left( \alpha_H \:\kappa^2_{CD}\: H^{\frac{3}{2}} \:\left(\frac{\overline{\sigma}_u}{\underline{\sigma}_u}\right)^{\frac{3}{2}} \overline{\sigma}_w^{\frac{1}{2}}\right), \end{align} where $\beta:= \max\limits_{(A,B)\in\cl{S}}\|B\|_2$ and $\kappa_{CD}$ stands for $$\kappa_{CD} = \frac{\max\{\sigma_{max}(C),\sigma_{max}(D)\}}{\min\{\sigma_{min}(C),\sigma_{min}(D)\}}$$ \end{theorem}
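The pseudoinverse bound proved above rests on the inequality $\sigma_{-1}(\bm{L}\Pi)\geq\sigma_{min}(\bm{L})$ for a full-column-rank $\bm{L}$ and an orthogonal projection $\Pi$. As a numerical sanity check, the following sketch verifies it on random matrices standing in for the actual system data (all dimensions and the random seed are arbitrary choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 12, 8, 5                    # L is m x n with full column rank; projection rank r

L = rng.standard_normal((m, n))       # full column rank with probability one
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))
Pi = Q @ Q.T                          # orthogonal projection of rank r

F = L @ Pi                            # analogue of F = L * Pi_{N(P_H)} in the lemma
s = np.linalg.svd(F, compute_uv=False)
sigma_last = s[r - 1]                 # r-th largest = smallest non-zero singular value
sigma_min_L = np.linalg.svd(L, compute_uv=False)[-1]

# Key step of the proof: sigma_{-1}(F) >= sigma_min(L) ...
assert sigma_last >= sigma_min_L - 1e-9
# ... which bounds the pseudoinverse norm, since ||F^+||_2 = 1 / sigma_{-1}(F)
assert np.linalg.norm(np.linalg.pinv(F), 2) <= 1 / sigma_min_L + 1e-9
```

The check exercises exactly the minimax argument of the proof: for any unit vector $x$ in the range of $\Pi$ one has $\|\bm{L}\Pi x\|=\|\bm{L}x\|\geq\sigma_{min}(\bm{L})$.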
\section{Introduction} Multiple access techniques allow a number of mobile users to share the same spectrum. FDMA, TDMA, CDMA and OFDMA are used in the first to fourth generation mobile communication systems, all in an orthogonal way. With the coming of 5G, non-orthogonal multiple access (NOMA) became popular, both in the downlink \cite{DownlinkNOMA} and the uplink \cite{UplinkNOMA}. The capacity region of the multiple access channel (MAC), or uplink channel, is well known \cite{Andrea,DavidTse}. The two-user MAC model is described as \begin{equation} y=x_1+x_2+n, \end{equation} where $x_1$ and $x_2$ are the signals of users one and two, with powers $P_1$ and $P_2$, $n$ is the white noise with power $N$, and $y$ is the received signal. \begin{figure}[!ht] \begin{center} \includegraphics[width=3in]{CapacityAreaMAC.png} \caption{Two-User MAC Capacity Region \cite{Andrea}.} \label{Fig-PTx2} \end{center} \end{figure} Let $R_1$ and $R_2$ be the achievable data rates of users one and two. The capacity region of this model is given by \begin{equation} R_1\leqslant \log(1+\frac{P_1}{N})=C_1, \end{equation} \begin{equation} R_2\leqslant \log(1+\frac{P_2}{N})=C_2, \end{equation} and \begin{equation} R_1+R_2\leqslant \log(1+\frac{P_1+P_2}{N}). \end{equation} This region is illustrated in Fig. \ref{Fig-PTx2} by the solid line. Notice that $x_1$ and $x_2$ share the same resource; such a way of communication is called superposition coding. The decoding of superposition coding relies on a technique called successive interference cancellation (SIC). Let us take the rate point $(C_1, C_2^*)$ in Fig. \ref{Fig-PTx2} as an example. In the encoding phase, we can put $x_1$ with power $P_1$ on the channel first, yielding a rate $C_1$, and then put $x_2$ with power $P_2$ on the channel, treating $x_1$ as noise, yielding a rate $C_2^*$, \begin{equation} C_2^*= \log(1+\frac{P_2}{P_1+N}).
\end{equation} In the decoding phase, $x_2$ is decoded first, then reconstructed and eliminated from $y$, so that $x_1$ can be decoded successfully without interference from $x_2$. If the two users are time divided (TD), suppose user one is allocated a fraction $\alpha\in [0,1]$ of the whole time and the rest of the time is allocated to user two; then the sum rate satisfies \begin{equation} R_1+R_2 \leqslant \alpha C_1+ (1-\alpha)C_2. \end{equation} If the two users are frequency divided (FD) and $B$ is the total bandwidth, with user one allocated $\alpha B$ and $(1-\alpha)B$ assigned to user two, the capacity region is \begin{equation} R_1+R_2 \leqslant \alpha \log (1+\frac{P_1}{\alpha N})+ (1-\alpha)\log(1+\frac{P_2}{(1-\alpha)N}), \end{equation} which is illustrated by the dashed curve in Fig. \ref{Fig-PTx2}. Notice that there is one point at which FD reaches the same sum rate as superposition coding. At this point, the power of each user is proportional to the bandwidth allocated. \section{Discussions} Based on the foregoing, it has been widely believed that superposition coding is the way to achieve maximum capacity, dominating orthogonal schemes such as TD or FD. However, this superiority exists only under the constraints stated above, i.e., that the powers of the two users are fixed at $P_1$ and $P_2$. This constraint is not reasonable in practical cases, since a mobile station usually does not transmit at full power and can adjust its transmit power according to its position and data rate. Then, if we relax the constraint of a fixed power for each user to a fixed sum power for the two users, the capacity region of superposition coding becomes \begin{equation} R_1+R_2\leqslant \log(1+\frac{P_1+P_2}{N}). \end{equation} For the TD case, in any time slot one user uses the power $P_1+P_2$ and the other keeps silent, so the sum power is $P_1+P_2$.
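Both sum-rate claims in this note can be checked numerically: with fixed per-user powers, FD matches the superposition-coding sum rate exactly when bandwidth is proportional to power, and under a sum-power constraint the proportional split matches it for every $\alpha$. The sketch below uses illustrative values for $P_1$, $P_2$ and $N$, and base-2 logarithms; none of these numbers come from the text:

```python
import math

P1, P2, N = 2.0, 6.0, 1.0           # illustrative powers and noise, arbitrary units
P = P1 + P2
sic_sum = math.log2(1 + P / N)      # superposition-coding (SIC) sum rate

def fd_sum(alpha, Pa, Pb):
    """FD sum rate: user one gets bandwidth fraction alpha and power Pa."""
    return (alpha * math.log2(1 + Pa / (alpha * N))
            + (1 - alpha) * math.log2(1 + Pb / ((1 - alpha) * N)))

# Fixed per-user powers: FD meets the SIC sum rate only at alpha = P1/(P1+P2)
alpha_star = P1 / P
assert abs(fd_sum(alpha_star, P1, P2) - sic_sum) < 1e-12
assert fd_sum(0.5, P1, P2) < sic_sum        # a generic split falls strictly short

# Sum-power constraint: power proportional to bandwidth matches SIC for every alpha
for alpha in (0.1, 0.3, 0.5, 0.9):
    assert abs(fd_sum(alpha, alpha * P, (1 - alpha) * P) - sic_sum) < 1e-12
```

The second check works because the signal-to-noise ratio $\alpha(P_1+P_2)/(\alpha N)=(P_1+P_2)/N$ is independent of $\alpha$, which is precisely the observation behind the conclusion of this note.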
For the FD case, if user one is allocated $\alpha B$ with power $\alpha(P_1+P_2)$ and user two is allocated $(1-\alpha)B$ with power $(1-\alpha)(P_1+P_2)$, the sum power constraint is also satisfied. It can be easily verified that TD and FD have the same capacity region as superposition coding in the MAC. \section{Conclusion} Under the constraint of sum power, TD, FD and superposition coding have the same capacity region in the MAC. So NOMA is not an option in the 5G uplink for capacity reasons. \bibliographystyle{IEEEtran}
\section{Introduction} The concept of convergence of a sequence of real numbers was generalised to statistical convergence independently by Fast \cite{f} and Steinhaus \cite{q}. A lot of developments were subsequently carried out in this area by many authors. The concept of statistical convergence of sequences was extended to $I$-convergence by Kostyrko et al. \cite{h,t**} using the structure of an ideal $I$ of subsets of the set of natural numbers. Another type of convergence, closely related to the idea of $I$-convergence, is the idea of $I^*$-convergence given by Kostyrko et al. \cite{4th4}. It is seen in \cite{h} that these notions are equivalent if and only if the ideal satisfies the property (AP). Several works have been done in recent years on $I$-convergence (see \cite{4,3,5,6,i,j}).\\ The idea of rough convergence in a finite dimensional space was introduced by Phu \cite{R2} in 2001. In 2014, D\"undar et al. \cite{R7} introduced the notion of rough $I$-convergence using the concepts of $I$-convergence and rough convergence. For two arbitrary ideals $I$ and $K$ on a set $S$, the idea of $I^K$-convergence in topological spaces was given by M. Ma\v{c}aj and M. Sleziak in \cite{4th2} as a generalization of the notion of $I^*$-convergence. In their paper they modified the condition (AP) and showed that when the resulting condition, termed AP($I$, $K$), holds, $I$-convergence implies $I^K$-convergence; the converse of this result also holds for first countable spaces which are not finitely generated (i.e., not Alexandroff spaces). Indeed, they worked with functions instead of sequences. One of the reasons is that using functions sometimes helps to simplify notation.\\ In our work we have introduced the notions of rough $I^*$-convergence and rough $I^K$-convergence in a normed linear space. Rough $I^K$-convergence is a generalization of rough $I^*$-convergence.
Here we have studied the ideas of rough $I^*$-convergence and rough $I^K$-convergence in terms of sequences instead of functions. We then intend to find the relation between rough $I$-convergence and rough $I^*$-convergence. We also find the relation of rough $I$-convergence with rough $I^K$-convergence, and we observe that the condition AP($I$,$K$) is a necessary and sufficient condition for the rough $I$-limit set to be a subset of the rough $I^K$-limit set. We also verify whether some properties of $I^K$-convergence from \cite{4th2} hold for rough $I^K$-convergence as well. \\ We now recall some definitions and notions which will be needed in the sequel. \section{Preliminaries} Throughout the paper, $\mathbb{N}$ denotes the set of all natural numbers and $\mathbb{R}$ the set of all real numbers, unless otherwise stated. \begin{defn}\cite{f} Let $K$ be a subset of the set of natural numbers $\mathbb{N}$ and denote $K_i =\{k\in K : k\leq i\}$. Then the natural density of $K$ is given by $d(K)=\displaystyle{\lim_{i\rightarrow \infty}}\frac{|K_i|}{i}$, where $|K_i|$ denotes the number of elements in $K_i$. \end{defn} \begin{defn}\cite{f} A sequence $\{x_n\}_{n\in\mathbb{N}}$ of real numbers is said to be statistically convergent to $x$ if for any $\varepsilon >0$, $d(A(\varepsilon))=0$, where $A(\varepsilon)=\{n\in \mathbb{N}: |x_n - x|\geq \varepsilon\}$. \end{defn} Let $I$ be a collection of subsets of a set $S$. Then $I$ is called an ideal on $S$ if $(i)$ $A, B\in I$ $\Rightarrow A\cup B\in I$ and $(ii)$ $A\in I$ and $B\subset A$ $\Rightarrow B\in I$ \cite{t}.\\ An ideal $I$ on $S$ is called admissible if it contains all singletons, that is, $\{s\}\in I$ for each $s\in S$. $I$ is called nontrivial if $S\notin I$ \cite{t}. From the definition it is noted that $\phi\in I$.\\ If $S=\mathbb{N}$, the set of all positive integers, then $I$ is called an ideal on $\mathbb{N}$. We will denote by Fin the ideal of all finite subsets of a given set $S$.
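The notion of natural density, and the classical example of a statistically convergent but non-convergent sequence (the indicator of the perfect squares), can be illustrated with a small numerical experiment. The cutoff $10^5$ and the thresholds below are illustrative choices, not values from the text:

```python
import math

def density_upto(pred, i):
    """Empirical version of |K_i| / i for K = {k : pred(k)}."""
    return sum(1 for k in range(1, i + 1) if pred(k)) / i

is_square = lambda k: math.isqrt(k) ** 2 == k

# d({perfect squares}) = 0, since |K_i|/i = floor(sqrt(i))/i -> 0
assert density_upto(is_square, 10**5) < 0.005
# d({even numbers}) = 1/2
assert abs(density_upto(lambda k: k % 2 == 0, 10**5) - 0.5) < 1e-9

# x_n = 1 on the squares and 0 elsewhere is statistically convergent to 0
# (the set {n : |x_n - 0| >= eps} is the set of squares, which has density 0),
# although x_n = 1 for infinitely many n, so it does not converge classically.
eps = 0.5
x = lambda n: 1.0 if is_square(n) else 0.0
assert density_upto(lambda n: abs(x(n) - 0.0) >= eps, 10**5) < 0.005
```

The last assertion is exactly the set $A(\varepsilon)$ of the definition above, evaluated empirically up to the chosen cutoff.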
\begin{defn}\cite{4th3} Let $S\neq \phi$. A non-empty class $F$ of subsets of $S$ is called a filter in $S$ provided: $(i)$ $\phi\notin F$, $(ii)$ $A, B\in F\Rightarrow A\cap B\in F$, $(iii)$ $A\in F, A\subset B \Rightarrow B\in F$. \end{defn} \begin{lem}\cite{h} If $I$ is a non-trivial ideal on $\mathbb{N}$, then the class $F(I)=\{M\subset \mathbb{N}: \text{there exists } A\in I\:\: \text{such that }\: M=\mathbb{N}\setminus A\}$ is a filter on $\mathbb{N}$, called the filter associated with $I$. \end{lem} \begin{defn}\cite{h} An admissible ideal $I\subset 2^\mathbb{N}$ is said to satisfy the condition (AP) if for every countable family of mutually disjoint sets $\{A_1, A_2,\cdots\}$ belonging to $I$ there exists a countable family of sets $\{B_1, B_2,\cdots\}$ such that the symmetric difference $A_j\Delta B_j$ is a finite set for each $j\in\mathbb{N}$ and $B=\displaystyle{\bigcup_{j=1} ^ \infty} B_j \in I$. Several examples of ideals satisfying (AP) can be seen in \cite{h}. \end{defn} \begin{defn}\cite{h,t**} Let $(X, ||\cdot||)$ be a normed linear space and $I\subset 2^\mathbb{N}$ be a non-trivial ideal. A sequence $\{x_n\}_{n\in\mathbb{N}}$ of elements of $X$ is said to be $I$-convergent to $x\in X$ if for each $\varepsilon >0$ the set $A(\varepsilon)=\{n\in\mathbb{N}: ||x_n - x||\geq \varepsilon\}$ belongs to $I$. The element $x$ is then called the $I$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$. \end{defn} It should be noted here that if $I$ is an admissible ideal, then usual convergence in $X$ implies $I$-convergence in $X$. \begin{exmp}\cite{h} Let $I_d$ denote the class of all $A\subset \mathbb{N}$ with $d(A)=0$. Then $I_d$ is a non-trivial admissible ideal and $I_d$-convergence coincides with statistical convergence. \end{exmp} \begin{defn}\cite{4th4,h} Let $(X, ||\cdot||)$ be a normed linear space and $I\subset 2^\mathbb{N}$ be a non-trivial ideal.
A sequence $\{x_n\}_{n\in\mathbb{N}}$ in $X$ is said to be $I^*$-convergent to $x$ if there exists a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}$ in $F(I)$ such that the subsequence $\{x_{m_k}\}_{k\in \mathbb{N}}$ is convergent to $x$, i.e., $\displaystyle{\lim_{k\rightarrow \infty}} ||x - x_{m_k}|| =0$. \end{defn} It is seen in \cite{h} that $I^*$-convergence implies $I$-convergence. If an admissible ideal $I$ has the property (AP), then for a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $X$, $I$-convergence implies $I^*$-convergence. \begin{defn}\label{correction}\cite{R2} Let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence in a normed linear space $(X, ||\cdot||)$ and $r$ be a non-negative real number. Then $\{x_n\}_{n\in\mathbb{N}}$ is said to be rough convergent of roughness degree $r$ to $x$, or simply $r$-convergent to $x$, denoted by $x_n \xrightarrow{r} x$, if for all $\varepsilon >0$ there exists $N(\varepsilon)\in\mathbb{N}$ such that $n\geq N(\varepsilon)$ implies $|| x_n - x||< r +\varepsilon$, and $x$ is called a rough limit of $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$. \end{defn} For $r=0$, Definition \ref{correction} reduces to the definition of classical convergence of sequences. Here $x$ is called an $r$-limit point of $\{x_n\}_{n\in\mathbb{N}}$, which is in general no longer unique (for $r>0$). So we have to consider the so-called $r$-limit set (or briefly, $r$-limit) of $\{x_n\}_{n\in\mathbb{N}}$ defined by $LIM^r x_n :=\{ x\in X : x_n \xrightarrow{r} x\}$. A sequence $\{x_n\}_{n\in\mathbb{N}}$ is said to be $r$-convergent if $LIM^r x_n \neq \phi$. In this case, $r$ is called a rough convergence degree of $\{x_n\}_{n\in\mathbb{N}}$. \begin{prop}\cite{R2} A sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $X$ is bounded if and only if there exists an $r\geq 0$ such that $LIM^r x_n\neq \phi$.
For all $r>0$, a bounded sequence $\{x_n\}_{n\in\mathbb{N}}$ always contains a subsequence $\{x_{m_k}\}_{k\in\mathbb{N}}$ with $LIM^r x_{m_k}\neq \phi$. \end{prop} \begin{prop}\cite{R2} If $\{x'_n\}_{n\in\mathbb{N}}$ is a subsequence of $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $(X, ||\cdot||)$, then $LIM^r x_n \subset LIM^r x'_n$. \end{prop} \begin{prop}\cite{R2} For all $r\geq 0$, the $r$-limit set $LIM^r x_n$ of an arbitrary sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $(X, ||\cdot||)$ is a closed set. \end{prop} \begin{prop}\cite{R2} For an arbitrary sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $(X, ||\cdot||)$ the $r$-limit set $LIM^r x_n$ is convex. \end{prop} \begin{defn}\cite{R7} A sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $(X, ||\cdot||)$ is said to be $I$-bounded if there exists a positive real number $M$ such that the set $\{n\in\mathbb{N}: ||x_n||\geq M\}\in I$. \end{defn} \begin{defn}\cite{R7} Let $I$ be an admissible ideal and $r$ be a non-negative real number. A sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $(X, ||\cdot||)$ is said to be rough $I$-convergent of roughness degree $r$ to $x$, denoted by $x_n \xrightarrow { r-I} x$, provided that $\{n\in\mathbb{N}: ||x_n - x||\geq r +\varepsilon\}\in I$ for every $\varepsilon >0$, and $x$ is called a rough $I$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$. \end{defn} \begin{rem} If $I$ is an admissible ideal, then the usual rough convergence implies rough $I$-convergence. \end{rem} \begin{note} If we take $r=0$, then we obtain the definition of ordinary $I$-convergence. In general, the rough $I$-limit of a sequence may not be unique for roughness degree $r>0$. So we have to consider the so-called rough $I$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$, which is defined by $I-LIM^r x_n :=\{x\in X: x_n\xrightarrow{r -I} x\}$.
A sequence $\{x_n\}_{n\in\mathbb{N}}$ is said to be rough $I$-convergent if $I-LIM^r x_n \neq \phi$. \end{note} \begin{thm}\cite{R7} Let $I\subset 2^\mathbb{N}$ be an admissible ideal. A sequence $\{x_n\}_{n\in\mathbb{N}}$ in $(X,||\cdot||)$ is $I$-bounded if and only if there exists a non-negative real number $r$ such that $I-LIM^r x_n\neq \phi$. \end{thm} \begin{thm}\cite{R7} Let $I\subset 2^\mathbb{N}$ be an admissible ideal. If $\{x_{m_k}\}_{k\in\mathbb{N}}$ is a subsequence of $\{x_n\}_{n\in\mathbb{N}}$ in $(X, ||\cdot||)$, then $I-LIM^r x_n \subset I-LIM^r x_{m_k}$. \end{thm} \begin{thm}\cite{R7} Let $I\subset 2^\mathbb{N}$ be an admissible ideal. The rough $I$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ in $(X, ||\cdot||)$ is closed. \end{thm} \begin{thm}\cite{R7} Let $I\subset 2^\mathbb{N}$ be an admissible ideal. The rough $I$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ in $(X, ||\cdot||)$ is convex. \end{thm} We now give some basic ideas on $I^K$-convergence in a topological space, as studied by M. Ma\v{c}aj and M. Sleziak \cite{4th2}. \begin{defn}\cite{4th2} Let $I$ be an ideal on a set $S$ and $X$ be a topological space. A function $f: S\mapsto X$ is said to be $I$-convergent to $x$ if $f^{-1} (U)=\{s\in S: f(s)\in U\}\in F(I)$ holds for every neighbourhood $U$ of the point $x$, and we write $I-lim f=x$. \end{defn} If $S=\mathbb{N}$, then we obtain the usual definition of $I$-convergence of sequences. In this case the notation $I-lim x_n=x$ is used. \begin{defn}\cite{4th2} Let $I$ be an ideal on a set $S$ and let $f:S\mapsto X$ be a function to a topological space $X$. The function $f$ is called $I^*$-convergent to the point $x$ if there exists a set $M\in F(I)$ such that the function $g:S\mapsto X$ defined by $g(s)=\begin{cases} f(s), \:\text{if } s\in M\\ x, \:\text{if } s\notin M \end{cases}$ is Fin-convergent to $x$. If $f$ is $I^*$-convergent to $x$, then we write $I^*-lim f =x$.
\end{defn} The usual notion of $I^*$-convergence of sequences is the special case where $S=\mathbb{N}$. \begin{defn}\label{eqidef}\cite{4th2} Let $K$ and $I$ be ideals on a set $S$ and $X$ be a topological space. The function $f:S\mapsto X$ is said to be $I^K$-convergent to $x\in X$ if there exists a set $M\in F(I)$ such that the function $g:S \mapsto X$ given by $g(s)=\begin{cases} f(s), \:\text{if } s\in M\\ x, \:\text{if } s\notin M \end{cases}$ is $K$-convergent to $x$. If $f$ is $I^K$-convergent to $x$, then we write $I^K-lim f =x$. \end{defn} When $S=\mathbb{N}$, we speak about $I^K$-convergence of sequences. \begin{rem} The definition of $I^K$-convergence may also be stated, following \cite{4th4}, as follows: there exists $M\in F(I)$ such that the function $f|_M$ is $K|M$-convergent to $x$, where $K|M=\{A\cap M: A\in K\}$ is the trace of $K$ on $M$. The two definitions are equivalent, but the definition given in \ref{eqidef} is somewhat simpler \cite{4th2}. \end{rem} \begin{lem}\cite{4th2} If $I$ and $K$ are ideals on a set $S$ and $f: S\mapsto X$ is a function such that $K-lim f=x$, then $I^K-lim f=x$. \end{lem} \begin{defn}\cite{4th2} Let $K$ be an ideal on a set $S$. We write $A\subset_K B$ whenever $A\setminus B\in K$. If $A\subset_K B$ and $B\subset_K A$, then we write $A \sim_K B$. \end{defn} Clearly $A\sim_K B \Leftrightarrow A\Delta B\in K$.\\ Now we recall a lemma from \cite{4th2} where several equivalent formulations of a condition on the ideals $I$ and $K$ are described. \begin{lem}\label{apk}\cite{4th2} Let $I$ and $K$ be ideals on the same set $S$.
Then the following conditions are equivalent:\\ $(i)$ For every sequence $\{A_n\}_{n\in\mathbb{N}}$ of sets from $I$ there is $A\in I$ such that $A_n\sim _K A$ for all $n$.\\ $(ii)$ Any sequence $\{F_n\}_{n\in\mathbb{N}}$ of sets from $F(I)$ has a $K$-pseudo-intersection in $F(I)$.\\ $(iii)$ For every sequence $\{A_n\}_{n\in\mathbb{N}}$ of sets belonging to $I$ there exists a sequence $\{B_n\}_{n\in\mathbb{N}}$ of sets from $I$ such that $A_j \sim_K B_j$ for $j\in \mathbb{N}$ and $B=\bigcup _{j\in\mathbb{N}} B_j \in I$.\\ $(iv)$ For every sequence of mutually disjoint sets $\{A_n\}_{n\in\mathbb{N}}$ belonging to $I$ there exists a sequence $\{B_n\}_{n\in\mathbb{N}}$ of sets belonging to $I$ such that $A_j\sim_K B_j$ for $j\in\mathbb{N}$ and $B=\bigcup _{j\in\mathbb{N}} B_j \in I$.\\ $(v)$ For every non-decreasing sequence $A_1\subset A_2 \subset \cdots \subset A_n\subset \cdots$ of sets from $I$ there exists a sequence $\{B_n\}_{n\in\mathbb{N}}$ of sets belonging to $I$ such that $A_j \sim_K B_j$ for $j\in\mathbb{N}$ and $B=\bigcup_{j\in\mathbb{N}} B_j\in I$. \end{lem} \begin{defn}\cite{4th2} Let $I$ and $K$ be two ideals on a same set $S$. We say that $I$ has the additive property with respect to $K$, more briefly that AP$(I, K)$ holds, if any one of the equivalent conditions of Lemma \ref{apk} holds. \end{defn} The condition (AP) from \cite{h} is equivalent to the condition AP($I$, Fin). Now we recall the following two theorems from \cite{4th2}. \begin{thm}\cite{4th2} Let $I$ and $K$ be ideals on a set $S$ and $X$ be a first countable topological space. If $I$ has the additive property with respect to $K$, then for any function $f: S \mapsto X$, $I$-convergence implies $I^K$-convergence; in other words, if the condition AP$(I, K)$ holds, then $I$-convergence implies $I^K$-convergence.
\end{thm} Let us recall that a topological space $X$ is called a finitely generated space, or Alexandroff space, if the intersection of any number of open sets of $X$ is again an open set (see \cite{4th1}). Equivalently, $X$ is finitely generated if and only if each point $x$ has a smallest neighbourhood. \begin{thm}\cite{4th2} Let $I$, $K$ be ideals on a set $S$ and $X$ be a first countable topological space which is not finitely generated. If $I$-convergence implies $I^K$-convergence for every function $f: S\mapsto X$, then the ideal $I$ has the additive property with respect to $K$, or briefly, the condition AP$(I,K)$ holds. \end{thm} \section{Main results} Throughout our discussion $(X, ||\cdot||)$, or simply $X$, will always denote a normed linear space over the field $\mathbb{C}$ or $\mathbb{R}$, and $I$, $K$ are always assumed to be non-trivial admissible ideals on $\mathbb{N}$ unless otherwise stated. \begin{defn}\label{def1} Let $r$ be a non-negative real number and $I$ be a non-trivial admissible ideal on $\mathbb{N}$. Then a sequence $\{x_n\}_{n\in \mathbb{N}}$ in $(X, || \cdot || )$ is said to be rough $I^*$-convergent of roughness degree $r$ to $x$ if there exists a set $M = \{m_1 < m_2< m_3 < \cdots < m_k < \cdots\}$ in $F(I)$ such that the subsequence $\{x_{m_k}\}_{k\in \mathbb{N}}$ is rough convergent of roughness degree $r$ to $x$. Thus for any $\varepsilon >0$ there exists an $N\in \mathbb{N}$ such that $|| x_{m_k} - x || < r + \varepsilon $ for all $k \geq N$. We denote this by $x_n \xrightarrow{r-I^*} x$. \end{defn} Here $x$ is called a rough $I^*$-limit of the sequence $\{x_n\}_{n\in \mathbb{N}}$ of roughness degree $r$. For $r=0$ we recover the definition of $I^*$-convergence of sequences in normed linear spaces. Obviously, the rough $I^*$-limit of a sequence in a normed linear space is in general not unique.
Therefore we have to consider the rough $I^*$-limit set of the sequence $\{x_n\}_{n\in\mathbb{N}}$ defined as follows: $I^*-LIM^r x_n = \{x\in X :x_n \xrightarrow{r - I^*} x\}$. \begin{defn}\label{def2} Let $r$ be a non-negative real number. Also let $I$ and $K$ be two non-trivial admissible ideals on $\mathbb{N}$. Then a sequence $\{x_n\}_{n\in \mathbb{N}}$ in a normed linear space $(X, || \cdot || )$ is said to be rough $I^K$-convergent of roughness degree $r$ to $x$ if there exists a set $M=\{m_1< m_2<\cdots <m_k<\cdots\}$ in $F(I)$ such that the subsequence $\{x_{m_k}\}_{k\in \mathbb{N}}$ is rough $K|M$-convergent of roughness degree $r$ to $x$, where $K|M=\{A\cap M: A\in K\}$ is the trace of $K$ on $M$. That is, for any $\varepsilon >0$, the set $\{k\in \mathbb{N} : || x_{m_k} - x || \geq r +\varepsilon\}\in K|M$. We denote this by $x_n \xrightarrow{r-I^K} x$. \end{defn} Here $x$ is called a rough $I^K$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$. For $r=0$ we recover the definition of $I^K$-convergence of sequences in normed linear spaces. It should be noted that for $M\in F(I)$, the trace $K|M=\{A\cap M: A\in K\}$ of $K$ on $M$ also forms an ideal. Clearly the rough $I^K$-limit of a sequence in a normed linear space is not unique. Therefore we will consider the rough $I^K$-limit set of the sequence $\{x_n\}_{n\in \mathbb{N}}$ defined by $I^K-LIM^r x_n =\{x\in X : x_n \xrightarrow{r-I^K} x\}$. If the ideal $K$ is the class of all finite subsets of $\mathbb{N}$, then Definition \ref{def1} and Definition \ref{def2} coincide. Obviously, if $x$ is a rough $I^*$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$, then $x$ is also a rough $I^K$-limit of $\{x_n\}_{n\in\mathbb{N}}$. But it may happen that $x$ is a rough $I^K$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space without being a rough $I^*$-limit of the sequence $\{x_n\}$, as is seen from the next example.
So, in general, for a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space and for any non-negative real number $r$, we have $I^*-LIM^r x_n \subset I^K-LIM^r x_n$. \begin{exmp} Let us consider a decomposition of $\mathbb{N}$ given by $\mathbb{N}= A\cup\displaystyle{\bigcup_{i=1} ^\infty} A_i$, where $A=\{1,3,5,\cdots\}$ and $A_i= \{2^n (2i -1): n\in \mathbb{N}\}$. Then the sets $A_i$ are pairwise disjoint, and each $A_i$ is also disjoint from $A$. Let $I$ be the collection of all those subsets of $\mathbb{N}$ which intersect only a finite number of the $A_i$'s (such sets may intersect $A$ arbitrarily). Then $I$ is a non-trivial admissible ideal on $\mathbb{N}$. Let $\mathbb{N}= \displaystyle{\bigcup_{j=1} ^ \infty} D_j$ be another decomposition of $\mathbb{N}$ such that $D_j=\{2^{j-1}(2s -1): s=1, 2, \cdots\}$. Then each $D_j$ is infinite and $D_j \cap D_k =\phi$ for $j\neq k$. Let $K$ be the ideal of all those subsets of $\mathbb{N}$ which intersect only a finite number of the $D_j$'s. Then $K$ is a non-trivial admissible ideal on $\mathbb{N}$. Let us consider the sequence in the real number space with the usual norm defined by $x_n =\frac{1}{j}$ if $n\in D_j$. Let us take $M=\mathbb{N}\in F(I)$. Then $K|M=K$. Now let $r >0$ be arbitrary. By the Archimedean property, for any arbitrary $\varepsilon >0$ there exists an $l\in \mathbb{N}$ such that $\varepsilon>\frac{1}{l}$. So $\{k\in \mathbb{N} : | x_k - (-r)|=|x_k + r|\geq r + \varepsilon\}\subset D_1\cup D_2\cup\cdots\cup D_l\in K=K|M$. Therefore $-r\in I^K-LIM^r x_n$.\\ If possible, let $- r\in I^*-LIM^r x_n$. Then there exists a set $M=\{m_1<m_2<\cdots <m_k<\cdots\}\in F(I)$ for which the subsequence $\{x_n\}_{n\in M}$ of the sequence $\{x_n\}_{n\in\mathbb{N}}$ is rough convergent to $-r$ of roughness degree $r$. Now, as $M\in F(I)$, we have $\mathbb{N}\setminus M = H \:\text{(say)}\:\in I$.
Therefore there exists a $p\in \mathbb{N}$ such that $H\subset A\cup A_1 \cup A_2 \cup\cdots\cup A_p$, and so $A_{k}\subset M$ for all $k\geq {p+1}$. Now, as each of the sets $A_k$ contains an element from each of the sets $D_i$ for $i\geq 2$, there exists an $s\in\mathbb{N}$ such that $x_{m_k}=\frac{1}{s}$ for infinitely many $k$'s, namely when $m_k\in D_s$. As $- r \in I^*-LIM^r x_n$, for $\varepsilon=\frac{1}{s+1}$ there exists an $N\in\mathbb{N}$ such that $|x_{m_k} - (- r)|=|x_{m_k} + r|< r +\varepsilon$ for all $k\geq N \rightarrow (i)$. Since $x_{m_k}=\frac{1}{s}$ for infinitely many $k$'s, the condition in $(i)$ does not hold. Thus we arrive at a contradiction. Hence $- r\notin I^*-LIM^r x_n$. \end{exmp} \begin{thm}\label{th1} Let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence in a normed linear space $(X, ||\cdot||)$ and $r$ be a non-negative real number. Then, for a non-trivial admissible ideal $I$, if $\{x_n\}_{n\in \mathbb{N}}$ is rough $I^*$-convergent of roughness degree $r$ to $x$, then it is also rough $I$-convergent of roughness degree $r$ to $x$. \end{thm} \begin{proof} Let $\{x_n\}_{n\in\mathbb{N}}$ be rough $I^*$-convergent of roughness degree $r$ to $x$. Then there exists a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I)$ such that $\{x_{m_k}\}_{k\in\mathbb{N}}$ is rough convergent of roughness degree $r$ to $x$. Thus for any $\varepsilon >0$ there exists $N\in \mathbb{N}$ such that $||x_{m_k} - x||< r +\varepsilon $ for all $ k\geq N$. So $\{k\in\mathbb{N} : ||x_k - x|| \geq r +\varepsilon \} \subset {(\mathbb{N}\setminus M)} \cup \{m_1, m_2,\cdots, m_{N - 1}\}\rightarrow(i)$. Now, since the right hand side of $(i)$ belongs to $I$, we have $\{k\in\mathbb{N} : ||x_k - x|| \geq r +\varepsilon \} \in I$. Therefore the sequence $\{x_n\}_{n\in\mathbb{N}}$ is rough $I$-convergent of roughness degree $r$ to $x$.
\end{proof} In view of Theorem \ref{th1}, it follows that the rough $I^*$-limit set of roughness degree $r$ is a subset of the rough $I$-limit set of the same roughness degree $r$. The converse of Theorem \ref{th1} is not necessarily true. That is, if a sequence $\{x_n\}_{n\in\mathbb{N}}$ is rough $I$-convergent of some roughness degree $r$ to $x$, then the sequence $\{x_n\}_{n\in\mathbb{N}}$ may not be rough $I^*$-convergent of the same roughness degree $r$ to $x$. This fact can be seen from the next example. \begin{exmp}\label{3.2} Let $\mathbb{N}=\displaystyle{\bigcup_ {j=1 }^ \infty} D_j$ be a decomposition of $\mathbb{N}$ such that $D_j =\{2^{(j -1)} (2s -1): s=1, 2, \cdots\}$. Then each $D_j$ is infinite and the $D_j$'s are pairwise disjoint. Let $I$ be the class of all those subsets of $\mathbb{N}$ which intersect only a finite number of the $D_j$'s. Then $I$ is an admissible ideal on $\mathbb{N}$. Let us define a sequence in the real number space with the usual norm by $x_n=\frac{1}{j^j}$ if $n\in D_j$. Let $r$ be an arbitrary non-negative real number. Let $\varepsilon >0$ be arbitrarily chosen; then there exists an $l\in \mathbb{N}$ such that $\varepsilon >\frac{1}{l^l}$. Then $[-r, r]\subset I-LIM^r x_n$, as $\{n\in\mathbb{N} : |x_n - x |\geq r + \varepsilon\}\subset D_1 \cup D_2 \cup \cdots\cup D_l\in I$ for any $x\in [-r, r]$.\\ If possible, suppose that the sequence defined above is rough $I^*$-convergent to $ - r$ of the same roughness degree $r$. Then there exists a set $M=\{m_1< m_2<\cdots< m_k <\cdots\}\in F(I)$ such that the subsequence $\{x_{m_k}\}_{k\in \mathbb{N}}$ is rough convergent to $-r$ of roughness degree $r$. Now, as $M\in F(I)$, we have $\mathbb{N}\setminus M= H\:(\text{say})\in I$. Hence there exists a $p\in \mathbb{N}$ such that $H\subset D_1 \cup D_2 \cup\cdots\cup D_p$, and so $D_{p + 1 } \subset M$. Therefore $x_{m_k}= \frac{1}{(p +1) ^ {p+1}}$ for $m_k\in D_{p+1}$.
Now for $\varepsilon =\frac{1}{(p+2)^{p+1}}$ and $m_k\in D_{p+1}$ we see that $|x_{m_k} + r|\geq r +\varepsilon$ for infinitely many $k$'s. Therefore the sequence $\{x_n\}_{n\in\mathbb{N}}$ is not rough $I^*$-convergent of roughness degree $r$ to $-r$, although $- r\in I-LIM^r x_n$. \end{exmp} Let $r$ be a non-negative real number. Then for a sequence $\{x_n\}_{n\in \mathbb{N}}$ in a normed linear space, a rough $I$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$ is also a rough $I^*$-limit of the same roughness degree $r$ provided the ideal $I$ satisfies the condition (AP). To prove this we need the following lemma. \begin{lem}\cite{m}\label{lem1} Let $\{A_n\}_{n\in\mathbb{N}}$ be a countable family of subsets of $\mathbb{N}$ such that each $A_n$ belongs to $F(I)$, the filter associated with an admissible ideal $I$ which has the property (AP). Then there exists a set $B\subset \mathbb{N}$ such that $B\in F(I)$ and the set $B\setminus A_n$ is finite for all $n\in \mathbb{N}$. \end{lem} \begin{thm}\label{th2} Let $I$ be an ideal which has the property (AP) and $\{x_n\}_{n\in \mathbb{N}}$ be a sequence in a normed linear space $(X, ||\cdot||)$. If $x$ is a rough $I$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$, then $x$ is also a rough $I^*$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$ of the same roughness degree $r$. \end{thm} \begin{proof} Let $I$ be an ideal on $\mathbb{N}$ which satisfies the condition (AP) and $\{x_n\}_{n\in\mathbb{N}}$ be a sequence in a normed linear space $(X, ||\cdot|| )$. Also suppose that $x$ is a rough $I$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$ for some $r\geq 0$. Then for any $\varepsilon >0$ the set $\{n\in \mathbb{N} : ||x_n - x||\geq r +\varepsilon\}\in I$. Let $l$ be an arbitrary positive real number, so that $\frac{l}{i} $ is also a positive real number for each $i\in \mathbb{N}$.
Define $A_i =\{n\in\mathbb{N} : || x_n - x || < r + \frac{l}{i}\}$ for each $i\in \mathbb{N}$. Then $A_i\in F(I)$ for each $i\in \mathbb{N}$. Also, by Lemma \ref{lem1} there exists a set $B\subset \mathbb{N}$ such that $B\in F(I)$ and $B\setminus A_i$ is finite for all $i\in \mathbb{N}$. Now for any arbitrary $\varepsilon >0$ there exists a $j\in \mathbb{N}$ such that $\varepsilon > \frac{l}{j}$. Since $B\setminus A_j$ is finite, there exists $k=k(j)\in \mathbb{N}$ such that $n\in B\cap A_j$ for all $n\in B$ with $n\geq k$. Now $|| x_n - x|| < r + \frac{l}{j}< r + \varepsilon$ for all $n\in B$ with $n\geq k$. Thus the subsequence $\{x_n\}_{n\in B}$ is rough convergent of roughness degree $r$ to $x$. Therefore $x$ is also a rough $I^*$-limit of roughness degree $r$. Hence the result follows. \end{proof} \begin{cor} Let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence in a normed linear space $(X, || \cdot||)$ and $r$ be a non-negative real number. Let $I$ be an ideal on $\mathbb{N}$ which satisfies the condition (AP). Then the rough $I$-limit set of roughness degree $r$ and the rough $I^*$-limit set of roughness degree $r$ of the sequence $\{x_n\}_{n\in\mathbb{N}}$ are equal. \end{cor} \begin{proof} In view of Theorem \ref{th1} and Theorem \ref{th2} the result follows. \end{proof} The rough $I$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space is a subset of the rough $I$-limit set of any subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$. But the rough $I^*$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space may not be a subset of the rough $I^*$-limit set of a subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$. This fact can be justified by the following example. \begin{exmp} Let $I$ be the ideal of all subsets of $\mathbb{N}$ whose natural density is zero.
Let us consider a sequence $\{x_n\}_{n\in\mathbb{N}}$ in the real number space with the usual norm as follows: $x_n= \begin{cases} -1 & n=k^2\\ \frac{1}{n} & n\neq k^2 \end{cases}$, where $k\in\mathbb{N}$. Now as the natural density of the set $A=\{n\in\mathbb{N}: n=k^2,\: k\in\mathbb{N}\}$ is zero, $A\in I$. So $\mathbb{N}\setminus A=M(\text{say})\in F(I)$. Put $M=\{m_1<m_2<\cdots<m_k<\cdots\}$. Then for any arbitrary $\varepsilon>0$ we see that $|x_{m_k} - 1| < 1 + \varepsilon$ holds for all $k\in\mathbb{N}$. Hence $1$ is a rough $I^*$-limit of roughness degree $r=1$ of the sequence $\{x_n\}_{n\in\mathbb{N}}$. Let $A$ be enumerated as $A=\{n_1<n_2<\cdots<n_k<\cdots\}$ and consider the subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$ of $\{x_n\}_{n\in\mathbb{N}}$. For any subsequence $\{x_{n_{k_m}}\}_{m\in\mathbb{N}}$ of $\{x_{n_k}\}_{k\in\mathbb{N}}$ and for $0<\varepsilon<1$, we have $|x_{n_{k_m}} - 1|> 1 +\varepsilon$ for all $m$. Hence for this choice of $\varepsilon$, there does not exist any $N(\varepsilon)\in \mathbb{N}$ for which $|x_{n_{k_m}} - 1|< 1 +\varepsilon $ holds for all $m\geq N(\varepsilon)$. Therefore there does not exist any $M'=\{m'_1<m'_2<\cdots< m'_k<\cdots\}\in F(I)$ for which $\{x_{n_{m'_k}}\}_{k\in \mathbb{N}}$ is rough convergent to $1$ of roughness degree $r=1$. So $1$ is not a rough $I^*$-limit of roughness degree $r=1$ of the subsequence considered above. \end{exmp} \begin{thm} If $I$ is an ideal which satisfies the condition (AP), then the rough $I^*$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$ is a subset of the rough $I^*$-limit set of any subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$ of the same roughness degree $r$. \end{thm} \begin{proof} Let $x$ be a rough $I^*$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$. As a rough $I^*$-limit is also a rough $I$-limit, $x$ is a rough $I$-limit of $\{x_n\}_{n\in\mathbb{N}}$.
Since the rough $I$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ is a subset of the rough $I$-limit set of a subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$, $x$ is a rough $I$-limit of the subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$. Now as $I$ satisfies the condition (AP), $x$ is also a rough $I^*$-limit of the subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$. \end{proof} \begin{thm}\label{th3} Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$ with $K\subset I$, and let $r$ be a non-negative real number. If a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $(X, ||\cdot||)$ is rough $I^K$-convergent to $x$ of roughness degree $r$, then $\{x_n\}_{n\in\mathbb{N}}$ is also rough $I$-convergent to $x$ of the same roughness degree $r$. \end{thm} \begin{proof} Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$ such that $K\subset I$, and let $r$ be a non-negative real number. Suppose that a sequence $\{x_n\}_{n\in\mathbb{N}}$ is rough $I^K$-convergent to $x$ of roughness degree $r$. Then there exists a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I)$ such that for any $\varepsilon >0$ the set $A(\varepsilon)=\{k\in \mathbb{N} : ||x_{m_k} - x || \geq r +\varepsilon\}\in K|M$. Suppose that $A(\varepsilon)=\{k\in\mathbb{N}: || x_{m_k} - x||\geq r +\varepsilon\}=K_1 \cap M$ for some $K_1\in K$. Now as $K$ is an ideal and $K_1\cap M\subset K_1$, we have $K_1\cap M\in K$. Again $\{n\in \mathbb{N} : ||x_n - x|| \geq r +\varepsilon\}\subset (K_1\cap M)\cup (\mathbb{N}\setminus M) $. Since $\mathbb{N}\setminus M \in I$ and $K\subset I$, it follows that $(K_1\cap M)\cup (\mathbb{N}\setminus M)\in I$. So $\{n\in \mathbb{N} : ||x_n - x|| \geq r +\varepsilon\}\in I$. Hence the result follows. \end{proof} The converse of Theorem \ref{th3} is also valid, i.e., if every rough $I^K$-limit $x$ of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$ is also a rough $I$-limit of the same roughness degree $r$, then $K\subset I$.
To prove this we need the following lemma. \begin{lem}\label{lemma 3.5} Let $I$ and $K$ be ideals on $\mathbb{N}$. Then a rough $K$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$ is also a rough $I^K$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of the same roughness degree $r$. \end{lem} \begin{proof} Let $I$ and $K$ be two ideals on $\mathbb{N}$ and $r$ be a non-negative real number. Let $x$ be a rough $K$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$, i.e., $x\in K-LIM^r x_n$. Then for any $\varepsilon >0$, $\{n\in\mathbb{N}: ||x_n - x||\geq r + \varepsilon\}\in K$. Now as $\phi \in I$, $\mathbb{N}\in F(I)$. Let $M=\{m_1<m_2<\cdots<m_k<\cdots\}=\mathbb{N}\in F(I)$; then $\{x_{m_k}\}=\{x_n\}$ and $K|M = K$. Therefore $\{k\in \mathbb{N}: ||x_{m_k} - x||\geq r +\varepsilon \}= \{n \in\mathbb{N} : ||x_n - x||\geq r +\varepsilon\}\in K=K|M$. So $x\in I^K-LIM^r x_n$. Hence the result follows. \end{proof} \begin{thm}\label{corproof} If every rough $I^K$-limit $x$ of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$ is also a rough $I$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of the same roughness degree $r$, then $K\subset I $. \end{thm} \begin{proof} Suppose that $K\not\subset I$ and let $r$ be a non-negative real number. Then there exists a set $A\in K\setminus I$. Let us choose $x, y\in X$ such that $||x||=1$ and $ y = (r + 2 )x$; then $|| x - y||= r+1\geq r + \varepsilon$ for $0< \varepsilon \leq 1$ and $||x - y|| < r +\varepsilon$ for $\varepsilon > 1$. Now define a sequence $\{x_n\}_{n\in\mathbb{N}}$ as follows: $x_n= \begin{cases} (1 + r)x , & n\in \mathbb{N}\setminus A\\ y, & n\in A \end{cases}$. Then $||x_n - x||=r$ for $n\in\mathbb{N}\setminus A$, so for any $\varepsilon >0$ the set $\{n\in\mathbb{N}: || x_n - x||\geq r +\varepsilon\}$ is either the set $A$ (when $0<\varepsilon\leq 1$) or $\phi$ (when $\varepsilon>1$). Since $K$ is an admissible ideal and $A\in K$, we have $\{n\in\mathbb{N}: || x_n - x||\geq r +\varepsilon\}\in K$.
Thus $x$ is a rough $K$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$. Now by Lemma \ref{lemma 3.5}, $x$ is a rough $I^K$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of roughness degree $r$. Since for $0<\varepsilon \leq 1$ we have $\{n\in\mathbb{N}: ||x_n - x||\geq r +\varepsilon\}= A$ and $A\notin I$, the set $\{n\in\mathbb{N}: ||x_n - x||\geq r +\varepsilon\}\notin I$, and hence $x$ is not a rough $I$-limit of roughness degree $r$. But by our assumption, $x$ is also a rough $I$-limit of $\{x_n\}_{n\in\mathbb{N}}$. Thus we arrive at a contradiction, and so $K\subset I$. \end{proof} \begin{cor} Let $I$ and $K$ be two ideals on $\mathbb{N}$ and $r$ be a non-negative real number. Then the rough $I^K$-limit set of roughness degree $r$ of every sequence $\{x_n\}_{n\in\mathbb{N}}$ is a subset of its rough $I$-limit set of the same roughness degree if and only if $K\subset I$. \end{cor} \begin{proof} In view of Theorem \ref{th3} and Theorem \ref{corproof} the result follows. \end{proof} In general, if $x$ is a rough $I$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space, it does not necessarily follow that $x$ is also a rough $I^K$-limit of $\{x_n\}_{n\in\mathbb{N}}$. The following is an example in support of this assertion. \begin{exmp} Let $I$ be the ideal as in Example \ref{3.2}. Also let $K$ be the ideal consisting of all subsets of $\mathbb{N}$ whose natural density is zero. Now let us define a sequence in the real number space with the usual norm by $x_n=\frac{1}{j}$ if $n\in D_j$. Let $r$ be an arbitrary non-negative real number. Let $\varepsilon >0$ be arbitrarily chosen; then there exists an $l\in \mathbb{N}$ such that $\varepsilon >\frac{1}{l}$. Clearly $[-r, r]\subset I-LIM^r x_n$, as $\{n\in\mathbb{N} : |x_n - x |\geq r + \varepsilon\}\subset D_1 \cup D_2 \cup \cdots \cup D_l\in I$ for any $x\in [-r, r]$.\\ If possible, let $-r$ be a rough $I^K$-limit of $\{x_n\}$ of roughness degree $r$.
Then there exists a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I)$ such that the subsequence $\{x_{m_k}\}$ is rough $K|M$-convergent of roughness degree $r$ to $-r$. Now as $\mathbb{N}\setminus M=H(\text{say})\in I$, there exists a $p\in \mathbb{N}$ such that $H\subset D_1\cup D_2 \cup\cdots \cup D_p$. Hence $D_k\subset M$ for all $k\geq p +1$. Let $\varepsilon =\frac{1}{p+1}$. Then $\{k\in\mathbb{N}: ||x_{m_k} + r||\geq r + \frac{1}{p+1}\}\supseteq\{k\in\mathbb{N}: m_k\in D_{p+1}\}$. As $D_{p+1}=\{2^{p}(2s -1): s=1,2,\cdots\}$ has natural density $\frac{1}{2^{p+1}}$, the natural density of the set $\{k\in\mathbb{N}: ||x_{m_k} + r||\geq r + \frac{1}{p+1}\}$ is not zero. Hence $\{k\in\mathbb{N}: ||x_{m_k} + r||\geq r + \frac{1}{p+1}\}\notin K|M$, since every set belonging to $K|M$ has natural density zero. Therefore $-r$ is not a rough $I^K$-limit of $\{x_n\}$ of roughness degree $r$. \end{exmp} A rough $I$-limit $x$ of a sequence $\{x_n\}_{n\in\mathbb{N}}$ is also a rough $I^K$-limit of $\{x_n\}_{n\in\mathbb{N}}$ if the ideal $I$ satisfies the condition (AP). Thus we have the following theorem. \begin{thm}\label{1} Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$ such that the ideal $I$ satisfies the condition (AP). Also let $r$ be a non-negative real number and $\{x_n\}_{n\in\mathbb{N}}$ be a sequence in a normed linear space $(X, ||\cdot||)$. Then $x\in I-LIM^r x_n$ implies $x\in I^K-LIM^r x_n$. \end{thm} \begin{proof} Let $I$ and $K$ be two ideals on $\mathbb{N}$ such that the ideal $I$ satisfies the condition (AP), and let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence such that $x\in I-LIM^r x_n$. Since $I$ satisfies the condition (AP), $x\in I^*-LIM^r x_n$ by Theorem \ref{th2}. Now as $I^*-LIM^r x_n \subset I^K-LIM^r x_n$, we get $x\in I^K-LIM^r x_n$. \end{proof} \begin{thm}\label{2} Let $I$ and $K$ be two ideals on $\mathbb{N}$.
If for any sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space $(X, ||\cdot||)$ the implication $ I- LIM^r x_n \subset I^K- LIM^r x_n$ holds, then the ideal $I$ has the additive property with respect to $K$, i.e., AP($I$, $K$) holds. \end{thm} \begin{proof} Let $\{A_n\}_{n\in \mathbb{N}}$ be a sequence of mutually disjoint sets belonging to $I$ and let $r>0$ be a real number. Define a sequence $\{x_n\}_{n\in\mathbb{N}}$ in the real number space with the usual norm as follows: $x_n=\begin{cases} r+\frac{1}{i},\:\:\:\:\: n\in A_i\\ 0,\: \:\:\:\:\:\: n\in \mathbb{N}\setminus \displaystyle{\cup_i} A_i \end{cases}$. Then for any $\varepsilon>0$ there exists $p\in\mathbb{N}$ such that $\{n\in\mathbb{N}: || x_n - 0|| \geq r +\varepsilon\}\subset A_1\cup A_2\cup\cdots \cup A_p\in I$. So $0\in I-LIM^r x_n$. Consequently, by our assumption, $0\in I^K-LIM^r x_n$. Thus there exists a set $M=\{m_1<m_2<\cdots< m_k<\cdots\}\in F(I)$ such that the subsequence $\{x_{m_k}\}_{k\in \mathbb{N}}$ is rough $K|M$-convergent to $0$ of roughness degree $r$. Now if $\displaystyle{\cup_i} A_i\in I$, then by taking $B_i = A_i$ for $i\in\mathbb{N}$ the result follows directly from $(iv)$ of Lemma \ref{apk}. So let $\displaystyle{\cup_i} A_i\notin I$. Since $M\in F(I)$, the set $M$ intersects infinitely many of the $A_i$'s. Now for arbitrary $\varepsilon>0$ the set $\{k\in \mathbb{N}: |x_{m_k} - 0|\geq r+\varepsilon\}\in K|M$. For each $i\in\mathbb{N}$, either $A_i\cap M\neq \phi$ or $A_i\cap M=\phi$. By the construction of the sequence and the fact that $0\in I^K-LIM^r x_n$, in both cases we have $A_i\cap M\in K|M$, and since $\varepsilon>0$ is arbitrary, $A_i\cap M\in K$ for each $i\in\mathbb{N}$. Now as $M\in F(I)$, $\mathbb{N}\setminus M=B(\text{say})\in I$. Let us put $B_i =A_i \cap B$ for each $i$. Then each $B_i$ belongs to $I$.
Also, as $\displaystyle{\bigcup_{i=1}^{\infty}}B_i= \displaystyle{\bigcup_{i=1}^{\infty}}(A_i \cap B)= B \cap \displaystyle{\bigcup_{i=1}^{\infty}} A_i \subset B$, we have $\displaystyle{\bigcup_{i=1}^{\infty}}B_i\in I$. Now as $B_i \subset A_i$, we get $A_i\setminus B_i = A_i\cap M$. Thus $A_i\setminus B_i\in K$. Therefore $A_i \sim_K B_i$ for $i\in \mathbb{N}$, since $A_i \sim_K B_i \Leftrightarrow A_i \Delta B_i\in K$ and $A_i \Delta B_i = A_i \setminus B_i$ in this case. Thus, by virtue of $(iv)$ of Lemma \ref{apk}, the result follows. \end{proof} In the next example we will see that, as in the case of the rough $I^*$-limit, the rough $I^K$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space may not be a subset of the rough $I^K$-limit set of a subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$. \begin{exmp} Let $I$ be the collection of all subsets of $\mathbb{N}$ whose natural density is zero. Then $I$ is a non-trivial admissible ideal on $\mathbb{N}$. Also let $\mathbb{N}= \displaystyle{\bigcup_{j=1} ^ \infty} D_j$ be a decomposition of $\mathbb{N}$ such that $D_j=\{2^{j-1}(2s -1): s=1, 2, \cdots\}$. Then each $D_j$ is infinite and $D_j \cap D_k =\phi$ for $j\neq k$. Let $K$ be the ideal consisting of all subsets of $\mathbb{N}$ which intersect only finitely many of the $D_j$'s. Let us consider the sequence in the real number space with the usual norm given by $x_n =\frac{1}{j}$ if $n\in D_j$. Now as $\phi\in I$, $\mathbb{N}\in F(I)$. Let us take $M=\mathbb{N}$ and enumerate $M$ as $M =\{m_1<m_2<m_3<\cdots<m_k<\cdots\}$. Then $K|M= K$. Also let $r >0$ be arbitrary. Now for an arbitrary $\varepsilon >0$ one can find an $l\in \mathbb{N}$ such that $\varepsilon >\frac{1}{l}$. Hence we have a $p\in\mathbb{N}$ such that $\{k\in\mathbb{N}: ||x_{m_k} + r||\geq r +\varepsilon\}\subset D_1 \cup D_2\cup \cdots \cup D_p\in K=K|M$. So $\{k\in\mathbb{N}: ||x_{m_k} + r||\geq r +\varepsilon\}\in K|M$. Therefore $-r$ is a rough $I^K$-limit of roughness degree $r$.
Again, let us consider the subsequence $\{x_{n_k}\}$ of the sequence $\{x_n\}$ with $x_{n_k}=1$ for all $k$, i.e., the subsequence indexed by $D_1$. Then for any $0<\varepsilon<1$ and for any $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I)$, $\{k\in\mathbb{N}: ||x_{n_{m_k}} + r||\geq r +\varepsilon\}=\mathbb{N}$. Since $\mathbb{N}\notin K|M$, we have $\{k\in\mathbb{N}: ||x_{n_{m_k}} + r||\geq r +\varepsilon\}\notin K|M$. Hence $-r$ is not a rough $I^K$-limit of the subsequence $\{x_{n_k}\}$. \end{exmp} \begin{thm}\label{newone} Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$ such that $K\subset I$ and AP($I$, $K$) holds. Then the rough $I^K$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$ is a subset of the rough $I^K$-limit set of any subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$ of the same roughness degree $r$. \end{thm} \begin{proof} Let $x$ be a rough $I^K$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$, and let $I$ and $K$ be two ideals on $\mathbb{N}$ such that AP($I$, $K$) holds and $K\subset I$. Since $K\subset I$, $x$ is also a rough $I$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$, and hence a rough $I$-limit of any subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$ of the sequence $\{x_n\}_{n\in\mathbb{N}}$. Again, since AP($I$, $K$) holds, $x$ is a rough $I^K$-limit of the subsequence $\{x_{n_k}\}_{k\in\mathbb{N}}$. Hence the result follows. \end{proof} \begin{rem} A rough $I^K$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$ is also a rough $I^*$-limit of $\{x_n\}_{n\in\mathbb{N}}$ of the same roughness degree $r$ if the additional conditions of the following theorem hold. \end{rem} \begin{thm} Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$ such that $K\subset I$ and the ideal $I$ satisfies the condition (AP). Then for a sequence $\{x_n\}_{n\in\mathbb{N}}$ in $X$, every rough $I^K$-limit of some roughness degree $r$ is a rough $I^*$-limit of the same roughness degree $r$.
\end{thm} \begin{proof} Let $x$ be a rough $I^K$-limit of a sequence $\{x_n\}_{n\in\mathbb{N}}$ of some roughness degree $r$, and let $I$ and $K$ be two ideals on $\mathbb{N}$ such that $K\subset I$ and $I$ satisfies the condition (AP). Since $K\subset I$, $x$ is also a rough $I$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$ by Theorem \ref{th3}. Again, since the ideal $I$ satisfies the condition (AP), $x$ is a rough $I^*$-limit of the sequence $\{x_n\}_{n\in\mathbb{N}}$ by Theorem \ref{th2}. \end{proof} \begin{defn} [c.f. \cite{R7}] Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$. A sequence $\{x_n\}_{n\in\mathbb{N}}$ in a normed linear space is said to be $K|M$-bounded if there exist a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I)$ and a positive number $L$ such that for the subsequence $\{x_{m_k}\}_{k\in \mathbb{N}}$ we have $\{k\in \mathbb{N}: || x_{m_k}||\geq L\}\in K|M$. \end{defn} Obviously for a bounded sequence $\{x_n\}_{n\in\mathbb{N}}$ there always exists a non-negative real number $r$ for which $I^K-LIM^r x_n\neq \phi$. The reverse implication is generally not valid, as can be seen from the following example; but if we take the sequence $\{x_n\}_{n\in\mathbb{N}}$ to be $K|M$-bounded, then the reverse implication also holds. \begin{exmp}\label{newexmp} Let $I$ be the ideal of all subsets of $\mathbb{N}$ whose natural density is zero. Then $I$ is an admissible ideal on $\mathbb{N}$. Also let $K$ be any admissible ideal on $\mathbb{N}$. Let us consider the sequence $\{x_n\}_{n\in\mathbb{N}}$ in the real number space with the usual norm as follows: $x_n=\begin{cases} 1, \:\: n\neq k^2\\ n,\:\: n=k^2 \end{cases}$, where $k\in\mathbb{N}$. Now as $A(\text{say}) =\{n\in\mathbb{N}: n=k^2 \:\text{for some}\: k\in\mathbb{N}\}\in I$, we have $\mathbb{N}\setminus A=M(\text{say})\in F(I)$. Let us enumerate $M$ as $M=\{m_1<m_2<\cdots< m_k<\cdots\}$.
Now we see that for $r=1$ and for any $\varepsilon>0$, the set $\{k\in\mathbb{N}: |x_{m_k} - x|\geq r + \varepsilon\}=\phi\in K|M$ for any $x\in [0, 2]$. So $[0, 2]\subset I^K-LIM^r x_n$. But the sequence considered here is unbounded. \end{exmp} \begin{thm} Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$. Then a sequence $\{x_n\}_{n\in\mathbb{N}}$ is $K|M$-bounded if and only if there exists a non-negative real number $r$ such that $I^K-LIM^ r x_n\neq \phi$. \end{thm} \begin{proof} Suppose that the sequence $\{x_n\}_{n\in\mathbb{N}}$ is $K|M$-bounded. Then there exist a set $M=\{m_1<m_2<\cdots <m_k<\cdots\}\in F(I)$ and a positive number $L$ such that for the subsequence $\{x_{m_k}\}$ we have $\{k\in\mathbb{N}: || x_{m_k}||\geq L \}=K_0(\text{say})\in K|M$. Define $r:=\sup{\{||x_{m_k}||: k\in K_0^\complement\}}$, where $K_0^\complement$ denotes the complement of $K_0$ in $\mathbb{N}$. Then for any $\varepsilon>0$ we have $\{k\in\mathbb{N}: ||x_{m_k} - 0||\geq r +\varepsilon\}\subset K_0$. So $0\in I^K-LIM ^r x_n$.\\ Conversely, suppose that $I^K-LIM^r x_n\neq \phi$ for some $r\geq 0$. Let $x\in I^K-LIM^r x_n$ and $||x||=L$. Then there exists a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I)$ such that for any $\varepsilon>0$ the set $\{k\in\mathbb{N}: ||x_{m_k} - x||\geq r +\varepsilon\}= K_1(\text{say})\in K|M$. Now $||x_{m_k}||=||x_{m_k} - x +x||\leq ||x_{m_k} - x|| + ||x||< r +\varepsilon + L$ for all $k\in K_1^\complement$. So $\{k\in\mathbb{N}: ||x_{m_k}||\geq r +\varepsilon + L\}\subset K_1\in K|M$. So the sequence $\{x_n\}_{n\in\mathbb{N}}$ is $K|M$-bounded. \end{proof} It is remarkable that the rough $I^K$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ is a convex set, as shown in the following theorem. \begin{thm} Let $I$ and $K$ be two ideals on $\mathbb{N}$ and $r$ be a non-negative real number. Then for a sequence $\{x_n\}_{n\in\mathbb{N}}$, the rough $I^K$-limit set $I^K-LIM^r x_n$ is convex. \end{thm} \begin{proof} Let us assume that $x_1, x_2\in I^K-LIM^r x_n$.
Then there exist $M'=\{m'_1<m'_2<\cdots<m'_k<\cdots\}$ and $M''=\{m''_1<m''_2<\cdots<m''_k<\cdots\}$ in $F(I)$ such that for any $\varepsilon>0$, $\{k\in\mathbb{N}: ||x_{m'_k} - x_1||\geq r +\varepsilon\}\in K|M'$ and $\{k\in\mathbb{N}: ||x_{m''_k} - x_2||\geq r +\varepsilon\}\in K|M''$. Since both $M'$ and $M''$ belong to $F(I)$, the set $M=M'\cap M''$ also belongs to $F(I)$; let us enumerate $M$ as $M=\{m_1<m_2<\cdots<m_k<\cdots\}$. Since $K$ is an admissible ideal, $\{k\in\mathbb{N}: ||x_{m_k} - x_1||\geq r +\varepsilon\}\in K|M\rightarrow(i)$ and also $\{k\in\mathbb{N}: ||x_{m_k} - x_2||\geq r +\varepsilon\}\in K|M\rightarrow(ii)$. Let $0\leq \lambda\leq 1$. Now $||x_{m_i} - [(1 -\lambda)x_1 + \lambda x_2]||=||(1 -\lambda)(x_{m_i} - x_1) + \lambda (x_{m_i} - x_2)||< r +\varepsilon$ for each $i$ belonging simultaneously to the complements of the sets in $(i)$ and $(ii)$. So $\{k\in\mathbb{N}: ||x_{m_k} - [(1-\lambda)x_1 + \lambda x_2]||\geq r +\varepsilon\}\in K|M$, and hence $(1-\lambda)x_1 + \lambda x_2\in I^K-LIM^r x_n$. \end{proof} \begin{rem} Let $I$ be an ideal on $\mathbb{N}$. Since for any non-negative real number $r$ and any sequence $\{x_n\}_{n\in\mathbb{N}}$ in $X$ a rough $I^*$-limit of roughness degree $r$ is also a rough $I^K$-limit of the same roughness degree $r$, the rough $I^*$-limit set of roughness degree $r$ of $\{x_n\}_{n\in\mathbb{N}}$ is also a convex set. \end{rem} \begin{thm} Let $I$, $I_1$, $I_2$, $K$, $K_1$ and $K_2$ be ideals on $\mathbb{N}$ such that $I_1\subset I_2$ and $K_1\subset K_2$. Also let $\{x_n\}_{n\in\mathbb{N}}$ be a sequence and $r$ be a non-negative real number. Then\\ $(i)$ $I_1 ^ K-LIM^r x_n \subset I_2 ^K-LIM^r x_n$.\\ $(ii)$ $I^{K_1}-LIM^r x_n \subset I^{K_2}-LIM^r x_n$. \end{thm} \begin{proof} $(i)$ Suppose $x\in I_1 ^ K-LIM^r x_n$. Then there exists a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I_1)$ such that the subsequence $\{x_{m_k}\}_{k\in\mathbb{N}}$ is rough $K|M$-convergent to $x$ of roughness degree $r$.
Now as $I_1\subset I_2$, we have $\mathbb{N}\setminus M\in I_1 \subset I_2$. Hence $M\in F(I_2)$. So $x\in I_2 ^K-LIM^r x_n$.\\ $(ii)$ Let $x\in I^{K_1}-LIM^r x_n$. Then there exists a set $M=\{m_1<m_2<\cdots< m_k<\cdots\}\in F(I)$ such that the subsequence $\{x_{m_k}\}_{k\in\mathbb{N}}$ is rough $K_1|M$-convergent to $x$ of roughness degree $r$. Now as $K_1\subset K_2$, the subsequence $\{x_{m_k}\}_{k\in\mathbb{N}}$ is also rough $K_2|M$-convergent to $x$ of roughness degree $r$. Hence $x\in I^{K_2}-LIM^r x_n$. \end{proof} \begin{thm} Let $I$ and $K$ be two admissible ideals on $\mathbb{N}$. Then the rough $I^K$-limit set of a sequence $\{x_n\}_{n\in\mathbb{N}}$ is closed. \end{thm} \begin{proof} The proof is trivial when $I^K-LIM^r x_n=\phi$, where $r\geq 0$. Let us assume that $I^K-LIM^r x_n\neq \phi$ for some $r\geq 0$. Also let $\{y_n\}_{n\in\mathbb{N}}$ be a sequence in $I^K-LIM^r x_n$ such that $y_n \rightarrow y$. Since $y_n\rightarrow y$, for a given $\varepsilon >0$ there exists an $N_1\in\mathbb{N}$ such that $||y_n - y||<\frac{\varepsilon}{2}$ for all $n>N_1$. Let $n_1> N_1$, so that $||y_{n_1} - y|| < \frac{\varepsilon}{2}$. Again, since $\{y_n\}_{n\in\mathbb{N}}$ is a sequence in $I^K-LIM^r x_n$, we have $y_{n_1}\in I^K-LIM^r x_n$. Therefore there exists a set $M=\{m_1<m_2<\cdots<m_k<\cdots\}\in F(I)$ such that $\{k\in\mathbb{N}: ||x_{m_k} - y_{n_1}||\geq r +\frac{\varepsilon}{2}\}=K_1(\text{say})\in K|M$. Now for $k\in K_1^\complement$ we have $|| x_{m_k} - y||= || x_{m_k} - y_{n_1} + y_{n_1} - y||\leq ||x_{m_k} - y_{n_1}|| +|| y_{n_1} - y||< r +\frac{\varepsilon}{2} +\frac{\varepsilon}{2}= r+\varepsilon$. Thus $\{k\in\mathbb{N}: ||x_{m_k} - y||\geq r +\varepsilon\}\subset K_1\in K|M$. Therefore $y\in I^K-LIM^ r x_n$ and hence the result follows. \end{proof}
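The basic notion underlying all of these results, rough convergence of degree $r$ (i.e., $||x_n - x|| < r+\varepsilon$ for all sufficiently large $n$), can be explored numerically. The following Python sketch is an informal illustration, not part of the formal development: the sequence $x_n = 1/n$ and the truncation parameters are chosen arbitrarily. For this convergent sequence the rough limit set of degree $r$ should be the closed ball of radius $r$ about the ordinary limit $0$.

```python
# Informal numerical illustration (not part of the formal development):
# for the convergent sequence x_n = 1/n, the rough limit set of degree r
# is expected to be the closed ball [-r, r] around the ordinary limit 0.
# We test the defining condition |x_n - x| < r + eps on a long tail.

def is_rough_limit(seq, x, r, eps=1e-2, tail_start=1000, tail_len=1000):
    """Check |seq(n) - x| < r + eps for n in a long tail of the sequence.

    The tolerance eps and the tail window are arbitrary truncation
    parameters for this finite check."""
    return all(abs(seq(n) - x) < r + eps
               for n in range(tail_start, tail_start + tail_len))

seq = lambda n: 1.0 / n
r = 1.0
assert is_rough_limit(seq, 0.5, r)       # interior point of [-1, 1]
assert is_rough_limit(seq, -1.0, r)      # boundary point of the ball
assert not is_rough_limit(seq, 1.5, r)   # point outside the ball
```

Of course such a finite check can only suggest, not prove, membership in the rough limit set; the formal arguments above are what establish it.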
\section{Introduction} Leptoquarks (LQ) are hypothetical particles that are predicted by many extensions~\cite{mBRW,techni1,techni2,techni3,superstring} of the Standard Model of particle physics (SM), such as Grand Unification Theories, Technicolor and Composite models. They carry both baryon and lepton numbers and thus couple to a lepton and a quark. They carry fractional electric charge, they are color triplets, and can have either zero or one unit of spin (i.e., they can be either scalar or vector particles). Existing experimental limits on lepton number violation, flavor changing neutral currents, proton decay, and other rare processes favor three generations of leptoquarks, with no inter-generational mixing. The production and decay of LQs are characterized by: the mass of the LQ particle (M$_{LQ}$), its decay branching ratio into a charged lepton and a quark (usually denoted as $\beta$) and the Yukawa coupling at the LQ-lepton-quark vertex ($\lambda$). At hadron colliders, leptoquarks are mainly produced in pairs, via gluon-gluon fusion and quark anti-quark annihilation. The dominant pair production mechanisms do not depend strongly on $\lambda$, and single production of leptoquarks does not become significant in the range of LQ masses probed with the 2010 LHC data (and in any case would not invalidate the search results presented here). The final state event signatures from the decay of pair produced LQs can be classified as: dilepton and jets (both LQ and anti-LQ decay into a charged lepton and a quark); lepton, missing transverse energy and jets (one LQ decays into a charged lepton and a quark, while the other decays into a neutrino and a quark); and missing transverse energy and jets (both LQ and anti-LQ decay into neutrinos and quarks). The three signatures have branching ratios of $\beta^2$, 2$\beta(1-\beta)$, and $(1-\beta)^2$, respectively.
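The quoted branching fractions are simple binomial combinatorics for two independent LQ decays, each going to a charged lepton plus quark with probability $\beta$. A minimal numerical check (illustrative only; the value $\beta=0.5$ is chosen arbitrarily):

```python
# Branching fractions of the three final-state signatures for a pair of
# leptoquarks, each decaying to charged lepton + quark with probability
# beta, or to neutrino + quark with probability 1 - beta.
def signature_fractions(beta):
    lljj = beta ** 2              # both LQs -> charged lepton + quark
    lvjj = 2 * beta * (1 - beta)  # one charged lepton, one neutrino
    vvjj = (1 - beta) ** 2        # both LQs -> neutrino + quark
    return lljj, lvjj, vvjj

# The three fractions always sum to one; e.g. for beta = 0.5:
lljj, lvjj, vvjj = signature_fractions(0.5)
assert abs(lljj + lvjj + vvjj - 1.0) < 1e-12
assert lljj == 0.25 and lvjj == 0.5 and vvjj == 0.25
```

This is why the $e\nu jj$ channel dominates the sensitivity near $\beta=0.5$, while the dilepton channel dominates at $\beta=1$.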
The charged leptons can be either electrons, muons, or tau leptons, corresponding to the three generations of LQs. Only electrons and muons are considered here. \section{Search for Pair-Production of Scalar Leptoquarks with the CMS Detector} Searches for first and second generation pair-produced scalar LQs were performed and published using 34-36 pb$^{-1}$ of 7 TeV proton-proton collision data recorded by the CMS detector~\cite{CMSJinst} during the 2010 LHC run. The first generation results include searches both in the dilepton and jets final state ($ee jj$) and in the lepton, missing transverse energy and jets final state ($e\nu jj$). The second generation result consists of a search in the dilepton and jets final state ($\mu\mu jj$). The $ee jj$ and $e\nu jj$ results were combined to attain the best possible exclusion reach in the full parameter space of the first generation pair-produced scalar LQ search, i.e., in $\beta$ and $M_{LQ}$. The analysis in all of the channels aims at identifying the existence of new heavy particles by establishing an excess of events characteristic of the decay of heavy objects. The analysis starts by requiring either a single or a double lepton trigger path, which is robust and very efficient. An event signature preselection isolates events with high transverse momentum final state objects (two or more isolated leptons and two or more jets, or one isolated lepton, two or more jets, and significant missing transverse energy indicative of the presence of a neutrino). Event variables are identified to effectively separate a possible LQ signal from standard model backgrounds (these are the $S_T$, $M(ll)$, and $M_T(l\nu)$ variables described below) and lower thresholds are placed on these variables at preselection level. Backgrounds from standard model sources are estimated and first compared with data at the preselection level.
Major sources of background are $Z/\gamma^*$+jets and $W+$jets processes and $t\bar{t}$ production (with single top production, diboson processes, and QCD multijet processes being smaller contributions). Major backgrounds are either directly determined from data control samples or determined with Monte Carlo samples (therefore using kinematic shape information from the MC) but normalized to data in selected control regions. A cut-based approach is used for the final selection, where the selection is optimized for different LQ mass hypotheses by minimizing the expected upper limit on the LQ cross section in the absence of an observed signal using a Bayesian approach~\cite{bayes1, bayes}. The three variables used in the optimization are: the scalar sum of the final state objects' transverse momenta, $S_T$; the invariant mass of the dilepton pair, $M_{ll}$, for the $ee jj$ and $\mu\mu jj$ channels; and the transverse mass, $M_T(l\nu)$, of the lepton and neutrino in the $e\nu jj$ analysis. After the final selection, the data are well described by the standard model background predictions. In the absence of an observed signal, an upper limit on the LQ cross section is set using a Bayesian method~\cite{bayes1, bayes} with a flat signal prior. A lognormal prior is used to integrate over the nuisance parameters. Using Poisson statistics, a 95\% confidence level (C.L.) upper limit is obtained on the LQ pair-production cross section times branching ratio as a function of LQ mass. Comparing with the NLO predictions~\cite{NLO} for the scalar LQ pair production cross section, a lower limit on $M_{LQ}$ is determined for $\beta = 1$ and $\beta = 0.5$ (electron channel only, for the latter). In the electron channel, the $ee jj$ and $e\nu jj$ results are combined to further maximize the exclusion in $\beta$ and $M_{LQ}$, especially for the case of $\beta = 0.5$, where the $ee jj$ channel adds to the sensitivity of the $e\nu jj$ channel.
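As an illustration of the statistical procedure just described, the following Python sketch computes a Bayesian 95\% C.L. upper limit for a single-bin Poisson counting experiment with a flat signal prior. It is a deliberate simplification: the published analysis also integrates over nuisance parameters (signal efficiency, background, luminosity) with lognormal priors, which is omitted here, and the grid parameters are arbitrary.

```python
import math

def bayes_upper_limit(n_obs, b, cl=0.95, s_max=50.0, steps=20000):
    """Bayesian upper limit on the signal mean s for a single-bin Poisson
    counting experiment with known background b, flat prior on s >= 0.
    Nuisance parameters are ignored in this simplified sketch."""
    def likelihood(s):
        mu = s + b
        return math.exp(-mu) * mu ** n_obs / math.factorial(n_obs)

    # Midpoint integration of the posterior; return the point where the
    # cumulative posterior probability first reaches the credibility level.
    ds = s_max / steps
    weights = [likelihood((i + 0.5) * ds) for i in range(steps)]
    total = sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= cl * total:
            return (i + 1) * ds
    return s_max

# With zero observed events and zero background, the limit reproduces
# the textbook value -ln(1 - 0.95) ~ 3.0 signal events.
assert abs(bayes_upper_limit(0, 0.0) - 3.0) < 0.1
```

The limit in signal events is then translated into a cross section limit by dividing by the signal efficiency times the integrated luminosity, and the mass limit follows from the intersection with the NLO cross section curve.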
\section{$ee jj$ and $\mu\mu jj$ channels} The $ee jj$ analysis requires the presence of a single or double electromagnetic trigger (with an efficiency close to 100$\%$), two or more isolated electrons with $p_T>30$ GeV and $|\eta|<2.5$, and two or more jets with $p_T>30$ GeV and $|\eta|<3.0$. In addition, at pre-selection level, $\Delta R(e,j)>0.7$ is required together with a minimum threshold of $M_{ee}>50$ GeV and $S_T = p_T(e_1) + p_T(e_2) + p_T(jet_1) + p_T(jet_2)>250$ GeV. Data and standard model background predictions agree well at the level of pre-selection, as shown in Figs.~\ref{eeprelpte} and~\ref{eeprelst}. This channel uses an integrated luminosity of 33.2 pb$^{-1}$. \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure1} \caption{Leading electron $p_T$ at preselection level for the $ee jj$ analysis.} \label{eeprelpte} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure2} \caption{$S_T$ at preselection level for the $ee jj$ analysis.} \label{eeprelst} \end{figure} Similarly, the $\mu\mu jj$ analysis starts from the requirement of a single muon trigger (with an efficiency of 99$\%$), two or more isolated muons with $p_T>30$ GeV and $|\eta|<2.4$ (one of which must be within $|\eta|<2.1$), and two or more jets with $p_T>30$ GeV and $|\eta|<3.0$. The two muons are required to be separated in $R$ by at least 0.3. Minimum thresholds of $M_{\mu\mu}>50$ GeV and $S_T = p_T(\mu_1) + p_T(\mu_2) + p_T(jet_1) + p_T(jet_2)>250$ GeV complete the preselection requirements. Good agreement is observed between data and standard model background predictions at this level (Figs.~\ref{mmprelptm} and~\ref{mmprelst}). This channel uses an integrated luminosity of 34.0 pb$^{-1}$.
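As a minimal illustration, the $S_T$ variable used in both preselections is the scalar sum of the two leading lepton and two leading jet transverse momenta; the event values in the sketch below are hypothetical.

```python
def s_t(lepton_pts, jet_pts):
    """Scalar sum of the two leading lepton pT's and two leading jet pT's (GeV)."""
    leading_leptons = sorted(lepton_pts, reverse=True)[:2]
    leading_jets = sorted(jet_pts, reverse=True)[:2]
    return sum(leading_leptons) + sum(leading_jets)

# Hypothetical event: electrons of 120 and 80 GeV, jets of 150, 60 and 40 GeV.
# S_T = 120 + 80 + 150 + 60 = 410 GeV, which passes the 250 GeV preselection.
st_example = s_t([80.0, 120.0], [150.0, 40.0, 60.0])
```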
\begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure3} \caption{Leading muon $p_T$ at preselection level for the $\mu\mu jj$ analysis.} \label{mmprelptm} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure4} \caption{$S_T$ at preselection level for the $\mu\mu jj$ analysis.} \label{mmprelst} \end{figure} The major backgrounds from standard model processes to the dilepton and jets channels come from $Z/\gamma^* +$ jets and $t\bar{t}$ production. The $Z/\gamma^*+$jets contribution is determined from MC but rescaled to the data in the $Z$ peak region at preselection. The invariant mass of the dilepton pair is shown in Fig.~\ref{eepreldimass} and in Fig.~\ref{mmpreldimass} for the $ee jj$ and $\mu\mu jj$ channels, respectively. The normalization factors are $1.20\pm 0.14$ for $ee jj$ and $1.28\pm 0.14$ for $\mu\mu jj$. The systematic uncertainty on the $Z/\gamma^* +$ jets background prediction in both channels comes from the statistically dominated uncertainty in the normalization factors and from a shape uncertainty obtained by comparing the yields of $Z/\gamma^* +$ jets MC samples generated with different renormalization and factorization scales and matching thresholds. The $t\bar{t}$ normalization uncertainty is based on a CMS measurement of the $t\bar{t}$ cross-section~\cite{topcms}. Smaller $W+$ jets and diboson plus jets backgrounds are determined entirely from MC and are found to be negligible. The background contributions from QCD multijet processes are determined from data control regions and are also found to be negligible.
\begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure5} \caption{Invariant mass of the two highest $p_T$ electrons at preselection level for the $ee jj$ analysis.} \label{eepreldimass} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure6} \caption{Invariant mass of the two highest $p_T$ muons at preselection level for the $\mu\mu jj$ analysis.} \label{mmpreldimass} \end{figure} At final selection, a dilepton invariant mass cut ($M_{ee}>$125 GeV and $M_{\mu\mu}>$115 GeV) is placed to suppress most of the $Z/\gamma^*+$ jets background and an $S_T$ threshold is optimized for each $M_{LQ}$ hypothesis. $S_T$ is effective at removing most of the $t\bar{t}$ background and any residual backgrounds surviving the preselection criteria and the cut on the dilepton invariant mass. The number of data, signal MC, and background events for each of the final selection criteria optimized for different $M_{LQ}$ are listed in Table~\ref{tab:eejjsummary} and Table~\ref{tab:mumujjsummary} for the $ee jj$ and $\mu\mu jj$ channels, respectively, together with the optimized $S_T$ thresholds. \begin{table*}[htbp] \caption{Number of $ee jj$ events for LQ signal, backgrounds, and data after full selection with 33.2 pb$^{-1}$. The product of signal acceptance and efficiency is also reported for different LQ masses. The $Z/\gamma^*+$jets MC has been normalized to the data as described in the text. Other backgrounds include $W$ + jets, di-boson, and single top. The uncertainties are statistical. The observed and expected 95$\%$ C.L. upper limit (u.l.)
on the LQ pair production cross section $\sigma$ are shown in the last column.} \begin{center} \scriptsize \begin{tabular}{|c|cc|cccc|c|c|} \hline\hline $M_{\rm LQ}$ & \multicolumn{2}{|c|}{Signal Samples (MC)} & \multicolumn{4}{|c|}{Standard Model Background Samples (MC)} & Events & Obs./Exp.\\ ($S_T$~Cut) & Selected & Acceptance & \multicolumn{4}{|c|}{Selected Events in} & in & 95\% C.L.\\ {[GeV]} & Events & $\times$Efficiency & $t\bar{t}$ + jets & $Z/\gamma^*$+ jets & Others & All & Data & u.l. on $\sigma$ [pb]\\ \hline 200 ($S_T>$340) & 117.5$\pm$0.8 & 0.297$\pm$0.002 & 2.6 $\pm$0.1 & 2.0 $\pm$0.2 & 0.27$\pm$0.05 & 4.9 $\pm$0.2 & 2 & 0.441 / 0.720 \\ 250 ($S_T>$400) & 43.8$\pm$0.2 & 0.380$\pm$0.002 & 1.3 $\pm$0.1 & 1.3 $\pm$0.1 & 0.14$\pm$0.02 & 2.7 $\pm$0.1 & 1 & 0.309 / 0.454 \\ 280 ($S_T>$450) & 24.4$\pm$0.1 & 0.403$\pm$0.002 & 0.69$\pm$0.05 & 0.87$\pm$0.07 & 0.10$\pm$0.02 & 1.7 $\pm$0.1 & 1 & 0.305 / 0.373 \\ 300 ($S_T>$470) & 17.3$\pm$0.09 & 0.430$\pm$0.002 & 0.52$\pm$0.05 & 0.75$\pm$0.07 & 0.10$\pm$0.02 & 1.4 $\pm$0.1 & 1 & 0.292 / 0.332 \\ 320 ($S_T>$490) & 12.3$\pm$0.06 & 0.451$\pm$0.002 & 0.43$\pm$0.04 & 0.65$\pm$0.07 & 0.08$\pm$0.02 & 1.2 $\pm$0.1 & 1 & 0.283 / 0.305 \\ 340 ($S_T>$510) & 8.88$\pm$0.04 & 0.469$\pm$0.002 & 0.32$\pm$0.04 & 0.56$\pm$0.06 & 0.08$\pm$0.02 & 0.96$\pm$0.08 & 1 & 0.278 / 0.279 \\ 370 ($S_T>$540) & 5.55$\pm$0.02 & 0.496$\pm$0.002 & 0.26$\pm$0.03 & 0.47$\pm$0.06 & 0.07$\pm$0.02 & 0.80$\pm$0.07 & 1 & 0.267 / 0.254 \\ 400 ($S_T>$560) & 3.55$\pm$0.02 & 0.522$\pm$0.002 & 0.20$\pm$0.03 & 0.41$\pm$0.05 & 0.06$\pm$0.02 & 0.67$\pm$0.07 & 1 & 0.257 / 0.234 \\ 450 ($S_T>$620) & 1.70$\pm$0.01 & 0.539$\pm$0.002 & 0.12$\pm$0.02 & 0.28$\pm$0.05 & 0.02$\pm$0.01 & 0.42$\pm$0.06 & 0 & 0.174 / 0.210 \\ 500 ($S_T>$660) & 0.868$\pm$0.003& 0.565$\pm$0.002 & 0.08$\pm$0.02 & 0.23$\pm$0.05 & 0.02$\pm$0.01 & 0.33$\pm$0.05 & 0 & 0.166 / 0.194 \\ \hline\hline \end{tabular} \end{center} \label{tab:eejjsummary} \end{table*} \begin{table*}[htbp] 
\begin{center} \caption{ The $\mu\mu jj$ data event yields in 34.0 pb$^{-1}$ for different LQ mass hypotheses, together with the optimized $S_T$ threshold (in GeV) for each mass, background predictions, expected LQ signal events (S), and signal selection efficiency times acceptance ($\epsilon_S$). M$_{LQ}$ and $S_T$ values are listed in GeV. The $Z/\gamma^* \to \mu\mu + \text{jets}$ and $t\bar{t}$ contributions are rescaled by the normalization factors determined from data. Other backgrounds correspond to $VV$, $W+\text{jets}$, and multijet processes. Uncertainties are statistical. } \vspace{.25in} \scriptsize \begin{tabular}{| c|c c|c c c c|c|c|}\hline \hline $M_{LQ}$ & \multicolumn{2}{c|}{MC Signal Samples} & \multicolumn{4}{c|}{Monte Carlo Background Samples} & Events & Obs./Exp. \\ ($S_T$ Cut) & Selected & Acceptance & \multicolumn{4}{c|}{Selected Events in} & in & {$95 \%$ C.L.} \\ $\text{[GeV]}$ & Events & $\times$ Efficiency & $t\bar{t}$ + jets & $Z/\gamma^* +\text{jets}$ & Others & All & Data& u.l. 
on $\sigma $ [pb]\\ \hline 200 ($S_T>310$) &160$\pm$20&0.388$\pm$0.003&4.6$\pm$0.1&4.08$\pm$0.07&0.1$\pm$0.01&8.8$\pm$0.2&5& 0.438 / 0.695 \\ 225 ($S_T>350$) &89$\pm$9&0.421$\pm$0.003&3.1$\pm$0.1&2.99$\pm$0.05&0.07$\pm$0.01&6.2$\pm$0.1&3& 0.339 / 0.547 \\ 250 ($S_T>400$) &51$\pm$5&0.437$\pm$0.003&1.88$\pm$0.09&1.92$\pm$0.04&0.051$\pm$0.009&3.9$\pm$0.1&3& 0.366 / 0.436 \\ 280 ($S_T>440$) &28$\pm$3&0.467$\pm$0.003&1.15$\pm$0.07&1.53$\pm$0.03&0.038$\pm$0.008&2.72$\pm$0.08&3& 0.371 / 0.361 \\ 300 ($S_T>440$) &21$\pm$2&0.518$\pm$0.004&1.15$\pm$0.07&1.53$\pm$0.03&0.038$\pm$0.008&2.72$\pm$0.08&3& 0.335 / 0.326 \\ 320 ($S_T>490$) &14$\pm$1&0.509$\pm$0.004&0.64$\pm$0.05&1.12$\pm$0.02&0.019$\pm$0.005&1.78$\pm$0.06&2& 0.300 / 0.292 \\ 340 ($S_T>530$) &9$\pm$1&0.508$\pm$0.003&0.4$\pm$0.04&0.79$\pm$0.01&0.01$\pm$0.004&1.20$\pm$0.04&1& 0.245 /0.264 \\ 400 ($S_T>560$) &4.0$\pm$0.4&0.578$\pm$0.004&0.31$\pm$0.04&0.67$\pm$0.01&0.01$\pm$0.004&0.99$\pm$0.04&1& 0.219 / 0.222 \\ 450 ($S_T>620$) &1.9$\pm$0.2&0.600$\pm$0.004&0.19$\pm$0.03&0.49$\pm$0.01&0.006$\pm$0.003&0.69$\pm$0.03&0& 0.153 / 0.199 \\ 500 ($S_T>700$) &0.9$\pm$0.1&0.602$\pm$0.004&0.09$\pm$0.02&0.277$\pm$0.006&0.003$\pm$0.002&0.37$\pm$0.02&0& 0.152 / 0.180 \\\hline\hline \end{tabular} \label{tab:mumujjsummary} \end{center} \end{table*} In absence of an excess above standard model backgrounds expectation, an upper limit on the LQ cross section is set using a Bayesian method~\cite{bayes1, bayes} with a flat signal prior. A log-normal probability density function is used to integrate over the systematic uncertainties. Major sources of systematic uncertainties for both channels are the uncertainties on the determination of the luminosity, the jet energy scale, the dilepton reconstruction and identification efficiencies, and the uncertainties associated with the normalization and modeling of the main backgrounds, $Z/\gamma^*+$jets and $t\bar{t}$. Using Poisson statistics, a 95\% confidence level (C.L.) 
upper limit is obtained on $\sigma\times\beta^2$. This is shown in Fig.~\ref{limit_eejj} and Fig.~\ref{limit_mmjj} together with the NLO~\cite{NLO} predictions for the scalar LQ pair production cross section. The systematic uncertainties are included in the calculation as nuisance parameters. With the assumption that $\beta=1$, first- and second-generation scalar leptoquarks with masses less than 384 and 394~GeV are excluded at 95\% C.L.~\cite{eejj,mmjj}. This is in agreement with the expected limits of 391 and 394~GeV. \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure7} \caption{The expected and observed 95\% C.L. upper limit on the first-generation scalar LQ pair production $\sigma \times \beta^2$ as a function of the LQ mass together with the NLO theoretical cross section curve. The shaded band on the theoretical values includes CTEQ6 PDF uncertainties and the error on the LQ production cross section due to renormalization and factorization scale variation. The results correspond to 33.2~pb$^{-1}$ in the $ee jj$ channel.} \label{limit_eejj} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure8} \caption{The expected and observed 95\% C.L. upper limit on the second-generation scalar LQ pair production $\sigma \times \beta^2$ as a function of the LQ mass together with the NLO theoretical cross section curve. The shaded band on the theoretical values includes CTEQ6 PDF uncertainties and the error on the LQ production cross section due to renormalization and factorization scale variation. The results correspond to 34.0~pb$^{-1}$ in the $\mu\mu jj$ channel.} \label{limit_mmjj} \end{figure} \subsection{$e\nu jj$ channel and combination with $ee jj$} The $e\nu jj$ analysis requires the presence of a single electromagnetic trigger, one isolated electron with $p_T>35$ GeV and $|\eta|<2.2$, and two or more jets with $p_T>30$ GeV and $|\eta|<3.0$.
In addition, at pre-selection level, $\Delta R(e,j)>0.7$ is required together with transverse missing energy, $MET>50$ GeV, a veto on the presence of muons in the event, $S_T = p_T(e) + MET + p_T(jet_1) + p_T(jet_2)>250$ GeV, and azimuthal cuts between final state objects, $|\Delta \phi(MET,e)| >$ 0.8 and $|\Delta \phi(MET,jet_1)| >$ 0.5 to reduce the contribution from mis-reconstructed events. Data and standard model background predictions agree well at the level of pre-selection, as shown in Fig.~\ref{enprelmet}. This channel uses an integrated luminosity of 36 pb$^{-1}$. \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure9} \caption{Missing transverse energy at preselection level for the $e\nu jj$ analysis.} \label{enprelmet} \end{figure} The major backgrounds from standard model processes to the $e\nu jj$ channel are $W+$ jets and $t\bar{t}$ production. The $W+$jets contribution is determined from MC and rescaled to the data in the 50$< M_T(e\nu) <$110 GeV region at preselection. The $M_T(e\nu)$ distribution is shown in Fig.~\ref{enuprelmt}. The systematic uncertainty on the $W+$ jets background prediction comes from the statistically dominated uncertainty in the normalization factors and from a shape uncertainty obtained by comparing the yields of $W+$ jets MC samples generated with different renormalization and factorization scales and matching thresholds. The $t\bar{t}$ normalization uncertainty is based on a CMS measurement of the $t\bar{t}$ cross-section~\cite{topcms} and the shape uncertainty is determined from MC. Other sources of background determined from MC are found to be negligible. The background contributions from QCD multijet processes are determined from data control regions and are also found to be negligible. 
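The transverse mass $M_T(e\nu)$ used to normalize the $W+$jets background is not spelled out in the text; assuming the standard collider definition $M_T = \sqrt{2\, p_T\, {\rm MET}\, (1 - \cos\Delta\phi)}$, it can be sketched as follows.

```python
import math

def transverse_mass(pt_lepton, met, delta_phi):
    """Transverse mass of the lepton-MET system,
    M_T = sqrt(2 * pT * MET * (1 - cos(delta_phi)))."""
    return math.sqrt(2.0 * pt_lepton * met * (1.0 - math.cos(delta_phi)))

# Back-to-back lepton and MET of 50 GeV each give M_T = 100 GeV,
# close to the Jacobian-peak configuration for W decays.
mt_example = transverse_mass(50.0, 50.0, math.pi)
```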
\begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure10} \caption{Transverse mass of the electron and neutrino, $M_T(e\nu)$, at preselection level for the $e\nu jj$ analysis.} \label{enuprelmt} \end{figure} At final selection, the transverse mass of the electron and neutrino $M_T(e\nu)$ is required to be above 125 GeV and a min$(p_T(e), MET)>$85 GeV cut is placed to suppress most of the $W+$ jets background. An $S_T$ threshold is then optimized for each $M_{LQ}$ hypothesis. $S_T$ is effective at removing most of the $t\bar{t}$ background and any residual backgrounds surviving the preselection criteria and the $W$ veto cuts listed above. The number of data, signal MC, and background events for each of the final selection criteria optimized for different $M_{LQ}$ are listed in Table~\ref{tab:enujjsummary}, together with the optimized $S_T$ thresholds. \begin{table*}[htbp] \caption{Number of $e\nu jj$ events for the first generation LQ signal, backgrounds, and data samples after the full analysis selection. The optimum $S_T$ threshold is reported for each LQ mass. All uncertainties are statistical.
The product of signal acceptance and efficiency is also reported for different LQ masses.} \label{tab:enujjsummary} \begin{center} \scriptsize \begin{tabular}{|c|cc|ccccc|c|} \hline $M_{\text{LQ}}$ & \multicolumn{2}{c|}{MC Signal Samples} & \multicolumn{5}{c|}{MC and QCD Background Samples} & Events \\ ($S_T$~cut) & Selected & Acceptance & \multicolumn{5}{c|}{Selected Events in} & in\\ {[GeV]} & Events & $\times$ Efficiency & $t\bar{t}$ + jets & $W + $jets & Other Bkgs & QCD & All Bkgs & Data\\ \hline \hline 200 ($S_T>350$) & 34.5$\pm$0.2 & 0.161 & 3.6$\pm$0.1 & 2.2$\pm$0.3 & 0.48$\pm$0.06 & 0.20$\pm$0.04 & 6.5$\pm$0.3 & 5 \\ 250 ($S_T>410$) & 15.9$\pm$0.1 & 0.255 & 2.24$\pm$0.09 & 1.7$\pm$0.3 & 0.35$\pm$0.05 & 0.18$\pm$0.05 & 4.4$\pm$0.3 & 3 \\ 280 ($S_T>460$) & 9.54$\pm$0.05 & 0.291 & 1.43$\pm$0.08 & 1.2$\pm$0.2 & 0.29$\pm$0.05 & 0.14$\pm$0.04 & 3.1$\pm$0.2 & 3 \\ 300 ($S_T>490$) & 6.89$\pm$0.03 & 0.317 & 1.09$\pm$0.07 & 1.0$\pm$0.2 & 0.27$\pm$0.05 & 0.14$\pm$0.04 & 2.5$\pm$0.2 & 2 \\ 320 ($S_T>520$) & 5.03$\pm$0.02 & 0.339 & 0.75$\pm$0.05 & 0.8$\pm$0.2 & 0.22$\pm$0.05 & 0.13$\pm$0.04 & 1.9$\pm$0.2 & 2 \\ 340 ($S_T>540$) & 3.73$\pm$0.02 & 0.364 & 0.65$\pm$0.05 & 0.7$\pm$0.2 & 0.20$\pm$0.05 & 0.12$\pm$0.04 & 1.6$\pm$0.2 & 2 \\ 370 ($S_T>570$) & 2.40$\pm$0.01 & 0.396 & 0.50$\pm$0.04 & 0.6$\pm$0.1 & 0.18$\pm$0.04 & 0.08$\pm$0.03 & 1.3$\pm$0.2 & 1 \\ 400 ($S_T>600$) & 1.57$\pm$0.01 & 0.426 & 0.34$\pm$0.04 & 0.5$\pm$0.1 & 0.17$\pm$0.04 & 0.08$\pm$0.03 & 1.1$\pm$0.1 & 1 \\ 450 ($S_T>640$) & 0.797$\pm$0.003 & 0.467 & 0.26$\pm$0.03 & 0.4$\pm$0.1 & 0.13$\pm$0.04 & 0.08$\pm$0.04 & 0.9$\pm$0.1 & 0 \\ 500 ($S_T>670$) & 0.417$\pm$0.001 & 0.500 & 0.18$\pm$0.03 & 0.4$\pm$0.1 & 0.12$\pm$0.04 & 0.08$\pm$0.04 & 0.8$\pm$0.1 & 0 \\ \hline \end{tabular} \end{center} \end{table*} In absence of an excess above standard model backgrounds expectation, an upper limit on the LQ cross section is set using a Bayesian method similar to the one used in the $ee jj$ analysis. 
The $e\nu jj$ channel has maximum sensitivity for $\beta = 0.5$, and also offers sensitivity for lower values of $\beta$, where the $ee jj$ channel quickly runs out of sensitivity because its branching fraction falls quadratically as $\beta^2$. Thus, combining it with the $ee jj$ channel allows one to extend the sensitivity of the search in the intermediate $\beta$ range compared to either of the two analyses considered separately. The two channels are combined using the same Bayesian approach used to set the individual limits. Assuming a flat prior for the signal cross section, the expected signal yield in the $ee jj$ [$e\nu jj$] channel is found by multiplying the cross section by the signal efficiency, integrated luminosity, and the branching fraction $\beta^2$ [$2\beta(1-\beta)$]. Since the major sources of systematic uncertainty between the two channels are correlated, a combined likelihood is constructed from the individual likelihoods for the two channels. The 95\% C.L. mass limit as a function of $\beta$ is shown in Fig.~\ref{eecombbeta}, along with the individual limits from the two channels. With the assumption that $\beta=0.5$, first generation scalar leptoquarks with masses less than 340~GeV are excluded at 95\% C.L.~\cite{enjj}. \begin{figure}[ht] \centering \includegraphics[width=80mm]{Figure11} \caption{The combined expected and observed 95\% C.L. mass limit for first-generation scalar leptoquarks as a function of $\beta$. Individual channel limits ($ee jj$ and $e\nu jj$) are also shown.} \label{eecombbeta} \end{figure} \subsection{Conclusions and outlook} Searches for pair production of first and second generation scalar leptoquarks have been performed at CMS in the final states with two charged leptons and two jets, or with one charged lepton (electron), missing transverse energy, and two jets, using the full 2010 statistics.
In the absence of a signal we exclude first generation LQs with masses below 384 GeV ($\beta =1$), second generation LQs with masses below 394 GeV ($\beta =1$), and first generation LQs with masses below 340 GeV ($\beta = 0.5$). The $\beta = 1$ results were the most stringent limits at the time of publication. The $\beta = 0.5$ and $\beta = 1$ results have been combined and have been submitted for publication. The analysis of the 2011 data is ongoing.
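As a compact illustration of the channel combination used in the first-generation search: for a given $\beta$, the expected signal yields scale with the branching fractions $\beta^2$ ($ee jj$) and $2\beta(1-\beta)$ ($e\nu jj$). The sketch below uses placeholder cross section, luminosity, and efficiency values.

```python
def expected_yields(sigma, lumi, eff_eejj, eff_enujj, beta):
    """Expected LQ signal yields in the two channels for a given beta:
    both LQs decay to a charged lepton with probability beta^2 (ee jj);
    one charged-lepton and one neutrino decay with 2*beta*(1-beta) (e nu jj)."""
    n_eejj = sigma * lumi * eff_eejj * beta ** 2
    n_enujj = sigma * lumi * eff_enujj * 2.0 * beta * (1.0 - beta)
    return n_eejj, n_enujj

# At beta = 0.5 the e nu jj branching fraction (0.5) exceeds the ee jj one
# (0.25), which is why that channel drives the combined sensitivity there;
# all other inputs here are placeholders.
yields_half = expected_yields(sigma=1.0, lumi=1.0, eff_eejj=1.0, eff_enujj=1.0, beta=0.5)
```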
\section{Introduction} Topological statistics have a long history of use within cosmology \cite{1990ApJ...352....1G,1991ApJ...378..457P,Mecke:1994ax,Schmalzing:1997aj,Schmalzing:1997uc,Hikage:2006fe,Ducout:2012it,1989ApJ...345..618M,1992ApJ...387....1P,1992ApJ...385...26G}. Theoretical studies of random fields have been undertaken; both Gaussian \citep{1970Ap......6..320D,Adler,Gott:1986uz,Hamilton:1986,Ryden:1988rk, 1987ApJ...319....1G,1987ApJ...321....2W} and perturbatively non-Gaussian \citep{Matsubara:1994wn,Matsubara:1994we,Matsubara:1995dv,1988ApJ...328...50M,Matsubara:1995ns, 2000astro.ph..6269M,10.1111/j.1365-2966.2008.12944.x,Pogosyan:2009rg,Gay:2011wz,Codis:2013exa}, and modern techniques are being developed that go beyond the standard Minkowski functional analysis; Minkowski tensors \cite{Beisbart:2001vb,Beisbart:2001gk, Ganesan:2016jdk, Chingangbam:2017sap, Kapahtia:2019ksk, Appleby:2017uvb, Appleby:2018tzk, Kapahtia:2017qrg, K.:2018wpn}, Betti numbers \cite{Park:2013dga,Feldbrugge:2019tal,Pranav:2018lox,Pranav:2018pnu,Pranav:2016gwr,Shivshankar:2015aza,vandeWeygaert:2011ip} and multi-scale analyses of the cosmic web \cite{10.1111/j.1365-2966.2011.18395.x, Codis:2018niz, Kraljic:2019acs}. Previous application of the Minkowski functionals to various modern data sets can be found in \citep{2001ApJ...553...33P,Hikage:2002ki,Hikage:2003fc,Park:2005fk,10.1111/j.1365-2966.2008.14358.x,Gott:2008kk,Choi:2010sx,Zhang:2010tha,Petri:2013ffb,Blake:2013noa,Wiegand:2013xfa,2014ApJ...796...86P,Wang:2015eua,Wiegand:2016ezl,Buchert:2017uup,Sullivan:2017mhr,Hikage_2001,Gott:2006yy}. The advent of cosmological scale, large scale structure data has allowed measurements of the higher point functions induced by gravitational collapse \cite{Wiegand:2013xfa,Wiegand:2016ezl,Buchert:2017uup,Sullivan:2017mhr} that would be difficult to extract using conventional $N$-point methods. The genus belongs to the family of Minkowski Functionals. 
The genus of the matter density field, as traced by galaxies, can be used as a cosmological probe. By measuring the genus curve at different redshifts, one can extract information regarding the parameters governing the expansion history of the Universe. The redshift dependence of the genus amplitude was originally proposed as a standard ruler in \citet{Park:2009ja,doi:10.1111/j.1365-2966.2010.18015.x}. For the $\Lambda$CDM model, the amplitude of the genus curve is related to the slope of the linear matter power spectrum, which does not evolve with redshift. By comparing this quantity at high and low redshift, we should detect no evolution. However, if we select an incorrect cosmological model to infer the distance-redshift relation, then comoving smoothing scales and volumes become systematically incorrect with increasing redshift. This will generate a spurious evolution in the statistic. Hence, by measuring the genus using different cosmological models to infer distance scales, one can find the expansion history that conserves this statistic. This cosmological test was first proposed in \citet{Park:2009ja}. More recently, the authors have revisited this possibility and applied the method to projected two dimensional galaxy density fields, using all-sky mock galaxy lightcone data \citep{Appleby:2017ahh,Appleby:2018jew}. The analysis presented here provides a conclusion of these works, as we apply the methodology to a combination of low- and high-redshift galaxy catalogs to obtain a constraint on the cosmological parameters $\Omega_{\rm m}$ and dark energy equation of state $w_{\rm de}$. This test was pursued in \cite{Blake:2013noa}, with the first direct application of the method to galaxy data (specifically the WiggleZ survey \cite{Blake:2011wn}). Competitive distance measurements were obtained from three-dimensional Minkowski functional measurements, and issues associated with this measurement (principally sparse sampling) were highlighted. 
The conclusion of the work was that topology is potentially competitive with Baryon Acoustic Oscillations (BAO) as a standard ruler; however, the physics and assumptions that go into the analysis are more involved, as the Minkowski functionals measure the shape of the full extent of the power spectrum in an integrated sense. In this work we measure the genus of both the BOSS LOWZ and CMASS galaxy catalogs \citep{2015ApJS..219...12A} and the SDSS Main Galaxy Sample (SDSS MGS) \citep{2009ApJS..182..543A}. The low redshift SDSS MGS data provides a robust measure of the genus amplitude at low redshift, practically insensitive to the distance-redshift relation. In contrast, the higher redshift BOSS data will be sensitive to our choice of cosmological parameters when inferring distances. If we select an incorrect distance-redshift relation, the genus amplitude extracted from the BOSS data will systematically evolve, relative to the low redshift measurement. The reason for this effect is that an incorrect choice of comoving distance will cause us to select erroneous smoothing scales and effective areas, meaning that we will be measuring the slope of the matter power spectrum at different scales as a function of redshift. As the matter power spectrum is not scale invariant, this will manifest as an evolving genus amplitude. The principal challenge when using the genus as a standard ruler is that we must compare high redshift measurements to low redshift counterparts. However, the low redshift Universe is restricted in volume and the statistical uncertainty provides the dominant limitation on parameter constraints. To mitigate this problem, we measure the genus of the full three-dimensional field at low redshift. We then convert the three-dimensional measurement into a constraint on the theoretical expectation of the two-dimensional genus amplitude. The paper will proceed as follows.
In Section \ref{sec:theory} we discuss some of the issues associated with using the genus amplitude as a standard ruler, and our method of extracting this quantity from galaxy data. We briefly review the extraction of the genus from two-dimensional shells of BOSS data in Section \ref{sec:obs}. In Section \ref{sec:3D} we detail the data, mask, mock catalogs and systematics associated with SDSS MGS measurement of the three-dimensional genus. The conversion from three dimensional measured genus to the theoretical expectation value of the two-dimensional genus amplitude is explained in Section \ref{sec:conv}. Finally in Section \ref{sec:constraints} we place constraints on cosmological parameters, then close with a discussion in Section \ref{sec:discuss}. This work is a companion to \cite{Appleby:2020pem}, which uses the absolute value of the genus amplitude (rather than its evolution with redshift, as in this work) to place constraints on the shape of the matter power spectrum. We discuss the relation between the two approaches in Section \ref{sec:discuss}. \section{Genus amplitude as a standard ruler} \label{sec:theory} The two-dimensional genus of a perturbatively non-Gaussian field without boundary is given by the so-called Edgeworth expansion \cite{Matsubara:1994we,2000astro.ph..6269M,Pogosyan:2009rg,Gay:2011wz,Codis:2013exa} \begin{eqnarray} \nonumber & & g_{\rm 2D}(\nu_{\rm A}) = A_{\rm G}^{(\rm 2D)} e^{-\nu_{\rm A}^{2}/2} \left[ H_{1}(\nu_{\rm A})+ \left[ {2 \over 3} \left( S^{(1)} - S^{(0)}\right) \times \right. \right. \\ \label{eq:mat1} & & \quad \left. \left. 
H_{2}(\nu_{\rm A}) + {1 \over 3} \left(S^{(2)} - S^{(0)}\right)H_{0}(\nu_{\rm A}) \right] \sigma_{0} + {\cal O}(\sigma_{0}^{2}) \right] , \end{eqnarray} \noindent where $A_{\rm G}^{(\rm 2D)}$ is the amplitude \begin{eqnarray} \label{eq:ag} & & A_{\rm G}^{(\rm 2D)} \equiv {1 \over 2(2\pi)^{3/2}} {\sigma_{1}^{2} \over \sigma_{0}^{2}} , \end{eqnarray} \noindent and the skewness parameters $S^{(0)}, S^{(1)}, S^{(2)}$ are related to the three point cumulants and will not be used here. $\sigma_{1}$ and $\sigma_{0}$ are defined as integrals over the power spectrum, in this work smoothed with Gaussian kernels of comoving scale $R_{\rm G}$ \begin{eqnarray} \label{eq:s02} & & \sigma_{0}^{2} = {1 \over (2\pi)^{2}}\int d^{2} k_{\perp} e^{-k_{\perp}^{2}R_{\rm G}^{2}} P_{\rm 2D}(k_{\perp},z) , \\ \label{eq:s12} & & \sigma_{1}^{2} = {1 \over (2\pi)^{2}}\int d^{2} k_{\perp} k_{\perp}^{2} e^{-k_{\perp}^{2}R_{\rm G}^{2}} P_{\rm 2D}(k_{\perp},z), \end{eqnarray} \noindent and the projected two-dimensional power spectrum $P_{\rm 2D}(k_{\perp},z)$ is related to its full three-dimensional counterpart according to \begin{equation}\label{eq:p2d} P_{\rm 2D}(k_{\perp},z) = {2 \over \pi} \int dk_{\mathbin{\|}} P_{\rm 3D}\left(k,z\right) {\sin^{2} [k_{\mathbin{\|}} \Delta] \over k_{\mathbin{\|}}^{2} \Delta^{2}} , \end{equation} \noindent where $\Delta$ is the comoving thickness of the two-dimensional slices of the field. ${\vec k}_{\perp}$ and $k_{\mathbin{\|}}$ are the wave numbers perpendicular and parallel to the line of sight respectively. 
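The amplitude in equation (\ref{eq:ag}) can be evaluated numerically for any isotropic projected spectrum via equations (\ref{eq:s02}) and (\ref{eq:s12}); the constant prefactors of the $d^{2}k_{\perp}$ integrals cancel in the ratio $\sigma_{1}^{2}/\sigma_{0}^{2}$. A minimal midpoint-rule sketch, checked against an illustrative flat toy spectrum for which $\sigma_{1}^{2}/\sigma_{0}^{2} = R_{\rm G}^{-2}$ exactly:

```python
import math

def genus_amplitude_2d(p2d, r_g, k_min=1e-4, k_max=10.0, n_k=20000):
    """Two-dimensional genus amplitude A_G = sigma_1^2 / (2 (2 pi)^{3/2} sigma_0^2).

    p2d : callable giving the isotropic projected power spectrum P_2D(k).
    r_g : Gaussian smoothing scale.
    The angular part of each d^2k integral contributes the same constant
    factor to sigma_0^2 and sigma_1^2, so it cancels in the ratio.
    """
    dk = (k_max - k_min) / n_k
    s0 = s1 = 0.0
    for i in range(n_k):
        k = k_min + (i + 0.5) * dk
        w = k * math.exp(-(k * r_g) ** 2) * p2d(k) * dk
        s0 += w
        s1 += k * k * w
    return s1 / s0 / (2.0 * (2.0 * math.pi) ** 1.5)

# Flat toy spectrum: sigma_1^2 / sigma_0^2 = 1 / R_G^2 exactly, so
# A_G = 1 / (2 (2 pi)^{3/2} R_G^2).
a_g = genus_amplitude_2d(lambda k: 1.0, r_g=1.0)
```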
The three-dimensional power spectrum of the density field that is traced by galaxies is the sum of the redshift-space distorted matter field and a shot noise contribution \begin{equation}\label{eq:p3df} P_{\rm 3D} (k,k_{\mathbin{\|}},z) = b^{2} \left( 1 + \beta {k_{\mathbin{\|}}^{2} \over k^{2}}\right)^{2} P_{\rm m}(z, k) + P_{\rm SN} , \end{equation} \noindent where $P_{\rm m}(z,k)$ is the matter power spectrum at redshift $z$ and $P_{\rm SN} = 1/\bar{n}$ is the shot noise power spectrum, with $\bar{n}$ the number density of galaxies. We introduce $\beta = f/b$, where $b$ is the linear galaxy bias and $f$ is the linear growth rate. The quantity $\nu_{A}$ is the density threshold such that the excursion set has the same area fraction as a corresponding Gaussian field: \begin{equation}\label{eq:afrac} f_{A} = {1 \over \sqrt{2\pi}} \int^{\infty}_{\nu_{A}} e^{-t^{2}/2} dt , \end{equation} \noindent where $f_{A}$ is the fractional area of the field above $\nu_{A}$. This choice of $\nu_{\rm A}$ parameterization eliminates the non-Gaussianity in the one-point function \citep{1987ApJ...319....1G,1987ApJ...321....2W,1988ApJ...328...50M}. For the case of a Gaussian field, the genus amplitude is a measure of the shape of the linear matter power spectrum $P_{\rm m}(z,k)$, which is a conserved quantity for the $\Lambda$CDM model and certain generalisations (such as $w$CDM, assuming dark energy perturbations are negligible). If we use an incorrect cosmological model to infer the distance-redshift relation, then we get the smoothing scale $R_{\rm G}$ and volume occupied by galaxy data systematically wrong at different redshifts. Hence we will measure the shape of the power spectrum at different scales when using an incorrect expansion history. As a result, the genus amplitude that we extract from the data will spuriously evolve with redshift if we get the expansion history wrong.
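In practice the threshold $\nu_{A}$ is obtained from a measured excursion-set area fraction by inverting equation (\ref{eq:afrac}); since $f_{A}$ decreases monotonically with $\nu_{A}$, a simple bisection suffices. A minimal sketch:

```python
import math

def area_fraction(nu_a):
    """Gaussian tail probability above nu_a: f_A = erfc(nu_a / sqrt(2)) / 2."""
    return 0.5 * math.erfc(nu_a / math.sqrt(2.0))

def nu_from_area_fraction(f_a, lo=-10.0, hi=10.0, tol=1e-10):
    """Invert f_A(nu_A) by bisection; f_A decreases monotonically with nu_A."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_fraction(mid) > f_a:
            lo = mid  # threshold too low: excursion area still too large
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# A median-area excursion set (f_A = 0.5) corresponds to nu_A = 0.
nu_median = nu_from_area_fraction(0.5)
```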
A low redshift measurement will represent the `true' genus amplitude having little dependence on the cosmology adopted, against which high redshift measurements can be compared. This effect was predicted in \citep{Park:2009ja} and explicitly measured using mock galaxies in \cite{Appleby:2018jew}. In reality a number of small systematic effects are present in real galaxy data that generate redshift evolution of this statistic. The primary sources of contamination are as follows, listed in order of severity \begin{enumerate} \item{We bin galaxies into redshift shells and apply a mass cut to fix the number density of tracers at each redshift to be constant, thus fixing a constant shot noise power spectrum $P_{\rm SN}$ in each shell. In contrast, the amplitude of the matter power spectrum $P_{\rm m}(z,k)$ decreases with redshift. It follows that the relative importance of the shot noise contribution in ($\ref{eq:p3df}$) will increase with redshift, which will manifest as an increasing genus amplitude at higher $z$. This effect depends on $R_{\rm G}$ relative to the mean galaxy separation $\bar{r}$, and is negligible for $R_{\rm G} \gg \bar{r}$.} \item{Linear redshift space distortion decreases the amplitude of the two-dimensional genus by around $\sim 9\%$, roughly constant over the redshift range $0 < z < 0.7$. However, it also introduces a mild $\sim 1\%$ redshift dependent evolution, decreasing the amplitude with increasing redshift. This is due to the redshift dependence of $\beta(z)$ in equation ($\ref{eq:p3df}$).} \item{Non-linear gravitational evolution will typically act to decrease the genus amplitude with decreasing redshift, which is an ${\cal O}(\sigma_{0}^{2})$ effect (so-called gravitational smoothing \cite{1989ApJ...345..618M,1991ApJ...378..457P,2005ApJ...633....1P}). 
} \end{enumerate} The magnitude of each of these effects depends on the number density of galaxies, the smoothing scales perpendicular and parallel to the line of sight ($R_{\rm G}$ and $\Delta$) and the area of the data. In Appendix A we use mock galaxy lightcone data to examine these effects in isolation, and argue that for the data and smoothing scales used in this work, no significant redshift evolution of the genus amplitude will be induced. To briefly summarise the results in Appendix A : We take constant comoving scale $R_{\rm G} = 20 {\rm Mpc}$ to Gaussian smooth the data perpendicular to the line of sight, and comoving slice thickness $\Delta = 80 \, {\rm Mpc}$ along the line of sight. At these scales, the redshift space distortion and shot noise effects both introduce an evolution of the genus amplitude of order $\sim 1\%$ over the redshift range $0 < z < 0.7$. Shot noise/redshift space distortion causes the genus amplitude to increase/decrease with increasing $z$. The two competing effects effectively cancel for the particular galaxy sample considered in this work. Furthermore, the mean galaxy separation of the two-dimensional projected fields is approximately $\bar{r} \simeq 15 \, {\rm Mpc}$, smaller than $R_{\rm G} = 20 \, {\rm Mpc}$. This makes the non-Gaussianity of the shot noise contribution small. The non-Gaussian gravitational corrections to the amplitude are small. We quantify this statement by measuring the next-to-leading-order correction term $a_{3}H_{3}(\nu_{\rm A})$, finding it to be $\sim {\cal O}(1\%)$ at the scales probed. Non-Gaussian corrections are suppressed when the area fraction threshold $\nu_{\rm A}$ is used to define the excursion set as opposed to the standard threshold $\nu$. We find no evidence of evolution of $a_{3}$ over the range $0.25 < z < 0.6$ relevant to the BOSS data. Numerical systematic effects also exist. 
The area of our data slices decreases at low redshifts for fixed solid angle, and the excursion set regions at high $|\nu|$ are more difficult to sample in a smaller area. Whenever the excursion set is poorly sampled, the genus amplitude will generically be biased high. To eliminate this bias, we must only measure the genus curve over a range of threshold values $-\nu_{0} < \nu < \nu_{0}$ for which the excursion set is well sampled at all redshifts. We vary the threshold limit $\nu_{0}$ to check that the data provides an unbiased measurement of the genus curve. The range $|\nu_{\rm A}| < 2.5$ is well represented within our shells, so we measure the genus curve over this range. \section{Observational Data $0.25 < \lowercase{z} < 0.6$} \label{sec:obs} Our treatment of the high redshift data -- the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) \citep{2000AJ....120.1579Y} -- has been described in detail in \cite{Appleby:2020pem}. To briefly review, we bin the galaxies into $N_{\rm z} = 12$ shells of comoving thickness $\Delta = 80 \, {\rm Mpc}$, six each from the LOWZ and CMASS data, over the range $0.25 < z < 0.6$. We apply a mass cut to fix the number density as $\bar{n} = 6.25 \times 10^{-5} \, ({\rm Mpc})^{-3}$ within each shell. With this choice, the shot noise contribution to the field is large but the clustering signal is dominant for the smoothing scales adopted in this work. The galaxies are weighted to account for observational systematics.
Specifically, the following weight was applied to each galaxy in the LOWZ and CMASS sample \begin{equation} w_{\rm tot} = w_{\rm systot}\left( w_{\rm cp} + w_{\rm noz} -1 \right) \end{equation} \noindent where $w_{\rm cp}$ is the correction factor that accounts for the subsample of galaxies that are not assigned a spectroscopic fibre, $w_{\rm noz}$ accounts for galaxies for which the pipeline failed to assign a redshift, and $w_{\rm systot}$ represents non-cosmological fluctuations in the CMASS target density due to stellar density and seeing. The redshift bin limits are presented in Table \ref{tab:1}; these were derived using the Planck cosmological parameters $w_{\rm de} = -1$, $\Omega_{\rm m} = 0.307$ to define slices of constant comoving thickness $\Delta=80 \, {\rm Mpc}$. In principle we should vary these limits and re-bin the galaxies each time we vary the cosmology in the distance redshift relation. However, because the genus amplitude is insensitive to $\Delta$ for thick slices, we can fix these limits throughout without biasing our results. We provide evidence to support this statement in Appendix B. \begin{table} \begin{center} \begin{tabular}{|| c | c ||} \hline LOWZ & CMASS \\ \hline \, $0.250 < z \leq 0.271$ \, & \, $0.453 < z \leq 0.476$ \, \\ \, $0.271 < z \leq 0.292$ \, & \, $0.476 < z \leq 0.500$ \, \\ \, $0.292 < z \leq 0.313$ \, & \, $0.500 < z \leq 0.524$ \, \\ \, $0.313 < z \leq 0.334$ \, & \, $0.524 < z \leq 0.548$ \, \\ \, $0.334 < z \leq 0.356$ \, & \, $0.548 < z \leq 0.573$ \, \\ \, $0.356 < z \leq 0.378$ \, & \, $0.573 < z \leq 0.598$ \, \\ \hline \end{tabular} \caption{\label{tab:1} The redshift limits of the LOWZ and CMASS shells used in this work.} \end{center} \end{table} HEALPix\footnote{http://healpix.sourceforge.net} \citep{Gorski:2004by} is used to bin the galaxies into pixels on the unit sphere.
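For concreteness, the total weight $w_{\rm tot}$ defined above can be assembled from the per-galaxy weight columns in a few lines; the numerical values below are invented for illustration and are not taken from the catalogs.

```python
import numpy as np

# Hypothetical per-galaxy weight columns of the kind provided with the
# LOWZ/CMASS catalogs (values are made up for illustration only).
w_systot = np.array([1.02, 0.98, 1.00])  # stellar density / seeing correction
w_cp     = np.array([1.00, 2.00, 1.00])  # fibre collision correction
w_noz    = np.array([1.00, 1.00, 2.00])  # redshift failure correction

# Total weight applied to each galaxy: w_tot = w_systot * (w_cp + w_noz - 1)
w_tot = w_systot * (w_cp + w_noz - 1.0)
```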
A galaxy number density field $\delta_{i,j} \equiv (n_{i,j}-\bar{n}_{j})/\bar{n}_{j}$ is defined, where $1 \leq j \leq N_{\rm z}$ denotes the redshift bin (of which there are $N_{\rm z}=12$ in total) and $1 \leq i \leq N_{\rm pix}$ is the pixel identifier on the unit sphere. $\bar{n}_{j}$ is the mean number of galaxies contained within a pixel at each redshift shell, and $n_{i,j}$ is the number of galaxies contained within pixel $i$ in redshift slice $j$. We use $N_{\rm pix} = 12 \times 512^{2}$ pixels. The survey geometry and veto masks \cite{Reid:2015gra} were then used to generate a binary HEALPix map: $\Theta_{i} = 1$ if the survey angular selection function in the $i^{\rm th}$ pixel is larger than some cutoff $\Theta_{\rm cut} = 0.8$ and $\Theta_{i} = 0$ otherwise, where $i$ runs over $N_{\rm pix}$ pixels. The $\Theta_{i}$ mask was applied to the galaxy field $\delta_{i, j}$. We smooth the two-dimensional density fields, and the $\Theta_{i}$ mask, in each shell using angular scale $\theta_{\rm G} = R_{\rm G}/d_{\rm cm}(z_{j},\Omega_{\rm m}, w_{\rm de})$, where $R_{\rm G} = 20 \, {\rm Mpc}$ is the comoving smoothing scale and $d_{\rm cm}(z_{j},\Omega_{\rm m}, w_{\rm de})$ is the comoving distance to the center of the $j^{\rm th}$ redshift shell. Defining $\tilde{\Theta}_{i,j}$ and $\tilde{\delta}_{i,j}$ as the smoothed mask and density fields, we re-define $\tilde{\delta}_{i,j} = 0$ if $\tilde{\Theta}_{i,j} < \Theta_{\rm cut}$ and $\tilde{\delta}_{i,j} \to \tilde{\delta}_{i,j}/\tilde{\Theta}_{i,j}$ otherwise. Finally, we re-apply the original unsmoothed $\Theta_{i}$ mask. This procedure eliminates regions close to the boundary, where the field may not be well reconstructed. In Appendix C of \cite{Appleby:2020pem} we explicitly show that our masking procedure, and method of genus extraction, provides an unbiased estimate of the genus, and we direct the reader to this paper for further details.
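The boundary correction described above (zeroing poorly covered pixels, dividing out mask leakage, then re-applying the unsmoothed mask) can be sketched as follows. The helper name and the flat arrays are illustrative only; in practice the smoothing is performed on the sphere with HEALPix, and the smoothed fields are assumed to be given.

```python
import numpy as np

def mask_and_normalise(delta_s, theta_s, theta, theta_cut=0.8):
    """Boundary correction applied to the smoothed density field.

    delta_s   : smoothed density field (masked pixels were zero before smoothing)
    theta_s   : binary survey mask smoothed with the same kernel
    theta     : original unsmoothed binary mask
    theta_cut : completeness cut (0.8 in the text)
    """
    # zero the field where the smoothed mask falls below the cut,
    # otherwise divide out the leakage of the mask into the field ...
    out = np.where(theta_s < theta_cut, 0.0,
                   delta_s / np.maximum(theta_s, 1e-12))
    # ... and finally re-apply the original unsmoothed mask
    return out * theta
```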
The important underlying point is that we are extracting the genus per unit area, which is a local quantity and hence can be estimated in an unbiased manner from a cut-sky galaxy sample. Finally we divide the genus by the total area of the data $A_{j} = 4\pi f_{\rm sky} d_{\rm cm}^{2}(z_{j},\Omega_{\rm m}, w_{\rm de})$, where $f_{\rm sky}$ is the fractional area of the data on the sky. The genus is reconstructed using the method described in \cite{Schmalzing:1997uc,Appleby:2018jew}, which provides an unbiased estimate of the full sky genus from an observed patch. We measure the genus for $200$ values of the threshold $\nu_{\rm A}$, equi-spaced over the range $-2.5 < \nu_{\rm A} < 2.5$, then take the average over every four values to obtain $N_{\nu_{\rm A}}=50$ measurements. We label the measured values $g_{j}^{n}$, where $j$ runs over the redshift shells and $1 \leq n \leq N_{\nu_{\rm A}}$ over the $N_{\nu_{\rm A}}=50$ thresholds. We then extract the genus amplitudes $A^{(\rm 2D)}_{j}$ by minimizing the following $\chi^{2}$ functions at each redshift -- \begin{equation}\label{eq:ch2d} \chi^{2}_{j} = \sum_{n=1}^{N_{\nu_{\rm A}}}\sum_{m=1}^{N_{\nu_{\rm A}}} \Delta g^{n}_{j} \Sigma_{n,m}^{-1}(z_{j}) \Delta g^{m}_{j} , \end{equation} \noindent with respect to the parameters $A_{j}^{\rm (2D)}, a_{0, j}, a_{2, j}, a_{3, j}$, where \begin{eqnarray} \nonumber & & \Delta g^{n}_{j} = g_{j}^{n} - A_{j}^{\rm (2D)} e^{-\nu_{{\rm A},n}^{2}/2} \left[ a_{0, j} H_{0}(\nu_{{\rm A},n}) + \right. \\ \label{eq:herm2d} & & \qquad \left. H_{1}(\nu_{{\rm A},n}) + a_{2, j} H_{2}(\nu_{{\rm A},n}) + a_{3, j} H_{3}(\nu_{{\rm A},n}) \right] , \end{eqnarray} \noindent and $\Sigma_{n,m}(z_{j})$ are the covariance matrices associated with $\Delta g^{n}_{j}$. 
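Since the model in ($\ref{eq:herm2d}$) is linear in the products $A_{j}^{\rm (2D)} a_{n,j}$ (with the coefficient of $H_{1}$ fixed to unity), the minimization reduces to a single generalised least squares solve. A minimal sketch, assuming probabilists' Hermite polynomials (so $H_{1}(\nu)=\nu$) and an invertible covariance matrix:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_n

def fit_genus_amplitude(nu, g, cov):
    """Fit g(nu) = A exp(-nu^2/2) [a0 H0 + H1 + a2 H2 + a3 H3] by GLS.

    The model is linear in c_n = A * a_n (with a1 = 1), so the chi^2
    is minimized by one weighted normal-equations solve.
    """
    env = np.exp(-nu**2 / 2.0)
    # design matrix: columns env * He_n(nu) for n = 0..3
    M = np.column_stack([env * hermeval(nu, np.eye(4)[n]) for n in range(4)])
    cinv = np.linalg.inv(cov)
    c = np.linalg.solve(M.T @ cinv @ M, M.T @ cinv @ g)
    A = c[1]  # coefficient of He_1 is the genus amplitude
    return A, c[0] / A, c[2] / A, c[3] / A
```

The same structure applies to the three-dimensional fit later in the text, with the amplitude attached to $H_{2}$ instead of $H_{1}$.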
$\Sigma_{n,m}(z_{j})$ are obtained using the patchy mock galaxy catalogs \citep{2016MNRAS.456.4156K,2016MNRAS.460.1173R,2014MNRAS.439L..21K,10.1093/mnras/stv645} -- further information on the covariance matrices used in our analysis can be found in \cite{Appleby:2020pem}. The measured genus values $g_{j}^{n}$ are functions of the distance-redshift relation, and hence the cosmological parameters $(\Omega_{\rm m}, w_{\rm de})$. This parameter sensitivity enters in the definition of the angular smoothing scale $\theta_{\rm G} = R_{\rm G}/d_{\rm cm}(z_{j},\Omega_{\rm m}, w_{\rm de})$ and the area occupied by the data $A_{j} = 4\pi f_{\rm sky} d_{\rm cm}^{2}(z_{j},\Omega_{\rm m}, w_{\rm de})$. We repeat our measurement of $g_{j}^{n}$ and minimization of ($\ref{eq:ch2d}$) for each cosmological parameter set. We fix $h=0.677$ to its Planck value throughout, where $H_{0} = 100 h \, {\rm km}\, {\rm s^{-1}} \, {\rm Mpc}^{-1}$. \begin{table} \begin{center} \begin{tabular}{||c c ||} \hline Parameter \, & Fiducial Value \\ [0.5ex] \hline\hline $\Omega_{\rm m}$ & $0.307$ \\ $h$ & $0.677$ \\ $w_{\rm de}$ & $-1$ \\ $\Delta$ & $80 \, {\rm Mpc}$ \\ $R_{\rm G}$ & $20 \, {\rm Mpc}$ \\ \hline \end{tabular} \caption{\label{tab:ii}Fiducial parameters used to fix the slice thickness and to calculate the genus in this work. $\Delta$ is the thickness of the two dimensional slices of the density field, and $R_{\rm G}$ is the Gaussian smoothing scale used in the two-dimensional planes perpendicular to the line of sight. } \end{center} \end{table} Figure \ref{fig:4} exhibits the two-dimensional genus curves [top panel] and the corresponding amplitudes $A^{(\rm 2D)}_{j}$ [bottom panel] extracted from the $N_{\rm z} = 12$ LOWZ and CMASS data shells \citep{Appleby:2020pem}. The genus curves and amplitudes are functions of the assumed cosmological model, and in this figure we have taken a $\Lambda$CDM model with parameters given in Table \ref{tab:ii}.
The genus amplitude is reconstructed to accuracy $\sim 3\%$ and $\sim 1.5\%$ in the LOWZ and CMASS data respectively, and we present the best fit amplitudes, $1 \, \sigma$ error bars and reduced $\chi^{2}$ values in Table \ref{tab:amps}. Three of the redshift bins present relatively poor fits with a $\chi^{2}$ per degree of freedom $> 1.5$: two in the LOWZ data and one CMASS slice. This could indicate that the mocks are under-predicting the true statistical uncertainty, possibly lacking cosmic variance. A theoretical understanding of the statistical uncertainty of the Minkowski functionals is currently lacking, as no prediction for their covariance is available. However, the amplitudes extracted from the slices are all consistent. We test this by performing a simple linear regression on the best fit $A^{(2D)}$ data points, assuming the data are uncorrelated. We find a p-value of $p=0.83$ for the null hypothesis that the slope of the linear fit is zero, indicating no statistically significant redshift evolution of the genus amplitude over the redshift range $0.25 < z < 0.6$. This is expected from theoretical arguments, but provides an important consistency check on our analysis. \begin{figure}[b!] \includegraphics[width=0.5\textwidth]{2D_genus_CMASS_LOWZ_Rg_20.pdf} \includegraphics[width=0.5\textwidth]{BOSS_amplitude_Rg_20.pdf} \caption{[Top panel] Twelve two-dimensional genus curves obtained from the BOSS LOWZ and CMASS data, as a function of $\nu_{\rm A}$. [Bottom panel] Two-dimensional genus amplitude measurements derived from the $N_{\rm z} = 12$ genus curves presented in the top panel. The same color scheme is applied in both panels.
} \label{fig:4} \end{figure} \begin{table} \begin{center} \begin{tabular}{|| c c c ||} \hline Redshift & $A^{(\rm 2D)} \times 10^{5} \, {\rm Mpc}^{-2}$ & $\chi^{2}/{\rm DoF}$ \\ \hline 0.26 & $4.97 \pm 0.16$ & 1.49 \\ 0.28 & $4.95 \pm 0.15$ & 1.12 \\ 0.30 & $5.16 \pm 0.13$ & 1.53 \\ 0.32 & $5.19 \pm 0.13$ & 1.58 \\ 0.35 & $5.14 \pm 0.13$ & 1.02 \\ 0.37 & $4.82 \pm 0.12$ & 1.20 \\ \hline 0.46 & $5.15 \pm 0.09$ & 1.34 \\ 0.49 & $5.11 \pm 0.10$ & 1.08 \\ 0.51 & $4.92 \pm 0.09$ & 1.63 \\ 0.54 & $5.04 \pm 0.08$ & 0.96 \\ 0.56 & $5.03 \pm 0.08$ & 1.15 \\ 0.59 & $5.14 \pm 0.08$ & 1.41 \\ \hline \end{tabular} \caption{\label{tab:amps}The mean and $1 \, \sigma$ uncertainty of the genus amplitudes extracted from the six LOWZ and six CMASS shells. The third column is the reduced $\chi^{2}$ value of the fit (46 degrees of freedom). } \end{center} \end{table} \section{Low Redshift Data $0 < \lowercase{z} < 0.12$} \label{sec:3D} To test the expansion history, we also require an accurate measurement of the genus at low redshift, which should be practically insensitive to the distance-redshift relation. This would provide an anchor: a measurement of the shape of the linear matter power spectrum against which high redshift genus curves can be compared. However, two-dimensional slices at low redshifts have very small areas and suffer from curvature effects. To overcome this limitation we use the three-dimensional local galaxy distribution in the SDSS MGS and apply a Gaussian smoothing over a smaller scale. The measured three-dimensional genus will be used to estimate the two-dimensional genus amplitude. In the following sections, we describe our method in detail -- the theory underlying the three-dimensional genus, the galaxy data used, the mask, how we remove systematics from the genus amplitude using mock galaxy catalogs, and how we infer the two-dimensional genus amplitude from the three-dimensional data.
\subsection{Theory -- Expectation value of three-dimensional genus} The genus per unit volume of a three-dimensional Gaussian random field as a function of threshold $\nu$ is given by \citep{10.1143/PTP.76.952, Adler, Gott:1986uz, Hamilton:1986} \begin{eqnarray} \label{eq:gg} & & g_{\rm 3D}(\nu) = {1 \over 4\pi^{2}} \left({\Sigma_{1}^{2} \over 3\Sigma_{0}^{2}}\right)^{3/2} \left(1 - \nu^{2} \right) e^{-\nu^{2}/2} , \\ \nonumber & & \Sigma_{0}^{2} = \langle \delta^{2}_{\rm 3D} \rangle , \qquad \Sigma_{1}^{2} = \langle |\nabla \delta_{\rm 3D} |^{2} \rangle , \end{eqnarray} \noindent where $\Sigma_{0,1}$ are the two-point cumulants of the three-dimensional field, related to the power spectrum as \begin{eqnarray} \label{eq:s03} & & \Sigma_{0}^{2} = \int d^{3} k \, e^{-k^{2}\Lambda_{\rm G}^{2}} P_{\rm 3D}(k) , \\ \label{eq:s13} & & \Sigma_{1}^{2} = \int d^{3} k \, e^{-k^{2}\Lambda_{\rm G}^{2}} k^{2} P_{\rm 3D}(k) , \end{eqnarray} \noindent where we have smoothed with a Gaussian kernel of width $\Lambda_{\rm G}$. The genus amplitude is given by \begin{equation}\label{eq:g1} A_{\rm G}^{(\rm 3D)} = {1 \over 4\pi^{2}} \left({\Sigma_{1}^{2} \over 3\Sigma_{0}^{2}}\right)^{3/2} . \end{equation} \noindent The leading order non-Gaussian expansion of the genus, in terms of the $\nu_{\rm A}$ threshold convention, is given by \citep{Matsubara:1994we, 2000astro.ph..6269M,Pogosyan:2009rg, Gay:2011wz,Codis:2013exa} \begin{eqnarray} \nonumber & & g_{\rm 3D}(\nu_{\rm A}) = A_{\rm G}^{(\rm 3D)} e^{-\nu_{\rm A}^{2}/2} \left[H_{2}(\nu_{\rm A}) + \left[ \left(S^{(1)} - S^{(0)}\right) \times \right. \right. \\ \label{eq:mats3d} & & \quad \left. \left. H_{3}(\nu_{\rm A}) + \left(S^{(2)} - S^{(0)}\right) H_{1}(\nu_{\rm A})\right] \Sigma_{0} + {\cal O}(\Sigma_{0}^{2}) \right] .
\end{eqnarray} \noindent As for the two-dimensional genus, the amplitude (the coefficient of the $H_{2}$ Hermite polynomial) is not modified by the non-Gaussian effect of gravitational collapse to linear order in the $\Sigma_{0}$ expansion ($\ref{eq:mats3d}$). \subsection{Data} \label{sec:3DSDSS} To extract the genus of the low redshift matter density, we use the main galaxy catalog of the seventh data release of the SDSS (DR7) \citep{2009ApJS..182..543A}. Specifically, we adopt the Korea Institute for Advanced Study Value Added Galaxy Catalog (KIAS VAGC) \citep{articleyyc,2005AJ....129.2562B,2008ApJ...674.1217P}. The KIAS catalog supplements the SDSS redshifts with those from other existing galaxy redshift catalogs -- the updated Zwicky catalog \citep{1999PASP..111..438F}, the IRAS Point Source Catalog Redshift Survey \cite{Saunders:2000af}, the Third Reference Catalogue of Bright Galaxies \citep{1991rc3..book.....D}, and the Two Degree Field Galaxy Redshift Survey \citep{2001MNRAS.328.1039C}. The KIAS VAGC contains $593,514$ redshifts of SDSS main galaxies in the $r$-band Petrosian magnitude range $10 < r_{\rm p} < 17.6$. Details of the selection criteria, classification schemes and angular selection functions can be found in \cite{articleyyc}. To maximize the area to boundary ratio of the data, we remove the three southern stripes and Hubble deep field region. The catalog provides angular positions, redshifts and absolute $r$-band magnitudes normalised to the $z=0.1$ epoch, calculated from extinction corrected AB fluxes and an evolution correction $E(z) = 1.6(z-0.1)$ \citep{Tegmark:2003uf}. All magnitudes and colors are corrected to the redshift $z=0.1$ epoch. Following \cite{Choi:2010sx}, we apply a magnitude cut $M_{\rm r} < -20.19 + 5 \log h$ to generate a volume limited sample over the redshift range $0.02 < z < 0.116$, with a mean galaxy separation of $r_{\rm gal} = \bar{n}_{\rm gal}^{-1/3} = 8.3 \, {\rm Mpc}$, where $\bar{n}_{\rm gal}$ is the mean galaxy number density within the volume.
The redshift range was selected to ensure that a maximal number of galaxies are used in the analysis. The galaxies are presented as a function of redshift and absolute magnitude in Figure \ref{fig:sdss} (top panel), and the angular distribution of all galaxies used in this work is presented in the bottom panel. Note that in this Figure and in what follows the factor $5 \log h$ will be dropped in the expression of $M_{\rm r}$. \begin{figure}[b!] \includegraphics[width=0.5\textwidth]{sdss_mag_z.pdf} \includegraphics[width=0.5\textwidth]{sdss_ra_dec.pdf} \caption{[Top panel] The absolute, $r$-band magnitude of the SDSS MGS galaxies as a function of redshift. The solid red lines indicate the boundaries of our volume limited sample with $0.02 < z < 0.116$ and $M_{\rm r} < -20.19$. [Bottom panel] The angular distribution of the volume limited sample of galaxies on the sky; declination vs right ascension (in degrees). } \label{fig:sdss} \end{figure} To convert the galaxy catalog into a three-dimensional density field, we construct a regular three-dimensional $N_{\rm pix}^{3} = 512^3$ pixel lattice in a cube of side length $L_{\rm box} = 750 \, {\rm Mpc}$ and use cosmological parameters given in Table \ref{tab:ii} to infer the distance-redshift relation. We bin the galaxies into pixels using the Cloud-in-Cell scheme, generating a three-dimensional number density field $\delta_{ijk} = (n_{ijk}-\bar{n})/\bar{n}$, where $\bar{n}$ is the average number of galaxies within the unmasked pixels and $1 \leq i,j,k \leq N_{\rm pix}$ subscripts are pixel labels. The galaxies are weighted via the angular selection function during this binning procedure. The angular selection function constitutes a set of weights as a function of angular position on the sky, $w(\theta,\phi)$. It is defined in this work as $w = 0$ when outside the survey geometry or inside a bright star mask and $0 < w \leq 1$ when inside the survey geometry.
This function represents the survey completeness as a function of position on the sky. Because we weight the galaxies according to the angular selection function, we convert $w$ into a binary field with $w = 1$ if $w > w_{\rm cut}$ and $w=0$ otherwise, with $w_{\rm cut} = 0.8$. Using the fiducial distance-redshift relation, we define $w_{ijk}$ as the projection of the angular selection function into a $512^{3}$, three-dimensional pixel cube of the same dimensions as $\delta_{ijk}$. We smooth the three-dimensional density field $\delta_{ijk}$ with a Gaussian kernel of width $\Lambda_{\rm G} = 8.86 \, {\rm Mpc}$ (corresponding to $R_{\rm G} = 6 \, h^{-1} {\rm Mpc}$, following \cite{Choi:2010sx}), and also smooth the projected selection function $w_{ijk}$ with the same kernel, defining the smoothed counterparts as $\tilde{\delta}_{ijk}$ and $\tilde{w}_{ijk}$. We then redefine $\tilde{\delta}_{ijk} = 0$ for all pixels in which $\tilde{w}_{ijk} < 0.9$ and $\tilde{\delta}_{ijk} = \tilde{\delta}_{ijk}/\tilde{w}_{ijk}$ if $\tilde{w}_{ijk} \ge 0.9$. We then re-apply the original mask and set $\tilde{\delta}_{ijk} = 0$ if $w_{ijk} = 0$. This eliminates all data in the vicinity of the survey boundary. From the masked field we reconstruct the three-dimensional genus, by generating iso-field triangulated meshes and calculating the Gaussian curvature at the triangle vertices. Details of the method can be found in \cite{Appleby:2018tzk}. We calculate the genus as a function of $\nu_{\rm A}$, where $\nu_{\rm A}$ is the threshold chosen to match the volume fraction of a Gaussian random field. We select $200$ $\nu_{\rm A}$ threshold values over the range $-2.5 < \nu_{\rm A} < 2.5$, then take the average of every four values to obtain the genus at $N_{\nu_{\rm A}} = 50$ equi-spaced $\nu_{\rm A}$ values over this range. The resulting measured genus values are presented in Figure \ref{fig:sdss_g3d} (top panel, red points).
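The Cloud-in-Cell assignment used to construct $\delta_{ijk}$ shares each galaxy between neighbouring pixels with weights linear in the distance to the pixel centres. A one-dimensional sketch of the scheme (the analysis uses the three-dimensional analogue; periodic wrapping here is a simplification, not a feature of the survey volume):

```python
import numpy as np

def cloud_in_cell_1d(x, n_pix, box):
    """Assign unit-mass particles at positions x (in [0, box)) to a 1D grid.

    Each particle is split between the two nearest pixel centres,
    so the total count is conserved exactly.
    """
    dx = box / n_pix
    s = x / dx - 0.5                  # pixel centres sit at (i + 0.5) * dx
    i = np.floor(s).astype(int)
    frac = s - i                      # distance past the left centre, in units of dx
    grid = np.zeros(n_pix)
    np.add.at(grid, i % n_pix, 1.0 - frac)
    np.add.at(grid, (i + 1) % n_pix, frac)
    return grid
```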
To extract the genus amplitude from the measurements, we fit a Hermite polynomial expansion to the data points by minimizing the following $\chi^{2}$ function \begin{equation}\label{eq:chb} \chi^{2} = \Delta g^{\rm T} \Gamma^{-1} \Delta g , \end{equation} \noindent where \begin{eqnarray} \nonumber & & \Delta g_{i} = g_{i} - A^{(\rm 3D)} e^{-\nu_{{\rm A},i}^{2}/2} \left[ a_{0} H_{0}(\nu_{{\rm A},i}) + a_{1} H_{1}(\nu_{{\rm A},i}) + \right. \\ \label{eq:herma} & & \qquad \qquad \left. H_{2}(\nu_{{\rm A},i}) + a_{3} H_{3}(\nu_{{\rm A},i}) + a_{4} H_{4}(\nu_{{\rm A},i}) \right] . \end{eqnarray} \noindent $A^{(\rm 3D)}, a_{0}, a_{1}, a_{3}, a_{4}$ are free parameters to be constrained via the minimization of ($\ref{eq:chb}$); the $i$ subscript denotes the $i^{\rm th}$ $\nu_{\rm A}$ threshold bin, and $g_{i}$ are the measured genus values. In the fitting procedure we include the leading order $a_{1},a_{3}$ Hermite polynomial coefficients and the next-to-leading order even Hermite polynomial contributions $a_{0}$, $a_{4}$. Introducing additional Hermite polynomials does not significantly modify the fit. The covariance matrix $\Gamma_{ij}$ is obtained from mock galaxy catalogs, as described in the following section. \subsection{Mock Galaxy Catalogs} Mock galaxy catalogs are generated using Horizon Run 4 (HR4) \cite{Kim:2015yma}. Horizon Run 4 is a cosmological scale dark matter simulation in which $N = 6300^{3}$ particles in a volume $V = (3150 {\rm Mpc}/h)^{3}$ are evolved using a modified GOTPM scheme\footnote{For a description of the original GOTPM code, please see \cite{Dubinski:2003fq}. A description of the modifications introduced in the Horizon Run project can be found at https://astro.kias.re.kr/~kjhan/GOTPM/index.html.}. The initial conditions are obtained using second order Lagrangian perturbation theory \cite{L'Huillier:2014dpa}, and the cosmological parameters used are $h=0.72$, $n_{\rm s} = 0.96$, $\Omega_{\rm m} = 0.26$, $\Omega_{\rm b} = 0.048$.
We use the $z=0$ snapshot box to create mock galaxy catalogs, using the HR4 cosmological parameters to infer distances. Details of the numerical implementation, and the method by which mock galaxies are constructed, can be found in \cite{Hong:2016hsd}. The mock galaxies are defined using the most bound halo particle galaxy correspondence scheme, and the survival time of satellite galaxies post merger is estimated via the merger timescale model described in \cite{Jiang:2007xd}. The snapshot box is decomposed into $N_{\rm r}=360$ non-overlapping volumes, and mock galaxy catalogs are constructed from each region, with the same number density, redshift range and survey geometry as the data. From each mock sample we repeat our analysis: bin the galaxies into a regular cubic pixel lattice, smooth the resulting number density field with a Gaussian of scale $\Lambda_{\rm G} = 8.86 \, {\rm Mpc}$, apply the smoothed mask $\tilde{w}_{ijk}$ then the unsmoothed binary mask $w_{ijk}$, then extract the genus from $\tilde{\delta}_{ijk}$. The result is a set of genus measurements $g^{(\rm 3D)}_{i, m}$, where $1 \leq i \leq N_{\nu_{\rm A}}$ runs over $N_{\nu_{\rm A}} = 50$ $\nu_{\rm A}$ bins uniformly sampled in the range $-2.5 < \nu_{\rm A} < 2.5$ and $1 \leq m \leq N_{\rm r}$ runs over the realisations. We measured the genus for $200$ threshold values and averaged over every four consecutive values to arrive at the $N_{\nu_{\rm A}} = 50$ values in each mock sample. The covariance matrix is constructed as \begin{equation} \label{eq:cov3d} \Gamma_{ij} = {1 \over N_{\rm r} - 1 }\sum_{m=1}^{N_{\rm r}} \left(g^{(\rm 3D)}_{i,m} - \langle g^{(\rm 3D)}_{i} \rangle \right) \left(g^{(\rm 3D)}_{j,m} - \langle g^{(\rm 3D)}_{j} \rangle \right) , \end{equation} \noindent where $\langle g_{i}^{(\rm 3D)} \rangle$ is the average value of the genus in the $i^{\rm th}$ threshold bin. In Figure \ref{fig:sdss_g3d} (bottom panel) we exhibit the covariance matrix $\Gamma_{ij}$ extracted from the mock realisations.
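The estimator ($\ref{eq:cov3d}$) is the standard unbiased sample covariance over the mock realisations; a minimal sketch:

```python
import numpy as np

def sample_covariance(g):
    """Sample covariance of mock genus curves, as in eq. (cov3d).

    g : array of shape (N_r, N_nu), one genus curve per mock realisation.
    """
    n_r = g.shape[0]
    d = g - g.mean(axis=0)            # subtract the mean curve <g_i>
    return d.T @ d / (n_r - 1)        # unbiased (N_r - 1) normalisation
```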
We note the strong correlation between genus values measured at different thresholds. A similar covariance matrix was numerically extracted from mock data in \cite{Blake:2013noa}. \subsection{Results -- Three-Dimensional Genus of SDSS MGS} \label{sec:res3d} In Figure \ref{fig:sdss_g3d} (top panel) we exhibit the genus measured from the SDSS MGS (red points), and the best-fit curve reconstruction $g^{(\rm th)}(\nu_{\rm A})$ (black solid line). The error bars are the square root of the diagonal elements of $\Gamma_{ij}$. After minimizing the $\chi^{2}$ function ($\ref{eq:chb}$), we present the best fit values of, and uncertainties on, the $(A^{(\rm 3D)}, a_{0}, a_{1}, a_{3}, a_{4})$ parameters in Table \ref{tab:parm_herm} (first row). We also present a fit including just $(A^{(\rm 3D)}, a_{1}, a_{3})$, and $(A^{(\rm 3D)})$ only, for comparison. If we regard equation ($\ref{eq:herma}$) as an expansion in $\sigma_{0}$, then $a_{1},a_{3}$ should be of order $a_{1,3} \sim {\cal O} (\sigma_{0})$ and $a_{0}, a_{4} \sim {\cal O}(\sigma_{0}^{2})$. At the scales adopted in this work, the higher order terms are large, which indicates that the field is non-linear. In spite of this, all three amplitude measurements are consistent, although the second and third rows yield a significantly worse $\chi^{2}$. The genus amplitude presented in the first row of Table \ref{tab:parm_herm}, $A^{(\rm 3D)} = (4.040 \pm 0.197) \times 10^{-6} \, {\rm Mpc}^{-3}$, will be used as the low redshift genus amplitude measurement from the SDSS MGS. This low redshift data point will be used to complement the higher redshift, two-dimensional BOSS measurements.
\begin{table*} \begin{center} \begin{tabular}{|| c c c c c c ||} \hline $A^{(\rm 3D)} \times 10^{6} (\rm Mpc^{-3})$ & $a_{0}$ & $a_{1}$ & $a_{3}$ & $a_{4}$ & $\chi^{2}$ \\ [0.5ex] \hline\hline $4.040 \pm 0.197$ & $0.095 \pm 0.016$ & $-0.009 \pm 0.025$ & $0.042 \pm 0.019$ & $-0.006 \pm 0.014$ & $63.9$ \\ $4.167 \pm 0.135 $ & - & $-0.015 \pm 0.026$ & $0.026 \pm 0.018$ & - & $117.6$ \\ $4.084 \pm 0.130 $ & - & - & - & - & $124.0$ \\ \hline \end{tabular} \caption{\label{tab:parm_herm}Best fit Hermite polynomial coefficients for the three-dimensional genus curve extracted from the SDSS MGS. The top row is the full fitting function used in this work. In the second row we set $a_{0} = a_{4} = 0$ and in the third row we fix $a_{0} = a_{1} = a_{3} = a_{4} = 0$, and fit a Gaussian curve to the points. } \end{center} \end{table*} \begin{figure} \includegraphics[width=0.48\textwidth]{SDSS_3D_genus_best_fit.pdf} \includegraphics[width=0.48\textwidth]{correlation_3D.pdf} \caption{[Top panel] The genus curve measured from the SDSS MGS using a Gaussian smoothing length of $\Lambda_{\rm G} = 8.86 {\rm Mpc}$. The red points correspond to measured values, and the error bars are from the diagonal components of the covariance matrix ($\ref{eq:cov3d}$). The black solid line is the best fit curve reconstruction ($\ref{eq:herma}$) with parameters given in the first row of table \ref{tab:parm_herm}. [Bottom panel] The covariance matrix $\Gamma_{ij}$. Bins separated by $\Delta \nu_{\rm A} < 0.25$ are strongly correlated (red), and bins at larger separations present anti-correlation (blue). } \label{fig:sdss_g3d} \end{figure} The measured amplitude $A^{(\rm 3D)}$ of the SDSS MGS is effectively insensitive to cosmological parameters. We confirm that reasonable variation of cosmological parameters does not affect the measured value of $A^{(\rm 3D)}$ in Figure \ref{fig:ps1}. 
We select five parameter sets $(\Omega_{\rm m},w_{\rm de})= (0.21,-1), (0.31,-1), (0.38,-1), (0.31,-0.5), (0.31,-1.5)$ to infer the distance redshift relation, construct the density field from the galaxy positions and measure the genus amplitude $A^{(\rm 3D)}$ by minimizing the $\chi^{2}$ function ($\ref{eq:chb}$). The resulting amplitudes and uncertainties are presented in Figure \ref{fig:ps1}. We find no significant change in the measured genus amplitude if we use different cosmological parameters to infer the distance redshift relation, as expected at low redshift $z < 0.12$. For this reason, we fix $A^{(\rm 3D)} = (4.040 \pm 0.197) \times 10^{-6} \, {\rm Mpc}^{-3}$, corresponding to the red data point in Figure \ref{fig:ps1}. \begin{figure} \includegraphics[width=0.48\textwidth]{amplitude_SDSS_diff_cos.pdf} \caption{The genus amplitude as measured from the SDSS MGS, assuming five different cosmological models to infer the distance redshift relation. The measured amplitude of the low redshift sample is effectively insensitive to our choice. We select a fiducial cosmology $(\Omega_{\rm m}, w_{\rm de}) = (0.307, -1)$ to infer $A^{(\rm 3D)}$ (red diamond). } \label{fig:ps1} \end{figure} \section{Three- to Two-dimensional Genus Amplitude} \label{sec:conv} Our intention is to combine the SDSS MGS and BOSS genus measurements, and find the cosmology that minimizes the evolution of the two-dimensional genus amplitude. However, to directly compare these results, we must convert the three-dimensional genus amplitude measurement from the SDSS MGS to a corresponding effective two-dimensional amplitude.
To do so, we perform the following steps -- \begin{enumerate} \item{Correct the measured three-dimensional genus amplitude for gravitational smoothing and non-linear redshift space distortion with a correction factor obtained from simulations.} \item{Using the now corrected, real space amplitude, perform a cosmological parameter search by comparing this value to its Gaussian expectation value. The result is a set of parameter constraints on $(\Omega_{\rm c}h^{2}, n_{\rm s})$, which determine the shape of the linear power spectrum.} \item{Use the best fit cosmological parameters $(\Omega_{\rm c}h^{2}, n_{\rm s})$ to infer the two-dimensional theoretical expectation of the genus amplitude \begin{equation}\label{eq:agauss} A^{(\rm 2D)}_{{\rm G}} = {1 \over 2(2\pi)^{3/2}} { \int k_{\perp}^{3} e^{-k_{\perp}^{2}R_{\rm G}^{2}} P_{\rm 2D}(k_{\perp}) d k_{\perp} \over \int k_{\perp} e^{-k_{\perp}^{2}R_{\rm G}^{2}} P_{\rm 2D}(k_{\perp}) d k_{\perp} } , \end{equation} where the two-dimensional power spectrum $P_{\rm 2D}$ is related to the three-dimensional matter power spectrum according to equation ($\ref{eq:p2d}$), and we use the three-dimensional power spectrum ($\ref{eq:p3df}$) with $\bar{n}=6.25 \times 10^{-5} \, {\rm Mpc}^{-3}$, $b=2$, $R_{\rm G} = 20 \, {\rm Mpc}$, $\Delta = 80 \, {\rm Mpc}$; the values relevant to the BOSS data. The end result is an inferred two-dimensional genus amplitude, based on the SDSS MGS data. } \end{enumerate} \noindent In the following subsections we discuss each point in turn. \subsection{Systematics removal} \label{sec:3Dsys} Before comparing $A^{(\rm 3D)}$ to its Gaussian expectation value, we must account for non-linear effects. The most significant systematics that must be corrected are non-linear gravitational evolution and redshift space distortion.
At the smoothing scale $\Lambda_{\rm G} = 8.86 {\rm Mpc}$, the effect of redshift space distortion on the genus amplitude will not be well approximated by the linear Kaiser approximation \cite{Choi:2013eej}. Therefore, to eliminate its effect we directly compare the three-dimensional genus measured from the simulations in real and redshift space, and use the difference between these measurements as a correction factor to be applied to the SDSS genus amplitude measurement. To model these systematic corrections, we use the KIAS Multiverse simulations; a set of five cosmological scale, dark matter only simulations. Each is generated from a different cosmological model in which $\Omega_{\rm m}$ and $w_{\rm de}$ are varied \citep{2017ApJ...843...73S,Park:2019mvn,10.1093/mnras/staa566,article_moto}. Since our low redshift genus measurement will be practically insensitive to the value of $w_{\rm de}$, we use three of the simulations with cosmological parameters $(\Omega_{\rm m},w_{\rm de}) = (0.21,-1)$, $(0.26, -1)$ and ($0.31, -1)$ with all other cosmological parameters fixed as $\Omega_{\rm b} = 0.044$, $h = 0.72$, $n_{\rm s} = 0.96$. Each simulation comprises $N_{p} = 2048^3$ dark matter particles in a $1024^{3} h^{-3} {\rm Mpc}^{3} = 1422^{3} {\rm Mpc}^{3}$ box, gravitationally evolved using a modified GOTPM code which uses the Poisson equation \begin{equation} \nabla^{2} \Psi = 4 \pi G a^{2} \bar{\rho}_{\rm m}\delta_{\rm m} \left( 1 + {D_{\rm de} \over D_{\rm m}}{\Omega_{\rm de}(a) \over \Omega_{\rm m}(a)}\right) . \end{equation} \noindent The same random number sequence was used to generate the initial condition for each simulation at $z=99$, to eliminate cosmic variance when comparing different models. The power spectrum was normalised such that the rms of the matter fluctuation, smoothed with top hat $8 h^{-1} {\rm Mpc}$ and linearly evolved to $z=0$, is $\sigma_{8} = 0.794$. 
The genus curves of the Multiverse $z=0$ snapshot boxes in real and redshift space are presented in the top panel of Figure \ref{fig:3DRSD}, for three different cosmological models $\Omega_{\rm m} = 0.21, 0.26, 0.31$. We use the entire box with periodic boundary conditions to make these measurements -- that is, we apply no mask in this subsection. For each simulation, we fix the number density of the mock galaxies such that the mean separation is $\bar{r} = \bar{n}^{-1/3}_{\rm gal} = 8.33 \, {\rm Mpc}$, by applying a mass cut. The solid/dashed lines correspond to real/redshift space mock galaxy catalogs, and green/red/blue corresponds to $\Omega_{\rm m} = 0.21, 0.26, 0.31$ respectively. In all cases one can observe an amplitude drop due to the effect of redshift space distortion. In the bottom panel, we exhibit amplitude measurements extracted from the genus curves in the top panel. The green/red/blue color scheme is the same as for the top panel, and diamonds/stars correspond to real/redshift space measurements of the genus amplitude. We denote the real/redshift space genus amplitudes as $A^{(\rm 3D)}_{\rm real}$ and $A^{(\rm 3D)}_{\rm rsd}$ respectively. The ratio of the redshift and real space amplitude measurements -- $a_{\rm rsd}^{(\rm 3D)} \equiv A^{(\rm 3D)}_{\rm rsd}/A^{(\rm 3D)}_{\rm real}$ -- is $a_{\rm rsd}^{(\rm 3D)} = 0.89, 0.92, 0.91$ for $\Omega_{\rm m} = 0.21, 0.26, 0.31$ respectively. Redshift space distortion is a $\sim 10\%$ effect on the genus amplitude at these scales, and is only weakly dependent on cosmological parameters. Specifically, $a_{\rm rsd}^{(\rm 3D)}$ exhibits no significant, systematic dependence on $\Omega_{\rm m}$. We use the $\Omega_{\rm m} = 0.26$ simulation and take $a_{\rm rsd}^{(\rm 3D)} = 0.92$ in what follows, correcting the measured $A^{(\rm 3D)}$ amplitude by a factor of $(1 - \Delta_{\rm rsd})^{-1}$ with $\Delta_{\rm rsd} = 0.08$. This factor converts $A^{(\rm 3D)}$ to real space.
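The correction itself is simple arithmetic, sketched below with placeholder numbers for the measured amplitude (the $0.92$ ratio is the simulation value quoted above; the measured amplitude here is illustrative only):

```python
# Ratio of redshift- to real-space amplitudes from the simulation
A_real, A_rsd = 1.00, 0.92        # placeholder simulation amplitudes
a_rsd = A_rsd / A_real            # = 0.92, so Delta_rsd = 0.08
Delta_rsd = 1.0 - a_rsd

# Convert a (hypothetical) measured redshift-space amplitude to real space
A_3D_measured = 2.5e-6            # placeholder value, Mpc^-3
A_3D_real = A_3D_measured / (1.0 - Delta_rsd)
```

Dividing by $(1 - \Delta_{\rm rsd})$ raises the amplitude, undoing the suppression introduced by redshift space distortion.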
\begin{figure} \includegraphics[width=0.48\textwidth]{multiverse_genus_real_rsd.pdf} \includegraphics[width=0.48\textwidth]{amplitude_real_rsd_space_GRF.pdf} \caption{[Top panel] Measured genus curves as a function of $\nu_{\rm A}$ for three multiverse simulations with $\Omega_{\rm m} = 0.21, 0.26, 0.31$ (green, red, blue lines). The solid lines are real space measurements, dashed are redshift space. [Bottom panel] The genus amplitudes extracted from the top panel. The diamonds/stars represent the real/redshift space measurements respectively. The yellow squares represent the prediction for a Gaussian field for the given $\Omega_{\rm m}$. The real space, mock galaxy amplitudes are lower than the Gaussian prediction due to gravitational smoothing, and the redshift space values are still lower due to the effect of redshift space distortion.} \label{fig:3DRSD} \end{figure} To account for non-linear gravitational evolution, we compare the measurement of the three-dimensional genus amplitude of the multiverse simulations in real space to the Gaussian expectation value ($\ref{eq:g1}$), where we use the linear matter power spectrum plus shot noise \begin{equation} P_{\rm 3D}(k) = b_{\rm sdss}^{2} P_{\rm m}(k) + P_{\rm SN, sdss} , \end{equation} \noindent to generate the cumulants $\Sigma_{0,1}$. We use the SDSS MGS number density $\bar{n} = 1.7 \times 10^{-3} {\rm Mpc}^{-3}$ for the shot noise power spectrum $P_{\rm SN, sdss } = 1/\bar{n}$ and galaxy bias $b_{\rm sdss} = 1.5$ \citep{Howlett:2014opa,Ross:2014qpa}. We have already converted $A^{(\rm 3D)}$ to real space using the correction factor $\Delta_{\rm rsd}$. In Figure \ref{fig:3DRSD} (bottom panel) we exhibit the genus amplitude of the $z=0$, real space Multiverse simulation snapshot boxes (green, red and blue diamonds), and the corresponding Gaussian expectation value ($\ref{eq:g1}$) with the same cosmological parameters (yellow squares, labeled `GRF'). 
The Gaussian expectation values are systematically higher than the genus measured from each simulation box -- this highlights the `gravitational smoothing' effect of non-linear gravitational collapse. The effect is $a_{\rm gr} \equiv A^{(\rm 3D)}_{\rm real}/ A^{(\rm 3D)}_{\rm G} = 0.88, 0.90, 0.92$ for the $\Omega_{\rm m} = 0.21, 0.26, 0.31$ simulations respectively. To directly compare the measured genus amplitude from the SDSS MGS to the corresponding Gaussian expectation value, we correct $A^{(\rm 3D)}$ by a factor of $(1 - \Delta_{\rm gr})^{-1}$ with $\Delta_{\rm gr} = 0.10$. After correcting the measured genus amplitude $A^{(\rm 3D)}$ to account for non-linear redshift space distortion and gravitational smoothing, the next step is to compare $A^{(\rm 3D)}$ to the expectation value ($\ref{eq:g1}$) to obtain a set of parameter constraints. We minimize the simple $\chi^{2}$ function \begin{equation}\label{eq:chi3D} \chi^{2} = {[(1-\Delta_{\rm rsd} - \Delta_{\rm gr})^{-1} A^{(\rm 3D)} - A^{(\rm 3D)}_{\rm G}(\Omega_{\rm c}h^{2},n_{\rm s})]^{2} \over \sigma_{\rm 3D}^{2} } , \end{equation} \noindent where $A^{(\rm 3D)}_{\rm G}$ is the Gaussian expectation value of the three-dimensional genus curve ($\ref{eq:g1}$), and is sensitive to $\Omega_{\rm c}h^{2}$, $n_{\rm s}$ and weakly to $\Omega_{\rm b}h^{2}$. As the dependence on $\Omega_{\rm b}h^{2}$ is very weak, we fix this parameter to its Planck best fit value $\Omega_{\rm b}h^{2} = 0.0222$ \cite{Aghanim:2018eyx}. $\sigma_{\rm 3D} = 0.197 \times 10^{-6} {\rm Mpc}^{-3}$ is the statistical uncertainty on $A^{(\rm 3D)}$. In Figure \ref{fig:3Dcon} we present the two-dimensional $1,2-\sigma$ contours in the $n_{\rm s}$, $\Omega_{\rm c}h^{2}$ plane, obtained by performing an MCMC parameter search, minimizing ($\ref{eq:chi3D}$). We observe a strong degeneracy between $n_{\rm s}$ and $\Omega_{\rm c}h^{2}$, as both can vary the degree of small scale power and hence increase/decrease the genus amplitude.
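The $\chi^{2}$ of equation ($\ref{eq:chi3D}$) can be sketched as follows. The model amplitude is left as a user-supplied callable standing in for $A^{(\rm 3D)}_{\rm G}(\Omega_{\rm c}h^{2},n_{\rm s})$, since its evaluation requires the full linear power spectrum; the default numbers are the correction factors and uncertainty quoted in the text:

```python
def chi2_3d(A_meas, model_amp, params, sigma=0.197e-6,
            delta_rsd=0.08, delta_gr=0.10):
    """chi^2 of the systematics-corrected 3D genus amplitude against
    the Gaussian expectation. model_amp(och2, ns) must return the
    predicted A_G^(3D); here it is an externally supplied callable."""
    A_corr = A_meas / (1.0 - delta_rsd - delta_gr)  # undo RSD + grav. smoothing
    return ((A_corr - model_amp(*params)) / sigma) ** 2
```

By construction, a model matching the corrected amplitude gives $\chi^{2}=0$, and a model offset by one $\sigma_{\rm 3D}$ gives $\chi^{2}=1$.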
The Planck best fit is shown as a black star, and the cosmological model of the Multiverse simulation used to make the non-linear redshift space distortion and gravitational smoothing corrections $\Delta_{\rm rsd}$, $\Delta_{\rm gr}$ is presented as a green square. Both are within the $1-\sigma$ contour. \begin{figure} \includegraphics[width=0.48\textwidth]{paper_och2_ns_sdss.pdf} \caption{The $1,2-\sigma$ contours in the $n_{\rm s}$-$\Omega_{\rm c}h^{2}$ plane obtained by minimizing the $\chi^{2}$ function ($\ref{eq:chi3D}$). The black star is the Planck best fit value of these parameters and the green square the cosmological model used to infer the non-linear corrections to the measured genus curve. } \label{fig:3Dcon} \end{figure} In the next step of our analysis, we convert these constraints to a measure of the two-dimensional genus amplitude $A^{(\rm 2D)}_{\rm G}$. \subsection{Conversion from cosmological parameters to $A^{(\rm 2D)}_{\rm G}$} Finally, we transform from the cosmological parameters $\Omega_{\rm c}h^{2}$, $n_{\rm s}$ to a prediction for the two-dimensional genus amplitude. To do so, we transform each parameter set and corresponding $\chi^{2}$ value $(\Omega_{\rm c}h^{2}, n_{\rm s},\chi^{2})$ from the previous section to $(A^{(\rm 2D)}_{\rm G}, \chi^{2})$ by inserting $\Omega_{\rm c}h^{2}$, $n_{\rm s}$ into the definition of the theoretical expectation of the two-dimensional genus amplitude ($\ref{eq:ag}$) using ($\ref{eq:s02}-\ref{eq:p3df}$), taking $b=2$ and $\bar{n} = 6.25\times 10^{-5} \, {\rm Mpc}^{-3}$, suitable for the BOSS galaxy sample, and smoothing scales $\Delta = 80 \, {\rm Mpc}$, $R_{\rm G} = 20 \, {\rm Mpc}$. The result is a one-dimensional probability distribution function for $A^{(\rm 2D)}_{\rm G}$, as inferred from the three-dimensional measurement. We present the resulting probability distribution in Figure \ref{fig:3D2D} (top panel).
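This sample-by-sample mapping can be sketched as follows. Weighting each chain sample by $e^{-\chi^{2}/2}$ to build the one-dimensional distribution is an illustrative choice, not a description of the exact pipeline, and the prediction function is a user-supplied stand-in for equation ($\ref{eq:ag}$):

```python
import numpy as np

def amplitude_posterior(chain, amp_of_params, bins=40):
    """Map chain samples (och2, ns, chi2) to a normalised histogram of
    A_G^(2D). amp_of_params(och2, ns) is a supplied prediction, e.g.
    eq. (ag) evaluated with the BOSS nbar, bias and smoothing scales."""
    A = np.array([amp_of_params(och2, ns) for och2, ns, _ in chain])
    w = np.exp(-0.5 * np.array([chi2 for _, _, chi2 in chain]))
    pdf, edges = np.histogram(A, bins=bins, weights=w, density=True)
    return pdf, edges
```

The histogram integrates to unity, so its mean and spread give the quoted best fit and $1\sigma$ uncertainty on $A^{(\rm 2D)}_{\rm G}$.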
From this we infer the best fit and $1\sigma$ uncertainty on the two-dimensional genus amplitude as $A^{(\rm 2D)}_{\rm G} = (5.084 \pm 0.087) \times 10^{-5} \, ({\rm Mpc})^{-2}$. To review, the $A^{(\rm 3D)}$ measurement provides a constraint on the shape of the linear matter power spectrum. We have used the best fit values and uncertainties on the parameters $\Omega_{\rm c}h^{2}$ and $n_{\rm s}$ to infer the best fit and uncertainty on the theoretical expectation value of $A^{(\rm 2D)}_{\rm G}$ at low redshift. In the bottom panel of Figure \ref{fig:3D2D} we present the two-dimensional genus measurement inferred from the SDSS MGS (silver star) and those directly measured from the BOSS data (multi-coloured data points). We take the SDSS MGS measurement to lie at $z=0.1$. We assume that the low redshift measurement of $A^{(\rm 2D)}_{\rm G}$ is insensitive to variations of the expansion history in what follows, and treat it as a constant $A^{(\rm 2D)}_{\rm G} = (5.084 \pm 0.087) \times 10^{-5} \, ({\rm Mpc})^{-2}$. \begin{figure} \includegraphics[width=0.48\textwidth]{A2D.pdf} \includegraphics[width=0.48\textwidth]{SDSS_BOSS_amplitude.pdf} \caption{[Top panel] The probability distribution of the amplitude of the two-dimensional genus $A_{\rm G}^{(\rm 2D)}$ obtained by minimizing the $\chi^{2}$ function ($\ref{eq:chi3D}$). [Bottom panel] The two-dimensional genus measurement $A_{\rm G}^{(\rm 2D)}$ inferred from the low redshift SDSS MGS is presented as a silver star, along with the BOSS data points over the redshift range $0.25 < z <0.6$. The BOSS points are derived assuming a Planck cosmology. } \label{fig:3D2D} \end{figure} In this section, we have pursued a rather complex path to extracting $A^{(\rm 2D)}_{\rm G}$ from the low-redshift data. However, the reasoning behind our method lies in maximizing the constraining power of the data.
For our method to provide a reasonable constraint, we must minimize the statistical uncertainty of the low-redshift measurement as far as possible, and this requires us to select smaller smoothing scales than those used for the high-redshift data. The smallest smoothing scales that we can adopt at low and high redshift are fixed by the mean galaxy separation of the SDSS MGS and BOSS catalogs; $\bar{r}_{\rm sdss} \sim 8.3 {\rm Mpc}$ and $\bar{r}_{\rm BOSS} \sim 25 {\rm Mpc}$ respectively. We cannot smooth below these scales without introducing unknown non-Gaussian systematics due to shot noise. This philosophy motivated our choice of $\Lambda_{\rm G} = 8.86 {\rm Mpc}$ ($= 6 \, {\rm Mpc}/h$) and $\Delta = 80 {\rm Mpc}$, $R_{\rm G} = 20 {\rm Mpc}$. Given the different number densities, bias factors and smoothing scales used in the analysis of the SDSS MGS and BOSS data, the only logical approach to relate the two is to infer the theoretical expectation $A^{(\rm 2D)}_{\rm G}$ using one of the data sets, and proceed to compare this value to the second. To apply this method, one must carefully correct for any non-linear effects such as gravitational collapse using simulations. \section{Parameter Constraints} \label{sec:constraints} We are now able to combine the low- and high-redshift measurements to constrain the distance-redshift relation. To do so, we minimize the following $\chi^{2}$ function \begin{equation}\label{eq:chirz} \chi^{2} = \sum_{k=1}^{N_{\rm z}}\sum_{j=1}^{N_{\rm z}} {\bf p}_{j}{\rm cov^{-1}}_{jk}{\bf p}_{k} , \end{equation} \noindent where ${\bf p}_{j} = A^{(\rm 2D)}_{j}/A^{(\rm 2D)}_{\rm G} - 1$ and ${\rm cov}_{jk} = (\sigma_{{\rm BOSS},j}^{2} + \sigma_{\rm SDSS}^{2})\delta_{jk}$; the covariance matrix is the sum of the statistical uncertainties on the BOSS and SDSS measurements.
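With a diagonal covariance the double sum collapses to a single weighted sum of squares. A minimal sketch follows; the uncertainties are treated here as fractional errors on the ${\bf p}_{j}$, which is an assumption about units rather than a statement of the pipeline's conventions:

```python
import numpy as np

def chi2_distance(A_shells, A_G, sigma_boss, sigma_sdss):
    """Eq. (chirz) with cov_jk = (sigma_boss_j^2 + sigma_sdss^2) delta_jk.
    A_shells: measured 2D amplitudes per shell; A_G: low-z inferred value.
    sigma_* are assumed fractional uncertainties on p_j = A_j/A_G - 1."""
    p = np.asarray(A_shells) / A_G - 1.0
    var = np.asarray(sigma_boss) ** 2 + sigma_sdss ** 2
    return float(np.sum(p ** 2 / var))
```

Each shell contributes independently, so a single shell offset by one combined standard deviation adds unity to $\chi^{2}$.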
We have used a diagonal covariance matrix for our analysis, assuming that $A^{\rm (2D)}_{\rm G}$ represents the unbiased, central theoretical expectation value of the genus amplitude to which we compare our measured genus values. A direct comparison between measured values of any statistic at high and low redshift would introduce correlation between ${\bf p}_{j}$ components. However, our analysis has used the measured low redshift data to infer the theoretical expectation value of the genus amplitude. A direct comparison of $A^{(\rm 2D)}_{\rm G}$ posterior probability distributions inferred from the SDSS data and twelve BOSS shells, for each parameter set used to infer the distance redshift relation, would provide a more rigorous statistical comparison. However, such a procedure is computationally intractable so we make the simplifying assumption that the high redshift shells are drawn from a PDF with central value given by the SDSS MGS value of $A^{\rm (2D)}_{\rm G}$. The second implicit assumption with our choice of diagonal covariance matrix is that we have neglected long-wavelength correlations between the SDSS MGS and BOSS galaxy samples. When generating the covariance matrices for the two-dimensional genus measurements of the BOSS data, we found no statistically significant correlation between neighbouring shells. This indicates that the cross correlation of the genus measurements is negligible. We fix $h=0.677$ and vary $\Omega_{\rm m}$, $w_{\rm de}$. For each parameter set $\Omega_{\rm m}$, $w_{\rm de}$, we estimate the distance to the centers of the $j$ redshift shells using $d_{\rm cm}(z_{j},\Omega_{\rm m}, w_{\rm de})$ and reconstruct the genus curves using angular smoothing scales $\theta_{{\rm G}, j} = R_{\rm G}/d_{\rm cm}(z_{j},\Omega_{\rm m}, w_{\rm de})$ and effective area of the data $A_{j} = 4\pi f_{\rm sky} d_{\rm cm}^{2}(z_{ j},\Omega_{\rm m}, w_{\rm de})$.
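The geometric quantities entering this reconstruction can be sketched as follows, assuming a flat $w$CDM background with constant $w_{\rm de}$ and negligible radiation; the $f_{\rm sky}$ value below is a placeholder:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def d_cm(z, Om, w_de, h=0.677, n=4096):
    """Comoving distance in Mpc for flat wCDM:
    d = (c/H0) Int_0^z dz'/E(z'),
    E^2(z) = Om (1+z)^3 + (1-Om) (1+z)^{3(1+w_de)}."""
    zp = np.linspace(0.0, z, n)
    E = np.sqrt(Om * (1 + zp) ** 3 + (1 - Om) * (1 + zp) ** (3 * (1 + w_de)))
    g = 1.0 / E
    integral = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(zp)))  # trapezoid
    return (C_KMS / (100.0 * h)) * integral

def shell_geometry(z, Om, w_de, R_G=20.0, f_sky=0.2):
    """Angular smoothing scale and effective area for a shell at z."""
    d = d_cm(z, Om, w_de)
    theta_G = R_G / d                        # radians
    area = 4.0 * np.pi * f_sky * d ** 2      # Mpc^2
    return theta_G, area
```

A fixed comoving scale $R_{\rm G}$ subtends a smaller angle at larger distance, which is precisely the dependence on $(\Omega_{\rm m}, w_{\rm de})$ that the test exploits.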
After measuring the genus curves for the given expansion history, we calculate the $\chi^{2}$ function ($\ref{eq:chirz}$). The low redshift measurement $A_{\rm G}^{(\rm 2D)}$ is assumed to be independent of input cosmological model, as elucidated in section \ref{sec:res3d}. Performing an MCMC exploration of the two-dimensional parameter space, the resulting $1,2-\sigma$ contours (blue) are presented in Figure \ref{fig:dz}. The tan contour is the $w$CDM parameter constraint obtained from the Planck 2018 temperature data. If we combine the two data sets, we obtain a combined constraint on $\Omega_{\rm m}$ and $w_{\rm de}$ (pink contours)\footnote{Specifically, we used publicly available $w$CDM, \href{https://wiki.cosmos.esa.int/planck-legacy-archive/index.php/Cosmological_Parameters}{MCMC chains} from the Planck collaboration \citep{Aghanim:2018eyx}, combining ($\ref{eq:chirz}$) and the Planck MCMC likelihoods in quadrature.}. The marginalised parameter constraints for $\Omega_{\rm m},w_{\rm de}$ are presented in Table \ref{tab:dz}. The degeneracy between $\Omega_{\rm m}$ and $w_{\rm de}$, exhibited in the blue contour in Figure \ref{fig:dz}, has been found previously \citep{Park:2009ja,Appleby:2018jew}. The Planck temperature data presents an almost orthogonal contour in the $w_{\rm de}$-$\Omega_{\rm m}$ plane, so by combining these two data sets we can obtain a $\sim 15\%$ constraint on the equation of state of dark energy, $w_{\rm de} = -1.05^{+0.13}_{-0.12}$. The sensitivity of our test to $\Omega_{\rm m}$ and $w_{\rm de}$ is relatively weak as we are restricted to redshifts $z < 0.6$. A higher redshift measurement will improve the constraints considerably. Although the constraint is modest, the $\Lambda$CDM expansion history is consistent with the data over the redshift range considered.
The constraint from the genus arises almost entirely from the combination of SDSS MGS low-redshift and CMASS data points; the LOWZ data have error bars that are too large, and lie at too low a redshift, to make a strong contribution. The results are not sensitive to the absolute value of the genus amplitude -- we are extracting information from the difference between different redshift bins. The absolute value does contain additional information, related to the shape of the matter power spectrum, as discussed further in a companion paper \cite{Appleby:2020pem}. The derived parameter constraints have been obtained under the assumption that the genus amplitude is a conserved quantity. For non-standard gravity or dark matter models, the matter power spectrum can possess redshift and scale dependent corrections. Similarly, we have assumed that dark energy perturbations do not significantly affect the shape of the matter power spectrum at low redshift. \begin{center} \begin{table} \begin{tabular}{||c c c ||} \hline Data \, & \, $\Omega_{\rm m}$ \, & \, $w_{\rm de}$ \\ [0.5ex] \hline\hline Genus (BOSS+MGS) \, & \, $0.507^{+0.104}_{-0.126}$ \, & \, $-2.24^{+1.07}_{-1.14}$ \\ \, & \, & \\ \begin{tabular}{@{}c@{}}Genus (BOSS+MGS) \\ + Planck (2018) \end{tabular} \, & \, $0.303\pm 0.036$ \, & \, $-1.05^{+0.13}_{-0.12}$ \\ \hline \end{tabular} \caption{\label{tab:dz}Parameter best fit and $1-\sigma$ uncertainties, obtained by minimizing the $\chi^{2}$ function ($\ref{eq:chirz}$). After combining our genus likelihood with Planck 2018 temperature data, we obtain the second row.} \end{table} \end{center} \begin{figure} \includegraphics[width=0.45\textwidth]{evo.pdf} \includegraphics[width=0.45\textwidth]{evo_1D.pdf} \caption{[Top panel] Two-dimensional $68,95\%$ contours in the $(\Omega_{\rm m},w_{\rm de})$ plane obtained by minimizing the $\chi^{2}$ function ($\ref{eq:chirz}$) and using the genus amplitude as a standard ruler (blue contours).
The tan contours are the marginalised constraints in the $w_{\rm de}$-$\Omega_{\rm m}$ plane obtained from Planck temperature data \citep{Aghanim:2018eyx}, and the pink contours are the result of combining the genus and Planck $\chi^{2}$ functions in quadrature. The black star is the $\Lambda$CDM Planck best fit, and grey circle the best fit of the combined genus + Planck (pink) contour. [Bottom panel] The marginalised one-dimensional probability distribution functions of $w_{\rm de}$. The colour scheme is the same as in the top panel. } \label{fig:dz} \end{figure} \section{Discussion} \label{sec:discuss} In this work, we have obtained constraints on $w_{\rm de}$ and $\Omega_{\rm m}$ from a tomographic analysis of two-dimensional slices of the observed large-scale galaxy distribution. The amplitudes of the two-dimensional genus curves are measured in a series of concentric slices of density fields derived from the SDSS BOSS data. The amplitude at low redshift is derived from the three-dimensional genus of the SDSS MGS data, and the two sets of measurements are combined to find the cosmological parameters that minimize the redshift evolution of the genus. In doing so, we arrive at a constraint of $w_{\rm de}=-2.24^{+1.07}_{-1.14}$, or $w_{\rm de} = -1.05^{+0.13}_{-0.12}$, $\Omega_{\rm m} = 0.303 \pm 0.036$ if we combine our analysis with Planck temperature data \citep{Aghanim:2018eyx}. The parameter constraints arising solely from the genus statistic are particularly weak; this is due to the strong degeneracy between parameters and also the limited statistical power that we are able to employ. The presence of shot noise fundamentally restricts our ability to reconstruct the density field from the galaxy point distribution, as we must smooth on scales of at least the mean galaxy separation \cite{Kim:2014axe,Blake:2013noa,Appleby:2017ahh}.
In contrast, methods such as the Alcock-Paczynski (AP) test \cite{Li:2016wbl} (see also \cite{Li:2017nzs,Zhang:2018jfu,Li:2018nlh,Park:2019mvn,Zhang:2019jsu}) employ information from very small scales, eliminating non-perturbative, non-linear systematics using simulations. In addition, the AP test does not require the application of mass cuts to generate uniform data samples with redshift, as we are forced to. As a result, \cite{Li:2016wbl} were able to obtain tight parameter constraints on $w_{\rm de}$ and $\Omega_{\rm m}$ using the same BOSS data. For the genus to be competitive with other statistics, we must first learn how to model and remove observational systematics. Beyond sampling noise, a second major limitation of the method lies in the comparison of high- and low-redshift measurements, as the low-redshift data are subject to large statistical uncertainty. This is the dominant contribution to the parameter uncertainties. The only way to evade this issue is to smooth the data on smaller scales, but in doing so we are increasingly exposed to non-linear physics. In this work we corrected the low-redshift, three-dimensional genus amplitude by factors of $\Delta_{\rm rsd} = 0.08$ and $\Delta_{\rm gr} = 0.10$ to account for redshift space distortion and gravitational collapse. These values were inferred from simulations. Better theoretical understanding of the non-linear regime and its impact on the genus curve will be necessary in the future to improve our analysis. Similarly, a better understanding of the effect of shot noise will allow us to probe smaller scales; in the current work we regard the mean galaxy separation $\bar{r}$ of a catalog to be a hard limit below which we are subjected to unknown non-Gaussian corrections. As the low redshift SDSS MGS is more dense than the BOSS catalog, we were able to smooth the former on smaller scales and thus extract more information.
In a companion paper \cite{Appleby:2020pem}, we measured the genus curves of two-dimensional shells of the BOSS data and directly compared their amplitudes to the Gaussian expectation value. As we smooth the BOSS data on large scales ($\Delta = 80 \, {\rm Mpc}$, $R_{\rm G} = 20 \, {\rm Mpc}$), we did not apply any non-linear correction factors to our measurements, and were able to use the Kaiser formula to estimate the effect of redshift space distortion. In \cite{Appleby:2020pem} we placed constraints on cosmological parameters that determine the shape of the linear matter power spectrum; $\Omega_{\rm c}h^{2}$ and $n_{\rm s}$. The information extracted in that work came from the absolute value of the genus amplitude. In the present analysis, we measure the redshift evolution of the genus amplitude, irrespective of its absolute value. One can interpret the two approaches as a measure of the initial condition/transfer function of the dark matter perturbations and a test of the expansion history respectively. In \cite{Appleby:2020pem}, we fixed the distance-redshift relation using the Planck 2018 best fit cosmology \cite{Aghanim:2018eyx} and measured the genus curves of the BOSS data a single time. We were able to do this as we restricted our analysis to the $\Lambda$CDM model. The constraints obtained in this work are considerably weaker than those obtained in \cite{Appleby:2020pem}. The redshift evolution test considered here is a measure of distance, and hence is principally sensitive to $\Omega_{\rm m}$ and the equation of state of dark energy. To improve the parameter constraints, a number of avenues remain open. We can combine different low-redshift data sets, increasing the effective volume and reducing the statistical uncertainty.
We can calculate analytically the non-linear corrections due to gravitational smoothing and redshift space distortion, which will provide a better understanding of the non-linear effects that we must account for on small scales. In addition, we can apply our method to high-redshift data, such as Lyman break galaxies. We expect that a high redshift data point will provide a significantly improved constraint on the expansion history. As the distance between observer and data increases, the effect of choosing an incorrect cosmology becomes more pronounced. Finally, the three-dimensional Minkowski Functionals contain more information than their two-dimensional counterparts, and a complete analysis of the three-dimensional field will be forthcoming. In this work, and throughout a series of papers \citep{Appleby:2017ahh,Appleby:2018jew,Appleby:2020pem}, we have focused on the two-dimensional genus, extracted from shells of the three-dimensional galaxy distribution. The reasoning behind this choice is two-fold. First, the BOSS galaxy catalog is relatively sparse, and we mitigate this issue by taking thick slices along the line of sight. Binning galaxies in this way is a smoothing choice, so we can interpret our approach as anisotropic smoothing perpendicular and parallel to the line of sight. Smoothing on larger scales parallel to the line of sight allows us to use linear redshift space distortion physics, which is important as non-linear redshift space distortion effects on topological statistics are not yet well understood. Second, in future work we intend to compare our results with higher redshift photometric redshift catalogs, which will require galaxies to be binned into thick shells. How photometric redshift uncertainty modifies our analysis must be explored further before this comparison can be made.
\section*{Acknowledgement} SAA is supported by an appointment to the JRG Program at the APCTP through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government, the Korean Local Governments in Gyeongsangbuk-do Province and Pohang City and by a KIAS Individual Grant QP055701 via the Quantum Universe Center at Korea Institute for Advanced Study. SEH was supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (2018\-R1\-A6\-A1\-A06\-024\-977). We thank the Korea Institute for Advanced Study for providing computing resources (KIAS Center for Advanced Computation Linux Cluster System). Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. 
The massive production of all MultiDark-Patchy mocks for the BOSS Final Data Release has been performed at the BSC Marenostrum supercomputer, the Hydra cluster at the Instituto de Física Teórica UAM/CSIC, and NERSC at the Lawrence Berkeley National Laboratory. We acknowledge support from the Spanish MICINN's Consolider-Ingenio 2010 Programme under grant MultiDark CSD2009-00064, MINECO Centro de Excelencia Severo Ochoa Programme under grant SEV-2012-0249, and grant AYA2014-60641-C2-1-P. The MultiDark-Patchy mocks were an effort led from the IFT UAM-CSIC by F. Prada’s group (C.-H. Chuang, S. Rodriguez-Torres and C. Scoccola) in collaboration with C. Zhao (Tsinghua U.), F.-S. Kitaura (AIP), A. Klypin (NMSU), G. Yepes (UAM), and the BOSS galaxy clustering working group. Some of the results in this paper have been derived using the healpy and HEALPix packages. \section*{Appendix A -- Systematic Effects} In Section \ref{sec:theory} we listed three effects that can introduce a small evolution in the genus amplitude. In the Appendix we consider each point in turn and in isolation, to confirm that we have all known systematics under control. We use all sky, mock galaxy lightcone data from the Horizon Run 4 dark matter simulation project to perform these tests. We direct the reader to \cite{Kim:2015yma,Hong:2016hsd} for information on the simulation and mock galaxy catalogs. We use the lightcone data over the range $0.15 < z < 0.7$, creating $N=20$ shells and applying mass cuts to generate constant number density samples, exactly as we did for the BOSS data. We bin the galaxies into shells of thickness $\Delta = 80 {\rm Mpc}$ and smooth in the plane of each shell with comoving scale $R_{\rm G}$. The simulation was performed using a flat $\Lambda$CDM cosmology with parameters $h=0.72$, $\Omega_{\rm m} = 0.26$, $\sigma_{8} = 0.794$, $n_{\rm s} = 0.96$.
\subsection*{1 - S\lowercase{hot Noise}} Shot noise is the single largest systematic associated with information extraction using the genus curve \cite{Kim:2014axe}. There are two issues associated with this phenomenon -- it is non-Gaussian and it can potentially introduce a redshift evolution of the genus amplitude. First, consider the non-Gaussianity. As a simple approximation, we have corrected for shot noise by adding a constant white noise contribution to the total power spectrum; $P_{\rm SN} = 1/\bar{n}$. In reality, the noise is a Poisson process (roughly speaking), but when writing the genus in terms of a Hermite polynomial expansion as in ($\ref{eq:mat1}$) we have implicitly assumed that the field is drawn from a perturbatively Gaussian distribution. As a Poisson distribution possesses a different moment generating function compared to a Gaussian, we can expect that shot noise will introduce modifications to the shape of the genus curve. This was observed in both \cite{Kim:2014axe} and \cite{Appleby:2018jew}. If shot noise becomes significant, then the shape of the genus will not be well represented by the first few Hermite polynomials and we lose the interpretation of the genus amplitude as the ratio of second order cumulants of the perturbatively Gaussian field that we are trying to measure. In short, the field that we measure is non-Gaussian due to both gravitational collapse and the manner in which it is sampled. However, only the gravitational non-Gaussianity is treated in the expansion ($\ref{eq:mat1}$). This issue is suppressed if we smooth on scales larger than the mean galaxy separation, in which case the shot noise effect can be approximately represented by the white noise term $P_{\rm SN}$. This remains an imperfect approximation except in the limit $R_{\rm G} \gg \bar{r}$.
To present the non-Gaussianity induced by the shot noise sampling, we measure the genus of two-dimensional shells of the Horizon Run 4 mock galaxy lightcone in real space, fixing the smoothing scales $\Delta = 80 {\rm Mpc}$, $R_{\rm G} = 20 {\rm Mpc}$ and applying three mass cuts to the data to fix the number density of our galaxy sample as $\bar{n}_{1} = 3.7 \times 10^{-4} {\rm Mpc}^{-3}$, $\bar{n}_{2} = 7.4 \times 10^{-5} {\rm Mpc}^{-3}$ and $\bar{n}_{3} = 3.7 \times 10^{-5} {\rm Mpc}^{-3}$. We assume that the most dense sample has a shot noise contribution that is suppressed, as the mean galaxy separation is much lower than the smoothing scale $R_{\rm G} = 20 {\rm Mpc}$. We label the genus curves $g^{(1)}_{\rm 2D}(\nu_{\rm A})$, $g^{(2)}_{\rm 2D}(\nu_{\rm A})$ and $g^{(3)}_{\rm 2D}(\nu_{\rm A})$ respectively, where $1,2,3$ superscripts denote the number density cuts $\bar{n}_{1,2,3}$. We repeat our measurement for twenty concentric non-overlapping shells and take the average genus curve to show the effect of shot noise. In Figure \ref{fig:app1} (top panel) we exhibit the average genus curves for $\bar{n}_{1}$ (black), $\bar{n}_{2}$ (red) and $\bar{n}_{3}$ (blue). Below we also present the difference between the genus curves $\Delta g_{\rm 2D}^{(2)} = g^{(2)}_{\rm 2D} - g^{(1)}_{\rm 2D}$ (blue) and $\Delta g_{\rm 2D}^{(3)} = g^{(3)}_{\rm 2D} - g^{(1)}_{\rm 2D}$ (red) respectively. Clearly the difference between the genus curves is not simply an amplitude shift -- the shape of the residual curve is both shifted and distorted. This is due to the non-Gaussian nature of the sampling, and is most significant in the most sparse sample $\bar{n}_{3}$. For the case $\bar{n}_{2}$, these effects are less pronounced but still present. This indicates that our treatment of shot noise using the white noise contribution $P_{\rm SN}$ is imperfect. This effect will be further studied by the authors in the future. 
However, even if the effect of shot noise can be represented by a white noise term $P_{\rm SN} = 1/\bar{n}$, it can still generate a redshift evolution in the genus amplitude. This is because we are fixing $P_{\rm SN}$ to be constant at each redshift, but the matter power spectrum has a decreasing amplitude with increasing redshift. Hence the shot noise term increases in significance towards the past, and manifests as an increasing genus amplitude with increasing $z$. \begin{figure} \includegraphics[width=0.45\textwidth]{2D_genus_sn_both.pdf} \includegraphics[width=0.45\textwidth]{amplitude_sn.pdf} \includegraphics[width=0.45\textwidth]{DM_snapshot_real.pdf} \caption{[Top panel] The average genus curves extracted from all-sky mock galaxy lightcone data, taking three mass cuts to fix a constant number density in each shell $\bar{n}_{1} = 3.7 \times 10^{-4} {\rm Mpc}^{-3}$ (black), $\bar{n}_{2} = 7.4 \times 10^{-5} {\rm Mpc}^{-3}$ (red), $\bar{n}_{3} = 3.7 \times 10^{-5} {\rm Mpc}^{-3}$ (blue). We also exhibit the difference $\Delta g_{\rm 2D}$ between the sparse samples and the dense catalog. [Middle panel] The Gaussian prediction for the genus amplitude in real space, for a model with no shot noise (grey) and with the fiducial number density $\bar{n} = 6.25 \times 10^{-5} {\rm Mpc}^{-3}$ (yellow). Shot noise introduces significant evolution in the genus amplitude. [Bottom panel] Genus amplitudes extracted from two-dimensional slices of dark matter particle snapshot boxes for a sparse $N=64^3$ (gold points) and dense $N=512^3$ (grey points) sample. The solid lines are the Gaussian prediction.
} \label{fig:app1} \end{figure} To show the hypothetical redshift evolution of the genus amplitude, in Figure \ref{fig:app1} (middle panel) we present the theoretical expectation $A_{\rm G}^{(\rm 2D)}$ as a function of redshift, using ($\ref{eq:ag}$) with power spectrum ($\ref{eq:p3df}$) and taking the real space power spectrum (that is, setting $\beta=0$ in equation ($\ref{eq:p3df}$)). We plot the amplitude assuming negligible shot noise, setting an arbitrarily high hypothetical number density $\bar{n} = 2 \times 10^{8} {\rm Mpc}^{-3}$ (grey line) and the fiducial number density of the BOSS catalog used in this work $\bar{n} = 6.25 \times 10^{-5} {\rm Mpc}^{-3}$ (yellow line), both with constant linear galaxy bias $b=2$. The grey line represents the idealised case and as expected is constant; in this instance the genus is a measure of the shape of the linear matter power spectrum. The yellow curve represents a hypothetical sparse galaxy catalog with constant large scale galaxy bias -- the shot noise contribution causes the genus amplitude to evolve with redshift. To confirm this behaviour, we take simulated dark matter particle snapshot boxes of volume $V = (1024 {\rm Mpc}/h)^{3}$ at $z=0, 0.5, 1$ and randomly sample $512^3$ (dense) and $64^3$ (sparse) particles. We then construct flat, two-dimensional slices of thickness $\Delta = 80 \, {\rm Mpc}$ and smooth them in the plane with a $R_{\rm G} = 20 \, {\rm Mpc}$ Gaussian kernel. We then extract the genus from these fields and measure the genus amplitudes. The results are presented in the bottom panel of Figure \ref{fig:app1}. The points/error bars are the mean and standard deviation of $15$ slices of the snapshot boxes, with the yellow/grey points corresponding to the sparse and dense samples respectively. The solid yellow/grey lines are the Gaussian expectation value for the given number density (and bias factor $b=1$, as we are using dark matter particles).
The behaviour of the middle panel is reproduced: the sparse sample exhibits a systematic evolution with redshift, while the dense sample shows no evolution. In this work we have fixed the galaxy bias of the BOSS galaxies to be constant, $b=2$. If the bias evolves with redshift, then this must also be taken into account when assessing the effect of shot noise. The net effect depends on the relative amplitude of the galaxy power spectrum -- hence $b^{2}(z) D^{2}(z) A_{\rm s}$ -- and the shot noise term $P_{\rm SN}$, where $D(z)$ is the linear growth factor and $A_{\rm s}$ is the primordial amplitude. \subsection*{2 - R\lowercase{edshift Space Distortion}} The effect of redshift space distortion is to decrease the genus amplitude by $\sim 8\%$, and introduce a mild redshift dependence. To show this effect, in Figure \ref{fig:app2} (top panel) we present the ratio of $A_{\rm G}^{(\rm 2D)}$ in redshift and real ($\beta=0$) space, obtained from the theoretical expectation ($\ref{eq:ag}$) assuming negligible shot noise (that is, fixing $\bar{n} = 2 \times 10^{8} {\rm Mpc}^{-3}$) and different galaxy bias values. The green solid line is the fiducial, constant galaxy bias used in this work $b=b_{0}$. We also exhibit $a_{\rm rsd}$ for different linear galaxy bias models $b(z) = b_{0} + b_{1}z$; $(b_{0},b_{1}) = (1.8,0), (1.8,0.5), (1.8,1)$ (yellow solid, black dash-dot and red dashed lines respectively). We also present the values of $a_{\rm rsd}$ inferred from the Horizon Run 4 all-sky mock galaxy shells as pale red points. For our fiducial choice $b=2$, the genus amplitude decreases by $\sim 8\%$ and decreases with increasing redshift when measured in redshift space (green line). For different bias factors, the redshift dependence of $a_{\rm rsd}$ can change significantly (cf. the yellow, black and red lines).
In the middle panel of the figure, we plot $A_{\rm G}^{(\rm 2D)}$ in redshift space for two different number densities -- $\bar{n} = 2 \times 10^{8} {\rm Mpc}^{-3}$ (red) and the fiducial number density used in this work $\bar{n} = 6.25 \times 10^{-5} {\rm Mpc}^{-3}$ (blue), fixing $b=2$. In the absence of shot noise, the genus amplitude decreases with redshift (red line); however, as shown in the previous section, shot noise acts to increase the genus amplitude with $z$. The net effect is that redshift space distortion decreases and shot noise increases $A_{\rm G}^{(\rm 2D)}$, with the result that the measured genus amplitude of the galaxy catalogs should remain approximately constant over the redshift range considered in this work, $0.1 < z < 0.7$. Specifically, the measured genus amplitude should follow the blue curve in the middle panel of Figure \ref{fig:app2}. We stress, however, that this argument is sensitive to the galaxy sample. Different bias factors, number densities and redshift ranges will not necessarily yield a genus amplitude that is conserved with redshift. We again confirm our hypothesis that redshift space distortion introduces a mild dependence of the genus amplitude on redshift by extracting $A^{(\rm 2D)}$ from slices of dark matter particle data from our simulation. We take $z=0,0.5,1$ snapshot boxes of volume $V = (1024 {\rm Mpc}/h)^{3}$, sub-sample $512^3$ particles from the full data and then perturb the particles along the $x_{3}$ direction according to their velocities to create plane-parallel redshift space distorted slices. We then extract the two-dimensional genus amplitude from slices using $\Delta = 80 {\rm Mpc}$, $R_{\rm G} = 20 {\rm Mpc}$ as before. The results are exhibited in the bottom panel of Figure \ref{fig:app2}. The grey/red points and error bars are the mean and standard deviation of $15$ slices of the snapshot boxes in real/redshift space respectively.
We observe no evolution in real space, but a systematic decrease in the genus amplitude in redshift space. \begin{figure} \includegraphics[width=0.45\textwidth]{amplitude_arsd.pdf} \includegraphics[width=0.45\textwidth]{amplitude_rsd.pdf} \includegraphics[width=0.45\textwidth]{DM_snapshot_rsd.pdf} \caption{[Top panel] The ratio of genus amplitudes $A_{\rm G}^{(\rm 2D)}$ as measured in redshift and real space, for different bias models $b(z) = b_{0} + b_{1}z$. The green line corresponds to the fiducial values used in this work, $b_{0} = 2$, $b_{1}=0$. The red points are the values of $a_{\rm rsd}$ inferred from the mock galaxies from the Horizon Run 4 lightcone in real and redshift space. Generally, the effect of linear redshift space distortion is to decrease the genus amplitude by $\sim 8\%$ and introduce a weak redshift dependence. [Middle panel] The expectation value of the genus amplitude in redshift space, taking $b=2$, $\bar{n} = 2 \times 10^{8} {\rm Mpc}^{-3}$ (red) and $\bar{n} = 6.25 \times 10^{-5} {\rm Mpc}^{-3}$ (blue). The measured genus amplitude from the BOSS data should trace the blue curve. [Bottom panel] The genus amplitude extracted from dark matter snapshot boxes for a dense sample $512^3$ in real (grey) and redshift (red) space. } \label{fig:app2} \end{figure} Future dense and high redshift galaxy catalogs will not suffer from many of the issues discussed in this work. For these data, the shot noise contribution will be significantly reduced. In this case, the correct course of action would be to correct the measured genus amplitudes by a multiplicative factor to convert them to real space \cite{1996ApJ...457...13M}, after which they should be conserved with redshift. This procedure was undertaken for mock galaxies in \cite{Appleby:2018tzk}.
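The plane-parallel mapping used to construct the redshift space slices above can be sketched as follows; the box size, velocity dispersion and value of $aH$ below are illustrative assumptions, not the simulation's actual values.

```python
import numpy as np

def to_redshift_space(x3, v3, box, aH):
    """Plane-parallel redshift space mapping s = x3 + v3/(aH), with periodic wrapping.
    x3 : comoving line-of-sight positions [Mpc], v3 : peculiar velocities [km/s],
    aH : a(z) H(z) in km/s/Mpc, converting velocity to comoving displacement."""
    return (x3 + v3 / aH) % box

rng = np.random.default_rng(0)
box = 1024.0                          # illustrative box size [Mpc]
x3 = rng.uniform(0.0, box, 10**5)     # particle positions along the line of sight
v3 = rng.normal(0.0, 300.0, 10**5)    # toy Gaussian peculiar velocities [km/s]
s3 = to_redshift_space(x3, v3, box, aH=70.0)
print(np.mean(np.abs(v3 / 70.0)))     # typical line-of-sight displacement [Mpc]
```

Particles with zero peculiar velocity are unaffected, so the real-space slice is recovered in that limit; the displacement field alone produces the decreased genus amplitude seen in the red points.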
\subsection*{3 - G\lowercase{ravitational Smoothing}} It is well known that higher order corrections in the non-Gaussian expansion of the genus curve $\sim {\cal O}(\sigma_{0}^{2})$ will modify the genus amplitude; empirically it has been observed that the genus amplitude decreases on small scales compared to the Gaussian expectation value when measured from galaxy catalogs \cite{1989ApJ...345..618M,1991ApJ...378..457P,2005ApJ...633....1P}. To test the magnitude of this effect, we measure the coefficient $a_{3}$ of the Hermite polynomial expansion of the genus curve -- \begin{equation} g_{\rm 2D} \simeq a_{1} e^{-\nu_{\rm A}^{2}/2} \left[ a_{0}H_{0} + H_{1} + a_{2} H_{2} + a_{3}H_{3} \right] . \end{equation} \noindent According to the non-Gaussian perturbative expansion of the genus \cite{Matsubara:1994we,2000astro.ph..6269M,Pogosyan:2009rg,Gay:2011wz,Codis:2013exa}, $a_{1}$ is the genus amplitude, $a_{0,2}$ are the first order corrections of order $a_{0,2} \sim {\cal O}(\sigma_{0})$, and we expect $a_{3}$ to be induced at second order, $a_{3} \sim {\cal O}(\sigma_{0}^{2})$. We therefore use this term as a proxy to estimate the magnitude of higher order corrections to the genus amplitude. We extract $a_{0}, a_{2}, a_{3}$ from the twenty all-sky lightcone shells of Horizon Run 4, in redshift space, by integrating the genus curve using \begin{equation} a_{n} ={1 \over n!} { \int_{-4}^{4} d\nu_{\rm A} g_{\rm 2D}(\nu_{\rm A}) H_{n}(\nu_{\rm A}) \over \int_{-4}^{4} d\nu_{\rm A} g_{\rm 2D}(\nu_{\rm A}) H_{1}(\nu_{\rm A})} , \end{equation} \noindent taking $\nu_{0} =4$. In Figure \ref{fig:app4} we present $a_{0}, a_{2}, a_{3}$ (grey, blue, red). The red curve is the next-to-leading order correction term $a_{3}$. There is some suggestion that $a_{3}$ is increasing with decreasing redshift, from $\sim 0.01$ at $z=0.6$ to $\sim 0.02$ at $z=0.25$.
Although the effect is small and the statistical uncertainty large, the higher order, non-linear corrections require further study. The $a_{3}$ term is present at the $1\%$ level at the scales probed. The $a_{0},a_{2}$ coefficients are perturbatively small at the scales studied in this work. These terms can be interpreted as integrals over the bispectrum, and contain complementary information to the amplitude studied in this work. The coefficient $a_{2}$ exhibits some evidence of evolution over the redshift range under consideration. \begin{figure} \includegraphics[width=0.45\textwidth]{amplitude_nonlin.pdf} \caption{The Hermite polynomial coefficients $a_{0}, a_{2}, a_{3}$ (grey, blue, red) obtained from twenty shells of all-sky mock galaxy data. There is no strong evidence of evolution of $a_{3}$. } \label{fig:app4} \end{figure} \subsection*{4 - L\lowercase{ack of high threshold critical points}} The first three issues described above are physical effects. The fourth is a purely spurious systematic that can be introduced into the analysis if we improperly select the $\nu_{\rm A}$ threshold range. Specifically, one can observe evolution of the genus amplitude with redshift if we measure the genus over threshold values that are too high. The reason for this lies in the relation between $\nu_{\rm A}$ and $\nu$. The $\nu_{\rm A}$ parameterisation of the genus curve selects thresholds that have the same area fraction as a Gaussian random field. However, since the galaxy catalogs occupy a finite area, high threshold peaks will not be represented within the observed domain, and the area fraction will be systematically under-represented compared to a hypothetical Gaussian random field of arbitrarily large extent. This leads to an increase in the genus curve in the high $\nu_{\rm A}$ tails, which increases the genus amplitude.
This can introduce spurious redshift evolution because the area of the data at low redshift is smaller than at high redshift, and so the low-$z$ regime will contain fewer high threshold peaks. We can eliminate this effect by restricting our analysis to $\nu_{\rm A}$ threshold values that are well sampled at each redshift. To present the effect, we take the twenty all-sky lightcone mock galaxy shells from the Horizon Run 4 simulation in real space, smooth them and then apply a mask, only keeping a spherical cap of data of radius $\theta_{\rm cap} = \pi/(2\sqrt{2}) \, {\rm rad}$. We select this value because the area fraction of such a cap roughly matches that of the BOSS mask. We then measure the genus of this subset of data over the threshold ranges $-4 < \nu_{\rm A} < 4$ and $-2.5 < \nu_{\rm A} < 2.5$. As a proxy for the genus amplitude, we use the following integral \begin{equation}\label{eq:inte} A^{(\rm 2D)} \simeq {1 \over \sqrt{2\pi}} \int_{-\nu_{0}}^{\nu_{0}} g_{\rm 2D} \nu_{\rm A} d\nu_{\rm A} , \end{equation} \noindent with $\nu_{0} = 2.5, 4$. As $\nu_{0} \to \infty$, the integral ($\ref{eq:inte}$) approaches the exact genus amplitude. In Figure \ref{fig:app3} we present $A^{(\rm 2D)}$ for $\nu_{0} = 4$ (blue points) and $\nu_{0} = 2.5$ (red squares) from the twenty slices. We also show the mean value of the points as similarly coloured horizontal lines. The exact value of $A^{(\rm 2D)}$ is not relevant to our discussion; the important point is the clear redshift evolution in the blue points, which is due to selecting a large value $\nu_{0} = 4$. For the more conservative choice $\nu_{0} = 2.5$, no redshift evolution is detected relative to the mean value (red squares). This indicates that peaks in the range $-2.5 < \nu_{\rm A} < 2.5$ are suitably well represented over the range $0.2 < z < 0.6$ and motivates our choice $-2.5 < \nu_{\rm A} < 2.5$ in the main body of the text. \begin{figure}[b!]
\includegraphics[width=0.45\textwidth]{amp_vs_z.pdf} \caption{The amplitude proxy $A^{(\rm 2D)}$ defined in equation ($\ref{eq:inte}$), measured from the twenty shells of lightcone data. The red/blue points correspond to $\nu_{0} = 2.5$, $\nu_{0} = 4$ respectively. The solid horizontal lines are the mean values of the respective points. One can clearly observe a systematic evolution in the blue points, due to the lack of high threshold maxima/minima at low-$z$. } \label{fig:app3} \end{figure} \section*{Appendix B -- variation of $\Delta$} Finally, we check that the genus amplitudes extracted from the data are insensitive to the small variations in shell thickness $\Delta$ induced by selecting different cosmological models to infer the distance-redshift relation. Although the genus is a function of the thickness $\Delta$, we will argue that for large $\Delta$ this sensitivity is low and can be neglected. To show this, we take the Horizon Run 4 all-sky mock galaxy lightcone, and use four different cosmological models to fix the redshift boundaries of the shells. For each cosmology we select redshift limits of the shells $z_{\rm min}$, $z_{\rm max}$ such that the comoving distance $d_{\rm cm}(z_{\rm max},\tilde{\Omega}_{\rm m}, \tilde{w}_{\rm de}) - d_{\rm cm}(z_{\rm min},\tilde{\Omega}_{\rm m}, \tilde{w}_{\rm de}) = \tilde{\Delta} = 80 {\rm Mpc}$, where tildes indicate incorrect cosmological parameters that are presented in Table \ref{tab:appb}, with model $0$ being the correct, fiducial cosmology of the simulation. The true values of the slice thicknesses are given by $\Delta = d_{\rm cm}(z_{\rm max},\Omega_{\rm m}, w_{\rm de}) - d_{\rm cm}(z_{\rm min},\Omega_{\rm m}, w_{\rm de})$ with $\Omega_{\rm m} = 0.26$, $w_{\rm de} = -1$. In Figure \ref{fig:appb} (top panel) we present $\Delta(z)$ as a function of $z$ for each of the cosmological models used to infer the distance redshift relations of the shells. 
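The construction of the shell limits and of the resulting true thickness $\Delta(z)$ can be sketched numerically. This is a minimal illustration assuming flat cosmologies with constant $w_{\rm de}$ and an illustrative $H_0$, ignoring factors of $h$ in the distance units: redshift limits are fixed so that $\tilde{\Delta}=80\,{\rm Mpc}$ in an assumed (wrong) cosmology, and the true thickness is then evaluated in the fiducial one.

```python
import numpy as np

C_KMS, H0 = 299792.458, 72.0          # speed of light [km/s]; illustrative H0 [km/s/Mpc]

def d_cm(z, om, w, n=2000):
    """Comoving distance [Mpc] in a flat universe with constant dark energy w."""
    zz = np.linspace(0.0, z, n)
    ez = np.sqrt(om * (1 + zz)**3 + (1 - om) * (1 + zz)**(3 * (1 + w)))
    integrand = 1.0 / ez
    return C_KMS / H0 * 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(zz))

def z_upper(z_min, width, om, w):
    """Bisect for z_max such that d_cm(z_max) - d_cm(z_min) = width in cosmology (om, w)."""
    lo, hi = z_min, z_min + 0.2
    d0 = d_cm(z_min, om, w)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if d_cm(mid, om, w) - d0 < width:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

om_fid, w_fid = 0.26, -1.0            # fiducial cosmology of the simulation
z_min = 0.5
for om_t, w_t in [(0.26, -1.0), (0.26, -1.2), (0.2, -1.0)]:   # cf. models 0, I, III
    z_max = z_upper(z_min, 80.0, om_t, w_t)   # shell limits set in the assumed cosmology
    delta_true = d_cm(z_max, om_fid, w_fid) - d_cm(z_min, om_fid, w_fid)
    print(om_t, w_t, delta_true)      # 80 Mpc for the fiducial model, offset otherwise
```

The fiducial model recovers $\Delta = 80\,{\rm Mpc}$ by construction, while the incorrect models yield a thickness offset at the few-Mpc level that drifts with redshift.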
For each cosmological model we have selected redshift limits such that $\tilde{\Delta} = 80 {\rm Mpc}$, independent of redshift, but the true value of $\Delta$ (obtained by using the true cosmology) is evolving. After fixing the redshift shell limits using the incorrect cosmological models, we proceed to calculate the genus in the twenty data shells using the correct cosmological model. We do this as we wish to isolate the effect of a systematically evolving $\Delta$ thickness. We measure the genus curves and extract the amplitudes. In Figure \ref{fig:appb} (bottom panel) we present the genus amplitude $A^{(\rm 2D)}(\tilde{\Delta})$. For clarity we plot the average and standard deviation of every four shells. One can observe no systematic evolution with redshift for any of the cosmological models selected, and the statistical uncertainty is dominant. This insensitivity is because we are using relatively thick slices $\Delta \sim 80 {\rm Mpc}$; thinner slices will exhibit stronger cosmological parameter sensitivity. \begin{table} \begin{center} \begin{tabular}{|| c c c ||} \hline Model \, & \, $\tilde{\Omega}_{\rm m}$ \, & \, $\tilde{w}_{\rm de}$ \\ [0.5ex] \hline\hline 0 \, & \, $0.26$ \, & \, $-1$ \\ I \, & \, $0.26$ \, & \, $-1.2$ \\ II \, & \, $0.26$ \, & \, $-0.8$ \\ III \, & \, $0.2$ \, & \, $-1$ \\ IV \, & \, $0.32$ \, & \, $-1$ \\ \hline \end{tabular} \caption{\label{tab:appb}The four models used in Appendix B to test the effect of variable $\Delta$ slice thickness on the genus amplitude. The $0$ model is the fiducial model of the simulation, and yields a constant $\Delta = 80 {\rm Mpc}$ slice thickness.} \end{center} \end{table} \begin{figure} \includegraphics[width=0.45\textwidth]{Del_vs_z.pdf} \includegraphics[width=0.45\textwidth]{amplitude_vs_z_Del.pdf} \caption{[Top panel] The redshift evolution of the shell thickness $\Delta$, if we use an incorrect cosmological model to infer the redshift limits of the shells. 
The $0$ model is the fiducial (correct) model of the simulation, and I-IV are incorrect models with parameters given in Table \ref{tab:appb}. [Bottom panel] The measured genus amplitudes of all-sky lightcone shells of the mock galaxy data, using the shell widths presented in the top panel. No systematic evolution of the genus amplitude is found as a result of selecting an incorrect slice thickness. } \label{fig:appb} \end{figure} \newpage \bibliographystyle{ApJ}
\section{Introduction} The Banados-Teitelboim-Zanelli (BTZ) black hole \cite{BTZ} is one of the important black hole solutions in three-dimensional space-time. The rotating charged BTZ black hole solution was studied by Achucarro and Ortiz \cite{AO}; such solutions are called AO (Achucarro-Ortiz) black holes. It has been found that the entropy of the Achucarro-Ortiz black hole can be described by the Cardy-Verlinde formula \cite{setare}. Also, Hawking radiation of the AO black hole was discussed in Ref. \cite{X}. Indeed, Kaluza-Klein dimensional reduction helps to obtain lower-dimensional black holes \cite{kk1,kk2}. Since then, AO black holes have been attracting attention in both theoretical physics and cosmology \cite{A1,A2,A3,A4}. For example, the effects of first order quantum fluctuations on the properties of a charged BTZ black hole in massive gravity were studied in Ref. \cite{A3}. Recently, the rotating BTZ black hole with higher order corrections to the entropy has been investigated, and it was shown to exhibit some instabilities \cite{1}. Besides, it was seen that when a logarithmic correction is considered for the uncharged BTZ black hole, the leading-order corrections yield an instability while the higher order corrections remove it \cite{1-2}. It is worth noting that the thermodynamics of higher dimensional black holes with higher order thermal fluctuations has been studied in \cite{3-0}. These kinds of black holes are important for studying the AdS/CFT correspondence in lower dimensions.\\ In the present study, our main goal is to study the thermodynamics of the AO black hole with higher order quantum corrections. In particular, we study how the correction terms affect the stability of the AO black hole. The correction terms include a logarithmic one \cite{das, SPR} together with a higher order term, which is proportional to the inverse of the entropy \cite{more}.
These correction terms indeed arise from the thermal fluctuations of statistical physics, which may be interpreted as quantum corrections when the size of the black hole becomes infinitesimal \cite{NPB}. Thermal fluctuations are important in several backgrounds, such as a hyperscaling violation background \cite{EPJC33}. By considering small black holes, for which quantum effects are important, one can study quantum gravity effects \cite{Annals, NPB2} (see also \cite{ex1,ex2,2,3,Feng1,Feng2,Feng3}). Moreover, the logarithmic and higher order corrections have been used to study the critical thermodynamic behavior of several black objects, such as a dyonic charged AdS black hole \cite{PRD}, a charged dilatonic black Saturn \cite{sat1,sat2}, and AdS black holes in massive gravity \cite{sudb}. In that case, it is possible to have the holographic dual of a Van der Waals fluid \cite{Kubiznak:2012wp,Gunasekaran:2012dq,Kubiznak:2016qmn,Kubiznak:2014zwa}. Hence, we shall investigate the $P$-$V$ diagram of the AO black hole to find such a Van der Waals behavior in the presence of higher order corrections \cite{Anabalon:2018ydc,Zou:2017juz,Zou:2014gla,Zhang:2014eap,Ma:2017pap,Ma:2016lwr,Zhao:2014raa,Okcu:2016tgt,Hendi:2018sbe,Momennia:2017hsc,Hendi:2015eca}. On the other hand, the emergence of a minimal observable distance leads to the generalized uncertainty principle (GUP) \cite{GUP1, GUP22, GUP3}, which may affect the black hole temperature and entropy. One of the key frontiers in modern theoretical physics is to construct a renormalizable, UV complete and non-perturbative theory of quantum gravity which could explain the features near the singularity of a black hole and the Big Bang. Although numerous candidates for such a theory have been proposed, most of them offer no testable predictions or are untestable experimentally. In Ref. \cite{Giovanazzi}, the (1+1)-dimensional black hole entropy was investigated by the brick-wall method.
Another idea to cure the divergences is to consider a modified Heisenberg uncertainty relation, which implies that there exists a minimal length. Thus, using the modified Heisenberg uncertainty relation, the divergences in the brick-wall model are eliminated. In the absence of experiments, thermodynamics offers us a physically acceptable route to understand strong gravity regimes. Semi-classical physics helps to identify black hole properties such as area and surface gravity with thermodynamic quantities such as entropy and temperature. Any quantum gravity (QG) theory should offer additional terms, or correction terms, to the results of semi-classical physics. In this connection, one of the aims of 2+1 dimensional QG is to construct toy models of black holes and analyze their thermal properties. Since GUP and logarithmic corrections are motivated by numerous QG theories, it is imperative to apply these corrections to lower-dimensional black holes. These corrections play a prominent role at small scales close to the Planck scale. Hence, we would like to consider such quantum effects and study the modified thermodynamics of the AO black hole.\\ The plan of the paper is organized as follows: In section 2, we briefly review the AO black hole space-time and its corrected thermodynamics due to thermal fluctuations. We use the first and second order corrected black hole entropy to show the effects of thermal fluctuations (which are interpreted as quantum effects) on the AO black hole thermodynamic quantities and its stability. In this section we assume that the black hole temperature is not affected by quantum corrections. In section 3, we compute the higher order quantum corrected temperature of the AO black hole. Section 4 is devoted to the GUP corrected entropy and temperature of the AO black hole. In section 5, we summarize our results and discuss the relations between different corrections.
\section{Achucarro-Ortiz black hole} The Einstein-Maxwell action in $2+1$ dimensions is given by, \begin{equation} I=\int d^{3}x\sqrt{-g}\Big(R-2\Lambda-\frac{1}{4}F_{ab}F^{ab}\Big). \end{equation} In the case of $\Lambda=0$ we recover the action given in Ref. \cite{4002}. The Einstein field equations for ($2+1$)-dimensional space-time with negative cosmological constant take the following form \begin{equation} G_{ab}+\Lambda g_{ab}=\pi T_{ab}~~~~~~~(a,b=0,1,2), \end{equation}% which results in the BTZ black hole solution with electric charge and spin: \begin{equation} ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}\Big(d\phi-\frac{J}{2r^{2}}% dt\Big)^{2}. \label{metric} \end{equation}% The above line-element is also called the AO black hole \cite{AO}. In Eq. (\ref% {metric}), the metric function $f(r)$ reads \begin{equation} f(r)=-M+\frac{r^{2}}{l^{2}}+\frac{J^{2}}{4r^{2}}-\frac{\pi }{2}Q^{2}\ln {\frac{r}{l}}, \end{equation} where $M$, $Q$, $J$ denote the mass, electric charge, and angular momentum of the black hole, respectively. Also, $\Lambda =-1/l^{2}$ is the negative cosmological constant and $l$ is the AdS length. As is well known, for AdS black holes the cosmological constant plays the role of a thermodynamic pressure. Therefore, we can write, \begin{equation}\label{pressure} P=\frac{1}{16\pi l^{2}}. \end{equation} The event horizon of this black hole, which is a null hypersurface, occurs when $g^{rr}=0$: \begin{equation} \label{mass} -M+\frac{r_{+}^{2}}{l^{2}}+\frac{J^{2}}{4r_{+}^{2}}-\frac{\pi }{2}Q^{2}\ln{\frac{r_{+}}{l}}=0, \end{equation}% which yields the mass of the black hole as \begin{equation}\label{mass2} M=\frac{4r_{+}^{4}-2\pi Q^{2}l^{2}r_{+}^{2}\ln{\frac{r_{+}}{l}}+J^{2}l^{2}}{4l^{2}r_{+}^{2}}. \end{equation} The Hawking temperature of the black hole is given by \begin{equation}\label{temp} T_{H}=\frac{1}{4\pi }\left.
\frac{\partial f(r)}{\partial r}\right\vert _{r=r_{+}}=\frac{1}{4\pi }\Big(\frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}-% \frac{\pi Q^{2}}{2r_{+}}\Big). \end{equation}% The entropy is associated with the event horizon as $S_{0}=4\pi r_{+}$. The thermodynamic volume is $V=\pi r_{+}^{2}$, which means that the event horizon can be expressed as $r_{+}=\sqrt{\frac{V}{\pi }}$.\\ In that case the first law of thermodynamics reads \begin{equation}\label{first} dM=TdS_{0}+\Omega dJ+\Phi dQ+V dP, \end{equation} where \begin{equation}\label{Omega} \Omega=\left(\frac{dM}{dJ}\right)_{S_{0}, Q, P}=\frac{J}{2r_{+}^{2}}, \end{equation} is the horizon angular velocity, while \begin{equation}\label{Pot} \Phi=\left(\frac{dM}{dQ}\right)_{S_{0}, J, P}=-\pi Q\ln{\frac{r_{+}}{l}}, \end{equation} is the horizon electrostatic potential.\\ The logarithmically corrected entropy (the first order correction) is given by \cite{SPR}, \begin{equation}\label{myent} S=S_{0}-\frac{\alpha }{2}\ln (S_{0}T_{H}^{2}), \end{equation} where the constant $\alpha$ is added to track the correction term \cite{Pd,EPL}. If we choose $\alpha=0$, the expression for the entropy without any corrections is recovered. Moreover, in the case of $\alpha=1$, we obtain the corrections due to thermal fluctuations. Hence, for large black holes at low temperature we can take the limit $\alpha\rightarrow0$, and for small black holes at high temperature we can take the limit $\alpha\rightarrow1$. In general, one can consider arbitrary $\alpha$ and fix it by thermodynamic requirements or observational data in higher dimensions. One can write out the explicit form of Eq. (\ref{myent}) as follows \begin{equation} S=4\pi r_{+}-\frac{1}{2}\alpha \Big[-4\ln 2-\ln \pi +\ln \Big(\frac{(\pi l^{2}Q^{2}r_{+}^{2}+J^{2}l^{2}-4r_{+}^{4})^{2}}{l^{4}r_{+}^{5}}\Big)\Big].
\end{equation}% The Helmholtz free energy is given by \begin{equation} F=-\int SdT_{H}=-\int S(r_{+})\frac{dT_{H}}{dr_{+}}dr_{+}, \end{equation}% which yields \begin{equation*} F=\frac{1}{\pi r_{+}^{3}l^{2}}\Big(\frac{-1}{2}\pi ^{2}Q^{2}r_{+}^{3}l^{2}\ln{\frac{r_{+}}{l}}+\frac{3}{4}\pi r_{+}l^{2}J^{2}-\pi r_{+}^{5}% \Big)+F_{1}(\alpha ), \end{equation*}% where $F_{1}(\alpha )$ is a long expression coming from the first correction term. We now proceed to the second order correction to the entropy as follows: \begin{equation}\label{Gentropy} S_{c}=S_{0}-\frac{\alpha }{2}\ln (S_{0}T_{H}^{2})+\frac{\beta }{S_{0}}, \end{equation}% where the constant $\beta$ tracks the higher order correction. In general, all the different approaches to quantum gravity generate logarithmic corrections to the black hole entropy at leading order, and corrections proportional to the inverse of the entropy at higher order. We should note that although the leading order correction is logarithmic, its coefficient depends on the quantum gravity theory; hence one can consider this coefficient as a free parameter of the model. Since the values of the coefficients depend on the quantum gravity approach, it can be argued that they are generated from quantum fluctuations of the space-time geometry rather than matter fields on space-time. Therefore, we consider general $\alpha$ and $\beta$ to obtain the modified thermodynamics. In that case the corrected first law of thermodynamics, \begin{equation}\label{cfirst} dM=TdS_{c}+\Omega dJ+\Phi dQ+V dP, \end{equation} may be violated, which signals some instabilities in the presence of $\alpha$ and $\beta$, as will be discussed later. However, we can treat these coefficients as Lagrange multipliers and fix them simultaneously to satisfy the corrected first law of thermodynamics. It means that equation (\ref{cfirst}) yields the following condition, \begin{equation}\label{multiplier} \left(\frac{\alpha}{2}+\frac{\beta}{S_{0}}\right)dS_{0}+\alpha\frac{S_{0}}{T}dT=0.
\end{equation} The corrected entropy (\ref{Gentropy}) yields the following correction term to the Helmholtz function: \begin{equation} F_{c}=F+\beta \Big(\frac{Q^{2}}{64\pi r_{+}^{2}}+\frac{3J^{2}}{128\pi ^{2}r_{+}^{4}}-\frac{\ln{\frac{r_{+}}{l}}}{8\pi ^{2}l^{2}}\Big). \end{equation}% In Fig. \ref{fig:1}, one can see the typical behavior of the Helmholtz free energy for the corrected and uncorrected cases. We see some infinitesimal variation in the Helmholtz function, as also observed in \cite% {Kuang:2018goo,Ovgun:2017bgx}. It can also be seen that there is a maximum value of the Helmholtz free energy; the correction terms reduce this maximum value.\\ \begin{figure}[h!] \begin{center} $% \begin{array}{cccc} \includegraphics[width=70 mm]{1-1.eps}&\includegraphics[width=70 mm]{1-2.eps} \end{array}% $% \end{center} \caption{Helmholtz free energy in terms of $V$ (left plot) and in terms of $% r_{+}$ (right plot) with $Q=J=l=1$. Dashed red lines represent the ordinary case with $\protect\alpha=\protect\beta=0$. Solid blue lines represent the corrected case with $\protect\alpha=\protect\beta=1$.} \label{fig:1} \end{figure} To find the equation of state, we employ the first derivative of $F_{c}$ with respect to volume, more specifically: \begin{equation} P=-\frac{\partial F_{c}}{\partial V}, \end{equation}% which gives \begin{equation*} P=\frac{\pi^{2}Q^{2}Vl^{2}+3J^{2}\pi^{2}l^{2}+4V^{2}}{4\pi l^{2}V^{2}}+{\mathcal{O}}(\alpha )+{\mathcal{O}}(\beta ). \end{equation*}% To avoid cumbersome mathematical expressions like the one above, from now on we shall write only the leading order term; however, all terms will be taken into account for the numerical computations.\\ In Fig. \ref{fig:2}, one can deduce from the $P$-$V$ diagram that there is neither Van der Waals behavior nor any critical point. When the size of the black hole is at the microscopic scale, the pressure becomes negative and the black hole exhibits unstable phases.
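As a numerical cross-check of the formulas above, one can locate the horizon radius from $f(r_+)=0$ and evaluate the Hawking temperature and the corrected entropy (\ref{Gentropy}); the parameter values $Q=J=l=1$ match those used in the figures, while the mass $M=2$ is an illustrative choice.

```python
import math

Q = J = l = 1.0
M = 2.0                                    # illustrative mass

def f(r):
    """Metric function of the AO black hole."""
    return -M + r**2 / l**2 + J**2 / (4 * r**2) - 0.5 * math.pi * Q**2 * math.log(r / l)

def bisect(func, lo, hi, n=200):
    """Simple bisection; requires func(lo) and func(hi) to have opposite signs."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r_p = bisect(f, 1.0, 3.0)                  # outer horizon radius (f(1) < 0 < f(3) here)
T_H = (2 * r_p / l**2 - J**2 / (2 * r_p**3) - math.pi * Q**2 / (2 * r_p)) / (4 * math.pi)
S0 = 4 * math.pi * r_p                     # uncorrected entropy
alpha = beta = 1.0
S_c = S0 - 0.5 * alpha * math.log(S0 * T_H**2) + beta / S0   # corrected entropy
print(r_p, T_H, S0, S_c)
```

For these parameters the horizon sits near $r_+ \approx 1.6$ with a positive Hawking temperature, and the correction terms shift the entropy away from $S_0$.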
On the other hand, the logarithmic and higher order corrections yield instability for the small black hole. In other words, when the black hole size is reduced by the Hawking radiation, thermal fluctuations of the quantum corrections become important and their effects cause the instability.\\ \begin{figure}[h!] \begin{center} $% \begin{array}{cccc} \includegraphics[width=90 mm]{2.eps} \end{array}% $% \end{center} \caption{Pressure in terms of $V$. Dashed red lines represent the ordinary case with $\protect\alpha=\protect\beta=0$, while solid blue lines represent the corrected case with $\protect\alpha=\protect\beta=1$.} \label{fig:2} \end{figure} To obtain the internal energy, we use the following thermodynamic relation: \begin{equation} E=F_{c}+S_{c}T_{H}, \end{equation}% which gives us the following expression \begin{equation} E=F+\beta \Big(\frac{Q^{2}}{64\pi r_{+}^{2}}+\frac{3J^{2}}{128\pi ^{2}r_{+}^{4}}-\frac{\ln{\frac{r_{+}}{l}}}{8\pi ^{2}l^{2}}\Big)+S_{0}T_{H}-\frac{\alpha }{2}T_{H}\ln (S_{0}T_{H}^{2})+\frac{\beta }{S_{0}}T_{H}. \end{equation}% Further, we can calculate the enthalpy \begin{equation} H=E+PV, \end{equation}% and the Gibbs free energy: \begin{equation} G=H-T_{H}S_{c}. \end{equation}% Our numerical analysis shows that the behavior of the enthalpy and the Gibbs free energy is similar to that of the internal energy.\\ \begin{figure}[h!] \begin{center} $% \begin{array}{cccc} \includegraphics[width=80 mm]{3.eps} \end{array}% $% \end{center} \caption{Specific heat in terms of $r_{+}$.
The dotted green line represents the uncharged static case ($Q=J=0$), dashed red lines represent the ordinary case with $\protect\alpha=\protect\beta=0$ and $Q=J=1$, while solid blue lines represent the corrected case with $\protect\alpha=\protect\beta=1$ and $Q=J=1$.} \label{fig:3} \end{figure} To investigate the stability of the physical system, we consider the specific heat definition: \begin{equation} C=T_{H}\frac{\partial S_{c}}{\partial T_{H}}, \end{equation}% which gives \begin{equation} C=4\pi \frac{\frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}-\frac{\pi Q^{2}}{2r_{+}}}{\frac{2}{l^{2}}+\frac{3J^{2}}{2r_{+}^{4}}+\frac{\pi Q^{2}}{2r_{+}^{2}}}+C_{1}(\alpha )+C_{2}(\beta ), \end{equation}% where $C_{1}(\alpha )$ and $C_{2}(\beta )$ are complicated $\alpha $- and $\beta $-dependent terms, whose physical interpretation we shall make graphically. The sign of the specific heat and its asymptotic behavior can give us information about the stability and phase transitions \cite{EPJC22}.\\ In Fig. \ref{fig:3}, we draw the specific heat versus the event horizon radius. In the case of the uncharged static AO black hole (dotted green line), we see a completely stable black hole. However, with the inclusion of rotation, some instabilities appear for small radii. In the presence of the correction terms, the unstable regions are enlarged. Although there are some unstable phases, there is no critical point or phase transition corresponding to the asymptotic behavior. \section{Higher Order Quantum Corrected Temperature} In this section, we shall derive the higher order quantum corrected temperature of the BTZ black hole, which is dual to one-dimensional holographic superconductors \cite{Refiz1}. To this end, we shall use the Parikh-Wilczek quantum tunneling method \cite{Refiz2} together with the entropy (\ref{Gentropy}) to add higher order quantum corrections to the tunneling probability by considering the back reaction effects.
However, the effects of the Heisenberg uncertainty principle, i.e., the GUP corrections, will be discussed in the next section. Finally, the modified $T_{H}$ due to the back reaction effect will be computed. As can be seen from the computations detailed below, to do all of this it is necessary to express the entropy in terms of the mass, which in turn requires writing the horizon radius in terms of the mass. However, the transcendental structure of Eq. (\ref{mass2}) does not allow this. To overcome this difficulty, we simply consider the chargeless case: $Q=0$. Thus, from Eq. (\ref{mass2}), one can obtain the event and inner horizons as follows, \begin{equation} r_{+}=\frac{1}{\sqrt{2}}\sqrt{M{l}^{2}+l\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}, \label{iz1} \end{equation} and \begin{equation} r_{-}=\frac{1}{\sqrt{2}}\sqrt{M{l}^{2}-l\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}, \label{iz1n} \end{equation} such that we have \begin{equation} M=\frac{r_{+}^{2}+r_{-}^{2}}{l^{2}}, \label{iz1n2} \end{equation} and \begin{equation} J=\frac{2r_{+}r_{-}}{l}, \label{iz1n3} \end{equation} and also the Hawking temperature (\ref{temp}) becomes \begin{equation} T_{H}=\frac{1}{4\pi }\left( \frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}\right) . \label{iz0m} \end{equation} The coordinate system in Eq. (\ref{metric}) is adapted to an observer located at spatial infinity. After transforming the metric (\ref{metric}) to the dragging coordinate system \cite{Refiz3}: \begin{equation} \psi =\phi -\frac{J}{2r^{2}}t,\text{ \ \ } \mathcal{T}=t\text{,} \label{iz1n4} \end{equation}% we see that the physics near the horizon can be effectively ($\psi =const.$) described by the following two-dimensional metric \begin{equation} ds^{2}=-f(r)d\mathcal{T}^{2}+\frac{dr^{2}}{f(r)}.
\label{iz1n5} \end{equation} On the other hand, the near horizon metric (\ref{iz1n5}) can be expressed in the regular Painlev\'{e}-Gullstrand (PG) coordinates \cite{Refiz4,Refiz5} by applying the following transformation \begin{equation} d\mathcal{T}_{PG}=d\mathcal{T}+\frac{\sqrt{1-f(r)}}{f(r)}dr, \label{iz1n6} \end{equation} where $\mathcal{T}_{PG}$ is called the PG time, which is nothing but the proper time. Thus, the metric (\ref{iz1n5}) recasts into \begin{equation} ds^{2}=-f(r)d\mathcal{T}_{PG}^{2}+2\sqrt{1-f(r)}d\mathcal{T}_{PG}dr+dr^{2}, \label{iz1n7} \end{equation} and it admits the following radial null geodesics of a test particle: \begin{equation} \dot{r}=\frac{dr}{d\mathcal{T}_{PG}}=-\sqrt{1-f(r)}\pm 1, \label{iz1n8} \end{equation} where the plus (minus) sign corresponds to outgoing (ingoing) geodesics. After expanding Eq. (\ref{iz1n8}) around the event horizon, one finds that the radial outgoing null geodesic $\dot{r}$ is approximated by \begin{equation} \dot{r}\cong \kappa (r-r_{+}), \label{izn9} \end{equation} in which the surface gravity \cite{Refiz6} (in the PG coordinates) reads \begin{equation} \kappa =\frac{1}{2}\left. \frac{\partial f(r)}{\partial r}\right\vert _{r=r_{+}}. \label{izn10} \end{equation} On the other hand, the imaginary part of the action ($I$) for an outgoing particle with positive energy that crosses the event horizon from inside ($r_{\otimes }$) to outside ($r_{\odot }$) is given by \cite{Refiz7} \begin{equation} Im{I}=Im\int_{r_{\otimes }}^{r_{\odot }}p_{r}dr=Im\int_{r_{\otimes }}^{r_{\odot }}\int_{0}^{p_{r}}d\tilde{p}_{r}dr. \label{iz2} \end{equation} Let us recall that Hamilton's equation for the classical trajectory is given by \begin{equation} dp_{r}=\frac{dH}{\dot{r}}, \label{iz3} \end{equation} where $p_{r}$ and $H$ are the radial canonical momentum and the Hamiltonian, respectively. Thus, we have \begin{equation} Im{I}=Im\int_{r_{\otimes }}^{r_{\odot }}\int_{0}^{H}\frac{d\widetilde{H}}{\dot{r}}dr.
\label{iz4} \end{equation} Now, let us assume that we have a circularly symmetric space-time with constant total mass $M$. As a further assumption, we consider the system to contain a radiating BTZ black hole of varying mass $M-\omega $ that emits a circular shell of energy $\omega $, with $\omega \ll M$. This scenario describes the self-gravitational effect \cite{Refiz8,Refiz9}. In this framework, Eq. (\ref{iz4}) becomes \begin{align} ImI& =Im\int_{r_{\otimes }}^{r_{\odot }}\int_{M}^{M-\omega }\frac{d\widetilde{H}}{\dot{r}}dr, \notag \\ & =-Im\int_{r_{\otimes }}^{r_{\odot }}\int_{0}^{\omega }\frac{d\widetilde{\omega }}{\dot{r}}dr, \label{iz5} \end{align} in which the Hamiltonian $H=M-\omega $ and $dH=-d\omega $ are used. Following Eq. (\ref{izn9}), the radial outgoing null geodesic $\dot{r}$ of the radiating black hole is defined as follows \cite{Refiz7} \begin{equation} \dot{r}\cong \kappa _{QGC}(r-r_{+}), \label{iz6} \end{equation} where $\kappa _{QGC}=\kappa (M-\omega )$ is the quantum gravity corrected ($QGC$) surface gravity \cite{Refiz10,Refiz11}. Therefore, after the $r$ integration (the integration over $r$ is done by deforming the contour), Eq. (\ref{iz5}) becomes \begin{equation} ImI=-\pi \int_{0}^{\omega }\frac{d\tilde{\omega}}{\kappa _{QGC}}. \label{iz7} \end{equation} After defining the $QGC$ Hawking temperature as $T_{QGC}=\frac{\kappa _{QGC}}{2\pi }$, we get \begin{equation} ImI=-\frac{1}{2}\int_{0}^{\omega }\frac{d\tilde{\omega}}{T_{QGC}}=-\frac{1}{2}\int_{S_{QGC}(M)}^{S_{QGC}(M-\omega )}dS=-\frac{1}{2}\Delta S_{QGC}. \label{iz8} \end{equation} The above expression yields the modified tunneling rate: \begin{equation} \Gamma _{QGC}\sim e^{-2ImI}=e^{\Delta S_{QGC}}. \label{iz9} \end{equation} Taking cognizance of Eq.
(\ref{Gentropy}), one can compute $\Delta S_{QGC}$ as follows \begin{eqnarray} \label{iz10} \Delta S_{QGC} &=&S_{QGC}(M-\omega )-S_{QGC}(M) \nonumber\\ &=&4\pi \left( r_{+}(\omega)-r_{+}\right)+\frac{\alpha }{2}\Big[ \ln \left( \frac{r_{+}}{4\pi}\left( 2\,{l}^{-2}+{\frac{6{J}^{2}}{{r_{+}}^{2}}}\right) ^{2}\right) \nonumber\\ &-&\ln \left( \frac{r_{+}(\omega)}{4\pi}\left( 2\,{l}^{-2}+{\frac{6{J}^{2}}{{r_{+}(\omega)}^{2}}}\right) ^{2}\right) \Big]-\frac{\beta }{2\pi }\left( {\frac{1}{2r_{+}}}-{\frac{1}{2r_{+}(\omega)}}\right), \end{eqnarray} where $r_{+}$ is given by Eq. (\ref{iz1}) and we have defined \begin{equation} r_{+}(\omega)=\frac{1}{\sqrt{2}}\sqrt{(M-\omega){l}^{2}+l\sqrt{{(M-\omega)}^{2}{l}^{2}-{J}^{2}}}. \label{iz1w} \end{equation} If we expand $\Delta S_{QGC}$ (\ref{iz10}) and keep terms up to leading order in $\omega $, we find \begin{equation} \Delta S_{QGC}\cong \Psi \omega +O(\omega ^{2}), \label{iz11} \end{equation} where \begin{eqnarray} \Psi =&-&{\frac{l\,\left( 12\,{J}^{2}Ml+11\,{J}^{2}\sqrt{{M}^{2}{l}^{2}-{J}^{2}}-2\,{M}^{3}{l}^{3}-2\,{M}^{2}{l}^{2}\sqrt{{M}^{2}{l}^{2}-{J}^{2}}\right) \alpha }{4\sqrt{{M}^{2}{l}^{2}-{J}^{2}}\left( Ml+\sqrt{{M}^{2}{l}^{2}-{J}^{2}}\right) \left( {M}^{2}{l}^{2}+Ml\sqrt{{M}^{2}{l}^{2}-{J}^{2}}+{J}^{2}\right) }} \notag \\ &+&{\frac{\sqrt{2l}\beta }{8\pi \,\sqrt{Ml+\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}}-{\frac{\pi \,\sqrt{2}{l}^{3/2}\sqrt{Ml+\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}}{\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}}. \label{iz12n} \end{eqnarray} Based on Eqs. (\ref{iz11}) and (\ref{iz12n}) and recalling the Boltzmann factor \cite{Refiz12}: \begin{equation} \Gamma _{QGC}\sim e^{\Delta S_{QGC}}=e^{-\frac{\omega }{T_{QGC}}}, \label{54} \end{equation} we obtain the $QGC$ temperature as follows \begin{equation} T_{QGC}=-\frac{1}{\Psi }. \label{55} \end{equation} We will show that there is a maximum mass for which the temperature (\ref{55}) remains positive.
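A quick numerical sanity check of Eq. (\ref{iz12n}) is useful before proceeding. The sketch below (with arbitrary illustrative values of $M$, $l$, $J$) verifies that at $\alpha=\beta=0$ the temperature $T_{QGC}=-1/\Psi$ reproduces the Hawking temperature (\ref{iz0m}) evaluated at the event horizon radius of Eq. (\ref{iz1}).

```python
# Numerical sanity check: with alpha = beta = 0, T_QGC = -1/Psi built
# from the last term of Eq. (iz12n) must reduce to the Hawking
# temperature (iz0m) evaluated at the event horizon r_+ of Eq. (iz1).
# Sample values M, l, J are arbitrary (only M*l > J is required).
import math

M, l, J = 2.0, 1.0, 0.5
D = math.sqrt(M**2 * l**2 - J**2)

# alpha = beta = 0 piece of Psi (the last term of Eq. (iz12n))
Psi0 = -math.pi * math.sqrt(2.0) * l**1.5 * math.sqrt(M * l + D) / D
T_qgc = -1.0 / Psi0

# Standard Hawking temperature of the rotating BTZ black hole (Q = 0)
r_plus = math.sqrt((M * l**2 + l * D) / 2.0)
T_hawking = (2 * r_plus / l**2 - J**2 / (2 * r_plus**3)) / (4 * math.pi)

assert abs(T_qgc - T_hawking) < 1e-9
```

The two expressions agree to machine precision, confirming the $\alpha=\beta=0$ limit quoted below.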
It means that this relation is only valid for small masses at quantum scales. It is easy to verify that when suppressing the $QGC$ effects (i.e., $\alpha =\beta =0$), $T_{QGC}$ reduces to \begin{equation} \left. T_{QGC}\right\vert _{\alpha =\beta =0} =\frac{\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}{\,\sqrt{2}{l}^{3/2}\pi\sqrt{Ml+\sqrt{{M}^{2}{l}^{2}-{J}^{2}}}} =\frac{1}{4\pi }\left( \frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}\right), \end{equation} which is nothing but the standard Hawking temperature (\ref{iz0m}) of the rotating BTZ black hole. \section{GUP corrections to entropy of AO black hole} \label{secIII} In this section, the entropy of the AO black hole will be calculated using the GUP corrections \cite{stringe}. At the Planck length scale, semiclassical methods do not work properly, so one needs a theory of quantum gravity, which is not yet complete. For this purpose, some corrections to the classical theory are used to approach the quantum gravity regime. Now, we study the effect of the GUP corrections, which were first applied in string theory and in loop quantum gravity, on the Hawking temperature and the Bekenstein entropy \cite{Ovgun:2017hje,Sakalli:2016mnk,Ovgun:2015box,Ovgun:2015jna,Ovgun:2016roz,Alonso-Serrano:2018ycq}. The GUP can be applied directly to the modified thermodynamical quantities by counting the number of states with the help of quantum corrections \cite{alpha1,alpha2}. Furthermore, the GUP provides the modification of the Heisenberg principle at Planck scales \cite{Ovgun:2017hje,Sakalli:2016mnk,Ovgun:2015box,Ovgun:2015jna,Ovgun:2016roz,Alonso-Serrano:2018ycq,stringe,alpha1,alpha2,GUP2,GUP4}: \begin{equation} \Delta x\Delta p\geq \hbar \left( 1-\frac{\gamma l_{p}}{\hbar }\Delta p+\frac{\gamma ^{2}l_{p}^{2}}{\hbar ^{2}}(\Delta p)^{2}\right) , \label{gup} \end{equation}% where $\gamma$ is a dimensionless positive parameter, $l_{p}$ and $M_{p}$ denote the Planck length and the Planck mass, respectively, and $c$ denotes the speed of light.
By using the Taylor series, one can rewrite Eq. (\ref{gup}) as follows \cite{1510.08444}, \begin{equation} \Delta p\geq \frac{1}{2\Delta x}\left[ 1-\frac{\gamma }{2\Delta x}+\frac{\gamma ^{2}}{2(\Delta x)^{2}}+\cdots \right], \label{p} \end{equation} where $G=c=k_{B}=1$ is used. Therefore, $\Delta x\Delta p\geq 1$, which yields $E\Delta x\geq 1$ \cite{Scardigli1}, where the standard dispersion relation $E^{2}=p^{2}+m^{2}$ is used \cite{Scardigli2}. Then, for massless particles one can obtain $E=\Delta p\geq 1/\Delta x$ \cite{Scardigli3}. In that case, the tunneling rate in terms of the GUP corrected energy $E_{GUP}$ becomes \cite{1502.00179} \begin{equation*} \Gamma \simeq \exp [-2\mathrm{Im}(\mathcal{I})]=\exp \left[ \frac{-4{\pi }E_{GUP}}{\kappa }\right], \end{equation*}% where the surface gravity $\kappa $ is given by Eq. (\ref{izn10}) and \begin{equation} E_{GUP}\geq E\left[ 1-\frac{\gamma }{2(\Delta x)}+\frac{\gamma ^{2}}{2(\Delta x)^{2}}+\cdots \right] . \end{equation}% Comparing with the Boltzmann factor ($e^{-E/T}$), we obtain the GUP corrected temperature of the AO black hole as \begin{equation*} T\leq T_{H}\left[ 1-\frac{\gamma }{2(\Delta x)}+\frac{\gamma ^{2}}{2(\Delta x)^{2}}+\cdots \right] ^{-1}, \end{equation*}% where the Hawking temperature of the AO black hole is given by Eq. (\ref{temp}): \begin{equation} T_{H}=\frac{1}{4\pi }\Big(\frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}-\frac{\pi Q^{2}}{2r_{+}}\Big).
\label{temp2} \end{equation}% Near the horizon of the AO black hole, we assume $\Delta x=2r_{+}$; then one can get the GUP corrected temperature as follows \begin{eqnarray} T_{GUP} &\leq &\frac{1}{4\pi }\Big(\frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}-\frac{\pi Q^{2}}{2r_{+}}\Big)\left( 1-\frac{\gamma }{4r_{+}}+\frac{\gamma ^{2}}{8r_{+}^{2}}+\cdots \right) ^{-1} \notag \label{Tgup} \\ &\simeq &\frac{1}{4\pi }\Big(\frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}-\frac{\pi Q^{2}}{2r_{+}}\Big)\left( 1+\frac{\gamma }{4r_{+}}-\frac{\gamma ^{2}}{8r_{+}^{2}}+\cdots \right). \end{eqnarray}% Inserting the event horizon radius in terms of the black hole hair and fixing the coefficients, one can reproduce the corrected temperature given by Eq. (\ref{55}). At this point, one can recall the laws of black hole thermodynamics to determine the entropy of the AO black hole. In terms of the black hole entropy $S_{0}=4\pi r_{+}$, we find \begin{equation} S_{GUP}\leq S_{0}-\gamma \pi \ln \left( S_{0}T_{GUP}^{2}\right) -\frac{2\pi ^{2}\gamma ^{2}}{S_{0}}+\cdots . \label{Entgup} \end{equation} Comparing with the entropy (\ref{Gentropy}), we find $\gamma\sim\frac{\alpha}{2}$ and $\beta=-2\pi^{2}\gamma^{2}$. Hence, we have the following expression for the temperature: \begin{equation} T_{GUP}=\frac{1}{2\pi }\Big(\frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}-\frac{\pi Q^{2}}{2r_{+}}\Big)\Big(1+\frac{\gamma l_{p}}{\Delta x}\Big)^{-1}\left[ 1+\sqrt{1-\frac{4}{(1+\frac{\Delta x}{\gamma l_{p}})^{2}}}\right] ^{-1}. \label{texact} \end{equation}% From the above, we have deduced that the maximum temperature satisfies the following relation: \begin{equation} T_{GUP}\leq T_{max}=\frac{1}{4\pi }\Big(\frac{2r_{+}}{l^{2}}-\frac{J^{2}}{2r_{+}^{3}}-\frac{\pi Q^{2}}{2r_{+}}\Big). \end{equation} We will show that the temperature (\ref{texact}) coincides with the higher order quantum corrected temperature (\ref{55}) at low mass.
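The truncation performed in Eq. (\ref{Tgup}) can also be checked numerically. The sketch below (illustrative values only, $\gamma=1$ in Planck units) compares the exact inverse GUP factor with its truncated series: for horizon radii large compared with $\gamma$ the two agree closely, while for $r_{+}\sim\gamma$ the difference grows, which is why the GUP corrections only matter for microscopic black holes.

```python
# Compare the exact inverse GUP factor of Eq. (Tgup), with Delta x = 2 r_+,
# against its truncated expansion. For r_+ >> gamma the two agree closely,
# while for r_+ ~ gamma the truncation error becomes visible.
import math

gamma = 1.0  # illustrative value in Planck units

def factor_exact(r):
    # (1 - gamma/(4 r) + gamma^2/(8 r^2))^(-1)
    return 1.0 / (1.0 - gamma / (4 * r) + gamma**2 / (8 * r**2))

def factor_series(r):
    # truncated expansion 1 + gamma/(4 r) - gamma^2/(8 r^2)
    return 1.0 + gamma / (4 * r) - gamma**2 / (8 * r**2)

big, small = 100.0, 1.0
assert abs(factor_exact(big) - factor_series(big)) < 1e-4   # large black hole
assert abs(factor_exact(small) - factor_series(small)) > 1e-3  # Planck-size
```

For a large black hole the corrected temperature is thus indistinguishable from $T_{H}$, consistent with $T_{GUP}\leq T_{max}$ above.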
For the massive black hole, $T_{QGC}\approx T_{GUP}$ when $\alpha=0$. \section{Conclusion and discussion} In this paper, we have considered the AO black hole and studied its thermodynamics by taking quantum corrections into account (considering the effects of back reaction and GUP separately). We have computed the modified Helmholtz free energy and used it to investigate the $P$-$V$ criticality of the AO black hole. From the results obtained, we have revealed that there is no Van der Waals behavior and no critical point. Performing the numerical analysis of the specific heat, we have shown that the quantum corrected terms of the entropy signal the instability. We have also derived the QGC (with the back reaction effect) and GUP corrected expressions of the thermodynamic parameters: the temperature, heat capacity, and entropy of the AO black hole. In particular, for microscopic AO black holes, those corrections remove the thermal instability (see Fig. \ref{fig:3}). Furthermore, we have also calculated the upper limit of the Hawking temperature and proved that $T_{GUP}\leq T_{max}=T_{H}$. We calculated the corrected temperature due to the higher order quantum corrections and obtained $T_{QGC}$ for the uncharged AO black hole; the solid green line of Fig. \ref{fig4} represents $T_{QGC}$. We also calculated the corrected temperature due to the generalized uncertainty principle and obtained $T_{GUP}$ for the uncharged AO black hole ($Q=0$), as illustrated by the blue dash-dotted line of Fig. \ref{fig4}. We can see that both curves coincide at low mass ($M<1$ for $l=1$ and $J=0.2$). It is interesting to note that if we neglect $\alpha$, then $T_{QGC}\approx T_{GUP}$ (see the red dashed line of Fig. \ref{fig4}), which means that the corrections due to the generalized uncertainty principle correspond to the higher order quantum corrections.\\ \begin{figure}[h!]
\begin{center} $% \begin{array}{cccc} \includegraphics[width=95 mm]{4.eps} \end{array}% $% \end{center} \caption{Temperature in terms of the black hole mass for $l=1$, $J=0.2$, $\gamma=1$ and $\beta=1$. The green solid line is drawn for $\alpha=1$.} \label{fig4} \end{figure} To this end, we have focused on the second order corrections that were ignored in earlier studies on this subject.\\ The present study motivates further research in this direction. We plan to extend our analytical analysis to higher dimensional black holes and explore the effects of the dimension on the quantum corrected temperature and entropy. One can also consider the Einstein-Maxwell action coupled with a charged scalar field and repeat the calculations.\\ Finally, it may be interesting to consider the corrections arising from the classical geometry, the so-called extended uncertainty principle (EUP), on the thermodynamics of the black hole \cite{1809}. Following Ref. \cite{Park}, in which the Hawking-Page transition of the BTZ black hole was discussed in the framework of the EUP and its GUP corrections (GEUP), we also aim to study the EUP and GEUP corrections to the AO black hole thermodynamics. \acknowledgments This work is supported by Comisi\'on Nacional de Ciencias y Tecnolog\'ia of Chile through FONDECYT Grant N$^{\textup{o}}$ 3170035 (A. \"{O}.).
Use an unnumbered first-level section heading for the references, and use a hanging indent style, with the first line of the reference flush against the left margin and subsequent lines indented by 10 points. The references at the end of this document give examples for journal articles \cite{Samuel59}, conference publications \cite{langley00}, book chapters \cite{Newell81}, books \cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports \cite{mitchell80}, and dissertations \cite{kearns89}. Alphabetize references by the surnames of the first authors, with single author entries preceding multiple author entries. Order references for the same authors by year of publication, with the earliest first. Make sure that each reference includes all relevant information (e.g., page numbers). Please put some effort into making references complete, presentable, and consistent. If using bibtex, please protect capital letters of names and abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz in your .bib file. \section*{Software and Data} If a paper is accepted, we strongly encourage the publication of software and data with the camera-ready version of the paper whenever appropriate. This can be done by including a URL in the camera-ready copy. However, \textbf{do not} include URLs that reveal your institution or identity in your submission for review. Instead, provide an anonymous URL or upload the material as ``Supplementary Material'' into the CMT reviewing system. Note that reviewers are not required to look at this material when writing their review. \section*{Acknowledgements} \textbf{Do not} include acknowledgements in the initial version of the paper submitted for blind review. If a paper is accepted, the final camera-ready version can (and probably should) include acknowledgements. In this case, please place such acknowledgements in an unnumbered section at the end of the paper. 
\section{Introduction}
\label{sec:introduction}
Most machine learning (ML) algorithms are highly configurable. Their hyperparameters must be chosen carefully, as their choice often impacts the model performance. Even for experts, it can be challenging to find well-performing hyperparameter configurations. Automated machine learning (AutoML) systems and methods for automated hyperparameter optimization (HPO) have been shown to yield considerable efficiency gains compared to manual tuning by human experts \citep{snoek2012practical}. However, these approaches mainly return a well-performing configuration and leave users without insights into the decisions of the optimization process. Questions about the importance of hyperparameters or their effects on the resulting performance often remain unanswered. Not all data scientists trust the outcome of an AutoML system due to the lack of transparency \citep{drozdal2020trust}. Consequently, they might not deploy an AutoML model, despite all performance gains. Providing insights into the search process may help increase trust and facilitate interactive and exploratory processes: a data scientist could monitor the AutoML process and make changes to it (e.g., restricting or expanding the search space) already \emph{during} optimization to anticipate unintended results. Transparency, trust, and understanding of the inner workings of an AutoML system can be increased by interpreting its internal surrogate model. For example, Bayesian optimization (BO) trains a surrogate model to approximate the relationship between hyperparameter configurations and model performance. It is used to guide the optimization process towards the most promising regions of the hyperparameter space.
Hence, surrogate models implicitly contain information about the influence of hyperparameters. If the interpretation of the surrogate matches a data scientist's expectation, confidence in the correct functioning of the system may be increased. If these do not match, it provides an opportunity to look either for bugs in the code or for new theoretical insights. We propose to analyze surrogate models with methods from interpretable machine learning (IML) to provide insights into the results of HPO. In the context of BO, typical choices for surrogate models are flexible, probabilistic black-box models, such as Gaussian processes (GP) or random forests. Interpreting the effect of single hyperparameters on the performance of the model to be tuned is analogous to interpreting the feature effect of the black-box surrogate model. We focus on the partial dependence plot (PDP) \citep{friedman2001greedy}, which is a widely-used method\footnote{There exist various implementations \citep{greenwell:2017, scikit-learn}, extensions \citep{greenwell2018simple, goldstein2014peeking} and applications \citep{friedman2003multiple, cutler2007random}.} to visualize the average marginal effect of single features on a black-box model's prediction. When applied to surrogate models, PDPs provide information on how a specific hyperparameter influences the estimated model performance. However, applying PDPs out of the box to surrogate models might lead to misleading conclusions. Efficient optimizers such as BO tend to focus on exploiting promising regions of the hyperparameter space while leaving other regions less explored. Therefore, a sampling bias in input space is introduced, which in turn can lead to a poor fit and biased interpretations in underexplored regions of the space. \textbf{Contributions:} We study the problem of sampling bias in experimental data produced by AutoML systems and the resulting bias of the surrogate model, and assess its implications for PDPs.
We then derive an uncertainty measure for PDPs of probabilistic surrogate models. In addition, we propose a method that splits the hyperparameter space into interpretable sub-regions of varying uncertainty to obtain sub-regions with more reliable and confident PDP estimates. In the context of BO, we provide evidence for the usefulness of our proposed methods on a synthetic function and in an experimental study in which we optimize the architecture and hyperparameters of a deep neural network. Our Supplementary Material provides (A) more background related to uncertainty estimates, (B) notes on how our methods are applied to hierarchical hyperparameter spaces, (C) details on the experimental setup and more detailed results, (D) a link to the source code. \textbf{Reproducibility and Open Science}: The implementation of the proposed methods as well as reproducible scripts for the experimental analysis are provided in a public git-repository\footnote{\url{https://github.com/slds-lmu/paper_2021_xautoml}}. \section{Background and Related Work} \label{sec:background} Recent research has begun to question whether the evaluation of an AutoML system should be purely based on the generated models' predictive performance without considering interpretability \citep{hutter14, pfisterer:2019, Freitas:2019, xanthopoulos:2020}. Interpreting AutoML systems can be categorized as (1)~interpreting the resulting ML model on the underlying dataset, or (2) interpreting the HPO process itself. In this paper, we focus on the latter. Let $c: \Lambda \to \R$ be a \emph{black-box} cost function, mapping a hyperparameter configuration \mbox{$\lambdab = \left(\lambda_1, ..., \lambda_d\right)$} to the model error\footnote{Typically, the model error is estimated via cross-validation or hold-out testing.} obtained by a learning algorithm with configuration $\lambdab$. The hyperparameter space may be mixed, containing categorical and continuous hyperparameters. 
The goal of HPO is to find $\lambdab^\ast \in \argmin\nolimits_{\lambdab \in \Lambda} c(\lambdab).$ Throughout the paper, we assume that a surrogate model $\hat c: \Lambda \to \R$ is given as an approximation to $c$. If the surrogate is assumed to be a GP, $\hat c(\lambdab)$ is a random variable following a Gaussian posterior distribution. In particular, for any finite indexed family of hyperparameter configurations $\left(\lambdab^{(1)}, ..., \lambdab^{(k)} \right) \in \Lambda^k$, the vector of estimated performance values is Gaussian with a posterior mean $\bm{\hat m} = \left(\hat m\left(\lambdab^{(i)}\right)\right)_{i = 1, ..., k}$ and covariance $\bm{\hat K} = \left(\hat k\left(\lambdab^{(i)}, \lambdab^{(j)}\right)\right)_{i, j = 1, ..., k}$. \textbf{Hyperparameter Importance.} Understanding which hyperparameters influence model performance can provide valuable insights into the tuning strategy \citep{probst2018tunability}. To quantify the relevance of hyperparameters, models that inherently quantify feature relevance -- e.g., GPs with an automatic relevance determination (ARD) kernel \citep{neil1996bayesian} -- can be used as surrogate models. \citet{hutter14} quantified the importance of hyperparameters based on a random forest fitted on data generated by BO, for which the importance of both the main and the interaction effects of hyperparameters was calculated by a functional ANOVA approach. Similarly, \citet{sharma:2019} quantified the hyperparameter importance of residual neural networks. These works highlight how useful it is to quantify the importance of hyperparameters. However, importance scores do not show \emph{how} a specific hyperparameter affects the model performance according to the surrogate model. Therefore, we propose to visualize the assumed marginal effect of a hyperparameter. A model-agnostic interpretation method that can be used for this purpose is the PDP.
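The Gaussian posterior over a finite set of configurations described above can be sketched in a few lines of NumPy. This is a minimal illustration only, assuming a zero-mean GP with a squared-exponential kernel; \texttt{length\_scale} and \texttt{noise} are illustrative choices, not tuned values:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between row-wise configuration matrices."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / length_scale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-6, length_scale=1.0):
    """Posterior mean vector and covariance matrix of a zero-mean GP at the
    query configurations, given observed (configuration, cost) pairs."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train, length_scale)
    K_ss = rbf_kernel(X_query, X_query, length_scale)
    K_inv = np.linalg.inv(K)
    m = K_s @ K_inv @ y_train          # posterior mean, \hat m
    cov = K_ss - K_s @ K_inv @ K_s.T   # posterior covariance, \hat K
    return m, cov
```

At observed configurations the posterior variance collapses towards the noise level, while it grows in under-explored regions of the space.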
\textbf{PDPs for Hyperparameters.} Let $S\subset \{1, 2, ..., d\}$ denote an index set of features, and let $C = \{1, 2, ..., d\} \setminus S$ be its complement. The partial dependence (PD) function \citep{friedman2001greedy} of $c: \Lambda \to \R$ for hyperparameter(s) $S$ is defined as\footnote{To keep notation simple, we denote $c(\lambdab)$ as a function of two arguments $(\lambdab_S, \lambdab_C)$ to differentiate components in the index set $S$ from those in the complement. The integral shall be understood as a multiple integral of $c$ where $\lambdab_j$, $j \in C$, are integrated out. } \begin{eqnarray} c_{S}(\lambdab_S) := \E_{\lambdab_C}\left[c(\lambdab)\right]=\int_{\Lambda_C} c(\lambdab_S, \lambdab_C)~\textrm{d}\mathbb{P}(\lambdab_C). \label{eq:pdp} \end{eqnarray} When analyzing the PDP of hyperparameters, we are usually interested in how their values $\lambdab_S$ impact model performance uniformly across the hyperparameter space. In line with prior work~\citep{hutter14}, we therefore assume $\P$ to be the uniform distribution over $\Lambda_C$. Computing $c_S(\lambdab_S)$ exactly is usually not possible because $c$ is unknown and expensive to evaluate in the context of HPO. Thus, the posterior mean $\hat{m}$ of the probabilistic surrogate model $\hat{c}(\lambdab)$ is commonly used as a proxy for $c$. Furthermore, the integral may not be analytically tractable for arbitrary surrogate models $\hat c$. Hence, the integral is approximated by Monte Carlo integration, i.e., \begin{eqnarray} \hat c_S\left(\lambdab_S\right) &=& \frac{1}{n} \sum\nolimits_{i = 1}^n \hat m\left(\lambdab_S, \lambdab_C^{(i)}\right) \label{eq:estimate_pdp} \end{eqnarray} for a sample $\left(\lambdab_C^{(i)}\right)_{i = 1, ..., n} \sim \P(\lambdab_C)$. $\hat m \left(\lambdab_S, \lambdab_C^{(i)}\right)$ represents the marginal effect of $\lambdab_S$ for one specific instance $i$. 
Individual conditional expectation (ICE) curves \citep{goldstein2014peeking} visualize the marginal effect of the $i$-th observation by plotting the value of $\hat m \left(\lambdab_S, \lambdab_C^{(i)}\right)$ against $\lambdab_S$ for a set of grid points\footnote{Grid points are typically chosen as an equidistant grid or sampled from $\P(\lambdab_S)$. The granularity $G$ is chosen by the user. For categorical features, the granularity typically corresponds to the number of categories.} $\lambdab_S^{(g)}\in \Lambda_S$, $g \in \{1, ..., G\}$. Analogously, the PDP visualizes $\hat c_{S}(\lambdab_S)$ against the grid points. Following from Eq.~\ref{eq:estimate_pdp}, the PDP visualizes the average over all ICE curves. In HPO, the marginal predicted performance is a related concept. Instead of approximating the integral via Monte Carlo, the integral over $\hat c$ is computed exactly. \citet{hutter14} propose an efficient approach to compute this integral for random forest surrogate models. \textbf{Uncertainty Quantification in PDPs.} Quantifying the uncertainty of PDPs provides additional information about the reliability of the mean estimator. \citet{hutter14} quantified the model uncertainty specifically for random forests as surrogates in BO by calculating the standard deviation of the marginal predictions of the individual trees. However, this procedure is not applicable to general probabilistic surrogate models, such as the commonly used GP. There are approaches that quantify the uncertainty for ML models that do not provide uncertainty estimates out-of-the-box. \citet{cafri:2016} suggested a bootstrap approach for tree ensembles to quantify the uncertainties of effects based on PDPs. Another approach to quantify the uncertainty of PDPs is to leverage the ICE curves. For example, \citet{greenwell:2017} implemented a method that marginalizes over the mean $\pm$ standard deviation of the ICE curves for each grid point. 
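The Monte Carlo PD estimate of Eq.~\eqref{eq:estimate_pdp} and the ICE curves it averages over can be sketched as follows. This is a minimal illustration for a single hyperparameter in $S$; \texttt{m\_hat}, standing for the posterior mean $\hat m$ of an arbitrary surrogate, is an assumption of the sketch:

```python
import numpy as np

def ice_and_pdp(m_hat, grid_S, samples_C):
    """ICE curves and their average, the PDP estimate of Eq. (2).

    m_hat    : callable mapping an array of full configurations to mean predictions
    grid_S   : (G,) grid points for the hyperparameter in S
    samples_C: (n, d_C) Monte Carlo sample of the remaining hyperparameters
    """
    n, G = len(samples_C), len(grid_S)
    ice = np.empty((n, G))
    for i, lam_C in enumerate(samples_C):
        # marginal effect of lambda_S for one fixed lambda_C^{(i)}
        full = np.column_stack([grid_S, np.tile(lam_C, (G, 1))])
        ice[i] = m_hat(full)
    pdp = ice.mean(axis=0)  # the PDP is the average over all ICE curves
    return ice, pdp
```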
However, this approach quantifies the underlying uncertainty of the data at hand rather than the model uncertainty, as explained in Appendix \ref*{sec:app_choice_uncertainty}. A model-agnostic estimate based on uncertainty estimates for probabilistic models is missing so far. \textbf{Subgroup PDPs.} Recently, a new research direction has emerged that concentrates on finding more reliable PDP estimates within subgroups of observations. \citet{molnar2020modelagnostic} focused on problems in PDP estimation with correlated features. To that end, they apply transformation trees to find homogeneous subgroups and then visualize a PDP for each subgroup. \citet{groemping:2020} looked at the same problem and also used subgroup PDPs, where ICE curves are grouped regarding a correlated feature. \citet{britton:2019} applied a clustering approach to group ICE curves to find interactions between features. However, none of these approaches aims at finding subgroups in which PDP estimates are reliable, i.e., have low uncertainty. Additionally, to the best of our knowledge, nothing similar exists for analyzing experimental data created by HPO. \section{Biased Sampling in HPO} \label{sec:bias} Visualizing the marginal effect of hyperparameters of surrogate models via PDPs can be misleading. We show that this problem is due to the sequential nature of BO, which generates dependent instances (i.e., hyperparameter configurations) and thereby introduces a sampling bias and a resulting model bias. In contrast to grid search or random search, efficient optimizers like BO save computational resources by exploiting promising regions of the hyperparameter space while leaving other regions less explored (see Figure \ref{fig:sampling_bias}). Consequently, predictions of surrogate models are usually more accurate with less uncertainty in well-explored regions and less accurate with high uncertainty in under-explored regions. This model bias also affects the PD estimate (see Figure \ref{fig:uncertainty_ice_curves}).
ICE curves may be biased and less confident if they are computed in poorly-learned regions where the model has not seen much data before. Under the assumption of uniformly distributed hyperparameters, poorly-learned regions are incorporated in the PD estimate with the same weight as well-learned regions. ICE curves belonging to regions with high uncertainty may obfuscate well-learned effects of ICE curves belonging to other regions when they are aggregated to a PDP. Hence, the model bias may also lead to a less reliable PD estimate. PDPs visualizing only the mean estimator of Eq. \eqref{eq:estimate_pdp} do not provide insights into the reliability of the PD estimate and how it is affected by the described model bias. \begin{figure}[t] \centering \begin{minipage}[t]{.48\textwidth} \includegraphics[width=\textwidth]{figures/sampling_bias.png} \caption{Illustration of the sampling bias when optimizing the $2D$ Styblinski Tang function with BO and the Lower Confidence Bound (LCB) acquisition function $a(\lambdab) = \hat m(\lambdab) + \tau \cdot \hat s (\lambdab)$ for $\tau = 0.1$ (left) and $\tau = 2$ (middle) vs. data sampled uniformly at random (right). } \label{fig:sampling_bias} \end{minipage}% \hfill \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/ice_curves_example.png} \caption{The two horizontal cuts (left) yield two ICE curves (right) showing the mean prediction and uncertainty band against $\lambda_1$ for $\hat c$ with $\tau = 0.1$ on the $2D$ Styblinski-Tang function. The upper ICE curve deviates more from the true effect (black) and shows a higher uncertainty. 
} \label{fig:uncertainty_ice_curves} \end{minipage} \end{figure} \section{Quantifying Uncertainty in PDPs} \label{sec:uncertainty} \begin{wrapfigure}[13]{R}{0.5\textwidth} \vspace{-2em} \centering \includegraphics[width = \linewidth]{figures/pdp_comparison.png} \caption{PDPs (blue) with confidence bands for surrogates trained on data created by BO and LCB with $\tau = 0.1$ (left), $\tau = 1$ (middle) and uniform i.i.d. dataset (right) vs. the true PD (black). } \label{fig:bias_pdp_test} \end{wrapfigure} Pointwise uncertainty estimates of a probabilistic model provide insights into the reliability of the prediction $\hat c(\lambdab)$ for a specific configuration $\lambdab$. This uncertainty directly correlates with how explored the region around $\lambdab$ is. Hence, including the model's uncertainty structure into the PD estimate enables users to understand in which regions the PDP is more reliable and which parts of the PDP must be cautiously interpreted.\footnote{Note that we aim at representing model uncertainty in a PD estimate, and not the variability of the mean prediction (see Appendix \ref*{sec:app_choice_uncertainty} for a more detailed justification). } We now extend the PDP of Eq. \eqref{eq:estimate_pdp} to probabilistic surrogate models $\hat c$ (e.g., a GP). Let $\lambdab_S$ be a fixed grid point and $\left( \lambdab_C^{(i)}\right)_{i = 1, ..., n} \sim \P\left(\lambdab_C\right)$ a sample that is used to compute the Monte Carlo estimate of Eq. \eqref{eq:estimate_pdp}. 
The vector of predicted performances at the grid point $\lambdab_S$ is $\bm{\hat c}\left(\lambdab_S\right) = \left(\hat c\left(\lambdab_S, \lambdab_C^{(i)}\right)\right)_{i = 1, ..., n}$ with (posterior) mean $\bm{\hat m}\left(\lambdab_S\right) := \left(\hat m\left(\lambdab_S, \lambdab_C^{(i)}\right)\right)_{i = 1, ..., n}$ and a (posterior) covariance $\bm{\hat K}\left(\lambdab_S\right) := \left(\hat k\left(\left(\lambdab_S, \lambdab_C^{(i)}\right), \left(\lambdab_S, \lambdab_C^{(j)}\right)\right)\right)_{i, j = 1, ..., n}$. Thus, $\hat c_{S}\left(\lambdab_S\right) = \frac{1}{n} \sum\nolimits_{i = 1}^n \hat c\left(\lambdab_S, \lambdab_C^{(i)}\right)$ is a random variable itself. The expected value of $\hat c_{S}\left(\lambdab_S\right)$ corresponds to the PD of the posterior mean function $\hat m$ at $\lambdab_S$, i.e.: \begin{eqnarray} \hat m_S\left(\lambdab_S\right) &=& \E_{\bm{\hat c}} \left[\hat c_{S}\left(\lambdab_S\right)\right] = \E_{\bm{\hat c}} \left[\frac{1}{n}\sum\nolimits_{i = 1}^n \hat c\left(\lambdab_S, \lambdab_C^{(i)}\right)\right] =\frac{1}{n} \sum\nolimits_{i = 1}^n \hat m\left(\lambdab_S, \lambdab_C^{(i)}\right). \label{eq:pdp_expectation} \end{eqnarray} The variance of $\hat c_{S}\left(\lambdab_{S}\right)$ is \begin{eqnarray} \hat s_S^2(\lambdab_S) &=&\mathbb{V}_{\bm{\hat c}}\left[\hat c_{S}\left(\lambdab_S\right)\right] = \mathbb{V}_{\bm{\hat c}} \left[\frac{1}{n} \sum\nolimits_{i = 1}^n \hat c\left(\lambdab_S, \lambdab_C^{(i)}\right) \right] = \frac{1}{n^2} \bm{1}^\top \bm{\hat K}\left(\lambdab_S\right) ~ \bm{1}. \label{eq:pdp_gp_var} \end{eqnarray} For the above estimate, it is important that the kernel is correctly specified such that the covariance structure is modeled properly by the surrogate model. Eq. 
\eqref{eq:pdp_gp_var} can be approximated empirically by treating the pairwise covariances as unknown, i.e.: \begin{eqnarray} \hat s_S^2\left(\lambdab_S\right) &\approx& \frac{1}{n} \sum\nolimits_{i = 1}^n \bm{\hat K}\left(\lambdab_S\right)_{i,i}. \label{eq:pdp_gp_var_nocov} \end{eqnarray} In Appendix~\ref*{sec:app_covariances}, we show empirically that this approximation is less sensitive to kernel misspecifications. Please note that the variance estimate and the mean estimate can also be applied to other probabilistic models, such as GAMLSS\footnote{Generalized additive models for location, scale and shape}, transformation trees, or a random forest. An example for PDPs with uncertainty estimates is shown in Figure \ref{fig:bias_pdp_test} for different degrees of a sampling bias. \section{Regional PDPs via Confidence Splitting} \label{sec:splitting} As discussed in Section \ref{sec:bias}, (efficient) optimization may imply that the sampling is biased, which in turn can produce misleading interpretations when IML is naively applied. We now aim to identify sub-regions $\Lambda^\prime \subset \Lambda$ of the hyperparameter space in which the PD can be estimated with high confidence, and separate those from sub-regions in which it cannot be estimated reliably. In particular, we identify sub-regions in which poorly-learned effects do not obfuscate the well-learned effects along each grid point, thereby allowing the user to draw conclusions with higher confidence. By partitioning the entire hyperparameter space through a tree-based approach into disjoint and interpretable sub-regions, a more detailed understanding of the sampling process and hyperparameter effects is achieved. Users can either study the hyperparameter effect of a (confident) sub-region individually or understand the exploration-exploitation sampling of HPO by considering the complete tree structure. The result of this procedure for a single split is shown in Figure \ref{fig:pdp_explain}. 
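The uncertainty-aware PD estimate of Eqs.~\eqref{eq:pdp_expectation}, \eqref{eq:pdp_gp_var} and \eqref{eq:pdp_gp_var_nocov}, on which the sub-regional estimates below build, can be sketched as follows. This is a minimal illustration; \texttt{post\_mean} and \texttt{post\_cov} are assumed accessors to the posterior mean $\hat m$ and covariance $\bm{\hat K}$ of the surrogate:

```python
import numpy as np

def pdp_with_uncertainty(post_mean, post_cov, grid_S, samples_C, exact=True):
    """Mean (Eq. (3)) and variance of the PD estimate at each grid point,
    using either the exact form (Eq. (4)) or the approximation that treats
    pairwise covariances as unknown (Eq. (5))."""
    n = len(samples_C)
    means, variances = [], []
    for lam_S in grid_S:
        # full configurations (lambda_S, lambda_C^{(i)}), i = 1, ..., n
        full = np.column_stack([np.full((n, 1), lam_S), samples_C])
        m = post_mean(full)   # \hat m(lambda_S, lambda_C^{(i)})
        K = post_cov(full)    # n x n posterior covariance \hat K(lambda_S)
        means.append(m.mean())
        if exact:
            variances.append(K.sum() / n**2)     # (1/n^2) 1^T K 1
        else:
            variances.append(np.diag(K).mean())  # (1/n) sum_i K_ii
    return np.array(means), np.array(variances)
```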
The PD estimate on the \emph{entire} hyperparameter space $\Lambda$ is computed based on the Monte Carlo sample $(\lambdab_C^{(i)})_{i \in \mathcal{N}} \sim \P(\lambdab_C)$, $\mathcal{N} := \{1, 2, ..., n\}$. We now introduce the PD estimate on a \emph{sub-region} $\Lambda^\prime \subset \Lambda$ as the analogous estimate that uses only the subsample $(\lambdab_C^{(i)})_{i \in \mathcal{N}^\prime}$ with $\mathcal{N}^\prime = \{i \in \mathcal{N}\}_{\lambdab^{(i)} \in \Lambda^\prime}$. Since we are interested in the marginal effect of the hyperparameter(s) $S$ at each $\lambdab_S \in \Lambda_S$, we will usually visualize the PD for the whole range $\Lambda_S$. Thus, all obtained sub-regions should be of the form $\Lambda^\prime = \Lambda_S \times \Lambda_C^\prime$ with $\Lambda_C^\prime\subset \Lambda_C.$ This corresponds to an average of the ICE curves with indices $i \in \mathcal{N}^\prime$. The pseudo-code to partition a hyperparameter (sub-)space $\Lambda$ and corresponding sample $(\lambdab_C^{(i)})_{i \in \mathcal{N}} \in \Lambda_C$, $\mathcal{N} \subseteq \{1, ..., n\}$, into two child regions is shown in Algorithm~\ref{alg:tree}. This splitting is recursively applied in a CART\footnote{Classification and regression trees}-like procedure~\citep{breiman:1984} to expand a full tree structure, with the usual stopping criteria (e.g., a maximum number of splits, a minimum size of a region, or a minimum improvement in each node). In each leaf node, the sub-regional PDP and its corresponding uncertainty estimate are computed by aggregating over all contained ICE curves. The criterion to evaluate a specific partitioning is based on the idea of grouping ICE curves with similar uncertainty structure.
To be more exact, we evaluate the impurity of a PD estimate on a sub-region $\Lambda^\prime$ with the help of the associated set of observations $\mathcal{N}^\prime = \{i \in \mathcal{N}\}_{\lambdab_C^{(i)} \in \Lambda_C^\prime}$, also referred to as a node, as follows: For each grid point $\lambdab_S$, we use the L2 loss in $\mathcal{L}\left(\lambdab_S, \mathcal{N}^\prime\right)$ to evaluate how the uncertainty varies across all ICE estimates $i \in \mathcal{N}^\prime$ using $\hat s^2_{S|\mathcal{N}^\prime} \left(\lambdab_S\right):= \frac{1}{|\mathcal{N}^\prime|}\sum_{i \in \mathcal{N}^\prime} \hat s^2\left(\lambdab_S, \lambdab_C^{(i)}\right)$ and aggregate the loss $\mathcal{L}\left(\lambdab_S, \mathcal{N}^\prime\right)$ over all grid points in $\mathcal{R}_{L2}(\mathcal{N}^\prime)$: \begin{eqnarray} \mathcal{L}\left(\lambdab_S, \mathcal{N}^\prime\right) = \sum\nolimits_{i \in \mathcal{N}^\prime}\left(\hat s^2\left(\lambdab_S, \lambdab_C^{(i)}\right) - \hat s^2_{S|\mathcal{N}^\prime} \left(\lambdab_S\right)\right)^2 \text{ and } \mathcal{R}_{L2}(\mathcal{N}^\prime) = \sum\nolimits_{g = 1}^G \mathcal{L} (\lambdab_S^{(g)}, \mathcal{N}^\prime).
\label{eq:splitting_crit_L2}
\end{eqnarray}
\begin{wrapfigure}{l}{0.5\textwidth}
\begin{minipage}[htb]{0.48\textwidth}
\begin{algorithm}[H]
\caption{Tree-based Partitioning}
\label{alg:tree}
\begin{algorithmic}
\STATE \textbf{Input:} $\mathcal{N}$
\FOR{$j \in C$}
\FOR{every split $t$ on hyperparameter $\lambda_j$}
\STATE $\mathcal{N}_l^{j, t} = \{ i \in \mathcal{N}\}_{\lambda_j^{(i)} \leq t}$
\STATE $\mathcal{N}_r^{j, t} = \{i \in \mathcal{N}\}_{\lambda_j^{(i)} > t}$
\STATE $\mathcal{I}(j, t) = \mathcal{R}_{L2}(\mathcal{N}_l^{j, t}) + \mathcal{R}_{L2}(\mathcal{N}_r^{j, t})$
\ENDFOR
\ENDFOR
\STATE Choose $\left(j^\ast, t^\ast\right) \in \argmin\nolimits_{j, t} \mathcal{I}(j, t)$
\STATE Return $\mathcal{N}_l^{j^\ast, t^\ast}$ and $\mathcal{N}_r^{j^\ast, t^\ast}$
\end{algorithmic}
\end{algorithm}
\centering
\end{minipage}
\end{wrapfigure}
Hence, we measure the pointwise $L_2$-distance between ICE curves of the variance function $\hat s^2(\lambdab_S, \lambdab_C^{(i)})$ and its PD estimate $\hat s^2_{S|\mathcal{N}^\prime}\left(\lambdab_S\right)$ within a sub-region $\mathcal{N}^\prime$. This seems reasonable, as ICE curves in well-explored regions of the search space should, on average, have a lower uncertainty than those in less-explored regions. However, since we only split according to hyperparameters in $C$ but not in $S$, the partitioning does not cut off less explored regions w.r.t. $\lambdab_S$. Thus, the chosen split criterion groups the ICE curves of the uncertainty estimate such that sub-regions associated with low costs~$c$ (and thus high relevance for a user) are more confident in well-explored regions of $\lambdab_S$ and less confident in under-explored regions. Figure \ref{fig:ice_sd} shows that ICE curves of the uncertainty measure with high uncertainty over the entire range of $\lambdab_S$ are grouped together (right sub-region).
Those with low uncertainty close to the optimal configuration of $\lambdab_S$ and increasing uncertainties for less suitable configurations are grouped together by curve similarities in the left sub-region. The respective PDPs are illustrated in Figure \ref{fig:pdp_explain}, where the confidence band in the left sub-region is narrower than that of the global PDP, especially for grid points close to the optimal value of $\lambdab_S$. Hence, by grouping observations with similar ICE curves of the variance function, the resulting sub-regional PDPs with confidence bands provide the user with information about which sub-regions of $\Lambda_C$ are well-explored and lead to more reliable PDP estimates. Furthermore, the user will know which ranges of $\lambdab_S$ can be interpreted reliably and which ones need to be regarded with caution. \begin{figure}[t] \begin{minipage}[htb]{0.48\textwidth} \centering \includegraphics[width = 0.9\linewidth]{figures/splitting_expl_sd_single.png} \captionof{figure}{ICE curves of $\hat{s}$ of $\lambdab_S$ for the left (green) and right (blue) sub-region after the first split. The darker lines represent the respective PDPs.
The orange vertical line marks the value $\lambdab_S$ of the optimal configuration.} \label{fig:ice_sd} \end{minipage}\hfill \begin{minipage}[htb]{0.48\textwidth} \scalebox{0.8}{ \hspace{40pt} \begin{tikzpicture} \usetikzlibrary{arrows} \usetikzlibrary{shapes} \tikzset{treenode/.style={draw, circle, font=\small}} \tikzset{line/.style={draw, thick}} \node [treenode] (a0) {$\mathcal{N}$}; \node [treenode, below=0.4cm, at=(a0.south), xshift=-2.0cm] (a1) {$\mathcal{N}_l$}; \node [treenode, below=0.4cm, at=(a0.south), xshift=2.0cm] (a2) {$\mathcal{N}_r$}; \path [line] (a0.south) -- + (0,-0.2cm) -| (a1.north) node [midway, above] {$\lambda_j< 6.9$}; \path [line] (a0.south) -- +(0,-0.2cm) -| (a2.north) node [midway, above] {$\lambda_j\geq6.9$}; \end{tikzpicture} } \\ \includegraphics[width = 0.49\linewidth]{figures/shuttle_left_node.png}\includegraphics[width = 0.49\linewidth]{figures/shuttle_right_node.png} \captionof{figure}{Example of two estimated PDPs (blue line) and $95\%$ confidence bands after one partitioning step. The orange vertical line is the value of $\lambdab_S$ from the optimal configuration; the black curve is the true PD $c_S(\lambdab_S)$.} \label{fig:pdp_explain} \end{minipage} \end{figure} To sum up, the splitting procedure provides interpretable, disjoint sub-regions of the hyperparameter space. Based on the defined impurity measure, PDPs with high reliability can be identified and analyzed. In particular, the method provides more confident and reliable estimates in the sub-region containing the optimal configuration. Which PDPs are most interesting to explore depends on the question the user would like to answer. If the main interest lies in understanding the optimization and exploring the sampling process, a user might want to keep the number of sub-regions relatively low by performing only a few partitioning steps.
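For numeric hyperparameters, the split search of Algorithm~\ref{alg:tree} with the impurity $\mathcal{R}_{L2}$ of Eq.~\eqref{eq:splitting_crit_L2} can be sketched as follows. This is a minimal illustration; \texttt{s2\_ice} is assumed to hold the ICE curves of the variance function $\hat s^2$ evaluated on the grid:

```python
import numpy as np

def region_impurity(s2_ice):
    """R_L2: pointwise L2 deviation of the variance ICE curves from their
    mean curve.  s2_ice: (|N'|, G) matrix of s^2(lambda_S^{(g)}, lambda_C^{(i)})."""
    return ((s2_ice - s2_ice.mean(axis=0)) ** 2).sum()

def best_split(samples_C, s2_ice):
    """One partitioning step: for every hyperparameter j in C and every
    candidate threshold t, split the node and keep the split minimizing
    the summed impurity of the two children."""
    best = (None, None, np.inf)
    for j in range(samples_C.shape[1]):
        # exclude the maximum so that the right child is never empty
        for t in np.unique(samples_C[:, j])[:-1]:
            left = samples_C[:, j] <= t
            imp = region_impurity(s2_ice[left]) + region_impurity(s2_ice[~left])
            if imp < best[2]:
                best = (j, t, imp)
    return best  # (hyperparameter index j*, threshold t*, impurity)
```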
Subsequently, one would investigate the overall structure of the sub-regions and the individual sub-regional PDPs. If users are more interested in interpreting hyperparameter effects only in the most relevant sub-region(s), they may want to split deeper and only look at sub-regions that are more confident than the global PDP. Due to the nature of the splitting procedure, the PDP estimate on the entire hyperparameter space is a weighted average of the respective sub-regional PDPs. Hence, the global PDP estimate is decomposed into several sub-regional PDP estimates. Furthermore, note that the proposed method does not assume a numeric hyperparameter space, since the uncertainty estimates as well as ICE and PDP estimates can also be calculated for categorical features. Thus, it is applicable to problems with mixed spaces as long as a probabilistic surrogate model -- and particularly its uncertainty estimates -- are available. In Appendix \ref*{sec:app_hierarchical} we describe how our method is applied to hierarchical hyperparameter spaces. Since the proposed method is an instance of the CART algorithm, finding the optimal split for a categorical variable with $q$ levels generally involves checking $2^q$ subsets. This becomes computationally infeasible for high values of $q$. It remains an open question for future work whether this can be sped up by an optimal procedure as in regression with L2 loss \citep{fisher1958grouping} and binary classification \citep{breiman1984trees} or by a clever heuristic as for multiclass classification \citep{wright2019splitting}. \section{Experimental Analysis} \label{sec:experiments} In this section, we validate the effectiveness of the introduced methods. We formulate two main hypotheses: First, experimental data affected by the sampling bias lead to biased surrogate models and thus to unreliable and misleading PDPs.
Second, the proposed partitioning allows us to identify an interpretable sub-region (around the optimal configuration) that yields a more reliable and confident PDP estimate. In a first experiment, we apply our methods to BO runs on a synthetic function. In this controlled setup, we investigate the validity of our hypotheses with regard to problems of different dimensionality and different degrees of sampling bias. In a second experiment, we evaluate our PDP partitioning in the context of HPO for neural networks on a variety of tabular datasets. We assess the sampling bias of the optimization design points by comparing their empirical distribution to a uniform distribution via Maximum Mean Discrepancy (MMD) \citep{gretton:2012, molnar2020modelagnostic}, which is covered in more detail in Appendix~\ref*{sec:app_experimental_metrics}. We measure the reliability of a PDP, i.e., the degree to which a user can rely on the PD estimate, by comparing it to the true PD $c_S(\lambdab_S)$ as defined in Eq. \eqref{eq:pdp}. More specifically, we compute the negative log-likelihood (NLL) of $c_S(\lambdab_S)$ under the distribution of $\hat c_S\left(\lambdab_S\right)$ pointwise for every grid point $\lambdab_S^{(g)}$. The confidence of a PDP is illustrated by the width of its confidence bands $\hat m_S\left(\lambdab_S\right) \pm q_{1 - \alpha/2}\cdot\hat s_S\left(\lambdab_S\right)$, with $q_{1 - \alpha/2}$ denoting the $(1 - \alpha / 2)$-quantile of a standard normal distribution. We measure the confidence by assessing $\hat s_S(\lambdab_S)$ pointwise for every grid point. In particular, we consider the mean confidence (MC) across all grid points $\frac{1}{G} \sum_{g = 1}^G \hat s\left(\lambdab_S^{(g)}\right)$ as well as the confidence at the grid point closest to $\hat{\lambdab}_S$, abbreviated OC, with $\hat{\lambdab}$ being the best configuration evaluated by the optimizer.
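As a minimal sketch of these pointwise metrics, a hypothetical helper (array names are illustrative, not part of our implementation) that assumes a Gaussian distribution $\mathcal{N}(\hat m_S, \hat s_S^2)$ for the PD estimate at each grid point:

```python
import numpy as np

def pdp_metrics(m_hat, s_hat, c_true):
    """Reliability (mean NLL) and confidence (MC) of a PDP over G grid points.

    m_hat, s_hat : posterior mean and std of the PD estimate at each grid point
    c_true       : true partial dependence at each grid point
    """
    m_hat, s_hat, c_true = map(np.asarray, (m_hat, s_hat, c_true))
    # Gaussian negative log-likelihood of the true PD, pointwise per grid point
    nll = 0.5 * np.log(2.0 * np.pi * s_hat**2) + (c_true - m_hat)**2 / (2.0 * s_hat**2)
    mc = s_hat.mean()  # mean confidence: average width scale of the bands
    return nll.mean(), mc
```

The OC variant would simply read off `s_hat` at the single grid point closest to the optimizer's best configuration instead of averaging.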
To evaluate the performance of the confidence splitting, we report the above metrics on the sub-region that contains the best configuration evaluated by the optimizer, assuming that this region is of particular interest for a user of HPO. PDPs are computed with regard to single features for $G = 20$ equidistant grid points and $n = 1000$ Monte Carlo samples. \subsection{BO on a Synthetic Function} \label{sec:hpo_synth} We consider the $d$-dimensional Styblinski-Tang function $c: \left[-5, 5\right]^d \to \R$, $\lambdab \mapsto \frac{1}{2} \sum_{i = 1}^d \left(\lambdab_i^4 - 16 \lambdab_i^2 + 5 \lambdab_i\right)$ for $d \in \{3, 5, 8\}$. Since the PD is the same for each dimension $i$, we only present the effects of $\lambdab_1$. We performed BO with a GP surrogate model with a Mat\'{e}rn-3/2 kernel and the LCB acquisition function $a(\lambdab) = \hat m(\lambdab) - \tau \cdot \hat s(\lambdab)$ with different values $\tau \in \{0.1, 1, 5\}$ to control the sampling bias. We compute the global PDP with confidence bands estimated according to Eq. \eqref{eq:pdp_gp_var_nocov} for the GP surrogate model $\hat c$ that was fitted in the \emph{last} iteration of BO. We ran Algorithm~\ref{alg:tree} and computed the PDP in the sub-region containing the optimal configuration. All computations were repeated $30$ times. Further details on the setup are given in Appendix~\ref*{sec:app_experimental_design_synth}. \begin{minipage}[t]{0.4\textwidth} \includegraphics[width = \linewidth]{figures/exploration_vs_confidence.png} \captionof{figure}{The figure presents the MC (left) and the NLL (right) for $d \in \{3, 5, 8\}$ for a high ($\tau = 0.1$), medium ($\tau = 1$), and low ($\tau = 5$) sampling bias across $30$ replications.
With a lower sampling bias, we obtain narrower confidence bands and a lower NLL.} \label{fig:reliability_and_confidence} \end{minipage}\hfill \begin{minipage}[htb]{0.56\textwidth} \begin{scriptsize} \captionof{table}{The table shows the relative improvement of the MC and the NLL via Algorithm \ref{alg:tree} with $1$ and $3$ splits, compared to the global PDP, along with the sampling bias for $\tau = 0.1$ (high), $\tau = 1$ (medium), and $\tau = 5$ (low). Results are averaged across $30$ replications.}\label{tab:tree_depth} \begin{tabular}{cccccc} \toprule & & \multicolumn{2}{c}{$\delta$ MC (\%)} & \multicolumn{2}{c}{$\delta$ NLL (\%)} \\ \cmidrule{3-4} \cmidrule{5-6} $d$ & MMD & $n_\textrm{sp}=1$ & $n_\textrm{sp}=3$ & $n_\textrm{sp}=1$ & $n_\textrm{sp}=3$\\ \midrule 3 & low (0.18) & 7.65 & 13.64 & 5.89 & 10.92 \\ 3 & medium (0.51) & 12.86 & 36.92 & 4.78 & 7.70 \\ 3 & high (0.56) & 16.52 & 34.84 & 2.77 & -1.62 \\ 5 & low (0.15) & 6.63 & 15.45 & 2.82 & 6.05 \\ 5 & medium (0.45) & 19.67 & 37.28 & 4.05 & 7.80 \\ 5 & high (0.53) & 11.99 & 33.06 & -3.86 & -1.93 \\ 8 & low (0.11) & 3.58 & 9.67 & 0.84 & 2.40 \\ 8 & medium (0.42) & 8.86 & 23.03 & 1.51 & 3.30 \\ 8 & high (0.56) & 6.59 & 19.84 & 1.53 & 4.29 \\ \bottomrule \end{tabular} \end{scriptsize} \end{minipage} \hfill As presented in Figure \ref{fig:reliability_and_confidence}, the PDPs for surrogate models trained on \emph{less biased} data (measured by the MMD) yield \emph{lower} values of the NLL, as well as \emph{lower} values for the MC. Table \ref{tab:tree_depth} shows that a single tree-based split reduces the MC by up to almost $20 \%$, and up to $37\%$ when performing $3$ partitioning steps. Additionally, the NLL improves with an increasing number of partitioning steps in most cases. The results on the synthetic functions support our second hypothesis that the tree-based partitioning improves the reliability in terms of the NLL and the confidence of the PD estimates.
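For concreteness, the synthetic objective and acquisition used in this experiment can be sketched as follows (illustrative helper names only; we use the conventional Styblinski--Tang sign convention and the minimized LCB form $\hat m - \tau \hat s$):

```python
import numpy as np

def styblinski_tang(lam):
    """d-dimensional Styblinski-Tang function on [-5, 5]^d.
    Known global minimum near lam_i ~ -2.9035 with value ~ -39.166 * d."""
    lam = np.asarray(lam, dtype=float)
    return 0.5 * np.sum(lam**4 - 16.0 * lam**2 + 5.0 * lam, axis=-1)

def lcb(mean, sd, tau):
    """LCB acquisition for a minimized objective: a small tau exploits
    (strong sampling bias), a large tau rewards uncertain regions."""
    return mean - tau * sd
```

Evaluating `styblinski_tang` on a grid of the first coordinate, with the others fixed, gives exactly the kind of PD slice the experiment compares against.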
The improvement of the MC is higher for medium to high sampling bias, compared to scenarios that are less affected by sampling bias. We observe that (particularly for high sampling bias) there are some outlier cases in which the NLL worsens. More detailed results are shown in Appendix~\ref*{sec:app_experimental_results_synth}. \subsection{HPO on Deep Learning} \label{sec:hpo_nn} In a second experiment, we investigate HPO in the context of a surrogate benchmark \citep{eggensperger2015efficient} based on the LCBench data \citep{Zimmer2020AutoPyTorchTM}. For each of the $35$ different OpenML \citep{vanschoren2014openml} classification tasks, LCBench provides access to evaluations of a deep neural network on $2000$ configurations randomly drawn from the configuration space defined by Auto-PyTorch Tabular (see Table \ref*{tab:searchspace} in Appendix \ref*{sec:app_experimental_design}). For each task, we trained a random forest as an empirical performance model that predicts the balanced validation error of the neural network for a given configuration. These empirical performance models serve as cheap-to-evaluate objective functions, which efficiently approximate the result of the real-world experiment of running a deep learning configuration on an LCBench instance. BO then acts on this empirical performance model as its objective\footnote{Please note that the random forest is only used as a surrogate in order to construct an efficient benchmark objective, and not as a surrogate in the BO algorithm, where we use a GP.}. For each task, we ran BO to obtain the optimal architecture and hyperparameter configuration. Again, we used a GP with a Mat\'{e}rn-3/2 kernel and LCB with $\tau = 1$. Each BO run was allotted a budget of $200$ objective function evaluations. We computed the PDPs and their confidences, which are estimated according to Eq. \eqref{eq:pdp_gp_var_nocov}, based on the surrogate model $\hat c$ after the final iteration.
We performed tree-based partitioning with up to $6$ splits based on a uniformly distributed dataset of size $n = 1000$. All computations were repeated $30$ times. Further details are provided in Appendix \ref*{sec:app_experimental_design_mlp}. \begin{wraptable}{R}{0.5\textwidth} \caption{Relative improvement of MC, OC, and NLL on hyperparameter level. The table shows the respective mean (standard deviation) of the average relative improvement over 30 replications for each dataset and 6 splits.} \label{tab:conf.impr.feat} \begin{scriptsize} \begin{tabular}{lrrr} \toprule Hyperparameter & $\delta$ MC (\%) & $\delta$ OC (\%) & $\delta$ NLL (\%) \\ \midrule Batch size & 40.8 (14.9) & 61.9 (13.5) & 19.8 (19.5)\\ Learning rate & 50.2 (13.7) & 57.6 (14.4) & 17.9 (20.5)\\ Max. dropout & 49.7 (15.4) & 62.4 (11.9) & 17.4 (18.2)\\ Max. units & 51.1 (15.2) & 58.6 (12.7) & 24.6 (22.0)\\ Momentum & 51.7 (14.5) & 58.3 (12.7) & 19.7 (21.7)\\ Number of layers & 30.6 (16.4) & 50.9 (16.6) & 13.8 (32.5)\\ Weight decay & 36.3 (22.6) & 61.0 (13.1) & 11.9 (19.7)\\ \bottomrule \end{tabular} \end{scriptsize} \end{wraptable} For the real-world data example, we focus on answering the second hypothesis, i.e., whether the tree-based Algorithm~\ref{alg:tree} improves the reliability of the PD estimates. We compare the PDP in sub-regions after $6$ splits with the global PDP. We computed the relative improvement of the confidence (MC and OC) and the NLL of the sub-regional PDP compared to the respective estimates for the global PDP. As shown in Table \ref{tab:conf.impr.feat}, the MC of the PDPs is on average reduced by $30\%$ to $52\%$, depending on the hyperparameter. At the optimal configuration $\hat{\lambdab}_S$, the improvement even increases to $50\% - 62\%$.
Thus, PDP estimates for all hyperparameters are on average -- independent of the underlying dataset -- clearly more confident in the relevant sub-regions when compared to the global PD estimates, especially around the optimal configuration $\hat{\lambdab}_S$. In addition to the MC, the NLL simultaneously improves. In Appendix \ref*{sec:app_experimental_results_mlp}, we provide details regarding the evaluated metrics on the level of the dataset and demonstrate that our split criterion outperforms other impurity measures regarding MC and~OC. Furthermore, we emphasize in Appendix \ref*{sec:app_experimental_results_mlp} the significance of our results by providing a comparison to a naive baseline method. \begin{figure}[bth] \centering \includegraphics[width = 0.9\linewidth]{figures/mlp_example_new.png} \caption{PDP (blue) and confidence band (grey) of the GP for hyperparameter \textit{max. number of units} (\textit{batch size}) on the left (right) side. The black line shows the PDP of the meta surrogate model representing the true PDP estimate. The orange vertical line marks the optimal configuration $\hat{\lambdab}_S$. The relative improvements from the global PDP to the sub-regional PDP after 6 splits are for \textit{max. number of units} (\textit{batch size}): $\delta$ MC = $61.6 \%$ ($28.4 \%$), $\delta$ OC = $63.5 \%$ ($62.2 \%$), $\delta$ NLL = $48.6 \%$ ($30.1 \%$).} \label{fig:mlp_example1} \end{figure} To further study our suggested method, we now highlight a few individual experiments. We chose one iteration of the \textit{shuttle} dataset. On the two left plots of Figure \ref{fig:mlp_example1}, we see that the true PDP estimate for \textit{max. number of units} is decreasing, while the globally estimated PDP trend is increasing and thus misleading. Although the confidence band already indicates that the PDP cannot be reliably interpreted on the entire hyperparameter space, it remains challenging to draw any conclusions from it. 
After performing $6$ splits, we obtain a confident and reliable PD estimate on an interpretable sub-region. The same plots are depicted for the hyperparameter \textit{batch size} on the right part of Figure \ref{fig:mlp_example1}. This example illustrates that the confidence band might not always shrink uniformly over the entire range of $\lambdab_S$ during the partitioning, but often does so particularly around the optimal configuration $\hat{\lambdab}_S$. \section{Discussion and Conclusion} \label{sec:conclusion} In this paper, we showed that partial dependence estimates for surrogate models fitted on experimental data generated by efficient hyperparameter optimization can be unreliable due to an underlying sampling bias. We extended PDPs by an uncertainty estimate to provide users with more information regarding the reliability of the mean estimator. Furthermore, we introduced a tree-based partitioning approach for PDPs, where we leverage the uncertainty estimator to decompose the hyperparameter space into interpretable, disjoint sub-regions. We showed with two experimental studies that we generate, on average, more confident and more reliable regional PDP estimates in the sub-region containing the optimal configuration compared to the global PDP. One of the main limitations of PDPs is that they bear the risk of providing misleading results if applied to correlated data in the presence of interactions, especially for nonparametric models \citep{groemping:2020}. However, existing alternatives that visualize the global marginal effect of a feature, such as accumulated local effect (ALE) plots \citep{apley2020visualizing}, also do not provide a fully satisfying solution to this problem \citep{groemping:2020}. To address this, \cite{groemping:2020} suggests stratified PDPs by conditioning on a correlated and potentially interacting feature to group ICE curves. This idea is in the spirit of our introduced tree-based partitioning algorithm.
However, in the context of BO we might assume the distribution in Eq.~\eqref{eq:pdp} to be uniform, and therefore no correlations are present. Instead of correlated features, we are faced with a sampling bias (see Section~\ref{sec:bias}) where we observe regions of varying uncertainty. Hence, instead of stratifying with respect to correlated features and aggregating ICE curves in regions with less correlated features, we stratify with respect to uncertainty and aggregate ICE curves in regions with low uncertainty variation. Nonetheless, it might be interesting to compare our approach with approaches based on the considerations made by \cite{groemping:2020} -- or potentially improved ALE curves. Another limitation when using single-feature PDPs as in our examples is that hyperparameter interactions are not visible. However, two-way interactions can be visualized by plotting two-dimensional PDPs within sub-regions. Another possibility to detect interactions is to look at ICE curves within the sub-regions. If the shape of ICE curves within a sub-region is very heterogeneous, it indicates that the hyperparameter under consideration interacts with one of the other hyperparameters. Hence, having the additional possibility to look at ICE curves of individual observations within a sub-region is an advantage compared to other global feature effect plots such as ALE plots \citep{apley2020visualizing}, as the latter are not defined on an observational level. While we mainly discussed GP surrogate models on a numerical hyperparameter space in our examples, our methods are applicable to a wide variety of distributional regression models and also to mixed and hierarchical hyperparameter spaces. We also considered in Appendix~\ref*{sec:app_experimental_results_mlp} different impurity measures. While the one introduced in this paper performed best in our experimental settings, this impurity measure as well as other components are exchangeable within the proposed algorithm.
In the future, we will study our method on more complex, hierarchical configuration spaces for neural architecture search. The proposed interpretation method is based on a surrogate and consequently provides insights into what the AutoML system has \emph{learned}, which in turn allows plausibility checks and may increase trust in the system. To what extent this allows conclusions on the \emph{true} underlying hyperparameter effects depends on the quality of the surrogate. How to efficiently perform model diagnostics to ensure a high surrogate quality before applying interpretability techniques is subject to future research. While we focused on providing better explanations without generating any additional experimental data, it might be interesting to investigate in future work how the confidence and reliability of IML methods can be increased most efficiently when a user is allowed to conduct additional experiments. Overall, we believe that increasing the interpretability of AutoML will pave the way for human-centered AutoML. Our vision is that users will be able to better understand the reasoning and the sampling process of AutoML systems and thus can either trust and accept the results of the AutoML system or interact with it in a feedback loop based on the gained insights and their preferences. How users can then best interact with AutoML (beyond simple changes of the configuration space) will be left open for future research. \begin{ack} This work has been partially supported by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content. \end{ack} \newpage \small
\section{Introduction} The evolution of self-gravitating systems and plasmas is mainly dominated by collective mean-field processes due to their large number of particles $N$ and the long-range nature of the $1/r^2$ gravitational or Coulomb force. The typical dynamical scale $t_{\rm dyn}$ of such systems is of the order of $1/\sqrt{G\bar\rho}$, where $G$ is the gravitational constant and $\bar\rho$ is some mean mass density, for the self-gravitating systems, and proportional to the inverse of the plasma frequency $\omega_P=\sqrt{4\pi n_e e^2/m_e}$, where $n_e$, $e$ and $m_e$ are the number density, charge and mass of the electrons, for plasmas.\\ \indent Collisional processes typically contribute to the dynamics on a time scale that is a function\footnote{Note that $N$ corresponds to the total number of particles of the system in the gravitational case, while it is the average number of particles, ions or electrons, within a Debye length for plasmas.} of $N$ as $t_{c}\approx t_{\rm dyn}N/\ln{N}$, which in the case of gravitating systems may exceed the age of the Universe several times over (see \cite{bt08}), while in plasma physics it is always regulated by temperature and mean free path. For these reasons, the numerical modelling of systems governed by $1/r^2$ forces is usually carried out by means of {\it collisionless} approaches such as the widely used particle-mesh or particle-in-cell (PIC) schemes (see e.g. \cite{he88,dr11}). However, there are several examples of systems that are (at least partially) dynamically regulated by collisions. In particular, dense stellar systems such as globular clusters or galactic cores can be largely in collisional regimes, while their modelling in terms of ``honest'' direct $N-$body simulations with a one-to-one correspondence between stars and simulation particles still remains challenging due to the (relatively) large size $N$ of the order of $10^6$.
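To make this scale separation concrete, a back-of-the-envelope sketch of the two-body relaxation estimate $t_c/t_{\rm dyn}\sim N/\ln N$ (purely illustrative numbers; the $\sim 1$ Myr dynamical time is an assumed, typical globular-cluster value, and the scaling holds up to an order-unity prefactor):

```python
import math

def relaxation_to_dynamical(N):
    """Two-body relaxation time in units of the dynamical time,
    t_c / t_dyn ~ N / ln N (up to an order-unity prefactor)."""
    return N / math.log(N)

# Globular-cluster-like system (illustrative numbers only):
N = 1_000_000
ratio = relaxation_to_dynamical(N)   # ~7.2e4 dynamical times
t_dyn_gyr = 1.0e-3                   # assumed t_dyn ~ 1 Myr
t_c_gyr = ratio * t_dyn_gyr          # ~72 Gyr: several Hubble times
```

This is why a million-particle cluster is neither fully collisionless nor comfortably collisional over a Hubble time, motivating the hybrid schemes discussed below.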
In plasma physics, collisional systems can be found in trapped one-component ion or electron plasmas (see \cite{dub99}) or ultracold neutral plasmas (see \cite{pohl05}), while a transition between collisional and collisionless plasma regimes is expected in the scrape-off layer of tokamaks (see \cite{funda05}).\\ \indent Numerical simulations of collisional stellar systems usually employ the Monte Carlo technique or hybrid particle-mesh direct $N-$body schemes. Conversely, PIC plasma codes based on the solution of the Vlasov-Maxwell equations on some reduced geometry rely on analytic or semi-analytic methods to reconstruct the collision integral in the kinetic picture.\\ \indent In a series of papers, we have implemented a novel approach to the simulation of both gravitational and Coulomb collisional systems based on the so-called multiparticle collision operator (hereafter MPC), originally developed by \cite{kapral99} in the context of mesoscopic fluid dynamics. Here we give a brief introduction to the method and highlight the major results obtained so far. \section{Overview of the method} The MPC scheme in a three-dimensional simulation domain containing $N$ particles partitioned in $N_c$ cells amounts to a cell-dependent rotation by an angle $\alpha_i$ of the particles' relative velocity vectors $\delta\mathbf{v}_j=\mathbf{v}_j-\mathbf{u}_{\rm{com},i}$ in the centre-of-mass frame of each cell $i$, so that \begin{equation}\label{rotation} \mathbf{v}_{j}^\prime=\mathbf{u}_{\rm{com},i}+\delta\mathbf{v}_{j,\perp}{\rm cos}(\alpha_i)+(\delta\mathbf{v}_{j,\perp}\times\mathbf{R}_i){\rm sin}(\alpha_i)+\delta\mathbf{v}_{j,\parallel}.
\end{equation} In the formulae above $\mathbf{R}_i$ is a random axis chosen for the given cell, $\delta\mathbf{v}_{j,\perp}$ and $\delta\mathbf{v}_{j,\parallel}$ are the relative velocity components perpendicular and parallel to $\mathbf{R}_i$, respectively, and \begin{equation} \mathbf{u}_{\rm{com},i}=\frac{1}{m_{{\rm tot},i}}\sum_{j=1}^{n_i}m_j\mathbf{v}_j;\quad m_{{\rm tot},i}=\sum_{j=1}^{n_i}m_j. \end{equation} For each cell, if the rotation angle $\alpha_i$ is chosen randomly by sampling a uniform distribution, after the vectors $\delta\mathbf{v}_j$ are rotated back around $\mathbf{R}_i$ by $-\alpha_i$, the linear momentum and the kinetic energy are conserved {\it exactly} in each cell, from which follows the conservation of the {\it total} parent quantities (see e.g. \cite{dicintio17}), at variance with other previously implemented collision schemes, such as the \cite{nambu83} method, which imposes the conservation of kinetic energy only while breaking the conservation of linear momentum (which is preserved only on time average).\\ \indent By introducing an additional constraint on the rotation angles $\alpha_i$, one can add the conservation of a component of the total angular momentum of the cell $\mathbf{L}_i$. In this case $\alpha_i$ is given by \begin{equation}\label{sincos} {\rm sin}(\alpha_i)=-\frac{2a_ib_i}{a_i^2+b_i^2};\quad {\rm cos}(\alpha_i)=\frac{a_i^2-b_i^2}{a_i^2+b_i^2}, \end{equation} where \begin{equation}\label{ab} a_i=\sum_{j=1}^{N_i}\left[\mathbf{r}_j\times(\mathbf{v}_j-\mathbf{u}_i)\right]|_z;\quad b_i=\sum_{j=1}^{N_i}\mathbf{r}_j\cdot(\mathbf{v}_j-\mathbf{u}_i).
\end{equation} In the expression above $\mathbf{r}_j$ are the particles' position vectors, and with $[\mathbf{x}]|_z$ we denote (without loss of generality) the component of the vector $\mathbf{x}$ parallel to the $z$ axis of the simulation's coordinate system, which implies that in our case the $z$ component of the cell angular momentum is conserved.\\ \indent The exact conservation of the total angular momentum inside the cell can be enforced by choosing $\mathbf{R}_i$ parallel to the direction of the cell's angular momentum vector $\mathbf{L}_i$ and using the definition of $a_i$ accordingly.\\ \indent In our implementation of the MPC method, the collision step is always conditioned on a cell-dependent probability accounting for ``how much the system is collisional'' locally. The latter is defined as \begin{equation}\label{cumulative} p_i={\rm Erf}\left(\beta\Delta t \nu_c\right), \end{equation} where $\Delta t$ is the simulation timestep, $\nu_c$ is the collision frequency, $\beta$ is a dimensionless constant of the order of twice the number of the simulation cells, and ${\rm Erf}(x)$ is the standard error function. In Equation (\ref{cumulative}) the collision frequency is given by \begin{equation} \nu_c=\frac{8\pi G^2\bar{m}^2_i\bar n\log\Lambda}{\sigma^3_i} \end{equation} for the gravitational case, where $\bar{n}$ is the mean stellar number density, and $\bar{m}_i$ and $\sigma_i$ are the average particle mass and the velocity dispersion in the cell, respectively, and by \begin{equation} \nu_c=\frac{4\pi e^4n_e\log\Lambda}{m_e^{1/2}T^{3/2}} \end{equation} for the plasma case, where $T$ is the electron temperature. In both cases the Coulomb logarithm $\log\Lambda$ of the ratio of the maximum to minimum impact parameter is usually taken to be of order 10 for the systems of interest.
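A minimal sketch of the collision step of Eq. (\ref{rotation}) for a single cell (equal particle masses and a random axis and angle are simplifying assumptions made here for illustration), verifying the exact conservation of momentum and kinetic energy:

```python
import numpy as np

def mpc_rotate(v, alpha, R):
    """Rotate the relative velocities of one cell by alpha about unit axis R,
    in the cell's centre-of-mass frame (equal particle masses assumed)."""
    u = v.mean(axis=0)            # centre-of-mass velocity of the cell
    dv = v - u                    # relative velocities
    dv_par = np.outer(dv @ R, R)  # components parallel to R
    dv_perp = dv - dv_par
    # rotation step: v' = u + dv_perp cos(a) + (dv_perp x R) sin(a) + dv_par
    return (u + dv_perp * np.cos(alpha)
              + np.cross(dv_perp, R) * np.sin(alpha) + dv_par)

rng = np.random.default_rng(1)
v = rng.normal(size=(32, 3))      # velocities of the particles in one cell
R = rng.normal(size=3)
R /= np.linalg.norm(R)            # random unit rotation axis
v_new = mpc_rotate(v, rng.uniform(0.0, 2.0 * np.pi), R)
```

Since the transformation is a rigid rotation of the relative velocities about `R`, both the total momentum `v.sum(axis=0)` and the total kinetic energy are unchanged to machine precision, for any choice of axis and angle.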
Alternatively, the Coulomb logarithm can also be evaluated cell-dependently.\\ \indent Once Equation (\ref{cumulative}) is evaluated in each cell, a random number $p_{*i}$ is sampled from a uniform distribution in the interval $[0,1]$ and the multi-particle collision is applied in all cells for which $p_{*i}\leq p_i$.\\ \indent Between two applications of the MPC operator, the particles are propagated under the effect of the self-consistent gravitational or electromagnetic fields, obtained by solving the Poisson or Maxwell equations on a grid with standard finite element schemes. \section{Results and discussion} \begin{figure} \includegraphics[width=0.9\textwidth]{mpcdss.pdf} \caption{a) Number of escapers as a function of time for a cluster with $N=32000$ particles propagated with {\sc mpcdss} (solid line) and {\sc nbody6} (squares). b) Evolution of the central density for a core collapsing cluster evolved with {\sc mpcdss} (filled symbols) and {\sc nbody6} (empty symbols).} \label{figmpcdss} \end{figure} \subsection{Evolution of Globular clusters} \cite{dicintio21} and \cite{dicintio22a} performed a wide range of hybrid MPC-particle-mesh simulations of globular clusters with the {\sc mpcdss} code, investigating the effect of a mass spectrum on the dynamics of core collapse, with particle numbers up to $N=10^6$ and different radial anisotropy profiles, while adopting the widely used Plummer density distribution. Low resolution simulations with $N=32000$ have been compared to their counterparts performed with the direct $N-$body code {\sc nbody6} (see \cite{aarseth}) using the exact same initial conditions. Overall we observed a good agreement between the two approaches in the low $N$ limit in the evolution of indicators such as the fraction of escapers (i.e. particles with {\it positive} total energy being outside a given cut-off radius) shown in panel a) of Fig.
\ref{figmpcdss} or the mean density $\rho_0$ evaluated within a fixed radius of the order of one tenth of the initial scale radius, see panel b) of the same figure, with the MPC simulations being a factor of 5 faster than direct $N-$body. In models with a mass spectrum we confirm the theoretical self-similar contraction picture, but with a dependence on the slope of the mass function. Moreover, the time of core collapse shows a non-monotonic dependence on the slope, which holds also for the depth of core collapse and for the dynamical friction timescale of heavy particles. Cluster density profiles at core collapse show a broken power law structure, suggesting that central cusps are a genuine feature of collapsed cores.\\ \indent In addition we also investigated the dynamics of a central intermediate-mass black hole (IMBH), finding that the latter, independently of the structure of the mass spectrum and the anisotropy profiles, accelerates the core collapse while making it shallower. In general we also observe that the presence of a mass spectrum results in a much larger wander radius of the IMBH for fixed total stellar mass and different masses of the IMBH itself.\\ \indent The results on the structure and evolution of velocity dispersion and anisotropy profiles of clusters with central IMBHs are to be published elsewhere (\cite{dicintio22b}). \begin{figure} \includegraphics[width=0.85\textwidth]{tropico.pdf} \caption{a) Electron velocity distribution along the direction of the electric field. b) Electron velocity distribution in the transverse direction. c) Electron kinetic energy distribution. d) Ion kinetic energy distribution.
All distributions are evaluated at $t\sim 700/\omega_P$ for different values of $T_0$ (indicated in figure) and $E_z=0.1$ in computer units.} \label{figtropico} \end{figure} \subsection{Weakly collisional plasmas} Most of the MPC simulations performed with the {\sc tropic$^3$o} code (see \cite{dicintio15}, \cite{dicintio17}) were devoted to the investigation of the transition between different regimes of anomalous energy transport in low-dimensional (1D and 2D, see \cite{lepri19}) toy set-ups, aiming at shedding some light on the heat flux profile structure along magnetic field lines crossing strong temperature gradients in weakly collisional plasmas, relevant for magnetic fusion devices. \cite{ciraolo18} found a surprisingly good agreement between the electron temperature profiles between a hot source and a cold interface obtained by MPC simulations and the corresponding profiles evaluated by 1D fluid codes with a semi-analytical collisional closure of the fluid equations and a non-local definition of the heat flux accounting for suprathermal particles.\\ \indent \cite{lepri21} further investigated the nonequilibrium steady states of 1D models of finite length $L$ in contact with two thermal-wall heat reservoirs, finding a clear crossover from a kinetic transport regime to an anomalous (hydrodynamic) one over a characteristic scale proportional to the cube of the collision time among particles. In addition, test simulations of models with thermal walls injecting particles with a given nonthermal velocity
showed that for fast and relatively cold particles, smaller systems never establish local equilibrium, keeping non-Maxwellian velocity distributions.\\ \indent \cite{dicintio18} used MPC simulations to study the nonthermal profiles of the weakly ionized medium in filamentary structures in molecular clouds, finding that strong collisions in dense regions enforce the production of high-velocity tails in the particle distribution, able to climb the gravitational potential well of the filament, thus forming hot and diffuse external envelopes without additional heat sources from the intergalactic medium or star formation feedback. Density and temperature profiles obtained in these numerical experiments qualitatively match the observed transverse profiles of both gas density and temperature.\\ \indent The possibility of incorporating an energy- and momentum-preserving particle-based collisional operator in plasma codes opens up the possibility of studying problems involving the presence of different species or complex nonthermal processes without making strong assumptions on a (possibly unknown) phase-space distribution function. In particular, we aim at studying the mechanism of electron runaway in the presence of net electrostatic fields and/or violations of plasma quasineutrality. We performed some preliminary test simulations in a 3D periodic geometry aiming at observing the formation of a suprathermal tail in the velocity distribution when applying a constant electric field $\mathbf{E}$ along one of the three spatial coordinates, for different values of the initial electron temperature $T_0$ at fixed density $n_e$ and assuming that electrons and ions are initially at equilibrium. In Fig. \ref{figtropico} we show the electron velocity distributions along the direction of $\mathbf{E}$ ($z$, without loss of generality) and one of the transverse directions, $f(v_z)$ and $f(v_x)$, in panels a) and b).
It appears clearly that, at fixed density and therefore fixed mean free path, systems with lower initial electron temperature are less and less prone to develop nonthermal tails along the direction of the fixed external field, as particles accelerated by such field suffer a stronger dynamical friction effect due to the lower velocity dispersion, as theorized by \cite{dreicer59}. Further increasing $T_0$ produces a more and more deformed $f(v_z)$, while the velocity distribution in the transverse directions $f(v_x)$ regains a thermal structure when $T_0$ is larger than some critical value, for which the hotter systems are basically collisionless and the different degrees of freedom are decoupled. This is also evident from the full differential kinetic energy distribution $n(K)$ (see panel c, same figure), where the ``mono-chromatic'' peak at $K\approx1.5\times 10^2$ in computer units appears only for systems starting with somewhat intermediate values of $T_0$. Being heavier (in this case with a mass ratio of $2\times 10^3$), and thus having lower mobility, ions do not bear any peculiar structure in their kinetic energy distributions even at late times of the order of thousands of plasma oscillations, such that their $n(K)$ curves corresponding to different $T_0$ can be collapsed onto one another (panel d).
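The energy- and momentum-conserving MPC collision operator underlying such simulations can be sketched in a few lines of Python. The following is a minimal stochastic-rotation (SRD-type) step and not the actual {\sc tropic$^3$o} implementation; the cubic cell grid, the fixed rotation angle $\alpha$ and the random grid shift are standard but illustrative choices.

```python
import numpy as np

def mpc_collision_step(pos, vel, box, cell=1.0, alpha=np.pi/2, rng=None):
    """One stochastic-rotation MPC collision step (illustrative sketch).

    Particles are sorted into cubic cells of side `cell`; in each cell the
    velocities relative to the cell mean are rotated by `alpha` about a
    random axis.  The step conserves momentum and kinetic energy exactly,
    cell by cell."""
    rng = np.random.default_rng() if rng is None else rng
    nc = int(round(box / cell))
    # random grid shift (restores Galilean invariance in standard SRD)
    shift = rng.uniform(0.0, cell, size=3)
    idx = np.floor(((pos + shift) % box) / cell).astype(int) % nc
    keys = np.ravel_multi_index(idx.T, (nc, nc, nc))
    for k in np.unique(keys):
        members = np.flatnonzero(keys == k)
        if members.size < 2:
            continue
        u = vel[members].mean(axis=0)      # cell centre-of-mass velocity
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        dv = vel[members] - u
        # Rodrigues rotation of the relative velocities about `axis`
        vel[members] = (u + dv * np.cos(alpha)
                        + np.cross(axis, dv) * np.sin(alpha)
                        + axis * (dv @ axis)[:, None] * (1.0 - np.cos(alpha)))
    return vel
```

Interleaved with free streaming (and, in the plasma context, with the electrostatic acceleration), this step relaxes the velocity distribution towards a Maxwellian while preserving the collisional invariants cell by cell, which is what allows the transport and runaway experiments described above without assuming a closure.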
\section{Introduction} The AdS/CFT correspondence \cite{Maldacena:1997re} provides a very elegant reformulation of certain questions concerning specific strongly coupled CFTs in terms of classical geometric calculations in a dual gravity theory. Such methods are particularly powerful for questions related to CFTs deformed by sources in a manner that depends on spacetime, where first-principles computations in such CFTs are very challenging. An example of this is considering the CFT on curved spacetime as reviewed in \cite{Marolf:2013ioa}. Here we examine a basic property of quantum field theories on curved static spacetimes with globally timelike Killing vector, namely their vacuum energy. This was discussed in \cite{Horowitz:1998ha,Myers:1999psa} in the context of AdS/CFT where it was pointed out that the bulk stress tensor determined by holographic renormalisation \cite{Henningson:1998gx,Balasubramanian:1999re,deHaro:2000xn,Skenderis:2002wp}, which as usual is chosen to vanish on flat space, then gives the vacuum or Casimir energy when the boundary space is non-trivial and the bulk geometry is that corresponding to the vacuum state. In this work we consider $(2+1)$-dimensional holographic CFTs on such spacetimes which, due to the absence of a conformal anomaly, may be chosen without loss of generality to have ultrastatic form. We take the spatial geometry, $\Sigma$, to be closed so that the total vacuum energy is finite, and, being in odd dimensions, is scheme independent in the holographic renormalisation. This energy will be a functional of the 2-dimensional spatial geometry $\Sigma$ which, other than being closed, may have general topology and metric.
The purpose of this paper is to show that under reasonable assumptions on the nature of the bulk geometry dual to the vacuum state, simple geometric considerations lead to the conclusion that the CFT vacuum energy is non-positive for any closed space $\Sigma$, and negative unless $\Sigma$ has constant curvature (so is locally a sphere, torus or hyperbolic space). To put this result in context one can consider whether such a result applies to the vacuum energy in other theories. For $(1+1)$-dimensional CFTs a closed spatial geometry $\Sigma$ is simply a circle and so there is no interesting local geometry. Then a classic computation yields that the Casimir energy is a function of the circle size and the central charge. Likewise the Casimir energy may be simply computed for free theories too. However for more spatial dimensions, such as the $(2+1)$-dimensional case we consider here, then $\Sigma$ may have complicated local geometry that the vacuum energy depends on. One might imagine that free field theories would allow computations to be performed. For example one could consider a free scalar field (massless, massive or conformally coupled), on such an ultrastatic spacetime. The vacuum energy is then related to the functional determinant of an elliptic operator on the space $\Sigma$ \cite{Birrell:1982ix,Fulling:1989nb}. However such determinants are very subtle as they are naively divergent and must be regulated. Then even computing the vacuum energy for highly symmetric spaces is a very non-trivial task, albeit a well defined one in odd dimensions where there is no scheme dependence \cite{Candelas:1983ae}. With the simplest cases being so challenging it is not surprising that, to our knowledge, results such as bounds on the vacuum energy for general $\Sigma$ do not exist even for free matter above $(1+1)$-dimensions. 
Thus holography provides a very powerful tool for calculation, allowing elementary methods to give global results on the vacuum energy for strongly coupled CFTs as a functional of the space $\Sigma$. One assumption we make is that the vacuum state is described by a static bulk geometry which is smooth when heated up to any finite temperature. We do not require the vacuum dual geometry to be smooth at zero temperature, and note that in important canonical examples it is not. While there is some mathematical control over the existence of infilling bulk geometries for given boundary spaces (for example \cite{GrahamLee,Anderson:2002xb}) this is not generally a solved problem, and particularly given that at zero temperature the bulk may be singular, is presumably very hard to understand generally. Thus for us it will remain an assumption. Technically we will prove an inequality involving the free energy at finite temperature where we assume the bulk is smooth, with any zero temperature singularities being `good' and hence shrouded by horizons \cite{Gubser:2000nd}. Then we take the zero temperature limit of this to derive that the vacuum energy is non-positive. In fact such a bound on free energy was obtained for holographic CFTs in the special case where $\Sigma$ has constant scalar curvature (so a round sphere, flat torus or quotients thereof, or compact hyperbolic spaces) in \cite{Galloway:2015ora} using similar methods but the mass definition of \cite{WangMass} (see also the related earlier works \cite{Boucher:1983cv,Chrusciel:2000az} in the same constant scalar curvature setting but without horizons). Consequently this does not allow study of the Casimir energy when the boundary metric is deformed, but instead gives a general statement about the properties of thermal states for the CFT on these specific spaces. In this work we use the holographic stress tensor to compute the vacuum energy, allowing us to consider any boundary space $\Sigma$.
In fact while the thermal inequality in \cite{Galloway:2015ora} applies in general dimension, interestingly it is only in the case of $(2+1)$ boundary dimensions that it can be extended to general spaces $\Sigma$ and hence used to derive a global statement on the vacuum energy as a functional of $\Sigma$. The structure of the paper is as follows. In section~\ref{sec:holo} we will review the geometric dual description of a holographic CFT, discuss the extraction of the boundary stress tensor, and outline the assumptions we make of the bulk geometry. Then in section~\ref{sec:bulk} we consider the bulk equations and introduce the key geometric tool we will use to control the vacuum energy, namely that the bulk optical Ricci scalar is a super-harmonic function, allowing an inequality relating boundary terms at the conformal boundary and any bulk horizons. In the following sections~\ref{sec:boundary} and~\ref{sec:horizon} we compute these boundary terms, and collecting these in section~\ref{sec:result} we arrive at a thermodynamic inequality involving the free energy. Taking the zero temperature limit then yields our claimed result that the zero temperature vacuum energy is non-positive. We conclude with a brief discussion in section~\ref{sec:discussion}. \\ \noindent \emph{Note added:} After completion of this work an alternate derivation of our result was pointed out to us by Juan Maldacena. This uses an elegant geometric bound related to the Gauss-Bonnet theorem \cite{Anderson}. For the interest of the reader we have added section~\ref{sec:Anderson} detailing this alternate approach.
\section{The dual bulk geometry \label{sec:holo}} We restrict our attention to the universal gravity sector of the AdS/CFT correspondence (see for example the review \cite{Marolf:2013ioa}), so that the zero temperature vacuum and thermal states of the $(2+1)$-dimensional CFT in the absence of sources are described by solutions of the $(3+1)$-dimensional pure gravity Einstein equations with negative cosmological term. Let the bulk metric be $g^{(4)}$, then it must satisfy the Einstein condition, \begin{eqnarray} R^{(4)}_{\mu\nu} = - \frac{3}{\ell^2} g^{(4)}_{\mu\nu} \end{eqnarray} where $\ell$ determines the AdS length, and is related to $c$, the CFT `effective central charge', as $c = \ell^{2} / 16 \pi G_{(4)}$, where $G_{(4)}$ is the bulk Newton constant. For the CFT to be well described by a semi-classical gravity dual we require $c \gg 1$. The AdS/CFT correspondence dictates that the metric $g^{(4)}$ has a conformal boundary whose geometry gives the conformal class of the spacetime the CFT lives on. Let us denote the metric of the CFT's spacetime as $g_{CFT}$. The asymptotic approach to this conformal boundary then determines the expectation value of the renormalised CFT stress tensor, $T_{CFT}$, and hence the energy of the state that is described by this bulk geometry. Consider the CFT on a general static closed spacetime, \begin{eqnarray} g_{CFT} = - N(x) dt^2 + \bar{g}_{ab}(x) dx^a dx^b \end{eqnarray} where $(\Sigma, \bar{g})$ is a smooth closed $2$-dimensional Riemannian manifold with local coordinates $x^a$, and $\partial / \partial t$ is a globally timelike Killing vector, so that $N$ is a positive function over $\Sigma$. Since the Killing vector is globally timelike, and so $N > 0$, we can move to an ultrastatic conformal frame. There is no conformal anomaly in $(2+1)$-dimensions, so the stress tensor transforms simply under the required conformal transformation. 
We take a new frame, $g'_{CFT} = \Omega^2 g_{CFT}$, where $\Omega^2 = 1/N$, and then the metric in this frame is ultrastatic, \begin{eqnarray} g'_{CFT} = - dt^2 + \bar{g}'_{ab}(x) dx^a dx^b \end{eqnarray} with $\bar{g}'_{ab} = \bar{g}_{ab} / N$, and the stress tensor in the new frame is, \begin{eqnarray} T'_{CFT} = \Omega^{-1} T_{CFT} = \sqrt{N} T_{CFT} \; . \end{eqnarray} In particular we will be interested in the CFT energy, $E$, defined with respect to the time translation Killing vector $v = \frac{\partial}{\partial t}$. This energy is given as, \begin{eqnarray} E = \int_{\Sigma} \sqrt{\bar{g}} T^{CFT}_{AB} n^A v^B = \int_{\Sigma} \sqrt{\bar{g}} \frac{1}{\sqrt{N}} T^{CFT}_{tt} \end{eqnarray} where $x^A = (t, x^a)$ and $n$ is the unit normal vector to a constant $t$ hypersurface, so $n = \frac{1}{\sqrt{N}} \frac{\partial}{\partial t}$. In the ultrastatic frame we have $n' = \frac{\partial}{\partial t}$, $\sqrt{\bar{g}'} = \frac{1}{N} \sqrt{\bar{g}} $ and $T'^{CFT}_{tt} = \sqrt{N} T^{CFT}_{tt}$, so that \begin{eqnarray} E' = \int_{\Sigma} \sqrt{\bar{g}'} T'^{CFT}_{AB} n'^A v^B = E \; . \end{eqnarray} Thus energy with respect to $\partial / \partial t$ is invariant under the conformal transformation. Since this is the key quantity of interest we see we may work in the ultrastatic frame without loss of generality. Hence from now on we shall only consider the ultrastatic case with $N = 1$, and so the energy $E[ \Sigma ]$ is a functional of the 2-dimensional closed spatial geometry $\Sigma$. As we are interested in the zero temperature vacuum state and thermal states, and the CFT spacetime is static, we assume the dual bulk spacetime is also static. Note that while the Killing vector $\partial/\partial t$ of the CFT metric is globally timelike, we do not assume this for the bulk. Instead we make the following assumption. 
\\ \noindent \fbox{ \parbox{\columnwidth}{ \emph{Assumption 1:} At finite CFT temperature, $T$, a thermal state is described by a dual static bulk solution that is smooth away from the conformal boundary (with boundary metric $g_{CFT}$), and ends only on this or on smooth Killing horizons (whose Hawking temperatures are $T$ with respect to $\partial / \partial t$). } } \\ We are assuming all horizons are Killing horizons with respect to $\partial / \partial t$ and in equilibrium with each other. Their surface gravity, $\kappa$, measured with respect to the bulk Killing field $K = \partial / \partial t$ that generates these Killing horizons is given by $\kappa^2 = - \frac{1}{2} \left( \nabla^{(4)}_\mu K_\nu \right)^2$ evaluated at the horizon. With respect to $K$ they have Hawking temperature $T_{Hawking} = \kappa /(2 \pi)$, and since we have chosen a conformal frame for $g_{CFT}$ such that $K$ restricted to the conformal boundary generates the CFT time translation then the CFT temperature is $T = T_{Hawking}$. The zero temperature vacuum (ie. Casimir) energy is the quantity we are ultimately interested in. One might expect that given the above assumption, we would further assume that if the bulk spacetime at zero temperature has any other ends than the conformal boundary, these should be smooth extremal (ie. zero temperature) horizons. However this would be too restrictive and rules out consideration of important physical examples. Let us now consider such an example, namely the dual to the CFT on a flat spatial torus. Taking the CFT to be at temperature $T$ on a boundary metric which is the Minkowski spacetime, the dual bulk is of the form, \begin{eqnarray} ds^2_{(4)} = \frac{\ell^2}{z^2} \left( - f dt^2 + \delta_{ab} dx^a dx^b + \frac{1}{f} dz^2 \right) . \end{eqnarray} At zero temperature this is Poincare-AdS with $f = 1$, and ends in an extremal horizon as $|x^a|, z \to \infty$ \cite{Kunduri:2013gce}. 
At finite temperature it is planar AdS-Schwarzschild with $f = 1 - \left(\frac{z}{z_0}\right)^3$ ending on a regular horizon at $z = z_0$, corresponding to CFT temperature $T = 3 / ( 4 \pi z_0 )$. However the spatial section of the Minkowski spacetime is $\mathbb{R}^2$ and is of course not closed. Since we are interested in closed $\Sigma$ we may rectify this by the identification of Minkowski to the product of time with a 2-torus, ie. making the $x^a$ coordinates above periodic, so $g_{CFT} = -dt^2 + g_{T^2}$ for $g_{T^2}$ the metric on the torus. While this identification on the planar AdS-Schwarzschild solution simply compactifies the geometry of the horizon at $z = z_0$ to be toroidal, at zero temperature it has a more dramatic action, destroying the smooth extremal horizon and replacing it with a null singularity. Note that if we consider periodic boundary conditions for fermions about the $x^a$ circles then this is the relevant dual to describe the vacuum (the non-singular AdS-soliton is not a possible bulk geometry since it is incompatible with the fermion spin structure \cite{Horowitz:1998ha}). This bulk null singularity is a `good' singularity in the sense of \cite{Gubser:2000nd} since at any small finite temperature compared to the torus periods it is shrouded behind the smooth planar AdS-Schwarzschild horizon. Nonetheless we see that this canonical example of the CFT on the product of time with a 2-torus does not have a smooth bulk at zero temperature. Indeed recently it has been argued that extremal horizons are actually non-generic as ends to zero temperature bulk duals, and in fact null singularities might generally be expected \cite{Hickling:2014dra,Hickling:2015ooa}. Fortunately the result we will derive concerning the zero temperature behaviour does not require such a strong assumption as the bulk ending only on the conformal boundary or smooth extremal horizons. 
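Before moving on, the CFT temperature $T = 3/(4\pi z_0)$ quoted above for the planar AdS-Schwarzschild solution can be cross-checked symbolically: near $z = z_0$ the overall conformal factor $\ell^2/z^2$ is smooth and drops out, and regularity of the Euclidean section of $-f\, dt^2 + dz^2/f$ fixes the period of imaginary time. The following sympy sketch is an illustration, not part of the derivation:

```python
import sympy as sp

z, z0 = sp.symbols('z z_0', positive=True)
f = 1 - (z / z0)**3   # planar AdS-Schwarzschild blackening factor

# Absence of a conical singularity in the Euclidean section near z = z0
# fixes the imaginary-time period to 4*pi/|f'(z0)|, i.e. T = |f'(z0)|/(4*pi)
T = sp.Abs(sp.diff(f, z).subs(z, z0)) / (4 * sp.pi)
print(sp.simplify(T))   # 3/(4*pi*z_0)
```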
In fact our result requires only that the zero temperature bulk solution arises as the limit of finite temperature solutions of our assumed form above as $T \to 0$. The condition on these is not that the bulk geometry is smooth in this limit, but rather that the total energy and entropy are finite and continuous in this limit. Thus we make the following assumption. \\ \noindent \fbox{ \parbox{\columnwidth}{ \emph{Assumption 2:} At zero temperature the static bulk solution dual to the CFT vacuum is the limit of finite temperature solutions satisfying Assumption 1 above. The energy and entropy are well behaved in the $T \to 0$ limit. } } \\ Thus we allow singularities in the gravity dual to the zero temperature vacuum state, but only ones that are {`good'} in the sense of \cite{Gubser:2000nd}. As emphasised above this allows us to consider situations such as the canonical example of $g_{CFT} = -dt^2 + g_{T^2}$ (with periodic fermion boundary conditions) where the dual vacuum geometry is singular. Physically if Assumption 2 did not hold one would be concerned the CFT was pathological - indeed even having non-zero entropy as $T \to 0$ is rather non-generic from the perspective of quantum field theory. Thus we will always implicitly be working at finite temperature in what follows, and will talk about the zero temperature behaviour only as a limit of this. Following our previous work \cite{Hickling:2015sha} we write the static bulk spacetime in terms of a warped product of time and a Riemannian geometry $(\mathcal{M},g)$, \begin{eqnarray} \label{eq:bulkmetric} ds^2_{(4)} = \frac{\ell^2}{Z^2} \left( -dt^2 + g_{ij} dx^i dx^j \right) \end{eqnarray} where $x^i$ are local coordinates on $\mathcal{M}$, so $i=1,2,3$. This geometry $(\mathcal{M},g)$ is known as the \emph{optical geometry} \cite{OptMetric}. 
We assume that the bulk spacetime ends on the conformal boundary, with geometry $(\mathbb{R} \times \Sigma, g_{CFT})$, with $g_{CFT} = - dt^2 + \bar{g}$, and may also end on $N_H \ge 0$ bulk Killing horizon components, with spatial sections $\mathcal{H}_A$ for $A = 1, \ldots N_H$. We take $\mathcal{M}$ to have a boundary $\partial \mathcal{M} = \Sigma$, and make $Z$ the defining function for the conformal boundary, so $Z > 0$ everywhere except on the boundary $\partial M$, where $Z = 0$ with $d Z \ne 0$. Consider a component $\mathcal{H}_A$ of the horizon. For a general static Killing horizon we may write the metric locally as, \begin{eqnarray} ds^2 = - \kappa^2 r^2 f(r,y) dt^2 + dr^2 + h_{ab}(r, y) dy^a dy^b \end{eqnarray} where the constant $\kappa$ gives the surface gravity of the horizon with respect to the Killing vector $\partial/\partial t$, $r$ is a normal coordinate to the horizon which is located at $r = 0$, $y^a$ are coordinates on the spatial section of the horizon, and $f$ is a function such that $f(0,y) = 1$, and both $f(r,y)$ and $h_{ab}(r,y)$ are smooth functions of $r^2$ to ensure regularity of the Killing horizon (see for example \cite{Wiseman:2011by,Adam:2011dn}). For the dual to a thermal state, the surface gravity of such a bulk horizon is related to the CFT temperature as $T = \kappa/2 \pi$, and the CFT entropy contribution due to this horizon component is $S_{\mathcal{H}_A} = A_{\mathcal{H}_A}/4 G_{(4)}$ where $A_{\mathcal{H}_A} = \int \sqrt{h}|_{r=0}$ is the area of the horizon $\mathcal{H}_A$. The AdS/CFT dictionary then states that the total CFT entropy, $S$, is the sum of the horizon components so $S = \sum_A S_{\mathcal{H}_A}$. Written in terms of our optical metric we see that near a horizon we have, \begin{align} \label{eq:horizon1} g_{ij} dx^i dx^j &= \frac{1}{\kappa^2 r^2 f} \left( dr^2 + h_{ab}(r, y) dy^a dy^b \right) \nonumber \\ Z &= \frac{\ell}{\kappa r \sqrt{f}} \; . 
\end{align} Hence from the perspective of the optical geometry $(\mathcal{M}, g)$ we see as $r \to 0$ each horizon component is actually an asymptotic region that is in fact a conformal boundary, with conformal boundary geometry given by the geometry of the spatial section of $\mathcal{H}_A$. Of course these are conformal boundaries of $(\mathcal{M},g)$ and are not to be confused with the conformal boundary of the full spacetime which, as we have discussed, corresponds to an actual boundary of $\mathcal{M}$. In summary the assumed structure of $(\mathcal{M}, g)$ for bulk spacetimes at finite temperature is as follows. The spacetime conformal boundary where $Z = 0$ is a \emph{boundary} of $\mathcal{M}$ with $\partial \mathcal{M} = \Sigma$. The $N_H$ bulk spacetime Killing horizons where $Z \to \infty$ are \emph{conformal boundaries} of $\mathcal{M}$ with geometries given by the spatial sections of the horizons $\mathcal{H}_A$. \section{Bulk equations \label{sec:bulk}} The static Einstein equations for this bulk spacetime can be decomposed over the optical geometry as, \begin{eqnarray} \label{eq:bulkeqns} R_{ij} &=& - \frac{2}{Z} \nabla_i \partial_j Z \nonumber \\ R &=& \frac{6}{Z^2} \left( 1 - \left( \partial Z \right)^2 \right) \end{eqnarray} where indices are raised/lowered using the optical metric $g_{ij}$. Here $R_{ij}$ is the Ricci tensor of the optical metric $g_{ij}$, $R = R_i^{~i}$ is its Ricci scalar and $\nabla$ is its covariant derivative. Our results all follow from the elegant elliptic equation that the optical Ricci scalar obeys, \begin{eqnarray} \label{eq:optRicciEq} \nabla^2 R - R^2 + 3 R_{ij} R^{ij} = 0 \end{eqnarray} which may be verified straightforwardly from the Einstein equations above. 
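Since equation~\eqref{eq:optRicciEq} does all the work in what follows, it is worth checking it in a concrete example. Writing the planar AdS-Schwarzschild solution of section~\ref{sec:holo} in the optical form~\eqref{eq:bulkmetric} gives $Z = z/\sqrt{f}$ and the optical metric $g = \mathrm{diag}(1/f, 1/f, 1/f^2)$ in coordinates $(x,y,z)$, with $f = 1 - z^3$ upon setting $z_0 = 1$. The following sympy sketch verifies the identity symbolically for this metric (an illustration, not part of the proof):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
f = 1 - z**3                       # planar AdS-Schwarzschild with z0 = 1

# Optical 3-metric from ds^2_(4) = (l/Z)^2 (-dt^2 + g_ij dx^i dx^j),
# with Z = z/sqrt(f):  g = diag(1/f, 1/f, 1/f^2)
g = sp.diag(1/f, 1/f, 1/f**2)
ginv = g.inv()
n = 3

# Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2
               for d in range(n))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

def ricci(b, c):
    """Component R_{bc} of the Ricci tensor of the optical metric."""
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][a][b], coords[c])
        + sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][a][b]
              for d in range(n))
        for a in range(n)))

Ric = sp.Matrix(n, n, ricci)
Rup = ginv*Ric                            # mixed components R^i_j
R = sp.simplify(Rup.trace())              # optical Ricci scalar: -12z - 3z^4/2
RijRij = sp.simplify((Rup*Rup).trace())   # R_ij R^ij

# Laplacian of R on (M, g); sqrt(det g) = 1/f^2 for 0 < z < 1
sqrtg = 1/f**2
lapR = sum(sp.diff(sqrtg*ginv[i, j]*sp.diff(R, coords[j]), coords[i])
           for i in range(n) for j in range(n))/sqrtg

print(sp.simplify(lapR - R**2 + 3*RijRij))   # 0
```

Note also that $R$ vanishes at $z = 0$, consistent with~\eqref{eq:boundaryrelation} below for a flat boundary torus.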
Since the norm in $(\mathcal{M},g)$ of the tracefree part of the optical Ricci tensor $\tilde{R}_{ij} \equiv {R}_{ij} - \frac{1}{3} R g_{ij}$ is given as, \begin{eqnarray} | \tilde{R}_{ij} |^2 = R_{ij} R^{ij} - \frac{1}{3} R^2 \end{eqnarray} and is non-negative for a smooth Riemannian optical metric, we see that $R_{ij} R^{ij} \ge \frac{1}{3} R^2$. Hence the optical Ricci scalar is a super-harmonic function on $(\mathcal{M}, g)$, \begin{eqnarray} \nabla^2 R \le 0 \end{eqnarray} with this inequality being saturated only if $\tilde{R}_{ij}$ vanishes. This result played a key role in our previous work \cite{Hickling:2015sha}. Here we integrate it over the optical geometry, and use the divergence theorem to obtain, \begin{eqnarray} \label{eq:surfterms} \int_{\mathcal{M}} \sqrt{g} \, \nabla^2 R = \int_{\partial \mathcal{M}} dA^i \partial_i R + \sum_A \int_{\mathcal{H}_A} dA^i \partial_i R \le 0 \qquad \end{eqnarray} where $dA^i$ is the outward facing area element for a surface. Thus we find an inequality involving surface terms over the boundary of $\mathcal{M}$, corresponding to the spacetime conformal boundary, and also the asymptotic regions of $\mathcal{M}$ corresponding to the spacetime horizons. Having this inequality we must now evaluate these surface terms. \section{Conformal boundary surface term \label{sec:boundary}} Firstly we consider the surface term at $\partial \mathcal{M}$ due to the conformal boundary and relate this to physical quantities using the holographic dictionary \cite{Henningson:1998gx,Balasubramanian:1999re,deHaro:2000xn} (and reviewed in \cite{Skenderis:2002wp}). 
The Fefferman-Graham form for a conformally compact $(3+1)$-dimensional Einstein metric is, \begin{eqnarray} ds^2_{(4)} = \frac{\ell^2}{z^2} \left( dz^2 + h_{AB}(z, x) dx^A dx^B \right) \end{eqnarray} with the asymptotic behaviour near the conformal boundary $z = 0$ given by, \begin{align} h_{AB}(z, x) & = \bar{h}_{AB}(x) + \left( \bar{R}^{(h)}_{AB} - \frac{1}{4} \bar{R}^{(h)} \bar{h}_{AB} \right) z^2 \nonumber \\ & \qquad + t_{AB}(x) z^3 + O(z^4) \end{align} where this series expansion is written with indices raised and lowered with respect to the metric that provides the representative for the conformal boundary geometry, $\bar{h}_{AB}(x)$. In particular $\bar{R}^{(h)}_{AB}$ is its Ricci tensor, and $\bar{R}^{(h)}$ its scalar curvature. The data $\bar{h}_{AB}(x)$ and $t_{AB}(x)$, which is a transverse traceless tensor with respect to $\bar{h}_{AB}$, fully determine all subsequent terms in the above expansion. The AdS/CFT dictionary then requires us to take $\bar{h} = g^{CFT} = -dt^2 + \bar{g}$, and then the vacuum expectation value of the CFT stress tensor, $T_{CFT}$, is, \begin{eqnarray} \langle T^{CFT}_{AB} \rangle = 3 \, c \, t_{AB} \end{eqnarray} where $c$ is the effective central charge defined above. Hence given a bulk solution with such asymptotics, the CFT energy is then, \begin{eqnarray} E = \int_{\Sigma} \sqrt{\bar{g}} \langle T^{CFT}_{tt} \rangle = 3 \, c \, \int_{\Sigma} \sqrt{\bar{g}} \, t_{tt} \; . \end{eqnarray} It is worth emphasising that due to the absence of a conformal anomaly there is no ambiguity or scheme dependence in this stress tensor. Consider our metric in equation~\eqref{eq:bulkmetric}. 
Taking coordinates on $\mathcal{M}$ so $x^i = ( z, x^a)$ with $z$ the Fefferman-Graham coordinate above, then we see near the conformal boundary, \begin{align} Z(z,x) & = z \left( 1 - \frac{1}{8} \bar{R}(x) z^2 + \frac{1}{2} t_{tt}(x) z^3 + O(z^4) \right) \nonumber \\ g_{zz}(z,x) & = 1 - \frac{1}{4} \bar{R}(x) z^2 + t_{tt}(x) z^3+ O(z^4) \nonumber \\ g_{ab}(z,x) & = \bar{g}_{ab}(x) - \frac{1}{2} \bar{R}(x) \bar{g}_{ab}(x) z^2 \nonumber \\ & \qquad + \left( t_{ab}(x) + \bar{g}_{ab}(x) t_{tt}(x) \right) z^3 + O(z^4) \end{align} with $g_{z a} = 0$, and where now the expansions are written covariantly with respect to the CFT spatial metric $\bar{g}$, with its Ricci tensor and scalar being $\bar{R}_{ab}$ and $\bar{R}$ respectively. Using this we may compute the asymptotic behaviour of the optical Ricci scalar, \begin{align} R(z, x) & = 3 \bar{R}(x) - 18 t_{tt}(x) z + O( z^2) \; . \end{align} Note that this implies the Ricci scalar of $\Sigma$ is simply related to the boundary value of the bulk optical Ricci scalar as \cite{Hickling:2015sha}, \begin{align} \label{eq:boundaryrelation} \bar{R} = \frac{1}{3} R |_{\partial M} \; . \end{align} Now we may compute the boundary term $\int_{\partial \mathcal{M}} dA^i \partial_i R$. Using the above expansions we have, \begin{eqnarray} \partial_n R & = 18 t_{tt} + O( z) \end{eqnarray} where $n = - \frac{1}{\sqrt{g_{zz}}} \frac{\partial}{\partial z}$ gives the unit normal to a constant $z$ surface in $(\mathcal{M},g)$ directed towards the conformal boundary, so that, \begin{align} \label{eq:surfasym} \int_{\partial \mathcal{M}} dA^i \partial_i R & = \int_{Z = 0} \sqrt{\bar{g}} \, \partial_n R \nonumber \\ & = 18 \int_{\Sigma} \sqrt{\bar{g}} \, t_{tt} = \frac{6}{c} E \; . \end{align} Thus we see that the surface term associated to the spacetime conformal boundary is simply proportional to the CFT energy.
We note that if a finite or zero temperature bulk spacetime ends only on the conformal boundary, with no horizons or singularities, then the only boundary term is the one above and from~\eqref{eq:surfterms} this simply yields the result $E \le 0$. Thus we can already see that the energy in these cases is non-positive. An example of such a situation is for a CFT where $\Sigma$ is a round sphere radius $\mathcal{R}$, and the temperature is taken to be well below that of the Hawking-Page phase transition \cite{Hawking:1982dh,Witten:1998zw}. In this case it is expected that no static black hole solutions exist at such temperatures (certainly this is true imposing spherical symmetry) and the only bulk is global AdS. Thus taking $\Sigma$ as a small deformation of a round sphere and taking low temperatures, so $T \mathcal{R} \ll 1$ we expect no bulk horizons and hence again $E \le 0$ by the above. Another example is for $\Sigma$ a 2-torus with antiperiodic fermion boundary conditions about one cycle. The relevant static bulk is then the AdS-soliton \cite{Horowitz:1998ha}. It is believed that black holes have a minimum temperature \cite{Aharony:2005bm,Emparan:2009dj,Figueras:2014lka} with such asymptotics, and so below this temperature the only candidate bulk spacetime is the AdS-soliton itself which indeed has negative energy $E < 0$. More generally we might imagine that boundary metrics $\Sigma$ that lead to confining behaviour at low temperatures (for example, taking $\Sigma$ to have positive scalar curvature \cite{Hickling:2015sha}) have dual static geometries with no bulk horizons below some threshold temperature, and hence in such a temperature range must have $E \le 0$. However, as emphasised above, in general we may have bulk horizons at arbitrarily low temperature as in the example of $\Sigma$ being a torus (with periodic fermion boundary conditions). Hence we now proceed to consider the contribution to surface terms from finite temperature horizons in the bulk. 
\section{Horizon surface terms \label{sec:horizon}} A static smooth spacetime Killing horizon can be written in local coordinates as in equation~\eqref{eq:horizon1} in our bulk ansatz. In order to compute the surface term due to a horizon component $\mathcal{H}_A$ we solve the bulk Einstein condition as an expansion in the normal coordinate $r$ about the horizon location $r=0$. One finds, \begin{align} f(r,y) & = 1 - \frac{1}{6} \bar{R} r^2 + O(r^3) \nonumber \\ h_{ab}(r, y) &= \bar{h}_{ab} + \left( \frac{3}{2 \ell^2} + \frac{1}{4} \bar{R} \right) \bar{h}_{ab} r^2 + O(r^3) \end{align} where $\bar{h}_{ab}(y)$ is the metric induced on the spatial section of the horizon, and $\bar{R}$ is its Ricci scalar. From this one may deduce that the optical Ricci scalar behaves as, \begin{align} \partial_n R & = - \kappa^3 \left( \frac{12}{\ell^2} + 6 \bar{R} \right) r^2 + O(r^3) \end{align} where $n = - \frac{1}{\sqrt{g_{rr}}} \frac{\partial}{\partial r}$ gives a unit normal to a constant $r$ surface in $(\mathcal{M},g)$ directed into the horizon, and then taking into account the fact that the volume element of a constant $r$ surface scales as $\sim \frac{1}{\kappa^2 r^2} \sqrt{\bar{h}}$ as $r \to 0$, one finds the finite value, \begin{align} \int_{\mathcal{H}_A} dA^i \partial_i R & = \int_{r = 0} \sqrt{\bar{g}} \partial_n R \nonumber \\ & = - \kappa \int \sqrt{\bar{h}} \left( \frac{12}{\ell^2} + 6 \bar{R} \right) \nonumber \\ & = - \frac{12 \kappa}{\ell^2} A_{\mathcal{H}_A} - 6 \kappa \int \sqrt{\bar{h}} \bar{R} \; . \end{align} Now using the relations to CFT temperature $T$, the contribution to the entropy, $S_{\mathcal{H}_A}$, and the Gauss-Bonnet theorem, we may write, \begin{align} \int_{\mathcal{H}_A} dA^i \partial_i R & = - \frac{6 }{c} T S_{\mathcal{H}_A} - 48 \pi^2 \chi_{\mathcal{H}_A} T \end{align} where $\chi_{\mathcal{H}_A}$ is the Euler characteristic for the spatial horizon geometry of $\mathcal{H}_A$. 
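It is instructive to check these surface terms in the planar AdS-Schwarzschild example of section~\ref{sec:holo}. For its optical metric $g = \mathrm{diag}(1/f, 1/f, 1/f^2)$ one can compute the optical Ricci scalar in closed form, $R = 2 f f'' - \tfrac{3}{2} f'^2 = -12 z/z_0^3 - 3 z^4/(2 z_0^6)$, and since $\sqrt{g}\, g^{zz} = 1$ all fluxes reduce to one-dimensional expressions. The following sympy sketch (an illustration under the stated closed form) verifies, per unit torus area, that the boundary flux is $12/z_0^3 = 6E/(cA)$, the horizon flux is $-18/z_0^3 = -6TS/(cA)$ (with $\chi_{T^2} = 0$), and that their sum matches the bulk integral of $\nabla^2 R$ as the divergence theorem requires:

```python
import sympy as sp

z, z0 = sp.symbols('z z_0', positive=True)
f = 1 - (z / z0)**3

# Optical Ricci scalar of the planar AdS-Schwarzschild optical metric
# g = diag(1/f, 1/f, 1/f^2); for this warped form R = 2 f f'' - (3/2) f'^2
R = sp.expand(2*f*sp.diff(f, z, 2) - sp.Rational(3, 2)*sp.diff(f, z)**2)

# Flux densities per unit torus area: the area element of a constant-z
# surface is 1/f and the unit normal has |n^z| = f, so the f's cancel.
flux_bdry = sp.limit(-sp.diff(R, z), z, 0)   # outward at z -> 0
flux_hor = sp.limit(sp.diff(R, z), z, z0)    # outward at z -> z0

# Bulk integral: sqrt(g) Lap R = d/dz ( sqrt(g) g^{zz} dR/dz ) = R''(z)
bulk = sp.integrate(sp.diff(R, z, 2), (z, 0, z0))

print(flux_bdry, flux_hor, sp.simplify(bulk))
# 12/z_0**3 -18/z_0**3 -6/z_0**3
```

The total flux $12/z_0^3 - 18/z_0^3 = -6/z_0^3 < 0$ illustrates the sign in~\eqref{eq:surfterms} for this example.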
Since all the horizons are in equilibrium at the same temperature then the total contribution due to the horizons is, \begin{align} \label{eq:surfhoriz} \sum_A \int_{\mathcal{H}_A} dA^i \partial_i R & = - \frac{6 }{c} T S - 48 \pi^2 T \sum_A \chi_{\mathcal{H}_A} \end{align} with $S = \sum_A S_{\mathcal{H}_A}$ being the CFT entropy. \footnote{It is a simple matter to allow the horizons to have their own temperatures, and hence not be in equilibrium, but here we are interested in the equilibrium canonical ensemble.} \section{Vacuum energy bounds \label{sec:result}} Thus we see that the inequality $\nabla^2 R \le 0$ in the optical Ricci scalar integrated over the bulk yields the bound on the surface terms in~\eqref{eq:surfterms} which may be evaluated using equations~\eqref{eq:surfasym} and~\eqref{eq:surfhoriz} to deduce, \begin{align} \label{eq:thermobound} \frac{1}{c} F = \frac{1}{c} \left( E - T S \right)\le 8 \pi^2 \, T \sum_A \chi_{\mathcal{H}_A} \end{align} where $F = E - T S$ is the CFT free energy at temperature $T$. While this bound can be thought of as a constraint on the thermodynamics of the CFT and equivalently on the dual black holes, it also allows us to bound the zero temperature vacuum Casimir energy by taking the limit $T \to 0$. By Assumption 2 the total energy and entropy are bounded and continuous in the limit $T \to 0$, so, \begin{eqnarray} \lim_{T \to 0} \left( \frac{ E }{c } \right) \le 0 \end{eqnarray} and hence the vacuum energy is non-positive. As discussed above, these inequalities can only be saturated if the tensor $\tilde{R}_{ij}$ (the tracefree part of $R_{ij}$) vanishes everywhere in $\mathcal{M}$. However, since, \begin{eqnarray} \nabla^i \tilde{R}_{ij} = \nabla^i {R}_{ij} - \frac{1}{3} \partial_j R = \frac{1}{6} \partial_j R \end{eqnarray} we see that a necessary condition for saturation of these bounds is that the optical Ricci scalar is constant on $\mathcal{M}$. 
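As a sanity check of~\eqref{eq:thermobound}, consider again the CFT on a flat torus of area $A$ dual to planar AdS-Schwarzschild. Using the standard thermodynamics of that solution, $T = 3/(4\pi z_0)$, $E = 2cA/z_0^3$ and $S = 4\pi c A/z_0^2$ (values quoted from the literature in our conventions, with $c = \ell^2/16\pi G_{(4)}$), and $\chi_{T^2} = 0$, one finds $F/c = -A/z_0^3 < 0$, satisfying the bound, while $E/c \to 0$ as $z_0 \to \infty$ (i.e. $T \to 0$), consistent with the flat torus being a constant curvature case where the vacuum energy may vanish. A sympy sketch of this arithmetic:

```python
import sympy as sp

z0, A, c = sp.symbols('z_0 A c', positive=True)

# Planar AdS4-Schwarzschild thermodynamics (standard results, our conventions)
T = 3 / (4 * sp.pi * z0)       # Hawking temperature
E = 2 * c * A / z0**3          # holographic energy of the thermal torus state
S = 4 * sp.pi * c * A / z0**2  # horizon entropy A_H / 4G
chi = 0                        # Euler characteristic of the toroidal horizon

F = E - T * S
print(sp.simplify(F / c))          # -A/z_0**3, below the bound 8*pi^2*T*chi = 0
print(sp.limit(E / c, z0, sp.oo))  # 0: the vacuum energy vanishes as T -> 0
```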
Furthermore equation~\eqref{eq:boundaryrelation} then implies that $\bar{R}$, the Ricci scalar of $\Sigma$, must also be constant. We conclude that static bulk spacetimes where $\Sigma$ has non-constant scalar curvature must have negative vacuum energies $E < 0$. Thus the vacuum energy can only vanish in the non-generic situation that $\Sigma$ has constant curvature, and hence is locally a sphere, torus or hyperbolic space. \section{Discussion \label{sec:discussion}} We now conclude with a brief discussion. Firstly we point out that since our result only applies in the large $c \to \infty$ limit (required for having a gravity dual), it is only the leading part of the energy $E$ that is constrained to be non-positive. However holography predicts that provided a bulk dual exists, the vacuum energy will generically go as $E \sim O(c)$ (except, as we have seen, in cases of maximally symmetric $\Sigma$ where it may vanish to leading order in $c$). We emphasise that our results above rely on the CFT having a dual description given by a bulk spacetime which is smooth at any small finite temperature. If it happened that for some $\Sigma$ no bulk spacetime existed, or somehow violated our assumptions, then the result is not expected to hold, although we know of no such situation. We note that the thermodynamic bound~\eqref{eq:thermobound} is precisely the one found previously in \cite{Galloway:2015ora} specialised to the case of a $(3+1)$-dimensional bulk although in that work $\Sigma$ was constrained to be of constant curvature, so a round sphere, flat torus (or quotients of these) or compact hyperbolic space.\footnote{ Note that in these cases of $\Sigma$ we already know explicit bulk metrics both at finite and zero temperature (although we certainly do not know all such solutions). } It is interesting to note that the bound in \cite{Galloway:2015ora} was given in any bulk dimension. 
For $D$ boundary dimensions, and hence $(D+1)$ bulk dimensions, the optical Ricci scalar obeys the relation \cite{Hickling:2015sha}, \begin{eqnarray} \nabla^i \left( \frac{1}{Z^{D-3}} \partial_i R \right) \le 0 \; . \end{eqnarray} However, in the higher dimensional cases $D > 3$ this inequality does not obviously lead to analogous bounds on the thermodynamics or energy. Upon integrating over $(\mathcal{M}, g)$ and using the Gauss law, the surface term generated at the bulk spacetime conformal boundary is no longer generally finite. It would obviously be interesting to explore whether our arguments can be modified to yield bounds in higher dimensions too. Another interesting question is whether these results generalise to the inclusion of bulk matter fields obeying specific energy conditions. It is worth contrasting our result with previous work on positive energy theorems \cite{Gibbons:1983aq,Cheng:2005wk} relevant for the AdS-CFT setting. In the $(2+1)$-dimensional case we consider, it was argued in \cite{Cheng:2005wk} that for time dependent bulk dynamics a necessary condition for the energy to be bounded from below by zero was the existence of a spinor on $\Sigma$ obeying certain differential conditions. On the other hand our result shows that for generic $\Sigma$ we expect the energy to be negative for the bulk vacuum geometry, and it must therefore be true that the assumptions required for stability as discussed in \cite{Cheng:2005wk} cannot hold. It is important to note that this does not imply such a setting is necessarily dynamically unstable. Indeed the example of the AdS-soliton \cite{Horowitz:1998ha} is believed to be stable to bulk perturbations. We conclude by emphasising that $(2+1)$-dimensional holographic CFTs are known explicitly, as in the canonical example of the ABJM theory \cite{Aharony:2008ug}.
They may potentially be of interest in the context of AdS/CMT \cite{Hartnoll:2009sz}, where one might in principle imagine a real world experiment simulating a holographic $(2+1)$-dimensional CFT, in which the spatial geometry of the material could be deformed away from a very symmetric case such as a plane, leading to a Casimir effect. Since our results show the energy is reduced to become negative for generic perturbations of flat space, this indicates potential instabilities associated with crumpling, driven by a decrease in the vacuum energy. \section{Relation to Anderson's bound \label{sec:Anderson}} After completion of this work Juan Maldacena pointed out to us another way that the above results may be derived, using the geometric bound of Anderson \cite{Anderson}. We have included a sketch of this alternative derivation for the interest of the reader, noting that it provides a beautiful example of how a physical result may be geometrized in AdS-CFT. Starting from the 4-dimensional Gauss--Bonnet theorem, Anderson has shown that the renormalized volume of an asymptotically hyperbolic Einstein manifold satisfies the bound, \begin{align} V_{ren} \le \frac{4 \pi^2}{3} \ell^4 \chi(M_4) \; . \end{align} Here $\chi(M_4)$ is the Euler character of the bulk spacetime, calculated by writing this as a conformally compact manifold, taking $ds^2_4 = \frac{1}{Z^2} g_{AB} dx^A dx^B$ for a defining function $Z$ and taking $(M_4, g)$ to be a smooth Riemannian manifold with boundary $\partial M_4$ where the defining function vanishes. This bound is saturated if and only if the bulk Weyl tensor vanishes. Since we are considering static bulk spacetimes with horizons at equilibrium, we may analytically continue the Lorentzian bulk geometry to a Riemannian metric by continuing to Euclidean time $\tau = i t$.
Then with the appropriate identification $\tau \sim \tau + \beta$ the Lorentzian horizons continue to smooth fixed point sets under the Killing symmetry generated by $\partial/\partial \tau$, referred to as `bolts'. Following \cite{Maldacena:2011mk}, the renormalized volume is simply related to the renormalized on-shell Euclidean gravitational action, $S_{E,\mathrm{on-shell}} = \frac{3}{8 \pi G \ell^2} V_{ren}$ \cite{Balasubramanian:1999re,Skenderis:2002wp}. Now Euclidean semi-classical gravity implies that the Euclidean action (suitably renormalized) is related to the free energy $F$ as, $S_{E,\mathrm{on-shell}} = \frac{1}{T} F$. Via Anderson's bound this yields, \begin{align} \label{eq:newbound} \frac{1}{T} F \le 8 \pi^2 c \, \chi(M_4) \; . \end{align} This is similar in spirit to our thermodynamic bound in equation~\eqref{eq:thermobound}, although the Euler number refers to the bulk spacetime rather than the horizons that might be present. Note that, following our argument and assumptions in the paper, considering this bound at finite temperature and then taking $T \to 0$, so $F \to E$, implies $E \le 0$ (again without assuming the zero temperature solution itself is smooth). However we may also recover our thermodynamic inequality~\eqref{eq:thermobound} directly from the bound above. Following \cite{Gibbons:1979xm}, the Euler character of a 4-manifold with a Killing vector may be given in terms of the nature of the fixed point sets of the action generated by it, in particular the number of `nuts', `anti-nuts' and `bolts'. This holds for manifolds with boundary provided that the Killing vector is tangential to the boundary. In our case the defining function has been chosen to be compatible with the static symmetry so that $(M_4,g)$ is a static smooth 4-manifold with boundary, and Killing vector $\partial/\partial \tau$ that is tangential to the boundary $\partial M_4$.
Then we have, \begin{align} \chi(M_4) = N_+ + N_- + \sum_{A=1}^{N_B} \chi_A \end{align} where $N_{+}$ and $N_-$ are the number of nuts and anti-nuts, and $\chi_A$ is the Euler character of the $A$-th bolt, of which there are $N_B$. In our application we have no nuts or anti-nuts, and the bolts continue to the horizons, with the 2-geometry of the bolt corresponding to the spatial section of the Lorentzian horizon. Hence for our application this implies, \begin{align} \chi(M_4) = \sum_{A} \chi({\mathcal{H}_A}) \end{align} and substituting into~\eqref{eq:newbound} yields our bound~\eqref{eq:thermobound}. Anderson's bound is saturated for vanishing bulk Weyl tensor whereas we had previously found saturation for vanishing $ \tilde{R}_{ij} $, the traceless Ricci tensor of the optical metric. A simple computation confirms that the 4-dimensional Weyl tensor vanishes precisely when $\tilde{R}_{ij}$ does, so the two saturation conditions agree. \section*{Acknowledgements} We thank Philip Candelas and Paul McFadden for useful comments. We are very grateful to Juan Maldacena for pointing out the relation to Anderson's bound which we have detailed in section \ref{sec:Anderson}. This work was supported by the STFC grant ST/J0003533/1. AH is supported by an STFC studentship. \bibliographystyle{apsrev4-1}
\section{Introduction} Sustainable development was first addressed by Erwin Schr\"odinger \cite{Schroedinger} based on entropy, where development was characterized by increasing ``orderliness'' (nowadays complexity). He pointed out that the development of highly complex forms of matter (or life) should be built on less complex forms. This means decreasing entropy, while an increase of the entropy of matter should be avoided if we want to maintain sustainable development. Recently the build-up of complexity was studied quantitatively on the example of 1 kg of matter in different forms, starting from the simplest example of ideal gases and then continuing with more complex chemical, biological, and living structures \cite{CSV2016}. The complexity of these systems was assessed quantitatively, based on their entropy. We use the method introduced in Ref. \cite{CSV2016}, which attributed entropy on the same footing to known physical systems and to complex organic molecules, up to the most complex Human Genome DNA. Schr\"odinger \cite{Schroedinger} also concluded that the emergence of life does not require new fundamental laws of physics that would allow for non-increasing entropy. Indeed, the Earth is an open system \cite{CsPSX2016} with a boundary condition that strongly decreases entropy, and this boundary condition enforces development towards decreasing entropy, i.e. increasing complexity. The Human brain has a vastly greater capacity for complexity than biological molecules, and it carries abstract information, as well as many vegetative and reflex functions. The direct calculation of the complexity of the coding in the Human neural network is beyond our present knowledge, but we can study the stored, consciously reachable information and its complexity. Conscious thinking can be studied indirectly via the analysis of Human languages. We can think about one subject at a time, just like we can speak about one subject at a time.
\section{Language Complexity} As discussed in Ref. \cite{CSV2016}, to analyze a system from the entropy or complexity point of view we have to consider two basic aspects: (i) the quantum of information or of the substance we analyse, and (ii) the set of all possible configurations of the degrees of freedom, as well as the realized, realizable or existing configurations from this set. Regarding the first point (i), in physics we quantize the phase space (the six-dimensional position and momentum space) and introduce the volume of the phase-space element based on the quantum mechanical uncertainty relation. In the case of a language the basic element could be the word. This can also be the basic element of conscious thinking. At this time we do not have sufficient information on how a ``word'' is represented in a neural network, how many neurons and synapses are involved, and what is the weight of the corresponding material. Hopefully in the future we can acquire the knowledge to answer these questions. This situation is similar to the early development of statistical physics, when kinetic theory and thermodynamics were already known, with entropy and the second law of thermodynamics. At that time it was already realized that the phase space should be quantized, but before quantum mechanics one did not know what this phase-space volume should be. This led to a state where entropy was only defined up to a constant, which could be chosen freely. Still, similar systems could be analysed quantitatively and compared to each other. When we choose the word as the quantum of a language we are in a similar situation to early thermodynamics. A constant {\it remains to be determined} in order to compare the entropy of the language to that of the ideal gases or the Human DNA sequence. The second condition (ii) is not very problematic in the case of a language: the set of words in use can be determined by the analysis of texts.
Then the number of all possible configurations can be calculated for a sentence of any given length. For long sentences this number of configurations can become astronomically high, but one can analyse the distribution of sentence lengths as well as the maximal length. This makes the number of possible configurations finite. Subsequently one can analyse existing texts and evaluate the number of realized configurations. This last step can be done for a single person's language (if he or she wrote extensively, so that we can analyse the language). It can be done for writings in a region where the language is used, or for all users of the language.\footnote{ In some languages the computational analysis of texts may be problematic; e.g., in Hungarian the form of a word in a given text changes to the extent that it is not possible to find the root of the word in a dictionary. A good knowledge of the language and grammar would be necessary to do this analysis, which computational analysis programs cannot do at this time.} \section{Analysis of Chinese Language with Characters} As a first example we use the analysis of the Chinese language to obtain an order-of-magnitude estimate of the quantitative complexity or entropy of Human thinking via the language. The Chinese language uses characters. On average a person uses about 3000 characters in communication. The characters may form words of one, two or more characters. These, in turn, form sentences, which are separated by periods (and exclamation or question marks) in writing. Texts of about 26000--80000 Chinese characters were analysed in four samples, Samples $I$ to $IV$ \cite{c1,c2,c3,c4}. We evaluated how many different Chinese characters were contained in a given sample, $N_c$. Then in the first evaluation, we checked how many one character sentences, $N^c_1$, two character sentences, $N^c_2$, three character sentences, $N^c_3$, and so on, up to 35 character sentences were in the samples.
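The counting procedure just described is straightforward to mechanize. A minimal Python sketch (the function name is ours; sentence terminators follow the convention above of periods, exclamation and question marks, including their full-width Chinese forms):

```python
import re

def sentence_statistics(text):
    """Count the distinct characters N_c and, for each sentence length k,
    the number of *different* k-character sentences N_k in a text.
    Sentences are delimited by periods, exclamation and question marks
    (both ASCII and full-width forms)."""
    terminators = "[.!?\u3002\uff01\uff1f]"
    sentences = [s.strip() for s in re.split(terminators, text) if s.strip()]
    # N_c: number of distinct characters occurring in the sample
    n_c = len({ch for s in sentences for ch in s})
    # N_k: number of different sentences of each length k
    distinct = {}
    for s in sentences:
        distinct.setdefault(len(s), set()).add(s)
    n_k = {k: len(v) for k, v in sorted(distinct.items())}
    return n_c, n_k
```

Applied to the four sample texts, this yields the quantities $N_c$ and $N^c_k$ analysed below.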
See table \ref{t1}. \begin{table}[h] \setlength{\tabcolsep}{2.1pt} \renewcommand{\arraystretch}{1.25} \begin{tabular}{crrrrrrrrrrr} \hline\hline \phantom{\Large $^|_|$}\!\! Sample\!\!\!\!\! & $N_s$&$N_c$ & $N^c_1$ & $N^c_2$ & $N^c_3$ & $N^c_4$ & $N^c_5$ & $N^c_6$ & $N^c_7$ & $N^c_8$ & $N^c_9$ \\ \hline $I$ & 79959& 2553&163& 375& 248& 225& 209& 193& 168& 195& 149\\ $II$ & 79470& 2137& 69& 130& 100& 126& 123& 181& 170& 156& 169\\ $III$ & 26671& 2096& 4& 4& 5& 6& 24& 20& 32& 27& 30\\ $IV$ & 29083& 1916& 1& 4& 5& 20& 19& 18& 29& 38& 48\\ \hline \hline \end{tabular} \caption{ Number of all Chinese characters, $N_s$, and of different Chinese characters, $N_c$, in the Sample texts $I$ to $IV$. Then the sentences (separated by periods) are analysed: the one character sentences, two character sentences, and so on. The numbers of different $k$-character sentences, $N^c_k$, were counted in the sample texts. The longest sentences were 162, 119, 145, and 129 characters long for the four samples, respectively.} \label{t1} \end{table} Let us now consider the two character sentences. Such a sentence can be formed by choosing one character of the $N_c$ for the first position, and another of the $N_c$ for the second position. The two characters may be identical, and the sequence of the characters is meaningful. Consequently the maximum number of possible {\it two character sentences} is $N_c^2$, and the probability of one configuration is $p_i = 1/N_c^2$. Thus the maximum entropy of all possible two character sentences is \begin{eqnarray} H(X_2^{max}) &=& - \sum_{all} p_i \ln p_i = - N_c^2 \frac{1}{N_c^2} \ln \frac{1}{N_c^2} \nonumber \\ &=& \ln N_c^2 = 15.690, 15.334, 15.296, 15.116\ , \nonumber \\ \label{HX2max} \end{eqnarray} for Samples $I,\, II,\, III,\, IV,$ respectively. In real physical or biological situations not all (hypothetical) configurations are realized.
The number of observed or Realized (R) different {\it two character sentences} for Sample $I$ is only $N^c_2= 375$. Consequently the corresponding specific configuration entropy is \begin{eqnarray} H(X_2^R) &=& - \sum_{i=1}^{N^c_2} p_i \ln p_i = - N^c_2 \frac{1}{N_c^2} \ln \frac{1}{N_c^2} \nonumber \\ &=& 2 N^c_2 \ln(N_c) / N_c^2 = 9.027\cdot 10^{-4} . \label{HX2} \end{eqnarray} This entropy is proportional to the number of two-character sentences, $N^c_2$. At the same time $N^c_2$ is also proportional to the size of the Sample text. We can do the same analysis in this Sample $I$ text for sentences of one, three, four, ..., up to 162 Chinese characters. These are unrelated configurations, and as the entropy is additive, the specific entropy of {\it all sentences} of Sample text $I$, based on Chinese characters, is \begin{eqnarray} \sigma_c &=& H(X_1^R) + H(X_2^R) + H(X_3^R) + H(X_4^R) + ... \nonumber \\ &=& 5.009\cdot 10^{-1} +9.027\cdot 10^{-4} + 3.508 \cdot 10^{-7} + ... \nonumber \\ &=& 5.018\cdot10^{-1}. \nonumber \\ \end{eqnarray} One can see that the few (one, two, three) character sentences provide the largest contribution to the entropy, while the longer ones have a minor contribution. A higher level of complexity is achieved by minimizing the use of one or two character sentences. The very long sentences have a very large number of hypothetical possibilities, while they occur very seldom in the text. The contribution of 10 character sentences to the entropy is $\sigma_{10c} < 10^{-30}$, and that of the longer ones is even smaller. One could take into account the relative frequencies of the different length sentences, but the relative frequencies of long sentences in the sample texts are also rapidly decreasing. Therefore their entropy contribution is utterly negligible. This also indicates that beyond a certain size the length of the Sample text is not very important, as it leads to relatively small changes in the results.
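The numbers quoted above follow directly from the counts in Table \ref{t1}. A small Python sketch (our own notation), evaluating $H(X_k^R) = k\, N^c_k \ln(N_c)/N_c^k$ for each sentence length and summing, together with the length normalization introduced below, reads:

```python
from math import log

def realized_entropy(n_vocab, counts):
    """Sum of H(X_k^R) = k * N_k * ln(N) / N**k over sentence lengths k,
    where N is the number of distinct characters (or words) and N_k the
    number of different k-element sentences observed."""
    return sum(k * n_k * log(n_vocab) / n_vocab**k
               for k, n_k in counts.items())

# Sample I of Table 1: N_c = 2553 distinct characters in N_s = 79959.
# Sentences of four or more characters contribute negligibly.
sigma_c = realized_entropy(2553, {1: 163, 2: 375, 3: 248})
sigma_c_10k = 10000 * sigma_c / 79959   # entropy per 10000 characters
```

With these inputs the individual terms reproduce $5.009\cdot 10^{-1}$, $9.027\cdot 10^{-4}$ and $3.508\cdot 10^{-7}$, and the totals $\sigma_c \approx 5.018\cdot 10^{-1}$ and $\sigma_{c/10k} \approx 6.276\cdot 10^{-2}$ quoted in the text.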
\begin{table}[h] \setlength{\tabcolsep}{2.5pt} \renewcommand{\arraystretch}{1.25} \begin{tabular}{cccc} \hline\hline \phantom{\Large $^|_|$}\!\! Sample\ \ & $N_s$& $\sigma_{c}$ & $\sigma_{c/10k}$ \\ \hline $I$ & 79959\ \ & $5.018 \cdot10^{-1}$\ \ & $6.276 \cdot 10^{-2}$ \\ $II$ & 79470\ \ & $2.480 \cdot10^{-1}$\ \ & $3.121 \cdot 10^{-2}$\\ $III$ & 26671\ \ & $1.461 \cdot10^{-2}$\ \ & $5.477 \cdot 10^{-3}$\\ $IV$ & 29083\ \ & $3.961 \cdot10^{-3}$\ \ & $1.362 \cdot 10^{-3}$\\ \hline \hline \end{tabular} \caption{ Specific entropy of the Sample texts based on Chinese characters, where $N_s$ is the number of characters in the Sample text, $\sigma_{c}$ is the entropy of the text, and $\sigma_{c/10k}$ is the entropy of the text normalized to 10000 character length.} \label{sc1} \end{table} The entropy of a Sample text is proportional to the length of the text. In order to compare texts of different lengths we can introduce a specific entropy for 10000 characters (or words), so for Sample $I$: \begin{equation} \sigma_{c/10k} \equiv 10000 \cdot \sigma_c / N_s = 6.276 \cdot 10^{-2}\ . \end{equation} Samples $I$--$IV$ are different texts with different parameters. The entropy analysis can be performed the same way as for Sample $I$, resulting in: \begin{equation} \sigma_{c/10k} = (6.276,\ 3.121,\ 0.5477,\ 0.1362) \cdot 10^{-2} \ , \end{equation} for Sample texts $I$--$IV$ respectively. The shorter text samples have a tendency to give a smaller length-normalized specific entropy. The results are summarized in Table \ref{sc1}. \section{Analysis of Chinese Language with Words} In the Chinese language, although single characters may correspond to a word, certain two or three character combinations are unique and can be considered as words. So in this sense words, instead of Chinese characters, can be considered as the basic parts of a sentence. See table \ref{t2}.
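Counting words rather than characters first requires segmenting the unspaced Chinese text into words, which is done here with the jiebaR package. As a purely illustrative stand-in (real segmenters such as jiebaR combine a dictionary with a statistical model for unknown words), a toy greedy forward maximum-matching segmenter in Python:

```python
def segment_max_match(text, vocabulary, max_word_len=4):
    """Toy forward maximum-matching word segmenter: at each position take
    the longest vocabulary entry that matches, falling back to a single
    character.  Illustrative only -- not the actual jiebaR algorithm."""
    words, i = [], 0
    while i < len(text):
        # try the longest candidate first, down to a single character
        for k in range(min(max_word_len, len(text) - i), 0, -1):
            candidate = text[i:i + k]
            if k == 1 or candidate in vocabulary:
                words.append(candidate)
                i += k
                break
    return words
```

Once the text is segmented, the word-based counts $N_w$ and $N^w_k$ can be collected exactly as in the character-based analysis.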
\begin{table}[h] \setlength{\tabcolsep}{2.0pt} \renewcommand{\arraystretch}{1.25} \begin{tabular}{crrrrrrrrrrr} \hline\hline \phantom{\Large $^|_|$}\!\! Sample\!\!\!\!\! & $N_s$&$N_w$ & $N^w_1$ & $N^w_2$ & $N^w_3$ & $N^w_4$ & $N^w_5$ & $N^w_6$ & $N^w_7$ & $N^w_8$ & $N^w_9$ \\ \hline $I$ & 49835&10122&558& 304& 348& 279& 282& 283& 249& 257& 241\\ $II$ & 47911& 8169&208& 174& 219& 268& 260& 264& 273& 262& 261\\ $III$ & 16780& 5086& 5& 13& 22& 38& 46& 41& 53& 55& 54\\ $IV$ & 18501& 4775& 4& 13& 19& 55& 50& 47& 68& 65& 49\\ \hline \hline \end{tabular} \caption{ Number of all Chinese words, $N_s$, and of different Chinese words, $N_w$, in the Sample texts $I$--$IV$. Then the sentences (separated by periods) are analysed: the one word sentences, two word sentences, and so on. The numbers of different $k$-word sentences, $N^w_k$, were counted in the sample texts.\\ } \label{t2} \end{table} In Chinese writing the words are not separated by spaces, but commas, quotation marks and other punctuation may separate words. We employ the package ``jiebaR'' of the {\it R} language to distinguish the words. We can calculate the maximum specific entropy for all hypothetical $k$-word combinations, using the number of different Chinese words in the sample text. For example for {\it two word sentences} \begin{eqnarray} H(X_2^{max}) &=& - \sum_{all} p_i \ln p_i = - N_w^2 \frac{1}{N_w^2} \ln \frac{1}{N_w^2} \nonumber \\ &=& \ln N_w^2 = 18.444, 18.016, 17.068, 16.943\ , \nonumber \\ \label{HX2maxw} \end{eqnarray} for Samples $I,\, II,\, III,\, IV,$ respectively. These are larger than the maximum entropies based on Chinese characters, as the number of different words is larger than the number of different characters. The number of observed sentences in the Sample texts is of course much smaller than the hypothetical maximum, thus the realized specific entropies are also much smaller; the dominant contribution comes from the {\it one word sentences}: \begin{equation} H(X_1^R) = (508.4 ,\ 229.4 ,\ 8.390,\ 7.096) \cdot 10^{-3}.
\end{equation} for Samples $I,\, II,\, III,\, IV,$ respectively. Then we add up the entropy contribution of {\it all observed sentences} of all lengths in the Sample texts. This provides the total specific entropy \begin{equation} \sigma_w = (508.5,\ 229.4,\ 8.399,\ 7.106 )\ \cdot 10^{-3},\ \end{equation} for Sample texts $I$--$IV$ respectively. We summarize these data in Table \ref{sw1}. \begin{table}[h] \setlength{\tabcolsep}{2.5pt} \renewcommand{\arraystretch}{1.25} \begin{tabular}{cccc} \hline\hline \phantom{\Large $^|_|$}\!\! Sample\ \ & $N_s$& $\sigma_{w}$ & $\sigma_{w/10k}$ \\ \hline $I$ & 49835\ \ & $5.085 \cdot10^{-1}$\ \ & $1.020 \cdot 10^{-1}$ \\ $II$ & 47911\ \ & $2.294 \cdot10^{-1}$\ \ & $4.788 \cdot 10^{-2}$\\ $III$ & 16780\ \ & $8.399 \cdot10^{-3}$\ \ & $5.005 \cdot 10^{-3}$\\ $IV$ & 18501\ \ & $7.106 \cdot10^{-3}$\ \ & $3.841 \cdot 10^{-3}$\\ \hline \hline \end{tabular} \caption{ Specific entropy of the Sample texts based on Chinese words, where $N_s$ is the number of words in the Sample text, $\sigma_{w}$ is the entropy of the text, and $\sigma_{w/10k}$ is the entropy of the text normalized to 10000 word length.} \label{sw1} \end{table} The entropies obtained from the analysis of words are similar to those based on characters. The difference between the entropy and the length-normalized entropy, for both characters and words, is smaller in the case of words. \section{Analysis of English texts} The analysed English text samples \cite{e1,e2,e3,e4} contained 102613, 93668, 6480, and 8992 words. The 3rd and 4th texts are from the first presidential candidacy debate of Hillary Clinton and Donald Trump. See table \ref{t3}. \begin{table}[h] \setlength{\tabcolsep}{2.1pt} \renewcommand{\arraystretch}{1.25} \begin{tabular}{crrrrrrrrrr} \hline\hline \phantom{\Large $^|_|$}\!\! Sample\!\!\!\!\!
&$N_s$&$N_w$ & $N^w_1$ & $N^w_2$ & $N^w_3$ & $N^w_4$& $N^w_5$ & $N^w_6$ & $N^w_7$ & $N^w_8$ \\ \hline $I$ &102613& 7966& 0& 6& 33& 68& 155& 120& 189& 272\\ $II$ & 93668& 5745& 44& 10& 12& 11& 7& 8& 28& 35\\ $III$& 6480& 1309& 2& 6& 7& 19& 29& 28& 22& 28\\ $IV$ & 8992& 1225& 18& 12& 27& 42& 74& 76& 66& 64\\ \hline \hline \end{tabular} \caption{ Number of all English words, $N_s$, and of different English words, $N_w$, in the Sample texts. Then the sentences (separated by periods) are analysed: the one word sentences, two word sentences, and so on. The numbers of different $k$-word sentences, $N^w_k$, were counted in the sample texts. While Samples $I$ and $II$ are extended written texts, $III$ and $IV$ are debates of Hillary Clinton and Donald Trump, respectively. The debate texts are shorter and thus also their vocabulary is more constrained. } \label{t3} \end{table} It is noticeable that while most Chinese sentences have 10 words or less, in the analysed English texts most sentences have about 20 words! This has an interesting effect on the complexity or entropy analysis of the text. We can calculate the maximum specific entropy for all hypothetical $k$-word combinations. For example for {\it two word sentences} \begin{equation} H(X_2^{max})= 17.876,\ 17.312,\ 14.354,\ 14.222, \end{equation} for the four English Sample texts. These are smaller than the maximum entropies for the Chinese word-based texts, due to the smaller number of different words in the English texts. See Figure \ref{f2}. The number of observed two word sentences in the text is of course much smaller than the hypothetical maximum, thus the specific entropy for the realized {\it two word sentences} is also smaller \begin{equation} H(X_2^R) = 8.494 \! \cdot 10^{-7}\!,\ 5.245 \! \cdot 10^{-6}\!,\ 5.026 \! \cdot 10^{-5}\!,\ 1.137 \! \cdot 10^{-4}\!,\ \end{equation} for the English text Samples.
\begin{figure}[h] \begin{center} \resizebox{0.95\columnwidth}{!} {\includegraphics{Figure1.pdf}} \vskip 1cm \caption{(Color online) The distribution of the sentences according to their length. The length is measured by the number of words in a sentence, while the number of sentences of a given length in a Sample text is shown. The red dots correspond to the English Sample text I, peaking at $\sim$ 26 words, while the blue dots to the Chinese Sample text III, peaking at $\sim$ 8 words. The lines are to guide the eye. } \label{f2} \end{center} \end{figure} Then we add up the entropy contributions of all observed sentences of all lengths. This provides the total specific entropy for {\it all sentences} for the four English Sample texts. See Table \ref{se1}. \begin{table}[h] \setlength{\tabcolsep}{2.5pt} \renewcommand{\arraystretch}{1.25} \begin{tabular}{cccc} \hline\hline \phantom{\Large $^|_|$}\!\! Sample\ \ & $N_s$& $\sigma_{w}$ & $\sigma_{w/10k}$ \\ \hline $I$ & 102613\ \ & $8.499 \cdot10^{-7}$\ \ & $8.283 \cdot 10^{-8}$ \\ $II$ & 93668\ \ & $6.630 \cdot10^{-2}$\ \ & $7.078 \cdot 10^{-3}$\\ $III$ & 6480\ \ & $1.102 \cdot10^{-2}$\ \ & $1.700 \cdot 10^{-2}$\\ $IV$ & 8992\ \ & $1.046 \cdot10^{-1}$\ \ & $1.163 \cdot 10^{-1}$\\ \hline \hline \end{tabular} \caption{ Specific entropy of the Sample texts based on English words, where $N_s$ is the number of words in the Sample text, $\sigma_{w}$ is the entropy of the text, and $\sigma_{w/10k}$ is the entropy of the text normalized to 10000 word length.} \label{se1} \end{table} For the 1st Sample text, this is essentially the same as the contribution of the shortest 2-word sentences in the text, because the next longer 3-word sentences have an entropy value that is orders of magnitude smaller. Due to the lack of single word sentences and the small number of two word sentences, the entropy of the English text is much smaller than that of the Chinese texts.
In the much shorter debate texts of Clinton and Trump the number of very short sentences dominates. Trump uses a much larger number of short sentences, and this increases the total entropy of his text, in addition to the fact that the total number of words in his text is significantly larger. See Figure \ref{f3}. \begin{figure}[h] \begin{center} \resizebox{0.95\columnwidth}{!} {\includegraphics{Figure2.pdf}} \vskip 1cm \caption{(Color online) The distribution of the sentences according to their length for the first presidential debate between Hillary Clinton and Donald Trump. The red dots correspond to Trump's text, while the blue dots to Clinton's. Trump's text is dominated by short sentences peaking with 76 sentences of 6 word length. The lines are to guide the eye. } \label{f3} \end{center} \end{figure} \section{Discussion} The analysis of the complexity or entropy of languages can of course be used for comparing different languages, different texts, or different authors to each other. There is a vast amount of literature analysing languages with many different methods. Here we have chosen a relatively simple and transparent method. But the entropy value, as a general feature of matter, can actually lead to conclusions regarding the entropy of the physical and biological structure of the brain, and the information content in an abstract sense. The language can be representative of the conscious operation of the brain. The physical and biological complexity has to be much larger, as the brain is responsible for the vegetative operation of the nervous system as well as for the dynamical changes of its operation and of human activity. The language itself is just a static set of information, but it has to be learned, so it is a structure which is the product of training or learning. The language itself can characterize the development, see e.g. \cite{JYun2015}. The language can also be attributed to a given amount of material.
It is a given part of the brain, even if we cannot identify it. Plausibly the same part of the brain carries other static information as well as dynamical information. This way, in addition to the specific entropy of the language, $\sigma_c$ or $\sigma_w$, we can also estimate the physical entropy $S_{1kg}$, or at least a lower limit of it. In the case of the usual (Shannon) entropy estimates the normalization is not the same as the physical one, but it is perfectly sufficient for comparative studies of this type of structural entropies. In this analysis the role of the physical phase space or configuration space is taken over by the ``word-space'' or ``Chinese character-space''. These spaces could in principle be extended to infinity, but in fact, taking all words of a language in a historical period, the word-space of a language is finite. This is also necessary as the language is a means of communication. Thus, we cannot add up the word-space of all languages. \section{Conclusions} We have demonstrated quantitatively the increasing complexity of materials, and used the entropy for a unit amount of material to obtain a measure. This idea stems from Erwin Schr\"odinger, but our knowledge today makes it possible to extend the level of quantitative discussion to complex living materials. We may continue these studies to higher levels of material structures, like living species, artificial constructions, symbiotic coexistence of different species, or groupings of the same species, up to structures in Human society. The main achievement of the earlier work \cite{CSV2016} was to show how the entropy in the physical phase space and the entropy of structural degrees of freedom (Shannon entropy) can be discussed on the same platform.
For further developments it is important to point out two fundamental aspects of the entropy concept: (i) the {\it quantization} of the space of a given degree of freedom, and (ii) the {\it selection of the realized, realizable or beneficial configurations} from all the possible ones. In the present work we introduced a quantization via the number of words or Chinese characters. At this moment we do not know how to relate this quantization to the basic physical quantum of the occupation of an elementary phase-space element. Thus, the relative normalization of the quantitative complexity or entropy of the language is still missing. We would need a much more detailed knowledge about the representation of language in the neural network of the human brain. The other aspect of the entropy calculation is actually solved in the case of human language or languages, as the realized configurations can be relatively easily determined by the analysis of texts. The Sample texts presented are all static snapshots. As we see on the example of the nervous system, the dynamical change of the entropy of the system is also important. The text analysis could trace the change of the complexity of the texts of an individual, which could be a measure of the period of increasing complexity in early years compared to decreasing complexity and increasing entropy later. Such an analysis could be performed on the novels of authors who were active for many years. The dynamics and direction of these changes are also essential, as shown in Ref. \cite{PCs1980}. \section*{Acknowledgements} This research was partially supported by the Academia Europaea Knowledge Hub, Bergen, by the Institute of Advanced Studies, K\H{o}szeg, by the National Natural Science Foundation of China (Grant No. 11505071), and by the Wuhan University of Technology.
\section{Introduction} According to recent lattice studies~\cite{wirNP97,wirPRD97,LatticeCrossover,ThisDayReview} it is very likely that the standard electroweak theory does not go through a true phase transition at finite temperature. The $3D$ $SU(2)$ Higgs model, which with a $t$-quark of appropriate mass is the effective, dimensionally reduced version of the electroweak theory, ceases to possess a first order transition for a Higgs mass $M_H > 72$ GeV. At larger Higgs masses the model merely experiences a smooth crossover~\cite{kajantieprl}. Due to the quantitative similarity of the phase transitions in the $SU(2)$ Higgs and in the $SU(2) \times U(1)$ Higgs~\cite{SU2U1} models and taking the current experimental lower bound~\cite{HiggsMassLimit} of the Higgs boson mass, $M_H \ge 89.3$ GeV, into account, the statement above is highly justified. Within the most popular baryon number generation (BG) scenarios~\cite{BAUScenarios} a {\it sufficiently strong} first order transition is required. Therefore the search for viable extensions of the standard model and the study of their phase structure have become an important direction of research. Nevertheless, not least in view of possible alternative BG mechanisms, it is still of some phenomenological interest to study features of the {\it standard model} which change qualitatively at some characteristic temperature (at the $W$ mass scale). For instance, a variation of the diffusion rate of the Chern--Simons number~\cite{Davis98} and a changing spectrum of screening states~\cite{IlSchSt98} could be the analogue of the phase transition that exists at low Higgs mass.
Very recently, we have shown \cite{ChGuIlSch98} that the first order phase transition at $M_H \le 70$ GeV is accompanied by a percolation transition\footnote{This is in agreement with an observation of Ref.~\cite{antunes}, made in a different context, that a percolation of strings is a good disorder parameter for a phase transition in field theory.} experienced by certain kinds of topological defects. This issue is quite new in the field of lattice studies of the electroweak transition, and various aspects still remain to be clarified. In this paper we shall describe what remains of this percolation transition when the thermal phase transition changes into a rapid crossover at higher Higgs mass. This paper is the second of a series of studies we want to devote to the {\it statistical properties} of so-called {\it embedded} topological defects~\cite{VaBa69,BaVaBu94} within the $SU(2)$ Higgs model. The embedded defects of interest are Nambu monopoles~\cite{Na77} and $Z$--vortices~\cite{Na77,Ma83}. Although not topologically stable, these defects are seen to occur as a result of thermal fluctuations. Here we are able to show that the percolation transition mentioned above persists at higher Higgs mass. We also provide first evidence that $Z$--vortices are indeed characterized by inhomogeneities of gauge invariant quantities (gauge field action and Higgs field modulus) that should be expected because of the appearance of corresponding vortex solutions in the continuum. Embedded topological defects might be important agents in some electroweak baryogenesis mechanisms. One such scenario is based on the decay of an electroweak string network as the Universe cools down. According to this mechanism, long electroweak strings should decay into smaller, twisted and linked string loops which carry non-zero baryon number. This could then explain the emergence of non-zero baryon number in the Universe~\cite{StringScenario,Va94}.
We have nothing to contribute to this mechanism as such, nor to the kinetics of such a decaying network. In our paper we merely study the properties of the embedded strings as they pop up in thermal equilibrium of the $SU(2)$ Higgs model. Similar to what we have found earlier in the symmetric phase (when it can be clearly separated by a first order transition at lower Higgs mass), our results show the existence of a network of $Z$--vortices on the high-temperature side of the crossover, with finite probability of percolating (at $T > T_{\mathrm{perc}}$), while only smaller clusters occur below $T_{\mathrm{perc}}$, albeit much more frequently than at lower Higgs mass. Some of these clusters have open ends occupied by Nambu monopoles. This suggests a BG scenario without a first order phase transition, according to which a large vortex cluster might decay into small vortex pieces at some temperature. In this work, a vortex cluster is a collection of connected dual links which carry non-zero vorticity (are occupied by vortex trajectories). We have used an active bond percolation algorithm known from cluster algorithms for spin models~\cite{Cluster} in order to label the various disconnected $Z$--vortex clusters that coexist in a configuration. Our investigation is performed in the framework of dimensional reduction, which is expected to provide a reliable effective description of the $4D$ $SU(2)$ Higgs theory for Higgs masses between $30$ and $240$ GeV at temperatures of $O(100)$ GeV. Due to their relatively rich abundance on the low temperature side in the crossover region, the physics of distinct vortex loops and monopolium states (thought to be the remnants of the percolation cluster(s)) is sufficiently interesting to be studied in more detail and within the $4D$ Euclidean approach. Equally interesting would be the space-time structure of the vortex networks in the percolating phase.
In our present $3D$ approach, we can get only a rough anticipation of what is going on in $4D$: the average $3D$ densities might be {\it time-projected} densities (describing the vortex network above or the small vortex clusters below the percolation temperature). In the $4D$ Euclidean version of the model the vortex lines we are studying would sweep out $2$-dimensional world surfaces. The clusters of filamentary embedded vortex defects that we observe in the dimensionally reduced variant are compactifications of these world surfaces or intersections with time slices. In the present paper, we have for the first time studied the $3D$ cluster statistics (the number and length distributions) of the configurations. The structure of the paper is as follows. We review the effective, dimensionally reduced formulation of the electroweak theory as a $3D$ $SU(2)$ lattice Higgs model in Section~2. In this section, for the convenience of the reader, we also formulate the lattice definitions of the (elementary and extended) embedded topological defects proposed in Refs.~\cite{ChGuIl97,ChGuIlSch98}. In Section~3 we present the numerical results on the density of Nambu monopoles and vortices and on the percolation probability of the corresponding $Z$--vortex lines in the crossover region. We show how the average number of lattice clusters (formed by vortex lines) and the average length ({\it i.e.} the number of dual links with non-zero vorticity) of vortex clusters change at the transition. We also give some first results on gauge invariant signatures showing that the vortices provide a filamentary inhomogeneous structure of the system. Section~4 contains a short discussion of our results and conclusions.
\section{Nambu Monopoles and $\mathbf{Z}$--Vortices in the Lattice $\mathbf{SU(2)}$ Higgs Model} For the investigation of the thermal equilibrium properties of the defects to be defined below we used the lattice $3D$ $SU(2)$ Higgs model with the following action: \begin{eqnarray} S &=& \beta_G \sum_p \Bigl(1 - \frac{1}{2} \mbox{\rm Tr} U_p \Bigr) - \beta_H \sum_{x,\mu} \frac{1}{2} \mbox{\rm Tr} (\Phi_x^+ U_{x, \mu} \Phi_{x + \hat\mu}) + \sum_x \biggl( \rho_x^2 + \beta_R (\rho_x^2-1)^2 \biggr) \nonumber \end{eqnarray} (the summation is taken over plaquettes $p$, sites $x$ and links $l=\{x,\mu\}$). The action contains three parameters: the gauge coupling $\beta_G$, the lattice Higgs self-coupling $\beta_R$ and the hopping parameter $\beta_H$. The gauge fields are represented by unitary $2 \times 2$ link matrices $U_{x,\mu}$ and $U_p$ denotes the $SU(2)$ plaquette matrix. In this action, the Higgs field is parametrized as follows: $\Phi_x = \rho_x V_x $, where $\rho_x^2= \frac12 \mbox{\rm Tr}(\Phi_x^+\Phi_x)$ is the Higgs modulus squared, and $V_x$ an element of the group $SU(2)$. Later on, the $2 \times 2$ matrix-valued Higgs field $\Phi_x$ is replaced by the more standard isospinor notation $\phi_x = {(\Phi^{11}_x,\Phi^{21}_x)}^T$. The lattice parameters are related to the couplings of the $3D$ superrenormalizable $SU(2)$ Higgs model in the continuum, $g_3$, $\lambda_3$ and $m_3(\mu_3=g_3^2)$ as given {\it e.g.} in \cite{wirNP97}. As in \cite{wirNP97} a parameter $M_H^*$ is used (approximately equal to the zero temperature physical Higgs mass) to parametrize the Higgs self-coupling as follows: \begin{eqnarray} \beta_R=\frac{\lambda_3}{g_3^2} \frac{\beta_H^2}{\beta_G} = \frac{1}{8} {\left(\frac{M_H^*}{80\ {\mbox {GeV}}}\right)}^2 \frac{\beta_H^2}{\beta_G}\, . 
\label{MH*} \end{eqnarray} Lattice coupling $\beta_G$ and continuum coupling $g^2_3$ are related by \begin{eqnarray} \beta_G = \frac{4}{a g^2_3}\, , \label{betaG} \end{eqnarray} with $a$ being the lattice spacing. We have studied the model at different gauge couplings $\beta_G$, i.e., with different lattice spacings, in order to understand qualitatively how embedded defects of some characteristic physical size appear on the lattice, such that eventually the continuum limit can be accessed. This requires defining operators which count extended defects of arbitrary size in lattice units. Let us first recall the definition of the elementary topological defects. The gauge invariant and quantized lattice definition~\cite{ChGuIl97} of the Nambu monopole is closely related to the definition in the continuum theory~\cite{Na77}. First we define a composite adjoint unit vector field $n_x = n^a_x \, \sigma^a$, $n^a_x = - (\phi^+_x \sigma^a \phi_x) \slash (\phi^+_x \phi_x )$ with $\sigma^a$ being the Pauli matrices. In the following construction, the field $n_x$ plays a role similar to the direction of the adjoint Higgs field in the definition of the 't~Hooft--Polyakov monopole~\cite{tHPo74} in the Georgi--Glashow model. Here it is used to define the gauge invariant flux~${\bar \theta}_p$ carried by the plaquette $p$, \begin{eqnarray} {\bar \theta}_p (U,n) & = & \arg \Bigl( {\mathrm {Tr}} \left[({\rm 1\mskip-4mu l} + n_x) V_{x,\mu} V_{x +\hat\mu,\nu} V^+_{x + \hat\nu,\mu} V^+_{x,\nu} \right]\Bigr)\, , \label{AP} \end{eqnarray} in terms of projected links \begin{eqnarray} V_{x,\mu}(U,n) & = & U_{x,\mu} + n_x U_{x,\mu} n_{x + \hat\mu}\, .
\nonumber \end{eqnarray} In the unitary gauge, with $\phi_x={(0,\varphi)}^T$ and $n^a_x \equiv \delta^{a3}$, the phases $\theta^u_l = \arg U^{11}_l$ behave as a compact Abelian field with respect to the residual Abelian gauge transformations $\Omega^{abel}_x = e^{i \sigma_3 \, \alpha_x}$, $\alpha_x \in [0,2 \pi)$ which leave the unitary gauge condition intact. The magnetic charge of Nambu monopoles -- the topological defects of this Abelian field -- can be defined in terms of the plaquettes $\theta_p$ of this Abelian field or, alternatively, through a gauge invariant construction~\cite{ChGuIl97}. The monopole charge $j_c$ carried by the cube $c$ can be expressed in terms of the gauge invariant fluxes \eq{AP} passing through the surface $\partial c$: \begin{eqnarray} j_c = - \frac{1}{2\pi} \sum_{p \in \partial c} {\bar \theta}_p\, , \quad {\bar \theta}_p = \left( \theta_p - 2 \pi m_p \right) \in [-\pi,\pi)\, . \label{j_N} \end{eqnarray} The $Z$--string~\cite{Ma83,Na77} corresponds to the Abrikosov--Nielsen--Olesen vortex solution~\cite{ANO} embedded~\cite{VaBa69,BaVaBu94} into the electroweak theory\footnote{Note that there are two independent vortex solutions in the electroweak theory: $Z$--vortex and $W$-vortex, see Ref.~\cite{BaVaBu94}. In the limit of zero Weinberg angle $\theta_W$ (which is considered in the present paper) both solutions coincide up to a global gauge transformation. If $\theta_W \neq 0$ our construction \eq{SigmaN} of the $Z$--vortex should be properly modified and complemented by one for the $W$--vortex. In fact, \eq{SigmaN} would then correspond to what is called there a $W$--vortex solution.}.
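To illustrate the quantization implied by \eq{j_N}, the following numpy sketch applies the same flux-reduction construction to a toy compact Abelian field (a stand-in for the residual Abelian field in the unitary gauge; the lattice size and random angles are placeholders, not the full gauge invariant $SU(2)$ construction):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4
# compact U(1) link angles theta[mu, x, y, z] in [-pi, pi): a toy stand-in
# for the residual Abelian field in the unitary gauge
theta = rng.uniform(-np.pi, np.pi, size=(3, L, L, L))

def shift(f, mu):
    # f(x) -> f(x + mu_hat) with periodic boundaries
    return np.roll(f, -1, axis=mu)

def plaq(mu, nu):
    # theta_p = theta_mu(x) + theta_nu(x+mu) - theta_mu(x+nu) - theta_nu(x)
    return theta[mu] + shift(theta[nu], mu) - shift(theta[mu], nu) - theta[nu]

def flux(t):
    # reduced flux theta_bar = theta_p - 2*pi*m_p, brought into [-pi, pi)
    return (t + np.pi) % (2.0 * np.pi) - np.pi

# monopole charge per elementary cube: oriented sum of the reduced fluxes
# over the cube boundary, as in the gauge invariant definition above
j = np.zeros((L, L, L))
for mu, nu in [(1, 2), (2, 0), (0, 1)]:
    rho = 3 - mu - nu            # direction normal to the (mu, nu) plane
    f = flux(plaq(mu, nu))
    j -= (shift(f, rho) - f) / (2.0 * np.pi)
```

Up to floating-point error, the resulting charges are integers, and on the periodic lattice the total charge vanishes, reflecting that monopoles appear as monopole--anti-monopole pairs.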
The $Z$--vorticity number corresponding to the plaquette $p=\{x,\mu\nu\}$ is defined~\cite{ChGuIl97} as follows: \begin{eqnarray} \sigma_p = \frac{1}{2\pi} \Bigl( \chi_p - {\bar \theta}_p \Bigr) \, \, , \label{SigmaN} \end{eqnarray} where ${\bar \theta}_p$ has been given in \eq{AP}, and $\chi_{p} = \chi_{x,\mu\nu} = \chi_{x,\mu} + \chi_{x +\hat\mu,\nu} - \chi_{x + \hat\nu,\mu} - \chi_{x,\nu}$ is the plaquette variable formed in terms of the Abelian links \begin{eqnarray} \chi_{x,\mu} = \arg\left(\phi^+_x V_{x,\mu} \phi_{x + \hat\mu}\right) \, . \nonumber \end{eqnarray} The $Z$--vortex is formed by links $l=\{x,\rho\}$ of the dual lattice which are dual to those plaquettes $p=\{x,\mu\nu\}$ which carry a non-zero vortex number~\eq{SigmaN}: $\mbox{}^{\ast} \sigma_{x,\rho} = \varepsilon_{\rho\mu\nu} \sigma_{x,\mu\nu} \slash 2$. One can show that $Z$--vortices begin and end on the Nambu (anti-) monopoles: $\sum^3_{\mu=1} (\mbox{}^{\ast} \sigma_{x-\hat\mu,\mu} - \mbox{}^{\ast} \sigma_{x,\mu}) = \mbox{}^{\ast} j_x$. In order to understand the behavior of the embedded defects towards the continuum limit we also studied numerically so-called {\it extended} topological objects on the lattice according to Ref.~\cite{IvPoPo90}. A similar approach has been pursued in Ref.~\cite{Laine98}, in which lattice vortices of extended size have been studied in the non-compact version of the $3D$ Abelian Higgs model. An extended monopole (vortex) of physical size $k~a$ is defined on $k^3$ cubes ($k^2$ plaquettes, respectively). The charge of monopoles $j_{c(k)}$ on bigger $k^3$ cubes $c(k)$ is constructed analogously to that of the elementary monopole, eq. \eq{j_N}, with the elementary $1\times 1$ plaquettes in terms of $V_{x,\mu}$ replaced by $k \times k$ Wilson loops (extended plaquettes). In the context of pure gauge theory, in the maximally Abelian gauge, this construction is known under the name of type-I extended objects.
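As a side remark on implementation, the basic building blocks used in this section (link matrices, plaquettes, traces) are straightforward to prototype. The following numpy sketch, with illustrative lattice size and coupling, generates random $SU(2)$ link matrices and evaluates the gauge part of the action of Section~2; it is a minimal illustration, not the production code used for the simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta_G = 3, 12.0             # illustrative lattice size and gauge coupling

I2 = np.eye(2, dtype=complex)
pauli = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def random_su2(shape):
    # SU(2) as unit quaternions: U = a0*1 + i*(a . sigma), a0^2 + |a|^2 = 1
    a = rng.normal(size=shape + (4,))
    a /= np.linalg.norm(a, axis=-1, keepdims=True)
    return (a[..., 0, None, None] * I2
            + 1j * np.einsum('...k,kij->...ij', a[..., 1:], pauli))

# U[mu] holds the 2x2 link matrices U_{x,mu} on an L^3 periodic lattice
U = random_su2((3, L, L, L))

def shift(f, mu):
    # f(x) -> f(x + mu_hat), lattice axes 0..2 (matrix axes are the last two)
    return np.roll(f, -1, axis=mu)

def plaquette(mu, nu):
    # U_p = U_{x,mu} U_{x+mu,nu} U^+_{x+nu,mu} U^+_{x,nu}
    dag = lambda m: m.conj().swapaxes(-1, -2)
    return U[mu] @ shift(U[nu], mu) @ dag(shift(U[mu], nu)) @ dag(U[nu])

tr_Up = np.concatenate([
    np.trace(plaquette(mu, nu), axis1=-2, axis2=-1).ravel()
    for mu, nu in [(0, 1), (0, 2), (1, 2)]])

S_gauge = beta_G * np.sum(1.0 - 0.5 * tr_Up.real)
```

Since each plaquette matrix is itself an element of $SU(2)$, its trace is real and lies in $[-2,2]$, so every plaquette contributes between $0$ and $2\beta_G$ to the gauge action.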
For the present model an alternative construction (type-II), obtained by blocking elementary topological objects, fails to lead to a good continuum description~\cite{ChGuIlSch98}. A more detailed definition of extended Nambu monopoles and $Z$--vortices can be found in Ref.~\cite{ChGuIlSch98}. \section{Defect Dynamics at the Crossover} \subsection{Density and Percolation} First, we study the behavior of {\it elementary} Nambu monopoles and $Z$--vortices along a line in parameter space passing through the continuous crossover region. Monte Carlo simulations have been performed on cubic lattices of size $L^3=16^3$ at $\beta_G=12$ for self-couplings $\lambda_3$ corresponding to a Higgs mass $M_H^* = 100$~GeV, see eq.~\eq{MH*}. In our simulations we used the algorithms described in Ref.~\cite{wirNP97} which combine Gaussian heat bath updates for the gauge and Higgs fields with several reflections for the fields to reduce the autocorrelations. We varied the parameter $\beta_H$ in order to traverse the region of the crossover at given $M_H^*$ and $\beta_G$. At this stage we were interested in the behavior of the lattice Nambu monopole ($\rho_m$) and $Z$--vortex ($\rho_v$) densities and of the percolation probability $C$ for the $Z$--vortex as functions of $\beta_H$. For each lattice configuration, the densities $\rho_m$ and $\rho_v$ are given by \begin{eqnarray} \rho_m = \frac{1}{L^3} \sum\limits_c |j_c|\, , \qquad \rho_v = \frac{1}{3 L^3} \sum\limits_p |\sigma_p|\, , \nonumber \end{eqnarray} where $c$ and $p$ refer to elementary cubes and plaquettes; the monopole charge $j_c$ and the $Z$--vorticity $\sigma_p$ are defined in \eq{j_N} and \eq{SigmaN}, respectively. 
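Given the integer-valued fields $j_c$ and $\sigma_p$, the densities are simple normalized counts; a minimal sketch (with hypothetical random defect numbers standing in for measured ones):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 8
# hypothetical defect content of one configuration: integer monopole charges
# per cube and Z-vorticities per plaquette (3 orientations per site)
j_c = rng.integers(-1, 2, size=(L, L, L))
sigma_p = rng.integers(-1, 2, size=(3, L, L, L))

rho_m = np.abs(j_c).sum() / L**3          # monopole density
rho_v = np.abs(sigma_p).sum() / (3 * L**3)  # vortex density
```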
The percolation probability of the system of vortex lines $\mbox{}^{\ast} \sigma$ (which typically can be decomposed into several lattice clusters) is defined as a limit of the following two-point function~\cite{PoPoYu91}: \begin{eqnarray} C & = & \lim_{r \to \infty} C(r)\, ,\label{percolation}\\ C(r) & = & {\left(\sum\limits_{x,y,i} \delta_{x \in \mbox{}^{\ast} \sigma^{(i)}} \,\delta_{y \in \mbox{}^{\ast} \sigma^{(i)}} \cdot \delta\Bigl(|x-y|-r\Bigr) \right)} \cdot {\left( \sum\limits_{x,y} \delta\Bigl(|x-y|-r\Bigr) \right)}^{-1}\, , \nonumber \end{eqnarray} where the summation is taken over all points $x$, $y$ of the dual lattice with fixed distance and over all clusters $\mbox{}^{\ast} \sigma^{(i)}$ of links carrying vorticity ($i$ labels distinct vortex clusters). The Euclidean distance between two points $x$ and $y$ is denoted as $|x-y|$. The notation $x \in \mbox{}^{\ast} \sigma^{(i)}$ means that the vortex world line cluster $\mbox{}^{\ast} \sigma^{(i)}$ passes through the point $x$. Clusters $\mbox{}^{\ast} \sigma^{(i)}$ are called percolating clusters if they contribute to the limit $C$. Formula \eq{percolation} corresponds to the thermodynamical limit. In our finite volume we find numerically that the function $C(r)$ can be fitted as $C(r) = C + C_0 r^{-\alpha} e^{-m r}$, with $C$, $C_0$, $\alpha$ and $m$ being fitting parameters. As we observed, $m \sim a^{-1}$ in the explored region of the phase diagram, therefore we can be sure that finite size corrections to $C$ are exponentially suppressed. If $C$ does not vanish on an infinite lattice then the vacuum is populated by one or more percolating clusters, each consisting of {\it infinitely many} dual links with non-vanishing vorticity. This implies the existence of a non-vanishing vortex condensate. If $C$ turns to zero the vortex condensate vanishes according to this definition. 
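The fit of the measured correlator to the ansatz $C(r) = C + C_0 r^{-\alpha} e^{-m r}$ can be sketched as follows; synthetic data stand in for the measured $C(r)$, and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def C_of_r(r, C, C0, alpha, m):
    # finite-volume ansatz quoted in the text
    return C + C0 * r ** (-alpha) * np.exp(-m * r)

# synthetic stand-in for the measured two-point function C(r)
rng = np.random.default_rng(5)
r = np.arange(1.0, 13.0)
y = C_of_r(r, 0.30, 1.2, 0.5, 0.8) + rng.normal(0.0, 1e-3, r.size)

# four-parameter least-squares fit; popt[0] is the plateau value C
popt, pcov = curve_fit(C_of_r, r, y, p0=(0.1, 1.0, 1.0, 1.0))
C_plateau = popt[0]
```

Because the correction term is exponentially suppressed at large $r$ (with $m \sim a^{-1}$), the plateau $C$ is essentially fixed by the tail of the data.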
In Figure~1(a) we show the ensemble averages of densities, $\langle \rho_m \rangle$ of Nambu monopoles and $\langle \rho_v \rangle$ of $Z$--vortices, as a function of the hopping parameter $\beta_H$ for Higgs mass $M_H^*= 100$~GeV at gauge coupling $\beta_G = 12$. Both densities vanish very smoothly with increasing hopping parameter $\beta_H$ (which corresponds to a decreasing physical temperature). The percolation probability shown in Figure~1(b) vanishes at some value of the coupling constant $\beta_H$ corresponding to a percolation temperature well above the temperature where the densities turn to zero. In fact, the percolation ends while the density of monopoles and vortices still amounts to some fraction of the densities of these objects deep in the symmetric phase. We interpret this through an analogy: the would-be Higgs phase in the crossover region (at temperatures below some crossover temperature) resembles a type II superconductor insofar as it can support thick vortices which cannot form infinite clusters. In Nature, in a real cooling process passing the crossover, the percolating cluster(s) that existed above the crossover temperature would be broken up into vortex rings or vortex strings connecting Nambu monopole pairs. It would be tempting to identify the crossover temperature with the percolation temperature if the latter has a well-defined meaning in the continuum limit. In order to explore whether the percolation effect persists approaching the continuum limit we have also studied extended topological objects (using the so-called type-I construction mentioned in Section~2). According to \eq{betaG} the physical size of the $k^3$ monopoles (or the $k^2$ vortices) in simulations done at $\beta_G = k~\beta^{(0)}_G$ should be roughly the same for all $k$.
Since we expect finite volume effects to be potentially more severe at $M_H^*=100$ GeV than at a strongly first order phase transition, we were careful to also keep the physical size of the lattice fixed: at $\beta_G = k~\beta^{(0)}_G$ we have performed simulations on lattices with a volume ${(k~L_0)}^3$, respectively. For the coarsest lattice we have chosen $\beta_G=\beta^{(0)}_G=8$ and $L=L_0=16$. We show the behavior of the percolation probability near the crossover point in Figures~2(a,b,c) for $k=1,2,3$, respectively. One can clearly see the existence of a percolation transition for each vortex size $k$. The actual value of the coupling $\beta^{\mathrm{perc}}_H$ corresponding to the percolation transition varies with $k$, similar to how $\beta^{\mathrm{trans}}_H$ was observed to change with $\beta_G$ and $L^3$ at smaller Higgs mass (when there is a true first order phase transition). Also here, one can analogously define a physical percolation temperature $T^{\mathrm{perc}}$ corresponding to $\beta^{\mathrm{perc}}_H$. This temperature is found to become roughly independent of $\beta_G$ with decreasing lattice spacing. That means that the defects popping up dynamically (as well as the corresponding percolation temperature) become increasingly well-defined when the lattice becomes fine-grained enough to resolve the embedded vortices as extended objects. This indicates that the percolation temperature has a good chance to possess a well-defined continuum limit. Taking into account the perturbative relations between $3D$ and $4D$ quantities~\cite{generic}, $T^{\mathrm{perc}}$ can be roughly estimated as 170 GeV or 130 GeV, depending on whether the effective model represents the $4D$ continuum $SU(2)$ Higgs theory without fermions or with fermions including the top quark. The corresponding zero temperature Higgs mass $M_H$ would be 94 and 103 GeV in the respective theories.
Notice however that for the finer lattices (bigger vortex size in lattice units) there appears a tail of small percolation probability before $C$ finally turns to zero. This means that, still above the percolation temperature, the percolating clusters become more dilute, only a part of the lattice configurations actually contains a percolating cluster (``intermittent'' percolation), and more and more smaller clusters appear. \subsection{Cluster Statistics} In order to take a closer look at the properties of the small vortex loops that populate the low temperature side of the crossover at not too low temperature, we have also measured the Monte Carlo ensemble average of the number of clusters per configuration and the average length per cluster. The behavior of these quantities for different $k=1,2,3$ is qualitatively the same; therefore we present them for the case of extension parameter $k=2$. We show in Figure~3 results obtained for the corresponding lattice size $32^3$ and gauge coupling $\beta_G=16$: (a) the density of the Nambu monopoles and $Z$--vortices, (b) the average length ${\cal L}$ per $Z$--vortex cluster and (c) the average number ${\cal N}$ of $Z$--vortex clusters per lattice configuration. It is clearly seen from Figure~3(a) that in the region of the percolation transition (compare Figure~2(b) for $k=2$ at $\beta^{\mathrm{perc}}_{H} \approx 0.3432$) the density of monopoles and vortices decreases smoothly with increasing $\beta_H$. From Figure~3(b) one can conclude that the average length of the vortex clusters ${\cal L}$ decreases drastically while the average number of vortex clusters ${\cal N}$ increases sharply (Figure~3(c)) already at somewhat smaller $\beta_H$. The latter reaches its maximum at $\beta^{\mathrm{perc}}_{H}$. With the help of equation \eq{betaG} and $g_3^2\approx g_4^2 T \approx 0.43 T$ we get, choosing $k=2$ and $\beta_G=16$ as an example, a lattice spacing $a=1/(300~\mathrm{GeV})$.
We can estimate the densities of $k=2$ Nambu monopoles and $Z$--vortices in physical units. At the percolation temperature $170$ GeV we get, for lattice densities $\rho_v=0.14$ and $\rho_m=0.26$, in the continuum a vortex density of $(55~\mathrm{GeV})^2$ and a monopole density of $(93~\mathrm{GeV})^3$. Taking into account that, classically, the core widths of embedded defects are of the order of $M_W^{-1}$ we conclude that vortices and monopoles are densely packed at the percolation transition and, as a result, their cores are strongly overlapping. After the percolation transition is completed, the open or closed vortex strings are very short: their mean length is roughly three times the classical vortex width. The average cluster length amounts to $\sim 5ka=1/(30~\mathrm{GeV})$. Although these results characterize structures formed by thermal fluctuations within equilibrium thermodynamics, all facts strongly suggest that in a non-equi\-li\-bri\-um cooling process the percolating cluster(s) and other bigger clusters decay near the percolation temperature $T^{\mathrm{perc}}$, primarily into small closed vortex loops which later gradually shrink and disappear with further decrease of the temperature (increase of $\beta_H$). The detailed dynamics of this process requires thorough investigations using non-equi\-li\-bri\-um techniques. Note that even at $\beta_H$ values far above the percolation point there is a long plateau in the average length per vortex cluster at a level of ${\cal L} \approx 2$. A more detailed analysis of the field configurations shows in fact that here, below the percolation temperature, each configuration contains a large number (a few dozen, according to Figure~3(c)) of monopole--anti-monopole pairs connected by vortex trajectories of length $1-2$ and only a few additional closed vortex loops.
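The arithmetic behind these physical units can be checked directly, using the relations quoted above ($\beta_G = 4/(a g_3^2)$ and $g_3^2 \approx 0.43\,T$ at $T = 170$ GeV, with $k=2$ and $\beta_G=16$):

```python
# arithmetic behind the quoted physical units (inputs taken from the text)
T = 170.0                          # percolation temperature in GeV
g3_sq = 0.43 * T                   # 3D gauge coupling g_3^2 ~ g_4^2 T, in GeV
beta_G, k = 16, 2
a = 4.0 / (beta_G * g3_sq)         # lattice spacing from beta_G = 4/(a g_3^2)

rho_v_lat, rho_m_lat = 0.14, 0.26  # lattice densities at the percolation point
vortex_scale = (rho_v_lat / (k * a) ** 2) ** 0.5       # ~55 GeV
monopole_scale = (rho_m_lat / (k * a) ** 3) ** (1 / 3)  # ~93 GeV
cluster_length = 5 * k * a                              # ~1/(30 GeV)
```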
As was shown in Refs.~\cite{HiJa94,ChGuIl97}, the sphaleron configuration (an unstable solution to classical equations of motion) contains in its center a monopole--anti-monopole pair connected by a short vortex string. It is tempting to interpret at least some part\footnote{The vortex string inside the electroweak sphaleron is shown to be twisted~\cite{Va94,HiJa94,twist}. In our measurements we do not check the twist of the $Z$--vortex segments, therefore we are not able to relate all open vortex loops to sphalerons with confidence.} of the open $Z$--vortex strings (which exist with a small density within some temperature interval below the percolation temperature) as sphalerons. \subsection{$Z$--Vortices as Physical Objects} In this Subsection we show that our construction of the $Z$--vortices on the lattice, \eq{SigmaN}, defines objects exhibiting some characteristic features of the classical $Z$--vortex solutions in the continuum. Our construction detects line-like objects with non-zero vorticity, and there is no guarantee that such a configuration has a particular vortex profile. However, as we show below, the lattice $Z$--vortices have some common features with the classical solutions. At the center of a classical continuum $Z$--vortex the Higgs field modulus is zero and the energy density reaches its maximum~\cite{Na77,BaVaBu94}. If the vacuum is populated by vortex-like configurations then it would be natural to expect that along the axis of these configurations there will be a line of zeroes of the Higgs field and of points with maximal energy density.
The simplest way to test how well this expectation is fulfilled is to measure the (squared) modulus of the Higgs field and the energy density near the dual vortex-carrying links defined by \eq{SigmaN} and compare these quantities with the corresponding bulk average values far from the vortex core\footnote{A similar method has been used to study the physical features of the Abelian magnetic monopoles in non-Abelian gauge theories in Ref.~\cite{MIP}.}. Here we restrict ourselves to the case of elementary defects. This is the worst case in the sense that it obviously does not allow one to define a profile of a defect resembling the continuum case. We will come back to the profile of an extended defect in a forthcoming publication. For the present purposes the non-vanishing of \eq{SigmaN} is simply used as a trigger to measure the above-mentioned observables. We define the mean value of the modulus squared of the Higgs field inside the (elementary) vortex, ${<\rho^2>}_{\mathrm{in}}$, as the average of $(\phi^+_x \phi_x )$ over the corners of {\it all} plaquettes with $\sigma_P \ne 0$ (dual to the vortex-carrying links). For simplicity, the analogous quantity outside the (elementary) vortex, ${<\rho^2>}_{\mathrm{out}}$, is obtained as the corresponding average over the corners of {\it all} plaquettes with $\sigma_P = 0$. Even more straightforward is the definition of the corresponding averages of the gauge field energy as volume averages of $~1-\frac12 \mathrm{Tr} U_p~$ depending on whether $\sigma_P$ is equal to or different from zero. The quantities ${<\rho^2>}_{\mathrm{in,out}}$ are plotted {\it vs.} $\beta_H$ for $\beta_G=8$ in Fig.~\ref{corr}(a). One can clearly see that the modulus of the Higgs field is smaller near the vortex trajectory than outside the vortex for all values of the coupling $\beta_H$.
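The in/out averaging triggered by $\sigma_P$ can be sketched as follows; here toy random fields replace the Monte Carlo configurations, and the four corner sites of each plaquette are collected with periodic shifts:

```python
import numpy as np

rng = np.random.default_rng(4)
L = 6
# toy data: Higgs modulus squared per site, vorticity per plaquette
rho2 = rng.uniform(0.5, 1.5, size=(L, L, L))
sigma = rng.integers(-1, 2, size=(3, L, L, L))   # orientations yz, zx, xy

def shift(f, mu):
    # f(x) -> f(x + mu_hat) with periodic boundaries
    return np.roll(f, -1, axis=mu)

planes = [(1, 2), (2, 0), (0, 1)]                # plane spanned per orientation
vals_in, vals_out = [], []
for o, (mu, nu) in enumerate(planes):
    # rho^2 averaged over the four corner sites of each plaquette
    corners = (rho2 + shift(rho2, mu) + shift(rho2, nu)
               + shift(shift(rho2, mu), nu)) / 4.0
    occ = sigma[o] != 0                          # the vortex "trigger"
    vals_in.append(corners[occ])
    vals_out.append(corners[~occ])

rho2_in = np.concatenate(vals_in).mean()         # <rho^2> near vortex links
rho2_out = np.concatenate(vals_out).mean()       # <rho^2> in the bulk
```

With uncorrelated toy fields the two averages coincide up to statistical noise; the physical signal reported in the text is precisely that, on real configurations, $\langle\rho^2\rangle_{\mathrm{in}}$ comes out systematically below $\langle\rho^2\rangle_{\mathrm{out}}$.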
In order to make the different behavior of the quantities ${<\rho^2>}_{\mathrm{in,out}}$ on both sides of the crossover clearer, we show in Fig.~\ref{corr}(b) the histograms of these quantities on the symmetric side ($\beta_H=0.348$) and the Higgs side ($\beta_H=0.356$). On the symmetric side of the crossover (smaller $\beta_H$) the difference\footnote{Note that only the difference between quantum averages of the squared modulus of the Higgs field (not the quantum average itself!) may be related to the continuum limit due to an additive renormalization, see Ref.~\cite{Laine98} for a detailed discussion on this point.} between the quantities ${<\rho^2>}_{\mathrm{in,out}}$ is much smaller than on the Higgs side (larger $\beta_H$). This fact is natural, since on the symmetric side the vortices densely populate the vacuum and vortex cores are overlapping, while on the Higgs side of the transition the vortices are dilute. The value of the Higgs field modulus in the region between closely placed vortices is smaller than between far separated vortices. As is to be expected for elementary lattice vortices, ${<\rho^2>}_{\mathrm{in}}$ does not vanish. This is due to the relatively large lattice spacing, which prevents the Higgs field from being measured arbitrarily close to the vortex axis. A detailed study of the vortex profile (and a localization of the Higgs zeroes) requires the extended vortex construction and is under investigation~\cite{InPreparation}. The gauge field energies ${<1-U_P>}_{\mathrm{in,out}}$ are plotted {\it vs.} $\beta_H$ in Fig.~\ref{corr}(c), the corresponding histograms are shown in Fig.~\ref{corr}(d) for the symmetric side ($\beta_H=0.348$) and the Higgs side ($\beta_H=0.356$) of the crossover. Both figures show that the value of the gauge field energy near the vortex center is larger than in the bulk on both sides of the crossover.
The results of this Subsection clearly show that $Z$--vortices are physical objects: vortices carry excess gauge field energy and the Higgs modulus decreases near the vortex center. Whether these thermal excitations really resemble the features of the classical continuum vortex solutions remains to be seen by extending this study to finer lattices. \section{Discussion and Conclusions} Recently we have started to investigate numerically the behavior of Nambu mo\-no\-po\-les and $Z$--vortices in the $SU(2)$ Higgs model at high temperatures within the dimensional reduction approach. This model is used as representative for the standard model. Here we have extended our previous work to a region of Higgs mass where the model is known not to have a true thermal phase transition. We show that the $3D$ percolation transition that we have recently found to be a companion of the first order phase transition at low Higgs mass still exists at the large but not unrealistic Higgs mass of $M_H\approx 100$ GeV. At temperatures above $T^{\mathrm{perc}} \approx 170$ GeV ($ \approx 130$ GeV for the more realistic case with the top quark included) space is densely populated by large vortex clusters, with one or a few infinitely extended ones among them. This state is not thermodynamically relevant at lower temperatures. Instead of very large clusters, a gas of closed vortex loops and monopolium bound states (Nambu monopole--anti-monopole pairs bound by $Z$--strings) prevails. Further below $T^{\mathrm{perc}}$, with decreasing temperature the small vortex loops shrink and disappear while monopole--anti-monopole pairs still survive. In the spirit of our recent investigation of classical sphaleron configurations we would like to associate at least some part of these pairs with sphaleron-like configurations known to exist in the broken phase.
Without going into details of specific mechanisms, we want to point out that the non-equi\-lib\-ri\-um break-up of infinitely extended electroweak vortex clusters into small closed loops with decreasing temperature is a prerequisite of some string--mediated baryon number generation scenarios \cite{StringScenario,Va94}. It is interesting to see that similar defect structures are realized within our effective Higgs model. \section*{Acknowledgments} M.~N.~Ch. is grateful to L.~McLerran, M.~I.~Polikarpov and K.~Rummukainen for interesting discussions. M.~N.~Ch. and F.~V.~G. were partially supported by the grants INTAS-96-370, INTAS-RFBR-95-0681, RFBR-96-02-17230a and RFBR-96-15-96740.
\section{Introduction}\label{sec:intro} High-dimensional tensor-variate data arise in computer vision (video data containing multiple frames of color images), neuroscience (EEG measurements taken from different sensors over time under various experimental conditions), and recommender systems (user preferences over time). Due to the non-homogeneous nature of these data, second-order information that encodes (conditional) dependency structure within the data is of interest. Assuming the data are drawn from a tensor normal distribution, a straightforward way to estimate this structure is to vectorize the tensor and estimate the underlying Gaussian graphical model associated with the vector. However, such an approach ignores the tensor structure and requires estimating a rather high dimensional precision matrix, often with insufficient sample size. For instance, in the aforementioned EEG application, the sample size is one if we aim to estimate the dependency structure across different sensors, time, and experimental conditions for a single subject. To address such sample complexity challenges, sparsity is often imposed on the covariance $\mat{\Sigma}$ or the inverse covariance $\mat{\Omega}$, e.g., by using a sparse Kronecker product (KP) or Kronecker sum (KS) decomposition of $\mat{\Sigma}$ or $\mat{\Omega}$. The earliest and most popular sparse structured precision matrix estimation approaches represent $\mat{\Omega}$, equivalently $\mat\Sigma$, as the KP of smaller precision/covariance matrices~\citep{allen2010transposable,leng2012sparse,yin2012model,tsiligkaridis2013convergence,zhou14,lyu2019tensor}. The KP structure induces a generative representation for the tensor-variate data via a separable covariance/inverse covariance model. Alternatively, \citet{kalaitzis2013bigraphical,greenewald2019tensor} proposed to model inverse covariance matrices using a KS representation.
\citet{rudelsonzhou17errinvardependent,parketal17_kroneckersum} proposed a KS-structured covariance model which corresponds to an errors-in-variables model. The KS (inverse) covariance structure corresponds to the Cartesian product of graphs~\citep{kalaitzis2013bigraphical,greenewald2019tensor}, which leads to more parsimonious representations of (conditional) dependency than the KP. However, unlike the KP model, KS lacks an interpretable generative representation for the data. Recently, \citet{wang20sylvester} proposed a new class of structured graphical models, called the Sylvester graphical models, for tensor-variate data. The resulting inverse covariance matrix has the KS structure in its square-root factors. The paper hints at a connection between this square-root KS structure and certain physical processes, but provides no illustration. A common challenge for structured tensor graphical models is the efficient estimation of the underlying (conditional) dependency structures. KP-structured models are generally estimated via extensions of GLasso~\citep{friedman2008sparse} that iteratively minimize the $\ell_1$-penalized negative likelihood function for the matrix-normal data with KP covariance. This procedure was shown to converge to some local optimum of the penalized likelihood function~\citep{yin2012model,tsiligkaridis2013convergence}. Similarly, \citet{kalaitzis2013bigraphical} further extended GLasso to the KS-structured case for $2$-way tensor data. \citet{greenewald2019tensor} extended this to multiway tensors, exploiting the linearity of the space of KS-structured matrices and developing a projected proximal gradient algorithm for KS-structured inverse covariance matrix estimation, which achieves linear convergence (i.e., a geometric convergence rate) to the global optimum.
In~\citet{wang20sylvester}, the Sylvester-structured graphical model is estimated via a nodewise regression approach inspired by algorithms for estimating a class of vector-variate graphical models~\citep{meinshausen2006high,khare2015convex}. However, no theoretical convergence result was established for the algorithm, nor was its computational efficiency studied. In the modern era of big data, algorithms must deliver both computational efficiency and statistical accuracy. Furthermore, when the objective is to learn representations for physical processes, interpretability is crucial. In this paper, we bridge this ``Statistical-to-Computational-to-Interpretable gap'' for Sylvester graphical models. We develop a simple yet powerful first-order optimization method, based on the Proximal Alternating Linearized Minimization (PALM) algorithm, for recovering the conditional dependency structure of such models. Moreover, we provide the link between the Sylvester graphical models and physical processes obeying differential equations and illustrate the link with a real-data example. The following are our principal contributions: \begin{enumerate} \item A fast algorithm that efficiently recovers the generating factors of a representation for high-dimensional multiway data, significantly improving on~\citet{wang20sylvester}. \item A comprehensive convergence analysis showing linear convergence of the objective function to its global optimum and providing insights for choices of hyperparameters. \item A novel application of the algorithm to an important multi-modal solar flare prediction problem from solar magnetic field sequences. For such problems, SG-PALM is physically interpretable in terms of the partial differential equations governing solar activities proposed by heliophysicists.
\end{enumerate} \section{Background and Notation}\label{sec:background} \subsection{Notations} In this paper, scalar, vector, and matrix quantities are denoted by lowercase letters, boldface lowercase letters, and boldface capital letters, respectively. For a matrix $\mat{A} = (\mat{A}_{i,j}) \in \mathbb{R}^{d \times d}$, we denote $\|\mat{A}\|_2, \|\mat{A}\|_F$ as its spectral and Frobenius norm, respectively. We define $\|\mat{A}\|_{1,\text{off}} := \sum_{i \neq j} |\mat{A}_{i,j}|$ as its off-diagonal $\ell_1$ norm. For tensor algebra, we adopt the notations used by \citet{kolda2009tensor}. A $K$-th order tensor is denoted by boldface Euler script letters, e.g., $\tensor{X} \in \mathbb R^{d_1 \times \dots \times d_K}$. The $(i_1,\dots, i_K)$-th element of $\tensor{X}$ is denoted by $\tensor{X}_{i_1,\dots, i_K}$, and the vectorization of $\tensor{X}$ is the $d$-dimensional vector $\vecto(\tensor{X}) := (\tensor{X}_{1,1,\dots,1},\tensor{X}_{2,1,\dots,1},\dots,\tensor{X}_{d_1,1,\dots,1},\dots,\tensor{X}_{d_1,d_2,\dots,d_K})^T$ with $d=\prod_{k=1}^K d_k$. A fiber is the higher-order analogue of a matrix row or column. It is obtained by fixing all but one of the indices of the tensor. Matricization, also known as unfolding, is the process of transforming a tensor into a matrix. The mode-$k$ matricization of a tensor $\tensor{X}$, denoted by $\tensor{X}_{(k)}$, arranges the mode-$k$ fibers to be the columns of the resulting matrix. The $k$-mode product of a tensor $\tensor{X} \in \mathbb R^{d_1 \times \dots \times d_K}$ and a matrix $\mat{A} \in \mathbb R^{J \times d_k}$, denoted as $\tensor{X} \times_k \mat{A}$, is of size $d_1 \times \dots \times d_{k-1} \times J \times d_{k+1} \times \dots \times d_K$. Its entries are defined as $(\tensor{X} \times_k \mat{A})_{i_1,\dots,i_{k-1},j,i_{k+1},\dots,i_K} := \sum_{i_k=1}^{d_k} \tensor{X}_{i_1,\dots,i_K} A_{j,i_k}$.
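For readers implementing these operations, the mode-$k$ matricization and $k$-mode product above can be realized in a few lines. The following NumPy sketch (illustrative only, using the column ordering of \citet{kolda2009tensor}) checks the standard identity $(\tensor{X} \times_k \mat{A})_{(k)} = \mat{A}\tensor{X}_{(k)}$:

```python
import numpy as np

def unfold(X, k):
    """Mode-k matricization: mode-k fibers become the columns."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1, order="F")

def mode_k_product(X, A, k):
    """k-mode product X x_k A; the result has size J in mode k."""
    return np.moveaxis(np.tensordot(A, X, axes=(1, k)), 0, k)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
A = rng.standard_normal((6, 4))
Y = mode_k_product(X, A, 1)
assert Y.shape == (3, 6, 5)
# the k-mode product in unfolded form: (X x_k A)_(k) = A X_(k)
assert np.allclose(unfold(Y, 1), A @ unfold(X, 1))
```

The `order="F"` reshape makes the remaining indices vary with the earliest mode fastest, matching the unfolding convention assumed throughout.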
For a list of matrices $\{\mat{A}_k\}_{k=1}^K$ with $\mat{A}_k \in \mathbb R^{d_k \times d_k}$, we define $\tensor{X} \times \{\mat{A}_1,\dots,\mat{A}_K\} := \tensor{X} \times_1 \mat{A}_1 \times_2 \dots \times_K \mat{A}_K$. Lastly, we define the $K$-way Kronecker product as $\bigotimes_{k=1}^K \mat{A}_k = \mat{A}_1 \otimes \cdots \otimes \mat{A}_K$, and the equivalent notation for the Kronecker sum as $\bigoplus_{k=1}^K \mat{A}_k = \mat{A}_1 \oplus \dots \oplus \mat{A}_K = \sum_{k=1}^K \mat I_{[d_{k+1:K}]} \otimes \mat{A}_k \otimes \mat I_{[d_{1:k-1}]}$, where $\mat I_{[d_{k:\ell}]} = \mat I_{d_k} \otimes \dots \otimes \mat I_{d_\ell}$. For the case of $K=2$, $\mat{A}_1 \oplus \mat{A}_2 = \mat{I}_{d_2} \otimes \mat{A}_1 + \mat{A}_2 \otimes \mat{I}_{d_1}$. \subsection{Tensor Graphical Models} A random tensor $\tensor{X} \in \reals^{d_1 \times \dots \times d_K}$ follows the tensor normal distribution with zero mean when $\vecto(\tensor{X})$ follows a normal distribution with mean $\mat{0} \in \reals^d$ and precision matrix $\mat\Omega := \mat\Omega(\mat\Psi_1,\dots,\mat\Psi_K)$, where $d=\prod_{k=1}^K d_k$. Here, $\mat\Omega(\mat\Psi_1,\dots,\mat\Psi_K)$ is parameterized by $\mat\Psi_k \in \reals^{d_k \times d_k}$ via either the Kronecker product, the Kronecker sum, or the Sylvester structure, and the corresponding negative log-likelihood function (assuming $N$ independent observations $\tensor{X}^i$, $i=1,\dots,N$) is \begin{equation}\label{eqn:gaussiannegloglik} -\frac{N}{2} \log|\mat\Omega| + \frac{N}{2}\tr(\mat{S}\mat\Omega), \end{equation} where $\mat\Omega = \bigotimes_{k=1}^K \mat\Psi_k$, $\bigoplus_{k=1}^K \mat\Psi_k$, or $\Big(\bigoplus_{k=1}^K \mat\Psi_k\Big)^2$ for KP, KS, and Sylvester models, respectively; and $\mat{S} = \frac{1}{N}\sum_{i=1}^N \vecto(\tensor{X}^i) \vecto(\tensor{X}^i)^T$.
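The $K=2$ Kronecker sum identity above can be verified numerically; a minimal NumPy sketch (illustrative, not from the paper) also checks how the Kronecker sum acts on $\vecto(\tensor{X})$ for a matrix $\tensor{X}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 3, 4
A1 = rng.standard_normal((d1, d1))
A2 = rng.standard_normal((d2, d2))

# Kronecker sum for K = 2:  A1 (+) A2 = I_{d2} kron A1 + A2 kron I_{d1}
KS = np.kron(np.eye(d2), A1) + np.kron(A2, np.eye(d1))

# acting on vec(X) (column-major), the KS maps X to A1 X + X A2^T,
# i.e., the K = 2 Sylvester operator
X = rng.standard_normal((d1, d2))
lhs = (A1 @ X + X @ A2.T).flatten(order="F")
rhs = KS @ X.flatten(order="F")
assert np.allclose(lhs, rhs)
```

The column-major flattening matches the vectorization convention defined in the Notations subsection.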
To encourage sparsity, a penalized negative log-likelihood function is minimized: \begin{equation*} -\frac{N}{2} \log|\mat\Omega| + \frac{N}{2}\tr(\mat{S}\mat\Omega) + \sum_{k=1}^K P_{\lambda_k}(\mat\Psi_k), \end{equation*} where $P_{\lambda_k}(\cdot)$ is a penalty function indexed by the tuning parameter $\lambda_k$ and is applied elementwise to the off-diagonal elements of $\mat\Psi_k$. Popular choices for $P_{\lambda_k}(\cdot)$ include the lasso penalty~\citep{tibshirani1996regression}, the adaptive lasso penalty~\citep{zou2006adaptive}, the SCAD penalty~\citep{fan2001variable}, and the MCP penalty~\citep{zhang2010nearly}. \subsection{The Sylvester Generating Equation} \citet{wang20sylvester} proposed a Sylvester graphical model that uses the Sylvester tensor equation to define a generative process for the underlying multivariate tensor data. The Sylvester tensor equation has been studied in the context of finite-difference discretization of high-dimensional elliptical partial differential equations~\citep{grasedyck2004existence,kressner2010krylov}. Any solution $\tensor{X}$ to such a PDE must have the (discretized) form: \begin{equation}\label{eqn:sylvester} \begin{aligned} \sum_{k=1}^K \tensor{X} \times_k \mat\Psi_k = \tensor{T} &\Longleftrightarrow \Big(\bigoplus_{k=1}^K \mat\Psi_k \Big) \vecto(\tensor{X}) = \vecto(\tensor{T}), \end{aligned} \end{equation} where $\tensor{T}$ is the driving source on the domain, and $\bigoplus_{k=1}^K \mat\Psi_k$ is a Kronecker sum of the $\mat\Psi_k$'s representing the discretized differential operators for the PDE, e.g., Laplacian, Euler-Lagrange operators, and associated coefficients. These operators are often sparse and structured. For example, consider a physical process characterized by a function $u$ that satisfies \begin{equation*} \mathcal{D}u = f \quad \text{in} \quad \Omega, \quad u(\Gamma)=0, \quad \Gamma = \partial \Omega,
\end{equation*} where $f$ is a driving process, e.g., a Wiener process (white Gaussian noise); $\mathcal{D}$ is a differential operator, e.g., the Laplacian or the Euler-Lagrange operator; $\Omega$ is the domain; and $\Gamma$ is the boundary of $\Omega$. After discretization, this is equivalent to (ignoring discretization error) the matrix equation \begin{equation*} \mat{D}\mat{u} = \mat{f}. \end{equation*} Here, $\mat{D}$ is a sparse matrix since $\mathcal{D}$ is an infinitesimal operator. Additionally, $\mat{D}$ admits Kronecker structure as a mixture of Kronecker sums and Kronecker products. The matrix $\mat{D}$ reduces to a Kronecker sum when $\mathcal{D}$ involves no mixed derivatives. For instance, consider the Poisson equation in 2D, where $u(x,y)$ on $[0,1]^2$ satisfies the elliptical PDE \begin{equation*} \mathcal{D}u = (\partial^2_x + \partial^2_y)u = f. \end{equation*} The Poisson equation governs many physical processes, e.g., electromagnetic induction, heat transfer, convection, etc. A simple Euler discretization yields $\mat{U} = (u(i,j))_{i,j}$, where $u(i,j)$ satisfies the local equation (up to a constant discretization scale factor) \begin{equation*} \begin{aligned} 4 u(i,j) &= u(i+1,j) + u(i-1,j) + u(i,j+1) \\ & \quad + u(i,j-1) - f(i,j). \end{aligned} \end{equation*} Defining $\mat{u}=\text{vec}(\mat{U})$ and the tridiagonal matrix \begin{equation*} \mat{A} = \begin{bmatrix} -1 & 2 & -1 & & & \\ & \ddots & \ddots & \ddots \\ & & \ddots & \ddots & \ddots \\ & & & -1 & 2 & -1 \end{bmatrix}, \end{equation*} we obtain $(\mat{A} \oplus \mat{A})\mat{u} = \mat{f}$, which is the Sylvester equation ($K=2$). For the Poisson example, if the source $\mat{f}$ is a white noise random variable, i.e., its covariance matrix is proportional to the identity matrix, then the inverse covariance matrix of $\mat{u}$ has sparse square-root factors, since $\text{Cov}^{-1}(\mat{u})=(\mat{A} \oplus \mat{A})(\mat{A} \oplus \mat{A})^T$.
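The Poisson example can be reproduced numerically. The sketch below (NumPy, illustrative grid size, Dirichlet boundaries; not from the paper) builds the tridiagonal $\mat{A}$, forms $\mat{A}\oplus\mat{A}$, and checks the five-point stencil and the sparsity of the resulting precision matrix:

```python
import numpy as np

n = 5  # interior grid points per dimension (illustrative size)
# 1D second-difference matrix: tridiagonal with stencil (-1, 2, -1)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# discretized 2D Laplacian as a Kronecker sum, D = A (+) A
D = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))

# interior rows of D encode the five-point stencil: 4 on the diagonal,
# -1 on the four nearest neighbors (column-major vec ordering)
i, j = 2, 2
r = D[j * n + i]
assert r[j * n + i] == 4
assert r[j * n + i - 1] == -1 and r[j * n + i + 1] == -1
assert r[(j - 1) * n + i] == -1 and r[(j + 1) * n + i] == -1

# with white-noise f, u = D^{-1} f has the sparse precision D D^T
P = D @ D.T
sparsity = np.count_nonzero(P) / P.size
```

The precision matrix $\mat{D}\mat{D}^T$ stays sparse even after squaring, which is the square-root sparsity the SG model exploits.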
Other physical processes that are generated from differential equations will also have sparse inverse covariance matrices, as a result of the sparsity of general discretized differential operators. Note that similar connections between continuous state physical processes and sparse ``discretized'' statistical models have been established by \citet{lindgren2011explicit}, who elucidated a link between Gaussian fields and Gaussian Markov Random Fields via stochastic partial differential equations. The Sylvester generative (SG) model~\eqref{eqn:sylvester} leads to a tensor-valued random variable $\tensor{X}$ with a precision matrix $\mat\Omega=\Big(\bigoplus_{k=1}^K \mat\Psi_k\Big)^2$, given that $\tensor{T}$ is white Gaussian. The Sylvester generating factors $\mat\Psi_k$ can be obtained via minimization of the penalized negative log-pseudolikelihood \begin{equation} \label{eqn:objective} \begin{aligned} \mathcal{L}_{\mat\lambda}(\mat\Psi) = & -\frac{N}{2} \log | (\bigoplus_{k=1}^K \mathop{\text{diag}}(\mat\Psi_k))^2| \\ & + \frac{N}{2} \tr(\mat{S} \cdot (\bigoplus_{k=1}^K \mat\Psi_k)^2) + \sum_{k=1}^K \lambda_k \|\mat\Psi_k\|_{1, \text{off}}. \end{aligned} \end{equation} This differs from the true penalized Gaussian negative log-likelihood in that the off-diagonal elements of the $\mat\Psi_k$'s are excluded from the log-determinant term. Equation~\eqref{eqn:objective} is motivated and derived directly from the Sylvester equation defined in~\eqref{eqn:sylvester}, from the perspective of solving a sparse linear system. This maximum pseudolikelihood estimation procedure has been applied to vector-variate Gaussian graphical models (see \citet{khare2015convex} and references therein). Detailed derivations and further discussions are provided in Appendix~\ref{supp:pseudolik}.
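For illustration, a dense $K=2$ evaluation of the objective~\eqref{eqn:objective} can be written in a few lines (a sketch with the $N/2$ factors set to one; not the released implementation):

```python
import numpy as np

def sg_objective(S, Psis, lams):
    """Penalized negative log-pseudolikelihood of the SG model, K = 2,
    dense and illustrative; the N/2 factors are set to one."""
    P1, P2 = Psis
    d1, d2 = P1.shape[0], P2.shape[0]
    KS = np.kron(np.eye(d2), P1) + np.kron(P2, np.eye(d1))
    # the log-determinant term involves only the diagonals of the factors
    diag_ks = np.add.outer(np.diag(P2), np.diag(P1)).ravel()
    logdet = 2.0 * np.sum(np.log(np.abs(diag_ks)))
    trace = np.trace(S @ KS @ KS)
    # off-diagonal l1 penalty, one weight per factor
    pen = sum(lam * (np.abs(P).sum() - np.abs(np.diag(P)).sum())
              for lam, P in zip(lams, Psis))
    return -logdet + trace + pen

# toy check: with diagonal factors the off-diagonal penalty vanishes
P1, P2 = np.diag([1.0, 2.0]), np.diag([3.0, 4.0, 5.0])
S = np.eye(6)
assert np.isclose(sg_objective(S, (P1, P2), (0.0, 0.0)),
                  sg_objective(S, (P1, P2), (9.9, 9.9)))
```

Note how the log-determinant uses only the diagonals of the factors while the trace term uses the full Kronecker sum, mirroring the structure of~\eqref{eqn:objective}.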
\section{The SG-PALM Algorithm}\label{sec:method} Estimation of the generating parameters $\mat\Psi_k$ of the SG model is challenging since the sparsity penalty applies to the square root factors of the precision matrix, which leads to a highly coupled likelihood function. \citet{wang20sylvester} proposed an estimation procedure called SyGlasso, which recovers only the off-diagonal elements of each Sylvester factor. This is a deficiency in many applications where the factor-wise variances are desired. Moreover, the convergence rate of the cyclic coordinate-wise algorithm used in SyGlasso is unknown and the computational complexity of the algorithm is higher than that of other sparse Glasso-type procedures. To overcome these deficiencies, we propose SG-PALM, a more flexible and versatile proximal alternating linearized minimization method for finding the minimizer of \eqref{eqn:objective}. SG-PALM is designed to exploit the structure of the coupled objective function and yields simultaneous estimates for both off-diagonal and diagonal entries. The PALM algorithm was originally proposed to solve nonconvex optimization problems with separable structures, such as those arising in nonnegative matrix factorization~\citep{xu2013block,bolte2014proximal}. Its efficacy in solving convex problems has also been established; for regularized linear regression problems, for example, it was proposed as an attractive alternative to iterative soft-thresholding algorithms (ISTA)~\citep{shefi2016rate}. The SG-PALM procedure is summarized in Algorithm~\ref{alg:sg-palm}.
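The general PALM template that SG-PALM instantiates can be sketched on a toy problem (hypothetical helper names; vector blocks and a plain $\ell_1$ prox instead of the matrix-variate updates of Algorithm~\ref{alg:sg-palm}):

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding (the l1 proximal map)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def palm_sweep(Psis, grads, lams, etas):
    """One PALM sweep (schematic): for each block k, take a gradient step
    on the smooth part H, then apply the prox of the nonsmooth part G_k.
    Later blocks see already-updated earlier blocks, as in SG-PALM."""
    Psis = list(Psis)
    for k in range(len(Psis)):
        g = grads[k](Psis)  # gradient of H w.r.t. block k at current blocks
        Psis[k] = soft(Psis[k] - etas[k] * g, etas[k] * lams[k])
    return Psis

# toy smooth part H = 0.5 * sum_k ||x_k - b_k||^2 (separable across blocks)
b = [np.array([1.0, -0.05]), np.array([0.3])]
grads = [lambda Ps, k=k: Ps[k] - b[k] for k in range(2)]
x = palm_sweep([np.zeros(2), np.zeros(1)], grads, lams=[0.1, 0.1], etas=[1.0, 1.0])
# with unit step size this lands at the blockwise soft-thresholded optimum
assert np.allclose(x[0], soft(b[0], 0.1)) and np.allclose(x[1], soft(b[1], 0.1))
```

In SG-PALM the blocks are the factors $\mat\Psi_k$, the gradients are given by~\eqref{eqn:block-grad}, and the step sizes come from backtracking line search.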
For clarity of notation we write \begin{equation}\label{eqn:decomp_obj} \mathcal{L}_{\mat\lambda}(\mat\Psi_1,\dots,\mat\Psi_K) = H(\mat\Psi_1,\dots,\mat\Psi_K) + \sum_{k=1}^K G_k(\mat\Psi_k), \end{equation} where $H: \mathbb{R}^{d_1 \times d_1} \times \cdots \times \mathbb{R}^{d_K \times d_K} \rightarrow \mathbb{R}$ represents the log-determinant plus trace terms in \eqref{eqn:objective} and $G_k: \mathbb{R}^{d_k \times d_k} \rightarrow (-\infty,+\infty]$ represents the penalty term in \eqref{eqn:objective} for each axis $k=1,\dots,K$. For notational simplicity we use $\mat\Psi$ (i.e., omitting the subscript) to denote the set $\{\mat\Psi_k\}_{k=1}^K$ or the $K$-tuple $(\mat\Psi_1,\dots,\mat\Psi_K)$ whenever there is no risk of confusion. The gradient of the smooth function $H$ with respect to $\mat\Psi_k$, $\nabla_k H(\mat\Psi)$, is given by \begin{equation}\label{eqn:block-grad} \begin{aligned} & \mathop{\text{diag}}\Big(\Big\{\tr[(\mathop{\text{diag}}((\mat\Psi_k)_{ii}) + \bigoplus_{j \neq k}\mathop{\text{diag}}(\mat\Psi_j))^{-1}] \Big\}_{i=1}^{d_k} \Big) \\ & \quad + \mat{S}_k\mat\Psi_k + \mat\Psi_k\mat{S}_k + 2\sum_{j \neq k}\mat{S}_{j,k}. \end{aligned} \end{equation} Here, the first ``$\mathop{\text{diag}}$'' maps a $d_k$-vector to a $d_k \times d_k$ diagonal matrix, the second one maps a scalar (i.e., $(\mat\Psi_k)_{ii}$) to a $(\prod_{j \neq k}d_j) \times (\prod_{j \neq k}d_j)$ diagonal matrix with the same elements, and the third operator maps a symmetric matrix to a matrix containing only its diagonal elements. In addition, we define: \begin{equation} \begin{aligned} & \mat{S}_k = \frac{1}{N}\sum_{i=1}^N \tensor{X}_{(k)}^i(\tensor{X}_{(k)}^i)^T, \\ & \mat{S}_{j,k} = \frac{1}{N}\sum_{i=1}^N \mat{V}_{j,k}^i(\mat{V}_{j,k}^i)^T, \\ & \mat{V}_{j,k}^i = \tensor{X}_{(k)}^i\Big(\mat{I}_{d_{1:j-1}} \otimes \mat\Psi_j \otimes \mat{I}_{d_{j:K}}\Big)^T, \quad j \neq k. 
\end{aligned} \end{equation} A key ingredient of the PALM algorithm is a proximal operator associated with the non-smooth part of the objective, i.e., the $G_k$'s. In general, the proximal operator of a proper, lower semi-continuous convex function $f$ from a Hilbert space $\mathcal{H}$ to the extended reals $(-\infty,+\infty]$ is defined by~\citep{parikh2014proximal} \begin{equation*} \vspace{-1pt} \text{prox}_f(v) = \argmin_{x \in \mathcal{H}} f(x) + \frac{1}{2}\|x-v\|^2_2 \end{equation*} for any $v \in \mathcal{H}$. The proximal operator is well-defined, since the expression on the right-hand side above has a unique minimizer for any function in this class. For $\ell_1$-regularized cases, the proximal operator for the function $G_k$ is given by \begin{equation} \text{prox}_{G_k}^{\lambda_k}(\mat\Psi_k) = \mathop{\text{diag}}(\mat\Psi_k) + \text{soft}(\mat\Psi_k-\mathop{\text{diag}}(\mat\Psi_k), \lambda_k), \end{equation} where the soft-thresholding operator $\text{soft}(x,\lambda) = \text{sign}(x)\max(|x|-\lambda,0)$ has been applied element-wise. For popular choices of non-convex $G_k$, the proximal operators are derived in Appendix~\ref{supp:nonconvex}. \begin{algorithm}[!tbh] \begin{algorithmic} \caption{SG-PALM}\label{alg:sg-palm} \REQUIRE Data tensor $\tensor{X}$, mode-$k$ Gram matrix $\mat{S}_k$, regularizing parameter $\lambda_k$, backtracking constant $c \in (0,1)$, initial step size $\eta_0$, initial iterate $\mat\Psi_k$ for each $k=1,\dots,K$. \WHILE{not converged} \FOR{$k=1,\dots,K$} \STATE \textit{Line search:} Let $\eta^t_k$ be the largest element of $\{c^j \eta_{k,0}^t\}_{j=1,\dots}$ such that condition~\eqref{eqn:linesearch-cond} is satisfied. \STATE \textit{Update:} $\mat\Psi_k^{t+1} \leftarrow \text{prox}_{\eta^t_k\lambda_k}^{G_k}\Big(\mat\Psi_k^t - \eta^t_k \nabla_k H(\mat\Psi_{i < k}^{t+1},\mat\Psi_{i \geq k}^t)\Big)$.
\ENDFOR \STATE \textit{Update initial step size:} Compute the Barzilai-Borwein step size $\eta_0^{t+1}=\min_k \eta^{t+1}_{k,0}$, where $\eta^{t+1}_{k,0}$ is computed via~\eqref{eqn:bb-step}. \ENDWHILE \ENSURE Final iterates $\{\mat\Psi_k\}_{k=1}^K$. \end{algorithmic} \end{algorithm} \subsection{Choice of Step Size} In the absence of a good estimate of the blockwise Lipschitz constant, the step size of each iteration of SG-PALM is chosen using backtracking line search, which, at iteration $t$, starts with an initial step size $\eta_0^t$ and reduces it by a constant factor $c \in (0,1)$ until the new iterate satisfies the sufficient descent condition: \begin{equation}\label{eqn:linesearch-cond} H(\mat\Psi_{i \leq k}^{t+1},\mat\Psi_{i > k}^t) \leq Q_{\eta^t}(\mat\Psi_{i \leq k}^{t+1},\mat\Psi_{i > k}^t;\mat\Psi_{i < k}^{t+1},\mat\Psi_{i \geq k}^t). \end{equation} Here, \begin{equation*} \begin{aligned} & Q_{\eta}(\mat\Psi_{i < k},\mat\Psi_k,\mat\Psi_{i > k};\mat\Psi_{i < k},\mat\Psi_k',\mat\Psi_{i > k}) \\ &= H(\mat\Psi_{i < k},\mat\Psi_k,\mat\Psi_{i > k}) \\ &+ \tr\Big((\mat\Psi_k'-\mat\Psi_k)^T \nabla_k H(\mat\Psi_{i < k},\mat\Psi_k,\mat\Psi_{i > k})\Big) \\ &+ \frac{1}{2\eta}\|\mat\Psi_k'-\mat\Psi_k\|_F^2. \end{aligned} \end{equation*} The sufficient descent condition is satisfied by any $\frac{1}{\eta}=M_k$ with $M_k \geq L_k$, for any function whose block-wise gradient is Lipschitz with constant $L_k$, $k=1,\dots,K$. In other words, so long as the function $H$ has a block-wise gradient that is Lipschitz continuous with some block Lipschitz constant $L_k>0$ for each $k$, then at each iteration $t$ we can always find an $\eta^t$ such that the inequality in \eqref{eqn:linesearch-cond} is satisfied. Indeed, we prove in Lemma~\ref{lemma:lip} in the Appendix that $H$ has the desired properties.
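The closed-form proximal map used in the update step of Algorithm~\ref{alg:sg-palm}, which soft-thresholds only the off-diagonal entries, can be sketched as follows (NumPy, illustrative):

```python
import numpy as np

def soft(x, lam):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_l1_offdiag(Psi, lam):
    """Prox of the off-diagonal l1 penalty: keep the diagonal,
    soft-threshold every off-diagonal entry."""
    D = np.diag(np.diag(Psi))
    return D + soft(Psi - D, lam)

Psi = np.array([[2.00, 0.30, -0.05],
                [0.30, 1.50, 0.60],
                [-0.05, 0.60, 1.00]])
out = prox_l1_offdiag(Psi, 0.1)
# diagonal untouched; small off-diagonals zeroed, larger ones shrunk by 0.1
```

In the algorithm the threshold is $\eta_k^t \lambda_k$, so the effective amount of shrinkage adapts to the line-search step size.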
Additionally, in the proof of Theorem~\ref{thm:sg-palm-main} we show that the step size found at each iteration $t$ satisfies $\frac{1}{\eta_k^{0}} \leq L_k \leq \frac{1}{\eta_k^{t}} \leq \frac{L_k}{c}$. In terms of the initialization, a safe step size (i.e., a very small $\eta_0^t$) often leads to slower convergence. Thus, we use the more aggressive Barzilai-Borwein (BB) step~\citep{barzilai1988two} to set a starting $\eta_0^t$ at each iteration (see Appendix~\ref{supp:bb-step-size} for justification of the BB method). In our case, for each $k$, the step size is given by \begin{equation}\label{eqn:bb-step} \eta_{k,0}^t = \frac{\|\mat\Psi_k^{t+1}-\mat\Psi_k^{t}\|_F^2}{\tr(\mat{A})}, \end{equation} where \begin{equation*} \begin{aligned} \mat{A} &= (\mat\Psi_k^{t+1}-\mat\Psi_k^{t})^T \times \\ &(\nabla_k H(\mat\Psi_{i \leq k}^{t+1},\mat\Psi_{i > k}^t) - \nabla_k H(\mat\Psi_{i < k}^{t+1},\mat\Psi_{i \geq k}^t)). \end{aligned} \end{equation*} \subsection{Computational Complexity} After pre-computing $\mat{S}_k$, the most significant computations in each iteration of the SG-PALM algorithm are the sparse matrix-matrix multiplications $\mat{S}_k \mat\Psi_k$ and $\mat{S}_{j,k}$ in the gradient calculation. In terms of computational complexity, if $s_k$ is the number of non-zeros per column in $\mat\Psi_k$, then the former and latter can be computed using $O(s_k d_k^2)$ and $O(N\sum_{j \neq k} s_jd_j^2)$ operations, respectively. Thus, each iteration of SG-PALM can be computed using $O\Big(\sum_{k=1}^K (s_k d_k^2 + N\sum_{j \neq k} s_jd_j^2) \Big)$ floating point operations, which is significantly lower than that of competing methods. For instance, other popular algorithms for tensor-variate graphical models, such as the TG-ISTA algorithm of \citet{greenewald2019tensor} and the Tlasso algorithm of \citet{lyu2019tensor}, require inversion of $d_k \times d_k$ matrices, which is non-parallelizable and requires $O(d_k^3)$ operations for each $k$.
In particular, TeraLasso's TG-ISTA algorithm requires $O\Big(Kd + \sum_{k=1}^K d_k^3\Big)$ operations. The TG-ISTA algorithm requires matrix inversions that cannot easily exploit the sparsity of the $\mat\Psi_k$'s. In the sample-starved ultra-sparse setting ($N \ll d$ and $s_k \ll d_k$), the $O(N\sum_{j \neq k} s_jd_j^2)$ terms in SG-PALM are comparable to $O(Kd)$ in TG-ISTA, making SG-PALM more appealing. The cyclic coordinate-wise method proposed in~\citep{wang20sylvester} does not allow for parallelization since it requires cycling through the entries of each $\mat\Psi_k$ in a specified order. In contrast, SG-PALM can be implemented in parallel to distribute the sparse matrix-matrix multiplications, because at no step does the algorithm require storing all dense matrices on a single machine. \section{Convergence Analysis}\label{sec:theory} In this section, we present the main convergence theorems. Detailed proofs are included in the supplement. Here, we study the statistical convergence behavior for the Sylvester graphical model with an $\ell_1$ penalty function. The convergence behavior of the SG-PALM iterates is presented for convex cases, but a similar convergence rate can be established for non-convex penalties (see Appendix~\ref{supp:nonconvex}). We first establish statistical convergence of a global minimizer $\hat{\mat\Psi}$ of \eqref{eqn:objective} to its true value, denoted as $\bar{\mat\Psi}$, under the correct statistical model. \begin{theorem}\label{thm:statistical} Let $\mathcal{A}_{k}:=\{(i,j):(\bar{\mat\Psi}_k)_{i,j} \neq 0, i \neq j\}$ and $q_{k}:=|\mathcal{A}_{k}|$ for $k=1,\dots,K$.
If $N > O(\max_k q_k d_k \log d)$ and $d:=d_N=O(N^{\kappa})$ for some $\kappa \geq 0$, and further, if the penalty parameter satisfies $\lambda_k:=\lambda_{N,k}=O(\sqrt{\frac{d_k\log d}{N}})$ for all $k=1,\dots,K$, then under conditions (A1-A3) in Appendix~\ref{supp:thm_statistical}, there exists a constant $C>0$ such that for any $\eta>0$ the following events hold with probability at least $1 - O(\exp(-\eta \log d))$: \begin{equation*} \begin{aligned} & \sum_{k=1}^K\|\text{offdiag}(\hat{\mat\Psi}_k) - \text{offdiag}(\bar{\mat\Psi}_k)\|_F \\ & \leq C\sqrt{K}\max_{k}\sqrt{q_{k}}\lambda_{k}. \end{aligned} \end{equation*} Here $\text{offdiag}(\mat\Psi_k)$ contains only the off-diagonal elements of $\mat\Psi_k$. If further $\min_{(i,j) \in \mathcal{A}_{k}}|(\bar{\mat{\Psi}}_k)_{i,j}| \geq 2C\max_{k}\sqrt{q_{k}}\lambda_{k}$ for each $k$, then sign($\hat{\mat{\Psi}}_k$)=sign($\bar{\mat{\Psi}}_k$). \end{theorem} Theorem~\ref{thm:statistical} means that under regularity conditions on the true generative model, and with appropriately chosen penalty parameters $\lambda_k$ guided by the theorem, one is guaranteed to recover the true structures of the underlying Sylvester generating parameters $\mat\Psi_k$ for $k=1,\dots,K$ with probability tending to one as the sample size and dimension grow. We next turn to convergence of the iterates $\{\mat\Psi^t\}$ from SG-PALM to a global optimum of \eqref{eqn:objective}. \begin{theorem}\label{thm:sg-palm-main} Let $\{\mat\Psi^{(t)}\}_{t \geq 0}$ be generated by SG-PALM.
Then, SG-PALM converges in the sense that \begin{equation*} \begin{aligned} & \frac{\mathcal{L}_{\mat\lambda}(\mat\Psi^{(t+1)}) - \min \mathcal{L}_{\mat\lambda}}{\mathcal{L}_{\mat\lambda}(\mat\Psi^{(t)}) - \min \mathcal{L}_{\mat\lambda}} \\ & \leq \Bigg(\frac{\alpha^2L_{\min}}{4Kc^2(\sum_{j=1}^K L_j)^2 + 4c^2L_{\max}} + 1\Bigg)^{-1}, \end{aligned} \end{equation*} where $\alpha$, $L_k,k=1,\dots,K$ are positive constants, $L_{\min}=\min_jL_j$, $L_{\max}=\max_jL_j$, and $c \in (0,1)$ is the backtracking constant defined in Algorithm~\ref{alg:sg-palm}. \end{theorem} Note that the term on the right hand side of the inequality above is strictly less than $1$. This means that the SG-PALM algorithm converges linearly, which is a strong result for a non-strongly convex objective (i.e., $\mathcal{L}_{\mat\lambda}$). Although similar convergence behaviors of PALM-type algorithms have been studied for other problems~\citep{xu2013block,bolte2014proximal}, such as nonnegative matrix/tensor factorization, the analysis in this paper applies to block multi-convex objectives that are not strongly convex, leveraging more recent analyses of multi-block PALM and of a class of functions satisfying the Kurdyka--\L{}ojasiewicz (KL) property (defined in Section~\ref{supp:proofs} of the Appendix). To the best of our knowledge, among first-order optimization methods, our rate is faster than that of any other method for Gaussian graphical models with non-strongly convex objectives (see \citet{khare2015convex,oh2014optimization} and references therein) and comparable with those for strongly convex objectives (see, for example, \citet{guillot2012iterative,dalal2017sparse,greenewald2019tensor}). \section{Experiments}\label{sec:experiments} Experiments in this section were performed in a system with \texttt{8-core Intel Xeon CPU E5-2687W v2 3.40GHz} equipped with \texttt{64GB RAM}. Both SG-PALM and SyGlasso were implemented in \texttt{Julia v1.5} (\url{https://github.com/ywa136/sg-palm}).
For real data analyses, we used the \texttt{Tlasso} package implementation in \texttt{R}~\citep{r-Tlasso} and the TeraLasso implementation in \texttt{MATLAB} (\url{https://github.com/kgreenewald/teralasso}). \subsection{Synthetic Data} We first validate the convergence theorems discussed in the previous section via simulation studies. Synthetic datasets were generated from true sparse Sylvester factors $\{\mat\Psi_k\}_{k=1}^K$ with $K\in\{2,3\}$ and $d_k\in\{16,32,64,128\}$ for all $k$. Instances of the random matrices used here have uniformly random sparsity patterns with edge densities (i.e., the proportion of non-zero entries) ranging from $0.1\%$ to $30\%$ on average over all $\mat\Psi_k$'s. For each $d$ and edge density combination, random samples of size $N\in\{10,100,1000\}$ were tested. For comparison, the initial iterates and convergence criteria were matched between SyGlasso and SG-PALM. Highlights of the run time results are summarized in Table~\ref{tab:synthetic_run_time}. \begin{table}[tbh!] \centering \caption{Run time comparisons (in seconds, with N/As indicating runs exceeding $24$ hours) between SyGlasso and SG-PALM on synthetic datasets with different dimensions, sample sizes, and densities of the generating Sylvester factors.
Note that the proposed SG-PALM has average speed-up ratios ranging from $1.5$ to $10$ over SyGlasso.} \label{tab:synthetic_run_time} \begin{tabular}{|c|p{0.5cm}|p{0.6cm}||r|r|} \multicolumn{5}{c}{} \\ \hline \multirow{2}{*}{$d$} & \multirow{2}{*}{$N$} & \multirow{2}{*}{NZ\%} & \textbf{SyGlasso} & \textbf{SG-PALM} \\ \cline{4-5} &&& \textbf{iter} \quad \textbf{sec} & \textbf{iter} \quad \textbf{sec} \\ \hline \multirow{6}{*}{$128^2$} & \multirow{2}{*}{$10^1$} & $1.20$ & $17$ \quad $138.5$ & $46$ \quad $5.8$ \\ && $24.0$ & $20$ \quad $169.3$ & $48$ \quad $6.2$ \\ \cline{2-5} & \multirow{2}{*}{$10^2$} & $1.30$ & $21$ \quad $211.3$ & $50$ \quad $12.6$ \\ && $27.0$ & $30$ \quad $303.6$ & $47$ \quad $21.9$ \\ \cline{2-5} & \multirow{2}{*}{$10^3$} & $1.30$ & $21$ \quad $2045.8$ & $50$ \quad $80.1$ \\ && $25.0$ & $47$ \quad $4782.7$ & $51$ \quad $373.1$ \\ \cline{1-5} \multirow{6}{*}{$16^3$} & \multirow{2}{*}{$10^1$} & $0.11$ & $9$ \quad $4.6$ & $11$ \quad $4.5$ \\ && $4.10$ & $9$ \quad $5.1$ & $32$ \quad $5.1$ \\ \cline{2-5} & \multirow{2}{*}{$10^2$} & $0.21$ & $8$ \quad $8.8$ & $11$ \quad $5.4$ \\ && $2.60$ & $8$ \quad $10.8$ & $35$ \quad $7.2$ \\ \cline{2-5} & \multirow{2}{*}{$10^3$} & $0.26$ & $8$ \quad $82.4$ & $12$ \quad $14.3$ \\ && $3.40$ & $10$ \quad $99.2$ & $37$ \quad $33.5$ \\ \cline{1-5} \multirow{6}{*}{$32^3$} & \multirow{2}{*}{$10^1$} & $0.13$ & $10$ \quad $191.2$ & $19$ \quad $7.3$ \\ && $7.50$ & $17$ \quad $304.8$ & $42$ \quad $10.2$ \\ \cline{2-5} & \multirow{2}{*}{$10^2$} & $0.46$ & $9$ \quad $222.4$ & $24$ \quad $28.9$ \\ && $7.00$ & $17$ \quad $395.2$ & $41$ \quad $48.5$ \\ \cline{2-5} & \multirow{2}{*}{$10^3$} & $0.10$ & $9$ \quad $1764.8$ & $22$ \quad $226.4$ \\ && $6.90$ & $19$ \quad $3789.4$ & $41$ \quad $473.9$ \\ \cline{1-5} \multirow{6}{*}{$64^3$} & \multirow{2}{*}{$10^1$} & $0.65$ & $10$ \quad $583.7$ & $42$ \quad $91.3$ \\ && $14.5$ & $22$ \quad $952.2$ & $47$ \quad $119.0$ \\ \cline{2-5} & \multirow{2}{*}{$10^2$} & $0.62$ & $9$ \quad $6683.7$ 
& $41$ \quad $713.9$ \\ && $14.4$ & $21$ \quad $15607.2$ & $48$ \quad $1450.9$ \\ \cline{2-5} & \multirow{2}{*}{$10^3$} & $0.85$ & N/A & $39$ \quad $6984.4$ \\ && $14.0$ & N/A & $48$ \quad $12968.7$ \\ \hline \end{tabular} \end{table} Convergence behavior of SG-PALM is shown in Figure~\ref{fig:convergence_sg-palm} (a) for the datasets with $d_k=32$, $N=\{10,100\}$, and edge densities roughly around $5\%$ and $20\%$, respectively. The geometric convergence rate of the function value gaps established in Theorem~\ref{thm:sg-palm-main} can be verified from the plot. Note an acceleration in the convergence rate (i.e., a steeper slope) near the optimum, which is suggested by the ``localness'' of the KL property of the objective function close to its global optimum. Further, for the same datasets, Figure~\ref{fig:convergence_sg-palm} (b) illustrates the graph recovery performance of SG-PALM, where the Matthews Correlation Coefficient (MCC) is plotted against run time. Here, MCC is defined by \begin{equation*} \text{MCC} = \frac{\text{TP}\times\text{TN}-\text{FP}\times\text{FN}}{\sqrt{(\text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+\text{FN})}}, \end{equation*} where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives of the estimated edges (i.e., non-zero elements of the $\mat\Psi_k$'s). An MCC of $1$ represents a perfect prediction, $0$ a prediction no better than random, and $-1$ total disagreement between prediction and observation. The results validate the statistical accuracy established in Theorem~\ref{thm:statistical}. The figure also shows that SG-PALM outperforms SyGlasso (indicated by blue/red solid dots) within the same time budget. \begin{figure*}[tbh!] \centering \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.85\textwidth]{Figures/convergence_sg-palm.png} \caption{Cost gap vs.
Iteration} \end{subfigure} \hspace{0.1pt} \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.85\textwidth]{Figures/mcc_sg-palm.png} \caption{MCC vs. Run time} \end{subfigure} \caption{Convergence of SG-PALM algorithm under datasets with varying sample sizes (solid and dashed) generated via matrices with different sparsity (red and blue). The function value gaps on log-scale (left) verify the geometric convergence rate in all cases and the MCC over time (right) demonstrates the algorithm's accuracy and efficiency. Note that SG-PALM reached almost perfect recovery (i.e., an MCC of $1$) within $20$ seconds in all cases. In comparison, SyGlasso (big solid dots with line-range) was only able to achieve lower MCCs for the lower sample-size cases within $30$ seconds.} \label{fig:convergence_sg-palm} \end{figure*} \subsection{Solar Flare Imaging Data} A solar flare occurs when magnetic energy that has built up in the solar atmosphere is suddenly released. Such events strongly influence space weather near the Earth. Therefore, reliable predictions of these flaring events are of great interest. Recent work~\citep{chen2019identifying,jiao2019solar,sun2019interpreting} has shown the promise of machine learning methods for early forecasting of these events using imaging data from the solar atmosphere. In this work, we illustrate the viability of the SG-PALM algorithm for solar flare prediction using data acquired by multiple instruments: the Solar Dynamics Observatory (SDO)/Helioseismic and Magnetic Imager (HMI) and SDO/Atmospheric Imaging Assembly (AIA). It is evident that these data contain information about the physical processes that govern solar activities (see Appendix~\ref{supp:additional_experiments} for detailed data descriptions). The data samples are summarized in $d_1 \times d_2 \times d_3 \times d_4$ tensors with $q=d_1 \cdot d_2 \cdot d_3=50 \cdot 100 \cdot 7 = 35000$ and $p=d_4=13$.
The first two modes represent the images' heights and widths, the third mode represents the HMI/AIA components/channels, and the last mode represents the length of the temporal window. Previous studies~\citep{chen2019identifying,jiao2019solar} found that the time series of solar images from the SDO/HMI data provide useful information for distinguishing strong solar flares of M/X class from weak flares of A/B class roughly 24 to 12 hours prior to the flare event. Thus, in this study we use a $13$-hour temporal window recorded at a $1$-hour cadence, prior to the occurrence of a solar flare. The task is to predict the $p$th frame using the frames in each of the $p-1$ previous hours (i.e., one-hour-ahead prediction). Each observation is a video with full dimension $d=pq$, and each $d$-dimensional observation vector is formed by concatenating the $p$ time-consecutive $q$-dimensional vectors (vectorizations of the matrices representing pixels of the multichannel images) without overlapping the time segments. The training set contains two types (B- and MX-class flares) of active regions producing flares. The two classes are distinguished by their flaring intensities, and there are a total of $186$ B flares and $48$ MX flares. Forward linear predictors were constructed using estimated precision matrices in a multi-output least squares regression setting.
Specifically, we constructed the linear predictor of a frame from the $p-1$ previous frames in the same video: \begin{equation} \hat{\mat{y}}_t = -\mat\Omega_{2,2}^{-1}\mat\Omega_{2,1}\mat{y}_{t-1:t-(p-1)}, \end{equation} where $\mat{y}_{t-1:t-(p-1)} \in \mathbb{R}^{(p-1)q}$ is the stacked set of pixel values from the previous $p-1$ time instances and $\mat\Omega_{2,1} \in \mathbb{R}^{q \times (p-1)q}$ and $\mat\Omega_{2,2} \in \mathbb{R}^{q \times q}$ are submatrices of the $pq \times pq$ estimated precision matrix: \begin{equation*} \hat{\mat\Omega} = \begin{pmatrix} \mat\Omega_{1,1} & \mat\Omega_{1,2} \\ \mat\Omega_{2,1} & \mat\Omega_{2,2} \end{pmatrix}. \end{equation*} The predictors were tested on data containing flares observed from different active regions than those in the training set, so that the predictor has never ``seen'' the frames that it attempts to predict; the test set corresponds to $117$ observations, of which $93$ are B-class flares and $24$ are MX-class flares. Figure~\ref{fig:nrmse_comparison} shows the root mean squared error normalized by the difference between the maximum and minimum pixel values (NRMSE) over the testing samples, for the forecasts based on the SG-PALM estimator, the TeraLasso estimator~\citep{greenewald2019tensor}, the Tlasso estimator~\citep{lyu2019tensor}, and the IndLasso estimator. Here, TeraLasso and Tlasso are estimation algorithms for a KS and a KP tensor precision matrix model, respectively; IndLasso denotes an estimator obtained by applying independent and separate $\ell_1$-penalized regressions to each pixel in $\mat{y}_t$. The SG-PALM estimator was implemented using a regularization parameter $\lambda_{N}=C_1\sqrt{\frac{\min(d_k)\log(d)}{N}}$ for all $k$ with the constant $C_1$ chosen by optimizing the prediction NRMSE on the training set over a range of $\lambda$ values parameterized by $C_1$.
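As a sanity check on the block form of the predictor, the following sketch builds a small synthetic precision matrix, partitions it as above, and verifies that $-\mat\Omega_{2,2}^{-1}\mat\Omega_{2,1}$ coincides with the usual Gaussian conditional-mean coefficient $\mat\Sigma_{2,1}\mat\Sigma_{1,1}^{-1}$. The sizes and the random matrix are illustrative placeholders, not the paper's estimator.

```python
import numpy as np

# Illustrative sizes (the paper uses q = 35000, p = 13; we shrink for the sketch).
q, p = 4, 3
d = p * q
rng = np.random.default_rng(0)

# A synthetic positive-definite precision matrix standing in for the estimate.
B = rng.standard_normal((d, d))
Omega = B @ B.T + d * np.eye(d)

# Partition: the last q rows/columns correspond to the frame being predicted.
Omega_21 = Omega[-q:, :-q]   # q x (p-1)q block
Omega_22 = Omega[-q:, -q:]   # q x q block

def predict_frame(y_past):
    """Forward linear predictor y_hat = -Omega_22^{-1} Omega_21 y_past."""
    return -np.linalg.solve(Omega_22, Omega_21 @ y_past)

y_past = rng.standard_normal((p - 1) * q)   # stacked previous p-1 frames
y_hat = predict_frame(y_past)

# The same coefficient expressed through the covariance Sigma = Omega^{-1}:
# E[y_2 | y_1] = Sigma_21 Sigma_11^{-1} y_1 = -Omega_22^{-1} Omega_21 y_1.
Sigma = np.linalg.inv(Omega)
M_cov = Sigma[-q:, :-q] @ np.linalg.inv(Sigma[:-q, :-q])
M_prec = -np.linalg.solve(Omega_22, Omega_21)
assert np.allclose(M_cov, M_prec)
```

The agreement of the two expressions confirms that regressing on the precision-matrix blocks is exactly the Gaussian conditional-mean predictor.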
The TeraLasso estimator and the Tlasso estimator were implemented using $\lambda_{N,k}=C_2\sqrt{\frac{\log(d)}{N\prod_{i \neq k}d_i}}$ and $\lambda_{N,k}=C_3\sqrt{\frac{\log(d_k)}{Nd}}$ for $k=1,2,3$, respectively, with $C_2, C_3$ optimized in a similar manner. Each sparse regression in the IndLasso estimator was implemented and tuned independently with regularization parameters chosen from a grid via cross-validation. We observe that SG-PALM outperforms all three other methods, as indicated by the NRMSEs across pixels. Figure~\ref{fig:predicted_vs_real_img} depicts examples of predicted images, compared with the ground truth. The SG-PALM estimates produced the most realistic image predictions, capturing the spatially varying structures and closely approximating the pixel values (i.e., maintaining contrast ratios). The latter is important as the flares are classified into weak (B-class) and strong (MX-class) categories based on the brightness of the images, and stronger flares are more likely to lead to catastrophic events, such as those damaging spacecraft. Lastly, we compare run times of the SG-PALM algorithm for estimating the precision matrix from the solar flare data with SyGlasso. Table~\ref{tab:solar_flare_run_time} in Appendix~\ref{supp:additional_experiments} illustrates that the SG-PALM algorithm converges faster in wallclock time. Note that on this real dataset, which is potentially non-Gaussian, the convergence behavior of the algorithms differs from that on the synthetic examples. Nonetheless, SG-PALM enjoys an order of magnitude speed-up over SyGlasso. \begin{figure*}[!tbh] \centering \begin{tabular}{@{}c@{}} \quad Avg. NRMSE = $0.0379$, $0.0386$, $0.0579$, $0.1628$ (from left to right) \\ \rotatebox{90}{\qquad \quad AR B} \includegraphics[width=0.85\textwidth]{Figures/B_forward_nrmse_all_model_0_12_60.png} \\ \quad Avg.
NRMSE = $0.0620$, $0.0790$, $0.0913$, $0.1172$ (from left to right) \\ \rotatebox{90}{\qquad \quad AR M/X} \includegraphics[width=0.85\textwidth]{Figures/MX_forward_nrmse_all_model_0_12_60.png} \end{tabular} \caption{Comparison of the SG-PALM, Tlasso, TeraLasso, IndLasso performances measured by NRMSE in predicting the last frame of $13$-frame video sequences leading to B- and MX-class solar flares. The NRMSEs are computed by averaging across testing samples and AIA channels for each pixel. 2D images of NRMSEs are shown to indicate that certain areas on the images (usually associated with the most abrupt changes of the magnetic field/solar atmosphere) are harder to predict than the rest. SG-PALM achieves the best overall NRMSEs across pixels. B flares are generally easier to predict due to both a larger number of samples in the training set and smoother transitions from frame to frame within a video (see the supplemental material for details).} \label{fig:nrmse_comparison} \end{figure*} \begin{figure*}[!tbh] \centering \begin{tabular}{@{}c@{}} Predicted examples - B vs. M/X \\ \rotatebox{90}{\qquad AR B} \includegraphics[width=0.85\textwidth]{Figures/B_forward_yhat_all_model_test10_channel0_0_12_60.png} \\ \rotatebox{90}{\qquad AR B} \includegraphics[width=0.85\textwidth]{Figures/B_forward_yhat_all_model_test10_channel1_0_12_60.png} \\ \rotatebox{90}{\quad AR M/X} \includegraphics[width=0.85\textwidth]{Figures/MX_forward_yhat_all_model_test0_channel0_0_12_60.png} \\ \rotatebox{90}{\quad AR M/X} \includegraphics[width=0.85\textwidth]{Figures/MX_forward_yhat_all_model_test0_channel1_0_12_60.png} \end{tabular} \caption{Examples of one-hour ahead prediction of the first two AIA channels of last frames of $13$-frame videos, leading to B- (first two rows) and MX-class (last two rows) flares, produced by the SG-PALM, Tlasso, TeraLasso, IndLasso algorithms, comparing to the real image (far left column). 
Note that in general linear forward predictors tend to underestimate the contrast ratio of the images. The proposed SG-PALM produced the best-quality images in terms of both the spatial structures and contrast ratios. See the supplemental material for examples of predicted images from the HMI instrument.} \label{fig:predicted_vs_real_img} \end{figure*} \subsection{Physical Interpretability} To explain the advantages of the proposed model over other similar models (e.g., Tlasso, TeraLasso), we provide further discussion here on the connection between the Sylvester generating model and PDEs. Consider the 2D spatio-temporal process $u(\mat{x},t)$: \begin{equation}\label{eqn:convec-diff} \partial u / \partial t = \theta \sum_{i=1}^2 \partial^2 u / \partial x_i^2 + \epsilon \sum_{i=1}^2 \partial u / \partial x_i, \end{equation} where $\theta,\epsilon$ are positive real (unknown) coefficients. This is the basic form of a class of parabolic and hyperbolic PDEs, the Convection-Diffusion equation, which generalizes the Poisson equation presented in Section~\ref{sec:background} by incorporating temporal evolution. These equations are closely related to the Navier-Stokes equations commonly used in stochastic modelling for weather and climate prediction. Coupled with Maxwell's equations, they can be used to model and study magneto-hydrodynamics~\citep{roberts2006slow}, which characterizes solar activities including flares. After finite-difference discretization, Equation~\eqref{eqn:convec-diff} is equivalent to the Sylvester matrix equation $\mat{A}_{\theta,\epsilon}\mat{U}_t + \mat{U}_t\mat{A}_{\theta,\epsilon} = \mat{U}_{t-1}$, where $\mat{U}_t=(u((i,j),t))_{ij}$ and $\mat{A}_{\theta,\epsilon}$ is a tridiagonal matrix with values that depend on the coefficients $\theta,\epsilon$ and the discretization step sizes.
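To make the discretization concrete, the sketch below assembles a tridiagonal factor from central finite differences of the diffusion and convection terms in one spatial direction. The grid spacing, zero (Dirichlet) boundary handling, and coefficient values are illustrative assumptions, not the exact stencil used in the text.

```python
import numpy as np

def convection_diffusion_factor(n, theta, eps, h=1.0):
    """Tridiagonal matrix discretizing theta * d^2/dx^2 + eps * d/dx on an
    n-point grid with spacing h, using central differences and zero boundaries."""
    main = -2.0 * theta / h**2 * np.ones(n)                 # diagonal stencil weight
    upper = (theta / h**2 + eps / (2 * h)) * np.ones(n - 1)  # weight on u_{i+1}
    lower = (theta / h**2 - eps / (2 * h)) * np.ones(n - 1)  # weight on u_{i-1}
    return np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

A = convection_diffusion_factor(5, theta=1.0, eps=0.1)
# A acts on a discretized 2D field U through the Sylvester form A U + U A,
# the structure underlying the generating equation discussed in the text.
```

Only the three central diagonals are nonzero, which is the source of the sparsity that the Sylvester graphical model exploits.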
Assume a linear Gaussian state-space model for some observed process $\mat{X}_t$ governed by the Convection-Diffusion dynamics: \begin{equation*} \begin{aligned} & \mat{A}_{\theta,\epsilon}\mat{U}_t + \mat{U}_t\mat{A}_{\theta,\epsilon} = \mat{U}_{t-1}, \\ & \mat{X}_t = \mat{U}_t + \mat{V}_t, \end{aligned} \end{equation*} where $\mat{V}_t \sim \mathcal{N}(\mat{0},\sigma^2\mat{I})$ is time-invariant white noise. Then the precision matrix of the true process $\mat{U}_t$ evolves as $\mat\Omega_t = \mat{A}_{\theta,\epsilon} \mat\Omega_{t-1} \mat{A}_{\theta,\epsilon}^T + \sigma^2\mat{I}$. Note that this is not necessarily sparse as assumed by the Sylvester graphical model, but the steady-state precision matrix satisfies $\mat\Omega_{\infty} = \mat{A}_{\theta,\epsilon} \mat\Omega_{\infty} \mat{A}_{\theta,\epsilon}^T + \sigma^2\mat{I}$, which is indeed sparse because $\mat{A}_{\theta,\epsilon}$ is tridiagonal. This strong connection between the Sylvester graphical model and the underlying physical processes governing solar activities makes the proposed approach particularly suitable for the case study presented in the previous section. Additionally, the learned generating factors $\mat{A}_{\theta,\epsilon}$ could be further used to interpret physical processes that involve both \textit{unknown structure and unknown parameters}. In particular, in Equation~\eqref{eqn:convec-diff}, the coefficients $\theta$ (diffusion constant) and $\epsilon$ (convective constant) determine the dynamics. Similarly, with the estimated Sylvester generating factors (the $\mat\Psi_k$'s), we are not only able to extract the sparsity patterns of the discretized differential operators but also to estimate the coefficients of the underlying magneto-hydrodynamics equation for solar flares. Therefore, SG-PALM can be used as a data-driven method for PDE parameter estimation from physical observations.
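The precision recursion and its steady state can be checked numerically: iterating $\mat\Omega_t = \mat{A} \mat\Omega_{t-1} \mat{A}^T + \sigma^2\mat{I}$ with a stable tridiagonal $\mat{A}$ (spectral radius below one) converges to a fixed point satisfying the steady-state equation. The particular $\mat{A}$ and noise level below are assumptions chosen only for illustration.

```python
import numpy as np

n, sigma2 = 6, 0.5
# An assumed stable tridiagonal generating factor (spectral radius < 1).
A = 0.4 * np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)

# Iterate the precision recursion Omega_t = A Omega_{t-1} A^T + sigma^2 I.
Omega = np.eye(n)
for _ in range(200):
    Omega = A @ Omega @ A.T + sigma2 * np.eye(n)

# At convergence, Omega satisfies the steady-state (discrete Lyapunov) equation.
residual = np.linalg.norm(Omega - (A @ Omega @ A.T + sigma2 * np.eye(n)))
```

The vanishing residual confirms that the iterates reach the fixed point $\mat\Omega_{\infty}$ described in the text; symmetry of $\mat\Omega_t$ is preserved at every step.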
\section{Conclusion}\label{sec:conclusion} We proposed SG-PALM, a proximal alternating linearized minimization method for solving a pseudo-likelihood based sparse tensor-variate Gaussian precision matrix estimation problem. A geometric rate of convergence of the proposed algorithm is established, building upon recent advances in the theory of PALM-type algorithms. We demonstrated that SG-PALM outperforms the coordinate-wise minimization method in general, and that in ultra-high dimensional settings SG-PALM can be faster by at least an order of magnitude. A link between the Sylvester generating equation underlying the graphical model and Convection-Diffusion-type PDEs governing certain physical processes was established. This connection was illustrated on a novel astrophysics application, where multi-instrument imaging datasets characterizing solar flare events were used. The proposed methodology was able to robustly forward predict both the patterns and intensities of the solar atmosphere, yielding potential insights into the underlying physical processes that govern the flaring events. \subsubsection*{Acknowledgements} The authors thank Zeyu Sun and Xiantong Wang for their help in pre-processing the solar flare datasets. The research was partially supported by US Army grant W911NF-15-1-0479 and NASA grant 80NSSC20K0600. \clearpage \bibliographystyle{icml2021}
\section{Introduction} \label{sec:intro} It has recently been shown that attention mechanisms can boost the performance of neural networks in various tasks by learning to focus on relatively important and salient parts of input signals. Most notably, attention-based recurrent neural networks have achieved great success in machine translation~\cite{bahdanau2014neural, luong2015effective} and image captioning~\cite{xu2015show}. Attention mechanisms have also been widely adopted by deep convolutional neural networks (CNNs) in several forms of feature re-weighting such as spatial attention~\cite{xu2016ask, oktay2018attention}, channel attention~\cite{hu2018squeeze, zhang2018image}, etc~\cite{woo2018cbam, suganuma2018attention}. These methods usually let neural networks learn \textit{what and where} to focus on from their own responses. In this paper, we introduce an effective probabilistic method for integrating human gaze into a spatiotemporal attention mechanism. It has been well discussed in cognitive science that human gaze is closely related to a person's behavioral intention and visual attention~\cite{vickers2009advances, castiello2003understanding, frischen2007gaze, phillips2002infants}. At the same time, however, there is always uncertainty in the process of recording the gaze fixation points because of saccadic suppression\footnote{phenomenon in which visual information is not processed while blinking or under rapid eye movements.}\cite{krekelberg2010saccadic} and measurement errors. Furthermore, it is not always guaranteed that the surrounding region around the point of gaze fixation has the most important information, especially when interacting with multiple objects or under dissociation\footnote{dissociation of the focus of attention is a phenomenon where the points of gaze fixation are not correlated with the visual attention within the field of view.}\cite{brefczynski1999physiological, juan2004dissociation}. 
To address such problems, we present a probabilistic modeling method as follows: First, we propose to represent the locations of gaze fixation points in space and time as structured discrete latent variables to model their uncertainties. Second, we model the distribution of the gaze fixations using a variational method. During the training process, the distribution of gaze fixations is learned using the ground-truth annotations of gaze points. Specifically, we propose to reformulate the discrete training objective so that it can be optimized using an unbiased gradient estimator. The gaze locations are predicted from the learned gaze distribution so that the ground-truth annotations of gaze fixation points are no longer needed in testing scenarios. The predicted gaze locations are integrated into a soft attention mechanism to make the intermediate features more attended to informative regions. It is empirically shown that our gaze-combined attention mechanism leads to a significant improvement of activity recognition performance on egocentric videos by providing additional cues across space and time. We demonstrate the effectiveness of our method on EGTEA~\cite{li2018eye} and GTEA gaze+~\cite{li2015delving}, which are large-scale datasets for egocentric activities provided with gaze measurements. Our method significantly outperforms all the previous state-of-the-art approaches. We also perform an ablation study to verify that probabilistic modeling of gaze data is truly beneficial. We then visualize the spatiotemporal responses of our networks to qualitatively show that the gaze-combined soft attention provides informative attentional cues. \section{Related work} \label{sec:related} Recently, attention-based recurrent neural networks have been widely adopted for neural machine translation~\cite{bahdanau2014neural,luong2015effective} as well as for image captioning~\cite{xu2015show}. 
They generate attention vectors by manipulating hidden states of recurrent neural networks and annotated information. Attention mechanisms have also been incorporated with deep CNNs to improve the representation quality of intermediate features by refining the features~\cite{xu2016ask,oktay2018attention,hu2018squeeze,zhang2018image}. They usually introduce attention modules which find channel-wise or spatial-wise attention maps from the average-pooled feature descriptors. There are more recent works which utilize attention across both spatial and channel dimensions~\cite{woo2018cbam,suganuma2018attention}. These methods have also shown that using both average-pooling and max-pooling in parallel is beneficial to building attention maps. There have been a few attempts to utilize human gaze data for egocentric activity recognition~\cite{fathi2012learning,huang2019mutual,li2018eye}. Fathi~et~al.~\cite{fathi2012learning} propose a conditional generative model that jointly predicts gaze locations and egocentric activity labels. More related and recent works~\cite{huang2019mutual,li2018eye} have shown that incorporating gaze data into an attention mechanism can boost the performance of CNNs on egocentric activity recognition. Huang~et~al.~\cite{huang2019mutual} propose the Mutual Context Network (MCN), which tries to use human gaze for recognizing activities and uses the activity labels for predicting gaze locations. However, MCN has multiple sub-modules that must be trained separately. Furthermore, the inference procedure requires many iterations because of the complicated network architecture. They also use saccades as ground-truth gaze points, which should be ignored to improve the prediction performance. Li~et~al.~\cite{li2018eye} is built on a probabilistic framework similar to ours; however, there are three crucial differences. First, to model the distribution of gaze points for $T$ time steps, they use $T$ independent 2D latent variables.
This totally ignores the temporal correlation of the gaze distribution, which limits the recognition performance. Second, they use the approximated Gumbel-Softmax objective~\cite{jang2016categorical,maddison2016concrete} that introduces a significant bias to a gradient estimator. As a result, the recognition performance of their method is further limited. Third, they directly apply the sampled gaze points $\mathbf{\mspace{1mu}z}^*$ to the input feature map without any modifications. This is vulnerable to situations where the gaze points are misleading and not informative. On the contrary, we use structured discrete latent variables to model the gaze distribution in a 3D space. We apply the direct optimization method to handle this structured latent space, which also minimizes the bias. Moreover, we use the sigmoid activated linear mapping on the sampled gaze points to produce a soft attention map. \section{Background: Direct optimization} \label{sec:background} Direct optimization~\cite{lorberbom2018direct} was originally proposed for learning a variational auto-encoder (VAE) with discrete latent variables. The objective of VAE is given by: \begin{equation} \label{eq:vae-loss} \mathcal{L}_{\text{VAE}}=-\mathbb{E}_{\mathbf{\mspace{1mu}z} \sim q_{\phi}}[\log p_{\theta}(\mathbf{\mspace{1mu}x}|\mathbf{\mspace{1mu}z})] + D_{\mathrm{KL}}[q_{\phi}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})||p_{\theta}(\mathbf{\mspace{1mu}z})] \end{equation} where $\mathbf{\mspace{1mu}x}$ is an input and $\mathbf{\mspace{1mu}z}$ is a discrete latent variable. Computing the expected log-likelihood requires drawing samples from the discrete distribution $q_{\phi}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})$, which makes it difficult to optimize. Gumbel-Softmax reparameterization technique~\cite{jang2016categorical,maddison2016concrete} was recently suggested to relax the discrete variables to continuous counterparts. 
However, this continuous relaxation is known to introduce a significant bias when evaluating gradients and to become intractable in high-dimensional structured latent spaces. The direct optimization method introduces an unbiased gradient estimator for the discrete VAE that can be used even in high-dimensional structured latent spaces. For simplicity, let us rewrite the log-probabilities as follows: $h_{\phi}(\mathbf{\mspace{1mu}x}, \mathbf{\mspace{1mu}z})=\log q_{\phi}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})$, $f_{\theta}(\mathbf{\mspace{1mu}x}, \mathbf{\mspace{1mu}z})=\log p_{\theta}(\mathbf{\mspace{1mu}x}|\mathbf{\mspace{1mu}z})$. By using the Gumbel-Max trick~\cite{maddison2014sampling}, the expected log-likelihood can be reformulated as follows: $\mathbb{E}_{\mathbf{\mspace{1mu}z} \sim q_{\phi}}[\log p_{\theta}(\mathbf{\mspace{1mu}x}|\mathbf{\mspace{1mu}z})] = \mathbb{E}_{\gamma \sim G}[f_{\theta}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*)]$ where $\mathbf{\mspace{1mu}z}^*=\argmax_{\hat{\mathbf{\mspace{1mu}z}}} \{ h_{\phi}(\mathbf{\mspace{1mu}x},\hat{\mathbf{\mspace{1mu}z}}) +\gamma(\hat{\mathbf{\mspace{1mu}z}})\}$, $G$ denotes a Gumbel distribution, and $\gamma(\hat{\mathbf{\mspace{1mu}z}})$ represents a random variable sampled from the Gumbel distribution that is associated with each input $\hat{\mathbf{\mspace{1mu}z}}$.
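The Gumbel-Max trick admits a quick numerical sanity check: perturbing the log-probabilities of a discrete distribution with i.i.d. Gumbel noise and taking the argmax reproduces exact categorical sampling. The distribution and sample count in this sketch are illustrative choices, not part of the method itself.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.2, 0.5, 0.3])   # an illustrative discrete q(z|x)
log_q = np.log(probs)

def gumbel_max_sample(log_q, rng):
    """Draw z* = argmax_z { log q(z|x) + gamma(z) } with gamma ~ Gumbel(0, 1)."""
    gamma = rng.gumbel(size=log_q.shape)
    return int(np.argmax(log_q + gamma))

samples = [gumbel_max_sample(log_q, rng) for _ in range(20000)]
freq = np.bincount(samples, minlength=probs.size) / len(samples)
# Empirical frequencies approximate probs, confirming the reparameterization.
```

The same perturb-and-argmax step is what makes the direct gradient estimator above computable: only argmax evaluations of the perturbed score are required.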
Then, the proposed gradient estimator for the expectation term is given in the following form: \begin{multline} \label{eq:vae-direct} \nabla_{\phi} \, \mathbb{E}_{\gamma \sim G}[f_{\theta}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*)] = \lim\limits_{\epsilon \to 0} \frac{1}{\epsilon} \Big( \mathbb{E}_{\gamma \sim G}\big[\nabla_{\phi} \, h_{\phi} (\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*(\epsilon)) \\ - \nabla_{\phi} \, h_{\phi}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*) \big] \Big) \end{multline} where $\mathbf{\mspace{1mu}z}^*(\epsilon)=\argmax_{\hat{\mathbf{\mspace{1mu}z}}} \{ \epsilon f_{\theta}(\mathbf{\mspace{1mu}x},\hat{\mathbf{\mspace{1mu}z}}) + h_{\phi}(\mathbf{\mspace{1mu}x},\hat{\mathbf{\mspace{1mu}z}}) +\gamma(\hat{\mathbf{\mspace{1mu}z}})\}$. The suggested gradient estimator is unbiased as the perturbation parameter $\epsilon$ goes to 0, but a small $\epsilon$ induces a large variance in the estimate. Therefore, in practice, we set $\epsilon$ to a large value at the beginning of the training process and decrease it progressively. \section{Method} \label{sec:method} We start this section by building a probabilistic framework and the loss function of our method. Next, we propose a 3D gaze modeling approach using structured discrete latent variables. We then introduce the direct loss minimization approach~\cite{lorberbom2018direct} that is used for optimization in the presence of the structured discrete latent variables. Finally, we describe our overall network architecture for activity recognition that integrates the gaze information into attention. \subsection{Probabilistic framework} \label{subsec:framework} Let us consider a recognition task of predicting activity labels $\mathbf{\mspace{1mu}y}$ given an input clip of egocentric videos $\mathbf{\mspace{1mu}x}$, which is equivalent to finding a conditional probability $p(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x})$.
We represent the gaze locations in space and time with a discrete latent variable $\mathbf{\mspace{1mu}z}$. Then, the conditional probability is written as follows by the law of total probability: \begin{equation} \label{eq:pf1} p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x}) = \int p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})p_{\theta}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})d\mathbf{\mspace{1mu}z} \end{equation} where $\theta$ denotes the parameters of a network for recognition. Since $\mathbf{\mspace{1mu}z}$ generally has an intractable posterior distribution, we upper bound the negative log-likelihood by taking the negative log on both sides of Equation~(\ref{eq:pf1}) and introducing the variational approximation $q_{\phi}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})$ for gaze modeling as follows: \begin{multline} \label{eq:pf2} -\log p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x}) \leq \int -q_{\phi} \log \Big(p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})\frac{p_{\theta}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})}{q_{\phi}} \Big) d\mathbf{\mspace{1mu}z} \\ = -\mathbb{E}_{\mathbf{\mspace{1mu}z} \sim q_{\phi}}[\log p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})] + D_{\mathrm{KL}}[q_{\phi}||p_{\theta}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})] \end{multline} where $\phi$ denotes parameters of a network for gaze modeling. We use the upper bound in Equation~(\ref{eq:pf2}) as our loss function. \subsection{Reformulating the training objective} \label{subsec:objective} In order to compute the expected log-likelihood of the loss function in Equation~(\ref{eq:pf2}), we need to sample the gaze points from $q_{\phi}$. We apply the Gumbel-Max trick~\cite{maddison2014sampling} that is an efficient method of drawing samples from a discrete distribution. 
For simplicity, let us rewrite the log-probability as follows: $h_{\phi}(\mathbf{\mspace{1mu}x}, \mathbf{\mspace{1mu}z})=\log q_{\phi}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})$. Then, we can draw a gaze sample $\mathbf{\mspace{1mu}z}^*$ using the following equation: \begin{equation} \label{eq:opt1} \mathbf{\mspace{1mu}z}^*=\argmax \limits_{\hat{\mathbf{\mspace{1mu}z}}} \{ h_{\phi}(\mathbf{\mspace{1mu}x},\hat{\mathbf{\mspace{1mu}z}}) +\gamma(\hat{\mathbf{\mspace{1mu}z}})\} \end{equation} where $\gamma(\hat{\mathbf{\mspace{1mu}z}})$ represents a random variable sampled from a Gumbel distribution that is associated with each input $\hat{\mathbf{\mspace{1mu}z}}$. However, $\mathbf{\mspace{1mu}z}^*$ involves a non-differentiable operation, $\argmax$, so we cannot evaluate the gradient of the expectation term with respect to $\phi$ using a standard backpropagation algorithm. Here, we propose to apply the direct optimization method~\cite{lorberbom2018direct} to optimize the expected log-likelihood term. In the following, we demonstrate that our loss function can be optimized using the direct optimization method. Since our task is to classify activity labels, we can model $\mathbf{\mspace{1mu}y}$ given $\mathbf{\mspace{1mu}x}$ and $\mathbf{\mspace{1mu}z}$ with a categorical distribution. Specifically, suppose that there are $C$ predefined activity classes. Then, $p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})=\prod_{c=1}^{C}p_{c}^{\mathbbm{1}_{y=c}}$ for some class-wise probabilities $p_{c}$'s that depend on $\mathbf{\mspace{1mu}x}$ and $\mathbf{\mspace{1mu}z}$, where $\mathbbm{1}_{y=c}$ is an indicator function that is equal to 1 if $y=c$ and 0 otherwise.
This allows us to rewrite $\log p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})$ in the following form: \begin{equation} \label{eq:opt2} \log p_{\theta}(\mathbf{\mspace{1mu}y}|\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})=\sum_{c=1}^{C}\mathbbm{1}_{y=c} \, f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}) \end{equation} where $f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})$'s are the corresponding class-wise log-probabilities. Now, we propose to reformulate the expected log-likelihood using the class-wise log-probabilities: \begin{align} \mathbb{E}_{\mathbf{\mspace{1mu}z} \sim q_{\phi}}[\log p_{\theta}] &= \sum_{\mathbf{\mspace{1mu}z}} \Big( \mathbb{P}_{\gamma \sim G}[\mathbf{\mspace{1mu}z}^* = \mathbf{\mspace{1mu}z}] \sum_{c=1}^{C}\mathbbm{1}_{y=c} \, f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}) \Big) \nonumber \\ \label{eq:opt3} &= \sum_{c=1}^{C} \mathbbm{1}_{y=c} \, \mathbb{E}_{\gamma \sim G}[f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*)] \end{align} where $G$ denotes the Gumbel distribution. In Equation~(\ref{eq:opt3}), we show that the expected log-likelihood can be decomposed into a sum of multiple expectation terms of the class-wise log-probabilities, each multiplied by an indicator function.
Since the gradient is a linear operator, we can estimate the gradient of the expected log-likelihood as follows: \begin{equation} \label{eq:opt4} \nabla_{\phi} \, \mathbb{E}_{\mathbf{\mspace{1mu}z} \sim q_{\phi}}[\log p_{\theta}] = \sum_{c=1}^{C} \mathbbm{1}_{y=c} \, \nabla_{\phi} \, \mathbb{E}_{\gamma \sim G}[f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*)] \end{equation} where each class-wise gradient estimator $\nabla_{\phi} \, \mathbb{E}_{\gamma \sim G}[f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*)]$ is computed by applying the direct optimization: \begin{multline} \label{eq:opt5} \nabla_{\phi} \, \mathbb{E}_{\gamma \sim G}[f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*)] = \lim\limits_{\epsilon \to 0} \frac{1}{\epsilon} \Big( \mathbb{E}_{\gamma \sim G}\big[\nabla_{\phi} \, h_{\phi} (\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*(\epsilon,c)) \\ - \nabla_{\phi} \, h_{\phi}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}^*) \big] \Big) \end{multline} where $\mathbf{\mspace{1mu}z}^*(\epsilon,c)=\argmax_{\hat{\mathbf{\mspace{1mu}z}}} \{ \epsilon f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\hat{\mathbf{\mspace{1mu}z}}) + h_{\phi}(\mathbf{\mspace{1mu}x},\hat{\mathbf{\mspace{1mu}z}}) +\gamma(\hat{\mathbf{\mspace{1mu}z}})\}$. Other gradients, such as the gradient of the expected log-likelihood with respect to $\theta$, are obtained using a standard backpropagation algorithm. As a result of the reformulation, we can optimize the training objective without introducing bias into the gradient estimator. \subsection{Structured gaze modeling} \label{subsec:gaze-modeling} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{network.PNG} \caption{An illustration of our overall network architecture. We use the two-stream I3D~\cite{carreira2017quo} as a backbone network.
To model the gaze distribution $q_{\phi}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})$, we use the same convolutional blocks of the I3D (\texttt{Mixed\_5b-c}) and add three convolutional layers (conv) on top of it. The two intermediate features at the end of the 4th max-pooling layer (\texttt{MaxPool\_5a}) are added in an element-wise fashion and used as input to the network for gaze modeling. The sampled gaze point is passed through a fully-connected layer (FC) and a sigmoid function to produce a soft attention map. \label{fig:network} } \end{figure*} We propose to use structured discrete latent variables to model the gaze locations as follows. First, we will write $\mathcal{Z}$ to denote the set of all possible $\mathbf{\mspace{1mu}z}$. Let us say that we want to model the gaze locations in a 3D space: $\mathcal{Z}=\mathbb{R}^{T \times H \times W}$ where $T$ is the length of the temporal dimension and $H$ and $W$ represent the height and width of spatial dimensions. For each time step, gaze is fixated at a single location of an $H \times W$ dimensional space. Therefore, it is more reasonable to represent the gaze locations with a sequence of 2D discrete random variables rather than with a single 3D random variable. Specifically, we assign a 2D discrete random variable to each time step: $\mathbf{\mspace{1mu}z}=(\mathbf{\mspace{1mu}z}_1,...,\mathbf{\mspace{1mu}z}_t,...,\mathbf{\mspace{1mu}z}_T)$ where each $\mathbf{\mspace{1mu}z}_t$ is one-hot encoded. For example, if the gaze is fixated at $(h,w)$ on the $t$-th time step, $\mathbf{\mspace{1mu}z}_{t}(j,k)=1$ if $(j,k)=(h,w)$ and 0 otherwise. Computing $\mathbf{\mspace{1mu}z}^*(\epsilon,c)$ in Equation~(\ref{eq:opt5}) requires evaluating $f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})$ for every $\mathbf{\mspace{1mu}z}$, which causes serious overhead.
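In the same toy finite-latent-space setting as before (hypothetical scores standing in for the network outputs), the perturbed maximizer $\mathbf{z}^*(\epsilon,c)$ used by the direct estimator can be sketched as follows; note that it touches $f_{\theta}^{c}(\mathbf{x},\mathbf{z})$ for every candidate $\mathbf{z}$, which is exactly the overhead mentioned above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scores over a small finite latent space Z of 8 candidates.
h = rng.normal(size=8)      # h_phi(x, z) for each z in Z
f_c = rng.normal(size=8)    # class-wise log-probabilities f_theta^c(x, z)
gamma = rng.gumbel(size=8)  # one Gumbel draw gamma(z) per candidate

eps = 0.5

# Unperturbed and perturbed maximizers used by the direct gradient estimator.
z_star = int(np.argmax(h + gamma))
z_star_eps = int(np.argmax(eps * f_c + h + gamma))  # z*(eps, c)

# Computing z*(eps, c) needs f_theta^c(x, z) for EVERY z in Z;
# as eps -> 0 the two maximizers coincide.
```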
Although our structured gaze modeling reduces the number of possible realizations from $2^{THW}$ to $(HW)^{T}$, it is still computationally expensive. We propose to further reduce the number of computations by applying a low-dimensional approximation as suggested by Lorberbom et al.~\cite{lorberbom2018direct}. In particular, we approximate $f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z})=\sum_{t=1}^{T} f_{t}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}_{t};\theta)$ where $f_{t}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}_{t};\theta)=f_{\theta}^{c}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}_{1}^{*},...,\mathbf{\mspace{1mu}z}_{t},...\mathbf{\mspace{1mu}z}_{T}^{*})$. This low-dimensional approximation further reduces the number of possible realizations from $(HW)^{T}$ to $THW$. We implement the realization of $\mathbf{\mspace{1mu}z}$ using a batch operation so that we can obtain $\mathbf{\mspace{1mu}z}^*(\epsilon,c)$ in a single forward pass. \subsection{Network architecture} \label{subsec:network} The overall network architecture is illustrated in Figure~\ref{fig:network}. As a backbone network, we use the two-stream I3D~\cite{carreira2017quo} which is a popular network for activity recognition tasks (\#Params: 24.7M, FLOPs: 80.2G). To model the gaze distribution $q_{\phi}(\mathbf{\mspace{1mu}z}|\mathbf{\mspace{1mu}x})$, we use the same convolutional blocks of the I3D (\texttt{Mixed\_5b-c}) and add three convolutional layers (kernel size=[(1,3,3), (1,3,3), (1,1,1)], stride=[(1,1,1), (1,1,1), (1,1,1)]) on top of it. We add the two intermediate features at the end of the 4th max-pooling layer (\texttt{MaxPool\_5a}) and use the added feature map as an input to the network for gaze modeling. We draw a sample $\mathbf{\mspace{1mu}z}^*$ using Equation~(\ref{eq:opt1}), which is then passed through a fully-connected layer and a sigmoid function to produce a soft attention map.
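As a quick sanity check of the realization counts from the structured gaze modeling above, the three modeling choices can be compared for example dimensions $T=3$ and $H=W=7$:

```python
# Number of candidate gaze realizations under each modeling choice,
# for example dimensions T = 3 and H = W = 7.
T, H, W = 3, 7, 7

unstructured = 2 ** (T * H * W)  # arbitrary binary 3D tensors: 2^(THW)
structured = (H * W) ** T        # one 2D one-hot location per time step: (HW)^T
low_dim = T * H * W              # low-dimensional approximation: THW

print(structured, low_dim)  # 117649 147
```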
The two features at the end of the 5th convolutional block (\texttt{Mixed\_5c}) are added in an element-wise way, and we apply the soft attention map to the added feature map via a residual connection. Our final network has \#Params: 31.9M, FLOPs: 81.3G. \section{Experiments} \label{sec:exp} We evaluate our method on EGTEA~\cite{li2018eye}, which is a large-scale dataset with over 10k video clips of 106 fine-grained egocentric activities and annotated gaze fixations. It is demonstrated that our method outperforms other previous state-of-the-art approaches. Furthermore, we provide a qualitative analysis by visualizing the spatiotemporal responses of our network. We perform additional experiments on GTEA Gaze+~\cite{li2015delving} that consists of 2k videos with 44 activity categories. \begin{table*}[t] \centering \begin{tabular}{L{4.6cm}|L{4.6cm}|C{2.3cm}|C{2.3cm}} \toprule Method & Backbone network & Acc (\%) & Acc$^{*}$ (\%) \\ \midrule Li~et~al.~\cite{li2018eye} & I3D~\cite{carreira2017quo} & 53.30 & - \\ Sudhakaran~et~al.~\cite{sudhakaran2018attention} & ResNet34+LSTM~\cite{he2016deep,xingjian2015convolutional} & - & 60.76 \\ LSTA~\cite{sudhakaran2019lsta} & ResNet34+LSTM~\cite{he2016deep,xingjian2015convolutional} & - & 61.86 \\ MCN~\cite{huang2019mutual} & I3D~\cite{carreira2017quo} & 55.63 & - \\ Kapidis~et~al.~\cite{kapidis2019multitask} & MFNet~\cite{chen2018multi} & 59.44 & 66.59 \\ Lu~et~al.~\cite{lu2019learning} & I3D~\cite{carreira2017quo} & 60.54 & 68.60 \\ \midrule Ours & I3D~\cite{carreira2017quo} & \textbf{62.84} & \textbf{69.58} \\ \bottomrule \end{tabular} \caption{Performance comparison of our method with other state-of-the-art methods on EGTEA dataset~\cite{li2018eye}. We report both Acc (mean class accuracy) and Acc$^{*}$ (ratio of correctly classified videos to the total number of videos). 
Acc is typically lower than Acc$^{*}$ due to an imbalanced class distribution of the dataset.} \label{tab:act} \end{table*} \begin{table*}[t] \centering \begin{tabular}{L{4.6cm}|L{4.6cm}|C{2.3cm}|C{2.3cm}} \toprule Method & Backbone network & Acc (\%) & Acc$^{*}$ (\%) \\ \midrule Sudhakaran~et~al.~\cite{sudhakaran2018attention} & ResNet34+LSTM~\cite{he2016deep,xingjian2015convolutional} & - & 60.13 \\ MCN~\cite{huang2019mutual} & I3D~\cite{carreira2017quo} & 61.14 & - \\ Ma~\textit{et~al.}~\cite{ma2016going} & FCN32s+CNN-M-2048~\cite{long2015fully,chatfield2014return} & - & 66.40 \\ Shen~\textit{et~al.}~\cite{shen2018egocentric} & SSD+LSTM~\cite{liu2016ssd} & - & 67.10 \\ \midrule Ours & I3D~\cite{carreira2017quo} & \textbf{64.81} & \textbf{68.67} \\ \bottomrule \end{tabular} \caption{Performance comparison on the GTEA Gaze+~\cite{li2015delving} dataset. We report both Acc (mean class accuracy) and Acc$^{*}$ (ratio of correctly classified videos to the total number of videos). Ours again achieves the best performance.} \label{tab:gtea} \end{table*} \subsection{Implementation details} \label{subsec:imp} \textbf{Training/testing process. } First, we resize each frame to $256\times340$ and generate optical flow frames by using the TV-L1 algorithm~\cite{zach2007duality}. Following the previous works on the EGTEA dataset~\cite{huang2019mutual,li2018eye}, we use the I3D pre-trained on Kinetics dataset~\cite{carreira2017quo} as a backbone network. During the training process, we randomly sample 24-frame input segments and randomly crop $224\times224$ regions for each segment. We train our network in an end-to-end manner with a batch size of 24 on 8299 training video clips using the first split of the dataset. We use the SGD algorithm with 0.9 momentum and 0.00004 weight decay. The learning rate starts at 0.032 and decays two times by a factor of 10 after 8k and 15k iterations. 
$\epsilon$ is set to 1000 in the beginning and decreases exponentially with a 0.001 annealing rate. We set the minimal $\epsilon$ to be 0.1. $\epsilon$ goes to this minimum value within 10k iterations. The whole training process of 18K iterations takes less than 12 hours using 4 GPUs (TITAN Xp). For the evaluation, we divide each testing video into non-overlapping 24-frame segments. The whole evaluation process takes less than a half hour using a single GPU. \textbf{Dimensions of the latent space. } For better comparison, we decided to follow the previous approaches for the dimensions of the latent space. Li~et~al.~\cite{li2018eye} suggests predicting gaze points for every 8 frames using the fact that a common duration of gaze fixation is roughly the same as the time interval of 8 frames (about 300ms). It is also suggested to reduce the spatial dimensions of the space for gaze distribution by a factor of 32. This is reasonable since our final goal is to improve the recognition performance, not to predict the exact gaze location in a high-dimensional space. As a result, the dimensions of the 3D latent space for gaze points described in Section~\ref{subsec:gaze-modeling} become $\mathcal{Z}=\mathbb{R}^{3 \times 7 \times 7}$ as $T=24/8$ and $H=W=224/32$. \subsection{Comparison with the State-of-the-art} \label{subsec:comparison} We compare our method with other state-of-the-art methods. Performance comparison on the EGTEA dataset is reported in Table~\ref{tab:act}. We want to point out that Li~et~al.~\cite{li2018eye}, MCN~\cite{huang2019mutual}, and Lu~et~al.~\cite{lu2019learning} use the same backbone network as ours, which is two-stream I3D~\cite{carreira2017quo}. Our method outperforms all other methods by a large margin. We also evaluate our method on the GTEA Gaze+~\cite{li2015delving}, which is another commonly-used dataset for egocentric activity recognition provided with gaze measurements. It is collected by 6 different human subjects. 
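Returning to the training details above, our reading of the stated $\epsilon$ schedule (initial value 1000, exponential decay with a 0.001 annealing rate, floored at 0.1) can be sketched as follows; the helper below is hypothetical and the authors' exact implementation may differ:

```python
import math

def epsilon_at(iteration, eps0=1000.0, rate=0.001, eps_min=0.1):
    """Exponential annealing with a floor: eps(i) = max(eps_min, eps0 * exp(-rate * i))."""
    return max(eps_min, eps0 * math.exp(-rate * iteration))

print(epsilon_at(0))       # 1000.0
print(epsilon_at(10_000))  # 0.1 -- the floor is reached within 10k iterations
```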
Following previous works, we perform a leave-one-subject-out cross validation. The performance comparison is reported in Table~\ref{tab:gtea}. Our method again achieves the best performance among the recent approaches. \begin{figure*}[t] \centering \hspace{-6pt} \adjustbox{valign=t}{\begin{minipage}[t]{0.49\textwidth} \small \includegraphics[width=1\textwidth]{ex-a.png}\\[-0.5ex] \hspace*{13.7em}(a)\\[0.7ex] \includegraphics[width=1\textwidth]{ex-b.png}\\[-0.5ex] \hspace*{13.7em}(b)\\[-0.8ex] \hspace*{1.2em}\includegraphics[width=0.925\textwidth]{ex-t.png} \end{minipage}} \hspace{0.029cm}\vline\hspace{0.02cm} \adjustbox{valign=t}{\begin{minipage}[t]{0.49\textwidth} \small \includegraphics[width=1\textwidth]{ex-c.png}\\[-0.5ex] \hspace*{12.2em}(c)\\[0.7ex] \includegraphics[width=1\textwidth]{ex-d.png}\\[-0.5ex] \hspace*{12.2em}(d)\\[-0.8ex] \hspace*{0.14em}\includegraphics[width=0.925\textwidth]{ex-t.png} \end{minipage}} \caption{Qualitative results of our model and the baseline network (I3D). We use Grad-CAM++~\cite{chattopadhay2018grad} to visualize the spatiotemporal responses of the last layer of each model. We can observe that our method makes the network better at attending to objects or regions related to the activity.
Activity label of (a): ``Move Around bacon'', (b): ``Cut cucumber'', (c): ``Cut bell\_pepper'', (d): ``Put lettuce''.} \label{fig:qual} \end{figure*} \begin{table*}[t] \centering \begin{tabular}{L{6.2cm}|C{1.9cm}C{1.9cm}|C{1.9cm}|C{1.9cm}} \toprule \multicolumn{1}{l|}{\multirow{2}{*}{Method}} & \multicolumn{2}{c|}{Using gaze data during} & \multirow{2}{*}{Acc (\%)} & \multirow{2}{*}{Acc$^{*}$ (\%)} \\ & Training & Testing & & \\ \midrule I3D w/ Gaze & $\checkmark$ & $\checkmark$ & 59.56 & 67.46 \\ I3D w/ Gumbel-Softmax~\cite{jang2016categorical,maddison2016concrete} & $\checkmark$ & & 61.24 & 68.69 \\ \midrule Ours & $\checkmark$ & & \textbf{62.84} & \textbf{69.58} \\ \bottomrule \end{tabular} \caption{Performance comparison of different ablative settings. Interestingly, I3D w/ Gaze, which also uses gaze data in the testing process, performs the worst. The results demonstrate that our structured gaze modeling with direct optimization is effective in improving the performance of egocentric activity recognition. Qualitative analysis regarding this ablation study is provided in the next section.} \label{tab:ablation} \end{table*} \subsection{Qualitative analysis} \label{subsec:qualitative} We visualize the response of the last convolutional layer of our model and of I3D~\cite{carreira2017quo} to see how the gaze integration affects the top-down attention of the two networks. We use Grad-CAM++~\cite{chattopadhay2018grad}, which is a recently proposed visualization method for CNNs. It is an improved and generalized version of the popular Grad-CAM~\cite{selvaraju2017grad}. It is recently shown that Grad-CAM++ is effective in understanding 3D CNNs on the task of activity recognition by visualizing the locations attended by the networks across space and time. The visualization results are illustrated in Figure~\ref{fig:qual}. We can clearly observe that our model is better at attending to activity-related objects or regions.
Specifically, our model is more sensitive to the target objects. The baseline network is sometimes distracted by the background objects. The results qualitatively demonstrate that modeling gaze distributions improves the attentional ability of the networks and the performance of egocentric activity recognition. \subsection{Ablation study} \begin{figure*}[t] \centering \hspace{-6pt} \adjustbox{valign=t}{\begin{minipage}[t]{0.49\textwidth} \small \includegraphics[width=1\textwidth]{add-a.png}\\[-0.5ex] \hspace*{13.7em}(a)\\[-0.8ex] \hspace*{1.54em}\includegraphics[width=0.925\textwidth]{ex-t.png} \end{minipage}} \hspace{0.029cm}\vline\hspace{0.02cm} \adjustbox{valign=t}{\begin{minipage}[t]{0.49\textwidth} \small \includegraphics[width=1\textwidth]{add-b.png}\\[-0.5ex] \hspace*{12.2em}(b)\\[-0.8ex] \hspace*{0.04em}\includegraphics[width=0.925\textwidth]{ex-t.png} \end{minipage}} \caption{Our method is robust to situations where the ground-truth gaze fixations do not carry activity-related information and are misleading. White marks denote ground-truth annotations of gaze fixations and black marks denote the predicted gaze locations. The predicted gaze locations are successfully fixated on the target objects when the ground-truth annotations are misleading. It demonstrates that our structured gaze modeling with direct optimization is effective. Activity label of (a) is ``Mix pasta'' and (b) is ``Move Around bacon''.} \label{fig:add} \end{figure*} \label{subsec:ablation} We perform an ablation study on EGTEA dataset~\cite{li2018eye} as reported in Table~\ref{tab:ablation}. ``I3D w/ Gaze'' refers to the method of using the ground-truth gaze annotations without any gaze modeling. For each input segment, the 3D tensor representing the ground-truth gaze locations $\mathbf{\mspace{1mu}z}_{\text{GT}}$ is first down-sampled to have $3 \times 7 \times7$ dimensions and is applied with a fully-connected layer and the sigmoid function to produce a soft-attention map. 
This method requires using the gaze data in testing because it does not model the distribution of gaze points. ``I3D w/ Gumbel-Softmax~\cite{jang2016categorical,maddison2016concrete}'' uses the Gumbel-Softmax reparameterization trick to relax the discrete objective to make it continuous. Specifically, it draws a relaxed gaze sample $\mathbf{\mspace{1mu}z}_{\text{GS}}^{*}$ instead of $\mathbf{\mspace{1mu}z}^{*}$ in Equation~(\ref{eq:opt1}) using the following equation: $\mathbf{\mspace{1mu}z}_{\text{GS}}^{*}=\softmax \big\{ \big(h_{\phi}(\mathbf{\mspace{1mu}x},\mathbf{\mspace{1mu}z}) +\gamma(\mathbf{\mspace{1mu}z})\big)/\tau \big\}$. We set $\tau=2$ following the previous work, Li~et~al.~\cite{li2018eye}, that uses the Gumbel-Softmax objective (but takes a different gaze modeling approach). The results indicate that our structured gaze modeling with direct optimization is more effective than the other two methods. Interestingly, ``I3D w/ Gaze'', which also uses gaze data in the testing process, performs the worst. This is probably because some of the ground-truth gaze annotations are not correlated with the actual visual attention. As mentioned in the introduction, measurement error and other uncertainties (saccadic suppression~\cite{krekelberg2010saccadic} and dissociation~\cite{brefczynski1999physiological,juan2004dissociation}) make the annotated gaze points uninformative and sometimes misleading. We argue that our method is capable of learning only the informative gaze distribution that is related to the activities. We qualitatively analyze these interesting results in the following section. \subsection{Robustness to misleading gaze fixations} We perform an additional qualitative analysis to show the robustness of our method to the misleading gaze fixations. Here, misleading gaze points refer to the ground-truth gaze annotations that are not correlated with the actual visual attention.
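For reference, the Gumbel-Softmax relaxation used by the ``I3D w/ Gumbel-Softmax'' baseline can be sketched with hypothetical scores (in the actual model, $h_{\phi}$ is produced by the network):

```python
import numpy as np

rng = np.random.default_rng(3)

h = np.array([0.4, -0.2, 1.1])    # hypothetical h_phi(x, z) over candidate gaze locations
gamma = rng.gumbel(size=h.shape)  # Gumbel noise gamma(z)
tau = 2.0                          # temperature, as in the ablation

# Relaxed (differentiable) sample: softmax((h + gamma) / tau).
logits = (h + gamma) / tau
z_gs = np.exp(logits) / np.exp(logits).sum()

# z_gs lies on the probability simplex rather than being a one-hot vector,
# which is why the relaxation introduces bias relative to the discrete objective.
print(z_gs.sum())  # 1.0
```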
We compare our model with I3D~\cite{carreira2017quo} (without any gaze incorporation) and ``I3D w/ Gaze'' which uses gaze data in training and testing without gaze modeling. We again use Grad-CAM++~\cite{chattopadhay2018grad} to visualize the spatiotemporal activation maps of the last convolutional layer of each model. Figure~\ref{fig:add} illustrates the situations where the ground-truth gaze points are not fixated at the activity-related objects or regions. In these examples, the gaze points are uninformative and misleading: the ground-truth gaze points are fixated on the background, not on the pan. This leads to blurry and noisy activation maps of ``I3D w/ Gaze'' because it uses the misleading ground-truth gaze points directly as a soft-attention map. We can observe that our method is robust to such misleading gaze points while ``I3D w/ Gaze'' is not. Specifically, the predicted gaze locations (denoted as black marks) are successfully fixated on the target objects when the ground-truth annotations (denoted as white marks) are not. It demonstrates the effectiveness of our proposed structured gaze modeling with direct optimization. \section{Additional analysis} We visualize confusion matrices for the baseline network (I3D~\cite{carreira2017quo}) and our method on the EGTEA dataset~\cite{li2018eye} in Figure~\ref{fig:conf}. Our method outperforms the baseline by at least 0.1\% on 28 classes. For better comparison, we also visualize confusion matrices of the two methods on these 28 classes in Figure~\ref{fig:conf2}. We can observe that many activities containing ``Cut", ``Take", and ``Put" benefit from our gaze incorporation.
\begin{figure*}[htbp] \centering \hspace{-6pt} \adjustbox{valign=t}{\begin{minipage}[t]{0.45\linewidth} \small \includegraphics[width=1\linewidth]{cropped_baseline.png}\\[-0.5ex] \hspace*{12.1em}(a) Baseline \end{minipage}} \hspace{0.6cm} \adjustbox{valign=t}{\begin{minipage}[t]{0.45\linewidth} \small \includegraphics[width=1\linewidth]{cropped_ours.png}\\[-0.5ex] \hspace*{12.7em}(b) Ours \end{minipage}} \caption{Confusion matrices for the baseline (I3D~\cite{carreira2017quo}) and ours on the EGTEA dataset~\cite{li2018eye}.} \label{fig:conf} \end{figure*} \begin{figure*}[htbp] \centering \hspace{-6pt} \adjustbox{valign=t}{\begin{minipage}[t]{0.45\linewidth} \small \includegraphics[width=\linewidth]{cropped_top28_baseline.png}\\[-0.5ex] \hspace*{13.6em}(a) Baseline \end{minipage}} \hspace{0.6cm} \adjustbox{valign=t}{\begin{minipage}[t]{0.45\linewidth} \small \includegraphics[width=\linewidth]{cropped_top28_ours.png}\\[-0.5ex] \hspace*{14.5em}(b) Ours \end{minipage}} \caption{Confusion matrices for the baseline and ours on 28 classes where our method beats the baseline by a meaningful margin (0.1\%). We can observe that many activities containing ``Cut", ``Take", and ``Put" are better recognized by our gaze-incorporated model.} \label{fig:conf2} \end{figure*} \section{Conclusion} \label{sec:con} We have presented an effective method of integrating human gaze into attention on the task of egocentric activity recognition. Incorporating gaze data is non-trivial because there is always uncertainty in the process of recording and the regions near the gaze fixation points are sometimes uninformative. Our method addresses both problems with probabilistic modeling and an efficient optimization technique. We implement the overall network structures with simple and powerful 3D CNNs. We evaluate our method in various ways on large-scale datasets. An ablation study demonstrates that incorporating gaze data improves the recognition performance.
This is because gaze is correlated with egocentric activity. Moreover, it shows that our proposed structured gaze modeling provides performance improvements by extracting only the informative cues. Interestingly, modeling gaze distribution is more effective in improving the performance than when using ground-truth gaze measurements. We argue that our model is capable of learning only the informative gaze distribution, which is related to the activities of interest. We also qualitatively analyze the effectiveness of our model using the state-of-the-art visualization technique. Our method outperforms all the other previous methods on the task of egocentric activity recognition. \noindent \textbf{Acknowledgement } We thank Ryan Szeto and Christina Jung for their valuable comments. This research was, in part, supported by NIST grant 60NANB17D191.
\section{Introduction}\label{sec1} For an asynchronous two dimensional multi-carrier code-division multiple access (2D-MC-CDMA) system, the ideal 2D correlation properties of two dimensional complete complementary codes (2D-CCCs)\cite{farkas2003two} can be properly utilized to obtain interference-free performance \cite{turcsany2004new}. Similar to one dimensional complete complementary code (1D-CCC)\cite{chen2008complete,das2018novel,liu2014new}, one of the most significant drawbacks of 2D-CCC is that the set size is restricted \cite{xeng2004theoretical}. Motivated by the scarcity of 2D-CCC with flexible set sizes, Zeng \textit{et al.} proposed 2D Z-complementary array code sets (2D-ZCACSs) in \cite{zeng2005construction,xeng2004theoretical}. For a $2D-(K,Z_{1} \times Z_{2})-\text{ZCACS}_{M}^{L_{1}\times L_{2}}$, the parameters $K$, $Z_{1}\times Z_{2}$, $L_{1}\times L_{2}$, and $M$ denote the set size, two dimensional zero-correlation zone (2D-ZCZ) width, array size, and the number of constituent arrays, respectively. In \cite{zeng2005construction,xeng2004theoretical}, the authors obtained ternary 2D-ZCACSs by inserting some zeros into the existing binary 2D-ZCACSs. In 2021, Pai \textit{et al}. presented a new construction method of 2D binary Z-complementary array pairs (2D-ZCAP) \cite{pai2021two}. Recently, Das \textit{et al}. in \cite{das2020two} proposed a construction of 2D-ZCACS by using Z-paraunitary (ZPU) matrices. All these constructions of 2D-ZCACS depend heavily on initial sequences and matrices, which increases the required hardware storage. For the first time in the literature, Roy \textit{et al}. in \cite{roy2021construction} proposed a direct construction of 2D-ZCACS based on a multivariable function (MVF). The array size of the proposed 2D-ZCACS is of the form $L_{1}\times L_{2}$, where $L_{1}=2^{m}$, $L_{2}=2p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{k}^{m_{k}}$, $m\geq1, m_{i}\geq 2$ and the set size is of the form $2p_{1}^{2}p_{2}^{2}\ldots p_{k}^{2}$ where $p_{i}$ is a prime number.
Therefore, the array size and the set size are restricted to certain even numbers. \par The array and set size limitations of the existing direct constructions in the literature motivate us to search for multivariable functions (MVFs) that provide more flexible array and set sizes. Our proposed construction provides 2D-ZCACS with parameter $2D-(R_{1}R_{2}M_{1}M_{2},N_{1} \times N_{2})-\text{ZCACS}_{M_{1}M_{2}}^{R_{1}N_{1}\times R_{2}N_{2}}$ where $M_{1}=\prod_{i=1}^{a}p_{i}^{k_{i}}$, $M_{2}=\prod_{j=1}^{b}q_{j}^{t_{j}}$, $p_{i}$ is any prime or $1$, $q_{j}$ is prime, $a,b,k_{i},t_{j}\geq 1$, $R_{1}$ and $R_{2}$ are positive integers such that $R_{1}\geq 1$ and $R_{2}\geq 2$, $N_{1}=\prod_{i=1}^{a}p_{i}^{m_{i}}$, $N_{2}=\prod_{j=1}^{b}q_{j}^{n_{j}}$, $m_{i},n_{j}\geq 1$. The set size in our proposed 2D-ZCACS construction, $R_{1}R_{2}M_{1}M_{2}$, is more adaptable than the set size of the 2D-ZCACS given in \cite{roy2021construction}. Unlike \cite{roy2021construction}, the proposed 2D-ZCACS can also be reduced to a 1D-ZCCS \cite{shen2022new,sarkar2020construction,sarkar2020direct,wu2020z,kumar2022direct,sarkar2018optimal,sarkar2021pseudo,ghosh2022direct}. As a result, many existing optimal 1D-ZCCSs become special cases of the proposed construction \cite{sarkar2018optimal,sarkar2021pseudo,ghosh2022direct}. The proposed construction also yields new sets of optimal 1D-ZCCSs that have not previously been presented by a direct method. \par The rest of the paper is organized as follows. Section $2$ discusses construction-related definitions and lemmas. Section $3$ contains the construction of 2D-ZCACS and the comparison with the existing state-of-the-art. Finally, in Section 4, the conclusions are drawn. \section{Notations and definitions} The following notations will be followed throughout this paper: $\omega_{n}=\exp\left(2\pi\sqrt{-1}/n\right)$, $\mathbb{A}_{n}=\{0,1,\ldots,n-1\}\subset \mathbb{Z}$, where $n$ is a positive integer and $\mathbb{Z}$ is the ring of integers.
\subsection{Two Dimensional Array} \begin{definition}[\cite{das2020two}] Let $\mathbf{A}=\left(a_{g, i}\right)$ and $\mathbf{B}=\left(b_{g, i}\right)$ be complex-valued arrays of size $l_{1} \times l_{2}$ where $0 \leq g<l_{1},0 \leq i<l_{2}$. The two dimensional aperiodic cross correlation function (2D-ACCF) of arrays $\mathbf{A}$ and $\mathbf{B}$ at shift $\left(\tau_{1}, \tau_{2}\right)$ is defined as \begin{equation*} \begin{split} \boldsymbol{C}\left(\mathbf{A},\mathbf{B}\right)\left(\tau_{1}, \tau_{2}\right)= \begin{cases} \sum_{g=0}^{l_{1}-1-\tau_{1}}\sum_{i=0}^{l_{2}-1-\tau_{2}} a_{g, i}b^{*}_{g+\tau_{1}, i+\tau_{2}},\text{if}~~\makecell{0 \leq \tau_{1}<l_{1},\\0 \leq \tau_{2}<l_{2};}\\ \sum_{g=0}^{l_{1}-1-\tau_{1}}\sum_{i=0}^{l_{2}-1+\tau_{2}} a_{g, i-\tau_{2}}b^{*}_{g+\tau_{1}, i},\text{if}~~\makecell{0 \leq \tau_{1}<l_{1},\\ -l_{2} < \tau_{2}<0;}\\ \sum_{g=0}^{l_{1}-1+\tau_{1}}\sum_{i=0}^{l_{2}-1-\tau_{2}} a_{g-\tau_{1}, i}b^{*}_{g, i+\tau_{2}},\text{if}~~\makecell{-l_{1} < \tau_{1}<0,\\0 \leq \tau_{2}<l_{2};}\\ \sum_{g=0}^{l_{1}-1+\tau_{1}}\sum_{i=0}^{l_{2}-1+\tau_{2}} a_{g-\tau_{1},i-\tau_{2}}b^{*}_{g,i},\text{if}~~\makecell{-l_{1} <\tau_{1}<0,\\-l_{2} < \tau_{2}<0.} \end{cases} \end{split} \end{equation*} \end{definition} Here, $(.)^{*}$ denotes the complex conjugate. If $\mathbf{A}=\mathbf{B},$ then $ \boldsymbol{C}\left(\mathbf{A}, \mathbf{B}\right)\left(\tau_{1},\tau_{2}\right)$ is called the two dimensional aperiodic auto correlation function (2D-AACF) of $\mathbf{A}$ and referred to as $\boldsymbol{C}\left(\mathbf{A}\right)\left(\tau_{1},\tau_{2}\right)$. 
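As an illustration (not part of the construction itself), the case-wise 2D-ACCF above can be computed directly by summing only over the overlapping entries; a naive sketch:

```python
import numpy as np

def accf_2d(A, B, t1, t2):
    """2D aperiodic cross-correlation C(A,B)(t1, t2) of l1 x l2 complex arrays.

    Equivalent to the four-case definition: a term a_{g,i} * conj(b_{g+t1, i+t2})
    contributes whenever both indices fall inside the arrays.
    """
    l1, l2 = A.shape
    s = 0.0 + 0.0j
    for g in range(l1):
        for i in range(l2):
            gp, ip = g + t1, i + t2
            if 0 <= gp < l1 and 0 <= ip < l2:
                s += A[g, i] * np.conj(B[gp, ip])
    return s

# For the all-ones 2x3 array, C(A)(0,0) = l1 * l2 = 6.
A = np.ones((2, 3))
print(accf_2d(A, A, 0, 0))  # (6+0j)
```

One can also check numerically that $\boldsymbol{C}(\mathbf{A},\mathbf{B})(\tau_1,\tau_2)=\boldsymbol{C}(\mathbf{B},\mathbf{A})(-\tau_1,-\tau_2)^{*}$, which follows directly from the definition.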
\par When $l_{1}= 1$, the complex-valued arrays $\mathbf{A}$ and $\mathbf{B}$ are reduced to one dimensional complex-valued sequences $\mathbf{A}=(a_{j})_{j=0}^{l_{2}-1}$ and $\mathbf{B}=(b_{j})_{j=0}^{l_{2}-1}$ with the corresponding one dimensional aperiodic cross correlation function (1D-ACCF) given by \begin{equation}\label{equ:cross} \boldsymbol{C}(\mathbf{A},\mathbf{B})({\tau_{2}})=\begin{cases} \sum_{i=0}^{l_{2}-1-\tau_{2}}a_{i}b^{*}_{i+\tau_{2}}, & 0 \leq \tau_{2} < l_{2}, \\ \sum_{i=0}^{l_{2}+\tau_{2} -1}a_{i-\tau_{2}}b^{*}_{i}, & -l_{2}< \tau_{2} < 0, \\ 0, & \text{otherwise}. \end{cases} \end{equation} \begin{definition}\cite{pai2022designing},\cite{das2020two} For a set of $s$ sets of arrays $\boldsymbol{A}=\left\{\mathbf{A}^{k} \mid k=\right.$ $0,1, \ldots, s-1\}$, each set $\mathbf{A}^{k}=\left\{\mathbf{A}_{0}^{k}, \mathbf{A}_{1}^{k}, \ldots, \mathbf{A}_{s-1}^{k}\right\}$ is composed of $s$ arrays of size $l_{1} \times l_{2}$. The set $\boldsymbol{A}$ is said to be 2D-CCC with parameters $(s,s,l_{1},l_{2})$ if the following holds \begin{equation} \begin{split} \boldsymbol{C}\left(\mathbf{A}^{k},\mathbf{A}^{k^{\prime}}\right)\left(\tau_{1},\tau_{2}\right)&=\sum_{i=0}^{s-1} \boldsymbol{C}\left(\mathbf{A}_{i}^{k}, \mathbf{A}_{i}^{k^{\prime}}\right)\left(\tau_{1}, \tau_{2}\right)\\ &= \begin{cases}sl_{1}l_{2}, \quad\left(\tau_{1}, \tau_{2}\right)=(0,0), k=k^{\prime} ;\\ 0, \quad\left(\tau_{1}, \tau_{2}\right)\neq(0,0), k=k^{\prime} ;\\ 0, ~~~~k\neq k^{\prime}. \end{cases} \end{split} \end{equation} \end{definition} \begin{definition}\cite{roy2021construction},\cite{das2020two} Let $z_1, z_2, l_{1}, l_{2}$ be positive integers and $z_{1}\leq l_{1}, z_{2}\leq l_{2}$. Consider a set of $\hat{s}$ sets of arrays $\boldsymbol{A}=\left\{\mathbf{A}^{k} \mid k=\right.$ $0,1, \ldots, \hat{s}-1\}$, where each set $\mathbf{A}^{k}=\left\{\mathbf{A}_{0}^{k}, \ldots, \mathbf{A}_{s-1}^{k}\right\}$ is composed of $s$ arrays of size $l_{1} \times l_{2}$.
The set $\boldsymbol{A}$ is said to be $2D-(\hat{s},z_{1}\times z_{2})-\text{ZCACS}_{s}^{l_{1}\times l_{2}}$ if the following holds \begin{equation} \begin{split} \boldsymbol{C}\left(\mathbf{A}^{k},\mathbf{A}^{k^{\prime}}\right)\left(\tau_{1},\tau_{2}\right)&=\sum_{i=0}^{s-1} \boldsymbol{C}\left(\mathbf{A}_{i}^{k}, \mathbf{A}_{i}^{k^{\prime}}\right)\left(\tau_{1}, \tau_{2}\right)\\ &= \begin{cases}sl_{1}l_{2}, \quad\left(\tau_{1}, \tau_{2}\right)=(0,0), k=k^{\prime} ;\\ 0, \quad\left(\tau_{1}, \tau_{2}\right)\neq(0,0),\abs{\tau_{1}}<z_{1},\abs{\tau_{2}}<z_{2}, k=k^{\prime} ;\\ 0, ~~~~\abs{\tau_{1}}<z_{1},\abs{\tau_{2}}<z_{2},k\neq k^{\prime}. \end{cases} \end{split} \end{equation} \end{definition} When $z_{1}=l_{1}$, $z_{2}=l_{2}$, and $\hat{s}=s$, the 2D-ZCACS becomes a 2D-CCC\cite{ghosh2022direct1,pai2022designing} with parameter $(s,l_{1},l_{2})$. It should be noted that for $l_1=1$, each array $\mathbf{A}_{i}^{k}$ becomes an $l_2$-length sequence. Therefore, 2D-ZCACS can be reduced to a conventional 1D-$\left(\hat{s}, z_{2}\right)- \textit{ZCCS}_{s}^{l_{2}}$\cite{wu2018optimal}, \cite{yu2022new},\cite{shen2022new11}, where $\hat{s}$, $s$, $z_{2}$, and $l_{2}$ represent the number of sets, set size, ZCZ width, and sequence length, respectively. \begin{lemma} \cite{das2020two} For a $2D-(\hat{s},z_{1}\times z_{2})-\text{ZCACS}_{s}^{l_{1}\times l_{2}}$, the following inequality holds \begin{equation} \hat{s}z_{1}z_{2}\leq s\left(l_{1}+z_{1}-1\right)\left( l_{2}+z_{2}-1\right). \end{equation} We call a 2D-ZCACS optimal if the following equality holds \begin{equation} \label{318} \hat{s}=s\Big\lfloor\frac{l_{1}}{z_{1}}\Big\rfloor\Big\lfloor\frac{l_{2}}{z_{2}}\Big\rfloor, \end{equation} where $\lfloor.\rfloor$ denotes the floor function. \end{lemma} \subsection{Multivariable Function} Let $a$, $b$, $m_i$, and $n_j$ be positive integers for $1\leq i\leq a$ and $1\leq j\leq b$. Let $p_{i}$ be any prime or $1$, and $q_{j}$ be a prime number.
A multivariable function (MVF) can be defined as \begin{equation*} f: \mathbb{A}_{p_1}^{m_1}\times \mathbb{A}_{p_2}^{m_2}\times \dots \times \mathbb{A}_{p_a}^{m_a}\times\mathbb{A}_{q_1}^{n_1}\times \mathbb{A}_{q_2}^{n_2} \times \dots \times \mathbb{A}_{q_b}^{n_b}\rightarrow\mathbb{Z}. \end{equation*} Let $c,d\geq0$ be integers such that $0\leq c <r$ and $0\leq d<s$, where $r=p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{a}^{m_{a}}$ and $s=q_{1}^{n_{1}}q_{2}^{n_{2}}\ldots q_{b}^{n_{b}}$. Then $c$ and $d$ can be written as \begin{equation} \label{c,d} \begin{split} &c=c_{1}+c_{2}p_{1}^{m_{1}}+\dots+c_{a}p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{a-1}^{m_{a-1}},\\ &d=d_{1}+d_{2}q_{1}^{n_{1}}+\dots+d_{b}q_{1}^{n_{1}}q_{2}^{n_{2}}\ldots q_{b-1}^{n_{b-1}}, \end{split} \end{equation} where $0\leq c_{i}< p_{i}^{m_{i}}$ and $0\leq d_{j}< q_{j}^{n_{j}}$. Let $\mathbf{C}_{i}=(c_{i,1},c_{i,2},\ldots,c_{i,m_{i}})\in \mathbb{A}_{p_{i}}^{m_{i}}$ be the vector representation of $c_{i}$ with base $p_{i}$, i.e., $c_{i}=\sum_{k=1}^{m_{i}}c_{i,k}p_{i}^{k-1}$, and let $\mathbf{D}_{j}=(d_{j,1},d_{j,2},\ldots,d_{j,n_{j}})\in \mathbb{A}_{q_{j}}^{n_{j}}$ be the vector representation of $d_{j}$ with base $q_{j}$, i.e., $d_{j}=\sum_{l=1}^{n_{j}}d_{j,l}q_{j}^{l-1}$, where $0\leq c_{i,k}<p_{i}$ and $0\leq d_{j,l}< q_{j}$. We define vectors associated with $c$ and $d$ as \begin{equation*} \begin{split} &\phi(c)=\left(\mathbf{C}_{1},\mathbf{C}_{2},\ldots,\mathbf{C}_{a}\right)\in \mathbb{A}_{p_1}^{m_1}\times \mathbb{A}_{p_2}^{m_2}\times \dots \times \mathbb{A}_{p_a}^{m_a},\\ &\phi(d)=\left(\mathbf{D}_{1},\mathbf{D}_{2},\ldots,\mathbf{D}_{b}\right)\in \mathbb{A}_{q_1}^{n_1}\times \mathbb{A}_{q_2}^{n_2} \times \dots \times \mathbb{A}_{q_b}^{n_b}, \end{split} \end{equation*} respectively.
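The two-level decomposition (\ref{c,d}) together with the digit vectors $\mathbf{C}_i$ is purely mechanical; the small helper below (ours, not part of the construction) computes it for an arbitrary choice of $p_i$ and $m_i$.

```python
def phi(c, primes, exps):
    """Decompose c as in the two-level expansion above: first peel off the
    mixed-radix parts c_i modulo p_i^{m_i}, then expand each c_i into
    base-p_i digits (least significant first), giving the vectors C_i."""
    vecs = []
    for p, m in zip(primes, exps):
        c, ci = divmod(c, p ** m)
        vecs.append(tuple((ci // p ** k) % p for k in range(m)))
    return vecs

# Example with p_1 = 2, m_1 = 2 and p_2 = 3, m_2 = 2 (so r = 4 * 9 = 36):
# c = 23 = 3 + 5 * 4, with C_1 the base-2 digits of 3 and C_2 the
# base-3 digits of 5.
print(phi(23, (2, 3), (2, 2)))   # -> [(1, 1), (2, 1)]
```

The same helper applied with the $q_j$ and $n_j$ produces $\phi(d)$.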
We also define an array associated with $f$ as \begin{equation} \psi_{\lambda}({f})=\left(\begin{array}{cccc} \omega_{\lambda}^{f_{0,0}} & \omega_{\lambda}^{f_{0,1}} & \cdots & \omega_{\lambda}^{f_{0,r-1}} \\ \omega_{\lambda}^{f_{1,0}} & \omega_{\lambda}^{f_{1,1}} & \cdots & \omega_{\lambda}^{f_{1,r-1}} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_{\lambda}^{f_{s-1,0}} & \omega_{\lambda}^{f_{s-1,1}} & \cdots & \omega_{\lambda}^{f_{s-1,r-1}} \end{array}\right), \end{equation} where $f_{c,d}=f\left(\phi(c),\phi(d)\right)$ and $\lambda$ is a positive integer. \begin{lemma}[\cite{vaidyanathan2014ramanujan}] \label{DauJi} Let $t$ and $t'$ be two non-negative integers with $t\not\equiv t' \pmod{p}$, where $p$ is a prime number. Then \begin{equation} \displaystyle\sum_{j=0}^{p-1}\omega_{p}^{(t-t')j}=0. \end{equation} \end{lemma} Let us consider the set $\mathcal{C}$ as \begin{equation} \mathcal{C}=\left(\mathbb{A}_{p_1}^{m_1}\times \mathbb{A}_{p_2}^{m_2}\times \dots \times \mathbb{A}_{p_a}^{m_a}\right)\times\left(\mathbb{A}_{q_1}^{n_1}\times \mathbb{A}_{q_2}^{n_2} \times \dots \times \mathbb{A}_{q_b}^{n_b}\right). \end{equation} Let $0\leq \gamma< p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{a}^{m_{a}}$ and $0\leq \mu< q_{1}^{n_{1}}q_{2}^{n_{2}}\ldots q_{b}^{n_{b}}$ be integers such that \begin{equation} \begin{split} &\gamma=\gamma_{1}+\sum_{i=2}^{a}\gamma_{i}\left(\prod_{i_{1}=1}^{i-1}p_{i_{1}}^{m_{i_{1}}}\right),\\ &\mu=\mu_{1}+\sum_{j=2}^{b}\mu_{j}\left(\prod_{j_{1}=1}^{j-1}q_{j_{1}}^{n_{j_{1}}}\!\right), \end{split} \end{equation} where $0\leq \gamma_{i}< p_{i}^{m_{i}}$ and $0\leq \mu_{j}< q_{j}^{n_{j}}$. Let $\boldsymbol{\gamma}_{i}=(\gamma_{i,1},\gamma_{i,2},\ldots,\gamma_{i,m_{i}})\in \mathbb{A}_{p_{i}}^{m_{i}}$ be the vector representation of $\gamma_{i}$ with base $p_{i}$, i.e., $\gamma_{i}=\sum_{k=1}^{m_{i}}\gamma_{i,k}p_{i}^{k-1}$, where $0\leq \gamma_{i,k}<p_{i}$.
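Lemma \ref{DauJi} is the engine behind every vanishing correlation in the sequel, and it is easy to verify numerically; note that the sum vanishes precisely when $t\not\equiv t'\pmod{p}$, which is why the hypothesis on $t-t'$ matters.

```python
import cmath

def unity_sum(p, t, tp):
    """Sum_{j=0}^{p-1} omega_p^{(t - t') j} for omega_p = exp(2*pi*i / p)."""
    w = cmath.exp(2j * cmath.pi / p)
    return sum(w ** ((t - tp) * j) for j in range(p))

# Vanishes whenever t - t' is not a multiple of p ...
print(abs(unity_sum(5, 3, 1)))   # ~0
# ... but equals p when t = t' (mod p): for t - t' = 5 every term is 1,
# so the sum is 5, not 0.
print(abs(unity_sum(5, 7, 2)))   # ~5
```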
Similarly, let $\boldsymbol{\mu}_{j}=(\mu_{j,1},\mu_{j,2},\ldots,\mu_{j,n_{j}})\in \mathbb{A}_{q_{j}}^{n_{j}}$ be the vector representation of $\mu_{j}$ with base $q_{j}$, i.e., $\mu_{j}=\sum_{l=1}^{n_{j}}\mu_{j,l}q_{j}^{l-1}$, where $0\leq \mu_{j,l}<q_{j}$. Let \begin{equation} \phi(\gamma)=\left(\boldsymbol{\gamma}_{1},\boldsymbol{\gamma}_{2},\ldots,\boldsymbol{\gamma}_{a}\right)\in \mathbb{A}_{p_1}^{m_1}\!\!\times \!\mathbb{A}_{p_2}^{m_2}\!\times \dots \times \mathbb{A}_{p_a}^{m_a}, \end{equation} be the vector associated with $\gamma$ and \begin{equation} \phi(\mu)=\left(\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2},\ldots,\boldsymbol{\mu}_{b}\right)\in \mathbb{A}_{q_1}^{n_1}\!\!\times \!\mathbb{A}_{q_2}^{n_2}\!\times \dots \times \mathbb{A}_{q_b}^{n_b}, \end{equation} be the vector associated with $\mu$. Let $\pi_{i}$ and $\sigma_{j}$ be any permutations of the sets $\{1,2,\ldots,m_{i}\}$ and $\{1,2,\ldots,n_{j}\}$, respectively. Let us also define the MVF $f:\mathcal{C}\rightarrow\mathbb{Z}$ as \begin{equation} \label{a11111} \begin{split} &f(\phi(\gamma),\phi(\mu))\\ &=f\left(\boldsymbol{\gamma}_{{1}},\boldsymbol{\gamma}_{{2}}, \ldots, \boldsymbol{\gamma}_{{a}},\boldsymbol{\mu}_{{1}},\boldsymbol{\mu}_{{2}}, \ldots, \boldsymbol{\mu}_{b}\right)\\ &=\sum_{i=1}^{a}\!\frac{\lambda}{p_{i}}\!\!\sum_{e=1}^{m_{i}-1}\!\!\gamma_{i, \pi_{i}(e)} \gamma_{i, \pi_{i}(e+1)}+\sum_{i=1}^{a}\!\sum_{e=1}^{m_{i}}\!d_{i,e} \gamma_{i, e}+\sum_{j=1}^{b}\!\frac{\lambda}{q_{j}}\!\!\sum_{o=1}^{n_{j}-1} \mu_{j, \sigma_{j}(o)} \mu_{j, \sigma_{j}(o+1)}\\ &+\sum_{j=1}^{b}\!\sum_{o=1}^{n_{j}} c_{j,o} \mu_{j,o}, \end{split} \end{equation} where $d_{i,e},c_{j,o}\in \{0,1,\ldots,\lambda-1\}$ and $\lambda=\operatorname{lcm}(p_{1},\ldots,p_{a},q_{1},\ldots,q_{b})$.
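For concreteness, in the single-block case $a=b=1$ with identity permutations, (\ref{a11111}) can be evaluated as follows. The coefficient choices $d=(1,2)$ and $c=(2,1)$ below are just one possibility; with $p_1=2$, $q_1=3$, $\lambda=6$ they reproduce the function $f$ used in the worked example later in the paper.

```python
def f_mvf(gv, mv, lam=6, p=2, q=3, d=(1, 2), c=(2, 1)):
    """The MVF above for a = b = 1 with identity permutations: a quadratic
    'chain' term inside each variable block plus linear terms with
    coefficients d (for gamma) and c (for mu)."""
    quad = (lam // p) * sum(gv[e] * gv[e + 1] for e in range(len(gv) - 1)) \
         + (lam // q) * sum(mv[o] * mv[o + 1] for o in range(len(mv) - 1))
    lin = sum(de * g for de, g in zip(d, gv)) + sum(co * u for co, u in zip(c, mv))
    return quad + lin

# E.g. gamma = (1, 1) in Z_2^2 and mu = (2, 1) in Z_3^2:
# quad = 3*(1*1) + 2*(2*1) = 7, lin = (1 + 2) + (4 + 1) = 8, total 15,
# so the corresponding entry of psi_6(f) is omega_6^15.
print(f_mvf((1, 1), (2, 1)))   # 15
```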
Let us define the sets $\Theta$ and $T$ as \begin{equation*} \label{kaka} \begin{split} &\Theta=\{\theta:\theta=(r_{{1}}, r_{{2}}, \ldots, r_{{a}},s_{{1}}, s_{{2}}, \ldots, s_{{b}})\},\\ &T=\{t:t=(x_{{1}}, x_{{2}}, \ldots, x_{{a}},y_{{1}}, y_{{2}}, \ldots, y_{{b}})\}, \end{split} \end{equation*} where $0\leq r_{i},x_{i}< p_{i}^{k_{i}}$ and $0\leq s_{j},y_{j}< q_{j}^{r_{j}}$, with $k_{i},r_{j}$ positive integers. Now, we define a function $a^{\theta}_{t}:\mathcal{C}\rightarrow \mathbb{Z}$ as \begin{equation}{\label{5}} \begin{split} &a^{\theta}_{t}\left(\phi(\gamma),\phi(\mu)\right)\\ &=a^{\theta}_{t}\left(\boldsymbol{\gamma}_{{1}},\boldsymbol{\gamma}_{{2}}, \ldots, \boldsymbol{\gamma}_{{a}},\boldsymbol{\mu}_{{1}},\boldsymbol{\mu}_{{2}}, \ldots, \boldsymbol{\mu}_{b}\right)\\ &=\!\!f\left(\phi(\gamma),\phi(\mu)\right)\!\!+\!\!\sum_{i=1}^{a} \frac{\lambda}{p_{i}} \gamma_{i, \pi_{i}(1)}{r_{i}}+\!\sum_{j=1}^{b} \frac{\lambda}{q_{j}} \mu_{j, \sigma_{j}(1)}{s_{j}} +\sum_{i=1}^{a} \frac{\lambda}{p_{i}} \gamma_{i, \pi_{i}(m_{i})}{x_{i}}\\ &+\sum_{j=1}^{b} \frac{\lambda}{q_{j}} \mu_{j, \sigma_{j}(n_{j})}{y_{j}}+d_{\theta}, \end{split} \end{equation} where $0\leq d_{\theta}<\lambda$, and $\gamma_{i, \pi_{i}(1)}$ and $\gamma_{i, \pi_{i}(m_{i})}$ denote the $\pi_{i}(1)$-th and $\pi_{i}(m_{i})$-th elements of $\boldsymbol{\gamma}_{{i}}$, respectively. Similarly, $\mu_{j, \sigma_{j}(1)}$ and $\mu_{j, \sigma_{j}(n_{j})}$ denote the $\sigma_{j}(1)$-th and $\sigma_{j}(n_{j})$-th elements of $\boldsymbol{\mu}_{{j}}$, respectively. For simplicity, we denote $a^{\theta}_{t}\left(\phi({\gamma}),\phi({\mu})\right)$ by $(a^{\theta}_{t})_{\gamma,\mu}$ and $f\left(\phi({\gamma}),\phi({\mu})\right)$ by $f_{\gamma,\mu}$. \begin{lemma}[\cite{ghosh2022direct1}] \label{KB} We define the ordered set of arrays $\mathbf{A}^{t}=\{\psi_{\lambda}\left(a^{\theta}_{t}\right):\theta\in \Theta\}$.
Then the set $\{\mathbf{A}^{t}:t\in T\}$ forms a 2D-CCC with parameter $ (\alpha,\alpha,m,n) $, where $\alpha=\prod_{i=1}^{a}p^{k_{i}}_{i}\prod_{j=1}^{b}q^{r_{j}}_{j}$, $m=\prod_{i=1}^{a}p_{i}^{m_{i}}$, $n=\prod_{j=1}^{b}q_{j}^{n_{j}}$, and $k_{i},m_{i},n_{j},r_{j}$ are non-negative integers. \end{lemma} \section{Proposed construction of 2D-ZCACS} Let $a',b'$ be positive integers, let $p_{i'}'$ be any prime or $1$, and let $q_{j'}'$ be a prime number, for $1\leq i'\leq a'$ and $1\leq j'\leq b'$. Let $\gamma',\mu'$ be integers such that $0\leq \gamma'<\left(\prod_{i=1}^{a}p_{i}^{m_{i}}\right)\left(\prod_{i'=1}^{a'}p'_{i'}\right)$ and $0\leq \mu'< \left(\prod_{j=1}^{b}q_{j}^{n_{j}}\right)\left(\prod_{j'=1}^{b'}q'_{j'}\right)$. Then $\gamma',\mu'$ can be written as \begin{equation} \begin{split} &\gamma'\!=\!\gamma_{1}\!+\!\!\displaystyle\sum_{i=2}^{a}\gamma_{i}\left(\prod_{i_{1}=1}^{i-1}p_{i_{1}}^{m_{i_{1}}}\right)\!\!+\!\!\left(\gamma'_{1}+\sum_{i'=2}^{a'}\gamma'_{i'}\left(\prod_{i_{1}=1}^{i'-1}p'_{i_{1}}\right)\right) m,\\ &\mu'\!=\!\mu_{1}\!+\!\!\displaystyle\sum_{j=2}^{b}\mu_{j}\!\!\left(\prod_{j_{1}=1}^{j-1}q_{j_{1}}^{n_{j_{1}}}\right)\!\!+\!\!\left(\mu'_{1}+\sum_{j'=2}^{b'}\mu'_{j'}\left(\prod_{j_{1}=1}^{j'-1}q'_{j_{1}}\right)\right) n, \end{split} \end{equation} where $m=\prod_{i=1}^{a}p_{i}^{m_{i}}$, $n=\prod_{j=1}^{b}q_{j}^{n_{j}}$, $0\leq\gamma_{i}<p_{i}^{m_{i}}$, $0\leq\mu_{j}<q_{j}^{n_{j}}$, $0\leq\gamma_{i'}'<p_{i'}'$ and $0\leq\mu_{j'}'< q_{j'}'$.
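Before building the extended set, Lemma \ref{KB} can be spot-checked numerically. The sketch below (ours, illustrative only) takes $a=b=1$, $p_1=2$, $m_1=2$, $k_1=1$, $q_1=3$, $n_1=1$, $r_1=1$, identity permutations, and all free coefficients $d_{i,e}$, $c_{j,o}$, $d_\theta$ equal to zero, and verifies the 2D-CCC property over all aperiodic shifts.

```python
import cmath
from itertools import product

p, m1, q, n1 = 2, 2, 3, 1     # a = b = 1; member arrays have size (p**m1) x (q**n1)
lam = 6                        # lcm(p, q)
w = cmath.exp(2j * cmath.pi / lam)

def digits(x, base, length):
    """Base-`base` digits of x, least significant first."""
    return [(x // base ** k) % base for k in range(length)]

def a_val(g, u, r, s1, x, y):
    """a^theta_t at (gamma, mu) = (g, u): the quadratic chain term of f plus
    boundary terms on the first/last digits (identity permutations, all free
    coefficients zero); the mu chain is empty since n1 = 1."""
    gv, uv = digits(g, p, m1), digits(u, q, n1)
    f = (lam // p) * sum(gv[e] * gv[e + 1] for e in range(m1 - 1))
    return (f + (lam // p) * (gv[0] * r + gv[-1] * x)
              + (lam // q) * (uv[0] * s1 + uv[-1] * y))

def member(r, s1, x, y):
    return [[w ** a_val(g, u, r, s1, x, y) for u in range(q ** n1)]
            for g in range(p ** m1)]

def accf(A, B, t1, t2):
    l1, l2 = len(A), len(A[0])
    return sum(A[g][h] * B[g + t1][h + t2].conjugate()
               for g in range(l1) for h in range(l2)
               if 0 <= g + t1 < l1 and 0 <= h + t2 < l2)

# alpha = p**k1 * q**r1 = 6 sets (indexed by t), each with 6 member arrays
T = list(product(range(p), range(q)))
sets = {t: [member(r, s1, *t) for r, s1 in product(range(p), range(q))] for t in T}

peak_err = off = 0.0
for t, tp in product(T, repeat=2):
    for t1 in range(-(p ** m1) + 1, p ** m1):
        for t2 in range(-(q ** n1) + 1, q ** n1):
            v = abs(sum(accf(A, B, t1, t2) for A, B in zip(sets[t], sets[tp])))
            if t == tp and (t1, t2) == (0, 0):
                peak_err = max(peak_err, abs(v - 6 * p ** m1 * q ** n1))  # peak 72
            else:
                off = max(off, v)
print(peak_err, off)   # both negligible: the 2D-CCC property holds
```

The peak value $6\cdot 4\cdot 3=72$ agrees with $\prod p_i^{m_i+k_i}\prod q_j^{n_j+r_j}$ as used in the proof below.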
The vectors associated with $\gamma'$ and $\mu'$ are \begin{equation} \begin{split} &\phi(\gamma')=\left(\boldsymbol{\gamma}_{{1}}, \ldots, \boldsymbol{\gamma}_{{a}},\gamma_{1}',\ldots,\gamma_{a'}'\right)\in\mathbb{A}_{p_{1}}^{m_{1}}\times\hdots\times\mathbb{A}_{p_{a}}^{m_{a}}\times\mathbb{A}_{p'_{1}}\times\hdots\times\mathbb{A}_{p'_{a'}},\\ &\phi(\mu')=\left(\boldsymbol{\mu}_{{1}}, \ldots, \boldsymbol{\mu}_{{b}},\mu_{1}',\ldots,\mu_{b'}'\right)\in \mathbb{A}_{q_{1}}^{n_{1}}\times\hdots\times\mathbb{A}_{q_{b}}^{n_{b}}\times\mathbb{A}_{q'_{1}}\times\hdots\times\mathbb{A}_{q'_{b'}}, \end{split} \end{equation} respectively, where $\boldsymbol{\gamma}_{{i}}\in \mathbb{A}_{p_i}^{m_i}$ and $\boldsymbol{\mu}_{{j}}\in\mathbb{A}_{q_j}^{n_j}$ are the vectors associated with $\gamma_{i}$ and $\mu_{j}$ respectively, i.e., $\boldsymbol{\gamma}_{i}=(\gamma_{i,1},\gamma_{i,2},\ldots,\gamma_{i,m_{i}})\in \mathbb{A}_{p_{i}}^{m_{i}}$, $\boldsymbol{\mu}_{j}=(\mu_{j,1},\mu_{j,2},\ldots,\mu_{j,n_{j}})\in \mathbb{A}_{q_{j}}^{n_{j}}$, $\gamma_{i}=\sum_{k=1}^{m_{i}}\gamma_{i,k}p_{i}^{k-1}$, $\mu_{j}=\sum_{l=1}^{n_{j}}\mu_{j,l}q_{j}^{l-1}$, $0\leq \gamma_{i,k}<p_{i}$ and $0\leq \mu_{j,l}<q_{j}$. Let us consider the set $\mathcal{D}$ as \begin{equation} \mathcal{D}= \mathbb{A}_{p_{1}}^{m_{1}}\times\hdots\times\mathbb{A}_{p_{a}}^{m_{a}}\times\mathbb{A}_{p'_{1}}\times\hdots\times\mathbb{A}_{p'_{a'}}\times\mathbb{A}_{q_{1}}^{n_{1}}\times\hdots\times\mathbb{A}_{q_{b}}^{n_{b}}\times\mathbb{A}_{q'_{1}}\times\hdots\times\mathbb{A}_{q'_{b'}}. \end{equation} Let $f$ be the function defined in (\ref{a11111}).
We define the MVF $M^{\mathbf{c},\mathbf{d}}:\mathcal{D}\rightarrow \mathbb{Z}$ as \begin{equation} \begin{split} \label{rama} &M^{\mathbf{c},\mathbf{d}}\left(\phi(\gamma'),\phi(\mu')\right)\\ &=M^{\mathbf{c},\mathbf{d}}\left(\boldsymbol{\gamma}_{{1}}, \ldots, \boldsymbol{\gamma}_{{a}},\gamma_{1}',\ldots,\gamma_{a'}',\boldsymbol{\mu}_{{1}}, \ldots, \boldsymbol{\mu}_{{b}},\mu_{1}',\ldots,\mu_{b'}'\right)\\ &=\frac{\delta}{\lambda}f\left(\boldsymbol{\gamma}_{{1}}, \ldots, \boldsymbol{\gamma}_{{a}},\boldsymbol{\mu}_{{1}}, \ldots, \boldsymbol{\mu}_{b}\right)\!+\!\!\sum_{i'=1}^{a'}\!c_{i'}\frac{\delta}{p'_{i'}}\gamma_{i'}'+\!\!\sum_{j'=1}^{b'}\!d_{j'}\frac{\delta}{q'_{j'}}\mu_{j'}', \end{split} \end{equation} where $0\leq c_{i'}< p'_{i'}$, $0\leq d_{j'}<q'_{j'}$, $\mathbf{c}=(c_{1},c_{2},\ldots ,c_{a'})$ and $\mathbf{d}=(d_{1},d_{2},\ldots,d_{b'})$. For simplicity, from now on we denote $M^{\mathbf{c},\mathbf{d}}(\boldsymbol{\gamma}_{{1}}, \ldots, \boldsymbol{\gamma}_{{a}},\gamma_{1}',\ldots,\gamma_{a'}',\boldsymbol{\mu}_{{1}}, \ldots, \boldsymbol{\mu}_{{b}},\mu_{1}',\ldots,\mu_{b'}')$ by $M^{\mathbf{c},\mathbf{d}}$. Consider the sets $\Theta$ and $T$ as \begin{equation*} \begin{split} &\Theta=\{\theta:\theta=(r_{{1}}, r_{{2}}, \ldots, r_{{a}},s_{{1}}, s_{{2}}, \ldots, s_{{b}})\},\\ &T=\{t:t=(x_{{1}}, x_{{2}}, \ldots, x_{{a}},y_{{1}}, y_{{2}}, \ldots, y_{{b}})\}, \end{split} \end{equation*} where $0\leq r_{i},x_{i}< p_{i}^{k_{i}}$ and $0\leq s_{j},y_{j}< q_{j}^{r_{j}}$, with $k_{i},r_{j}$ positive integers.
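The map $M^{\mathbf{c},\mathbf{d}}$ in (\ref{rama}) only rescales $f$ by $\delta/\lambda$ and attaches independent linear phases to the primed coordinates; a minimal evaluation helper (ours, illustrative only) makes this explicit.

```python
def M_cd(f_val, gprime, uprime, c, d, pprime, qprime, delta, lam):
    """M^{c,d}: (delta/lam) * f plus linear terms c_{i'} * (delta/p'_{i'}) *
    gamma'_{i'} and d_{j'} * (delta/q'_{j'}) * mu'_{j'}."""
    val = (delta // lam) * f_val
    val += sum(ci * (delta // pp) * gp for ci, pp, gp in zip(c, pprime, gprime))
    val += sum(dj * (delta // qq) * up for dj, qq, up in zip(d, qprime, uprime))
    return val

# With delta = lam = 6 and one primed block each, p' = (3,) and q' = (2,),
# this is M = f + 2*c_1*gamma'_1 + 3*d_1*mu'_1; e.g. for f = 15,
# gamma'_1 = mu'_1 = 1, c_1 = 2, d_1 = 1: 15 + 4 + 3 = 22.
print(M_cd(15, (1,), (1,), (2,), (1,), (3,), (2,), 6, 6))   # 22
```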
Let us define the MVF $b_{t}^{\theta,\mathbf{c},\mathbf{d}}:\mathcal{D}\rightarrow \mathbb{Z}$ as \begin{equation} \label{hare} \begin{split} b_{t}^{\theta,\mathbf{c},\mathbf{d}}=&M^{\mathbf{c},\mathbf{d}}+\sum_{i=1}^{a} \frac{\delta}{p_{i}} \gamma_{i, \pi_{i}(1)}{r_{i}}+\sum_{j=1}^{b} \frac{\delta}{q_{j}} \mu_{j, \sigma_{j}(1)}{s_{j}}+\sum_{i=1}^{a} \frac{\delta}{p_{i}} \gamma_{i, \pi_{i}(m_{i})}{x_{i}}\\ &+\sum_{j=1}^{b} \frac{\delta}{q_{j}} \mu_{j, \sigma_{j}(n_{j})}{y_{j}}+\frac{\delta}{\lambda}d_{\theta}, \end{split} \end{equation} where $0\leq d_{\theta}<\lambda$. By (\ref{5}), (\ref{rama}) and (\ref{hare}) we have \begin{equation} b_{t}^{\theta,\mathbf{c},\mathbf{d}}=\frac{\delta}{\lambda}a^{\theta}_{t}+\sum_{i'=1}^{a'}c_{i'}\frac{\delta}{p'_{i'}}\gamma'_{i'}+\sum_{j'=1}^{b'}d_{j'}\frac{\delta}{q'_{j'}}\mu'_{j'}. \end{equation} We define the ordered set of arrays as \begin{equation} \Omega_{t}^{\mathbf{c},\mathbf{d}}=\{\psi_{\delta}(b_{t}^{\theta,\mathbf{c},\mathbf{d}}):\theta\in \Theta\}, \end{equation} where $\delta=\operatorname{lcm}(\lambda,p'_{1},p'_{2},\ldots,p'_{a'},q'_{1},q'_{2},\ldots,q'_{b'})$. \begin{theorem} \label{VrindavanBihari} Let $m=\prod_{i=1}^{a}p^{m_{i}}_{i}$, $n=\prod_{j=1}^{b}q^{n_{j}}_{j}$, $\mathbf{c}=(c_{1},\ldots,c_{a'})$, and $\mathbf{d}=(d_{1},\ldots,d_{b'})$. Then the set $S=\{\Omega_{t}^{\mathbf{c},\mathbf{d}}:t\in T,0\leq c_{i'}<p'_{i'},0\leq d_{j'}<q'_{j'}\}$ forms a $2D-(\alpha_{1},z_{1}\times z_{2})-\text{ZCACS}_{\alpha}^{l_{1}\times l_{2}}$, where $\alpha_{1}=\left(\prod_{i'=1}^{a'}p'_{i'}\right)\left(\prod_{j'=1}^{b'}q'_{j'}\right)\alpha$, $l_{1}=m\left(\prod_{i'=1}^{a'}p'_{i'}\right)$, $l_{2}=n\left(\prod_{j'=1}^{b'}q'_{j'}\right)$, $z_{1}=m$, $z_{2}=n$, $\alpha=(\prod_{i=1}^{a}p^{k_{i}}_{i})(\prod_{j=1}^{b}q^{r_{j}}_{j})$, and $k_{i},r_{j},m_{i},n_{j}\geq 1$. \end{theorem} \begin{proof} Let $\hat{\gamma},\hat{\mu}$ be integers such that $0\leq\hat{\gamma}<l_{1}$ and $0\leq\hat{\mu}<l_{2}$.
Then $\hat{\gamma},\hat{\mu}$ can be written as \begin{equation*} \begin{split} &\hat{\gamma}=\gamma_{1}\!+\!\!\displaystyle\sum_{i=2}^{a}\gamma_{i}\left(\prod_{i_{1}=1}^{i-1}p_{i_{1}}^{m_{i_{1}}}\right)\!\!+\!\!\left(\gamma'_{1}+\sum_{i'=2}^{a'}\gamma'_{i'}\left(\prod_{i_{1}=1}^{i'-1}p'_{i_{1}}\right)\right) m,\\ &\hat{\mu}=\mu_{1}\!+\!\!\displaystyle\sum_{j=2}^{b}\mu_{j}\left(\prod_{j_{1}=1}^{j-1}q_{j_{1}}^{n_{j_{1}}}\right)\!\!+\!\!\left(\mu'_{1}+\sum_{j'=2}^{b'}\mu'_{j'}\left(\prod_{j_{1}=1}^{j'-1}q'_{j_{1}}\right)\!\!\right) n, \end{split} \end{equation*} where $0\leq \gamma_{i}< p_{i}^{m_{i}}$, $0\leq \mu_{j}< q_{j}^{n_{j}}$, $0\leq \gamma'_{i'}<p'_{i'}$ and $0\leq \mu'_{j'}<q'_{j'}$. The proof will be split into the following cases. \begin{mycases} \case $(\tau_{1}=0,\tau_{2}=0)$\\ The ACCF between $\Omega_{t}^{\mathbf{c},\mathbf{d}}$ and $\Omega_{t'}^{\mathbf{c}',\mathbf{d}'}$ at $\tau_{1}=0$ and $\tau_{2}=0$ can be expressed as \begin{equation} \label{2.a} \begin{split} &C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c}',\mathbf{d}'})(0,0)\\ &=\sum_{\theta\in \Theta}C(\psi_{\delta}((b_{t}^{\theta,\mathbf{c},\mathbf{d}})),\psi_{\delta}((b_{t'}^{\theta,\mathbf{c}',\mathbf{d}'})))(0,0)\\ &=\sum_{\theta\in \Theta}\sum_{\hat{\gamma}=0}^{l_{1}-1}\sum_{\hat{\mu}=0}^{l_{2}-1}\omega_{\delta}^{(b_{t}^{\theta,\mathbf{c},\mathbf{d}})_{\hat{\gamma},\hat{\mu}}-(b_{t'}^{\theta,\mathbf{c}',\mathbf{d}'})_{\hat{\gamma},\hat{\mu}}}\\ &=\sum_{\theta\in \Theta}\sum_{\gamma=0}^{m-1}\sum_{\mu=0}^{n-1}\sum_{\gamma'_{1}=0}^{p'_{1}-1}\ldots\sum_{\gamma'_{a'}=0}^{p'_{a'}-1}\sum_{\mu'_{1}=0}^{q'_{1}-1}\ldots\sum_{\mu'_{b'}=0}^{q'_{b'}-1}\omega_{\delta}^{D}, \end{split} \end{equation} where $D=\frac{\delta}{\lambda}\left((a_{t}^{\theta})_{\gamma,\mu}-(a_{t'}^{\theta})_{\gamma,\mu}\right)+\sum_{i'=1}^{a'}\frac{\delta}{p'_{i'}}(c_{i'}-c_{i'}')\gamma'_{i'}+\sum_{j'=1}^{b'}\frac{\delta}{q'_{j'}}(d_{j'}-d_{j'}')\mu'_{j'}$.
After splitting (\ref{2.a}), we get \begin{equation} \label{029} \begin{split} &C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c}',\mathbf{d}'})(0,0)\\&=\left(\sum_{\theta\in \Theta}\sum_{\gamma=0}^{m-1}\sum_{\mu=0}^{n-1}\omega_{\delta}^{\frac{\delta}{\lambda}\left((a_{t}^{\theta})_{\gamma,\mu}-(a_{t'}^{\theta})_{\gamma,\mu}\right)}\right)\mathcal{E}\mathcal{F}\\ &=\left(\sum_{\theta\in \Theta}\sum_{\gamma=0}^{m-1}\sum_{\mu=0}^{n-1}\omega_{\lambda}^{\left((a_{t}^{\theta})_{\gamma,\mu}-(a_{t'}^{\theta})_{\gamma,\mu}\right)}\right)\mathcal{E}\mathcal{F}\\ &=C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,0)\mathcal{E}\mathcal{F}, \end{split} \end{equation} where \begin{equation} \label{jajya} \begin{split} &\mathcal{E}=\prod_{i'=1}^{a'}\left(\sum_{\gamma'_{i'}=0}^{p'_{i'}-1}\omega_{p'_{i'}}^{(c_{i'}-c_{i'}')\gamma'_{i'}}\right),\\ &\mathcal{F}=\prod_{j'=1}^{b'}\left(\sum_{\mu'_{j'}=0}^{q'_{j'}-1}\omega_{q'_{j'}}^{(d_{j'}-d'_{j'})\mu'_{j'}}\right). \end{split} \end{equation} \subcase $(t\neq t')$\\ By \textit{Lemma} \ref{KB}, the set $\{\mathbf{A}^{t}:t\in T\}$ forms a 2D-CCC. Hence we have \begin{equation} \label{sunnk} C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,0)=0. \end{equation} Hence by (\ref{029}) and (\ref{sunnk}) we have \begin{equation} \label{king} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c}',\mathbf{d}'})(0,0)=0. \end{equation} \subcase $(t= t')$\\ By \textit{Lemma} \ref{KB}, we know \begin{equation} \label{Hare} C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,0)=\left(\prod_{i=1}^{a}p_{i}^{m_{i}+k_{i}}\right)\left(\prod_{j=1}^{b}q_{j}^{n_{j}+r_{j}}\right).
\end{equation} Let $M=\left(\prod_{i=1}^{a}p_{i}^{m_{i}+k_{i}}\right)\left(\prod_{j=1}^{b}q_{j}^{n_{j}+r_{j}}\right)$. Hence, by \textit{Lemma} \ref{DauJi}, (\ref{029}), (\ref{jajya}) and (\ref{Hare}), we have the following \begin{equation} \label{r.1} \begin{split} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t}^{\mathbf{c}',\mathbf{d}'})(0,0) =\begin{cases} M\left(\prod_{i'=1}^{a'}p'_{i'}\right)\left(\prod_{j'=1}^{b'}q'_{j'}\right), & {\mathbf{c}}={\mathbf{c}}',{\mathbf{d}}={\mathbf{d}}'\\ 0, & {\mathbf{c}}\neq{\mathbf{c}}',{\mathbf{d}}={\mathbf{d}}'\\ 0, & {\mathbf{c}}={\mathbf{c}}',{\mathbf{d}}\neq{\mathbf{d}}'\\ 0, & {\mathbf{c}}\neq{\mathbf{c}}',{\mathbf{d}}\neq{\mathbf{d}}'. \end{cases} \end{split} \end{equation} \case $(0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}},0<\tau_2<\prod_{j=1}^{b}q_{j}^{n_{j}})$\\ Let $\sigma$ and $\rho$ be integers such that $0\leq \sigma< m'$ and $0\leq \rho< n'$, where $m'=\prod_{i'=1}^{a'}p'_{i'}$ and $n'=\prod_{j'=1}^{b'}q'_{j'}$. Then $\sigma$ and $\rho$ can be written as \begin{equation} \begin{split} &\sigma=\sigma_{1}+\sigma_{2}p'_{1}+\ldots+\sigma_{a'}\left(\prod_{i'=1}^{a'-1}p'_{i'}\right),\\ &\rho=\rho_{1}+\rho_{2}q'_{1}+\ldots+\rho_{b'}\left(\prod_{j'=1}^{b'-1}q'_{j'}\right), \end{split} \end{equation} respectively, where $0\leq \sigma_{i'}< p'_{i'}$ and $0\leq \rho_{j'}< q'_{j'}$. We define the vectors associated with $\sigma$ and $\rho$ to be \begin{equation} \begin{split} &\phi(\sigma)=(\sigma_{1},\ldots,\sigma_{a'})\in \mathbb{A}_{p'_{1}}\times\hdots\times\mathbb{A}_{p'_{a'}},\\ &\phi(\rho)=(\rho_{1},\ldots,\rho_{b'})\in \mathbb{A}_{q'_{1}}\times\hdots\times\mathbb{A}_{q'_{b'}}, \end{split} \end{equation} respectively.
The ACCF between $\Omega_{t}^{\mathbf{c},\mathbf{d}}$ and $\Omega_{t'}^{\mathbf{c}',\mathbf{d}'}$ for $0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}}$ and $0<\tau_2<\prod_{j=1}^{b}q_{j}^{n_{j}}$ can be derived as \begin{equation} \label{31} \begin{split} &C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c}',\mathbf{d}'})(\tau_1,\tau_2) =\!C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,\tau_2)DE\!+\! C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1\!-\!\prod_{i=1}^{a}p_{i}^{m_{i}},\tau_2)D'E+\\ &C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,\tau_2-\prod_{j=1}^{b}q_{j}^{n_{j}})DE' +C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1-\prod_{i=1}^{a}p_{i}^{m_{i}},\tau_2-\prod_{j=1}^{b}q_{j}^{n_{j}})D'E', \end{split} \end{equation} where \begin{eqnarray} \label{sri} D=\sum_{\sigma=0}^{m'-1}\left(\prod_{i'=1}^{a'}\omega_{p'_{i'}}^{(c_{i'}-c_{i'}')\sigma_{i'}}\right),\\ E=\sum_{\rho=0}^{n'-1} \left(\prod_{j'=1}^{b'}\omega_{q'_{j'}}^{(d_{j'}-d_{j'}')\rho_{j'}}\right),~\\ D'=\displaystyle\sum_{\sigma=0}^{m'-2}\left(\prod_{i'=1}^{a'}\omega_{p'_{i'}}^{\left(c_{i'}\sigma_{i'}-c'_{i'}\left(\sigma+1\right)_{i'}\right)}\right),\\ E'=\displaystyle\sum_{\rho=0}^{n'-2}\left(\prod_{j'=1}^{b'}\omega_{q'_{j'}}^{\left(d_{j'}\rho_{j'}-d'_{j'}\left(\rho+1\right)_{j'}\right)}\right), \end{eqnarray} and $\left(\sigma+1\right)_{i'}$ and $\left(\rho+1\right)_{j'}$ denote the $i'$-th and $j'$-th components of $\phi\left(\sigma+1\right)$ and $\phi\left(\rho+1\right)$, respectively. By \textit{Lemma} \ref{KB}, for $0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}}$ and $0<\tau_2<\prod_{j=1}^{b}q_{j}^{n_{j}}$, we have \begin{eqnarray} \label{3.1} C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,\tau_2)=0,\\ \label{3.2} C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1\!-\!\prod_{i=1}^{a}p_{i}^{m_{i}},\tau_2)=0,\\ \label{3.3} C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,\tau_2-\prod_{j=1}^{b}q_{j}^{n_{j}})=0,\\ \label{3.4} C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1-\prod_{i=1}^{a}p_{i}^{m_{i}},\tau_2-\prod_{j=1}^{b}q_{j}^{n_{j}})=0.
\end{eqnarray} By (\ref{31}), (\ref{3.1}), (\ref{3.2}), (\ref{3.3}) and (\ref{3.4}) we have \begin{equation} \label{r.2} C(\Omega_{t}^\mathbf{c,d},\Omega_{t^{'}}^{\mathbf{c}^{'},\mathbf{d}^{'}})(\tau_1,\tau_2)=0. \end{equation} \case $(0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}},-\prod_{j=1}^{b}q_{j}^{n_{j}}<\tau_2<0)$\\ The ACCF between $\Omega_{t}^{\mathbf{c},\mathbf{d}}$ and $\Omega_{t'}^{\mathbf{c}',\mathbf{d}'}$ for $0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}}$ and $-\prod_{j=1}^{b}q_{j}^{n_{j}}<\tau_2<0$ can be derived as \begin{equation} \label{muk} \begin{split} &C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(\tau_1,\tau_2)\\ &=\!C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,\tau_2)DE\!+\! C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_{1}-\prod_{i=1}^{a}p_{i}^{m_{i}},\tau_2)D'E\\ &+ C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,\prod_{j=1}^{b}q_{j}^{n_{j}}+\tau_{2})DE''+C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1-\prod_{i=1}^{a}p_{i}^{m_{i}},\prod_{j=1}^{b}q_{j}^{n_{j}}+\tau_2)D'E'', \end{split} \end{equation} where \begin{equation} \begin{split} E''=\displaystyle\sum_{\rho=0}^{n'-2}\left(\prod_{j'=1}^{b'}\omega_{q'_{j'}}^{\left(d_{j'}\left(\rho+1\right)_{j'}-d'_{j'}\rho_{j'}\right)}\right). \end{split} \end{equation} By \textit{Lemma} \ref{KB}, for $0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}}$ and $-\prod_{j=1}^{b}q_{j}^{n_{j}}<\tau_2<0$, we have \begin{eqnarray} \label{1012} C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,\prod_{j=1}^{b}q_{j}^{n_{j}}+\tau_{2})=0,\\ \label{10121} C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1-\prod_{i=1}^{a}p_{i}^{m_{i}},\prod_{j=1}^{b}q_{j}^{n_{j}}+\tau_2)=0. \end{eqnarray} By (\ref{muk}), (\ref{1012}) and (\ref{10121}) we have \begin{equation} \label{r.3} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(\tau_1,\tau_2)=0.
\end{equation} \case $(0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}},\tau_2=0)$\\ The ACCF between $\Omega_{t}^{\mathbf{c},\mathbf{d}}$ and $\Omega_{t'}^{\mathbf{c}',\mathbf{d}'}$ for $0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}}$ and $\tau_2=0$ can be derived as \begin{equation} \label{308123} \begin{split} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(\tau_1,0)=\!C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,0)DE\!+ C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1-\prod_{i=1}^{a}p_{i}^{m_{i}},0)D'E. \end{split} \end{equation} By \textit{Lemma} \ref{KB}, for $0<\tau_1<\prod_{i=1}^{a}p_{i}^{m_{i}}$, we have \begin{equation} \begin{split} \label{308000} &C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1,0)=0,\\ & C(\mathbf{A}^{t},\mathbf{A}^{t'})(\tau_1-\prod_{i=1}^{a}p_{i}^{m_{i}},0)=0. \end{split} \end{equation} By (\ref{308123}) and (\ref{308000}) we have \begin{equation} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(\tau_1,0)=0. \end{equation} \case $(\tau_1=0,0<\tau_2<\prod_{j=1}^{b}q_{j}^{n_{j}})$\\ The ACCF between $\Omega_{t}^{\mathbf{c},\mathbf{d}}$ and $\Omega_{t'}^{\mathbf{c}',\mathbf{d}'}$ for $\tau_1=0$ and $0<\tau_2<\prod_{j=1}^{b}q_{j}^{n_{j}}$ can be derived as \begin{equation} \label{3081} \begin{split} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(0,\tau_2)=\!C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,\tau_2)DE\!+ C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,\tau_2-\prod_{j=1}^{b}q_{j}^{n_{j}})DE'. \end{split} \end{equation} By \textit{Lemma} \ref{KB}, for $0<\tau_2<\prod_{j=1}^{b}q_{j}^{n_{j}}$, we have \begin{equation} \begin{split} \label{3080} &C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,\tau_2)=0,\\ & C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,\tau_2-\prod_{j=1}^{b}q_{j}^{n_{j}})=0. \end{split} \end{equation} By (\ref{3081}) and (\ref{3080}) we have \begin{equation} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(0,\tau_2)=0.
\end{equation} \case $(\tau_1=0,-\prod_{j=1}^{b}q_{j}^{n_{j}}<\tau_2<0)$\\ Similarly, the ACCF between $\Omega_{t}^{\mathbf{c},\mathbf{d}}$ and $\Omega_{t'}^{\mathbf{c}',\mathbf{d}'}$ for $\tau_1=0$ and $-\prod_{j=1}^{b}q_{j}^{n_{j}}<\tau_2<0$ is \begin{equation} \label{0912} \begin{split} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(0,\tau_2) =\!C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,\tau_2)DE\!+ C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,\tau_2+\prod_{j=1}^{b}q_{j}^{n_{j}})DE''. \end{split} \end{equation} By \textit{Lemma} \ref{KB}, for $-\prod_{j=1}^{b}q_{j}^{n_{j}}<\tau_2<0$, we have \begin{equation} \begin{split} \label{30818} & C(\mathbf{A}^{t},\mathbf{A}^{t'})(0,\tau_2+\prod_{j=1}^{b}q_{j}^{n_{j}})=0. \end{split} \end{equation} Hence by (\ref{3080}), (\ref{0912}) and (\ref{30818}) we have \begin{equation} \label{r.5} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(0,\tau_2)=0. \end{equation} \end{mycases} Combining all the cases, we have \begin{equation} \label{r.9} \begin{split} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(\tau_{1},\tau_2) = \begin{cases} M\left(\prod_{i'=1}^{a'}p'_{i'}\right)\left(\prod_{j'=1}^{b'}q'_{j'}\right), &\makecell{({\mathbf{c}},{\mathbf{d}},t)=({\mathbf{c'}},{\mathbf{d'}},t')\\(\tau_{1},\tau_{2})=(0,0),}\\ 0, &\makecell{({\mathbf{c}},{\mathbf{d}},t)\neq({\mathbf{c'}},{\mathbf{d'}},t')\\(\tau_{1},\tau_{2})=(0,0),}\\ 0, &\makecell{0\leq \tau_{1}<\prod_{i=1}^{a}p^{m_{i}}_{i},\\ (\tau_{1},\tau_{2})\neq(0,0).} \end{cases} \end{split} \end{equation} Similarly, it can be shown that \begin{equation} \label{r.10} C(\Omega_{t}^{\mathbf{c},\mathbf{d}},\Omega_{t'}^{\mathbf{c'},\mathbf{d'}})(\tau_{1},\tau_2)=0,~-\prod_{i=1}^{a}p^{m_{i}}_{i}<\tau_{1}<0. \end{equation} Hence from (\ref{r.9}) and (\ref{r.10}) we derive our conclusion.
\end{proof} \begin{example} Suppose that $a=1$, $b=1$, $a'=1$, $b'=1$, $p_1=2$, $m_1=2$, $k_{1}=1$, $q_1=3$, $n_1=2$, $r_{1}=1$, $p_{1}'=3$, $q_{1}'=2$. Then $\lambda=6$ and $\delta=6$. Let $\boldsymbol{\gamma}_{1}=(\gamma_{1,1},\gamma_{1,2})\in\mathbb{A}^{2}_{2}=\{0,1\}^{2}$ be the vector associated with ${\gamma}_{1}$, where $0\leq \gamma_{1}\leq 3$, i.e., $\gamma_{1}=\gamma_{1,1}+2\gamma_{1,2}$, and let $\boldsymbol{\mu}_{1}=(\mu_{1,1},\mu_{1,2})\in\mathbb{A}_{3}^{2}=\{0,1,2\}^{2}$ be the vector associated with $\mu_{1}$, where $0\leq\mu_{1}\leq 8$, i.e., $\mu_{1}=\mu_{1,1}+3\mu_{1,2}$, and $0\leq \gamma'_{1}\leq 2$, $0\leq \mu'_{1}\leq 1$. We define the MVF $f:\mathbb{A}_{2}^{2}\times \mathbb{A}_{3}^{2}\rightarrow \mathbb{Z}$ as \begin{equation*} \begin{split} f\left(\boldsymbol{\gamma}_{1},\boldsymbol{\mu}_{1}\right)\!\!=&3\gamma_{1,2}\gamma_{1,1}\!\!+\!\gamma_{1,1}\!+\!2\gamma_{1,2}\!+\!2\mu_{1,2}\mu_{1,1}\!+\!2\mu_{1,1}\!+\!\mu_{1,2}. \end{split} \end{equation*} Consider the MVF $M^{\mathbf{c},\mathbf{d}}:\mathbb{A}_{2}^{2}\times\mathbb{A}_{3}\times \mathbb{A}_{3}^{2}\times\mathbb{A}_{2} \rightarrow \mathbb{Z}$ as \begin{equation} \begin{split} &M^{\mathbf{c},\mathbf{d}}\left(\boldsymbol{\gamma}_{1},\gamma'_{1},\boldsymbol{\mu}_{1},\mu'_{1}\right)\\ &=f(\boldsymbol{\gamma}_{1},\boldsymbol{\mu}_{1})+2c_{1}\gamma'_{1}+3d_{1}\mu'_{1}\\ &=3\gamma_{1,2}\gamma_{1,1}+\gamma_{1,1}+2\gamma_{1,2}+2\mu_{1,2}\mu_{1,1}+2\mu_{1,1}+\mu_{1,2}+2c_{1}\gamma'_{1}+3d_{1}\mu'_{1}, \end{split} \end{equation} where $0\leq c_{1} <p'_{1}=3$, $0 \leq d_{1} < q'_{1}=2$, $\mathbf{c}=c_{1}\in\{0,1,2\}, \text{and}~ \mathbf{d}=d_{1}\in\{0,1\}$. We have \begin{equation} \begin{split} &\Theta=\{\theta:\theta=(r_{{1}}, s_{{1}}):0\leq r_1\leq 1,0\leq s_1\leq 2\},\\ &T=\{t:t=(x_{{1}}, y_{{1}}):0\leq x_1\leq 1,0\leq y_1\leq 2\}.
\end{split} \end{equation} Let $d_{\theta}=0$; now from (\ref{hare}) we have \begin{equation} b_{t}^{\theta,\mathbf{c},\mathbf{d}}=M^{\mathbf{c},\mathbf{d}}+3\gamma_{1,2}r_{1}+2\mu_{1,2}s_{1}+3\gamma_{1,1}x_{1}+2\mu_{1,1}y_{1}, \end{equation} and \begin{equation} \begin{split} \Omega_t^{\mathbf{c},\mathbf{d}}\!\!=\!\!\Big\{& \psi_{6}(b_{t}^{\theta,\mathbf{c},\mathbf{d}}):\theta=(r_{1},s_{1})\in\{0,1\}\times\{0,1,2\}\Big\}. \end{split} \end{equation} Therefore, the set \begin{equation} S=\{ \Omega_t^{\mathbf{c},\mathbf{d}}:t\in T,0\leq c_{1}\leq 2,0\leq d_{1}\leq 1 \}, \end{equation} forms an optimal $2D-(36,4\times 9)-\text{ZCACS}_{6}^{12\times 18}$ over $\mathbb{Z}_{6}$. \end{example} \begin{table}[h] \begin{center} \begin{minipage}{\textwidth} \caption{Comparison with Previous Works}\label{tab2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{\extracolsep{\fill}}} \toprule% \\% Source & No. of sets & Array Size & Condition & Based on \\ \midrule \cite{zeng2005construction} & $K=K'r$ & $L'_{1}\!\!\times\!\!(L'_{2}+r+1)$ & $r\geq 0$ & $\makecell{2D-ZCACS~of \\set~size~K'~ and \\array~size~L'_{1}\!\!\times\!\!L'_{2}}$ \\ \\ \cite{pai2021two} & 1 & $2^{m}\times 2^{n}L$ & $m,n\geq 0$ & ZCP \!of length \!$L$ \\ \cite{das2020two} & $K$ & $K\times K$& \makecell{$K$ divides set size} & BH matrices \\ \cite{roy2021construction} & $2\prod_{i=1}^{k_{i}}p^{2}_{i}$ & $2^{m} \times \prod_{i=1}^{k_{i}}p_{i}^{m_{i}}$& $k_{i},m_{i}\geq 1$, $p_{i}$'s are prime & MVF \\ \\ \textit{Thm \ref{VrindavanBihari}} & $rs\alpha$ & $rm\times sn$ & $\makecell { \alpha=(\prod_{i=1}^{a}p^{k_{i}}_{i})(\prod_{j=1}^{b}q^{r_{j}}_{j}),\\m\!\!=\!\!\prod_{i=1}^{a}\!\!p^{m_{i}}_{i},n\!\!=\!\!\prod_{j=1}^{b}\!\!q^{n_{j}}_{j},\\ r,s,\alpha\geq 1,~p_{i},q_{j}~\text{are primes}}$& MVF\\ \botrule \end{tabular*} \end{minipage} \end{center} \end{table} \begin{remark} In \textit{Theorem \ref{VrindavanBihari}}, if we take $a=1$, $p_{1}=1$, $a'=1$, $p_{1}'=1$, $b=1$, $q_{1}=2$, $b'=l$, $r_{1}\geq 2$, we have optimal 1D-ZCCS with
parameter $(\prod_{i=1}^{l}q'_{i}2^{r_{1}},2^{n_{1}})-\text{ZCCS}_{2^{r_{1}}}^{\prod_{i=1}^{l}q'_{i}2^{n_{1}}}$, which is exactly the same result as in \cite{ghosh2022direct}. Also, if we take $l=1$, then we have optimal $1$D-ZCCS of the form $(q'_{1}2^{r_{1}},2^{n_{1}})-\text{ZCCS}_{2^{r_{1}}}^{q'_{1}2^{n_{1}}}$, which is exactly the same result as in \cite{sarkar2021pseudo}. Therefore, the optimal $1$D-ZCCS given by \cite{ghosh2022direct,sarkar2021pseudo} appears as a special case of the proposed construction. \end{remark} \begin{remark} In \textit{Theorem \ref{VrindavanBihari}}, if $a=1$, $p_{1}=1$, $a'=1$, $p_{1}'=1$, $b=1$, $q_{1}=2$, $b'=l$, $r_{1}=1$, we have 1D-ZCCS with parameter $(2\prod_{i=1}^{l}q'_{i},2^{n_{1}})-\text{ZCCS}_{2}^{\prod_{i=1}^{l}q'_{i}2^{n_{1}}}$, which is just a collection of $2\prod_{i=1}^{l}q'_{i}$ ZCPs with sequence length $\prod_{i=1}^{l}q'_{i}2^{n_{1}}$ and ZCZ width $2^{n_{1}}$. Hence our work produces collections of ZCPs\cite{kumar2022direct} as well. \end{remark} \begin{remark} In \textit{Theorem \ref{VrindavanBihari}}, if we take $a=1$, $p_{1}=1$, $a'=1$, $p_{1}'=1$, $b=1$, $q_{1}=2$, $b'=r$, $q'_{1}=q'_{2}=\ldots=q'_{r}=2$, $n_{1}=m-r$~\text{and}~$r_{1}=s+1$, then we have 1D-ZCCS with parameter $(2^{s+r+1},2^{m-r})-\text{ZCCS}_{2^{s+1}}^{2^{m}}$, which is exactly the same result as in \cite{sarkar2018optimal}. Hence, the ZCCS in \cite{sarkar2018optimal} appears as a special case of our proposed construction. \end{remark} \begin{remark} The 2D-ZCACS given by the proposed construction satisfies the equality given in (\ref{318}). Therefore, the 2D-ZCACS obtained by the proposed construction is optimal.
\end{remark} \begin{remark} If we take $a=1$, $a'=1$, $p_{1}=1$ and $p'_{1}=1$ in \textit{Theorem \ref{VrindavanBihari}}, we have optimal 1D-ZCCS with parameter $\left(\left(\prod_{j'=1}^{b'}q'_{j'}\right)\prod_{j=1}^{b}q^{r_{j}}_{j},n\right)-\text{ZCCS}_{\prod_{j=1}^{b}q^{r_{j}}_{j}}^{n\left(\prod_{j'=1}^{b'}q'_{j'}\right)}$, where $n=\prod_{j=1}^{b}q_{j}^{n_{j}}$. Hence, we have optimal 1D-ZCCS of length $nm$, where $n,m>1$ and $m=\prod_{j'=1}^{b'}q'_{j'}$. Therefore, our construction produces, by a direct method, optimal 1D-ZCCS with lengths not previously available in the literature. \end{remark} \begin{remark} The set size of our proposed 2D-ZCACS is $\left(\prod_{i'=1}^{a'}p'_{i'}\right)\left(\prod_{j'=1}^{b'}q'_{j'}\right)\prod_{i=1}^{a}p^{k_{i}}_{i}\prod_{j=1}^{b}q^{r_{j}}_{j}$, where $k_{i},r_{j}\geq 1$. If we take $a=1,p_{1}=1,a'=1,p'_{1}=1,r_{1}=r_{2}=\ldots=r_{b}=2,b'=1,~\text{and}~q'_{1}=2$, then we have set size $2\prod_{j=1}^{b}q^{2}_{j}$, which is the set size of the 2D-ZCACS in \cite{roy2021construction}. Therefore, we have a more flexible range of set sizes compared to \cite{roy2021construction}. \end{remark} \subsection{Comparison with Previous Works} Table \ref{tab2} compares the proposed work with the indirect constructions of \cite{zeng2005construction,das2020two,pai2021two} and the direct construction of \cite{roy2021construction}. The constructions in \cite{zeng2005construction,das2020two,pai2021two} rely heavily on initial sequences, which increases hardware storage requirements. The construction in \cite{roy2021construction} is direct, but its set size and array sizes are limited to certain even numbers. Our construction does not require initial matrices or sequences and produces flexible parameters. \section{Conclusion} In this paper, 2D-ZCACSs are designed by using MVF. The proposed design does not depend on initial sequences or matrices, so it is direct. Our proposed design produces flexible array sizes and set sizes compared to existing works.
Also, the proposed construction can be reduced to 1D-ZCCSs; as a result, many existing 1D-ZCCSs arise as special cases of our work. Finally, we compare our work with the existing state of the art and show that it is more versatile.
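As a quick numerical illustration of the zero-correlation property underlying ZCACS, the standard 2D aperiodic cross-correlation sum can be evaluated directly. The sketch below is purely illustrative and is not tied to the specific $\psi_{6}$-based arrays of the example above; it checks complementarity for the classical length-2 Golay pair viewed as $1\times 2$ arrays:

```python
import numpy as np

def aperiodic_xcorr_2d(A, B, tau1, tau2):
    """2D aperiodic cross-correlation of arrays A, B at shift (tau1, tau2):
    the sum of A[i, j] * conj(B[i + tau1, j + tau2]) over the overlap region."""
    L1, L2 = A.shape
    total = 0j
    for i in range(L1):
        for j in range(L2):
            if 0 <= i + tau1 < L1 and 0 <= j + tau2 < L2:
                total += A[i, j] * np.conj(B[i + tau1, j + tau2])
    return total

# Classical Golay pair (1,1), (1,-1) as 1x2 arrays: their aperiodic
# autocorrelations cancel at every nonzero shift (complementarity),
# while at zero shift they add up to the total energy.
A = np.array([[1, 1]], dtype=complex)
B = np.array([[1, -1]], dtype=complex)

assert aperiodic_xcorr_2d(A, A, 0, 1) + aperiodic_xcorr_2d(B, B, 0, 1) == 0
assert aperiodic_xcorr_2d(A, A, 0, 0) + aperiodic_xcorr_2d(B, B, 0, 0) == 4
```

The same function can be used to verify the ZCZ width of any candidate array set by sweeping the shifts $(\tau_1,\tau_2)$ over the claimed zone.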
\title{Existence and rigidity of quantum isometry groups for compact metric spaces} \author{Alexandru Chirvasitu\footnote{Partially supported by NSF grant DMS-1801011} and Debashish Goswami\footnote{Partially supported by J.C. Bose Fellowship from D.S.T. (Govt. of India).}} \begin{document} \date{} \newcommand{\Addresses}{ \bigskip \footnotesize \textsc{Department of Mathematics, University at Buffalo, Buffalo, NY 14260-2900, USA}\par\nopagebreak \textit{E-mail address}: \texttt{achirvas@buffalo.edu} \medskip \textsc{Indian Statistical Institute, 203, B. T. Road, Kolkata 700108, India}\par\nopagebreak \textit{E-mail address}: \texttt{goswamid@isical.ac.in} } \maketitle \begin{abstract} We prove the existence of quantum isometry groups for new classes of metric spaces: (i) the geodesic metrics of compact connected Riemannian manifolds (possibly with boundary) and (ii) metric spaces admitting a uniformly distributed probability measure. In the former case it also follows from recent results of the second author that the quantum isometry group is classical, i.e. the commutative $C^*$-algebra of continuous functions on the Riemannian isometry group. \end{abstract} \noindent {\em Key words: compact quantum group, quantum isometry group, Riemannian manifold, geodesic, smooth action} \vspace{.5cm} \noindent{MSC 2010: 81R50, 81R60, 20G42, 58B34} \section*{Introduction} Having originated in the mathematical physics literature \cite{drinfeld,jimbo,frt,soi}, quantum groups now constitute a rich and actively-developed field. While the original impetus was mainly algebraic in nature, further developments have given the topic a functional-analytic flavor through the work of Woronowicz \cite{Pseudogroup}, Podles \cite{Podles}, Kustermans-Vaes \cite{kustermans} and many more (too numerous to do justice here).
Actions of quantum groups are typically cast as coactions of certain Hopf algebras on algebraic or geometric structures, in the style of Manin's study \cite{manin_book} of quantum symmetries for quadratic graded algebras. In the framework introduced in \cite{Pseudogroup} the types of structures whose quantum symmetries one is led to consider abound: finite (quantum) graphs, finite non-commutative measure spaces (i.e. finite-dimensional $C^*$-algebras equipped with distinguished states), finite metric spaces, etc. We refer the reader to \cite{ban_1,bichon,wang,ban_col} for some (of the numerous) examples. In the same spirit, the second author introduced in \cite{Goswami} the concept of the quantum automorphism group of a spectral triple, the latter being an incarnation of a Riemannian or spin manifold in Connes' framework for non-commutative geometry \cite{con}. The topic has provided a rich supply of problems and examples, as reflected by further work on it \cite{skalski_bhow,Soltan}. In the present paper we are concerned with quantum symmetries of {\it classical} structures, specifically compact metric spaces. One phenomenon that has emerged from recent work in the field is that certain ``sufficiently regular'' classical structures are quantum-rigid, in the sense that a compact quantum group acting faithfully in a structure-preserving manner is automatically classical, i.e. a plain compact group. The recent \cite{final} confirms a conjecture to that effect by the second author: \begin{theorem}[3.10 of \cite{final}] A compact quantum group acting faithfully and smoothly on a closed connected smooth manifold is classical. \end{theorem} In this paper we prove a stronger version of the above theorem by weakening the smoothness condition to what we have termed `weak smoothness'.
This keeps with the spirit of similar rigidity results in slightly varying settings: \begin{enumerate}[(1)] \item An analogue under the additional assumption that the action preserves the Laplacian of a Riemannian metric \cite{gafa}. \item A semisimple and cosemisimple Hopf algebra (hence also finite-dimensional) coacting faithfully on a commutative domain must be commutative \cite{Etingof_walton_1}. \item An isometric faithful action of a compact quantum group on the geodesic metric space of a negatively-curved connected closed Riemannian manifold is classical \cite{chirvasitu}. \end{enumerate} This last result is placed in the context of isometric actions as introduced in \cite{metric} and will be generalized in some of our main results below (\Cref{th.nbdry,th.rig-bdry}): \begin{theorem} A compact quantum group acting isometrically on the geodesic metric space of a compact connected Riemannian manifold is classical. \end{theorem} On a somewhat different note, a phenomenon that has received some attention in the literature is the problem of whether or not a given piece of structure even {\it has} a quantum automorphism group: a ``largest'' or universal quantum group acting in a structure-preserving manner. The issue was first illustrated in \cite[Theorem 6.1]{wang}: although a finite classical space $X$ admits a quantum automorphism group that automatically preserves the uniform measure on $X$, in general a finite-dimensional $C^*$-algebra $A$ does not admit such a universal action. The problem is that every compact quantum group acting on $A$ will automatically preserve a state on $A$, but there is no ``canonical'' state preserved by all such actions. For essentially the same reason, it is unclear whether, for a given compact metric space $(X,d)$, there is a universal compact quantum group acting isometrically on $X$ in the sense of \cite[Definition 3.1]{metric}. 
Contrast this with classical group actions: the isometry group of a compact metric space is compact, and hence is universal among classical compact groups acting isometrically. As in the case of finite-dimensional algebras touched on above, it is not difficult to show that having fixed a probability measure $\mu$ on $X$, there {\it is} a universal compact quantum group $QAUT(X,d,\mu)$ among those that act on $X$ so as to preserve both $d$ and $\mu$. As before, it is unclear in general how to select a ``best'' measure $\mu$ preserved by every quantum action in order to construct a universal quantum isometry group $QAUT(X,d)$. The choice, however, is obvious when the metric space $(X,d)$ admits a {\it uniformly distributed measure} (see \Cref{def.ud}): one which assigns equal mass to balls of equal radii. It is well known that uniformly distributed probability measures are unique when they exist. In that case we have (see \Cref{th.ud}): \begin{theorem} Let $(X,d)$ be a compact metric space admitting a uniformly distributed probability measure $\mu$. Then, every compact quantum group acting isometrically on $(X,d)$ leaves $\mu$ invariant. \end{theorem} Coupling this with the previous remarks on the existence of $QAUT(X,d,\mu)$, it follows that all such metric spaces $(X,d)$ have quantum isometry groups. These need not be classical, in general: perhaps the ``simplest'' example is the quantum symmetric group $S_n^+$ introduced in \cite[\S 3]{wang}: it can be recast as $QAUT(X,d)$ where \begin{equation*} X=\{1,\cdots,n\} \end{equation*} and $d$ is the uniform distance: \begin{equation*} d(i,j)= \begin{cases} 0&\text{if } i=j\\ 1&\text{otherwise} \end{cases} \end{equation*} The paper is organized as follows. \Cref{se.prel} recalls some background needed later, on the various topics we touch on (compact quantum groups, their actions, Riemannian geometry, etc.). 
In \Cref{se.smth} we prove some preliminary results on smooth actions, building on some of the material from \cite{gafa,final}. Finally, \Cref{se.main} contains the main results of the paper. \Cref{th.nbdry} proves that faithful isometric quantum actions on connected closed Riemannian manifolds are classical and \Cref{th.rig-bdry} extends this to compact connected manifolds with boundary. In the course of unwinding the argument we prove other results that might be of some independent interest: \begin{itemize} \item Recall that a homeomorphism of a topological manifold automatically preserves its boundary. We prove in \Cref{pr.bdry-inv} that similarly, a quantum isometric action on a compact connected manifold leaves the boundary invariant. \item We also prove in \Cref{cor.faith} that (once more, as expected from the classical situation) if a quantum isometric action as above is faithful and all connected components of the compact manifold acted upon have non-empty boundary then the restriction of the action to the boundary is again faithful. \item In \Cref{th.doubled} we extend a quantum action $\alpha$ on a compact manifold with boundary to the {\it double} $M\cup_{\partial M}M$ of the manifold in the sense of \cite[Example 9.32]{lee} and show that the doubled action retains some of the relevant properties of $\alpha$. \end{itemize} Finally, in \Cref{subse.unif} we prove that compact metric spaces which admit uniformly distributed probability measures have quantum isometry groups. \subsection*{Acknowledgements} We would like to express our warm thanks to the anonymous referees for a very thorough treatment of the initial draft; we believe their suggestions have improved the material considerably. \section{Preliminaries}\label{se.prel} \subsection{Notational conventions} We write $\cB(\cH)$ for the algebra of bounded operators on a Hilbert space $\cH$ and $\cB_0(\cH)$ for the ideal of compact operators. 
$Sp$, $\overline{Sp}$ denote the linear span and respectively the closed linear span of elements of a vector space (closed in whatever topology is relevant to the discussion). Several flavors of tensor products appear below: \begin{itemize} \item $\otimes$ is the minimal tensor product between $C^*$-algebras and more generally locally convex spaces and on one occasion, the spatial tensor product between von Neumann algebras. \item $\overline{\otimes}$ stands for the tensor product of Hilbert spaces and modules. \item $\otimes_{\rm alg}$ is the algebraic tensor product between vector spaces, non-topological algebras, etc. \item $T\otimes S$ denotes the tensor product of maps $S$ and $T$ in {\it all} of the above-mentioned cases. \end{itemize} We denote by $C(X)$ or $C^{\infty}(X)$ the spaces of continuous and smooth complex-valued functions on $X$ respectively and add an `$\bR$' to indicate real-valued functions, as in $C^{\infty}(X,\bR)$. \subsection{Compact quantum groups and their actions}\label{subse:cqg} We need some basic material on compact quantum groups and their actions on non-commutative spaces, as covered, say, in \cite{Van,Pseudogroup,Wor98}. The present section serves to recall some of this material. 
A compact quantum group (CQG for short) is a unital $C^{\ast}$-algebra ${\cal Q}$ equipped with a $C^*$-algebra morphism $\Delta$, {\it coassociative} in the sense that \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cQ$} +(2,.5) node (2) {$\cQ\otimes \cQ$} +(2,-.5) node (3) {$\cQ\otimes \cQ$} +(4,0) node (4) {$\cQ\otimes\cQ\otimes \cQ$}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \Delta$} (2); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \Delta$} (3); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \Delta\otimes\id$} (4); \draw[->] (3) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \id\otimes \Delta$} (4); \end{tikzpicture} \end{equation*} commutes and such that \begin{equation*} \Delta({\cal Q})({\cal Q}\otimes 1),\quad \Delta({\cal Q})(1\otimes {\cal Q})\quad \subseteq\quad \cQ\otimes \cQ \end{equation*} are both norm-dense. This suffices to ensure the existence of a unique dense Hopf $*$-subalgebra $\cQ_0\subseteq \cQ$, equipped with a counit $\varepsilon:\cQ_0\to \bC$ and an antipode $\kappa:\cQ_0\to \cQ_0$. For every compact quantum group $\cQ$ the convolution multiplication \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cQ$} +(2,.5) node (2) {$\cQ\otimes \cQ$} +(4,0) node (3) {$\bC$}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \Delta$} (2); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \varphi\otimes\psi$} (3); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \varphi*\psi$} (3); \end{tikzpicture} \end{equation*} of states $\varphi$ and $\psi$ makes the state space $S(\cQ)$ (or $\mathrm{Prob}(\cQ)$) of $\cQ$ into a semigroup (or monoid if $\cQ$ has a bounded counit).
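For orientation, in the commutative case $\cQ=C(G)$ for a compact group $G$, where $\Delta(f)(g,h)=f(gh)$, the convolution just defined recovers the classical convolution of probability measures:

```latex
\begin{equation*}
(\varphi*\psi)(f)=(\varphi\otimes\psi)\Delta(f)=\int_G\int_G f(gh)\,d\varphi(g)\,d\psi(h),
\qquad f\in C(G),
\end{equation*}
```

so $\varphi*\psi$ is the push-forward of the product measure $\varphi\times\psi$ along the multiplication map $G\times G\to G$.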
A compact quantum group $\cQ$ has a unique {\it Haar state} $h$ characterized by the fact that it ``absorbs'' every other state under convolution: \begin{equation*} \varphi*h = h*\varphi = h,\ \forall \varphi\in S(\cQ). \end{equation*} A compact quantum group is {\it reduced} if its Haar state is faithful. Every compact quantum group $\cQ$ has a reduced version $\cQ_r$ defined as the image of the GNS representation of the Haar state. The comultiplication of $\cQ$ descends through the quotient $\cQ_r$, making the latter into a CQG again. \begin{definition}\label{def:act} \label{def.CQG_action} A unital $\ast$-homomorphism $\alpha:{\cal C}\rightarrow {\cal C} \otimes {\cal Q}$, where ${\cal C}$ is a unital $C^\ast$-algebra and ${\cal Q}$ is a CQG, is said to be an action of ${\cal Q}$ on ${\cal C}$ if \begin{enumerate}[(1)] \item\label{item:11} the diagram \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cC$} +(2,.5) node (2) {$\cC\otimes \cQ$} +(2,-.5) node (3) {$\cC\otimes \cQ$} +(4,0) node (4) {$\cC\otimes\cQ\otimes \cQ$}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (2); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \alpha$} (3); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha\otimes\id$} (4); \draw[->] (3) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \id\otimes \Delta$} (4); \end{tikzpicture} \end{equation*} commutes (co-associativity) and \item\label{item:12} $Sp \ \alpha({\cal C})(1\otimes {\cal Q})$ is norm-dense in ${\cal C} \otimes {\cal Q}$. \end{enumerate} $\alpha$ is {\it faithful} if \begin{equation*} Sp\{(\varphi\otimes\id)\alpha(x)\ |\ x\in \cC,\ \varphi\in S(\cC)\} \end{equation*} generates $\cQ$ as a $C^*$-algebra. 
\end{definition} An action $\alpha$ as in \Cref{def.CQG_action} induces a right action of the semigroup $S(\cQ)$ introduced above on the state space $S(\cC)$ of $\cC$, denoted by $\triangleleft$ and defined by \begin{equation}\label{eq:12} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cC$} +(2,.5) node (2) {$\cC\otimes \cQ$} +(4,0) node (3) {$\bC$.}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (2); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \varphi\otimes\psi$} (3); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \varphi\triangleleft\psi$} (3); \end{tikzpicture} \end{equation} An action $\alpha$ of ${\cal Q}$ on ${\cal C}$ induces an action $\alpha_r$ by the reduced version ${\cal Q}_r$ of $\cQ$: \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cC$} +(2,.5) node (2) {$\cC\otimes \cQ$} +(4,0) node(3){$\cC\otimes \cQ_r$,}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (2); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \id\otimes \pi_{\cQ}$} (3); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \alpha_r$} (3); \end{tikzpicture} \end{equation*} where $\pi_{\cQ}:\cQ\to \cQ_r$ is the canonical surjection. The original action $\alpha$ is faithful if and only if $\alpha_r$ is. 
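To keep the classical picture in mind: if $\cQ=C(G)$ acts on $\cC=C(X)$ via $\alpha(f)(x,g)=f(gx)$, the action $\triangleleft$ of \Cref{eq:12} is simply push-forward of measures. For instance, for a point mass $\delta_x$ and a probability measure $\psi$ on $G$,

```latex
\begin{equation*}
(\delta_x\triangleleft\psi)(f)=(\delta_x\otimes\psi)\alpha(f)=\int_G f(gx)\,d\psi(g),
\qquad f\in C(X),
\end{equation*}
```

i.e. $\delta_x\triangleleft\psi$ is the distribution of the random point $gx$ with $g$ distributed according to $\psi$.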
For every action $\alpha$ there is a dense $*$-subalgebra $\cC_0\subseteq \cC$ on which $\alpha$ restricts to a purely algebraic coaction of the Hopf algebra $\cQ_0\subseteq \cQ$: \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cC_0$} +(2,.5) node (mu) {$\cC$} +(2,-.5) node (md) {$\cC_0\otimes_{\rm alg}\cQ_0$} +(4,0) node (2) {$\cC\otimes \cQ$,}; \draw[right hook->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (mu); \draw[->] (mu) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (2); \draw[right hook->] (md) to[bend right=6] node[pos=.5,auto] {$\scriptstyle $} (2); \draw[->] (1) to[bend right=6] node[pos=.5,auto] {$\scriptstyle $} (md); \end{tikzpicture} \end{equation*} where the hooked arrows are the obvious inclusions. Following \cite{Wor98} (or rather paraphrasing it), recall that a {\it unitary representation} of a CQG $(\cQ,\Delta)$ on a Hilbert space $\cH$ is a unitary $U$ in the space $\cL(\cH\overline{\otimes}\cQ)$ of adjointable operators on the Hilbert $\cQ$-module $\cH\overline{\otimes}\cQ$ such that the linear map \begin{equation*} V:\cH\to \cH\overline{\otimes}\cQ,\quad V(\xi):=U(\xi\otimes 1) \end{equation*} makes \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cH$} +(2,.5) node (2) {$\cH\overline{\otimes} \cQ$} +(2,-.5) node (3) {$\cH\overline{\otimes} \cQ$} +(4,0) node (4) {$\cH\overline{\otimes}\cQ\otimes \cQ$}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle V$} (2); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle V$} (3); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle V\otimes\id$} (4); \draw[->] (3) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \id\otimes \Delta$} (4); \end{tikzpicture} \end{equation*} commute. 
\begin{definition}\label{def.impl} An action $\alpha$ as in \Cref{def.CQG_action} is {\it implemented} by a unitary representation $U$ of $\cQ$ on $\cH$ if we can represent \begin{equation*} \pi:\cC\subset \cB(\cH) \end{equation*} faithfully on a Hilbert space such that \begin{equation*} \alpha(a)=U(\pi(a)\otimes 1)U^{-1} \end{equation*} for all $a\in \cC$. \end{definition} It is not difficult to see that if $\alpha$ is implemented by a unitary representation $U$ then it is one-to-one (or injective). We can say even more: $U$ induces a unitary representation \begin{equation*} U_r:=(\id\otimes \pi_{\cQ})U \end{equation*} of $\cQ_r$, where $\pi_{\cQ}:\cQ\to \cQ_r$ is the reduction surjection. It is then easy to check that $U_r$ implements the reduced counterpart $\alpha_r$ of $\alpha$, and hence $\alpha_r$ is injective. The converse also holds: an injective reduced action $\alpha$ is implemented by a unitary representation $U$ of $\cQ$. To see this, consider the family of all states $\varphi_i$, $i\in I$, on $\cC$. Since \begin{equation*} \alpha:\cC\to \cC\otimes \cQ \end{equation*} is an embedding, the compositions \begin{equation*} \overline{\varphi_i}:=\varphi_i\triangleleft h = (\varphi_i\otimes h)\circ \alpha \end{equation*} defined in \Cref{eq:12} form a jointly faithful family on $\cC$, and hence the direct sum of their attached GNS representations \begin{equation*} \rho_i:\cC\to \cB(L^2(\cC,\overline{\varphi_i})) \end{equation*} is faithful.
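The invariance of the states $\overline{\varphi_i}$, used in the next step, is a one-line consequence of coassociativity and the absorbing property of the Haar state: for every state $\psi\in S(\cQ)$,

```latex
\begin{equation*}
\overline{\varphi_i}\triangleleft\psi
=(\varphi_i\otimes h\otimes\psi)(\alpha\otimes\mathrm{id})\alpha
=(\varphi_i\otimes h\otimes\psi)(\mathrm{id}\otimes\Delta)\alpha
=(\varphi_i\otimes(h*\psi))\alpha
=(\varphi_i\otimes h)\alpha
=\overline{\varphi_i}.
\end{equation*}
```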
Furthermore, because each $\overline{\varphi_i}$ is {\it invariant} under $\alpha$ in the sense that \begin{equation*} \overline{\varphi_i}\triangleleft \psi = \overline{\varphi_i} \end{equation*} for all states $\psi\in S(\cQ)$, the map \begin{equation*} a\otimes q\mapsto \alpha(a)(1\otimes q),\ a\in \cC,\ q\in \cQ \end{equation*} extends to a unitary representation of $\cQ$ on the underlying space \begin{equation*} \bigoplus_{i\in I} L^2(\cC,\overline{\varphi_i}) \end{equation*} of the direct sum representation $\bigoplus_i \rho_i$ which as desired, implements $\alpha$. When $\cC$ is classical, i.e. $C(X)$ for a compact Hausdorff space $X$, the invariant states $\overline{\varphi_i}$ are probability measures $\mu_i$ on $X$ and hence $\alpha$ is induced by a unitary representation of $\cQ$ on the direct sum of Hilbert spaces $L^2(X,\mu_i)$. Furthermore, when $X$ admits a {\it single} faithful probability measure (e.g. if $X$ is metrizable) then we can represent $\cQ$ on a single Hilbert space $L^2(X,\mu)$. \subsection{Isometric actions}\label{subse.metric} Let $(X,d)$ be a compact metric space and $\cQ$ a compact quantum group acting faithfully on $X$. We always assume $\cQ$ has a bounded antipode whenever referring to isometric actions. This is mostly harmless in our circumstances: according to \cite[Theorem 3.16]{hua-inv} compact quantum groups acting faithfully on a unital $C^*$-algebra so as to preserve a tracial state are automatically {\it of Kac type} in the sense that their antipodes are involutive ($\kappa^2=\id$) on the unique dense Hopf subalgebra of $\cQ$. $\kappa$ then descends to a bounded multiplication-reversing $*$-automorphism of the reduced counterpart $\cQ_r$ of $\cQ$ and we can always pass to the reduced version $\alpha_r$ of the action $\alpha$. 
We follow \cite[Definition 3.1 and Lemma 3.2]{metric} in defining the notion of an isometric action of a compact quantum group $\cQ$ on $X$: \begin{definition}\label{def.isometric} Let $(X,d)$ be a compact metric space and write $d_x$ for the function $d(x,-)$ and $\cC=C(X)$. A faithful action $\alpha:\cC\to \cC\otimes \cQ$ is {\it isometric} if \begin{equation*} \alpha(d_x)(y)=\kappa(\alpha(d_y)(x)) \end{equation*} for all $x,y\in X$, where $\kappa$ is the bounded antipode of $\cQ$. \end{definition} Note that if $\alpha$ is isometric then so is $\alpha_r$, and moreover by \cite[Proposition 3.10]{Chi15} \begin{equation*} \alpha_r:\cC\to \cC\otimes \cQ \end{equation*} is one-to-one. We will make crucial use of this below. \subsection{Manifolds with boundary}\label{subse:prel-bdry} In \Cref{subse.bdry} we work extensively with (compact) smooth and Riemannian manifolds with boundary. Much of the general background extends from the boundaryless case without issue, but it is sometimes difficult to locate appropriate references in the literature. With that in mind, we give a few references here. \cite{lee} is a good overall source, given that care is taken throughout to phrase results so that they apply to manifolds with boundary. In particular, smooth structures on such manifolds are introduced on \cite[pp.27-29]{lee} and Riemannian structures in \cite[Chapter 13]{lee}. Assuming for simplicity that our manifold is embedded into a Euclidean space of the same dimension (the reasoning carries through in general by picking coordinate patches, etc.), the discussion on \cite[p.27]{lee} shows that the following two conditions on a function $f:M\to \bR$ (or $\bC$) are equivalent: \begin{itemize} \item $f$ extends to a smooth map on an open neighborhood of $M$; \item $f$ is continuous on $M$, smooth in the interior of $M$, and all of its partial derivatives extend continuously to the boundary $\partial M$. 
\end{itemize} These are the functions one naturally regards as smooth on $M$ even when $\partial M\ne \emptyset$, and we denote the algebra they constitute by \begin{itemize} \item $C^{\infty}(M)$ for complex-valued functions; \item $C^{\infty}(M,\bR)$ in the real-valued case. \end{itemize} We focus on the former to fix ideas, but everything of substance mentioned below is valid for real-valued functions. The suprema of all of the partial derivatives form a family of seminorms making $C^{\infty}(M)$ into a locally convex topological vector space, and one proves as ``usual'' (i.e. in the boundaryless case) that $C^{\infty}(M)$ is nuclear (e.g. \cite[Corollary to Theorem 51.5]{trev}). We will also work with smooth maps on $M$ valued in a $C^*$-algebra (like, say, the compact quantum group function algebras $\cQ$ discussed above). We denote these by \begin{equation*} C^{\infty}(M,\cQ):=C^{\infty}(M)\otimes \cQ, \end{equation*} where the tensor of locally convex vector spaces is unambiguous by nuclearity. Concretely, the elements of $C^{\infty}(M,\cQ)$ are those functions $M\to \cQ$ which \begin{itemize} \item are continuous on $M$; \item smooth in the interior of $M$ in the usual sense that all higher derivatives exist; \item admit continuous extensions of all partial derivatives to the boundary $\partial M$. \end{itemize} We will need the following version of Nachbin's approximation theorem for algebras of smooth functions (e.g. \cite[Theorem 1.2.1]{llav}), which is usually stated as the equivalence of \Cref{item:7} and \Cref{item:8} below. \begin{theorem}\label{th:nachb} Let $M$ be an $n$-dimensional compact smooth manifold and $\cA\subseteq C^{\infty}(M,\bR)$ a unital subalgebra. 
The following conditions are equivalent: \begin{enumerate}[(1)] \item\label{item:7} $\cA\subseteq C^{\infty}(M,\bR)$ is Fr\'echet-dense; \item\label{item:8} $\cA$ separates points and tangent vectors, in the sense that for each non-zero vector $v\in T_xM$ we have $df_x(v)\ne 0$ for some $f\in \cA$; \item\label{item:9} $\cA$ separates points and \begin{equation*} \dim\{df_x,\ f\in \cA\}=n,\quad \forall x\in M. \end{equation*} \item\label{item:10} There is some finite subset $f_i\in \cA$, $1\le i\le k$ such that \begin{equation*} M\ni x\mapsto (f_1(x),\ \cdots,\ f_k(x))\in \bR^k \end{equation*} is an embedding. \end{enumerate} \end{theorem} \begin{proof} {\bf \Cref{item:7} $\Rightarrow$ \Cref{item:8}} This is immediate from the fact that certainly, $C^{\infty}(M,\bR)$ itself satisfies the separation conditions in \Cref{item:8}. {\bf \Cref{item:8} $\Rightarrow$ \Cref{item:9}} If the differentials $df_x$ spanned a space of dimension $<n$ then they would have to annihilate some non-zero vector in the $n$-dimensional space $T_xM$. {\bf \Cref{item:9} $\Rightarrow$ \Cref{item:10}} We denote by $\overline{\cA}\subseteq C^{\infty}(M,\bR)$ the Fr\'echet closure of $\cA$ and seek to show that the inclusion is an equality. For an arbitrary $x\in M$ some $k$-tuple \begin{equation*} \Psi:=(f_1,\cdots,f_k)\in \cA^k \end{equation*} has non-zero Jacobian around $x$ and hence is a local $C^\infty$ coordinate system around $x$. We thus have {\it local} embeddability by functions in $\cA$. By the point-separation assumption (and the fact that $\cA$ is unital) the standard Stone-Weierstrass theorem (e.g. \cite[p.122]{rud-fa}) shows that $\cA$ is dense in $C(M,\bR)$ with the supremum norm. It follows that for every inclusion \begin{equation*} \overline{V}\subset U \end{equation*} with open $U,V\subseteq M$ and every $\varepsilon>0$ there are functions $\varphi\in\cA$ with \begin{equation*} \sup_{V}|1-\varphi|<\varepsilon\quad \text{and}\quad \sup_{M\setminus U}|\varphi|<\varepsilon. 
\end{equation*} Since there are smooth functions $\theta:\bR\to \bR$ with \begin{equation*} \theta\circ \varphi|_V\equiv 1\quad\text{and}\quad \theta\circ \varphi|_{M\setminus U}\equiv 0 \end{equation*} and $\theta\circ \varphi\in \overline{\cA}$, the latter algebra contains arbitrary ``bump'' functions: equal to $1$ in any given open set $V$ and $0$ outside any given superset of the closure of $V$. Because $M$ is compact, the above local-embeddability conclusion and the existence of bump functions show that we can cover $M$ with finitely many open $U_j$ such that \begin{itemize} \item functions $f_{j,i}\in \cA$, $1\le i\le k_j$ implement an embedding into $\bR^{k_j}$ of the union of all $U_{j'}$ with \begin{equation*} \overline{U_j}\cap \overline{U_{j'}}\ne \emptyset; \end{equation*} \item for $j\ne j'$ such that \begin{equation*} \overline{U_j}\cap \overline{U_{j'}}= \emptyset; \end{equation*} we have a function $\psi_{j,j'}$ equal to $1$ on $U_j$ and $0$ on $U_{j'}$. \end{itemize} The tuple $\psi_{j,j'}f_{j,i}$ (for all $i$, $j$ and $j'$ as above) will be the desired $f_i$, $1\le i\le k$. {\bf \Cref{item:10} $\Rightarrow$ \Cref{item:7}} Every smooth function on $M$ will be Fr\'echet-approximable by polynomials in the $f_i$, $1\le i\le k$. \end{proof} \begin{remark}\label{re.loc-not-glob} The purely local condition on the differentials of $f\in \cA$ would not have sufficed in \Cref{item:8} or \Cref{item:9} of \Cref{th:nachb}: consider for instance the algebra $\cA$ of {\it even} smooth functions on the standard sphere $\bS^n$ (`even' in the sense that $f(x)=f(-x)$). It satisfies the local condition but not the point-separation requirement in the statement above. \end{remark} \subsection{Riemannian geometry}\label{subse.ri} This will be very brief, as good reference sources abound (the reader can consult \cite{doC92,ccl} for instance), though references are much richer for manifolds {\it without} boundary. 
All of our manifolds are assumed compact, smooth and connected unless specified otherwise. Given a (compact, smooth, connected) Riemannian manifold $M$ we typically denote by $d$ its {\it geodesic distance}, i.e. \begin{equation*} d(x,y) = \inf_{\gamma}\text{length of $\gamma$}, \end{equation*} where $\gamma$ ranges over the Lipschitz curves connecting $x$ and $y$ (e.g. \cite[p.2]{grom-struct}). According to the celebrated Hopf-Rinow theorem (e.g. \cite[p.9]{grom-struct}), the compactness (and hence completeness) of $M$ as a metric space under $d$ implies that the distance $d(x,y)$ is always achieved by a {\it minimizing geodesic} (\cite[Definition 1.9]{grom-struct}). In general, we follow standard convention in referring to a curve that locally achieves $d$ as a geodesic (e.g. \cite[Introduction]{abb-cauchy}). There is a positive $\delta>0$ such that all functions \begin{equation*} d^2_x(-) := d(x,-)^2,\ x\in \overset{\circ}{M} = M\setminus\partial M \end{equation*} are smooth on balls in the interior of $M$ of radius $\le \delta$. Indeed, we can simply choose $\delta$ sufficiently small to allow for {\it normal coordinates} in every such ball, where we recall (e.g. \cite[p.145]{ccl}) that a coordinate system on an open neighborhood $U$ of $x\in M$ is normal if the exponential map \begin{equation*} \exp:T_xM\to M \end{equation*} maps some open ball around $0\in T_xM$ diffeomorphically onto $U$. The squared distance $d^2_x$ can then be identified, in $\delta$-small neighborhoods around $x$, with the squared Euclidean distance; clearly, the latter is smooth. 
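Concretely, in the normal coordinates just described the identification reads (for $\delta$ below the injectivity radius, by the Gauss lemma):

```latex
\begin{equation*}
d_x^2(\exp_x v)=\|v\|_{g_x}^2
\qquad\text{for } v\in T_xM,\ \|v\|_{g_x}<\delta,
\end{equation*}
```

and the right-hand side is a quadratic, hence visibly smooth, function of the normal coordinates.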
In order to ``cut off'' large problematic distances where $d^2$ might fail to be smooth we will often work with \begin{equation}\label{eq:2} D(-,-) := \psi\circ d^2(-,-) \end{equation} for a smooth ``bump'' function $\psi:\bR\to \bR$ equal to the identity on, say, $\left(-\frac \delta 2,\frac\delta 2\right)$ and vanishing outside $(-\delta,\delta)$ (where $\delta>0$ is chosen sufficiently small, as explained above). \begin{remark}\label{re.smth_Dy} For sufficiently small $\delta$ the map $D:M\times M\to \bR$ is smooth and hence so is \begin{equation*} y\mapsto D_y\in C(M), \end{equation*} where smoothness of a map into a possibly-infinite-dimensional Banach space means $C^{\infty}$, as defined for instance on \cite[pp.7-8]{lang-diff}. \end{remark} When working with Riemannian manifolds $M$ with boundary $\partial M\ne \emptyset$ we take it for granted that the Riemannian structure can be extended to a closed (i.e. compact, boundary-less) manifold $N\supset M$. Such an extension result follows, for instance, from \cite[Theorem A]{pv}. \section{Smooth actions revisited}\label{se.smth} In the present section we work with closed manifolds only, i.e. the assumption $\partial M=\emptyset$ is in place throughout. We refer to \cite{gafa} for a detailed discussion on the natural Fr\'echet topology of $C^\infty(M)$ as well as the space of ${\cal B}$-valued smooth functions $C^\infty(M, {\cal B})$ for any Banach space ${\cal B}$. Indeed, by the nuclearity of $C^\infty(M)$ as a locally convex space, $C^\infty(M, {\cal B})$ is the unique topological tensor product of $C^\infty(M)$ and ${\cal B}$ in the category of locally convex spaces. This allows us to define $T \otimes {\rm id}$ from $C^\infty(M, {\cal B})$ for any Fr\'echet continuous linear map $T $ from $ C^\infty(M) $ to $C^\infty(M)$ (or, more generally, to some other locally convex space). 
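On finite sums of elementary tensors, which are dense in $C^\infty(M, {\cal B})$, this extension acts in the expected way: \begin{equation*} (T\otimes {\rm id})\Bigl(\sum_{i=1}^N f_i\otimes b_i\Bigr) = \sum_{i=1}^N T(f_i)\otimes b_i, \qquad f_i\in C^\infty(M),\ b_i\in {\cal B}, \end{equation*} and the nuclearity of $C^\infty(M)$ guarantees that the continuous extension to all of $C^\infty(M,{\cal B})$ is well defined, independently of the choice of tensor product topology.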
We also recall from \cite{gafa} the space $\Omega^1(M)\equiv \Omega^1(C^\infty(M))$ of smooth one-forms and the space $\Omega^1(M, {\cal B})$ of smooth ${\cal B}$-valued one-forms, as well as the natural extension of the differential map $d$ to a Fr\'echet continuous map from $C^\infty(M, {\cal B})$ to $\Omega^1(M, {\cal B})$. In fact, for $F \in C^\infty(M, {\cal B})$, the element $dF \in \Omega^1(M, {\cal B})$ is the unique element satisfying $({\rm id } \otimes \xi)(dF(m))=(dF_\xi)(m),$ for every continuous linear functional $\xi$ on ${\cal B}$, where $m \in M,$ $dF(m) \in T^*_m M \otimes_{\rm alg} {\cal B}$ and $F_\xi \in C^\infty(M)$ is given by $F_\xi(x):=\xi(F(x))~\forall x \in M.$ The notion of smooth action given below follows \cite{gafa}; we supplement it here with a weaker notion, as follows. \begin{definition}\label{def.smth} An action $\alpha$ of a CQG ${\cal Q}$ on $C(M)$ is {\it weakly smooth} if \begin{equation*} \alpha(C^\infty(M)) \subseteq C^\infty(M, {\cal Q}). \end{equation*} $\alpha$ is {\it smooth} if it is weakly smooth and \begin{equation*} \overline{Sp} \ \alpha(C^{\infty}(M))(1\otimes {\cal Q}) = C^\infty(M, {\cal Q}) \end{equation*} in the Fr\'echet topology. \end{definition} \begin{remark} In case ${\cal Q}=C(G)$ where $G$ is a compact group acting on $M$, say by $\alpha_g: x \mapsto gx$, the smoothness of the induced action $\alpha$ given by $\alpha(f)(x,g)=f(gx)$ on $C(M)$ in the sense of the above definition is equivalent to the smoothness of the map $M \ni x \mapsto gx$ for each $g$. Moreover, in this case smoothness and weak smoothness are equivalent. \end{remark} It is proved in \cite[Corollary 3.3]{injective} that for any smooth action $\alpha$, the corresponding reduced action $\alpha_r$ is injective and hence it is implemented by some unitary representation.
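To make the classical situation of the preceding remark explicit: under the identification $C(M, C(G))\cong C(M\times G)$ one has \begin{equation*} \alpha(f)(x) = \bigl(g\mapsto f(gx)\bigr)\in C(G), \qquad f\in C^\infty(M), \end{equation*} so that weak smoothness of $\alpha$ amounts to the joint continuity in $g$ of all $x$-derivatives of $(x,g)\mapsto f(gx)$; as asserted in the remark, this holds precisely when each map $x\mapsto gx$ is smooth.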
Moreover, the arguments in \cite{Podles} can be adapted to prove that for a smooth action there is a norm-dense unital $\ast$-subalgebra ${\cal C}_0$ consisting of smooth functions on which $\alpha$ is algebraic. Indeed, this follows from the fact that the spectral projection $P_\pi$ corresponding to any irreducible unitary representation $\pi$ leaves $C^\infty(M)$ invariant and $P_\pi(C^\infty(M))$ is clearly norm-dense in $P_\pi(C(M))$ as $P_\pi$ is a norm-bounded linear operator. The norm-density of $\cC_0$ is crucial in \cite[\S 3.2]{final} in producing a Riemannian structure invariant (along with its Laplacian) under the action $\alpha$. We cannot rely on those results directly when the action is only weakly smooth, hence the additional effort below. We will prove an analogue of the main result of Subsection 3.1 of \cite{final}. To make sense of the statement, recall (e.g. \cite[Definition 3.3]{final}) \begin{definition}\label{def:riempres} Let $M$ be a Riemannian manifold and $\alpha:C(M)\to C(M,\cQ)$ a weakly smooth action. Casting the Riemannian structure as a sesquilinear pairing \begin{equation*} \langle -,- \rangle:\Omega^1(M)\times \Omega^1(M)\to C^{\infty}(M) \end{equation*} on 1-forms (complex anti-linear in the first variable), we say that $\alpha$ {\it preserves} (or {\it leaves invariant}) the Riemannian structure if \begin{equation*} \langle d\alpha(f),d\alpha(g)\rangle_{\cQ} = \alpha(\langle df,dg\rangle), \ \forall f,g\in C^{\infty}(M) \end{equation*} where \begin{itemize} \item $d\alpha(f) = (d\otimes \id)\alpha(f)\in \Omega^1(M)\otimes \cQ$ simply applies the differential on the left hand leg of $\alpha(f)$ (this makes sense by weak smoothness); \item the $\cQ$-valued inner product is the extension of \begin{equation*} (\omega_1\otimes x, \omega_2\otimes y)\mapsto \langle \omega_1,\omega_2\rangle\otimes x^*y \end{equation*} for $\omega_i\in \Omega^1(M)$ and $x,y\in \cQ$. 
\end{itemize} \end{definition} We can now state \begin{theorem}\label{th.prsrv} \label{metric_pres} Let $\alpha$ be a weakly smooth action of a CQG ${\cal Q}$ on a compact Riemannian manifold $M$ such that the corresponding reduced action $\alpha_r$ is injective. Then $\alpha$ preserves some Riemannian metric on $M$. \end{theorem} \begin{proof} If we carefully examine the steps of \cite[Theorem 3.7 and Corollary 3.8]{final} it becomes clear that we only need the unitary $U$ which implements the action and the fact that \begin{equation*} \alpha(C^\infty(M))\subseteq C^\infty(M,\cQ). \end{equation*} Adapting those arguments, we can conclude that \begin{equation}\label{eq:dfg} d\alpha(f)\alpha(g)=\alpha(g) d\alpha(f),\ \forall f,g \in {\cal C}_0. \end{equation} However, ${\cal C}_0$ is only norm-dense. Using that norm-density, we get the above identity for all $g \in C(M)$ and all $f \in {\cal C}_0$. Now, we fix $g \in C^\infty(M)$ and use the Leibniz rule (and the commutativity of $\alpha(f)$ with $\alpha(g)$), which gives $\alpha(f)d\alpha(g)=d\alpha(g)\alpha(f)$ for all $g \in C^\infty(M),~f \in {\cal C}_0$. Again using the norm-density of ${\cal C}_0$, we conclude $d\alpha(f)\alpha(g)=\alpha(g) d\alpha(f)$ for all $f,g \in C^\infty(M)$, hence the argument of \cite[Theorem 3.5]{gafa} applies and completes the proof of the present theorem. \end{proof} Following \cite[(3)]{gafa}, for $m\in M$ we denote \begin{equation}\label{eq:qm'} \cQ'_m:=\text{unital $*$-algebra generated by } \{\alpha(f)(m),\ (X\otimes\id)\alpha(f)(m)\} \end{equation} for $f\in C^{\infty}(M)$ and smooth vector fields $X$ on $M$. In the course of the proof of \Cref{th.prsrv} we have shown that \begin{lemma}\label{le:qm'} All $\cQ'_m$, $m\in M$ are commutative. \end{lemma} \begin{proof} Indeed, this is what \Cref{eq:dfg} says. \end{proof} We now want to prove the commutativity among higher order partial derivatives.
This involves a lift to the cotangent bundle which we can do by following the arguments of \cite[Lemma 3.10]{final} verbatim. However, in order to be able to apply \Cref{metric_pres} to the lift, we must ensure that the corresponding reduced action for the lift is injective. This is equivalent to proving the existence of a faithful positive Borel measure on the sphere bundle of the cotangent space which is preserved by the lifted action. We do this in a few steps. The proof requires some notation. Having fixed an invariant Riemannian metric on $M$, we write $S$ for the unit sphere bundle of the cotangent bundle on $M$: \begin{equation*} \pi:S\subset T^*M\to M. \end{equation*} The typical element of $T^*M$ will be denoted by $(x,\omega)$, where $\omega\in T^*_xM$ is a cotangent vector. As in \cite[\S 3.3]{final}, for a local chart $U$ on $M$ trivializing $T^*M$ with coordinates $x_1,\cdots,x_n$ we define functions $t^U_j\in C^{\infty}(S)$ by \begin{equation*} t^U_j(x,\omega):=\langle \omega,\omega_j(x)\rangle_x \end{equation*} where $\omega_1,\cdots,\omega_n$ is a fixed set of $1$-forms on $M$ orthonormal at every point in $U$ and $\langle-,-\rangle_x$ is the inner product on $T^*_xM$ induced by the Riemannian metric (note the slight abuse of notation: $t^U_j$ depends on the choice of $\omega_j$). We also define functions $T^U_j\in C^{\infty}(S,\cQ)$ as follows: having extended $\alpha$ to an action $d\alpha$ on the $C^{\infty}(M)$-module of $1$-forms as in \cite[\S 3.2]{gafa} (where that extension is denoted $d\alpha_{(1)}$) and denoting by $\langle\langle-,-\rangle\rangle_x$ the $\cQ$-valued inner product on the Hilbert $\cQ$-module $T^*_xM\otimes \cQ$, we set \begin{equation*} T^U_j(x,\omega):=\langle\langle\omega\otimes 1,d\alpha(\omega_j)(x)\rangle\rangle_x. 
\end{equation*} As in \cite[\S 3.3]{final}, we construct an action $\beta$ of ${\cal Q}$ on $S$ given by \begin{equation*} \beta(f t^U_j)=\alpha(f)T^U_j,\quad f \in C_c^\infty(U). \end{equation*} However, in our case, $\beta$ is only a $C^*$-action, weakly smooth in the sense that \begin{equation*} \beta(C^{\infty}(S))\subset C^{\infty}(S,\cQ). \end{equation*} Note that we have used the continuity of $\alpha$ in the Fr\'echet topology, which follows from weak smoothness by the closed graph theorem. \begin{lemma}\label{higher_comm} Assume the hypotheses of \Cref{th.prsrv}. For any point $x\in M$ and local coordinates $(x_1, \ldots, x_n)$ around $x$, the algebra generated by $\alpha(f)(x), \frac{\partial}{\partial x_{i_1}}\ldots \frac{\partial}{\partial x_{i_k}}\alpha(g)(x)$, where $f,g \in C^\infty(M),$ $k \geq 1$ and $i_j \in \{1,\ldots, n\}$, is commutative. \end{lemma} \begin{proof} Let $\mu$ be a faithful positive Borel measure on $M$ preserved by $\alpha$. Let $\mu_0$ denote the unique $O(n)$-invariant Borel probability measure (normalized Lebesgue measure) on $S^{n-1}$; we then have a canonical positive, faithful Borel measure on $S$ given by the product measure $\mu \times \mu_0$ on any local trivialization. We call this measure $\nu$ and claim that it is preserved by $\beta$. Choose and fix any locally trivializing neighborhood $U$ and also a function of the form \begin{equation*} F(e)=f(\pi(e)) P\left(t^U_{j}(e),j=1,\ldots, n\right), \end{equation*} where $P$ is some polynomial and $f$ has compact support within $U$. Let $\chi$ be a smooth function with support in $U$ such that $\chi=1$ on the support of $f$. Now, fix another trivializing neighborhood $V$. Note that the integral $\int_{\pi^{-1}(V)} G d\nu=\int_{m \in V} d\mu(m) \left( \int_{\pi^{-1}(m)} G_m d\mu_0 \right),$ where $G_m$ is the restriction of $G \in C(S, {\cal Q})$ to the fibre at $m$, which is homeomorphic to $S^{n-1}$.
In particular, \begin{equation*} \int_{\pi^{-1}(V)} \beta(F) d\nu=\int_{m \in V} \alpha(f)(m) \alpha(\chi)(m) \int_{ e \in \pi^{-1}(m)} P\left( T^U_{j}(e),~j=1,\ldots,n \right) d\mu_0. \end{equation*} Recalling the $*$-algebras $\cQ'_m$ defined by \Cref{eq:qm'}, we claim that \begin{equation}\label{vol_pres} \int_{\pi^{-1}(m)} \gamma \left( \alpha(\chi)(m)P( T^U_{j}(e),j=1, \ldots, n) \right) d\mu_0= \int_{\pi^{-1}(m)} \gamma \left( \alpha(\chi)(m)P( t^{U}_{j}(e),j=1, \ldots, n)\right) d\mu_0, \end{equation} for any character $\gamma$ on $\cQ'_m$. Now, it can be proved along the lines of Lemma 3.11 of \cite{gafa} that either $\gamma(\alpha(\chi)(m))$ is zero or we have \begin{equation*} \sum_j \gamma(T^U_j(e))^2=1,\quad \forall e \in \pi^{-1}(m). \end{equation*} In case $\gamma(\alpha(\chi)(m))=0$, the equality (\ref{vol_pres}) is immediate. Otherwise, we observe that \begin{equation*} e\equiv (t^U_1(e), \cdots, t^U_n(e)) \mapsto (\gamma(T^U_1(e)),\cdots, \gamma(T^U_n(e)) ) \end{equation*} gives an isometric map of the fibre $\pi^{-1}(\{ m\}) \cong S^{n-1}$, hence it must be induced by some orthogonal (linear) map restricted to the sphere. As $\mu_0$ is invariant under any such orthogonal transformation, we have (\ref{vol_pres}). The commutativity of $\cQ'_m$ (\Cref{le:qm'}) then implies the same relation without $\gamma$, i.e. for all $m \in V,$ \begin{equation*} \int_{\pi^{-1}(m)} \alpha(\chi)(m)P(T^U_{j}(e),j=1, \ldots,n ) d\mu_0=\int_{\pi^{-1}(m)} \alpha(\chi)(m)P( t^{U}_{j}(e),j=1, \ldots, n ) d\mu_0. \end{equation*} Now, $\int_{\pi^{-1}(m)} P( t^{U}_{j}(e),j=1, \ldots, n )d\mu_0$ does not depend on $m$ and is equal to $C=\int_{\pi^{-1}(m)} \psi(y)d\mu_0(y)$, where $\psi: \pi^{-1}(m) \rightarrow \bR$ is given by \begin{equation*} \psi(y\equiv (y_1,\ldots, y_n))=P(y_{i},i=1,\ldots,n). \end{equation*} This gives \begin{equation*} \int_{\pi^{-1}(V)} \beta(F) d\nu=C\int_V \alpha(f) \alpha(\chi) d\mu= C\int_V \alpha(f) d\mu.
\end{equation*} As this is true for every locally trivializing $V$ we get by a partition of unity argument $\int_{S} \beta(F)d\nu=C \int_M \alpha(f)d\mu= C (\int_M f d\mu)1_{\cal Q}=(\int F d\nu) 1_{\cal Q}$, as $\int F d\nu$ is clearly equal to $C\int_M f d\mu$. Thus, the lifted action $\beta$ on $S$ remains weakly smooth and $\beta_r$ is injective. We can now follow the iterative arguments of \cite{final} to complete the proof of higher order commutativity. \end{proof} The proof of the main theorem of \cite{final} now goes through verbatim to give us the following: \begin{theorem}\label{main} Let $\alpha$ be a weakly smooth faithful action of a CQG ${\cal Q}$ on a compact connected smooth manifold $M$. Then ${\cal Q}$ must be classical, i.e. isomorphic with $C(G)$ for a compact group $G$ acting smoothly on $M$. \end{theorem} \begin{proof} By passing to the reduced action $\alpha_r$ we can assume without loss of generality that the action preserves some faithful positive Borel measure on $M$. Note that the isometry condition, i.e. commutation with the Laplacian, is not used up to \cite[\S 3]{gafa}, so all those results are valid for weakly smooth actions. Following the arguments of \cite{gafa} we can prove that a weakly smooth action commutes with the Laplacian on ${\cal C}_0$, but absent Fr{\'e}chet density, ${\cal C}_0$ will not be a core for the Laplacian and commutation does not extend to $C^\infty(M)$. Nevertheless, \Cref{higher_comm} already proves the commutativity of higher-order partial derivatives, bypassing the arguments of \cite[\S 4]{gafa} (which used the isometry condition). The proof of \cite[Theorem 5.3]{gafa} then carries through more or less verbatim, as we proceed to sketch briefly. Given the weakly smooth action $\alpha$ of ${\cal Q}$ on $M$, we use Theorem \ref{metric_pres} to choose a Riemannian metric preserved by the action. This implies the commutativity of ${\cal Q}'_x$.
Using this, we can proceed along the lines of \cite{gafa} to lift the given action to $O(M)$. Now, by Lemma \ref{higher_comm}, we do have the commutativity of partial derivatives of all orders for the lifted action $\Phi$ needed in steps (i) and (iv) of the proof of Theorem 5.3 of \cite{gafa}, and the rest of the arguments of that theorem will go through. \end{proof} \section{Quantum isometry groups: existence and rigidity for closed manifolds}\label{se.main} Let $M$ be a smooth compact Riemannian manifold without boundary and $d$ its geodesic distance, as before. If $\alpha$ is an isometric CQG action on $(M,d)$ then it automatically preserves all functions of the form \begin{equation*}\label{eq:13} \psi\circ d\in C(M\times M) \end{equation*} for continuous $\psi:\bR\to \bR$. In particular, it will preserve the function $D(-,-)$ defined by \Cref{eq:2}. We write $D_x$, $x\in M$ for the function $D(x,-)$. The proof of \Cref{th.nbdry} below will implicitly make use of the following observation. \begin{lemma}\label{le.d_x_density} For a compact connected Riemannian manifold $M$ without boundary the algebra generated by $\{ D_x:~x \in M \}$ is Fr\'echet-dense in $C^\infty(M)$. \end{lemma} \begin{proof} We will apply \Cref{th:nachb} by verifying, say, condition \Cref{item:9} in that statement. Since an appeal to Stone-Weierstrass quickly shows that the algebra in question is norm-dense, only the local condition needs verification. That is, if ${\cal C}$ is the linear span of functions of the form $D_x, x \in M$, we have to show that for any point $y \in M$ the space $\{ df|_y, f \in {\cal C}\}$ is $n$-dimensional (where $n=\dim M$). We thus focus on proving this full-dimension claim. Suppose there is some $y$ for which \begin{equation*} \dim \{df|_y,\ f\in \cC\}<n. \end{equation*} Then there is some unit tangent vector $v\in T_yM$ for which $(df|_y, v)=0$ for all $f=D_x$.
Now consider the arc-length-parametrized geodesic starting at $y$ with velocity $v$ and let $x$ be a point on it, sufficiently close to $y$ to ensure that some normal coordinate neighborhood \cite[p.145]{ccl} $U$ of $x$ contains $y$ and that \begin{equation*} D(x,-) = d(x,-)^2 \end{equation*} throughout $U$. If $\exp:T_yM\to M$ is the exponential map, we now have \begin{equation*} D_x(\exp(tv)) = (d(x,y)-t)^2, \end{equation*} whose derivative at $t=0$ equals $-2d(x,y)\ne 0$. This gives the desired contradiction and finishes the proof. \end{proof} \begin{theorem}\label{th.nbdry} Let $M$ be a Riemannian closed connected smooth manifold and $d$ the corresponding geodesic metric. Then $QISO(M, d)$ exists and coincides with $C(ISO(M))$ where $ISO(M)$ is the group of Riemannian isometries. \end{theorem} \begin{proof} We denote by $D$ a function $\psi\circ d^2$ as in \Cref{eq:2}, for a bump function $\psi:\bR\to \bR$ equal to $\id$ around $0\in \bR$ and vanishing outside a sufficiently small neighborhood of $0$. We know from \cite[Proposition 3.10]{Chi15} that every reduced isometric action is injective, so \Cref{th.prsrv} applies. It is thus enough to prove that any CQG isometric action $\alpha$ on $C(M)$ is weakly smooth. To see this, recall from \Cref{def.isometric} that the isometric property of the action reads \begin{equation*} \alpha(D_x)(y)=\kappa(\alpha(D_y)(x)). \end{equation*} Fixing $x$, we now examine the function \begin{equation*} y\mapsto \alpha(D_x)(y)=\kappa(\alpha(D_y)(x)). \end{equation*} It is the composition of the smooth function \begin{equation*} M\ni y\mapsto D_y\in C(M) \end{equation*} (see \Cref{re.smth_Dy}) and the $C^*$-algebra morphism \begin{equation*} C(M)\ni f\mapsto \kappa(\alpha(f)(x)), \end{equation*} and hence is itself smooth.
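The smoothness of such compositions is an instance of a general principle: if $F:M\to {\cal B}$ is smooth with values in a Banach space and $\Phi:{\cal B}\to {\cal B}'$ is a continuous linear map (in particular, a $C^*$-algebra morphism, as in the proof above), then $\Phi\circ F$ is smooth, with \begin{equation*} d^k(\Phi\circ F) = \Phi\circ d^kF, \qquad k\ge 1, \end{equation*} applied here to ${\cal B}=C(M)$, $F=(y\mapsto D_y)$ and $\Phi=(f\mapsto \kappa(\alpha(f)(x)))$.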
By \Cref{th:nachb}, we can find finitely many polynomials $\xi_i$ in the functions $D_x$ such that \begin{equation*} y \mapsto (\xi_1(y) , \ldots ,\xi_k(y)) \end{equation*} is a smooth embedding of $M$ as a submanifold of $\bR^k$. Every smooth function $f$ on $M$ can be written as $f=\tilde{f}(\xi_1, \ldots, \xi_k)$ for some smooth function $\tilde{f}$ of $k$ real variables with compact support in some open neighborhood of the image of $M$, so all in all we obtain \begin{equation*} \alpha(f)=\tilde{f}(\alpha(\xi_1), \ldots, \alpha(\xi_k)). \end{equation*} Since we have just argued that $\alpha(\xi_i) \in C(M,\cQ)$ are smooth, so is $\alpha(f)$, finishing the proof. \end{proof} \section{Manifolds with boundary}\label{subse.bdry} As the title suggests, we will now extend the quantum rigidity result in \Cref{th.nbdry} to the case when $\partial M\ne \emptyset$. To that end, throughout the present section $M$ denotes a compact Riemannian manifold with boundary. Consider an action of a compact quantum group $\cQ$ on $M$, with $\cC=C(M)$: \begin{equation}\label{eq:6} \alpha:\cC\to \cC\otimes \cQ. \end{equation} For the actions we are interested in (isometric with respect to the geodesic distance of a Riemannian structure), it will be crucial to know that they leave the boundary invariant, in a sense to be made precise below. \subsection{Invariant subspaces}\label{subse:inv} \begin{definition}\label{def.prsv} Let $Z\subseteq X$ be an inclusion of compact Hausdorff spaces and \Cref{eq:6} an action of a compact quantum group on $\cC=C(X)$.
We say that $\alpha$ {\it preserves $Z$} or that $Z$ is {\it $\alpha$-invariant} (or just plain {\it invariant} when $\alpha$ is understood) if we have a factorization \begin{equation}\label{eq:7} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$\cC$} +(3,.5) node (2) {$\cC\otimes \cQ$} +(6,0) node (3) {$C(Z)\otimes \cQ$} +(3,-.5) node (4) {$C(Z)$}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (2); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \pi\otimes\id$} (3); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \pi$} (4); \draw[->] (4) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \beta$} (3); \end{tikzpicture} \end{equation} where \begin{equation*} \pi:\cC=C(X)\to C(Z) \end{equation*} is restriction. \end{definition} Assuming such a factorization does exist, the lower right hand arrow $\beta$ will automatically be an action. Denoting by $J=J_Z\subset C(X)$ the ideal of functions vanishing along $Z$, $\alpha$ restricts to a map \begin{equation*} J\to J\otimes \cQ = C_0(U,\cQ) \end{equation*} where $U:=X-Z$ and $C_0$ means functions vanishing at infinity on the locally compact space $U$. We will also need \begin{definition}\label{def:strprsv} In the context of \Cref{def.prsv} $Z$ is {\it strongly $\alpha$-invariant} (or $\alpha$ {\it preserves $Z$ strongly}) if the restriction $J_Z\to J_Z\otimes \cQ$ of $\alpha$ satisfies the density condition \Cref{item:12} in \Cref{def:act} for an action (we say in short that $\alpha$ induces an action of $\cQ$ on $U=X-Z$): \begin{equation}\label{eq:zxu} \overline{Sp} \ \alpha(J_Z)(1\otimes \cQ) = C_0(U,\cQ). \end{equation} \end{definition} \begin{remark}\label{re:notauto} We do not know whether \Cref{def:strprsv} is redundant, i.e. whether strong preservation is automatic given preservation. 
In fact, if $\alpha:C(X)\to C(X,\cQ)$ is not injective (a possibility we cannot rule out at the moment) and $Z$ is the spectrum of the proper quotient $\alpha(C(X))$ of $C(X)$, then \begin{itemize} \item $Z$ is preserved by $\alpha$, but nevertheless \item by construction, every element of $J_Z$ is annihilated by $\alpha$. \end{itemize} \end{remark} Now let $\alpha$ be an action of $\cQ$ on $X$ as before, and $Z\subseteq X$ an $\alpha$-invariant subspace. We denote the dense $*$-subalgebras resulting from this as recalled in \Cref{subse:cqg} by `$0$' subscripts, as in $\cQ_0$, $C(X)_0$, etc. Our first observation on strong invariance is \begin{lemma}\label{le:isdense} If $Z\subseteq X$ is strongly $\alpha$-invariant then the non-unital $*$-algebra \begin{equation*} C(X)_0\cap J_Z \end{equation*} of elements of $C(X)_0$ vanishing along $Z$ is dense in $J_Z$. \end{lemma} \begin{proof} One simply imitates the usual proof that $C(X)_0\subseteq C(X)$ is dense; see e.g. \cite[Theorem 1.5]{Podles}. Alternatively, we can simply {\it apply} that density result to the $\cQ$-action on the one-point compactification of $U$ induced by $\alpha$; that the map \begin{equation*} C_0(U)^+\to C_0(U)^+\otimes \cQ \end{equation*} (where `$+$' superscripts denote unitizations) is indeed an action requires precisely the density condition strong invariance imposes. \end{proof} \Cref{le:isdense} will come in handy in the context of ``gluing'' actions along a common subspace of two spaces. The setup is as follows. Let $X_i$, $i=1,2$ be compact Hausdorff spaces equipped with actions \begin{align*} \alpha_i &:C(X_i)\to C(X_i)\otimes \cQ \end{align*} by a quantum group $\cQ$ and \begin{align*} \iota_i:Z\to X_i \end{align*} embeddings of compact spaces. We write $X:=X_1\cup_ZX_2$ for the resulting space obtained by gluing $X_i$ along $Z$ via the embeddings $\iota_i$ (though by a slight abuse of notation said embeddings are absent from the notation). 
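It will be convenient to recall that functions on the glued space are exactly the compatible pairs: \begin{equation*} C(X) = C(X_1\cup_Z X_2) \cong \bigl\{(f_1,f_2)\in C(X_1)\times C(X_2)\ \big|\ f_1\circ\iota_1 = f_2\circ\iota_2\bigr\}, \end{equation*} i.e. $C(X)$ is the pullback of the two restriction maps $C(X_i)\to C(Z)$ in the category of commutative $C^*$-algebras.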
Setting $Y:=X_1\sqcup X_2$, we have a product action \begin{equation}\label{eq:10} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (1) {$C(Y)$} +(2,1) node (2) {$C(X_1)\times C(X_2)$} +(8,1) node (3) {$(C(X_1)\otimes \cQ)\times (C(X_2)\otimes \cQ)$} +(12,0) node (4) {$C(Y)\otimes \cQ$.}; \draw[->] (1) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \cong$} (2); \draw[->] (2) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha_1\times\alpha_2$} (3); \draw[->] (3) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \cong$} (4); \draw[->] (1) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \beta$} (4); \end{tikzpicture} \end{equation} Now assume furthermore that $\alpha_i$ \begin{itemize} \item preserve the subspaces \begin{equation*} \iota_i(Z)\subseteq X_i,\ i=1,2 \end{equation*} in the sense of \Cref{def.prsv}, i.e. for any $f\in C(X_i)$ the restriction of \begin{equation*} \alpha_i(f)\in C(X_i,\cQ) \end{equation*} to $\iota_i(Z)\subseteq X_i$ depends only on the restriction of $f$ to the same subspace. \item agree on $Z$, in the sense that the actions on $Z$ induced by $\alpha_i$ upon identifying $Z\cong \iota_i(Z)$ coincide. \end{itemize} In this case, if $f_i\in C(X_i)$ have equal restrictions to $Z$ via $\iota_i$ then similarly, \begin{equation*} \alpha_i(f_i)\in C(X_i,\cQ) \end{equation*} have equal restrictions to $Z$. But this simply means that with $\beta$ defined as in \Cref{eq:10}, \begin{equation*} \beta(f_1,f_2)\in C(X\otimes \cQ)\cong C(X)\otimes \cQ. \end{equation*} Since this holds for arbitrary $(f_1,f_2)\in C(X)$ we have \begin{proposition}\label{le.glue} If actions $\alpha_i$ of $\cQ$ on compact spaces $X_i$ strongly preserve a common subspace $Z\subseteq X_i$ on which they agree, we obtain a natural action $\alpha$ of $\cQ$ on the connected sum $X=X_1\cup_Z X_2$. 
If at least one of the actions $\alpha_i$ is faithful then so is $\alpha$ and if $(\alpha_i)_r$ are injective then so is $\alpha_r$. \end{proposition} \begin{proof} The proof of the existence of $\alpha$ is essentially contained in the discussion preceding the statement, with the possible caveat that we have not argued that the density condition \Cref{item:12} in the definition of an action holds: \begin{equation}\label{eq:wantdense} \overline{Sp} \ \alpha(C(X))(1\otimes \cQ) = C(X,\cQ). \end{equation} We can see this by working at the purely algebraic level, with the dense subalgebras \begin{equation*} C(X)_0\subset C(X) \end{equation*} and similarly for the spaces $X_i$ and $Z$ (but not $X$ yet, as we do not know at this stage that $\cQ$ acts on it), and with $\cQ_0\subset \cQ$ in place of $\cQ$. The $\cQ$-equivariant embeddings $Z\subseteq X_i$ induce surjections \begin{equation}\label{eq:xitoz} C(X_i)_0\to C(Z)_0, \end{equation} giving us a coaction of the Hopf algebra $\cQ_0$ on the pullback $C(X)_0$ of these surjections in the category of $*$-algebras: \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (l) {$C(X_1)_0$} +(2,.5) node (u) {$C(X)_0$} +(2,-.5) node (d) {$C(Z)_0$} +(4,0) node (r) {$C(X_2)_0$} ; \draw[<-] (l) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (u); \draw[->] (u) to[bend left=6] node[pos=.5,auto] {$\scriptstyle $} (r); \draw[->] (l) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle $} (d); \draw[<-] (d) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle $} (r); \end{tikzpicture} \end{equation*} {\bf Claim: The pullback $C(X)_0$ is dense in $C(X)$.} This is where {\it strong} preservation will be needed. Consider an arbitrary element of $C(X)$, i.e. a pair of continuous functions $f_i$ on $X_i$ (respectively) agreeing on $Z$. 
Approximate $f_2$ arbitrarily well (within $\varepsilon>0$, say) with an element \begin{equation*} f_{2,app}\in C(X_2)_0 \end{equation*} and restrict the latter to $f_{Z,app}\in C(Z)_0$. In turn, {\it that} function lifts to some element $g_{1,app}\in C(X_1)_0$. On the other hand, because $f_{Z,app}$ and the common restriction \begin{equation*} f_Z:=f_1|_Z = f_2|_Z \end{equation*} are within $\varepsilon$ of each other, their difference $f_{Z,app}-f_Z$ lifts to a function $h_1$ on $X_1$ of norm $<2\varepsilon$. The sum \begin{equation*} f_1 + h_1\in C(X_1) \end{equation*} is $2\varepsilon$-close to $f_1$ and restricts to $f_{Z,app}\in C(Z)_0$ on $Z$. So does $g_{1,app}\in C(X_1)_0$, so the difference \begin{equation}\label{eq:3dif} f_1+h_1-g_{1,app} \end{equation} vanishes on $Z$. But then, by the strong preservation of $Z\subseteq X_1$ by $\alpha_1$, \Cref{eq:3dif} is $\varepsilon$-close to some element \begin{equation*} d_{1,app}\in C(X_1)_0\cap C_0(X_1-Z) \end{equation*} (i.e. in the dense $*$-subalgebra $C(X_1)_0\subseteq C(X_1)$ and vanishing on $Z$). All in all, $f_1$ is within $2\varepsilon$ of $f_1+h_1$, which in turn is within $2\varepsilon$ of \begin{equation}\label{eq:g+d} g_{1,app}+d_{1,app}\in C(X_1)_0. \end{equation} Furthermore, because $d_{1,app}$ vanishes on $Z$, \Cref{eq:g+d} agrees with $f_{2,app}$ along $Z$. This finishes the proof of the claim: the arbitrary pair of functions $f_i\in C(X_i)$ agreeing on $Z$ has been $4\varepsilon$-approximated by a pair of functions $f_{i,app}\in C(X_i)_0$ agreeing on $Z$.
With the claim proven, \Cref{eq:wantdense} follows, since in general, at the level of Hopf algebra coactions \begin{equation*} \cA\ni a\longmapsto a_0\otimes a_1\in \cA\otimes_{\rm alg}\cQ_0 \end{equation*} the bijectivity of \begin{equation*} \cA\otimes_{\rm alg}\cQ_0\ni a\otimes b\longmapsto a_0\otimes a_1b\in \cA\otimes_{\rm alg}\cQ_0 \end{equation*} follows from the existence of the antipode $\kappa$ on $\cQ_0$: the inverse is simply \begin{equation*} \cA\otimes_{\rm alg}\cQ_0\ni a\otimes b\longmapsto a_0\otimes \kappa(a_1)b\in \cA\otimes_{\rm alg}\cQ_0. \end{equation*} Indeed, in Sweedler notation the composition of the two maps (in either order) sends $a\otimes b$ to $a_0\otimes a_1\kappa(a_2)b=\varepsilon(a_1)a_0\otimes b=a\otimes b$, respectively to $a_0\otimes \kappa(a_1)a_2b=\varepsilon(a_1)a_0\otimes b=a\otimes b$, by the antipode axiom. For the faithfulness claim, note that for points $x_i\in X_i$ we have \begin{equation*} \cQ_{x_i}^{\alpha_i} = \cQ_{x_i}^{\alpha}\subseteq \cQ. \end{equation*} Since we are assuming these algebras generate $\cQ$ as $x_i$ ranges over $X_i$ for at least one of the indices $i=1,2$, the slice algebras $\cQ_{x}^{\alpha}$ do indeed generate $\cQ$ as $x\in X=X_1\cup_Z X_2$. Finally, suppose $(\alpha_i)_r$ are injective. Since every non-zero function $f\in C(X)$ restricts to a non-zero function on at least one $X_i$ and both $X_i$ are preserved by $\alpha_r$ which induces back the actions \begin{equation*} (\alpha_i)_r:C(X_i)\to C(X_i,\cQ), \end{equation*} we have $\alpha_r(f)\ne 0$, as desired. \end{proof} \begin{remark} Although we do not use this, note that in fact the proof of \Cref{le.glue} shows that it is enough to assume {\it one} of the actions $\alpha_i$ preserves $Z$ strongly. \end{remark} \subsection{Back to manifolds}\label{subse:back} We begin with precisely the boundary-invariance result alluded to at the beginning of \Cref{subse.bdry}. \begin{proposition}\label{pr.bdry-inv} Let $M$ be a compact Riemannian manifold and $d$ its geodesic distance. Then, every reduced isometric CQG action on $(M,d)$ preserves the boundary. \end{proposition} \begin{proof} We have to argue that if $x\in \partial M$ then the entire $\alpha$-orbit (\cite[\S 3]{chirvasitu}) of $x$ is contained in the boundary.
To see this we assume otherwise and derive a contradiction. Suppose $y\in \overset{\circ}M=M\setminus \partial M$ is a point in the orbit of $x$ and $\varphi$ is a state on $\cQ$ with $x\triangleleft \varphi = y$. We also denote by $x'\in \overset{\circ}M$ a point at small distance $r$ from $x$, connected to the latter by a geodesic arc $\gamma$ orthogonal to the boundary at $x$. The probability measure $x'\triangleleft \varphi$ is supported on the sphere $S(y,r)$ of radius $r$ around $y=x\triangleleft\varphi$ (e.g. by \cite[Theorem 3.1]{Chi15}), and we may assume $r>0$ is small enough that this sphere lies entirely within the interior of $M$. Let \begin{equation}\label{eq:3} y'\in\mathrm{supp}~(x'\triangleleft\varphi ) \end{equation} and denote by $y''\in S(y,r)$ the antipode opposite $y'$, so that \begin{equation}\label{eq:4} d(y,y') = d(y,y'') = \frac {d(y',y'')}{2} = r. \end{equation} Now denote $\overline{\varphi}=\varphi\circ\kappa$. It follows from \Cref{eq:3} and \cite[Proposition 3.1]{chirvasitu} that \begin{equation}\label{eq:5} x'\in\mathrm{supp}~\left(y'\triangleleft\overline{\varphi} \right) \end{equation} (and note that we also have $y\triangleleft\overline{\varphi}=x$, by \cite[Corollary 3.2]{chirvasitu}). All in all, $\overline{\varphi}$ maps \begin{itemize} \item $y\in \overset{\circ} M$ to $x\in \partial M$; \item $y'\in S(y,r)$ to a measure whose support contains $x'$ and is contained in $S(x,r)$; \item $y''\in S(y,r)$ to a measure supported on the same sphere $S(x,r)$, by \Cref{eq:4}. \end{itemize} The last equality in \Cref{eq:4} and \cite[Theorem 3.1]{Chi15} also show that there is a probability measure on $M\times M$, supported on \begin{equation*} \{(p,q)\in M\times M\ |\ d(p,q)=2r\}, \end{equation*} whose pushforwards through the two projections are $y'\triangleleft\overline{\varphi}$ and $y''\triangleleft\overline{\varphi}$.
\Cref{eq:5} now implies that there is some \begin{equation*} x''\in \mathrm{supp}\left(y''\triangleleft\overline{\varphi}\right)\subseteq S(x,r) \end{equation*} with $d(x',x'')=2r$. This, however, contradicts the choice of $x'$: since the geodesic arc $\gamma$ connecting $x$ and $x'$ has length $r$ and is orthogonal to $\partial M$ at $x$, the antipode of $S(x,r)\subset N$ opposite $x'$ (for an extension $N\supset M$ as in \Cref{subse.ri}) is not contained in $M$. \end{proof} Denote \begin{align*} \partial_rM &:=\{x\in M\ |\ d(x,\partial M)=r\}\\ \partial_{\le r}M &:=\{x\in M\ |\ d(x,\partial M)\le r\}\label{eq:partials}\numberthis\\ \partial_{< r}M &:=\{x\in M\ |\ d(x,\partial M)< r\} \end{align*} and similarly for `$\ge $', `$>$', etc. For $r\le s$ set \begin{equation*} \partial_{s\leftarrow r}M = \{x\in \partial_rM |\ \exists y\in \partial_s M\text{ such that }d(x,y)=s-r\}. \end{equation*} The following result is now an immediate consequence of \Cref{pr.bdry-inv}. \begin{corollary}\label{cor.prsv} Under the hypotheses of \Cref{pr.bdry-inv} the action $\alpha$ preserves the compact sets $\partial_r M$, $\partial_{\ge r}M$ and $\partial_{s\leftarrow r}M$ for all real numbers $0\le r\le s$. \mbox{}\hfill\ensuremath{\blacksquare} \end{corollary} This ensures that for each $r\ge 0$ we have an action $\beta$ as in \Cref{eq:7} for $X=\partial_r M$. We will be interested in the following choices of $r$. \begin{definition}\label{def.tame} Let $M$ be a compact Riemannian manifold with boundary. A positive real $r$ is {\it tame} if it is sufficiently small so that $\partial_r M$ is contained in a collar neighborhood of $\partial M$ with a system of coordinates {\it adapted} to the boundary: $x_n$ is distance from $\partial M$ and $x_i$, $1\le i\le n-1$ are coordinates on the boundary extended as constant along geodesic arcs orthogonal to $\partial M$. 
\end{definition} If we knew that the resulting action $\beta$ is faithful we could conclude that the quantum group $\cQ$ is classical by a slight adaptation of \Cref{th.nbdry}. Though this is not quite the strategy we adopt for generalizing \Cref{th.nbdry} to \Cref{th.rig-bdry} below, we nevertheless prove that $\beta$ is faithful, both for its independent interest and because the requisite techniques will be useful later. According to \Cref{def.CQG_action} an action $\alpha$ is faithful if $\cQ$ is generated as a $C^*$-algebra by the subalgebras \begin{equation}\label{eq:8} \cQ_x = \cQ^{\alpha}_x:=\mathrm{Im} ({\rm ev}_x \otimes {\rm id})\circ\alpha. \end{equation} Note that this differs from the algebra denoted by $\cQ_x$ in \cite{final}; indeed, in the present paper the latter algebra would be denoted by $\cQ'_x$ instead. We need the following notion. \begin{definition}\label{def.att} Consider an action $\alpha$ as in \Cref{eq:6} and $x,y\in M$ two points. We say that $y$ is {\it $\alpha$-attached} to $x$ (or just {\it attached} when the action is understood) if for states $\varphi$ and $\psi$ on $\cQ$ we have \begin{equation*} x\triangleleft\varphi = x\triangleleft\psi\quad \Rightarrow\quad y\triangleleft\varphi = y\triangleleft\psi. \end{equation*} \end{definition} The concept is relevant to faithfulness due to the following result proved in passing in the course of the proof of \cite[Proposition 4.4]{chirvasitu}. \begin{proposition}\label{pr.att} Let $(M,d)$ be a compact metric space, $\alpha$ an isometric action of a compact quantum group $\cQ$ on $M$ and $x,y\in M$. If $y$ is $\alpha$-attached to $x$ then $\cQ_y\subseteq \cQ_x$. \mbox{}\hfill\ensuremath{\blacksquare} \end{proposition} Going back to the situation at hand, consider the action $\beta$ on $X=\partial_r M$ resulting from $\alpha$ as in \Cref{eq:7}. For $x\in \partial_r M$ the subalgebra $\cQ_x^{\beta}$ defined as in \Cref{eq:8} coincides with $\cQ_x^{\alpha}$.
On the other hand, \Cref{pr.att} shows that $\cQ^{\alpha}_y$ is contained in $\cQ^{\alpha}_x$ whenever $y$ is attached to $x$. Since we know (from the faithfulness of $\alpha$) that \begin{equation*} \cQ^{\alpha}_y,\ y\in M \end{equation*} generate $\cQ$, we will have shown that $\beta$ is indeed faithful provided we prove \begin{proposition}\label{le.is-att} Let $M$, $\alpha$, etc. be as above, with the additional assumption that every component of $M$ has non-empty boundary. For sufficiently small $r>0$ every point in $M$ is $\alpha$-attached to some $x\in \partial_r M$. \end{proposition} We will prove this in a few stages. First, we have \begin{lemma}\label{le.uniq-rs} Let $0<r$. There is some $\varepsilon>0$, depending only on the Riemannian manifold $M$, with the following property: For every $s>r$ with $s-r\le \varepsilon$ and $x\in \partial_{s\leftarrow r}M$ the set \begin{equation}\label{eq:9} \{y\in \partial_s M\ |\ d(x,y)=s-r\} \end{equation} is a singleton. \end{lemma} \begin{proof} Choose $0<\varepsilon<r$ smaller than the injectivity radius of $M$ at every point \begin{equation*} p\in M,\ d(p,\partial M)\ge r. \end{equation*} The very definition of $\partial_{s\leftarrow r}M$ says that the set in question is non-empty, so we have to prove that the set \Cref{eq:9} cannot contain distinct points $y\ne y'$. Indeed, two such points would entail the existence of two distinct geodesic arcs \begin{equation*} \gamma:x\to y,\quad \gamma':x\to y' \end{equation*} of length $s-r$. They cannot both prolong a geodesic arc $\eta$ of length $r$ connecting $x$ to $\partial M$, so one of the concatenations \begin{equation*} \eta\cdot \gamma,\quad \eta\cdot \gamma' \end{equation*} is not a geodesic. But both curves have length $r+s-r=s$, meaning that one of the two points $y,y'\in \partial_s M$ can be connected to $\partial M$ by a curve of length $<s$. This contradiction finishes the proof. \end{proof} Now let $r>0$. 
According to \Cref{le.uniq-rs}, for every $s>r$ sufficiently close to $r$ there is a well-defined map \begin{equation}\label{eq:psis} \psi_{s\leftarrow r}:\partial_{s\leftarrow r} M\to \partial_s M\quad \text{such that}\quad d(x,\psi_{s\leftarrow r}(x)) = s-r. \end{equation} Furthermore, uniqueness implies \begin{itemize} \item the transitivity of $\psi$: \begin{equation*} \psi_{s_2\leftarrow r} = \psi_{s_2\leftarrow s_1}\circ \psi_{s_1\leftarrow r} \end{equation*} for $s_2>s_1>r$ sufficiently close to $r$; \item the continuity of each $\psi_{s\leftarrow r}$: if $x_n\to x$ is a convergent sequence in $\partial_{s\leftarrow r}M$, then by the continuity of the distance function the limit of every convergent subsequence of $(\psi_{s\leftarrow r}(x_n))_n$ lies in $\partial_s M$ at distance $s-r$ from $x=\lim_n x_n$, so by uniqueness that limit must be $\psi_{s\leftarrow r}(x)$. Since $(\psi_{s\leftarrow r}(x_n))_n$ is a sequence in a compact metric space, it must therefore converge to this common limit point. \end{itemize} By transitivity, we can define $\psi_{s\leftarrow r}$ for arbitrary \begin{equation*} r\le s<\max_p d(p,\partial M). \end{equation*} \pf{le.is-att} \begin{le.is-att} Of course, it suffices to argue that points $y$ in the interior $\overset{\circ} M$ are attached to points on the boundary. Let $\ell=d(y,\partial M)$ and $\gamma$ a shortest geodesic, parametrized by arclength, connecting some point $\gamma(0)=x\in \partial M$ to $\gamma(\ell)=y$ (the existence of such a geodesic requires our assumption that all connected components have boundary). Note that we have \begin{equation*} \gamma(t)\in \partial_t M,\ \forall t\in [0,\ell]. \end{equation*} Now let $\varphi$ be a state on $\cQ$ and $r>0$ tame for $M$ in the sense of \Cref{def.tame}.
As in the proof of \Cref{pr.bdry-inv}, we can conclude from \cite[Theorem 3.1]{Chi15} that for every $\ell\ge s>r$ the measures \begin{equation*} \gamma(r)\triangleleft\varphi\in\mathrm{Prob}(\partial_r M),\quad \gamma(s)\triangleleft\varphi\in\mathrm{Prob}(\partial_s M) \end{equation*} are the marginals of a probability measure on $M\times M$ supported on \begin{equation*} \{(p,q)\in M\times M\ |\ d(p,q) = d(\gamma(r),\gamma(s))=s-r\}. \end{equation*} It follows that $\gamma(r)\triangleleft\varphi$ is in fact supported on $\partial_{s\leftarrow r}M$. For $s$ sufficiently close to $r$, every point of $\partial_{s\leftarrow r}M$ determines a unique point of $\partial_sM$ at distance $s-r$ from it (\Cref{le.uniq-rs}), and hence \begin{equation*} \gamma(s)\triangleleft\varphi = (\psi_{s\leftarrow r})_*(\gamma(r)\triangleleft\varphi). \end{equation*} We can now repeat the procedure with $s$ in place of $r$ and $s'\in (s,\ell]$. \Cref{le.uniq-rs} ensures that we can choose the differences $s'-s$ to be bounded below by some $\varepsilon>0$ and hence eventually exhaust the interval $[r,\ell]$. All in all, the conclusion will be that \begin{equation}\label{eq:11} \gamma(\ell)\triangleleft\varphi = (\psi_{\ell\leftarrow r})_*(\gamma(r)\triangleleft\varphi),\quad \forall \text{ states $\varphi$ on }\cQ. \end{equation} But this says that the image of $y=\gamma(\ell)$ through $\triangleleft\varphi$ depends only on the image of $\gamma(r)$ through $\triangleleft\varphi$; since the state $\varphi$ on $\cQ$ was arbitrary, this finishes the proof that $y$ is attached to $\gamma(r)\in \partial_rM$. \end{le.is-att} As a consequence of \Cref{le.is-att} we have \begin{corollary}\label{cor.faith} Let $\alpha$ be an isometric faithful action of a compact quantum group $\cQ$ on a compact Riemannian manifold $M$, all of whose connected components have non-empty boundary. Then, the actions induced by $\alpha$ on any of the sets $\partial_rM$ for sufficiently small $r>0$ are faithful.
\end{corollary} \begin{proof} This follows from \Cref{pr.att,le.is-att}, which show jointly that every slice $\cQ_y$, $y\in M$ is contained in some slice $\cQ_x$, $x\in \partial_rM$. Since \begin{equation*} \cQ_y,\ y\in M \end{equation*} generate $\cQ$, so do the subalgebras $\cQ_x\subseteq \cQ$, $x\in \partial_rM$, finishing the proof. \end{proof} We also record the following consequence of the proof of \Cref{le.is-att}: \begin{corollary}\label{cor.pres-trnsl} If $r\ge 0$ is sufficiently small and $s\ge r$ then the map \begin{equation*} \psi_{s\leftarrow r}:\partial_{s\leftarrow r} M\to \partial_s M \end{equation*} is equivariant for the actions of $\cQ$ on $\partial_{s\leftarrow r}M$ and $\partial_sM$ from \Cref{cor.prsv}. \end{corollary} \begin{proof} This follows from \Cref{eq:11}. \end{proof} Next, we address the smoothness issue for isometric quantum actions on Riemannian manifolds with boundary. \begin{proposition}\label{pr.bdry-wsmth} An isometric action $\alpha$ of a compact quantum group $\cQ$ on a Riemannian manifold $M$ (possibly with boundary) is weakly smooth. \end{proposition} \begin{proof} The boundary-less case was taken care of in the course of proving \Cref{th.nbdry}, so we focus on the case when $\partial M\ne \emptyset$. We know from \Cref{cor.prsv} that all of the sets described in \Cref{eq:partials} (and the analogues $\partial_{\ge r}M$, etc.) are preserved by $\alpha$. Now fix a small $r>0$. The functions \begin{equation*} D_x\in C^{\infty}(\partial_{\ge r}M),\quad x\in \text{a neighborhood of }\partial_{\ge r}M \end{equation*} are easily seen to satisfy the conclusion of \Cref{le.d_x_density} by a simple adaptation of the proof of that result, so we can conclude as in the proof of \Cref{th.nbdry} that the restriction of $\alpha$ to the invariant submanifold $\partial_{\ge r}M$ is weakly smooth.
Next, for small $r>0$ (small enough so that $2r$ is tame, for instance) consider an (automatically increasing) diffeomorphism \begin{equation*} \theta:\bR_{\ge 0}\to \bR_{\ge r} \end{equation*} that is the identity on $\bR_{\ge 2r}$. With the help of $\theta$ we can define a ``collar-squeeze'' diffeomorphism \begin{equation*} \theta_{\cat{sq}}:M\cong \partial_{\ge r}M, \end{equation*} acting as \begin{equation*} \psi_{\theta(s)\leftarrow s}:\partial_s M\to \partial_{\theta(s)}M,\ \forall s\in \bR_{\ge 0} \end{equation*} for the $\psi_{\bullet\leftarrow\bullet}$ maps introduced in \Cref{eq:psis} (so in particular $\theta_{\cat{sq}}$ is the identity on $\partial_{\ge 2r}M$). \Cref{cor.prsv} and the $\alpha$-equivariance of the maps $\psi_{\bullet\leftarrow\bullet}$ (expressed for instance as \Cref{eq:11}) imply that $\theta_{\cat{sq}}$ is $\alpha$-equivariant. But we have already argued that $\alpha$ is weakly smooth on the image $\partial_{\ge r}M$ of $\theta_{\cat{sq}}$, so since the latter is a diffeomorphism, $\alpha$ must be weakly smooth on the domain $M$ of $\theta_{\cat{sq}}$ as well. This finishes the proof. \end{proof} Now fix some tame $r>0$, and let $\psi:\bR_{\ge 0}\to \bR$ be a continuous function, constant on $[r,\infty)$. For any $C^*$-algebra $\cQ$ we have a bounded (Banach space) endomorphism \begin{equation*} \cat{sc}_{\psi}:C(M,\cQ)\to C(M,\cQ) \end{equation*} that scales a function $f:M\to \cQ$ by $\psi(s)$ along $\partial_s M$. Note that the norm of $\cat{sc}_{\psi}$ is the supremum of $|\psi|$. Because by \Cref{cor.prsv} $\alpha$ preserves all $\partial_r M$ and the resulting action \begin{equation*} \beta:C(\partial_r M)\to C(\partial_r M, \cQ) \end{equation*} is linear, scaling a function $f$ on $M$ along $\partial_rM$ and then applying $\alpha$ results in the scaling of $\alpha(f)$ along $\partial_r M$ by the same amount.
In other words, for any $\psi$ as above, $\alpha$ intertwines the two instances of $\cat{sc}_{\psi}$: \begin{equation}\label{eq:alphaisequiv} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (l) {$C(M)$} +(3,.5) node (u) {$C(M)$} +(3,-.5) node (d) {$C(M,\cQ)$} +(6,0) node (r) {$C(M,\cQ)$.} ; \draw[->] (l) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \cat{sc}_{\psi}$} (u); \draw[->] (u) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (r); \draw[->] (l) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \alpha$} (d); \draw[->] (d) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \cat{sc}_{\psi}$} (r); \end{tikzpicture} \end{equation} We are now ready to link the present discussion to the strong-invariance material from \Cref{subse:inv}. \begin{lemma}\label{le:bdrystrong} The boundary $\partial M=\partial_0 M$ is in fact {\it strongly} $\alpha$-invariant. \end{lemma} \begin{proof} With $X=M$, $Z=\partial M$ and $U=X-Z$ we want to show that the density condition \Cref{eq:zxu} holds. Because $\alpha$ itself is an action, every \begin{equation*} F\in C_0(U,\cQ) \end{equation*} (i.e. continuous function $M\to \cQ$ vanishing on the boundary) is approximable arbitrarily well by elements of the form \begin{equation*} \sum_{i=1}^t \alpha(f_i)(1\otimes x_i),\quad f_i\in C(M),\ x_i\in \cQ. \end{equation*} Now pick a continuous, non-decreasing $\psi:\bR_{\ge 0}\to \bR_{\ge 0}$ as in the discussion above, which \begin{itemize} \item equals $1$ for $[r,\infty)$ for some sufficiently small tame $r$ and \item vanishes at $0$. 
\end{itemize} Applying the contraction $\cat{sc}_{\psi}$ of $C(M,\cQ)$ to both sides of \begin{equation*} \sum_{i=1}^t \alpha(f_i)(1\otimes x_i)\quad \simeq_{\varepsilon}\quad F \end{equation*} will produce \begin{itemize} \item on the right hand side a function close to $F$ if the $r$ above is sufficiently small, and \item on the left hand side \begin{equation*} \cat{sc}_{\psi}\left(\sum_{i=1}^t \alpha(f_i)(1\otimes x_i)\right) = \sum_{i=1}^t \alpha(\cat{sc}_{\psi}(f_i))(1\otimes x_i), \end{equation*} using the $\cat{sc}_{\psi}$-equivariance \Cref{eq:alphaisequiv} of $\alpha$. \end{itemize} Since $\cat{sc}_{\psi}(f_i)$ belong to $C_0(U)$ (i.e. vanish on $\partial M$) because $\psi$ vanishes at $0$, this finishes the proof. \end{proof} It follows from \Cref{le:bdrystrong} that the discussion in \Cref{subse:inv} on gluing actions along common subspaces applies to the case when $X_1=M=X_2$ is a smooth manifold with boundary and \begin{equation*} \iota_i:Z=\partial M\to M \end{equation*} are both equal to the inclusion, so that $X=X_1\cup_Z X_2$ is the {\it double} $D(M)$ of $M$ (e.g. \cite[Example 9.32]{lee}). $D(M)$ is a topological boundary-less manifold which can be given a smooth structure compatible with that of (the two copies of) $M$ \cite[Theorem 9.29]{lee}. The proof of the latter theorem makes it clear that the smooth structure on $D(M)$ depends on a choice of collar neighborhoods of $\partial M$ in the two copies of $M$. For our purposes, we select (on both copies of $M$) a neighborhood adapted to the boundary in the sense of \Cref{def.tame}: one coordinate measures Riemannian distance from the boundary whereas the others are chosen arbitrarily on the boundary and kept constant along geodesics orthogonal to it. Whenever we refer to $D(M)$ as a smooth manifold we always assume the smooth structure is constructed as described above. 
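For orientation, here are standard examples of the doubling construction (ours, not drawn from the cited sources): writing $\overline{B}{}^2$ for the closed $2$-ball,
```latex
\begin{equation*}
  D\big([0,1]\big)\cong S^1,
  \qquad
  D\big(\overline{B}{}^2\big)\cong S^2,
  \qquad
  D\big(S^1\times[0,1]\big)\cong T^2,
\end{equation*}
```
the double of an interval being a circle, of a closed disk a sphere, and of a closed annulus a torus; in each case the smooth structure depends on the chosen collars, as noted above.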
Doubling a manifold without boundary simply produces two disjoint copies of it, so that $D(M)$ also contains two copies of each boundary-less component of $M$. Starting with the action $\alpha$ on $M$, we write $\alpha^2$ (``doubled $\alpha$'') for the action on $D(M)$ induced as in \Cref{le.glue}. \begin{proposition}\label{th.doubled} Let $\alpha$ be an isometric action of $\cQ$ on a Riemannian manifold $M$ with boundary. The doubled action \begin{equation*} \alpha^2:C(D(M))\to C(D(M),\cQ) \end{equation*} is weakly smooth and $\alpha^2_r$ is injective. \end{proposition} Consider a collar neighborhood $U=\partial_{<r}M$ of $\partial M$ in $M$ (see \Cref{eq:partials} for the notation) with its adapted coordinate system $(x_1,\cdots,x_n)$ in the sense of \Cref{def.tame}, $x_n$ denoting distance from $\partial M$. Let $\psi:[0,r]\to \bR$ be a continuous (typically smooth) function. We call a function on $U$ {\it $\psi$-separable} if it is of the form \begin{equation}\label{eq:sepmod} (x_1,\cdots,x_n)\mapsto f(x_1,\cdots,x_{n-1})\cdot \psi(x_n). \end{equation} The relevance of the notion to the subsequent discussion is captured by \begin{lemma}\label{le:sep} If $f\in C(M)$ is $\psi$-separable on $U$ for some continuous $\psi$ then $\alpha(f)\in C(M,\cQ)$ is again $\psi$-separable. \end{lemma} \begin{proof} We work on the $\alpha$-invariant closed collar $X:=\partial_{\le r}M$, to simplify the discussion. In the notation introduced prior to \Cref{le:bdrystrong}, the function \Cref{eq:sepmod} is obtained by applying $\cat{sc}_{\psi}$ to a function on $X$ independent of $x_n$ (i.e. a function depending only on the first $n-1$ variables). By the $\alpha$-equivariance \Cref{eq:alphaisequiv} of $\cat{sc}_{\psi}$, it is enough to prove the claim for $x_n$-independent functions, i.e. for $\psi\equiv 1$. To that end consider such a function $f\in C(X)$, independent of $x_n$.
This means that for $s\in [0,r]$ in the commutative diagram \begin{equation*} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (l) {$C(\partial_s M)$} +(3,.5) node (u) {$C(\partial M)$} +(3,-.5) node (d) {$C(\partial_s M,\cQ)$} +(6,0) node (r) {$C(\partial M,\cQ)$} ; \draw[->] (l) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \psi_{s\leftarrow 0}^*$} (u); \draw[->] (u) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (r); \draw[->] (l) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \alpha$} (d); \draw[->] (d) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \psi_{s\leftarrow 0}^*\otimes\id_{\cQ}$} (r); \end{tikzpicture} \end{equation*} (where the actions induced by $\alpha$ are denoted by the same symbol) the top left arrow maps \begin{equation*} f|_{\partial_s M}\longmapsto f|_{\partial M}. \end{equation*} But then by the very definition of the induced actions $\alpha$ (the two south-easterly arrows) the bottom right arrow similarly maps \begin{equation*} \alpha(f)|_{\partial_s M}\longmapsto \alpha(f)|_{\partial M}; \end{equation*} in turn, this says precisely that, having identified $\partial M$ and $\partial_s M$ via the first $n-1$ coordinates $x_i$, the restrictions of $\alpha(f)$ to the two sets are equal. Since $s\in [0,r]$ was arbitrary, this is what was needed. \end{proof} \pf{th.doubled} \begin{th.doubled} The second part (injectivity) follows from the last statement in \Cref{le.glue}, so it remains to prove weak smoothness. As above, fix a collar neighborhood $U=\partial_{<r}M$ of $\partial M$ with the adapted coordinate system $(x_1,\cdots,x_n)$ that we used in the construction of the smooth structure on $D(M)$ ($x_n$ denoting distance from $\partial M$). We extend the notion of $\psi$-separability to functions on \begin{equation*} V:=U\cup_{\partial M}U\cong (-r,r)\times \partial M \end{equation*} and $\psi:(-r,r)\to \bR$. 
It follows from the definition of $\alpha^2$ that for every smooth $\psi:(-r,r)\to \bR$, smooth functions on $D(M)$ that are $\psi$-separable on $V$ are sent by $\alpha^2$ to smooth functions $D(M)\to \cQ$ that are $\psi$-separable on $V$. The weak smoothness of $\alpha^2$ now follows from the Fr\'echet density of the algebra generated by \begin{equation*} \{f\in C^{\infty}(D(M))\ |\ f\text{ is }\psi\text{-separable on }V\} \end{equation*} in $C^{\infty}(D(M))$. \end{th.doubled} As a consequence, we have the following generalization of \Cref{th.nbdry}. \begin{theorem}\label{th.rig-bdry} Let $M$ be a compact connected Riemannian manifold, possibly with boundary. Then, every faithful compact quantum group action on $M$ isometric with respect to the geodesic distance $d$ is classical. \end{theorem} \begin{proof} Let $\alpha$ be an action by the compact quantum group $\cQ$ as in the statement and $\alpha^2$ its doubled version. If $\partial M=\emptyset$ the claim is precisely \Cref{th.nbdry}, so we may assume $\partial M\ne \emptyset$; then, since $M$ is connected, $D(M)$ is a connected closed manifold. By \Cref{th.doubled} $\alpha^2$ meets the requirements of \Cref{th.prsrv} and hence $\alpha^2$ preserves some Riemannian metric on $D(M)$. But then $\cQ$ must be classical by \Cref{th.nbdry}, finishing the proof. \end{proof} \section{Uniformly distributed measures}\label{subse.unif} Another situation when quantum isometry groups exist automatically (though they may not be classical, in general) occurs when the metric space is equipped with a probability measure as in the title of the present section. We first recall that concept (see e.g. \cite[Definition 3.3]{mat}). \begin{definition}\label{def.ud} A measure on a metric space $X$ is {\it uniformly distributed} (or {\it UD} for short) if \begin{equation*} \mu(B(x,r)) = \mu(B(y,r)),\ \forall x,y\in X,\ \forall r\in \bR_{\ge 0}. \end{equation*} In other words, the measure assigns equal mass to balls of equal radius, regardless of center.
\end{definition} Uniformly distributed measures on compact metric spaces are unique up to scaling when they exist \cite[Theorem 3.4]{mat}, and hence UD probability measures are unique (or non-existent). Now let $\mu$ be a UD probability measure on $(X,d)$ and consider a CQG action \begin{equation*} C(X)\to C(X)\otimes C(G) \end{equation*} on $X$ that is isometric in the sense of \cite[Definition 3.1]{metric}. The following auxiliary observation will be used later. \begin{lemma}\label{le.munu} Let $\mu$ be a UD probability measure on the compact metric space $(X,d)$ and $\nu$ any probability measure. Then, for every $r\in \bR_{\ge 0}$ we have \begin{equation*} \int_X \nu(B(x,r))\ \mathrm{d}\mu(x)=\mu_r:=\mu(B(x,r)),\ \forall x\in X. \end{equation*} \end{lemma} \begin{proof} By Fubini's theorem, the left hand side is \begin{align*} \int_{X\times X}\chi_{B(x,r)}(y)\ \mathrm{d}\nu(y)\ \mathrm{d}\mu(x) &= \int_{X\times X}\chi_{B(y,r)}(x)\ \mathrm{d}\nu(y)\ \mathrm{d}\mu(x)\\ &=\int_X \mu_r\ \mathrm{d}\nu(y) = \mu_r. \end{align*} This finishes the proof. \end{proof} \begin{theorem}\label{th.ud} A uniformly distributed measure $\mu$ on a compact metric space $(X,d)$ is automatically invariant under any isometric CQG action $\alpha$. \end{theorem} \begin{proof} We have to show that for every state $\varphi$ on $C(G)$ and UD probability measure $\mu$ on $(X,d)$ we have \begin{equation*} \mu\triangleleft\varphi = \mu. \end{equation*} Lift $\alpha$ to a coaction (denoted slightly abusively by the same symbol) \begin{equation*} \alpha:W(X)\to W(X)\otimes C(G)'' \end{equation*} where \begin{itemize} \item $C(G)''$ is the von Neumann algebra generated by $C(G)$ in its Haar-state GNS representation; \item $W(X)$ is the von Neumann hull of $C(X)$. \end{itemize} As in \cite[\S 3]{Chi15}, for a point $x\in X$ and a Borel subset $S\subseteq X$ we denote by $a_{x;S}$ the image of the characteristic function $\chi_S$ through $(\mathrm{ev}_x\otimes \id)\alpha$.
According to \cite[equation (13)]{Chi15} we have \begin{equation}\label{eq:1} a_{x;B(y,r)} = \kappa(a_{y;B(x,r)}) \end{equation} for all pairs of points $x,y\in X$ and radii $r\in \bR_{\ge 0}$. By the very definition of the action $\triangleleft$ of the state semigroup $\mathrm{Prob}(G)$ on $\mathrm{Prob}(X)$, we have \begin{equation*} (\mu\triangleleft\varphi)(B(y,r)) = \int_X \varphi(a_{x;B(y,r)})\ \mathrm{d}\mu(x), \end{equation*} i.e. the integral of the left hand side of \Cref{eq:1} against $\mathrm{d}\mu(x)$. Using \Cref{eq:1}, this is also \begin{equation*} \int_X(\mathrm{ev}_y\triangleleft(\varphi\circ\kappa))(B(x,r))\ \mathrm{d}\mu(x). \end{equation*} Applying \Cref{le.munu} with $\nu=\mathrm{ev}_y\triangleleft (\varphi\circ\kappa)$ we conclude that this equals $\mu_r$. In conclusion, \begin{equation*} (\mu\triangleleft\varphi)(B(y,r)) = \mu_r = \mu(B(y,r)),\quad \forall y\in X. \end{equation*} This finishes the proof. \end{proof} The reason why this has a bearing on the existence of $QISO(X,d)$ is encapsulated by the following result. \begin{theorem}\label{th.xdmu} Let $(X,d)$ be a compact metric space and $\mu$ a Borel probability measure with full support. Then, there is a universal compact quantum group $QISO(X,d,\mu)$ acting on $(X,d)$ isometrically and preserving $\mu$. \end{theorem} We need some preparation. Throughout the discussion, we assume $\alpha$ is an isometric, $\mu$-preserving action on $(X,d,\mu)$. First, consider the integral operator with kernel $d$, i.e. \begin{equation}\label{eq:intop} K: f\mapsto \int_X d(-,x)f(x)\ \mathrm{d}\mu(x). \end{equation} It can be regarded as an operator on either the Banach space $C(X)$ or the Hilbert space $L^2(X,\mu)$, and it is compact in either guise (and self-adjoint in the latter). Working in the $L^2$-picture, its non-zero eigenspaces \begin{equation*} V_{\lambda}:=\ker(K-\lambda),\ \lambda\ne 0 \end{equation*} coincide on $C(X)$ and $L^2(X,\mu)$ because its image consists of continuous functions.
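To make \Cref{eq:intop} concrete, the following small numerical sketch (ours; the discretization of $(X,d,\mu)$ as weighted points on a circle is a hypothetical choice) realizes $K$ as a matrix on a finite metric space and checks its self-adjointness for the $\mu$-weighted inner product, which holds because the kernel $d$ is symmetric.

```python
import numpy as np

# Illustration only: discretize (X, d, mu) as n points on a circle with
# geodesic distance and a random probability measure mu of full support.
rng = np.random.default_rng(0)
n = 200
theta = rng.uniform(0.0, 2.0 * np.pi, n)
mu = rng.dirichlet(np.ones(n))  # positive weights summing to 1

diff = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(diff, 2.0 * np.pi - diff)  # geodesic distance on the circle

# Discrete analogue of (K f)(y) = \int d(y, x) f(x) dmu(x)
K = d * mu[None, :]

def inner(u, v):
    """L^2(X, mu) inner product."""
    return float(np.sum(u * v * mu))

f = rng.standard_normal(n)
g = rng.standard_normal(n)
lhs = inner(K @ f, g)
rhs = inner(f, K @ g)  # equal, since the kernel d is symmetric
```

With uniform $\mu$ the matrix $K$ is symmetric outright; for general $\mu$ self-adjointness holds only in $L^2(X,\mu)$, matching the discussion above.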
\begin{lemma}\label{le:invmeas} Let $(X,d,\mu)$ be a compact metric space equipped with a Borel probability measure and $\alpha:\cC\to \cC\otimes \cQ$ an isometric action as in \Cref{def.isometric} leaving $\mu$ invariant. Then, for any two continuous functions $f,g\in \cC$ we have \begin{equation}\label{eq:bal} \int_X \kappa(\alpha g(x)) f(x)\ \mathrm{d}\mu(x) = \int_X g(x) \alpha f(x)\ \mathrm{d}\mu(x) \in \cQ. \end{equation} \end{lemma} \begin{proof} It is enough to work with the dense $*$-subalgebras $\cC_0\subseteq \cC$ and $\cQ_0\subseteq \cQ$ discussed in \Cref{subse:cqg}, so that we can use Sweedler notation for the comultiplication and coaction: \begin{align*} \cC_0\ni f &\stackrel{\alpha}{\longmapsto} f_0\otimes f_1 \in \cC_0\otimes_{\rm alg} \cQ_0\\ \cQ_0\ni x& \stackrel{\Delta}{\longmapsto} x_1\otimes x_2\in \cQ_0\otimes_{\rm alg} \cQ_0, \end{align*} etc. The desired equality \Cref{eq:bal} then reads \begin{equation*} \mu(fg_0)\kappa(g_1) = \mu(f_0g) f_1. \end{equation*} To see why this is so, use the $\alpha$-invariance of $\mu$, \begin{equation*} \mu(x_0)x_1 = \mu(x)1, \end{equation*} on $x=fg_0$ to obtain \begin{equation*} \mu(fg_0)\kappa(g_1) = \mu(f_0 g_0)f_1g_1\kappa(g_2) = \mu(f_0g) f_1, \end{equation*} where the last equality uses the defining property of the antipode in a Hopf algebra, namely \begin{equation*} g_1\kappa(g_2) = \varepsilon(g)1. \end{equation*} This finishes the proof. 
\end{proof} \begin{lemma}\label{le:comm} The integral operator $K$ intertwines the action $\alpha$ in the sense that \begin{equation}\label{eq:ccqq} \begin{tikzpicture}[auto,baseline=(current bounding box.center)] \path[anchor=base] (0,0) node (l) {$\cC$} +(3,.5) node (u) {$\cC\otimes \cQ$} +(3,-.5) node (d) {$\cC$} +(6,0) node (r) {$\cC\otimes \cQ$} ; \draw[->] (l) to[bend left=6] node[pos=.5,auto] {$\scriptstyle \alpha$} (u); \draw[->] (l) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle K$} (d); \draw[->] (d) to[bend right=6] node[pos=.5,auto,swap] {$\scriptstyle \alpha$} (r); \draw[->] (u) to[bend left=6] node[pos=.5,auto] {$\scriptstyle K\otimes \id$} (r); \end{tikzpicture} \end{equation} commutes. \end{lemma} \begin{proof} For an arbitrary $f\in \cC$ we will evaluate the images of $f$ through both the upper and lower paths in \Cref{eq:ccqq} at a fixed point $y\in X$. On the one hand, we have \begin{align*} \alpha(Kf)(y) &= \int_X \alpha(d_x)(y) f(x)\ \mathrm{d}\mu(x)\\ &=\int_X \kappa(\alpha(d_y)(x)) f(x)\ \mathrm{d}\mu(x), \end{align*} using the fact that $\alpha$ is isometric. On the other hand, \begin{align*} (K\otimes \id)(\alpha f)(y) &= \int_X d_x(y) \alpha f(x)\ \mathrm{d}\mu(x)\\ &=\int_X d_y(x) \alpha f(x)\ \mathrm{d}\mu(x). \end{align*} That these two are equal now follows by applying \Cref{le:invmeas} with $g=d_y$. \end{proof} \pf{th.xdmu} \begin{th.xdmu} Let $\cQ$ be a compact quantum group acting isometrically via \begin{equation*} \alpha:C(X)\to C(X)\otimes \cQ \end{equation*} on $(X,d)$ and preserving $\mu$, and consider the integral operator \Cref{eq:intop} on $L^2(X,\mu)$. Because by \Cref{le:comm} $K$ is an intertwiner for the action $\alpha$, the latter preserves the non-zero finite-dimensional eigenspaces $V_{\lambda}$, $\lambda\ne 0$ of $K$. Moreover, since $K$ is self-adjoint, the closed span of the $V_{\lambda}$ coincides with the closure of the range of $K$.
Applying $K$ to bump functions $\psi$ localized near points $y\in X$ we can approximate \begin{equation*} d_y:=d(y,-)\simeq K\psi \end{equation*} arbitrarily well, so the $*$-algebra $\cA\subset C(X)$ generated by $V_{\lambda}$, $\lambda\ne 0$, is dense. Now consider the lattice $\cL$ of subspaces of $C(X)$ generated by the $V_{\lambda}$, $\lambda\ne 0$, and $\bC 1$, closed under the following operations \begin{itemize} \item taking products: if $V_i\in \cL$ for $1\le i\le t$ then \begin{equation*} V_1\cdot\ldots\cdot V_t\in \cL. \end{equation*} \item taking adjoints: \begin{equation*} V\in \cL\Rightarrow V^*\in \cL. \end{equation*} \item taking orthogonal complements with respect to the inner product induced by $\mu$: if $V\subseteq W$ both belong to $\cL$ then so does \begin{equation*} W\ominus V:=V^{\perp}\cap W. \end{equation*} \end{itemize} The minimal (non-zero) elements of $\cL$ are then finite-dimensional subspaces preserved by the action, whose direct sum is precisely the $*$-subalgebra $\cA\subset C(X)$. Furthermore, these spaces constitute an {\it orthogonal filtration} $V_i$, $i\in \cI$ for $C(X)$ with respect to the state $\mu$ on it in the sense of \cite[Definition 2.1]{ban-sk}. It follows from \cite[Theorem 2.7]{ban-sk} that there is a universal compact quantum group \begin{equation*} QISO(C(X),\mu,(V_i)_{i\in \cI}) \end{equation*} acting on $X$ in a filtration-preserving manner, and from \cite[Theorem 4.4]{Chi15} that the latter has a largest compact quantum subgroup $\cQ_u$ acting isometrically. The argument above shows that the action of $\cQ$ on $X$ factors through that of $\cQ_u$, i.e. that the latter has the defining universality property of $QISO(X,d,\mu)$. \end{th.xdmu} In particular, we have \begin{corollary}\label{cor.ud-implies-qiso} A compact metric space $(X,d)$ admitting a uniformly distributed probability measure admits a quantum isometry group $QISO(X,d)$. \end{corollary} \begin{proof} Immediate from \Cref{th.ud,th.xdmu}.
\end{proof} For instance: \begin{corollary}\label{cor.homog-sp} Let $G$ be a compact group and $X$ a homogeneous $G$-space equipped with a $G$-invariant metric $d$. Then, there is a universal quantum group $QISO(X,d)$ of isometries of $(X,d)$. \end{corollary} \begin{proof} This is a consequence of \Cref{cor.ud-implies-qiso}, since $(X,d)$ admits a UD probability measure: select {\it any} probability measure on $X$ and average it with respect to the Haar measure of $G$. The averaged measure is $G$-invariant; since $G$ acts transitively by isometries, any two balls of equal radius are carried onto one another by elements of $G$, so the invariant measure assigns them equal mass and is therefore UD. \end{proof} \bibliographystyle{plain} \addcontentsline{toc}{section}{References} \def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth \lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\section{Introduction} Multiple hypothesis testing with \textit{false discovery rate} (FDR, \cite{Benjamini:1995}) control has been widely applied to various scientific endeavors, and it can often be stated as follows. There are $m\in\mathbb{N}$ test statistics $\left\{ \zeta_{i}\right\} _{i=1}^{m}$ such that $\zeta_{i}$ has parameter $\mu_{i}$, and the $i$th null hypothesis is $H_{i0}:\mu_{i}=\mu_{0}$ (versus its alternative hypothesis $H_{i1}:\mu_{i}\neq\mu_{0}$) for a fixed, known $\mu_{0}\in\Theta\subseteq\mathbb{R}$, where $\Theta$ is the parameter space for the $m$ $\zeta_{i}$'s. Define $p_{i}=1-F_{i}\left( \zeta_{i}\right) $ as the one-sided p-value for $\zeta_{i}$, where $F_{i}$ is the \textit{cumulative distribution function} (CDF) of $\zeta_{i}$ when $\mu_{i}=\mu_{0}$. Let $I_{0,m}$ be the set of indices of the true null hypotheses, and denote its cardinality (often being positive) by $m_{0}$. Consider the multiple testing procedure (MTP) with a fixed rejection threshold $t\in\left[ 0,1\right] $ that rejects $H_{i0}$ \textit{if and only if} (iff) $p_{i}\leq t$. Then the MTP induces $R_{m}\left( t\right) =\sum_{i=1}^{m}1\left\{ p_{i}\leq t\right\} $, the number of rejections, and $V_{m}\left( t\right) =\sum_{i\in I_{0,m}}1\left\{ p_{i}\leq t\right\} $, the number of false discoveries, where $1_{A}$ is the indicator function of a set $A$. Further, the \textit{false discovery proportion} (FDP) and FDR of the MTP are \begin{equation} \mathrm{FDP}_{m}\left( t\right) =\frac{V_{m}\left( t\right) }{R_{m}\left( t\right) \vee1}\text{ \ \ \ and \ \ \ \ }\mathrm{FDR}_{m}\left( t\right) =\mathbb{E}\left[ \mathrm{FDP}_{m}\left( t\right) \right] \label{eqFDR} \end{equation} respectively, where the operator $\vee$ returns the maximum of its two arguments.
When $m$, the number of tests to conduct, is large, we aim to control the FDR of the MTP at a given level $\theta\in\left( 0,1\right) $ by choosing an appropriate $t$, or to estimate the FDP or FDR of the MTP at a given threshold $t$. However, the test statistics $\left\{ \zeta_{i}\right\} _{i=1}^{m}$ are often dependent on each other, and under dependence the behavior of the FDP is usually unstable and can sometimes be unpredictable; see, e.g., \cite{Finner:2007}, \cite{Owen:2005} and \cite{Schwartzman2011}. This can make the inferential results from the MTP irreproducible and untrustworthy. The few works \cite{Schwartzman:2015}, \cite{Chen:2014SLLN}, \cite{Delattre:2016} and \cite{Fan:2012} studied the asymptotic behavior of $R_{m}\left( t\right) $ or $m^{-1}R_{m}\left( t\right) $ under dependence by utilizing conditions on the correlation matrix $\mathbf{R}=\left( \rho_{ij}\right) $ of $\boldsymbol{\zeta}=\left( \zeta_{1},\ldots,\zeta_{m}\right) $. However, they all considered the setting where each $\zeta_{i}$ is a Gaussian random variable. Specifically, when each pair $\left( \zeta_{i},\zeta_{j}\right) ,i\neq j$ is bivariate Gaussian, the authors of \cite{Chen:2014SLLN} proved \textquotedblleft a SLLN for $R_{m}\left( t\right) $ and $V_{m}\left( t\right) $\textquotedblright, i.e., \begin{description} \item[C1)] If \begin{equation} m^{-2}\left\Vert \mathbf{R}\right\Vert _{1}=O\big(m^{-\delta}\big)\text{ \ for some \ }\delta>0,\label{eq1} \end{equation} then \begin{equation} \left\{ \begin{array}[c]{c} \lim_{m\rightarrow\infty}\left\vert m^{-1}R_{m}\left( t\right) -\mathbb{E}\left[ m^{-1}R_{m}\left( t\right) \right] \right\vert =0\text{ \ almost surely,}\\ \lim_{m\rightarrow\infty}\left\vert m^{-1}V_{m}\left( t\right) -\mathbb{E}\left[ m^{-1}V_{m}\left( t\right) \right] \right\vert =0\text{ \ almost surely.} \end{array} \right.
\label{eqSLLNR} \end{equation} \item[C2)] If $\liminf_{m\rightarrow\infty}m_{0}m^{-1}>0$ and (\ref{eq1}) hold, then \begin{equation} \lim_{m\rightarrow\infty}\left\vert m_{0}^{-1}V_{m}\left( t\right) -\mathbb{E}\left[ m_{0}^{-1}V_{m}\left( t\right) \right] \right\vert =0\text{ almost surely.}\label{eqSLLNV} \end{equation} \end{description} \noindent Here \textquotedblleft the $l_{1}$-norm $\left\Vert \mathbf{R}\right\Vert _{1}$\textquotedblright\ of $\mathbf{R}$ is defined as $\left\Vert \mathbf{R}\right\Vert _{1}=\sum_{i,j=1}^{m}\left\vert \rho_{ij}\right\vert $. We remark that, even though the assertion (\ref{eqSLLNV})\ is not explicitly stated by Theorem 1 of \cite{Chen:2014SLLN}, it is written in the proof of this theorem. As a SLLN is perhaps the strongest characterization of the stability of a sequence of random variables, in this work we continue the line of research of \cite{Chen:2014SLLN}, and characterize the type of dependence (via the order of $\left\Vert \mathbf{R}\right\Vert _{1}$) under which (\ref{eqSLLNR}) and (\ref{eqSLLNV}) hold when $\left( \zeta_{i},\zeta_{j}\right) ,i\neq j$ follows a Lancaster (but non-Gaussian) bivariate distribution with an infinite support. It turns out that the strategy of \cite{Chen:2014SLLN} applies to the settings here. Specifically, to prove (\ref{eqSLLNR}) we only need to implement the following two steps: first, obtain a \textquotedblleft comparison inequality\textquotedblright, i.e. \begin{equation} \left\vert \mathrm{cov}\left( 1\left\{ p_{i}\leq t\right\} ,1\left\{ p_{j}\leq t\right\} \right) \right\vert \leq C\left\vert \rho_{ij}\right\vert \text{ for all }i\neq j\text{ and a constant }C>0; \label{IneqA} \end{equation} second, apply Theorem 1 of \cite{Lyons:1988}, under the condition (\ref{eq1}), to the indicators $X_{i}=1\left\{ p_{i}\leq t\right\} $ with $1\leq i\leq m$ (or $i\in I_{0,m}$) that induce $R_{m}\left( t\right) $ (or $V_{m}\left( t\right) $).
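To see what the comparison inequality (\ref{IneqA}) looks like numerically, the following Python sketch estimates the indicator covariance by Monte Carlo in the bivariate Gaussian case treated in \cite{Chen:2014SLLN} (an illustration only: the threshold $t=0.1$, the sample size, and the constant $C=1/2$ are our choices, not taken from that paper):

```python
import numpy as np

def indicator_cov(rho, tau, n=200_000, seed=0):
    """Monte Carlo estimate of cov(1{p_i <= t}, 1{p_j <= t}) when
    (zeta_i, zeta_j) is bivariate normal with correlation rho; the
    one-sided p-value satisfies p <= t iff zeta >= tau = Phi^{-1}(1-t)."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    x1 = (z1 >= tau).astype(float)
    x2 = (z2 >= tau).astype(float)
    return float(np.mean(x1 * x2) - np.mean(x1) * np.mean(x2))

tau = 1.2815515655  # Phi^{-1}(0.9), i.e. rejection threshold t = 0.1
for rho in (0.2, 0.5, 0.8):
    kappa = indicator_cov(rho, tau)
    assert 0.0 < kappa <= 0.5 * rho  # comparison inequality with C = 1/2
```

The estimated covariances stay well below $C\left\vert \rho_{ij}\right\vert $ even for the generous constant $C=1/2$; the point of the present paper is that a bound of the same form persists for the non-Gaussian Lancaster families.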
Once (\ref{eqSLLNR}) is proved and $\liminf_{m\rightarrow\infty}m_{0}m^{-1}>0$ holds, (\ref{eqSLLNV}) follows as an easy corollary. Our main result is the following: \begin{theorem} \label{ThmMain}Suppose that each pair $\left( \zeta_{i},\zeta_{j}\right) ,i\neq j$ follows any of the following four Lancaster bivariate distributions with correlation $\rho_{ij}$: \begin{enumerate} \item A Lancaster bivariate gamma distribution (defined by (\ref{defbiG})) with shape parameter $\alpha\in(0,1]$; \item A Lancaster bivariate Poisson distribution (defined by (\ref{biPoisson})) with parameter $a>0$; \item A Lancaster bivariate negative binomial distribution (defined by (\ref{biNB})) with parameter $\left( \beta,c\right) $ such that $\beta>0$ and $0<c<1$; \item A Lancaster bivariate gamma-negative binomial distribution (defined by (\ref{biGNB})) with parameter $\left( \beta,c,\alpha\right) $ such that $\beta>0$, $0<c<1$ and $\alpha>0$. \end{enumerate} Then (\ref{IneqA}) holds. If (\ref{eq1}) holds, then (\ref{eqSLLNR}) holds. If in addition $\liminf_{m\rightarrow\infty}m_{0}m^{-1}>0$, then (\ref{eqSLLNV}) holds. \end{theorem} The definitions of the four Lancaster bivariate distributions covered by \autoref{ThmMain} can be found in \cite{Koudou:1998} and will be provided in the proof of this theorem. Our findings seem to suggest that the inequality (\ref{IneqA}) is universal for Lancaster bivariate distributions with infinite supports. On the other hand, the Lancaster distributions considered by \autoref{ThmMain} are often associated with the true null hypotheses in a multiple testing scenario. For example, the Lancaster bivariate gamma distribution includes the Lancaster bivariate central chi-square distribution as a special case, and the latter distribution corresponds to the true null hypothesis that its two marginal distributions have a zero noncentrality parameter and the same degrees of freedom.
Further, bivariate Poisson or bivariate negative binomial distributions are widely used to model count data, and the Lancaster bivariate Poisson or negative binomial distribution corresponds to the true null hypothesis that its two marginal distributions have identical parameters. In view of the above discussion, \autoref{ThmMain} has the following implication. Consider the slightly extended multiple testing scenario, where \begin{itemize} \item There are $\tilde{m}$ ($\geq m$) null hypotheses, $H_{i0}:\mu_{i}=\mu_{0}$ with $1\leq i\leq\tilde{m}$, to test simultaneously, each of which has an associated test statistic $\zeta_{i}$; \item Each $H_{i0}$ with $1\leq i\leq m$ is a true null hypothesis, and the remaining $\tilde{m}-m$ null hypotheses are false; \item The MTP rejects an $H_{i0}$ iff its associated p-value $p_{i}\leq t$ for a fixed rejection threshold $t\in\left( 0,1\right) $. \end{itemize} \noindent Note that the above arrangement of the indices for the true and false null hypotheses is unrestrictive. In this setting, the number of false rejections of the MTP is $V_{\tilde{m}}\left( t\right) =\sum_{i=1}^{m}1\left\{ p_{i}\leq t\right\} $, and the FDP of the MTP is \[ \mathrm{FDP}_{\tilde{m}}\left( t\right) =\frac{V_{\tilde{m}}\left( t\right) }{R_{\tilde{m}}\left( t\right) \vee1}\text{ \ with \ }R_{\tilde{m}}\left( t\right) =\sum_{i=1}^{\tilde{m}}1\left\{ p_{i}\leq t\right\} . \] Let $\mathbf{S}$ be the correlation matrix of $\left\{ \zeta_{i}\right\} _{i=1}^{\tilde{m}}$ and $\pi_{0,\tilde{m}}=\tilde{m}^{-1}m$ the proportion of true null hypotheses.
When $\liminf_{m\rightarrow\infty}\pi_{0,\tilde{m}}>0$, \begin{equation} \tilde{m}^{-2}\left\Vert \mathbf{S}\right\Vert _{1}=O\big(\tilde{m}^{-\delta}\big)\text{ \ for some \ }\delta>0\label{eq2a} \end{equation} and the $p_{i}$'s associated with $H_{i0}$ for $1\leq i\leq m$ are identically distributed as $p_{0}$, \autoref{ThmMain} implies \begin{equation} \lim_{m\rightarrow\infty}\left\vert m^{-1}V_{\tilde{m}}\left( t\right) -\mathbb{P}\left( \left\{ p_{0}\leq t\right\} \right) \right\vert =0\text{ almost surely.}\label{eq3} \end{equation} Let $\hat{\pi}_{0,\tilde{m}}$ be an estimator of $\pi_{0,\tilde{m}}$ and set \begin{equation} \vartheta_{\tilde{m}}\left( t\right) =\frac{\hat{\pi}_{0,\tilde{m}}\mathbb{P}\left( \left\{ p_{0}\leq t\right\} \right) }{\tilde{m}^{-1}R_{\tilde{m}}\left( t\right) }.\label{eq4} \end{equation} It is easy to verify that, if $\hat{\pi}_{0,\tilde{m}}\pi_{0,\tilde{m}}^{-1}\rightsquigarrow1$ as $\tilde{m}\rightarrow\infty$, $\liminf_{\tilde{m}\rightarrow\infty}\tilde{m}^{-1}R_{\tilde{m}}\left( t\right) >0$ almost surely and (\ref{eq2a}) holds, then\ $\left\vert \vartheta_{\tilde{m}}\left( t\right) -\mathrm{FDP}_{\tilde{m}}\left( t\right) \right\vert \rightsquigarrow0$ as $\tilde{m}\rightarrow\infty$, where $\rightsquigarrow$ denotes \textquotedblleft convergence in probability\textquotedblright. Namely, $\vartheta_{\tilde{m}}\left( t\right) $ consistently estimates $\mathrm{FDP}_{\tilde{m}}\left( t\right) $ for each fixed $t\in\left( 0,1\right) $. Note that $\vartheta_{\tilde{m}}\left( t\right) $ in (\ref{eq4}) can be regarded as a slight extension of the FDR\ estimator proposed by \cite{Storey:2004}. A second implication of \autoref{ThmMain} is as follows.
The \textquotedblleft weak dependence\textquotedblright\ assumption, proposed in \cite{Storey:2004} and widely used in the multiple testing literature, requires that there exist two continuous functions $G_{0}$ and $G_{1}$ such that\ for each $t\in(0,1]$, \begin{equation} \lim_{m\rightarrow\infty}m_{0}^{-1}V_{m}\left( t\right) =G_{0}\left( t\right) \text{ \ and \ }\lim_{m\rightarrow\infty}\left( m-m_{0}\right) ^{-1}\left[ R_{m}\left( t\right) -V_{m}\left( t\right) \right] =G_{1}\left( t\right) \label{eq5} \end{equation} almost surely. However, to check whether (\ref{eq5}) holds is often very hard (even after the continuity requirement on $G_{0}$ and $G_{1}$ is removed). \autoref{ThmMain} here and Theorem 1 in \cite{Chen:2014SLLN} together provide a way to check whether this assumption holds in the scenario of simultaneously testing the parameters of a large number of dependent random variables, each pair of which follows any of the five Lancaster bivariate distributions that are studied in \cite{Koudou:1998}. Specifically, a check on the order of the $l_{1}$-norm of the correlation matrix of these random variables suffices for this purpose. We will report in another article on how to consistently estimate $m^{-2}\left\Vert \mathbf{R}\right\Vert _{1}$ or efficiently test the order of $\left\Vert \mathbf{R}\right\Vert _{1}$. The rest of the article is devoted to the proof of \autoref{ThmMain}. \section{Proof of \autoref{ThmMain}} In the proof, $\mathbb{V}\left[ \cdot\right] $ and $\mathsf{cov}\left[ \cdot,\cdot\right] $ are the variance and covariance operators, $\mathbb{N}_{0}=\mathbb{N}\cup\left\{ 0\right\} $, and $C$ denotes a positive constant that can assume different (and appropriate) values at different occurrences. We need Theorem 1 of \cite{Lyons:1988} in the proof, which reads \textquotedblleft Let $\left\{ \chi_{n}\right\} _{n=1}^{\infty}$ be a sequence of complex-valued random variables such that $\mathbb{E}\left[ \left\vert \chi_{n}\right\vert ^{2}\right] \leq1$.
Set \thinspace $Q_{N}=N^{-1}\sum\nolimits_{n=1}^{N}\chi_{n}$. If $\left\vert \chi_{n}\right\vert \leq1$ a.s. and \begin{equation} \sum\nolimits_{N=1}^{\infty}N^{-1}\mathbb{E}\left[ \left\vert Q_{N}\right\vert ^{2}\right] <\infty, \label{eqCondLyons} \end{equation} then $\lim_{N\rightarrow\infty}Q_{N}=0$ a.s.\textquotedblright\ \noindent A sufficient condition for the SLLN to hold for $\left\{ \chi_{n}\right\} _{n=1}^{\infty}$ is that $\mathbb{E}\left[ \left\vert Q_{N}\right\vert ^{2}\right] =O\left( N^{-\delta}\right) $ for some $\delta>0$, which implies (\ref{eqCondLyons}). Now we present the arguments. Recall $X_{i}=1\left\{ p_{i}\leq t\right\} $, for which $R_{m}\left( t\right) =\sum_{i=1}^{m}X_{i}$ and $V_{m}\left( t\right) =\sum_{i\in I_{0,m}}X_{i}$. We aim to show that $\mathbb{V}\left[ m^{-1}R_{m}\left( t\right) \right] $ is $O\left( m^{-\delta_{\ast}}\right) $ with $\delta_{\ast}=\min\left\{ \delta,1\right\} $. Define two sets \[ E_{1,m}=\left\{ \left( i,j\right) :1\leq i,j\leq m,i\neq j,\left\vert \rho_{ij}\right\vert =1\right\} \] and \[ E_{2,m}=\left\{ \left( i,j\right) :1\leq i,j\leq m,i\neq j,\left\vert \rho_{ij}\right\vert <1\right\} . \] Namely, $E_{1,m}$ records the pairs $\left( \zeta_{i},\zeta_{j}\right) $ with $i\neq j$ such that $\zeta_{i}$ and $\zeta_{j}$ are linearly dependent almost surely. Obviously, $\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert \leq C=C\left\vert \rho_{ij}\right\vert $ for $\left( i,j\right) \in E_{1,m}$.
Further, \begin{align} \mathbb{V}\left[ m^{-1}R_{m}\left( t\right) \right] & \leq O\left( m^{-1}\right) +m^{-2}\sum_{\left( i,j\right) \in E_{1,m}}\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert +m^{-2}\sum_{\left( i,j\right) \in E_{2,m}}\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert \nonumber\\ & \leq O\left( m^{-\min\left\{ \delta,1\right\} }\right) +m^{-2}\sum_{\left( i,j\right) \in E_{2,m}}\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert \label{eqE1} \end{align} since \[ m^{-2}\sum\nolimits_{\left( i,j\right) \in E_{1,m}}\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert =O\left( m^{-2}\left\Vert \mathbf{R}\right\Vert _{1}\right) =O\big(m^{-\delta}\big). \] So, we only need to upper bound $B_{1,m}=m^{-2}\sum_{\left( i,j\right) \in E_{2,m}}\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert $ on the right-hand side of (\ref{eqE1}). On the other hand, \begin{align*} \mathbb{V}\left[ m^{-1}V_{m}\left( t\right) \right] & \leq O\left( m^{-1}\right) +m^{-2}\sum_{\left( i,j\right) \in\tilde{E}_{1,m}}\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert +m^{-2}\sum_{\left( i,j\right) \in\tilde{E}_{2,m}}\left\vert \mathrm{cov}\left( X_{i},X_{j}\right) \right\vert \\ & \leq Cm^{-1}+Cm^{-\delta}+CB_{1,m}, \end{align*} where $\tilde{E}_{k,m}=E_{k,m}\cap\left( I_{0,m}\times I_{0,m}\right) $ for $k\in\left\{ 1,2\right\} $. So, an upper bound on $B_{1,m}$ will induce the same upper bound for $\mathbb{V}\left[ m^{-1}R_{m}\left( t\right) \right] $ and $\mathbb{V}\left[ m^{-1}V_{m}\left( t\right) \right] $. We will split the rest of the proof into four cases in terms of upper bounding $B_{1,m}$, each corresponding to a Lancaster bivariate distribution in the statement of \autoref{ThmMain} and each occupying a subsection. \subsection{The Lancaster bivariate gamma distribution} \label{lancaster_bvgamma} The Lancaster bivariate gamma distribution was derived by \cite{griffiths1969} and \cite{Koudou:1998}.
Specifically, if $\left( X,Y\right) $ follows this distribution with shape parameter $\alpha>0$ and correlation $\rho\in\lbrack0,1)$, then its density is \begin{equation} h\left( x,y;\alpha,\rho\right) =f\left( x;\alpha\right) f\left( y;\alpha\right) \sum_{n=0}^{\infty}\frac{\rho^{n}n!}{\Gamma\left( \alpha+n\right) \Gamma\left( \alpha\right) }L_{n}^{\left( \alpha-1\right) }\left( x\right) L_{n}^{\left( \alpha-1\right) }\left( y\right) , \label{defbiG} \end{equation} where \[ f\left( x;\alpha\right) =\frac{1}{\Gamma\left( \alpha\right) }x^{\alpha-1}e^{-x}\text{ \ for \ }x>0 \] is the gamma density with shape parameter $\alpha>0$, and \[ L_{n}^{\left( \alpha\right) }\left( x\right) =\sum_{k=0}^{n}\binom{n+\alpha}{n-k}\frac{\left( -x\right) ^{k}}{k!}\text{ \ for \ }n\in\mathbb{N}_{0} \] is the $n$th Laguerre polynomial of order $\alpha>-1$. Let $\tau=F_{i}^{-1}\left( 1-t\right) $. If $\left( \zeta_{i},\zeta_{j}\right) $ with $\left( i,j\right) \in E_{2,m}$ follows a Lancaster bivariate gamma distribution with shape parameter $\alpha>0$ and correlation $\rho_{ij}\in\lbrack0,1)$, then \[ \kappa_{ij}=\mathrm{cov}\left( 1\left\{ p_{i}\leq t\right\} ,1\left\{ p_{j}\leq t\right\} \right) =\sum_{n=1}^{\infty}\frac{\rho_{ij}^{n}n!}{\Gamma\left( \alpha+n\right) \Gamma\left( \alpha\right) }q_{n}^{2}\left( \tau;\alpha\right) , \] where \[ q_{n}\left( \tau;\alpha\right) =\int_{-\infty}^{\tau}f\left( x;\alpha\right) L_{n}^{\left( \alpha-1\right) }\left( x\right) dx=\frac{1}{\Gamma\left( \alpha\right) }\int_{-\infty}^{\tau}x^{\alpha-1}e^{-x}L_{n}^{\left( \alpha-1\right) }\left( x\right) dx. \] From Rodrigues' formula (e.g., on page 101 of \cite{Szego:1939}), i.e. \[ L_{n}^{(\alpha)}(x)=\frac{1}{n!}x^{-\alpha}e^{x}\frac{d^{n}}{dx^{n}}\left( x^{n+\alpha}e^{-x}\right) \text{ \ \ for }n\in\mathbb{N}_{0}\text{ \ and }\alpha>-1, \] we obtain, for $y>0$ and $n\geq1$, \begin{align*} \int_{-\infty}^{y}x^{\alpha}e^{-x}L_{n}^{(\alpha)}(x)dx & =\frac{1}{n!}
\int_{-\infty}^{y}\left[ \frac{d^{n}}{dx^{n}}\left( x^{n+\alpha}e^{-x}\right) \right] dx\\ & =\frac{y^{\alpha+1}e^{-y}}{n}\frac{y^{-\left( \alpha+1\right) }e^{y}}{\left( n-1\right) !}\left[ \left. \frac{d^{n-1}}{dx^{n-1}}\left( x^{n-1+\alpha+1}e^{-x}\right) \right\vert _{x=y}\right] \\ & =\frac{y^{\alpha+1}e^{-y}}{n}L_{n-1}^{(\alpha+1)}(y). \end{align*} Therefore \[ \kappa_{ij}=\sum_{n=1}^{\infty}\frac{\rho_{ij}^{n}n!}{n^{2}\Gamma\left( \alpha+n\right) \Gamma^{3}\left( \alpha\right) }\left[ \tau^{\alpha}e^{-\tau}L_{n-1}^{(\alpha)}(\tau)\right] ^{2}. \] By Watson's bound on page 21 of \cite{Watson1939jlms}, i.e. \[ \left\vert L_{n}^{\left( \alpha\right) }\left( x\right) \right\vert \leq\frac{\Gamma\left( \alpha+1+n\right) }{\Gamma\left( \alpha+1\right) n!}e^{x/2}\text{ \ for \ }x\geq0,\alpha\geq0\text{ and }n\in\mathbb{N}_{0}, \] we obtain \[ \left\vert \kappa_{ij}\right\vert \leq C\sum_{n=1}^{\infty}\frac{\rho_{ij}^{n}\Gamma\left( \alpha+n\right) }{n!}\tau^{2\alpha}e^{-\tau}. \] However, the identity (1) in \cite{tricomi1951asymptotic} states that,\ for distinct real constants $\alpha$ and $\gamma$, \begin{equation} \frac{\Gamma\left( z+\alpha\right) }{\Gamma\left( z+\gamma\right) }=z^{\alpha-\gamma}\left[ 1+\frac{\left( \alpha-\gamma\right) \left( \alpha+\gamma-1\right) }{2z}+O\left( \left\vert z\right\vert ^{-2}\right) \right] \text{ \ as \ }\left\vert z\right\vert \rightarrow\infty.\label{eq8} \end{equation} So, when $\alpha\leq1$, \[ \left\vert \kappa_{ij}\right\vert \leq C\sum_{n=1}^{\infty}\frac{\rho_{ij}^{n}}{n^{1-\alpha}}=C\rho_{ij}\sum_{n=1}^{\infty}\frac{\rho_{ij}^{n-1}}{n^{1-\alpha}}\leq C\rho_{ij}, \] and (\ref{eqSLLNR}) holds. \begin{remark} If $\zeta_{i}$ is the central chi-square random variable with $v$ degrees of freedom and density \[ f\left( x;v/2\right) =\frac{1}{\Gamma\left( v/2\right) 2^{v/2}}x^{v/2-1}e^{-x/2}\text{ \ for \ }x>0, \] then \autoref{ThmMain} is valid when $v=1$ or $2$.
\end{remark} \subsection{The Lancaster bivariate Poisson distribution} \label{lancaster_bvpoisson} For $a>0$ and $x,n\in\mathbb{N}_{0}$, let \[ C_{n}(x;a)=\sqrt{\frac{a^{n}}{n!}}\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\binom{x}{k}\,\frac{k!}{a^{k}} \] denote the Charlier polynomial of degree $n$, where $\binom{n}{k}=\frac{n!}{k!\left( n-k\right) !}$ if $n\geq k$ and $\binom{n}{k}=0$ if $n<k$. The Lancaster bivariate Poisson distribution was derived by \cite{Koudou:1998}. Specifically, if $\left( X,Y\right) $ follows such a distribution with parameter $a>0$ and correlation $\rho\in\lbrack0,1]$, then it has density \begin{equation} h\left( x,y;a,\rho\right) =f(x;a)\,f(y;a)\sum_{n=0}^{\infty}\rho^{n}\,C_{n}(x;a)\,C_{n}(y;a)\text{ \ for }x,y\in\mathbb{N}_{0}, \label{biPoisson} \end{equation} where \[ f(x;a)=\frac{a^{x}\,e^{-a}}{x!}\text{ for }x\in\mathbb{N}_{0} \] is the \textit{probability mass function (PMF)} for a Poisson random variable with mean $a$. Set $\tau=F_{i}^{-1}\left( 1-t\right) $, and let $x_{0}$ be the integer part of $\tau$. If $\left( \zeta_{i},\zeta_{j}\right) \ $with $\left( i,j\right) \in E_{2,m}$ follows a Lancaster bivariate Poisson distribution with correlation $\rho_{ij}\in\left[ 0,1\right] $, then \[ \kappa_{ij}=\mathrm{cov}\left( 1\left\{ p_{i}\leq t\right\} ,1\left\{ p_{j}\leq t\right\} \right) =\sum_{n=1}^{\infty}\rho_{ij}^{n}q_{n}^{2}\left( x_{0};a\right) , \] where \[ q_{n}\left( x_{0};a\right) =\sum_{x=0}^{x_{0}}f(x;a)C_{n}(x;a)=\sqrt{\frac{a^{n}}{n!}}\sum_{x=0}^{x_{0}}\frac{a^{x}\,e^{-a}}{x!}\sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\binom{x}{k}\,\frac{k!}{a^{k}}. \] It suffices to bound $q_{n}\left( x_{0};a\right) $. Specifically, \[ \left\vert q_{n}\left( x_{0};a\right) \right\vert \leq\sqrt{\frac{a^{n}}{n!}}\sum_{x=0}^{x_{0}}a^{x}\,e^{-a}\sum_{k=0}^{n}\frac{a^{-k}}{\left( x-k\right) !}\binom{n}{k}\,\leq C\sqrt{\frac{a^{n}}{n!}}\left( 1+a^{-1}\right) ^{n}.
\] So \[ \left\vert \kappa_{ij}\right\vert \leq C\sum_{n=1}^{\infty}\frac{a^{n}\rho_{ij}^{n}}{n!}\left( 1+a^{-1}\right) ^{2n}\leq C\rho_{ij}, \] and (\ref{eqSLLNR}) holds. \subsection{The Lancaster bivariate negative binomial distribution} \label{lancaster_bvNegBin} Let $\beta>0$ and $0<c<1$, and let $M_{n}^{\beta,c}(x)$ denote the $n$th (normalized) Meixner polynomial, i.e., \[ M_{n}^{\beta,c}(x)=\sqrt{\frac{c^{n}\,(\beta)_{n}}{n!}}\sum_{k=0}^{n}\frac{(-n)_{k}\,(-x)_{k}}{(\beta)_{k}\,k!}\left( 1-c^{-1}\right) ^{k}\text{ \ for \ }x\in\mathbb{N}_{0}. \] Here $\left( a\right) _{n}=\prod_{k=0}^{n-1}\left( a+k\right) $ for $a\in\mathbb{R}$ and $n\in\mathbb{N}$, and $\left( -x\right) _{k}=0$ is set when $x<k$. The Lancaster bivariate negative binomial distribution with parameter $\left( \beta,c\right) $ and correlation $\rho\in\lbrack0,1)$ was derived by \cite{Koudou:1998}. Specifically, if $(X,Y)$ follows such a distribution, then it has density \begin{equation} h\left( x,y;\beta,c\right) =f(x;\beta,c)\,f(y;\beta,c)\sum_{n=0}^{\infty}\rho^{n}\,M_{n}^{\beta,c}(x)\,M_{n}^{\beta,c}(y)\ \ \text{for }x,y\in\mathbb{N}_{0}\text{ \ and \ }0\leq\rho<1, \label{biNB} \end{equation} where \[ f(x;\beta,c)=(1-c)^{\beta}\,\frac{c^{x}\,(\beta)_{x}}{x!}\text{ \ for }x\in\mathbb{N}_{0} \] is the PMF for a negative binomial random variable. Set $\tau=F_{i}^{-1}\left( 1-t\right) $, and let $x_{0}$ be the integer part of $\tau$.
If $\left( \zeta_{i},\zeta_{j}\right) \ $with $\left( i,j\right) \in E_{2,m}$ follows a Lancaster bivariate negative binomial distribution with parameter $\left( \beta,c\right) $ and correlation $\rho_{ij}\in\lbrack0,1)$, then \[ \kappa_{ij}=\mathrm{cov}\left( 1\left\{ p_{i}\leq t\right\} ,1\left\{ p_{j}\leq t\right\} \right) =\sum_{n=1}^{\infty}\rho_{ij}^{n}q_{n}^{2}\left( x_{0};\beta,c\right) , \] where \begin{align*} q_{n}\left( x_{0};\beta,c\right) & =\sum_{x=0}^{x_{0}}f(x;\beta,c)M_{n}^{\beta,c}(x)\\ & =\sqrt{\frac{c^{n}\,(\beta)_{n}}{n!}}\sum_{x=0}^{x_{0}}(1-c)^{\beta}\,\frac{c^{x}\,(\beta)_{x}}{x!}\sum_{k=0}^{n}\frac{(-n)_{k}\,(-x)_{k}}{(\beta)_{k}\,k!}\left( 1-c^{-1}\right) ^{k}. \end{align*} It suffices to bound $q_{n}\left( x_{0};\beta,c\right) $. Specifically, \begin{align*} \left\vert q_{n}\left( x_{0};\beta,c\right) \right\vert & \leq C\sqrt{\frac{c^{n}\,(\beta)_{n}}{n!}}\sum_{k=0}^{x_{0}}\frac{n\left( n-1\right) \cdots\left( n-k+1\right) \,}{(\beta)_{k}\,k!}\left\vert 1-c^{-1}\right\vert ^{k}\\ & \leq C\sqrt{\frac{c^{n}\,(\beta)_{n}}{n!}}x_{0}n^{x_{0}}\leq Cc^{n/2}n^{\left( \beta-1+2x_{0}\right) /2}, \end{align*} where we have applied the identity (\ref{eq8}) to obtain the last inequality. Since $0<c<1$, we have \[ \left\vert \kappa_{ij}\right\vert \leq C\sum_{n=1}^{\infty}\rho_{ij}^{n}c^{n}n^{\beta-1+2x_{0}}\leq C\rho_{ij}. \] So, (\ref{eqSLLNR}) holds. \subsection{The Lancaster bivariate gamma-negative binomial distribution} Let $\alpha>0$, $\beta>0$ and $0<c<1$ be three constants. The Lancaster bivariate gamma-negative binomial distribution was derived by \cite{Koudou:1998}.
Specifically, if $(X,Y)$ follows such a distribution with parameter $\left( \alpha,\beta,c\right) $ and correlation $\rho\in\left[ 0,\sqrt{c}\right] $, then it has density \begin{equation} h\left( x,y;\alpha,\beta,c\right) =f(x;\beta,c)\,g\left( y;\alpha\right) \sum_{n=0}^{\infty}\rho^{n}\,\sqrt{\frac{n!}{\left( \alpha\right) _{n}}}M_{n}^{\beta,c}(x)\,L_{n}^{\left( \alpha-1\right) }\left( y\right) \label{biGNB} \end{equation} for $x\in\mathbb{N}_{0}$ and $y>0$, for which $X$ is a negative binomial random variable with PMF \[ f(x;\beta,c)=(1-c)^{\beta}\,\frac{c^{x}\,(\beta)_{x}}{x!}\text{ \ for \ }x\in\mathbb{N}_{0}, \] and $Y$ is a gamma random variable with density \[ g\left( y;\alpha\right) =\frac{1}{\Gamma\left( \alpha\right) }y^{\alpha-1}e^{-y}\text{ \ for \ }y>0. \] If $\left( \zeta_{i},\zeta_{j}\right) \ $with $\left( i,j\right) \in E_{2,m}$ follows a Lancaster bivariate gamma-negative binomial distribution with parameter $\left( \alpha,\beta,c\right) $ and correlation $\rho_{ij}\in\left[ 0,\sqrt{c}\right] $, then \[ \kappa_{ij}=\mathrm{cov}\left( 1\left\{ p_{i}\leq t\right\} ,1\left\{ p_{j}\leq t\right\} \right) =\sum_{n=1}^{\infty}\rho_{ij}^{n}q_{n}\left( x_{0};\beta,c\right) r_{n}\left( \tau,\alpha\right) , \] where $x_{0}$ is the integer part of $F_{i}^{-1}\left( 1-t\right) $, $\tau=F_{j}^{-1}\left( 1-t\right) $, \[ q_{n}\left( x_{0};\beta,c\right) =\sum_{x=0}^{x_{0}}f(x;\beta,c)M_{n}^{\beta,c}(x) \] and \[ r_{n}\left( \tau,\alpha\right) =\sqrt{\frac{n!}{\left( \alpha\right) _{n}}}\int_{-\infty}^{\tau}g\left( y;\alpha\right) L_{n}^{\left( \alpha-1\right) }\left( y\right) dy. \] Using the bounds derived in \autoref{lancaster_bvgamma} and \autoref{lancaster_bvNegBin}, we obtain \[ \left\vert \kappa_{ij}\right\vert \leq C\sum_{n=1}^{\infty}\rho_{ij}^{n}c^{n/2}n^{\left( \beta-1+2x_{0}\right) /2}n^{\left( 1-\alpha\right) /2}\leq C\rho_{ij}. \] So, (\ref{eqSLLNR}) holds.
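As a numerical sanity check on the Poisson-case expansion above, the truncated covariance series can be evaluated exactly in Python (an illustration only; the parameter $a=1$, the cutoff $x_{0}=2$ and the truncation level are our choices). Since the $C_{n}$ are orthonormal, Parseval's identity gives $\sum_{n\geq0}q_{n}^{2}\left( x_{0};a\right) =F\left( x_{0}\right) \leq1$, so $0\leq\kappa_{ij}\leq\rho_{ij}$, which the assertions verify:

```python
import math

def charlier(n, x, a):
    # Normalized Charlier polynomial C_n(x; a) as defined above.
    s = sum((-1) ** k * math.comb(n, k) * math.comb(x, k)
            * math.factorial(k) / a ** k for k in range(min(n, x) + 1))
    return math.sqrt(a ** n / math.factorial(n)) * s

def q(n, x0, a):
    # q_n(x0; a) = sum_{x <= x0} f(x; a) C_n(x; a), f the Poisson(a) PMF.
    return sum(a ** x * math.exp(-a) / math.factorial(x) * charlier(n, x, a)
               for x in range(x0 + 1))

def kappa(rho, x0, a, nmax=60):
    # Truncated series for kappa_ij = sum_{n >= 1} rho^n q_n^2(x0; a).
    return sum(rho ** n * q(n, x0, a) ** 2 for n in range(1, nmax + 1))

a, x0 = 1.0, 2
for rho in (0.2, 0.5, 0.9):
    k = kappa(rho, x0, a)
    assert 0.0 <= k <= rho  # comparison inequality with C = 1 in this case
```

All terms of the series are nonnegative, so the truncation can only underestimate $\kappa_{ij}$; the bound $\kappa_{ij}\leq\rho_{ij}$ nevertheless holds for the full series by the Parseval argument.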
\section*{Acknowledgements} This research was funded by the New Faculty Seed Grant provided by Washington State University. I would like to thank G\'{e}rard Letac for suggesting to me several references on orthogonal polynomials, and Donald Richards for encouraging me to extend the work of \cite{Chen:2014SLLN} to the setting of Lancaster bivariate distributions. \bibliographystyle{chicago}
\section{DFT calculations for nanoribbons} Here we present DFT calculations for the MoS$_2$ and TiCl$_2$ nanoribbons shown in Fig. \ref{fig:MoS2_ribbon_edge}. The main difference between these two materials is that the former has non-trivial topological polarization while the latter does not. Tight binding calculations for ribbons of polar materials are unreliable due to the electronic redistribution originating from the polarization, and a proper description of charge transfer requires a self-consistent approach. \begin{figure}[b] \centering \includegraphics[width=0.18\textwidth]{MoS2_ribbon_zz1.png} \includegraphics[width=0.18\textwidth]{MoS2_ribbon_zz2.png} \includegraphics[width=0.1\textwidth]{MoS2_ribbon_arm.png} \caption{Nanoribbons of the AB$_2$ structures considered in the present work (here representing either MoS$_2$ or TiCl$_2$). Left: zigzag edges with S(Cl) dimer termination on the Mo(Ti)-edge marked in red and S(Cl)-edge termination marked in blue. Middle: zigzag termination with Mo(Ti)-edge marked in red and alternating S(Cl)-edge marked in blue. Right: armchair termination.} \label{fig:MoS2_ribbon_edge} \end{figure} In Fig. \ref{fig:MoS2_bs} we show the band structures and electrostatic potentials for ribbons of MoS$_2$. The color code in the band structures measures the weight of states on the different edges highlighted in Fig. \ref{fig:MoS2_ribbon_edge}. The potentials are evaluated at a representative point in the direction orthogonal to the atomic plane (close to the plane) and averaged over the periodic direction. In the direction across the ribbons we carried out a sliding window average to obtain a smooth function. The band structures have been colored according to the weight of states at the two edges. The metallic edges are evident in the two cases of zigzag terminated ribbons, since both blue and red bands (corresponding to states at the two edges) cross the Fermi level.
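The sliding-window average used here to smooth the potential profiles can be sketched in a few lines of numpy (an illustrative implementation; the window size is a free choice and the array is a placeholder, not the actual DFT data):

```python
import numpy as np

def sliding_window_average(v, w=5):
    # Centered moving average of the 1D profile v with odd window w; the
    # window shrinks near the ends so the output keeps the same length.
    assert w % 2 == 1
    h = w // 2
    return np.array([v[max(0, i - h):i + h + 1].mean() for i in range(len(v))])

profile = np.linspace(0.0, 1.0, 11)              # placeholder linear ramp
smooth = sliding_window_average(profile, w=3)
assert np.allclose(smooth[1:-1], profile[1:-1])  # linear interior unchanged
```

A centered window of this kind preserves a linear potential ramp in the interior, which matters when reading off the slope associated with the bulk polarization.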
In both cases there is a small hole pocket at the S-edge signifying transfer of electrons from the S-edge to the Mo-edge, which is a direct consequence of the bulk topological polarization of the material. This is also reflected in the potential, which increases linearly across the ribbon. The potential shift across the ribbon, however, is strongly dependent on the edge terminations, which may or may not introduce dipole densities at the edges. In general, the bulk polarization gives rise to bound charge at the edges, which leads to a potential difference between the edges that increases with the width of the ribbon. Beyond a certain width (smaller than the ribbons considered here) the potential difference exceeds the band gap and charge is transferred between the edges, which results in metallic edges. For the armchair ribbon the potential is flat across the ribbon since the topological polarization is parallel to the edges, and the band structure of the ribbon is gapped. In the case of TiCl$_2$ the electronic structure of the ribbons shows significant differences compared to MoS$_2$. We show the band structures and potential profiles of these ribbons in Fig. \ref{fig:TiCl2_bs}. For the zigzag ribbon with Cl dimers at the Ti-edge we again observe a gapless spectrum. It is, however, only the dimer edge that becomes metallic, which is due to the strong electronegativity of the Cl dimers. Since the Cl-edge band is unoccupied, charge is taken from the bulk, which pins both bulk and dimer bands to the Fermi level. The metallic edge states thus arise solely from the structure at the edge, and such states may be passivated by appropriate adsorbates that donate the desired electrons to the Cl dimers.
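The width criterion stated above for polar ribbons can be put in a minimal quantitative form (a toy model assuming a strictly linear potential ramp; all numbers below are hypothetical and not taken from the present calculations):

```python
import numpy as np

def potential_profile(width_A, slope_eV_per_A, gap_eV, npts=200):
    # Linear potential ramp across a polar ribbon, saturated at the band
    # gap once edge-to-edge charge transfer pins the edge states.
    x = np.linspace(0.0, width_A, npts)
    return x, np.minimum(slope_eV_per_A * x, gap_eV)

# Hypothetical slope and gap: charge transfer sets in at W* = gap / slope.
x, V = potential_profile(60.0, 0.05, 1.8)
assert V[-1] == 1.8               # wide ribbon: potential difference pinned at the gap
assert np.all(np.diff(V) >= 0.0)  # potential rises monotonically across the ribbon
```

In this caricature the crossover width is simply $W^{*}=E_{\mathrm{g}}/s$; for ribbons wider than $W^{*}$ the edges are metallic, consistent with the behavior described for the zigzag MoS$_2$ ribbons.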
For the stoichiometric zigzag ribbon there is a difference in electronegativity between the Ti-edge and Cl-edge, which pins both of the edge states to the Fermi level, but the pinning again originates from the detailed structures of the edges and such states may be passivated. This conclusion is corroborated by the potential profiles of the ribbons, which show that the potential in the bulk of the ribbons becomes flat due to the absence of bulk polarization. We note that there is a significant amount of variation in the potential in the vicinity of the edges as well as a constant potential shift across the ribbons, but these effects arise as a consequence of dipoles residing at the edges. \begin{figure*}[h!] \centering \includegraphics[width=0.32\textwidth]{bs_MoS2_zz1.png} \includegraphics[width=0.32\textwidth]{bs_MoS2_zz2.png} \includegraphics[width=0.32\textwidth]{bs_MoS2_arm.png} \includegraphics[width=0.32\textwidth]{V_MoS2_zz1.png} \includegraphics[width=0.32\textwidth]{V_MoS2_zz2.png} \includegraphics[width=0.32\textwidth]{V_MoS2_arm.png} \caption{Band structures and potential profiles of the three MoS$_2$ nanoribbons shown in Fig. \ref{fig:MoS2_ribbon_edge}. Left: Weight of states localized on the Mo-edge with S dimer termination marked in red and weight on S-edge in blue. Middle: Weight of states localized on zigzag termination with Mo-edge marked in red and weight on S-edge marked in blue. Right: armchair termination. The zero energy marks the Fermi level.} \label{fig:MoS2_bs} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.32\textwidth]{bs_TiCl2_zz1.png} \includegraphics[width=0.32\textwidth]{bs_TiCl2_zz2.png} \includegraphics[width=0.32\textwidth]{bs_TiCl2_arm.png} \includegraphics[width=0.32\textwidth]{V_TiCl2_zz1.png} \includegraphics[width=0.32\textwidth]{V_TiCl2_zz2.png} \includegraphics[width=0.32\textwidth]{V_TiCl2_arm.png} \caption{Band structures and potential profiles of the three TiCl$_2$ nanoribbons shown in Fig. 
\ref{fig:MoS2_ribbon_edge}. Left: Weight of states localized on the Ti-edge with Cl dimer termination marked in red and weight on the Cl-edge marked in blue. Middle: Weight of states localized on the zigzag termination with the Ti-edge marked in red and weight on the Cl-edge marked in blue. Right: armchair termination. The zero energy marks the Fermi level.} \label{fig:TiCl2_bs} \end{figure*} \section{Models of polar and non-polar nanotriangles with fractional corner charges} It is perhaps not obvious that the bulk topological polarization has any physical consequences for nanotriangles, which are intrinsically non-polar due to symmetry. To illustrate that this is indeed the case we consider simple models of one- and two-electron systems with and without topological polarization. In Fig. \ref{fig:MoS2_triangle_polar} we show cartoons of nanotriangles representing the case of MoS$_2$, where a single electron has been taken from the transition metal atom and placed in the 1b site. This corresponds to the case of $Q_\mathrm{c}=1/3$ (not taking Kramers degeneracy into account) and a polarization of $(1/3, 2/3)$. In the case of the armchair triangle the edges become fully compensated such that the only fractional charge resides at the corners. For the zigzag triangle the edge states become fractionally occupied and dispersive edge states thus appear, which cannot be passivated by adsorbates. The charge at a given edge is $2(N-1)/3$, where $N$ is the number of edge unit cells. Similarly, with a corner charge of $2/3$ any zigzag triangle would acquire fractionally occupied edge states with an edge charge of $(N-1)/3$. In Fig. \ref{fig:MoS2_triangle_nonpolar} we show the corresponding situation for a zigzag triangle of a non-polar material. In this example, we consider the case where two electrons are removed from the transition metal atom, one is added at the 1b site and another one at the 1c site. This gives a vanishing polarization and a fractional corner charge of $1/3$.
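The edge-charge counting above can be checked with a few lines of exact arithmetic (the helper name is ours, introduced only for illustration):

```python
from fractions import Fraction

def zigzag_edge_charge(n_cells: int, corner_charge: Fraction) -> Fraction:
    """Total charge on one zigzag edge of a triangle with n_cells edge unit
    cells, following the counting in the text: 2(N-1)/3 for Q_c = 1/3 and
    (N-1)/3 for Q_c = 2/3."""
    if corner_charge == Fraction(1, 3):
        return Fraction(2 * (n_cells - 1), 3)
    if corner_charge == Fraction(2, 3):
        return Fraction(n_cells - 1, 3)
    raise ValueError("unsupported corner charge")

# For Q_c = 1/3 and N = 5 the edge charge is fractional, so the edge states
# cannot be passivated by integer charge transfer from adsorbates.
print(zigzag_edge_charge(5, Fraction(1, 3)))  # 8/3
```

Whenever the result is non-integer, the edge hosts fractionally occupied states, in line with the dispersive metallic edge states discussed above.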
For clarity we show the construction (left) where all 1b and 1c sites receive one third of an electron from all adjacent transition metal atoms. The resulting structure is shown to the right, where it is evident that the edges become fully compensated and the only fractional charge resides at the corners. \begin{figure*}[h!] \centering \includegraphics[clip, trim=2.5cm 2.0cm 1.5cm 4.5cm, width=0.95\textwidth]{triangles_polar.pdf} \caption{C$_3$ symmetric triangles with a topological polarization of (1/3, 2/3). The bulk structure has a single occupied state at 1b (center of hexagons) which is taken from the 1a position (blue atoms), while the yellow sites (1c) are neutral. Left: triangle with armchair edges, which are fully occupied, whereas the corners have fractional occupancy of 2/3. Right: triangle with zigzag edges. The corner states again have fractional occupancy of 2/3, but the 1b sites at the edges are also fractionally occupied and the edges thus host metallic states that cannot be passivated.} \label{fig:MoS2_triangle_polar} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[clip, trim=2.0cm 2.5cm 5.0cm 3.5cm, width=0.95\textwidth]{triangles_nonpolar.pdf} \caption{C$_3$ symmetric triangles with vanishing polarization. The bulk structure has one occupied state at 1b (center of hexagons) and one occupied state at 1c, which are both taken from the 1a position (blue atoms). Left: result of assigning 1/3 of an electron to all 1b sites adjacent to 1a sites and 1/3 of an electron to all 1c sites adjacent to 1a sites. Right: same as left, but now with all fractionally occupied sites at edges merged into integer-occupied sites. This results in passivated edges and corner states with a fractional occupation of 2/3.} \label{fig:MoS2_triangle_nonpolar} \end{figure*} \section{Tight-binding calculation of MoS$_2$ nanotriangle with zigzag edges} In Fig. \ref{fig:MoS2_flake_zigzag} we show a calculation of a MoS$_2$ nanotriangle with zigzag edges.
The spectrum is clearly gapless and the states traversing the bulk band gap are localized at the edges. However, in contrast to the non-polar case of TiCl$_2$ (see main text), these cannot be passivated by adsorbates. This follows from the fractional occupation of edge states illustrated in Fig. \ref{fig:MoS2_flake_zigzag}. Adding either one or two electrons per edge unit cell cannot move the Fermi level into a gap with pinned eigenvalues corresponding to corner states. The details of the spectrum are highly dependent on the edge terminations, but the fact that the edges cannot be passivated remains valid for any termination. \begin{figure}[tb] \centering \includegraphics[width=0.45\textwidth]{MoS2_trireal.pdf} \caption{Nanotriangle of MoS$_2$ with Mo zigzag edges and S dimers. The edge states give rise to a gapless spectrum that cannot be passivated by the introduction of suitable adsorbates. The grey dashed line indicates the Fermi level of the bare triangle and the blue dashed line indicates the Fermi level when one electron has been added per S dimer at the edges. The upper inset shows two states at the Fermi level, which are localized at all edges as indicated in the lower inset.} \label{fig:MoS2_flake_zigzag} \end{figure} \section{2D materials with C$_3$ symmetry and fractional corner charges from the C2DB} We have taken 71 materials with space group $P\bar 6m2$ from the C2DB and calculated the fractional corner charges and spontaneous polarization as well as the band gap. In Tab. \ref{tab:nontrivial} we present the results for materials exhibiting non-trivial corner charges. For materials that have been experimentally characterized as bulk van der Waals bonded materials we supply the identifier for either ICSD or COD. In Tab. \ref{tab:trivial} we show the same data for the 43 materials that are not in an OAL phase. We note that only 13 of these have vanishing polarization and may be regarded as having trivial topology.
\begin{table}[h]
\begin{tabular}{l|lllll}
 & & $\chi^{(3)}$ & $Q_\mathrm{c}$ & \textbf{P} & ID \\
\hline
CrO$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & \\
CrS$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & \\
CrSe$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & \\
CrTe$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & \\
Hf$_3$C$_2$O$_2$ & & (-2, 4) & 4/3 & (0, 0) & \\
Hf$_3$N$_2$O$_2$ & & (-3, 5) & 4/3 & (0, 0) & \\
HfBr$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
HfCl$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
HfI$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
MoO$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & \\
MoS$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & 38401 \\
MoSe$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & 49800 \\
MoTe$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & 15431 \\
Ti$_2$Te$_2$ & & (-3, 4) & 2/3 & (0, 0) & \\
TiBr$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
TiCl$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
TiH$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
TiI$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
WO$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & \\
WS$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & 202367 \\
WSe$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & 84182 \\
WTe$_2$ & & (-2, 3) & 2/3 & (1/3, 2/3) & 653170 \\
Zr$_2$Se$_2$ & & (-3, 4) & 2/3 & (0, 0) & \\
Zr$_2$Te$_2$ & & (-3, 4) & 2/3 & (0, 0) & \\
Zr$_3$C$_2$O$_2$ & & (-2, 4) & 4/3 & (0, 0) & \\
ZrBr$_2$ & & (-2, 3) & 2/3 & (0, 0) & \\
ZrCl$_2$ & & (-2, 3) & 2/3 & (0, 0) & 1530902 \\
ZrI$_2$ & & (-2, 3) & 2/3 & (0, 0) &
\end{tabular}
\caption{2D materials with space group $P\bar 6m2$ that exhibit non-trivial fractional corner charges.
We have stated the symmetry indicators $\chi^{(3)}$, the fractional corner charges $Q_\mathrm{c}$, the polarization in dimensionless units \textbf{P} and the ICSD/COD identifier (ID) for materials that are experimentally known in bulk form.} \label{tab:nontrivial} \end{table}
\begin{table}[h]
\begin{tabular}{l|lllll}
 & & $\chi^{(3)}$ & $Q_\mathrm{c}$ & \textbf{P} & ID \\
\hline
AlN & & (0, 0) & 0 & (0, 0) & \\
Al$_2$O$_2$ & & (-2, 2) & 0 & (0, 0) & \\
Al$_2$S$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
Al$_2$Se$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
Al$_2$Te$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
BaBr$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
BaCl$_2$ & & (-2, 2) & 0 & (0, 0) & \\
BaI$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
Bi$_2$O$_2$ & & (-2, 2) & 0 & (0, 0) & \\
BN & & (0, 0) & 0 & (0, 0) & 186248 \\
BP & & (0, 0) & 0 & (0, 0) & \\
CaBr$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
CaCl$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
CaI$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
GaN & & (0, 0) & 0 & (0, 0) & 159250 \\
Ga$_2$O$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
Ga$_2$S$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & 635254 \\
Ga$_2$Se$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & 2002 \\
Ga$_2$Te$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & 43328 \\
HfSe$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
HfTe$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
In$_2$O$_2$ & & (-2, 2) & 0 & (0, 0) & \\
In$_2$S$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
In$_2$Se$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & 640503 \\
In$_2$Te$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
Mo$_2$Cl$_6$ & & (-1, 1) & 0 & (0, 0) & \\
PbBr$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
PbCl$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
PbI$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
PbS$_2$ & & (-1, 1) & 0 & (1/3, 2/3) & \\
PbSe$_2$ & & (-1, 1) & 0 & (1/3, 2/3) & \\
SnI$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
SrBr$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
SrCl$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
SrI$_2$ & & (-2, 2) & 0 & (1/3, 2/3) & \\
TiSe$_2$ & & (-2, 2) & 0 & (0, 0) & \\
TiTe$_2$ & & (-2, 2) & 0 & (0, 0) & \\
Tl$_2$S$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
Tl$_2$Se$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
Tl$_2$Te$_2$ & & (-2, 2) & 0 & (2/3, 1/3) & \\
W$_2$Br$_6$ & & (-1, 1) & 0 & (0, 0) & \\
W$_2$Cl$_6$ & & (-1, 1) & 0 & (0, 0) & \\
ZrTe$_2$ & & (-2, 2) & 0 & (2/3, 1/3) &
\end{tabular}
\caption{2D materials with space group $P\bar 6m2$ that do not exhibit non-trivial fractional corner charges. We have stated the symmetry indicators $\chi^{(3)}$, the fractional corner charges $Q_\mathrm{c}$, the polarization in dimensionless units \textbf{P} and the ICSD/COD identifier (ID) for materials that are experimentally known in bulk form.} \label{tab:trivial} \end{table} \end{document}
\section{Introduction}\label{sec_intro} Currently, it is more evident than ever before that we consume much more energy than we actually need. The extensive use of electrical devices in most of our daily activities, the absence of eco- or energy-friendly design in older devices, and the lack of awareness or care on the consumers' side for reducing their energy footprint lead to an over-consumption of energy. In order to promote energy efficiency awareness and care at the product level and to trigger a global harmonisation, the EU has established a regulatory framework\footnote{https://www.europarl.europa.eu/factsheets/en/sheet/69/efficjenza-energetika}, which among others defines a set of Minimum Energy Performance Standards (MEPS) and an energy-related labelling scheme for electrical devices. In the hypothetical scenario where all countries agree to the MEPS and apply them by 2020, the gross annual energy savings are expected to reach 13\% by 2030, and will reach 34\% if the highest energy efficiency levels are agreed upon and put into practice \cite{molenbroek2015savings}. These findings show the importance of investing in more efficient electrical devices, but also point to a need for adopting a more energy-efficient behavior. In order to promote better behaviors, and discourage bad ones, governments use several policy interventions, such as energy efficiency labeling for devices, taxation of high energy consumption and financial incentives for consumption reduction, which have a short- or medium-term effect on consumers' behavior. They also provide feedback, energy saving tips, and peer device comparisons in order to persuade consumers of the benefits of a behavioral change \cite{cattaneo2019internal}. The aforementioned interventions have a bigger impact when they are addressed to domestic users.
However, their impact is smaller in public buildings, such as schools or offices, where people tend to care less about proper energy usage, since they do not directly pay for the consumed energy. The main reason behind high energy consumption in public buildings is the unnecessary usage of devices (e.g. heating or cooling devices, lights, etc.), especially when the public spaces are not occupied \cite{rafsanjani2015review}. In this case, it is important either to automate energy efficiency, by embedding intelligence into the devices (actuators) and the environment (sensors), or to gradually change people's habits and promote more energy-efficient behaviors, through warnings, notifications and timely recommendations \cite{kluckner2013exploring, petkov2012personalised, graml2011improving}. Despite the many works that used feedback for persuading home users to improve their energy consumption behavior, it was in \cite{alsalemi2019ieeesystems} that action recommendations were first addressed in real time to home users, in association with their actions and daily habits. According to the Habit Loop theory \cite{em3_gardner2015review}, the main neurological loop that governs any change in habit comprises: a cue, a routine, and a reward. In order to replace inefficient energy habits with efficient ones, it is important to identify the most promising routines for energy saving, locate their cues and recommend energy saving actions. Recommender systems can be very supportive in this change loop, since they can link the action with the reward, and strengthen the new routine. Generally, recommendation tasks can be classified as addressing the five \textit{W} components: when, where, who, what, why \cite{zhang2020explainable}.
The five W's generally correspond to time-aware recommendations (\textit{when}), location-aware recommendations (\textit{where}), their social aspect (\textit{who}), application-specific recommendations (\textit{what}), and their explainable component (\textit{why}), respectively. In this work, we focus on the \textit{why} aspect with the help of explainable recommender systems. Following the trend of explainable AI (Artificial Intelligence), explainable recommender systems aim to provide users with useful recommendations, followed by explanations about them \cite{zhang2020explainable}. Explanations may refer to the reasons behind the recommendation or to the benefits of choosing the recommended option. They can improve the persuasiveness of the system, the user understanding and satisfaction, and provide an immediate reward to the user. Recent explanatory work has focused on two dimensions: 1) the form of the explanations produced (e.g. textual, visual, etc.); and 2) the model or algorithm used to produce the explanation, which can be loosely categorized as matrix factorisation, topic modeling, graph-driven, deep learning, knowledge-graph, interaction laws, and post-hoc models \cite{zhang2020explainable}. Explainable recommendations can be classified by the type of explanation used: \begin{enumerate} \item User-based and Item-based Explanations \item Content-based Explanation \item Textual Explanations \item Visual Explanations \item Social Explanations \item Hybrid Explanations \end{enumerate} In this work, we focus on creating hybrid explanations that combine the power of contextual and textual explanations. This work builds on our previous work on micro-moment based recommendations, and powers them with information on the reasons that triggered each recommendation and the expected benefit from its acceptance.
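A hybrid (contextual plus textual) explanation of the kind we target can be sketched in a few lines; the template, function name and savings figure below are hypothetical illustrations, not the system's actual API:

```python
def explain(action: str, trigger: str, est_savings_kwh: float) -> str:
    """Combine the contextual trigger of a recommendation (why it fired)
    with a textual persuasive fact (the expected benefit) into one
    hybrid explanation string."""
    return (f"Recommended action: {action}. "
            f"Why: {trigger}. "
            f"Benefit: accepting it saves about {est_savings_kwh:.1f} kWh per month.")

print(explain("Turn off the A/C",
              "the room has been empty for 15 minutes",
              12.5))
```

The contextual part answers the \textit{why} behind the trigger, while the textual fact supplies the immediate reward that the Habit Loop theory calls for.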
The main contributions of this work comprise: \begin{itemize} \item a recommendation system, which generates personalised recommendations aligned with the user goals; \item the ability of the system to learn from the user response to a recommended action and adapt the recommendations that follow; \item the explainability of recommendations, both in terms of reasoning about the selection of a proposed action and of providing the user with persuasive facts about the energy savings from the action. \end{itemize} Section \ref{sec_related_work} that follows provides a summary of the related works in the field, whereas Section \ref{sec_methodology} presents the proposed methodology for explainable and persuasive recommendations, which brings the human into the loop of an energy efficiency system. The section also presents the core system architecture with emphasis on the explainable recommendation extensions. Section \ref{sec_evaluation} performs a comparative evaluation of the various recommendation strategies in order to demonstrate the improvement in performance and the facts that most affect user choices. Finally, Section \ref{sec_conclusions} summarizes the main findings of this work and indicates the next steps. \section{Related Work}\label{sec_related_work} In order to better understand and explain the suggestions of recommendation systems for energy efficiency, it is first necessary to perform a comprehensive survey of recommendation algorithms. The survey that follows presents the main recommendation techniques and their applications in the domain of energy efficiency, in an attempt to cover the entire depth and breadth of state-of-the-art approaches so far. The surveyed approaches are summarized in Table \ref{RelWorks}. The survey then focuses on the explanations and facts that recommendation systems can use in order to improve user acceptance.
\subsection{Recommendation techniques for energy efficiency} \subsubsection{Case-based} Case-based recommender systems are rule-based systems, which can work for one or more users, by considering each user individually. The individual consumption habits and preferences are evaluated against a set of rules and predefined decisions, which trigger - when met - the corresponding energy saving actions. The authors in \cite{Bravo2019} implement a multi-agent system, which enables them to (i) collect power usage patterns from electrical appliances in domestic buildings; (ii) procure electricity price data from the Internet; and (iii) trigger appropriate recommendations for end-users using consumption footprints and electricity prices. To this end, the developed recommendation system furnishes information about the best hours to use domestic devices, offering an economic benefit to end-users. This can be regarded as a strategy to distribute/optimize the use of power in households and avoid peak electricity demand. A case-based reasoning recommendation system is introduced in \cite{Pinto2018}. The system knowledge (cases) consists of related historical examples, which map a usage behavior to an energy saving plan. The system recommends an energy-saving plan to a user at each specific moment of the day, by considering his/her consumption behavior and similar cases from the knowledge base. A k-Nearest Neighbor (KNN) technique is used to retrieve the most similar examples at each moment and an SVM-based weighting scheme is employed for optimizing the weighting factors of each example. At the last stage, an expert system is used, which contains an ensemble of ad-hoc rules that ensure the applicability of the strategy to the case at hand. The authors develop the aforementioned case-based reasoning scheme using a dedicated software agent, which allows the integration of the recommender system in a multi-agent framework with more energy saving capabilities.
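A case/rule-based recommender of the kind surveyed above reduces to a handful of condition-action rules; the rules, thresholds and messages below are illustrative assumptions, not taken from the cited systems:

```python
def recommend(price_eur_kwh: float, appliance_on: bool, room_occupied: bool):
    """Tiny rule-based recommender: each rule maps a consumption context
    to an energy-saving action, in the spirit of the case-based systems
    surveyed above (thresholds are illustrative)."""
    recs = []
    if appliance_on and not room_occupied:
        recs.append("Turn off the appliance: the room is unoccupied.")
    if appliance_on and price_eur_kwh > 0.30:  # assumed peak-price threshold
        recs.append("Shift usage to off-peak hours to reduce cost.")
    return recs

# Peak price, device running in an empty room: both rules fire.
print(recommend(0.35, appliance_on=True, room_occupied=False))
```

Adding a case is a matter of appending another condition-action pair, which is what makes this family of systems easy to extend but hard to personalise without explicit user models.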
\subsubsection{Collaborative filtering} Collaborative filtering techniques assume a set of users that choose from a closed set of items (or actions) and explicitly or implicitly state their preferences (or ratings) for them. The items recommended to a user are the most preferable to him/her (or those with the highest predicted rating) \cite{morawski2017fuzzy} or to the group of users he/she belongs to \cite{castro2018group}. Energy saving systems that employ collaborative filtering deploy different interacting intelligent agents which dynamically capture user preferences. They then promote energy efficiency to end-users through tailored recommendations that better match their preferences. More specifically, the authors of \cite{Zhang8412100} focus on the analysis of house appliance data and try to predict the rating levels of various consumption plans, which correspond to the user preferences for each plan. Then, they use the prediction model to help new users select pertinent plans and appropriate tariffs. The energy-saving recommendations in \cite{ZHENG2020117775} are generated using a dual-step framework, which involves a feature formulation and a recommendation generation phase. In the former phase, user preferences are captured in a matrix, in which the rows correspond to energy-saving devices and the columns correspond to users. The matrix values depict the appliance usage information for each user and each device. As a result, the matrix models end-users' consumption behavior and users are represented as vectors in the feature space of devices. The collaborative filtering algorithm in the second phase performs a kNN clustering of users, which is the basis for sending the same tailored recommendations to all users of the same cluster. The ReViCEE recommendation system \cite{KAR2019135} provides personalized recommendations to reduce wasted energy in a university campus building in Singapore.
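The user-device matrix and kNN step described above can be sketched as follows; the toy usage vectors and the cosine-similarity choice are our own illustrative assumptions, not the algorithm of any specific cited work:

```python
import math

def cosine(u, v):
    """Cosine similarity between two appliance-usage vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def nearest_users(target, usage, k=2):
    """usage maps each user to a vector over the appliance feature space;
    return the k users most similar to `target` (the kNN step, whose
    neighbours then receive the same tailored recommendations)."""
    others = [(cosine(usage[target], vec), user)
              for user, vec in usage.items() if user != target]
    return [user for _, user in sorted(others, reverse=True)[:k]]

# Rows of the user-device matrix, one usage vector per user.
usage = {"u1": [5, 0, 2], "u2": [4, 1, 2], "u3": [0, 6, 0]}
print(nearest_users("u1", usage, k=1))  # ['u2']
```

In the second phase, the recommendations accepted by a user's nearest neighbours would be propagated to the whole cluster.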
The system learns end-user preferences via the analysis of historical power consumption footprints. Specifically, individual and collaborative preferences in the usage of lights are extracted from actual usage data and the recommendations help users to automatically balance between their personalized visual comfort and energy efficiency. \subsubsection{Context-aware} Context-aware recommendation systems analyze historical power consumption data in different contexts and develop rule-based recommendations to ensure that end-users' preferences fit into them. In \cite{Shigeyoshi2013} an advisory recommendation system is proposed, which (i) analyzes energy consumption data in different contexts; (ii) keeps a record of recently issued recommendations in order to avoid repetitive recommendations; and (iii) conducts a social experiment, in which 47 end-users use the recommendations and provide feedback on their effectiveness. The experimental evaluation shows that randomly selected recommendations and recommendation abundance have negative effects on the users. In \cite{Luo2017} a personalized recommendation system is proposed. The system employs service recommendation schemes to derive possible end-users' interests and requirements related to the energy efficiency of electrical devices. Next, it identifies possibilities for saving energy and issues tailored recommendations to promote energy saving. Simultaneously, data retrieval methods are deployed to draw code words from textual appliance advertisements in order to increase energy saving awareness among users. The developed scheme first applies an energy disaggregation technique using generalized particle filtering to infer appliance-level patterns from the main power consumption. Then, it employs various inference rules to model the end-user's profile and preferences.
Finally, tailored recommendations that ensure an energy-efficient behavior are triggered, and the similarity between the user profile and the device profile is measured, to rank the appliance advertisements and generate recommendations. In \cite{Wei2018} the authors begin with the assumption that energy efficiency can be dramatically limited if end-users are considered as \enquote{immovable objects}. Based on this, an energy efficiency recommendation system using end-user location is introduced. Specifically, two kinds of recommendations are generated based on location data. They are defined as (i) move recommendations that advise the end-user to move from one space to another; and (ii) shift-schedule recommendations that notify the end-user to arrive/depart a set amount of time earlier/later. In \cite{SARDIANOS2020394} a context-aware Recommender System (RS) is implemented based on (i) collecting data from smart sensors and actuators describing end-users' energy consumption habits; and (ii) evaluating the triggered recommendations. In this regard, a RS called REHAB-C is developed, which can not only generate tailored energy-efficiency suggestions but can also postpone or remove them based on the actual data and further store end-users' preferences. Specifically, contextual data are analyzed continuously and the recommendations are generated using a rule-based algorithm. The authors in \cite{Garcia2017} opt for the development of a recommendation system that combines the merits of information and communication technology (ICT) and social analysis to improve end-users' energy consumption behavior. To that end, a context-aware collaborative algorithm is deployed to generate tailored recommendations for end-users. The implemented recommendation system includes a real-time localization module along with a wireless sensor network that provide real-time data about end-users' activities.
The user context combines location and activity, which adds two more dimensions to the original user-item rating matrix of collaborative filtering. The context-aware recommendations lead to a more fine-grained tracking of user consumption behavior and thus to better recommendations. \subsubsection{Rasch-based} The Rasch model is a psychometric model used for the analysis of user responses to questionnaires, which aims to find the balance between the respondent's behavior and the difficulty of implementing the selected response. Rasch-based recommendation systems conduct a Rasch analysis to measure latent traits of end-users that are related to their energy consumption behavior and preferences. In addition, they conduct survey studies to model the satisfaction of end-users with specific energy-saving actions and then, based on the Rasch analysis, they generate personalized recommendations. The Rasch-based power usage recommender system proposed in \cite{Starke2015} provides its end-users with personalized energy saving recommendations. The actions are recommended using a Rasch profile of the users' behavior. In their framework, the authors provided tailored suggestions (that match their Rasch profile) to 196 end-users via an online recommender platform. In \cite{Starke2017} a recommender system to promote end-users' behavioral change is presented. The authors carried out two large review studies, where personalized energy-efficiency recommendations are evaluated for their feasibility and applicability by the users. Specifically, 79 energy-efficiency actions are drawn using a Rasch-based profile analysis to help end-users (i) take easy actions; (ii) improve system support; and (iii) collect their feedback and rate choice satisfaction. \subsubsection{Probabilistic models} The recommendation systems of this type are based on the analysis of power consumption data.
They develop probabilistic relational models in order to predict end-users' preferences and then generate appropriate recommendations. In \cite{Li7093924} historical power consumption patterns are analyzed using a continuous Markov chain model, which is based on a time-series investigation and a multi-objective programming model. Moving forward, personalized recommendations are generated to support the use of renewable energy solutions that are deployed in the work environment. In \cite{Wei9001078} the authors propose a recommendation system to reduce the wasted energy of commercial buildings with human-in-the-loop. The building energy efficiency task is modeled as a Markov decision process. Then, deep reinforcement learning is investigated for learning energy efficiency recommendations and engaging end-users in energy efficiency behaviors. Consequently, the adopted system learns user actions with a high energy efficiency potential, and thus notifies the end-users of a commercial building with recommendations. After that, feedback from end-users is utilized to understand which energy efficiency recommendations work best. \begin{table} [!b] \caption{A comparison of related RS frameworks and their properties.} \label{RelWorks} \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{lllll} \hline Work & \begin{tabular}{@{}c@{}}Recommendation \\ Technique \end{tabular} & Recommendations & Explanations & Application scenario \\ \hline Bravo et al. \cite{Bravo2019} & Case based & Electricity price based recommendations & No & Households \\ Pinto et al. \cite{Pinto2018} & Case based & Energy reduction levels from similar cases & No & Public \\ Zhang et al. \cite{Zhang8412100} & Collaborative filtering & Energy consumption plans \& tariffs & No & Households \\ Zheng et al. \cite{ZHENG2020117775} & Collaborative filtering & Appliance-level consumption rates & No & Households \\ ReViCEE \cite{KAR2019135} & Collaborative filtering & Light levels & No & Households \\ Shigeyoshi et al.
\cite{Shigeyoshi2013} & Context-aware & Context-based energy saving actions & No & Households \\ Luo et al. \cite{Luo2017} & Context-aware & Tailored recommendations \& text ads & No & Households \\ Wei et al. \cite{Wei2018} & Context-aware & Move and shift-schedule actions & No & \begin{tabular}{@{}l@{}}Commercial \\ buildings \end{tabular}\\ REHAB-C \cite{SARDIANOS2020394} & Context-aware & Micro-moment personalised saving actions & No & \begin{tabular}{@{}l@{}}Academic \\ buildings \end{tabular}\\ Garcia et al. \cite{Garcia2017} & \begin{tabular}{@{}l@{}}Context-aware \\ \& Collaborative filtering \end{tabular} & Tailored advice on end-users' activities & No & Households \\ Starke et al. \cite{Starke2015} & Rasch-based & Rasch profile based recommendations of & No & Households \\ & & end-users' behavior & & \\ Starke et al. \cite{Starke2017} & Rasch-based & Rasch profile recommendations based on & No & Households \\ & & a social experiment & & \\ Li et al. \cite{Li7093924} & Probabilistic relational & Tailored recommendations to support the use & No & Work spaces \\ & & of renewable energy solutions & & \\ Wei et al. \cite{Wei9001078} & Probabilistic relational & Recommendations learned from actions having & No & Commercial \\ & & high energy efficiency potential & & buildings \\ This Work & \begin{tabular}{@{}l@{}}Explainable\\Context-aware\end{tabular} & Energy saving actions, facts and reasoning & Yes & \begin{tabular}{@{}l@{}}Academic\\buildings\end{tabular} \\ \hline \end{tabular} } \end{center} \end{table} \subsection{Explainable recommender systems} Based on the aforementioned analysis of the different types of recommender systems for energy efficiency, and the recent trend for explainable Artificial Intelligence solutions, it seems that recommendation systems in the field of energy efficiency still lack explainability and persuasion features.
Due to the emergent need for explainability in the recommendations provided to users in a variety of scenarios, recent surveys like \cite{zhang2018explainable} review the different approaches for setting the various research questions regarding explainable recommendation. For example, \cite{gao2019explainable} proposed a deep explicit attentive multi-view learning model that models multi-level features for explanation, while the work in \cite{balog2019transparent} examined an approach for creating a set-based recommendation model for transparent and textual explanations of movie recommendations. Towards a knowledge-based method for creating explainable item recommendations, the authors in \cite{catherine2017explainable} illustrate a method for leveraging external knowledge in the form of knowledge graphs, when information from content and product/item reviews is not available, to generate explanations. Interpretable models are based on transparent processes for deciding the recommendation lists, so it is easier to generate proper explicit feature-level explanations to justify why the model recommended specific items \cite{zhang2014explicit}. In the context of graph-based models, the authors in \cite{he2015trirank} introduced an algorithm that ranks the vertices of a tripartite graph to provide explanations for the top-ranked aspect-target user-recommended item triplets. However, in the domain of energy efficiency and recommendations for energy-related behaviors there are only a few works worth mentioning, which usually attempt to explain the rules behind issuing a recommendation. They both come from the energy provider and prosumer domain. The authors in \cite{grimaldo2019user} propose a user-centric and visual analytics approach to the development of an interactive and explainable day-to-day forecasting and analysis of energy demand in local prosumer environments.
It is also suggested that this will be supported by a behavioral analysis, in order to study potential relationships between consumption patterns and the interaction of prosumers with energy analysis tools such as customer portals and recommendation systems. A mixture of explainable machine learning approaches, such as kNN and decision trees, is used for dynamic simulation and explorative data processing. \section{Proposed Methodology}\label{sec_methodology} \subsection{Data model and architecture} The basis for the explainable recommendations of this work is the ecosystem depicted in Figure \ref{fig:architecture}: a combination of sensors, smart meters, actuators and orchestrating software that collaborate to promote energy efficiency through smart, on-time recommendations that build on the habit loop theory of behavioral change. Sensors (motion, temperature, humidity, light, etc.) capture user presence and environmental conditions inside and outside of the monitored place, whereas smart meters capture electric power consumption per device, thus creating a stream of measurement data that are stored in the platform database. \begin{figure}[!htb] \centering \includegraphics[width=0.8\textwidth]{updated_architecture_2.png} \caption{The core architecture of the system and the explainable recommendation extensions (with grey color).} \label{fig:architecture} \end{figure} In order to efficiently handle the large amount of data generated by the sensors and smart meters, an additional Knowledge Abstraction Module (KAM) has been developed. It comprises scripts that process the sensor data streams and detect the respective micro-moments in real-time. Micro-moments are moments of special interest to the user, e.g. when the user exits the room, when the outside temperature (and humidity) conditions match user preferences, when a device has been used extensively, etc. The detected micro-moments, along with information about user preferences (e.g.
the occupancy probability of each room at any moment of the day) are stored in the knowledge base of the system. The explainable recommendations framework of our system is supported by an appropriate data model, which organizes real-time data collected by the sensors and aggregated data that summarize recent device usage and room presence data (over a period of a few weeks) at 5-minute granularity. The former data are accumulated in the Database, which stores detailed data for a period of a few months, whereas the latter are periodically updated to depict the recent user habits at any moment and are stored in what we refer to as the Knowledge Base. Older sensor data are moved to a Data Repository for archiving purposes. Real-time room occupancy, appliance consumption, and environment-related data, along with knowledge about user habits, are fed to the Action Triggering Module (ATM). The ATM is developed as scripts in the open-source home automation platform Home Assistant \footnote{https://www.home-assistant.io/}. The platform allows custom scripting in Python and other languages and exposes several APIs for communicating with external applications and systems. The key channel for communicating recommendations to the end user is his/her smartphone and the Telegram application, which displays recommendations and collects user responses (i.e. accept, reject or ignore). At the first step, the recommendation triggering step, the system periodically retrieves information about the user habits from the knowledge base and checks the recent sensor entries in the database in order to detect if it is the right micro-moment for a recommendation. For example, aggregated user presence data in a room are retrieved from the knowledge base and are used to compute the probability of room occupancy at any moment.
This probability, along with recent motion sensor data from the database and the recent user responses to related recommendations, is fed to the decision-making algorithm that decides whether to send the recommendation. Generated recommendations are displayed on the user's smartphone using the Telegram API, which is also integrated with Home Assistant. This setup offers full flexibility in the messages sent to the end-user, which can be personalized to the user's preferences and accompanied by explanations or additional persuasive facts concerning the impact of an action. The explanation of the recommendation follows. The current work focuses on the explainable recommendations, so the details of the Database schema and the architecture of the data collection system, which are given in \cite{varlamis2020bds}, are omitted. The emphasis of this work is on: i) the information stored in the Knowledge Base, explaining how this is exploited in order to provide explainable personalised recommendations; ii) the recommendation engine, explaining how user feedback is collected and processed in order to improve recommendation triggering. The additions to the core system architecture are depicted with grey dashed lines and the affected system modules are shaded grey. \subsection{Recommendations for a purpose that come at the right moment} One of the most important aspects of a real-time recommendation system is the ability to trigger an action recommendation at the correct moment for the user. Following the \textit{\textbf{micro-moments based recommendation}} strategy \cite{sardianos2019smartgreens}, the proposed system first detects micro-moments of special meaning to the daily user routine. In terms of an energy efficiency recommendation system \cite{alsalemi2019ieeesystems, alsalemi2020achieving, alsalemi_endorsing_2019}, this involves the identification of the user's habitual actions, the analysis of the conditions that hold and the prediction of when actions will happen.
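The triggering step described above (an occupancy probability derived from aggregated presence data, combined with recent sensor readings) can be sketched as follows. This is a minimal illustration, not the system's actual code; the function and variable names are our own assumptions.

```python
from datetime import datetime

def occupancy_probability(presence_slots, when):
    """Estimate room-occupancy probability for the 5-minute slot of `when`,
    from aggregated presence counts kept in the knowledge base."""
    slot = (when.hour * 60 + when.minute) // 5  # 5-minute granularity
    counts = presence_slots.get(slot, {"present": 0, "total": 0})
    if counts["total"] == 0:
        return 0.0
    return counts["present"] / counts["total"]

def should_trigger(presence_slots, recent_motion, device_on, when, threshold=0.2):
    """Trigger a turn-off recommendation when the device is on, no recent
    motion was sensed, and the room is historically unlikely to be occupied."""
    return (device_on and not recent_motion
            and occupancy_probability(presence_slots, when) < threshold)
```

For instance, if the user was present in only 1 out of 10 recorded Mondays at 10:30, the estimated occupancy probability for that slot is 0.1 and a turn-off recommendation would be triggered for a device left on.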
For example, the system learns when the user turns the A/C on or off, in terms of time and environmental conditions, such as temperature and humidity (inside and outside). Recommending the right energy saving action at the right moment can be very helpful for users who want to reduce their energy footprint \cite{sardianos2019smartgreens}. However, the chance that a recommendation is accepted increases when the \textbf{\textit{recommendation serves a purpose}}, and the purpose is clearly justified to the user. In the case of energy efficiency, the main purpose is to avoid the unnecessary usage of electrical and electronic devices (e.g. when contextual conditions allow it). An additional purpose that further reduces energy usage can be to limit the usage of high energy-demanding devices. In addition to the purpose of the recommendation, several facts that inform the user of the benefits of an action can be beneficial for improving the recommendation acceptance. A \textbf{\textit{persuasive fact}} strengthens the recommendation and helps the user build a more energy efficient profile. In this direction the proposed recommendation system is able to recognize the following aspects: \begin{enumerate} \item the user \textbf{presence} in a room of interest, \item the general \textbf{context} --- which refers to the indoor and outdoor conditions, i.e. temperature, humidity, luminosity, and \item user consumption \textbf{habits} in relation to various electrical and electronic appliances. \end{enumerate} The system assists users to improve their energy footprint by recommending to turn off appliances when these are used without reason. This increases the probability of a recommendation being accepted, since it agrees with the user's rationale and intention to avoid unnecessary usage. In order to clarify the intuition behind each recommendation, the system contains an explanation mechanism that creates a justification for each recommended action, based on one of the aforementioned aspects.
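A justification mechanism of this kind, which selects a reason from the presence, context and habits aspects, can be sketched as follows. The thresholds and message wording are illustrative assumptions, not the system's exact rules.

```python
def build_reason(presence, indoor, outdoor, device):
    """Pick the justification for a turn-off recommendation, based on the
    presence / context / habits aspects (thresholds are illustrative)."""
    if not presence:
        return f"You are out of the room while the {device} is still on."
    if device == "lights" and outdoor["luminosity"] >= indoor["luminosity"]:
        return "Natural light levels are already high; the lights are unnecessary."
    if device == "A/C" and abs(outdoor["temperature"] - indoor["temperature"]) < 2:
        return ("The outside temperature is close to the inside one; "
                "you could open a window instead of using the A/C.")
    return f"The {device} has been in use for a long period."
```

The first matching aspect wins, so absence from the room dominates context-based reasons, mirroring the priority described above.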
In addition, the system provides users with persuasive facts that are related to the savings the user can achieve from a specific action or from adopting the new profile for a longer period. The facts target the user's incentives to change, and promote either an ecological profile or an economic profile, depending on the user's preferences. The mechanisms that explain the recommendations and generate related facts are further analysed in the following. \subsection{Explainable recommendations and human in the loop} As mentioned before, in order to increase the user's trust in the system and to maximize the recommendations' acceptance, the system accompanies every recommendation for an energy saving action with: \begin{enumerate} \item a justification of \textbf{why} this action is recommended; and \item a fact that explains \textbf{what} would be the benefit for the user, if the recommendation is accepted. \end{enumerate} For supporting the above claims, work is done on two different aspects that define the two most essential characteristics that turn a recommendation for an energy saving action into an explainable recommendation, as depicted in Figure \ref{fig:explRecommendations-methodology}: \begin{enumerate} \item \textbf{Reasoning:} This aspect considers the overall recommendation context and aims at providing detailed information on why the recommendation has been triggered. It can be information about the \textit{user status} (e.g. the user is out of the room), about the \textit{appliance usage} (e.g. it has been on for a long period) or the \textit{external conditions} that allow turning off the appliance (e.g. the outside temperature allows opening a window and turning off the air-conditioner). \item \textbf{Persuasion:} This aspect builds on user preferences, incentives and long-term beliefs, and employs user feedback for choosing the right facts for each recommended action in order to make it more appealing for the user.
\end{enumerate} \begin{figure*}[!ht] \centering \includegraphics[scale=0.4]{Reasoning_and_Persuasion.png} \caption{The flow of explainable recommendations.} \label{fig:explRecommendations-methodology} \end{figure*} The \textbf{\textit{Reasoning}} aspect of the explainable recommendation focuses on providing the reason(s) that triggered the recommendation. In an energy saving recommendation system, in which the main goal is to avoid the unnecessary usage of electrical and electronic devices, the reasons are tightly coupled with the excessive usage of devices. As a result, the reason behind a turn-off action on a cooling or heating device can be that the external environment conditions (e.g. temperature and humidity, or simply the ``apparent temperature'') are similar to the inside ones and the device is still in use. Similarly, the reason behind a recommendation to turn off the room lights is that the natural light levels in the room are already high. Another reason is the unnecessary usage of some devices (e.g. cooling or heating, lights, monitors) when the user is out of the room. Although many of the above energy saving actions can be easily implemented using sensors and automations \cite{alsalemi2020achieving}, the use of recommendations brings the human in the loop and allows him/her to decide how to achieve the energy saving goals. When the user manages to reduce the unnecessary usage of devices to the minimum, the next goal is to further reduce consumption by limiting the usage time of specific devices. The same recommendation explanation strategy is followed, but this time the reasons are related to time limits. For example, the recommendation to turn off the air conditioning comes a few minutes earlier than before. For this type of recommendations it is important to understand user habits (e.g. when or at what temperature the user turns the A/C on or off) \cite{sardianos2019smartgreens} in order to predict the next user movement (e.g.
when the user is about to leave the room) \cite{sardianos2020model}. At this point comes the second part of the explainable recommendation, which builds on the motivation behind user energy saving actions. These motivations affect the user's decision to turn off a device, and can be either ecologically or economically driven. The \textbf{Eco} (a.k.a. Ecological) type of recommendations is targeted towards users that mostly focus on the environmental side of their energy consumption. These are users that are mainly motivated by contributing to the environmental movement by changing their own consumption habits; these recommendations therefore focus on actions for reshaping the user's energy footprint with respect to the ecological impact of the user's consumption habits. An example of a recommendation message that falls in this category is: "\textit{The total estimated number of kilowatt-hours from using the air-conditioning today is X, if you accept this recommendation you can contribute to a cleaner environment by reducing your energy consumption by Y\%!}". Such explanations aim to increase the persuasiveness of the recommendation and to act as a trigger for users that are interested in having a good ecological behavior, but need a motivating prompt. The \textbf{Econ} (a.k.a. Economical) type of recommendations has been employed to target the users that prioritize their financial savings over ecology. It is an alternative to the Eco type of recommendation messages for users that are mostly concerned about the amount of money they tend to spend every month on energy consumption (either electricity or gas) to cover their personal consumption needs. \subsection{Personalising the explainable recommendations} As depicted in Figure \ref{fig:explRecommendations-methodology}, the life-cycle of an explainable recommendation does not end with its delivery to the user.
The user response to each recommendation is valuable for the system and is taken into account when issuing the next recommendation. Even if the user decides to ignore the recommendation and not interact with the system, the response is recorded and the next recommendation is adjusted accordingly. More specifically, in terms of the persuasion mode, the system begins by giving equal importance to the Eco and Econ profiles, and the associated persuasion fact is selected with equal probability from the pool of Eco and Econ persuasion facts related to the targeted device. The positive (or negative) response of the user to the recommended action is counted in favor of (or against) the type of the persuasion fact. For example, if the user gets a recommendation with an ecology-related fact and decides to reject it, his/her Eco profile is penalised. Similarly, the Eco profile gets a bonus when the user accepts the recommendation. If the user ignores the recommendation, assuming that he/she was not aware of the recommendation and the explanation, we neither reward nor penalise his/her profile. On a different line, the user response to a recommendation that interacts with a device is used as feedback for the mechanism that triggers the recommendation. For example, when a user decides to accept a recommendation to turn off the air cooling device, which was triggered because the external environment temperature drops in the evening, the successful recommendation is recorded in order to be repeated at the first opportunity. When a large number of acceptances is recorded for a recommendation, the recommendation can be marked for automatic acceptance (i.e. an automation) in the future. In the event of a rejected recommendation, the information is recorded and the recommendation engine can set a temporary pause on the recommendations for the specific device. When the user keeps rejecting a recommendation, it will be permanently paused, either automatically or after user approval.
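The profile bookkeeping described above can be sketched as follows. The weight values, step size and function names are illustrative assumptions; only the accept/reject/ignore logic follows the text.

```python
import random

def pick_fact_type(profile):
    """Choose an Eco or Econ persuasion fact with probability proportional
    to the current profile weights (both start equal)."""
    total = profile["eco"] + profile["econ"]
    return "eco" if random.random() < profile["eco"] / total else "econ"

def update_profile(profile, fact_type, response, step=1):
    """Reward the fact type on 'accept', penalise it on 'reject';
    an ignored recommendation leaves the profile untouched."""
    if response == "accept":
        profile[fact_type] += step
    elif response == "reject":
        profile[fact_type] = max(1, profile[fact_type] - step)  # keep weight positive
    return profile
```

Starting from equal weights, a few accepted Eco facts shift the sampling towards ecological messages, while rejections shift it back, so the persuasion mode converges to the user's revealed preference.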
When the user ignores a recommendation, the system temporarily pauses this recommendation for a few minutes. Based on the above analysis, all the user and environment data are recorded and processed in order to issue a recommendation. The user's response (or lack of response) is also recorded and processed in order to update the user preferences profile. The data model of the proposed approach and the architecture employed for issuing explainable recommendations were detailed in Section \ref{sec_methodology}. \section{Experimental Evaluation} \label{sec_evaluation} \subsection{Experimental setup} The most important criterion in the evaluation of a recommendation system is the \textbf{\textit{acceptance rate}} of the recommendations it creates. When the system supports a predefined objective, such as improving energy efficiency, then we can also evaluate the recommendation system based on whether it helps users to reach their objective. The effectiveness of the explainable recommendations is an important measure for identifying whether the recommendation engine achieves its goal or not. In order to evaluate the acceptance of the personalised recommendations and the effect of persuasion and explanations on this acceptance, we performed a study on a group of users. The group comprises office users, who all evaluated the same scenario. The reason behind this is to avoid individual user bias and better understand the effect of explainable recommendations on users' decisions. For this purpose, all users were exposed to the same simulated scenario, which was based on the actual days in the office of another user. The data used for the evaluation of our scenario have been collected over several consecutive days from the installation in that user's office in the facilities of the university.
Based on the data collected from this office and the surrounding (outdoor) conditions, the same energy saving action recommendations were created and presented to all the participants of the study. To be more precise, the sensor data used in the simulated scenario were actually collected from the real sensor setup in the office facilities during consecutive days of office use, and are used as a starting baseline in order to identify the environmental conditions and user context and start presenting recommendations to the users. Once each user starts to receive recommended energy actions and responds to these recommendations, the system adjusts to each user's preferences. This means that all the users who participated in the simulation used the same set of sensor data, as if they were actually in this office during the period of data collection. Although all participants run the same scenario (e.g. the outdoor temperature and humidity conditions change similarly for all users, user presence in the room is the same for all, etc.), the decisions of the users affect the conditions inside the room. More specifically, during the evaluation process, recommendations are triggered and displayed to all users at the same time, and users have the option to accept, ignore or reject them. Based on the decision of each user (e.g. to turn off the lights or the A/C as recommended), the indoor conditions change accordingly. Given that external conditions change in a similar way for all users, a varying number of recommendations can be issued to each user during the scenario execution. Although the current experimental setup focused, as a proof of concept, on monitoring and controlling the office lights, the A/C unit and the PC monitors, the scalability of our proposed framework to larger cases is one of the principles of our architecture.
Since each appliance is managed autonomously or in combination with other devices, based on the user's goals and automations, the system can seamlessly scale to larger scenarios with more devices and actions. In addition, the requirements for running the framework are not resource demanding and do not depend on the number of monitored appliances, so it is easy to reproduce this setup in larger spaces. Since the acceptance of recommendations can be boosted by providing additional \textbf{\textit{persuasive facts}} to the user, we include the following types of facts: \begin{itemize} \item The Eco type of persuasive facts, which build on the ecological impact of the energy consumption. \item The Econ type of persuasive facts, which promote the economic impact of the energy consumption to the user. \end{itemize} Recommending the correct action at the correct moment is a critical aspect that affects the recommendations' performance. It is also important to provide the user with information about the reasons that triggered each recommendation. For the purpose of the evaluation process, we focus only on recommendations about turning off the A/C unit and the office lights. The \textbf{\textit{reasons}} for triggering a recommendation can mainly be divided into two types: \begin{enumerate} \item Recommendations are triggered because the user has left the room and left an appliance (A/C or lights) on, thus consuming energy without reason, and \item Recommendations are triggered while the user is still in the room and has a device in the on state, but the outdoor conditions (light or temperature) allow avoiding excessive or unreasonable usage, e.g. by opening a window to cool the room or allow natural light in. \end{enumerate} The experimental setup evaluates three versions of the ``week in the smart office'' scenario, which is explained in more detail in the following subsection.
The key point that differentiates the three versions lies in the content of the messages delivered to the end-users, as well as the persuasive facts and the explanations each recommendation includes in its body, and can be summarized as follows: \begin{enumerate} \item The simple version recommendations include no particular informative content apart from the date and time that the recommendation was created, and the prompt for the recommended action (Figure \ref{fig:recommendations-per-scenario}(a)). \item In the second variation of the experiment, each recommendation includes a customized persuasive fact that describes the expected impact of the user's consumption habit (Figure \ref{fig:recommendations-per-scenario}(b)). \item Finally, in the full version, the explainable and persuasive recommendations also include information about the indoor and outdoor conditions and the user presence, as well as a message that informs the user of the reason that triggered the recommendation (Figure \ref{fig:recommendations-per-scenario}(c)).
\end{enumerate} \begin{figure}[!ht] \begin{subfigure}[t]{.33\textwidth} \centering \includegraphics[trim=0 45cm 0 0,clip,width=1.0\columnwidth]{recommendation-ac.jpg} \caption{} \label{fig:plain-recommendation} \end{subfigure} \begin{subfigure}[t]{.33\textwidth} \centering \includegraphics[trim=0 41cm 0 0,clip,width=1.0\columnwidth]{recommendation-ac-2-econ.png} \caption{} \label{fig:persuasive-recommendation} \end{subfigure} \begin{subfigure}[t]{.33\textwidth} \centering \includegraphics[trim=0 45cm 0 0,clip,width=1.0\textwidth]{recommendation-ac-2-econ-explainable.jpg} \caption{} \label{fig:explainable-recommendation} \end{subfigure} \caption{Examples of: (a) plain recommendations in \textit{\textbf{Scenario I}}, (b) recommendations with persuasive facts for \textit{\textbf{Scenario II}}, and (c) recommendations accompanied with the reasoning part to form the explainable recommendations for \textit{\textbf{Scenario III}}.} \label{fig:recommendations-per-scenario} \end{figure} Since the decisions of users on the same recommendation can differ, the sequence of recommendations also differs per user, thus giving useful insights about the type of recommendations that each user tends to accept or reject. The same variability occurs in the type of persuasive fact that accompanies each recommendation. More details about the base scenario that was common to all users, the experimental setup and the evaluation process are provided in the subsections that follow. In total, 8 users participated in this evaluation process, which revealed many interesting findings concerning user preferences and the impact of explanations and persuasive facts on recommendation acceptance.
\subsection{The scenario: A typical week in the office} Our system has already been deployed in several offices of our University's facilities, as described in Section \ref{sec_methodology} and in \cite{alsalemi2019ieeesystems, sardianos2019smartgreens}, using a set of various sensors that record data about device usage, consumption (e.g. energy consumption per device), user presence and the general context (e.g. interior and exterior temperature, humidity, and luminosity). Energy consumption data, along with occupancy information and contextual factors derived from the sensor data, are used to extract the users' consumption habits. More specifically, the analysis of sensor data allows the system to identify user consumption habits, which are represented as device usage or room presence patterns (e.g. turn on the A/C at 17:00) and the contextual conditions under which each usage is performed (e.g. when the outdoor temperature is above 33$^{\circ}$C). On the basis of the learned user patterns it is possible to trigger proper energy saving action recommendations when certain conditions are met (e.g. when the outdoor temperature is lower than the indoor one, so turning off the A/C and opening a window may be a more suitable alternative). For the experimental evaluation and the creation of the evaluation simulation scenario, data from a single-user office were collected for one week. The focus was on user presence and the environmental conditions (the luminosity levels and the temperature) inside and outside the room. \noindent\textbf{Environmental conditions:} In order to capture the contextual information of the scenario (e.g. the environmental conditions of the office), we employed two DHT-22 temperature and humidity sensors, with operating ranges of $-40$ to $80^{\circ}$C for ambient temperature and 0--100\% for relative humidity, and the sensor measurements recorded during a three-month period were used for the evaluation scenario.
\noindent\textbf{Room occupancy:} A key aspect of identifying excessive energy consumption is the ability to recognize when an energy-consuming device is still turned on while there is no user in the room. In the office setup used for this evaluation, two HC-SR501 motion sensor units were used in order to record room occupancy, which is also stored in the system's data store. The initial analysis of the room occupancy data produced a set of timeslots in which the user was identified to be absent from the office, resulting in the weekly compiled list of room occupancy slots depicted in Table \ref{table:occupancy-time-slots}. \begin{table} \caption{The detected office occupancy hours, using our system's setup} \label{table:occupancy-time-slots} \centering \arrayrulecolor{black} \resizebox{0.7\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline \diagbox{Time of Day}{Day of Week} & \multicolumn{1}{c|}{{\cellcolor[rgb]{0.9,0.9,0.9}}Monday} & \multicolumn{1}{c|}{{\cellcolor[rgb]{0.9,0.9,0.9}}Tuesday} & \multicolumn{1}{c|}{{\cellcolor[rgb]{0.9,0.9,0.9}}Wednesday} & \multicolumn{1}{c|}{{\cellcolor[rgb]{0.9,0.9,0.9}}Thursday} & {\cellcolor[rgb]{0.9,0.9,0.9}}Friday \\ \hline \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}8AM - 9AM & & {\cellcolor[rgb]{1,1,1}} & & & \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}---->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}9AM - 10AM & & & & & {\cellcolor[rgb]{1,1,1}} \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}--->{\arrayrulecolor[rgb]{1,1,1}}-->{\arrayrulecolor{black}}|} \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}10AM - 11AM & & & & {\cellcolor[rgb]{1,1,1}} & {\cellcolor[rgb]{1,1,1}} \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}-->{\arrayrulecolor[rgb]{1,1,1}}--->{\arrayrulecolor{black}}|} \rowcolor[rgb]{1,1,1} {\cellcolor[rgb]{0.9,0.9,0.9}}11AM - 12PM & {\cellcolor[rgb]{0,0,0}} & {\cellcolor[rgb]{0,0,0}} & & & \\
\hhline{|->{\arrayrulecolor[rgb]{1,1,1}}----->{\arrayrulecolor{black}}|} \rowcolor[rgb]{1,1,1} {\cellcolor[rgb]{0.9,0.9,0.9}}12PM - 1PM & & & & & \\ \hhline{|->{\arrayrulecolor[rgb]{1,1,1}}--->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{1,1,1} {\cellcolor[rgb]{0.9,0.9,0.9}}1PM - 2PM & & & & {\cellcolor[rgb]{0,0,0}} & \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}-->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}2PM - 3PM & & {\cellcolor[rgb]{1,1,1}} & & & {\cellcolor[rgb]{1,1,1}} \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}--->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}3PM - 4PM & & & & {\cellcolor[rgb]{1,1,1}} & \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}-->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}-->{\arrayrulecolor{black}}|} \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}4PM - 5PM & & & {\cellcolor[rgb]{1,1,1}} & & \\ \hhline{|->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}-->{\arrayrulecolor{black}}|} \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}5PM - 6PM & {\cellcolor[rgb]{1,1,1}} & & {\cellcolor[rgb]{1,1,1}} & & \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}--->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{1,1,1} {\cellcolor[rgb]{0.9,0.9,0.9}}6PM - 7PM & {\cellcolor[rgb]{0,0,0}} & & & & {\cellcolor[rgb]{0,0,0}} \\ \hhline{|->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}-->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{0,0,0} {\cellcolor[rgb]{0.9,0.9,0.9}}7PM - 8PM & & {\cellcolor[rgb]{1,1,1}} & & & {\cellcolor[rgb]{1,1,1}} \\ 
\hhline{|->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{1,1,1} {\cellcolor[rgb]{0.9,0.9,0.9}}8PM - 9PM & & {\cellcolor[rgb]{0,0,0}} & & {\cellcolor[rgb]{0,0,0}} & \\ \hhline{|->{\arrayrulecolor[rgb]{1,1,1}}--->{\arrayrulecolor[rgb]{0,0,0}}->{\arrayrulecolor[rgb]{1,1,1}}->{\arrayrulecolor{black}}|} \rowcolor[rgb]{1,1,1} {\cellcolor[rgb]{0.9,0.9,0.9}}9PM - 10PM & & & & {\cellcolor[rgb]{0,0,0}} & \\ \hline \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{{\cellcolor[rgb]{0,0,0}}} & \multicolumn{4}{l}{User presence was identified in the office}\\ \multicolumn{1}{l}{} & \multicolumn{1}{|c|}{{\cellcolor[rgb]{1,1,1}}} & \multicolumn{4}{l}{User was absent from the office} \\ \cline{2-2} \end{tabular} } \end{table} \noindent\textbf{Consumption habits:} Based on the consumption data analysis, the system identifies the energy consumption preferences of the user, in terms of when the user tends to turn certain devices on and off; combining this information with the contextual data recorded by the indoor and outdoor sensors, the system ``knows'' when and under what conditions the user's energy consumption habits tend to occur (e.g. the luminosity level of the room that triggers the user to switch on the lights). \subsection{Recommendations delivery and user feedback} \label{ss_delivery} When the proper conditions occur, the system generates a recommendation that is presented on the user's smartphone as a pop-up notification, like the one presented in Figure \ref{fig:recommendations-presentation}. In the plain recommendation scenario, when the user opens the recommendation in the Telegram app, the recommendation message provides only a timestamp (i.e.
the date and time when the recommendation occurred) and the recommended action (e.g. turn off the A/C), in order to allow the evaluators to put themselves into the context under which this recommendation was generated. After the current datetime, the phrasing of the recommended action follows, accompanied by two ``Accept'' and ``Reject'' buttons that allow the user to respond to this recommendation (Figure \ref{fig:recommendations-presentation}). \begin{figure}[!ht] \begin{subfigure}[t]{.33\textwidth} \centering \includegraphics[trim=0 45cm 0 0,clip,width=1.0\textwidth]{popup-recommendation.png} \caption{} \end{subfigure} \begin{subfigure}[t]{.33\textwidth} \centering \includegraphics[trim=0 60cm 0 0,clip,width=1.0\columnwidth]{recommendation-ac.jpg} \caption{} \end{subfigure} \begin{subfigure}[t]{.33\textwidth} \centering \includegraphics[trim=0 60cm 0 0,clip,width=1.0\textwidth]{recommendation-ac-accepted_2.jpg} \caption{} \end{subfigure} \caption{From left to right: (a) an example of a pop-up recommendation presented to the user's smartphone, (b) the plain recommendation without any persuasive fact or explanation and the user options, and (c) the result after a user accept response.} \label{fig:recommendations-presentation} \end{figure} Upon acceptance of a recommended action, the system automatically sends a turn-off signal to the respective device and an acknowledgement to the Telegram message that informs the user that this action has been fulfilled (Figure \ref{fig:recommendations-presentation}(c)). If the user decides to reject the recommendation, the system takes into consideration the user's negative response to this recommended action and adjusts the way this recommendation will be presented again in the future. Every time a recommendation is delivered to the user, he/she has a pending time (20 seconds) for accepting or rejecting the recommendation; otherwise the recommendation is considered ignored.
Depending on each user's answers, the sequence and timing of recommendations differ from user to user. The experiment runs in simulated time and the process is sped up when no recommendations are pending for any user. In particular, when a turn-off action gets rejected (i.e. the user prefers to leave the appliance turned on), the recommendation engine does not reissue this recommendation for a period of one hour, and then only if the conditions that triggered it (e.g. the user is still absent from the office, etc.) are still valid. If a recommendation has been ignored, i.e. the user failed to respond to it during the allowed time window, the recommended action is held for a period of 10 minutes and then sent again to the user, once again given that the trigger conditions are still met. The system continues to issue the same recommendation for a maximum time window of one hour, after which the recommendation ceases and is not displayed again to the end-user. With the above configuration, the same ``a week in the office'' scenario may result in a different sequence of recommendation messages for the users. This leads to a total time of 60-70 minutes needed for running one scenario for one week (5 working days) for all the users. The same experiment is repeated two more times using persuasive and explainable recommendation messages, as explained in the following. \subsection{Explainable recommendation messages} For Scenarios II and III, which both employ persuasive facts to accompany the recommended action and increase the likelihood that a recommendation is accepted, the proposed system personalizes the messages addressed to the user. Also, in Scenario III, the explainable recommendations produced by the recommendation engine fit the recommendation context and provide a verbal description of the conditions that triggered the recommendation. 
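The re-issue timing policy described above (a 20-second response window, a 10-minute hold for ignored recommendations, a one-hour hold after a rejection, and a one-hour maximum window) can be sketched as follows. This is a minimal illustration in our own notation: the function names, the string-based states and the simplified checks are assumptions, not the system's actual implementation.

```python
from datetime import datetime, timedelta

# Timing constants taken from the policy described in the text.
PENDING_WINDOW = timedelta(seconds=20)    # time the user has to respond
REISSUE_IGNORED = timedelta(minutes=10)   # hold time after an ignored message
REISSUE_REJECTED = timedelta(hours=1)     # hold time after a rejection
MAX_WINDOW = timedelta(hours=1)           # afterwards the recommendation ceases

def classify(issued_at, responded_at, choice=None):
    """A missing answer, or one arriving after the pending window, counts as ignored."""
    if choice is None or responded_at - issued_at > PENDING_WINDOW:
        return "ignore"
    return choice

def next_action(response, first_issued, now, conditions_still_met):
    """Decide what happens to one recommendation after a user response."""
    if response == "accept":
        return "fulfil"   # send the turn-off signal and acknowledge in Telegram
    if now - first_issued >= MAX_WINDOW or not conditions_still_met:
        return "drop"     # the recommendation ceases and is not shown again
    delay = REISSUE_REJECTED if response == "reject" else REISSUE_IGNORED
    return f"reissue-at:{now + delay}"
```

At re-issue time the trigger conditions and the maximum window would be checked again; the sketch checks them only at response time for brevity.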
\begin{figure}[!hbt] \centering \begin{subfigure}[b]{.30\textwidth} \centering \includegraphics[trim=0 40cm 0 0,clip,width=1.0\columnwidth]{recommendation-ac-2-eco-explainable} \label{fig:eco-recommendation} \end{subfigure} \begin{subfigure}[b]{.30\textwidth} \centering \includegraphics[trim=0 40cm 0 0,clip,width=1.0\columnwidth]{recommendation-ac-2-econ-explainable.jpg} \label{fig:econ-recommendation} \end{subfigure} \caption{Examples of personalized explainable recommendations presented to the user, accompanied by the reasoning and the persuasive facts.} \label{fig:types-of-recommendation} \end{figure} Two examples of explainable recommendations are depicted in Figure \ref{fig:types-of-recommendation}. Recommendations are composed of two different sections, which serve the explainable and the persuasive recommendation scenarios, respectively. The messages contain visual cues that help users understand the reason for each recommendation. The first message provides the date and time information as in the plain recommendation scenario, as well as information about: i) the internal and external temperature, ii) the internal and external light levels, iii) the user presence in the room. The second message contains a verbal explanation of the reason that triggered the recommendation, which summarizes the values presented in the first message. The third message contains the persuasive fact, which is of the Eco or Econ type and provides an estimate of the actual saving (e.g. by turning off the A/C earlier than usual) or a projection of this saving on a monthly or yearly level. This enables all the different evaluators that participate in this evaluation process to put themselves into the context that triggered this recommendation, thus making the process as realistic as possible. The persuasive part of the recommendation (third message) appears in both Scenarios II and III, whereas the explanation part (first and second message) appears only in Scenario III. 
In the context of why a recommendation has been triggered, the \textbf{\textit{explanations}} produced for the users are categorized into: \begin{itemize} \item The user is \textbf{\textit{out of the office}} while the devices are still on, and \item The user is still in the office, with the devices switched on, but the indoor/outdoor conditions allow the \textbf{use of alternate methods} for achieving similar results (e.g. use natural lighting instead of office lights to light up the office, open the window to cool down the room instead of turning on the A/C, etc.). \end{itemize} In the examples of Figure \ref{fig:types-of-recommendation}, the recommendation on the left was created because the indoor and outdoor context (e.g. the difference between the indoor and outdoor temperature) has been identified as suitable for turning off the A/C and opening the window to achieve the same results. In contrast, the one on the right notifies the user that the system has detected the user's absence from the room while the A/C was on. This part of the recommendation plays an important role in engaging users to accept the recommended actions, since the explainability aspect of the personalized recommendations makes it easier for the users to understand the ``flaws'' in their consumption habits and thus increases their trust in the system that assists them in changing their energy profile. The \textbf{\textit{persuasion facts}} act as supplements to the main explanation of the recommendation and try to persuade the users to accept the recommendations by pointing out the benefits for the user of accepting the recommended actions. The personalization of these facts is based on detecting whether the user values the ecological factor higher than the economical one or vice versa, based on the acceptance or rejection of the respective recommendations issued at any stage of the experiment. 
Based on this, it adjusts the type of fact that accompanies the recommendations that follow, to match the user's preference. So, when a new recommendation is generated, the system calculates the consumption of the targeted device from the time this device was turned on until that moment and makes a projection of the total CO$_2$ emissions and the total cost in \EUR (or other currencies) from the usage of the device for this specific period. The fact is then enriched with the total energy consumption and the total energy savings (in the case of Eco recommendations) or money savings (in the case of Econ recommendations), and is presented along with the recommended action to the user. The resulting recommendation is more informative than the previous one (which contains only time information) and is expected to affect the users' decisions more. In the case of the Eco type of persuasion, in particular, the system calculates the total duration the device was in use, from the last time the appliance was turned on until the time the recommendation was created. Based on the given type of each appliance (e.g. lights, A/C unit, etc.) and the list of average CO$_2$ emissions per kWh, as given by the European Environment Agency\footnote{https://www.eea.europa.eu/data-and-maps/indicators/overview-of-the-electricity-production-2/assessment-4 (Last accessed: 01/2020)}, it calculates an estimate of the CO$_2$ emissions for this period of usage. Likewise, in the case of the Econ type of persuasion facts, the system calculates the total cost of the energy consumed by the corresponding appliance over the latest usage period, based on the usage time and the average cost of electricity per region, as reported by Eurostat Energy EU\footnote{https://strom-report.de/electricity-prices-europe/ (Last accessed: 01/2020)}. In both cases, we add two additional projections of the consumption on a monthly and annual basis. 
These projection levels are calculated at run-time during the phase of constructing the recommendation and are mainly based on the a priori knowledge of the type of appliance that is being monitored (e.g. LED lights), the official reports of the average consumption of this type of device, and the actual reported electricity costs of the particular area. Combining this knowledge, the total cost of the consumed energy or emissions is calculated given the period of time this device has been turned on. For example, let us consider Alice, a typical office user in Greece, who turned on the A/C as soon as she entered the office on Friday at 09:00 AM. Suppose that the reported consumption of the A/C unit installed in Alice's office is 3.2 kW in cooling mode, and that the electricity price in Greece is 16.5 cents per kWh. If the system decides at 12:00 PM to create a recommendation for turning the A/C off, then along with the verbal recommendation and the chosen persuasive fact it also calculates that, for the three hours that the A/C has been on, the total actual cost so far for this device would be 1.58\EUR. Now, based on the knowledge given by the KAM about Alice's consumption habits, if turning on the A/C on Friday mornings is a usual habit, the system uses this information and projects the total cost of this habit on a monthly and an annual level, given that Alice would keep up this habit. When the system chooses to include the monthly or annual projection level in the recommendation message, it helps Alice to actually realize the benefit of altering this consumption habit by projecting the total cost of continuing this particular usage of the A/C on a monthly or annual basis. 
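Alice's example can be reproduced with a few lines of arithmetic. The sketch below is illustrative only: the function name, the assumed CO$_2$ emission factor and the fixed working-days projection are our assumptions, not values or code from the system.

```python
# Illustrative persuasion-fact arithmetic; the constants are assumptions.
KWH_PRICE_EUR = 0.165      # Greek electricity price used in the Alice example
CO2_KG_PER_KWH = 0.35      # assumed average grid emission factor

def persuasion_fact(power_kw, hours_on, kind):
    """Actual, monthly and annual figures for one device usage session."""
    energy_kwh = power_kw * hours_on
    value = energy_kwh * (KWH_PRICE_EUR if kind == "econ" else CO2_KG_PER_KWH)
    # Project the habit over ~22 working days per month, 12 months per year.
    return {"actual": round(value, 2),
            "monthly": round(value * 22, 2),
            "annual": round(value * 22 * 12, 2)}

# Alice's A/C: 3.2 kW in cooling mode, on for 3 hours at 16.5 cents/kWh,
# which reproduces the 1.58 EUR "actual" figure from the example.
fact = persuasion_fact(3.2, 3, "econ")
```

The recommender can then pick whichever of the three figures it expects to be most persuasive for this user.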
The use of three different values (the actual value, the monthly and the yearly projection) gives the recommender system the flexibility to decide which amount will be presented to the user as a persuasion fact, in order to maximize the probability of acceptance. Providing the monthly or annual amount may in some cases be more informative, as it makes it easier for the user to comprehend the actual impact of their energy habits either on the environment or on their total income. \section{Results and Discussion} \label{ss_results} The experimental process mainly focused on identifying whether the persuasion facts and the explainable recommendations, with their more informative content, can have a positive impact on the recommendation acceptance by the users. In addition, it is worth identifying whether using any level of projection of the total energy savings in the persuasive facts affects the users' decisions on accepting or rejecting a recommendation and thus their energy footprint. Since a recommendation can be either accepted, rejected or ignored, the definition of recommendation acceptance can either take the ignored recommendations into account or not. When measuring the ratio of accepted to rejected recommendations, we have to clarify that the policy in the case of ignored recommendations is an important factor. In the evaluation scenarios, a follow-up recommendation comes after a small interval whenever the user ignores a recommendation. This interval is longer whenever the user rejects the recommendation. This means that the total number of recommendations sent to each user is also subject to his/her responses. Overall, almost 16.5\% of the issued recommendations have been ignored, and this number could be different if a different re-issue policy had been followed. 
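The distinction between the two possible definitions of the acceptance ratio can be made explicit with a small sketch; the counts below are hypothetical, chosen only so that the ignored share is 16.5\% as in the experiment.

```python
def acceptance_ratio(accepted, rejected, ignored, count_ignored=False):
    """Share of accepted recommendations, in percent."""
    total = accepted + rejected + (ignored if count_ignored else 0)
    return 100.0 * accepted / total

# Hypothetical counts: 102 accepted, 65 rejected, 33 ignored out of 200 issued,
# so 33/200 = 16.5% of the issued recommendations were ignored.
excl = acceptance_ratio(102, 65, 33)                      # ignored left out
incl = acceptance_ratio(102, 65, 33, count_ignored=True)  # ignored counted as non-accepted
```

Excluding the ignored recommendations (the convention used in the figures below) always yields a ratio at least as high as including them.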
In order to avoid the bias of ignored recommendations in the computation of recommendation acceptance, in Figure \ref{total_acceptance_ratio_with_stdev} we display the average acceptance ratio computed only on the accepted and rejected recommendations for the three scenario versions. Compared to the plain recommendations scenario (I), in Scenario II the persuasive facts increase the average acceptance ratio from 51\% to 55\%. However, the differences between the users are significant and thus the standard deviation is large, which means that the two performances are comparable. On the other hand, the performance in Scenario III is almost 19\% higher than in the plain scenario, reaching an acceptance ratio of 70\%. The standard deviation around the mean acceptance ratio is smaller, which means that we have a higher degree of agreement between users. \begin{figure}[!htp] \captionsetup{singlelinecheck = false} \centering \begin{tikzpicture}[scale=1] \begin{axis}[ width = 1*\columnwidth, height = 5.2cm, major x tick style = transparent, ybar=1pt, bar width=0.8cm, ymajorgrids = true, ylabel = {Mean acceptance ratio (\%)}, y label style={at={(axis description cs:-0.08,.5)},anchor=south}, symbolic x coords={Scenario I,Scenario II,Scenario III}, xtick = data, scaled y ticks = false, enlarge x limits=0.5, ymin=0, ymax= 110, legend cell align=left, legend style={draw=none, legend columns=-1}, xtick=data, nodes near coords={ \pgfmathprintnumber[precision=0]{\pgfplotspointmeta} } ] \addplot[style={fill=blue,mark=none},postaction={}, error bars/.cd, y dir=both,y explicit] coordinates { (Scenario I, 51) +- (13,13) (Scenario II, 55) +- (21,21) (Scenario III, 70) +- (18,18)}; \legend{Recommendation Acceptance with STDEV} \end{axis} \end{tikzpicture} \caption{The average acceptance ratio (and standard deviation) of the recommendations for the three scenarios.}\label{total_acceptance_ratio_with_stdev} \end{figure} \begin{figure}[!htp] \captionsetup{singlelinecheck = false} \centering 
\begin{tikzpicture}[scale=1] \begin{axis}[ width = 1*\columnwidth, height = 5.5cm, major x tick style = transparent, ybar=2pt, bar width=1.0cm, ymajorgrids = true, ylabel = {Acceptance ratio (\%)}, y label style={at={(axis description cs:-0.08,.5)},anchor=south}, xlabel = {Projection level}, symbolic x coords={Actual,Monthly,Annual}, xtick = data, scaled y ticks = false, enlarge x limits=0.5, ymin=0, ymax= 110, legend cell align=left, legend style={draw=none, legend columns=-1}, xtick=data, nodes near coords={ \pgfmathprintnumber[precision=0]{\pgfplotspointmeta} } ] \addplot[style={fill=green,mark=none}, error bars/.cd, y dir=both,y explicit] coordinates { (Actual, 68) +- (26,26) (Monthly, 74) +- (18,18) (Annual, 75) +- (15,15) }; \legend{Projection level of costs} \end{axis} \end{tikzpicture} \caption{Comparison of the total acceptance ratio for the different levels of projection (``Actual'', ``Monthly'', ``Annual'') in the persuasive facts of the recommendations of Scenarios II and III.}\label{acceptance_ratio_aggregation_levels} \end{figure} When comparing the average acceptance ratio of the recommendations across the three types of value projections of the persuasion facts (i.e. actual, monthly, yearly), we can see that reporting the actual savings has the worst acceptance ratio (68\%), whereas the acceptance for the monthly and yearly projections is comparable but higher (74\% and 75\% respectively), as depicted in Figure \ref{acceptance_ratio_aggregation_levels}. Also, there is a disagreement between users in the case of persuasive facts that report actual savings, which results in a high standard deviation around the mean. As an outcome of these results, using either a monthly or an annual projection of the user's consumption costs in the explainable personalized recommendation makes it easier to convince the users to accept the recommended action created by the system. 
This makes complete sense, since these projections can be easily understood by the user in terms of realizing the total benefit of accepting these recommendations. These evaluation findings imply that the combination of the persuasive facts with the explainable presentation of the recommendations, accompanied by a projection of the total cost of the user's energy habits, can both assist users in easily comprehending the impact of each personalized explainable recommendation and help the system achieve its goal of improving the user's energy profile through well-timed, personalized turn-off recommendations. Given the increase achieved in the acceptance of the recommendations with the use of explainable recommendations in Scenario III, in Figure \ref{fig:acceptance-heatmap} we compare the acceptance ratio of the recommendations for each combination of the projection level used in the recommendation body (i.e. ``Actual'', ``Monthly'', ``Annual'') and the type of recommendation delivered to the users (i.e. ``Eco'' or ``Econ'') during the evaluation of the explainable recommender engine (Scenarios II and III), in order to highlight the combination that was most effective in triggering users to accept the recommended action and, implicitly, in transforming the user's energy profile. \begin{figure}[h!] 
\centering \includegraphics[trim=0 0 0 0,clip,width=0.6\columnwidth]{explainable_acceptance_ratio_heatmap.png} \caption{The acceptance ratio heatmap for Scenarios II and III, per type of recommendation (``Eco'', ``Econ'') and cost projection level (``Actual'', ``Monthly'', ``Annual'').} \label{fig:acceptance-heatmap} \end{figure} As depicted in Figure \ref{fig:acceptance-heatmap}, the use of Eco-type recommendation explanations with an annual projection of the total energy cost benefits managed to persuade the users in 77\% of the generated recommendations, whereas the combination of Eco explanations with monthly cost projections also achieved a 75\% recommendation acceptance. On the other hand, the use of the actual consumption costs at the time the explainable recommendation is triggered, combined with an ecological type of persuasion, was the least preferred combination, with an acceptance ratio of only 65\%. \section{Conclusions}\label{sec_conclusions} In this paper, a context-aware explainable recommendation system for energy efficiency is presented. The proposed intelligent system employs real-time data collection, knowledge extraction, and abstraction on real data to generate explainable recommendations, which are personalized to user preferences and habits. Explanations are categorized into those that emphasize the economic saving prospects (Econ) and those that foster a positive ecological impact (Eco). Current results show a 19\% increase in the recommendation acceptance ratio when both economic and ecological persuasive facts are employed. Future work includes enriching the data input to the recommender, employing data visualization as a tool of persuasion, and integration with our mobile application. 
This work aims to plant the seeds for developing recommender systems that automatically produce intelligent recommendations for energy-saving behavior. \section{Acknowledgements}\label{acknowledgements} This paper was made possible by National Priorities Research Program (NPRP) grant No. 10-0130-170288 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
\section{1. Deriving the mode equations} In this section, we show how to derive the mode equations using the moment expansion technique~\cite{cates2013when, Solon2015comp,Adeleke2020}. We start the derivation from the Fokker-Planck equation (FPE) describing the evolution of $P(\bm{\chi},\bm{r},\bm{\eta},t)$ (eq.~(2) in the main text): \begin{equation} \begin{split} &\partial_t P(\bm{\chi},\bm{r},\bm{\eta},t)=-\nabla_{\bm{\chi}} \cdot \left[ -\bm{v}_w P + \frac{1}{1+q}v_{\rm a}\left(\bm{\chi}'\right)\bm{\eta}P - \frac{D}{1+q}\nabla_{\bm{\chi}}P \right]\\ &\quad\quad\quad -\nabla_{\bm{r}}\cdot \left[ -\frac{1+q}{q\gamma} \nabla_{\bm{r}} UP+v_{\rm a}\left(\bm{\chi}'\right)\bm{\eta}P - \frac{1+q}{q}D \nabla_{\bm{r}} P \right]+\frac{1}{d\tau}\hat{\mathcal{L}}_{\bm{\eta}} P\,, \end{split} \end{equation} where \begin{equation} \bm{\chi}'=\bm{\chi}+q\bm{r}/(1+q), \label{Seq:chi-p} \end{equation} and the operator $\hat{\mathcal{L}}_{\bm{\eta}}$ is defined as \begin{equation} \hat{\mathcal{L}}_{\bm{\eta}}f(\bm{\eta})=\nabla^2_{\bm{\eta}}f(\bm{\eta})+d\nabla_{\bm{\eta}} \cdot \left[\bm{\eta} f(\bm{\eta}) \right]\,. \label{l_eta} \end{equation} We now expand the joint probability density as \begin{equation} P(\bm{\chi},\bm{r},\bm{\eta},t)=\sum_{\bm{n}}\phi_{\bm{n}}(\bm{\chi},\bm{r},t)u_{\bm{n}}(\bm{\eta})\,, \label{decomposition} \end{equation} where $\bm{n} = \{n_1, n_2, \ldots, n_d \}$ is a set of non-negative integers, while $\left\{u_{\bm{n}}(\bm{\eta})\right\}$ is the corresponding set of eigenfunctions of the operator $\hat{\mathcal{L}}_{\bm{\eta}}$, given by \begin{equation} u_{\bm{n}}(\bm{\eta})=\exp\left\{-\frac{d\bm{\eta}^2}{2} \right\}\prod_{i=1}^d H_{n_i} ( \sqrt{d}\,\eta_i)\,, \label{un1} \end{equation} where $H_n(x)$ is the $n$-th Hermite polynomial in the probabilists' convention~\cite{abramowitz1988handbook}. 
They satisfy the following eigenvalue equation \begin{equation} \hat{\mathcal{L}}_{\bm{\eta}}u_{\bm{n}}(\bm{\eta})=\lambda_{\bm{n}} u_{\bm{n}}(\bm{\eta}), \end{equation} where the eigenvalues $\lambda_{\bm{n}}$ are given by \begin{equation} \lambda_{\bm{n}}=-d\sum_{i=1}^d n_i. \end{equation} Moreover, it is convenient to introduce the family of functions $\left\{ \tilde{u}_{\bm{n}}(\bm{\eta})\right\}$ as \begin{equation} \tilde{u}_{\bm{n}}(\bm{\eta})=(2 \pi)^{-d/2} \prod_{i=1}^d\frac{H_{n_i} ( \sqrt{d}\,\eta_i)}{n_i!}, \end{equation} which are orthogonal to the eigenfunctions $\left\{u_{\bm{n}}(\bm{\eta})\right\}$, i.e., \begin{equation} \int d\bm{\eta}\, u_{\bm{n}}(\bm{\eta}) \tilde{u}_{\bm{m}}(\bm{\eta})=d^{-d/2} \delta_{\bm{n},\bm{m}}, \end{equation} where $\delta_{\bm{n},\bm{m}} = \prod_{i=1}^d \delta_{n_i,m_i}$. Multiplying eq.~\eqref{decomposition} by $\tilde{u}_{\bm{0}}(\bm{\eta})$ and integrating over $\bm{\eta}$, we get \begin{equation} \int d\bm{\eta} \, \tilde{u}_{\bm{0}}(\bm{\eta}) P(\bm{\chi},\bm{r},\bm{\eta},t) = \sum_{\bm{n}}\phi_{\bm{n}}(\bm{\chi},\bm{r},t) \int d\bm{\eta}\, \tilde{u}_{\bm{0}}(\bm{\eta}) u_{\bm{n}}(\bm{\eta})=d^{-d/2}\phi_{\bm{0}}(\bm{\chi},\bm{r},t) \,, \label{phi_0} \end{equation} and after using the definition of $\tilde{u}_{\bm{0}}(\bm{\eta})$: \begin{equation} \begin{split} &\varphi(\bm{\chi},\bm{r},t)\equiv \int d\bm{\eta}\,P(\bm{\chi},\bm{r},\bm{\eta},t) = (2 \pi/d)^{d/2} \phi_{\bm{0}}(\bm{\chi},\bm{r},t) \,, \end{split} \label{rho_phi0} \end{equation} Accordingly, the first coefficient $\phi_{\bm{0}}(\bm{\chi},\bm{r},t)$ of the expansion in eq.~\eqref{decomposition} is related to the marginal density $\varphi(\bm{\chi},\bm{r},t)$. For later purposes, we recall that Hermite polynomials satisfy the recurrence relation \cite{abramowitz1988handbook} \begin{equation} H_{n+1}(x)=xH_n(x)-H'_n(x), \label{sm:recH} \end{equation} and they form an Appell sequence, as they satisfy \begin{equation} H'_n(x)=nH_{n-1}(x). 
\label{sm:appH} \end{equation} In order to lighten the notation, below we will denote by $\bm{n}_{\alpha \pm}$ the vector $(n_1,..,n_\alpha \pm1,...,n_d)$. Then, by using eqs.~\eqref{sm:recH} and \eqref{sm:appH} in eq.~\eqref{un1} one can write \begin{equation} \begin{split} \eta_\alpha u_{\bm{n}}(\bm{\eta}) &=\frac{1}{\sqrt{d}} \exp\left\{ -\frac{d\bm{\eta}^2}{2 } \right\}\sqrt{d}\eta_\alpha H_{n_\alpha}\left( \sqrt{d}\eta_\alpha\right) \prod_{\beta\neq \alpha} H_{n_\beta}\left( \sqrt{d}\eta_\beta \right)\\ &=\frac{1}{\sqrt{d}}\exp\left\{ -\frac{d\bm{\eta}^2}{2} \right\}\left[H_{n_\alpha+1}\left( \sqrt{d}\eta_\alpha\right) + n_\alpha H_{n_\alpha-1}\left( \sqrt{d}\eta_\alpha\right) \right] \prod_{\beta\neq \alpha} H_{n_\beta}\left( \sqrt{d}\eta_\beta \right)\\ &=\frac{1}{\sqrt{d}} u_{\bm{n}_{\alpha+}}(\bm{\eta}) + \frac{n_\alpha}{\sqrt{d}} u_{\bm{n}_{\alpha-}}(\bm{\eta})\,. \end{split} \end{equation} At this point we can project the FPE onto the $\left\{ \tilde{u}_{\bm{n}}(\bm{\eta})\right\}$ and obtain a set of equations for the coefficients $\left\{ \phi_{\bm{n}}(\bm{\chi},\bm{r},t)\right\}$. In the following, summation over repeated indices is implied. 
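The recurrence relation, the Appell property and the eigenvalue relation above can be checked explicitly for small $n$. The snippet below is a sketch (not part of the derivation): it builds the probabilists' Hermite polynomials as coefficient lists via their standard three-term recurrence $H_{n+1}(x)=xH_n(x)-nH_{n-1}(x)$ and verifies the identities in the one-dimensional case $d=1$, where $u_n=e^{-x^2/2}H_n(x)$ and $\hat{\mathcal{L}}u_n=u_n''+(xu_n)'=e^{-x^2/2}\left(H_n''-xH_n'\right)$, so the eigenvalue relation $\hat{\mathcal{L}}u_n=-nu_n$ is equivalent to the Hermite ODE $H_n''-xH_n'+nH_n=0$.

```python
# A polynomial is represented by its coefficient list p[k] = coefficient of x**k.

def poly_mul_x(p):            # x * p(x)
    return [0.0] + list(p)

def poly_scale(p, c):
    return [c * a for a in p]

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [0.0] * (n - len(p))
    q = list(q) + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def poly_diff(p):             # d/dx p(x)
    return [k * a for k, a in enumerate(p)][1:] or [0.0]

def is_zero(p):
    return all(abs(c) < 1e-12 for c in p)

# Probabilists' Hermite polynomials via H_{n+1} = x H_n - n H_{n-1}.
He = [[1.0], [0.0, 1.0]]
for n in range(1, 6):
    He.append(poly_sub(poly_mul_x(He[n]), poly_scale(He[n - 1], n)))

for n in range(1, 6):
    # Recurrence used in the text: H_{n+1} = x H_n - H_n'.
    assert is_zero(poly_sub(He[n + 1],
                            poly_sub(poly_mul_x(He[n]), poly_diff(He[n]))))
    # Appell property: H_n' = n H_{n-1}.
    assert is_zero(poly_sub(poly_diff(He[n]), poly_scale(He[n - 1], n)))

for n in range(6):
    # Hermite ODE H_n'' - x H_n' + n H_n = 0, i.e. the d = 1 eigenvalue relation.
    ode = poly_sub(poly_sub(poly_diff(poly_diff(He[n])),
                            poly_mul_x(poly_diff(He[n]))),
                   poly_scale(He[n], -n))
    assert is_zero(ode)
```

In higher dimensions the eigenfunctions factorize over the components, so the one-dimensional checks carry over to eq.~\eqref{un1} directly.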
For convenience, we will split the Fokker-Planck operator into the three contributions \begin{equation} \partial_t P(\bm{\chi},\bm{r},\bm{\eta},t)=\left( \hat{\mathcal{L}}_{\bm{\chi}}+\hat{\mathcal{L}}_{\bm{r}}+\frac{1}{d\tau }\hat{\mathcal{L}}_{\bm{\eta}} \right)P, \label{S-eq:FP} \end{equation} where $\hat{\mathcal{L}}_{\bm{\eta}}$ is defined in eq.~\eqref{l_eta}, while \begin{equation} \begin{split} &\hat{\mathcal{L}}_{\bm{\chi}}P=-\partial_\alpha \left[ \frac{1}{1+q}v_{\rm a}\left(\bm{\chi}+\frac{q}{1+q}\bm{r}\right)\eta_\alpha P -\frac{D}{1+q}\partial_\alpha P -v_w \delta_{\alpha,0} P\right],\\ &\hat{\mathcal{L}}_{\bm{r}}P=-\partial'_\alpha \left[ -\frac{1+q}{q\gamma} \partial'_\alpha U(\bm{r})P+v_{\rm a}\left(\bm{\chi}+\frac{q}{1+q}\bm{r}\right)\eta_\alpha P - \frac{(1+q)}{q}D \partial'_\alpha P \right], \end{split} \label{S-eq:FP-proj} \end{equation} where we introduced the shorthand notation $\partial_\alpha \equiv \partial_{\chi_\alpha}$ and $\partial'_\alpha \equiv \partial_{r_\alpha}$. We separately project the various terms of the FPE onto $\tilde{u}_{\bm{m}}(\bm{\eta})$, starting from its l.h.s.: \begin{equation} \int d\bm{\eta} \,\tilde{u}_{\bm{m}}(\bm{\eta}) \partial_t P(\bm{\chi},\bm{r}, \bm{\eta},t)=\partial_t \phi_{\bm{n}}(\bm{\chi},\bm{r},t) \int d\bm{\eta} \, \tilde{u}_{\bm{m}}(\bm{\eta})u_{\bm{n}}(\bm{\eta})=d^{-d/2}\partial_t \phi_{\bm{m}}(\bm{\chi},\bm{r},t) \,. 
\end{equation} For the first term on the r.h.s., i.e., $\hat{\mathcal{L}}_{\bm{\chi}}P$, we have (for simplicity, we do not indicate below the dependence of $\phi_{\bm{n}}$ on $\bm{\chi}$ and $\bm{r}$): \begin{equation} \begin{split} &\int d\bm{\eta} \,\tilde{u}_{\bm{m}}(\bm{\eta})\hat{\mathcal{L}}_{\bm{\chi}}P=\\ &=-\partial_\alpha \left[ \frac{v_{\rm a}\left(\bm{\chi}'\right)\phi_{\bm{n}}}{1+q}\int d\bm{\eta} \,\tilde{u}_{\bm{m}}(\bm{\eta})\eta_\alpha u_{\bm{n}}(\bm{\eta}) -\left(\frac{D\partial_\alpha \phi_{\bm{n}}}{1+q} +v_w\delta_{\alpha,0} \phi_{\bm{n}} \right)\int d\bm{\eta} \, \tilde{u}_{\bm{m}}(\bm{\eta})u_{\bm{n}}(\bm{\eta}) \right]\\ &=-\partial_\alpha \left\{ \frac{v_{\rm a}\left(\bm{\chi}'\right)\phi_{\bm{n}}}{\sqrt{d}(1+q)}\int d\bm{\eta} \,\tilde{u}_{\bm{m}}(\bm{\eta})\left[ u_{\bm{n}_{\alpha+}}(\bm{\eta}) + n_\alpha u_{\bm{n}_{\alpha-}}(\bm{\eta}) \right] -\frac{Dd^{-d/2}}{1+q}\partial_\alpha \phi_{\bm{m}}-\frac{v_w\delta_{\alpha,0}}{d^{d/2}} \phi_{\bm{m}}\right\}\\ &=-\partial_\alpha \left\{ \frac{d^{-(d+1)/2}}{1+q}v_{\rm a}\left(\bm{\chi}'\right)\left[\phi_{\bm{n}}\delta_{\bm{m},\bm{n}_{\alpha+}} + n_\alpha \phi_{\bm{n}}\delta_{\bm{m},\bm{n}_{\alpha-}}\right] -\frac{Dd^{-d/2}}{1+q}\partial_\alpha \phi_{\bm{m}} -v_w\delta_{\alpha,0}d^{-d/2} \phi_{\bm{m}} \right\}\\ &=-\partial_\alpha \left\{ \frac{d^{-(d+1)/2}}{1+q}v_{\rm a}\left(\bm{\chi}'\right)\left[\phi_{\bm{m}_{\alpha-}} + (m_\alpha+1) \phi_{\bm{m}_{\alpha+}}\right] -\frac{Dd^{-d/2}}{1+q}\partial_\alpha \phi_{\bm{m}} -v_w\delta_{\alpha,0}d^{-d/2} \phi_{\bm{m}} \right\}, \end{split} \label{Seq:FP-1} \end{equation} where we used $\delta_{\bm{m},\bm{n}_{\alpha-}}=\delta_{\bm{m}_{\alpha+},\bm{n}}$ and $\delta_{\bm{m},\bm{n}_{\alpha+}}=\delta_{\bm{m}_{\alpha-},\bm{n}}$. 
Similarly, the projection of the second term on the r.h.s.~of eq.~\eqref{S-eq:FP}, i.e., $\hat{\mathcal{L}}_{\bm{r}}P$, reads \begin{equation} \begin{split} &\int d\bm{\eta} \,\tilde{u}_{\bm{m}}(\bm{\eta})d^{d/2}\hat{\mathcal{L}}_{\bm{r}}P=\\ &=-\partial'_\alpha \left\{ -\frac{(1+q)}{q\gamma} \partial'_\alpha U(\bm{r}) \phi_{\bm{m}} +d^{-1/2}v_{\rm a}\left(\bm{\chi}'\right)\left[\phi_{\bm{m}_{\alpha-}} + (m_\alpha+1) \phi_{\bm{m}_{\alpha+}}\right] - \frac{(1+q)D}{q} \partial'_\alpha \phi_{\bm{m}} \right\}. \end{split} \label{Seq:FP-2} \end{equation} Finally, the last term in eq.~\eqref{S-eq:FP}, i.e., $\frac{1}{d\tau}\hat{\mathcal{L}}_{\bm{\eta}}P$, contributes as \begin{equation} \begin{split} \frac{1}{d\tau}\int d\bm{\eta} \,\tilde{u}_{\bm{m}}(\bm{\eta})\hat{\mathcal{L}}_{\bm{\eta}}P=\frac{1}{d\tau}\phi_{\bm{n}}\int d\bm{\eta} \,\tilde{u}_{\bm{m}}(\bm{\eta})\hat{\mathcal{L}}_{\bm{\eta}} u_{\bm{n}}(\bm{\eta})=\frac{d^{-d/2-1}}{\tau}\lambda_{\bm{m}}\phi_{\bm{m}}. \end{split} \label{Seq:FP-3} \end{equation} Collecting the contributions in eqs.~\eqref{Seq:FP-1}, \eqref{Seq:FP-2}, and \eqref{Seq:FP-3}, the FPE projection onto $\tilde{u}_{\bm{m}}(\bm{\eta})$ yields the following set of coupled equations for the coefficients \begin{equation} \begin{split} &\partial_t \phi_{\bm{m}}(\bm{\chi},\bm{r},t)= \\ &\quad -\partial_\alpha \left\{ \frac{1}{\sqrt{d}(1+q)}v_{\rm a}\left(\bm{\chi}'\right)\left[\phi_{\bm{m}_{\alpha-}} + (m_\alpha+1) \phi_{\bm{m}_{\alpha+}}\right] -\frac{D}{1+q}\partial_\alpha \phi_{\bm{m}} -v_w\delta_{\alpha,0} \phi_{\bm{m}} \right\} +\frac{\lambda_{\bm{m}}}{d\tau}\phi_{\bm{m}}\\ &\quad -\partial'_\alpha \left\{ -\frac{(1+q)}{q\gamma} \partial'_\alpha U(\bm{r}) \phi_{\bm{m}} +\frac{v_{\rm a}\left(\bm{\chi}'\right)}{\sqrt{d}}\left[\phi_{\bm{m}_{\alpha-}} + (m_\alpha+1) \phi_{\bm{m}_{\alpha+}}\right] - \frac{(1+q)D}{q} \partial'_\alpha \phi_{\bm{m}} \right\}. 
\end{split} \label{set_eq_coeff} \end{equation} In particular, the dynamics of the first two modes $\varphi(\bm{\chi},\bm{r},t)$ and \begin{equation} \sigma_\alpha(\bm{\chi},\bm{r},t)\equiv \int d\bm{\eta}\,\eta_\alpha P(\bm{\chi},\bm{r},\bm{\eta},t)=\left( \frac{2\pi}{d}\right)^{d/2} \frac{\phi_{\bm{0}_{\alpha+}}(\bm{\chi},\bm{r},t)}{\sqrt{d}}\,, \end{equation} can be obtained by specialising eq.~\eqref{set_eq_coeff} to the cases $\bm{m}=\bm{0}$ and $\bm{m}=\bm{0}_{\alpha+}$, finding \begin{equation} \begin{split} &\partial_t \varphi(\bm{\chi},\bm{r},t)= -\partial_\alpha \left[-v_w \delta_{\alpha,0}\varphi + \frac{v_{\rm a}\left(\bm{\chi}'\right) \sigma_\alpha}{(1+q)} -\frac{D}{1+q}\partial_\alpha \varphi \right]\\ &\quad\quad\quad-\partial'_\alpha \left[ -\frac{(1+q)}{q\gamma} \partial'_\alpha U \varphi +v_{\rm a}\left(\bm{\chi}'\right)\sigma_\alpha - \frac{(1+q)D}{q} \partial'_\alpha \varphi \right], \end{split} \label{density2variables} \end{equation} and \begin{equation} \begin{split} &\partial_t \sigma_\alpha(\bm{\chi},\bm{r},t) = -\partial_\beta \left[ \frac{v_{\rm a}\left(\bm{\chi}'\right) \varphi\, \delta_{\alpha,\beta} }{d(1+q)} -\frac{D}{1+q}\partial_\beta \sigma_\alpha -v_w \delta_{\beta,0}\sigma_\alpha \right]\\ &\quad-\partial'_\beta \left[ -\frac{(1+q)}{q\gamma} \partial'_\beta U(\bm{r}) \sigma_\alpha +\frac{v_{\rm a}\left(\bm{\chi}'\right) \varphi\, \delta_{\alpha,\beta} }{d} - \frac{(1+q)D}{q} \partial'_\beta \sigma_\alpha \right]-\tau^{-1} \sigma_\alpha+\Upsilon(\bm{\chi},\bm{r},t), \end{split} \label{eq:sigma} \end{equation} with $\Upsilon(\bm{\chi},\bm{r},t)$ denoting the contributions due to higher-order modes. In order to simplify the notation, in the previous expression and in those which follow, the dependence on $(\bm{\chi},\bm{r},t)$ of $\varphi$ and $\sigma_\alpha$ is understood if not explicitly indicated. 
In order to treat this hierarchy of equations, we adopt below two different approaches depending on the value of the phase velocity $v_w$ of the activity wave compared to the activity field $v_{\rm a}$ itself. \section{2. Slow active traveling waves} In the case of slowly propagating waves $v_w\ll v_0$, the hierarchy in eqs.~\eqref{density2variables} and \eqref{eq:sigma} can be closed by assuming that the activity field $v_{\rm a}$ varies on length scales much larger than the persistence length $l_p=\tau v_0$ (small gradients approximation), and considering quasi-stationary higher-order modes at time scales longer than $\tau$~\cite{cates2013when,Solon2015comp,Adeleke2020}. Under these approximations, eq.~\eqref{eq:sigma} for the polarization field $\sigma_\alpha$ can be rewritten as \begin{equation} \begin{split} \tau^{-1}\sigma_\alpha(\bm{\chi},\bm{r},t)= -\frac{\partial_\alpha \left[ v_{\rm a}\left(\bm{\chi}'\right)\varphi \right]}{(1+q)d}-\frac{\partial'_\alpha \left[v_{\rm a}\left(\bm{\chi}'\right)\varphi\right]}{d}+\frac{(1+q)}{q\gamma}\partial'_\beta \left[ \partial'_\beta U \sigma_\alpha \right] + \mathcal{O}(\partial^2)\,, \end{split} \label{qstat_pol} \end{equation} where $\mathcal{O}(\partial^2)$ denotes the contributions coming from higher-order powers of the gradient. This equation for $\sigma_\alpha(\bm{\chi},\bm{r},t)$ can be plugged into the continuity equation for the marginal density $\rho(\bm{\chi},t) = \int\! d\bm{r}\, \varphi(\bm{\chi},\bm{r},t)$, which can be obtained by integrating eq.~\eqref{density2variables} over the coordinate $\bm{r}$, finding \begin{equation} \partial_t \rho(\bm{\chi},t)= -\partial_\alpha \left[\frac{1}{1+q}\int d\bm{r} \, v_{\rm a}\left(\bm{\chi}'\right) \sigma_\alpha(\bm{\chi},\bm{r},t) -v_w \delta_{\alpha,0}\rho(\bm{\chi},t) -\frac{D}{1+q}\partial_\alpha \rho(\bm{\chi},t) \right]. 
\label{density1variable} \end{equation} On the r.h.s.~of this equation one recognizes the probability current $J_\alpha(\bm{\chi},t)$, corresponding to the expression in square brackets. In addition to the diffusive term $\propto \bm{\nabla} \rho$ (with a renormalized diffusion coefficient $D/(1+q)$, as it refers to the diffusion of the center of friction) and to the current $\propto \bm{v}_w\rho$ due to the change of reference system, the additional contribution \begin{equation} I_\alpha(\bm{\chi},t) \equiv \frac{1}{1+q}\int d\bm{r} \, v_{\rm a}\left(\bm{\chi}'\right) \sigma_\alpha(\bm{\chi},\bm{r},t) \label{Seq:addI} \end{equation} appears. By using eq.~\eqref{qstat_pol}, $I_\alpha(\bm{\chi},t)$ can be written as \begin{equation} I_\alpha(\bm{\chi},t)=\frac{\tau}{1+q}\int d\bm{r} \, v_{\rm a}\left(\bm{\chi}'\right) \left\{ -\frac{\partial_\alpha \left[ v_{\rm a}\left(\bm{\chi}'\right)\varphi \right]}{d(1+q)}-\frac{\partial'_\alpha \left[v_{\rm a}\left(\bm{\chi}'\right)\varphi \right]}{d}-\frac{(1+q)}{q\gamma}\partial'_\beta \left[ F_\beta (\bm{r}) \sigma_\alpha \right] \right\}, \end{equation} where $F_\beta(\bm{r})=-\partial_{\bm{r}_\beta} U(\bm{r})$. Integrating by parts and using $\partial'_\alpha v_{\rm a}(\bm{\chi}')=\frac{q}{1+q}\partial_\alpha v_{\rm a}(\bm{\chi}')$ [which follows from eq.~\eqref{Seq:chi-p} and the definitions of $\partial'_\alpha$ and $\partial_\alpha$ given after eq.~\eqref{S-eq:FP-proj}], we can rewrite the previous expression as: \begin{equation} \begin{split} I_\alpha(\bm{\chi},t)&=\frac{\tau}{1+q}\int d\bm{r} \, v_{\rm a}\left(\bm{\chi}'\right) \left[ -\frac{\partial_\alpha \left[ v_{\rm a}\left(\bm{\chi}'\right)\varphi \right]}{d(1+q)}\right]\\ &+\frac{\tau}{1+q}\int d\bm{r} \ \frac{q}{(1+q)d} \left[v_{\rm a}\left(\bm{\chi}'\right)\varphi \right]\partial_\alpha v_{\rm a}\left(\bm{\chi}'\right) \\ & + \frac{\tau}{1+q} \frac{1}{\gamma} \int d\bm{r} \ F_\beta (\bm{r}) \sigma_\alpha \partial_\beta v_{\rm a}\left(\bm{\chi}'\right).
\end{split} \label{flux1} \end{equation} We now define the quantity $\Sigma$ (for brevity, the free index $\alpha$ is left implicit in the notation $\Sigma(\bm{\chi},t)$) as \begin{equation} \Sigma(\bm{\chi},t)\equiv \int d\bm{r} \, F_\beta (\bm{r}) \sigma_\alpha (\bm{\chi},\bm{r},t) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right) \,. \label{Seq:defSigma} \end{equation} By inserting eq.~\eqref{qstat_pol} into this expression for $\Sigma$ and by neglecting all terms $\mathcal{O}(\partial^2)$, we obtain: \begin{equation} \begin{split} \Sigma&=\tau \int d\bm{r} \, F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right) \left\{ -\frac{\partial'_\alpha \left[v_{\rm a}\left(\bm{\chi}'\right)\varphi\right]}{d}-\frac{(1+q)}{q\gamma}\partial'_\gamma \left[ F_{\gamma}(\bm{r}) \sigma_\alpha \right] \right\}\\ &=\tau \int d\bm{r} \, \left\{ \frac{v_{\rm a}\left(\bm{\chi}'\right)\varphi}{d}\partial'_\alpha \left[F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right) \right]+\frac{(1+q)}{q\gamma}F_\gamma (\bm{r}) \sigma_\alpha \partial'_\gamma \left[ F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right) \right] \right\}. \label{Sigma} \end{split} \end{equation} The last line can be further simplified by considering separately \begin{equation} \begin{split} \partial'_\alpha \left[F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right) \right]&=\partial'_\alpha F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right)+F_\beta (\bm{r}) \partial'_\alpha \partial_\beta v_{\rm a}\left(\bm{\chi}'\right)\\ &=\partial'_\alpha F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right)+\frac{q}{1+q}F_\beta (\bm{r}) \partial_\alpha \partial_\beta v_{\rm a}\left(\bm{\chi}'\right)\\ &=\partial'_\alpha F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right)+\mathcal{O}(\partial^2)\,.
\end{split} \label{Seq:simp-1} \end{equation} Moreover, since the interaction potential is modeled by a spring with stiffness $\kappa$ and zero rest length, we have that \begin{equation} \partial'_\alpha F_\beta (\bm{r})=-\kappa \delta_{\alpha,\beta}=\partial'_\beta F_\alpha (\bm{r}). \end{equation} Accordingly, eq.~\eqref{Seq:simp-1} can be written as \begin{equation} \partial'_\alpha \left[F_\beta (\bm{r}) \partial_\beta v_{\rm a}\left(\bm{\chi}'\right) \right] \simeq -\kappa \delta_{\alpha,\beta} \partial_\beta v_{\rm a}\left(\bm{\chi}'\right)=-\kappa \partial_\alpha v_{\rm a}\left(\bm{\chi}'\right), \end{equation} and thus eq.~\eqref{Sigma} becomes \begin{equation} \begin{split} \Sigma&=-\kappa \tau \int d\bm{r} \, \left[\frac{v_{\rm a}\left(\bm{\chi}'\right)\varphi}{d}\partial_\alpha v_{\rm a} \left( \bm{\chi}'\right) +\frac{(1+q)}{q\gamma}F_\gamma (\bm{r}) \sigma_\alpha \partial_\gamma v_{\rm a} \left( \bm{\chi}'\right)\right]=\\ &=-\frac{\kappa \tau}{2d} \int d\bm{r} \, \varphi \partial_\alpha v^2_{\rm a}\left(\bm{\chi}'\right) -\kappa \tau \frac{(1+q)}{q\gamma}\Sigma, \end{split} \end{equation} where, in the last line, we used the definition of $\Sigma$, see eq.~\eqref{Seq:defSigma}. Accordingly, the previous equation can be solved and yields \begin{equation} \Sigma(\bm{\chi},t) =-\frac{1}{2d}\frac{\gamma \tau/\tau_{\rm r} }{ 1 + \frac{1+q}{q}\frac{\tau}{\tau_{\rm r}}} \int d\bm{r} \, \varphi(\bm{\chi},\bm{r},t) \partial_\alpha v^2_{\rm a}\left(\bm{\chi}'\right), \label{I_fin} \end{equation} where, as in the main text, we introduced $\tau_{\rm r} = \gamma/\kappa$. 
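The self-consistent solution in eq.~\eqref{I_fin} can be verified symbolically. The short Python sketch below (using sympy; an independent check added here, not part of the original derivation) solves the linear relation for $\Sigma$ and compares it with the closed form, abbreviating $X \equiv \int d\bm{r}\, \varphi\, \partial_\alpha v^2_{\rm a}$:

```python
import sympy as sp

kappa, tau, gamma, q, d, X = sp.symbols("kappa tau gamma q d X", positive=True)
Sigma = sp.symbols("Sigma")

# self-consistency relation obtained above:
#   Sigma = -(kappa*tau/(2 d)) X - kappa*tau*(1+q)/(q gamma) Sigma
sol = sp.solve(sp.Eq(Sigma, -kappa*tau/(2*d)*X
                     - kappa*tau*(1 + q)/(q*gamma)*Sigma), Sigma)[0]

# closed form of eq. (I_fin), written with tau_r = gamma/kappa
tau_r = gamma/kappa
closed = -sp.Rational(1, 2)/d*(gamma*tau/tau_r)/(1 + (1 + q)/q*tau/tau_r)*X

assert sp.simplify(sol - closed) == 0
```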
This expression of $\Sigma(\bm{\chi},t)$ can be used in eq.~\eqref{flux1}, finding \begin{equation} \begin{split} I_\alpha(\bm{\chi},t)&=-\frac{\tau}{d(1+q)^2}\int d\bm{r} \, v^2_{\rm a}\left(\bm{\chi}'\right)\partial_\alpha \varphi -\frac{1}{2}\frac{\tau}{d(1+q)^2} \epsilon \int d\bm{r} \,\varphi \partial_\alpha v^2_{\rm a}\left(\bm{\chi}'\right), \end{split} \end{equation} where we introduced the tactic coupling \begin{equation} \epsilon=1-\frac{q}{1+\frac{1+q}{q}\frac{\tau}{\tau_{\rm r}}}, \label{tactic_coupling} \end{equation} reported in eq.~(8) of the main text. Moreover, if the typical distance between the active carrier and the cargo is small compared to the persistence length, we can approximate $\varphi = \varphi(\bm{\chi},\bm{r},t)$ in the integrands above as: \begin{equation} \varphi(\bm{\chi},\bm{r},t) \approx \rho(\bm{\chi},t)\delta(\bm{r})\,. \end{equation} Within this approximation, the total current $J_{\alpha}(\bm{\chi},t)$ introduced after eq.~\eqref{density1variable} can be written as \begin{equation} J_{\alpha}(\bm{\chi},t)=V_{\rm eff,\alpha}(\bm{\chi})\rho(\bm{\chi},t)-\partial_{\alpha}\left[D_{\rm eff}\rho(\bm{\chi},t)\right]\,, \end{equation} where the effective drift and diffusivity are, respectively, given by \begin{equation} V_{\rm eff,\alpha}(\bm{\chi})= (1-\epsilon/2)\partial_\alpha D_{\rm eff}(\bm{\chi}) -v_w\delta_{\alpha,0} \quad\mbox{and}\quad D_{\rm eff}(\bm{\chi})=\frac{D}{1+q}+\frac{\tau v^2_{\rm a}(\bm{\chi})}{d(1+q)^2}, \label{Seq:drift-diff} \end{equation} which are reported in eqs.~(6) and (7) of the main text. 
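The algebraic reduction leading to eq.~\eqref{tactic_coupling} can also be checked symbolically: the coefficient multiplying $-\frac{\tau}{2d(1+q)^2}\int d\bm{r}\,\varphi\,\partial_\alpha v^2_{\rm a}$ collects a contribution $1$ from expanding the first term of eq.~\eqref{flux1}, $-q$ from the integration-by-parts term, and the $\Sigma$ contribution of eq.~\eqref{I_fin}; their sum equals $\epsilon$. A Python sketch with sympy (an added verification, not part of the original derivation):

```python
import sympy as sp

q, T = sp.symbols("q T", positive=True)   # T stands for tau/tau_r
u = (1 + q)*T/q

# three contributions to the coefficient of the phi * grad(v_a^2) term:
#   1 (gradient term), -q (integration by parts), (1+q)T/(1+u) (Sigma term)
coeff = 1 - q + (1 + q)*T/(1 + u)
eps = 1 - q/(1 + u)                       # eq. (tactic_coupling)

assert sp.simplify(coeff - eps) == 0
assert sp.limit(eps, T, 0) == 1 - q       # tau << tau_r
assert sp.limit(eps, T, sp.oo) == 1       # tau >> tau_r
```

The two limits make the sign change of the tactic response with the friction ratio $q$ explicit.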
The stationary solution of the effective Fokker-Planck equation \begin{equation} \partial_t \rho(\bm{\chi},t)=-\nabla_{\bm{\chi}}\cdot \left[\bm{V}_{\rm eff}(\bm{\chi})\rho(\bm{\chi},t) - \nabla_{\bm{\chi}}(D_{\rm eff}(\bm{\chi})\rho(\bm{\chi},t)) \right]\, \end{equation} can be easily proved to be~\cite{hanggi1990reaction, goel2016stochastic,merlitz2018linear}: \begin{equation} \frac{\rho(\bm{\chi})}{\rho_b}=\frac{L\, D_{\rm eff}^{-1}(\chi_0) \int_0^L dx\exp \left\{-\int_{\chi_0}^{\chi_0+x}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}}{\int_0^L\,du\int_0^Ldx\, D_{\rm eff}^{-1}(u)\exp \left\{-\int_{u}^{u+x}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}}, \label{steady_state_density_supp} \end{equation} in the case of periodic boundary conditions, as reported in the main text. Moreover, the system can sustain a finite stationary flux in the comoving frame \begin{equation} J_0=\frac{ \rho_b L \left[ 1-\exp \left\{-\int_{0}^{L}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}\right]}{\int_0^L\,du\int_0^Ldx\, D_{\rm eff}^{-1}(u)\exp \left\{-\int_{u}^{u+x}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}} \label{stat_flux_comoving} \end{equation} in the direction $\bm{e}_0$, which can be used to compute the average drift velocity $v_d=J_0/\rho_b+v_w$ in the lab frame, reported in eq.~(13) of the main text. \section{3. Drift velocity at $q_{\rm th}$} In this section we show that the drift velocity $v_d$ [see eq.~(13) in the main text], derived in the limit of slowly propagating activity fields $v_w\ll v_0$, vanishes if $q$ takes the threshold value $q_{\rm th}$ reported in eq.~(10) of the main text. In particular, when $q=q_{\rm th}$, the tactic coupling $\epsilon$ in eq.~\eqref{tactic_coupling} [alternatively, see eq.~(8) of the main text] vanishes and the effective drift in eq.~\eqref{Seq:drift-diff} becomes \begin{equation} V_{\rm eff,\alpha}(\bm{\chi})=\partial_\alpha D_{\rm eff}(\bm{\chi}) -v_w\delta_{\alpha,0}\,.
\end{equation} This expression can be used in eq.~\eqref{stat_flux_comoving} in order to calculate the stationary current in the comoving frame. In particular, the denominator of that expression reads: \begin{equation} \begin{split} &\int_0^L\!du\int_0^L\!dx\, D_{\rm eff}^{-1}(u)\exp \left\{-\int_{u}^{u+x}dy \, \left[\partial_y \ln D_{\rm eff}(y) -\frac{v_w}{D_{\rm eff}(y)} \right]\right\}\\ &\quad\quad= \int_0^L\!du\int_0^L\!dx\, D_{\rm eff}^{-1}(u+x)\exp \left\{\int_{u}^{u+x}\!dy \, \frac{v_w}{D_{\rm eff}(y)} \right\}\\ &\quad\quad= \frac{1}{v_w}\int_0^L\!du \left[\exp \left\{\int_{u}^{u+L}dy \, \frac{v_w}{D_{\rm eff}(y)}\right\} -1 \right] \\ &\quad\quad= \frac{L}{v_w}\left[\exp \left\{\int_{0}^{L}\!dy \, \frac{v_w}{D_{\rm eff}(y)}\right\} -1 \right], \end{split} \label{Seq:num} \end{equation} where in the last equality we used that the effective diffusivity $D_{\rm eff}(y)$ is a periodic function with period $L$. Analogously, the numerator of eq.~\eqref{stat_flux_comoving} is given by: \begin{equation} \begin{split} \rho_b L \left[ 1-\exp \left\{-\int_{0}^{L}dy\frac{\partial_y D_{\rm eff}(y) -v_w}{D_{\rm eff}(y)} \right\}\right]=\rho_b L \left[ 1-\exp \left\{\int_{0}^{L}dy\frac{v_w}{D_{\rm eff}(y)} \right\}\right]. \end{split} \label{Seq:den} \end{equation} Combining eqs.~\eqref{Seq:num} and \eqref{Seq:den}, the average drift velocity $v_d$, given after eq.~\eqref{stat_flux_comoving}, reads: \begin{equation} v_d=\frac{J_0}{\rho_b}+v_w = v_w \frac{\rho_b L \left[ 1-\exp \left\{\int_{0}^{L}dy\frac{v_w}{D_{\rm eff}(y)} \right\}\right]}{\rho_b L\left[\exp \left\{\int_{0}^{L}dy \, \frac{v_w}{D_{\rm eff}(y)}\right\} -1 \right]}+v_w=0\,. \end{equation} \section{4. Fast active traveling waves} In this section we derive analytical expressions for the stationary density, stationary current and average drift velocity in the regime of fast active traveling waves, i.e., for $v_w\gg v_0$.
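Before developing the fast-wave closure, we note that the cancellation derived in the previous section can be verified numerically: for any positive, $L$-periodic diffusivity profile and $V_{{\rm eff},0}(y)=D_{\rm eff}'(y)-v_w$, the double integral in eq.~\eqref{stat_flux_comoving} yields $J_0=-\rho_b v_w$, so that $v_d$ vanishes. A Python sketch (the diffusivity profile and parameter values below are illustrative choices, not taken from the main text):

```python
import numpy as np

L, vw, rho_b = 1.0, 0.4, 1.0
Deff = lambda y: 0.8 + 0.3*np.cos(2*np.pi*y/L)       # illustrative periodic diffusivity
dDeff = lambda y: -0.3*(2*np.pi/L)*np.sin(2*np.pi*y/L)
Veff = lambda y: dDeff(y) - vw                        # effective drift at q = q_th

# antiderivative A(y) = int_0^y Veff/Deff dt, tabulated on [0, 2L]
M = 8001
y = np.linspace(0.0, 2*L, M)
f = Veff(y)/Deff(y)
A = np.concatenate(([0.0], np.cumsum(0.5*(f[1:] + f[:-1])*np.diff(y))))

# denominator of eq. (stat_flux_comoving) via a double trapezoidal rule
n = 801
u, x = np.linspace(0.0, L, n), np.linspace(0.0, L, n)
U, X = np.meshgrid(u, x, indexing="ij")
AUX = np.interp((U + X).ravel(), y, A).reshape(U.shape)
AU = np.interp(U.ravel(), y, A).reshape(U.shape)
integrand = np.exp(-(AUX - AU))/Deff(U)
w = np.full(n, L/(n - 1)); w[0] *= 0.5; w[-1] *= 0.5
den = w @ integrand @ w

J0 = rho_b*L*(1.0 - np.exp(-np.interp(L, y, A)))/den
vd = J0/rho_b + vw
assert abs(vd) < 1e-3*vw    # directed drift vanishes at the threshold friction ratio
```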
To this end, we adopt a different strategy to close the hierarchy of equations governing the dynamics of the modes given by eqs.~\eqref{density2variables} and \eqref{eq:sigma}, which hinges on assuming a small activity $v_{0}$ compared to the wave velocity $v_w$. For simplicity, we present the derivation for the one-dimensional case $d=1$ with the sinusoidal activity field reported in eq.~(11) of the main text. The extension to the case with $d\neq 1$ is straightforward. To implement the new closure scheme, we start from the dynamics of the polarisation field given by eq.~\eqref{eq:sigma}, which can be conveniently rewritten as: \begin{equation} \hat{\mathcal{L}}_{\sigma}\sigma(\chi,r,t)=-\frac{\partial_\chi \left[ v_{\rm a}(\chi')\varphi \right]}{(1+q)}-\partial_r \left[ v_{\rm a}(\chi')\varphi \right]+\Upsilon(\chi,r,t)\,, \label{eq:sigma2} \end{equation} where $\varphi = \varphi(\chi,r,t)$, the position $\chi'$ of the active carrier in the comoving frame is defined as in eq.~\eqref{Seq:chi-p}, specialized to $d=1$, and $\Upsilon(\chi,r,t)$ includes the contributions of higher-order modes. In the previous equation, the operator $\hat{\mathcal{L}}_{\sigma}$ is defined as \begin{equation} \hat{\mathcal{L}}_{\sigma}=\partial_t + \frac{1}{\tau} -v_w\partial_\chi -\frac{D}{1+q} \partial^2_\chi - \frac{(1+q)D}{q}\left[\partial^2_r + \frac{1}{\ell^2}\partial_r r\right], \label{op_sigma} \end{equation} with the characteristic length $\ell=\sqrt{D \tau_{\rm r}}$ and $\tau_{\rm r}=\gamma/\kappa$. We first determine the Green function $G(\chi,r,t;\chi_0,r_0,t_0)$ of the operator $\hat{\mathcal{L}}_{\sigma}$, defined as: \begin{equation} \hat{\mathcal{L}}_{\sigma} G(\chi,r,t;\chi_0,r_0,t_0)=\delta(\chi-\chi_0)\delta(r-r_0)\delta(t-t_0)\,.
\label{green_dimer} \end{equation} Note that, due to the translational invariance of the operator $\hat{\mathcal{L}}_{\sigma}$ in the variables $\chi$ and $t$, one expects $G(\chi,r,t;\chi_0,r_0,t_0)$ to be a function of $\chi-\chi_0$ and $t-t_0$. The presence of the interparticle potential, instead, breaks the translational invariance of $\hat{\mathcal{L}}_{\sigma}$ with respect to $r$ and therefore $G(\chi,r,t;\chi_0,r_0,t_0)$ depends separately on $r$ and $r_0$. Accordingly, we can write $G(\chi,r,t;\chi_0,r_0,t_0) = G(\chi-\chi_0,r,t-t_0;0,r_0,0) \equiv G(\chi-\chi_0,r,t-t_0;r_0)$, where in the last equality we introduced a convenient shorthand notation. The function $G(\chi,r,t;r_0)$ can be conveniently determined by expanding it in the Fourier-Hermite basis \begin{equation} G(\chi,r,t;r_0)=\frac{1}{\ell}\sum_{n=0}^{\infty} \int \frac{d \omega}{2 \pi} \int \frac{d \tilde{q}}{2 \pi} \tilde{G}_n(\tilde{q},\omega;r_0) e^{i\tilde{q}\chi+i\omega t} u_n(r), \label{Seq:Gf-exp} \end{equation} where $u_n(r)$ is given by \begin{equation} u_{n}(r)= e^{-r^2/(2\ell^2)} H_{n}(r/\ell)\,, \label{eq:u} \end{equation} and $H_n(x)$ is the $n$-th probabilist's Hermite polynomial.
With this expansion, the l.h.s.~of eq.~\eqref{green_dimer} becomes \begin{equation} \frac{1}{\ell}\sum_{n=0}^{\infty} \int \frac{d \omega}{2\pi}\frac{d\tilde{q}}{2 \pi} \left[i\omega + \tau^{-1} +\frac{D}{1+q} \tilde{q}^2 -iv_w \tilde{q} + \frac{(1+q)D}{q}\frac{n}{\ell^2} \right] \tilde{G}_n(\tilde{q},\omega;r_0) e^{i\tilde{q}\chi+i\omega t} u_n(r), \label{Seq:Glhs} \end{equation} while its r.h.s.~is \begin{equation} \frac{1}{\ell}\sum_{n=0}^{\infty} \int \frac{d \omega}{2 \pi} \int \frac{d \tilde{q}}{2 \pi} \tilde{u}_n(r_0) e^{i\tilde{q}\chi+i\omega t} u_n(r), \label{Seq:Grhs} \end{equation} where we used the fact that $\delta(r-r_0)$ in eq.~\eqref{green_dimer} can be written as \begin{equation} \frac{1}{\ell}\sum_{n=0}^\infty \tilde{u}_n(r_0) u_n(r)=\delta(r-r_0)\,, \end{equation} and the functions $\tilde{u}_n(r)$ are defined as: \begin{equation} \tilde{u}_n(r)=\frac{1}{\sqrt{2\pi}n!}H_n(r/\ell). \label{eq:utilde} \end{equation} Accordingly, by comparing eq.~\eqref{Seq:Glhs} with eq.~\eqref{Seq:Grhs}, the Green function in reciprocal space turns out to be given by \begin{equation} \tilde{G}_n(\tilde{q},\omega;r_0)=\frac{\tilde{u}_n(r_0)}{i\omega + \tau^{-1} +\frac{D}{1+q} \tilde{q}^2 -iv_w \tilde{q} + \frac{(1+q)D}{q}\frac{n}{\ell^2}}. \end{equation} After inserting this expression of $\tilde{G}_n(\tilde{q},\omega;r_0)$ into eq.~\eqref{Seq:Gf-exp}, one can readily calculate the integral in $\omega$ via the residue theorem.
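The $n$-dependent term in the bracket of eq.~\eqref{Seq:Glhs} relies on the functions $u_n$ of eq.~\eqref{eq:u} being eigenfunctions of the Ornstein-Uhlenbeck part of $\hat{\mathcal{L}}_\sigma$, i.e., $\left[\partial^2_r + \ell^{-2}\partial_r\, r\right]u_n = -(n/\ell^2)\, u_n$. This can be checked by finite differences in Python (numpy's `hermite_e` module implements the probabilist's polynomials; $\ell=1$ is taken for simplicity):

```python
import numpy as np
from numpy.polynomial import hermite_e as He

ell = 1.0
def u(n, r):
    # u_n(r) = exp(-r^2/(2 ell^2)) He_n(r/ell), probabilist's Hermite He_n
    c = np.zeros(n + 1); c[n] = 1.0
    return np.exp(-r**2/(2*ell**2))*He.hermeval(r/ell, c)

r, h = np.linspace(-3.0, 3.0, 7), 1e-4
for n in range(5):
    upp = (u(n, r + h) - 2*u(n, r) + u(n, r - h))/h**2        # d^2 u / dr^2
    drru = ((r + h)*u(n, r + h) - (r - h)*u(n, r - h))/(2*h)  # d(r u)/dr
    assert np.allclose(upp + drru/ell**2, -(n/ell**2)*u(n, r), atol=1e-5)
```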
The corresponding residue is a Gaussian function of $\tilde q$ and thus the corresponding integral is also straightforward, with the final result \begin{equation} G(\chi,r,t;r_0) =\Theta(t) \frac{\exp\left\{- \frac{t}{\tau}-\frac{\left(\chi +v_w t \right)^2}{4 \frac{D}{1+q} t} \right\}}{\sqrt{4 \pi \frac{D}{1+q} t}}\frac{1}{\ell}\sum_{n=0}^{\infty} \exp\left\{-\frac{(1+q)Dn}{q\ell^2}t \right\} \tilde{u}_n(r_0)u_n(r), \label{Seq:G-exp-sum} \end{equation} where the Heaviside function $\Theta$ is defined such that $\Theta(t>0) =1$ and $\Theta(t\le 0) =0$. Before considering the last summation, we introduce the quantity \begin{equation} s=\exp\left\{-\frac{(1+q)}{q}\frac{Dt}{\ell^2} \right\}<1. \label{Seq-def-s} \end{equation} In terms of $s$ the remaining sum in eq.~\eqref{Seq:G-exp-sum} can be written as \begin{equation} \begin{split} \frac{1}{\ell}\sum_{n=0}^{\infty} s^n \tilde{u}_n(r_0)u_n(r)&=\frac{1}{\ell \sqrt{2 \pi}} \exp\left\{-\frac{r^2}{2\ell^2} \right\}\sum_{n=0}^{\infty} \frac{s^n}{ n!} H_n\left(\frac{r_0}{\ell} \right)H_n\left(\frac{r}{\ell} \right)\\ &=\frac{1}{\sqrt{2 \pi \ell^2 (1-s^2)}} \exp\left\{-\frac{(r-sr_0)^2}{2(1-s^2)\ell^2} \right\} \end{split} \label{Seq-sum-M} \end{equation} where we used the expression of $u_n$ and $\tilde u_n$ given in eqs.~\eqref{eq:u} and \eqref{eq:utilde}, respectively, and, in the second equality, we used Mehler's formula~\cite{abramowitz1988handbook} for probabilist's Hermite polynomials, i.e., \begin{equation} \sum_{n=0}^{\infty} \frac{s^n}{ n!} H_n(x)H_n(y)=\frac{1}{\sqrt{1-s^2}} \exp \left\{-\frac{s^2(x^2+y^2)-2sxy}{2(1-s^2)} \right\} \quad\mbox{for}\quad -1<s<1. 
\end{equation} Accordingly, by using eqs.~\eqref{Seq:G-exp-sum}, \eqref{Seq-def-s}, and \eqref{Seq-sum-M}, the Green function in eq.~\eqref{green_dimer} reads: \begin{equation} G(\chi,r,t;r_0)=\Theta(t)\exp\left\{-t/\tau\right\} \frac{\exp\left\{-\frac{\left(\chi +v_w t\right)^2}{4Dt/(1+q)} \right\}}{\sqrt{4 \pi Dt/(1+q)}} \frac{\exp\left\{-\frac{(r-sr_0)^2}{2(1-s^2)\ell^2} \right\}}{\sqrt{2 \pi \ell^2 (1-s^2)}}. \end{equation} Once this Green function is known, one can determine $\sigma(\chi,r,t)$ by computing the convolution integral over $\chi_0$, $t_0$, and $r_0$ of the product between $G(\chi-\chi_0,r,t-t_0;r_0)$ and the r.h.s.~of eq.~\eqref{eq:sigma2} evaluated for $\chi=\chi_0$, $r=r_0$, and $t=t_0$. With $\sigma(\chi,r,t)$ at hand, we can calculate the current contribution to eq.~\eqref{density1variable} given by eq.~\eqref{Seq:addI}, specialized to the case $d=1$. In particular, one has \begin{equation} \begin{split} &I(\chi,t)=\\ &-\int dr \, v_{\rm a}\left(\chi'\right) \int_{-\infty}^{\infty} \!\!d\chi_0 dr_0 dt_0\, G(\chi-\chi_0,r,t-t_0;r_0)\frac{\partial_{\chi_0} \left[ v_{\rm a}(\chi_0+\frac{qr_0}{1+q})\varphi(\chi_0,r_0,t_0) \right]}{(1+q)^2}\\ &-\int dr \, v_{\rm a}\left(\chi'\right) \int_{-\infty}^{\infty} \!\!d\chi_0 dr_0 dt_0\, G(\chi-\chi_0,r,t-t_0;r_0) \frac{\partial_{r_0} \left[ v_{\rm a}(\chi_0+\frac{qr_0}{1+q})\varphi(\chi_0,r_0,t_0) \right]}{(1+q)}\\ &+\int dr \, v_{\rm a}\left(\chi'\right) \int_{-\infty}^{\infty} \!\!d\chi_0 dr_0 dt_0\, G(\chi-\chi_0,r,t-t_0;r_0)\frac{\Upsilon(\chi_0,r_0,t_0)}{(1+q)}. \end{split} \end{equation} The latter integral can be computed under the approximation of small activity field compared to $v_w$, by keeping only terms of the lowest order in $v_0$.
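Mehler's formula used above can be verified numerically; the series converges geometrically (with ratio $s$), so a modest truncation suffices. A short Python sketch (an added check, with arbitrary sample values of $s$, $x$ and $y$):

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def He_n(n, x):
    # probabilist's Hermite polynomial of degree n
    c = np.zeros(n + 1); c[n] = 1.0
    return He.hermeval(x, c)

s, x, y = 0.5, 0.3, -0.8
series = sum(s**n/factorial(n)*He_n(n, x)*He_n(n, y) for n in range(60))
closed = np.exp(-(s**2*(x**2 + y**2) - 2*s*x*y)/(2*(1 - s**2)))/np.sqrt(1 - s**2)
assert abs(series - closed) < 1e-10
```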
For this reason, we neglect the contribution coming from higher-order modes $\Upsilon(\chi,r,t)$, thus closing the hierarchy, and we evaluate the first two integrals by assuming that the density field \begin{equation} \varphi(\chi_0,r_0,t_0) = \rho_b \frac{e^{-r_0^2/(2\ell^2)}}{\sqrt{2 \pi \ell^2}} + \mathcal{O}(v_0/v_w) \label{Seq:phi-app} \end{equation} is approximately equal to the one in equilibrium, i.e., for $v_{\rm a}\propto v_0 =0$, and $\rho_b$ is the bulk density. In this way, all integrals appearing in the first two lines are standard Gaussian integrals, and can be easily calculated. As a result, $I(\chi,t)$ is actually independent of time (as $\varphi$ in eq.~\eqref{Seq:phi-app}) and is given by \begin{equation} \begin{split} I(\chi)&=-\frac{\rho_b \tau v_0^2 e^{-\frac{q^2\ell^2}{2\lambda^2(1+q)^2}}}{\lambda (1+q)^2} \Bigg\{\frac{\cos (\chi/\lambda + \psi_0)}{|z_0|}-q \frac{\cos \left(\chi/\lambda \right)}{\left(1+\frac{(1+q)\tau D}{q\ell^2}\right)}\\ &\quad\quad\quad+e^{-\frac{q^2\ell^2}{2\lambda^2(1+q)^2}}\sum_{n=0}^\infty \frac{\left[\frac{ q^2 \ell^2}{\lambda^2(1+q)^2} \right]^n}{n!}\left[ f_n(\chi)+qf_{n+1}(\chi)\right]\Bigg\}, \end{split} \label{Seq:Ichifast} \end{equation} with \begin{equation} f_n(\chi)=\frac{(-1)^n \sin(2 \chi/\lambda+\psi_n)-\sin \psi_n}{2|z_n|}\,, \end{equation} and where $\psi_n$ and $|z_n|$ are the phase and the modulus, respectively, of the complex number $z_n$ defined in eq.~(19) of the main text. In order to compute the marginal probability density $\rho(\chi)$ in the steady state, we impose that the probability current in eq.~\eqref{density1variable} equals the constant $J$. Accordingly, one has to solve the following differential equation, \begin{equation} \frac{D}{1+q}\partial_\chi \rho(\chi)+v_w \rho(\chi)=I(\chi)-J, \label{diff_eq_rho} \end{equation} with $I(\chi)$ given in eq.~\eqref{Seq:Ichifast}. 
This can be done by first computing the Green function $G_1$, defined by \begin{equation} \left(\frac{D}{1+q}\partial_\chi +v_w\right) G_1(\chi-\chi_0)=\delta(\chi-\chi_0), \end{equation} which reads (in the case of $v_w>0$) \begin{equation} G_1(\chi-\chi_0)=\frac{(1+q)}{D} \Theta(\chi-\chi_0)\exp\left\{-\frac{(1+q)v_w}{D}(\chi-\chi_0) \right\}, \end{equation} and then the following convolution: \begin{equation} \rho(\chi)=\frac{(1+q)}{D} \int_{-\infty}^{\chi}d\chi'\,\exp\left\{-\frac{(1+q)v_w}{D}(\chi-\chi') \right\} I(\chi')-\frac{J}{v_w}\,. \end{equation} The contribution coming from the homogeneous solution of eq.~\eqref{diff_eq_rho} vanishes under periodic boundary conditions. Also in this case, the convolution involves Gaussian integrals, the standard calculation of which is not reported here for the sake of brevity. As a result, the stationary density $\rho(\chi)$ can be expressed as \begin{equation} \begin{split} \rho(\chi)=-\frac{\rho_b \tau v_0^2}{D\lambda(1+q)}e^{-\frac{q^2\ell^2}{2\lambda^2(1+q)^2}} \Bigg[&\frac{\cos (\chi/\lambda + \psi_0+\varphi(\lambda))}{|\zeta(\lambda)||z_0|}-q \frac{\cos \left(\chi/\lambda + \varphi(\lambda) \right)}{|\zeta(\lambda)|\left(1+\frac{(1+q)\tau D}{q\ell^2}\right)}+ \\&+e^{-\frac{q^2\ell^2}{2\lambda^2(1+q)^2}}\sum_{n=0}^\infty \frac{\left(\frac{ q^2 \ell^2}{\lambda^2(1+q)^2} \right)^n}{n!}\left[ g_n(\chi)+qg_{n+1}(\chi)\right]\Bigg] -\frac{J}{v_w}, \end{split} \end{equation} where the functions $g_n(\chi)$ are defined as \begin{equation} g_n(\chi)=\frac{(-1)^n \sin(2\chi/\lambda+\psi_n+\varphi(\lambda/2))}{2|\zeta(\lambda/2)||z_n|}-\frac{\sin(\psi_n)}{2\frac{(1+q)v_w}{D}|z_n|}, \end{equation} and where $\varphi(\lambda)$ and $|\zeta(\lambda)|$ are the phase and the modulus, respectively, of the $\lambda$-dependent complex number \begin{equation} \begin{split} \zeta(\lambda)=\frac{(1+q) v_w}{D}-\mathrm{i}\lambda^{-1}.
\end{split} \end{equation} Moreover, by imposing the normalization of the marginal density $\rho(\chi)$, we find the expression of the stationary current in the comoving frame $J$: \begin{equation} J=-\frac{v_w}{L} \left[1- \frac{\tau v_0^2}{2v_w\lambda(1+q)^2}e^{-\frac{q^2\ell^2}{\lambda^2(1+q)^2}}\sum_{n=0}^\infty \frac{\left(\frac{ q^2 \ell^2}{\lambda^2(1+q)^2} \right)^n}{n!} \left(\frac{\sin(\psi_n)}{|z_n|} + \frac{q\sin(\psi_{n+1})}{|z_{n+1}|}\right) \right]\,, \end{equation} and, as a consequence, the average drift velocity: \begin{equation} \frac{v_d}{v_0}= \frac{l_p}{2\lambda(1+q)^2}e^{-\frac{q^2\ell^2}{\lambda^2(1+q)^2}}\sum_{n=0}^\infty \frac{\left(\frac{ q^2 \ell^2}{\lambda^2(1+q)^2} \right)^n}{n!} \left(\frac{\sin(\psi_n)}{|z_n|} + \frac{q\sin(\psi_{n+1})}{|z_{n+1}|}\right). \end{equation} The previous equation is reported in the main text (see eq. (18)) in the limit of small thermal diffusivity $D\tau_{\rm r}\ll\lambda^2$. \bibliographystyle{eplbib} \section{Introduction} The ability to self-propel at the expense of fuel consumption is a fundamental property of active matter~\cite{julicher1997modeling,hanggi2009artificial,marchetti2013hydrodynamics,bechinger2016active}. In the biological context, self-propelling microscopic systems perform functions that require accurate directed transport, for instance, white blood cells chase intruders~\cite{fenteany2004cytoskeletal}, motor proteins transport RNA inside cells~\cite{kanai2004kinesin} and microswimmers such as E.~coli~\cite{berg2004coli} and sperm cells~\cite{friedrich2007chemotaxis} steer themselves towards sources of nutrients. Directed transport is a highly desirable property, in particular for applications in drug delivery at the nanoscale~\cite{mano2017optimal,reinivsova2019micro,ebbens2016active, Garcia2013micromotor,Sanchez2014chemically}. 
For this purpose, bio-hybrid microswimmers have been designed by integrating biological entities with synthetic constructs, e.g., bacteria capable of transporting and dropping off passive microscopic cargo at specific target locations~\cite{singh2017microemulsion,alapan2018soft,vaccari2018cargo,senturk2020red}. Bacteria and eukaryotic cells~\cite{Fisher1989-lb,MARTIEL1987807} generally navigate in dynamic activating media and react {\em in vivo} to time-dependent tactic stimuli of various kinds. Such an interaction with travelling activity signals, e.g., chemical waves~\cite{Tomchik1981}, leads to fascinating collective behavior~\cite{Gregor2010_onset} and sometimes to unexpected migration phenomena, as in the case of the so-called \emph{chemotactic wave paradox}~\cite{Tomchik1981,HOFER19941}. While synthetic active particles mimic the basic features of self-propulsion and persistence of actual biological active matter, they lack the information processing capacity and motoric control which are essential for directed transport in biological and bio-hybrid systems. Despite their memory-less response to tactic signals, artificial self-propelled particles exhibit directed transport when immersed in travelling waves controlling locally their degree of activity (e.g., their self-propulsion velocity), as shown experimentally with phototactic Janus particles exposed to propagating optical pulses~\cite{lozano2019diffusing}. Several theoretical studies have focused on controlling and directing the motion of a single self-propelled particle in a fluctuating environment~\cite{geiseler2016chemotaxis, geiseler2017selfpolarizing, geiseler2017artificial, sharma2017brownian,merlitz2018linear}. However, a fundamental understanding of the behavior of cargo-carrying microswimmers in time-dependent activity is still lacking. Cargo-carrying self-propelled particles have been analyzed in a stationary, but spatially inhomogeneous activity~\cite{vuijk2021chemotaxis}.
While a single self-propelled particle always accumulates in regions with low activity, attaching a passive cargo reverses this tendency. In fact, beyond a certain threshold cargo friction, the particle accumulates in regions with larger activity~\cite{vuijk2021chemotaxis}. While preferential accumulation could be regarded as a signature of the tactic behavior \cite{vuijk2021chemotaxis}, in the case of stationary activity, it causes no transport of the dimer. By contrast, for a time-dependent activity, such as a source emitting activity pulses, the tactic behavior of an active particle can result in motion towards or away from the source. With this motivation, in this paper we study active-passive dimers subject to a time-dependent activity in the form of a travelling wave. We analytically show that the dimer exhibits directed transport, characterized by a wave-induced drift. The direction of this drift depends on the wave speed, being opposite to its propagation direction for a slow wave but along it for a fast wave. Interestingly, upon increasing the cargo friction, the opposing drift vanishes at a threshold value, beyond which the dimer drifts only along the propagation direction. We show that this threshold value of the cargo friction coincides with the one found for stationary activity. Our theoretical treatment of the active-passive dimer is based on the active Ornstein-Uhlenbeck particle (AOUP) model of activity~\cite{caprini2019comparative,martin2021statistical,caprini2022parental,gopal2021energetics}. Our analysis shows that AOUPs are completely equivalent to active Brownian particles (ABPs)~\cite{vuijk2021chemotaxis} in terms of their tactic behavior. \section{The model} \label{The_model} In this section we introduce a minimal model for the dynamics of an active microswimmer dragging a passive load in $d$ spatial dimensions within an inhomogeneous and time-dependent environment.
The microswimmer at position $\bm{r}$ and time $t$ interacts with a tactic signal described by the activity field $v_{\rm a}(\bm{r}-\bm{v}_wt)$, which propagates with velocity $\bm{v}_w = v_w\bm{e}_0$ along the direction of the unit vector $\bm{e}_0$, as depicted in fig.~\ref{sketch}. As usually done for $\mu$m-sized colloidal particles in a liquid, we assume that viscous forces dominate over inertial effects and therefore we consider an overdamped dynamics for the active-passive dimer, which is governed by the following Langevin equations: \begin{subequations} \label{eq:model} \begin{eqnarray} && \hspace{-1cm}\dot{\bm{r}}_1=-\frac{1}{\gamma}\nabla_{\bm{r}_1} U(\bm{r}_1-\bm{r}_2)+v_{\rm a}(\bm{r}_1-\bm{v}_wt)\bm{\eta} + \sqrt{2D}\bm{\xi}_1,\label{dynamics1}\\ &&\hspace{-1cm}\dot{\bm{r}}_2=-\frac{1}{q\gamma}\nabla_{\bm{r}_2} U(\bm{r}_1-\bm{r}_2) + \sqrt{\frac{2D}{q}}\bm{\xi}_2, \label{dynamics2} \\ &&\hspace{-1cm}\tau \dot{\bm{\eta}}=-\bm{\eta}+\sqrt{\frac{2\tau}{d}}\bm{\xi}_3;\label{dynamics3} \end{eqnarray} \end{subequations} where $\bm{r}_1$ and $\bm{r}_2$ denote the positions of the active microswimmer and the passive cargo, respectively. The interaction $U(\bm{r}_1-\bm{r}_2)$ between them is modeled by an isotropic parabolic potential $U(\bm{r})=\kappa \bm{r}^2/2$, with stiffness $\kappa>0$ and zero rest length. The stochastic forces $\bm{\xi}_1$, $\bm{\xi}_2$, $\bm{\xi}_3$ are three independent zero-mean Gaussian white noises accounting for thermal fluctuations. Moreover, the active carrier exploits local energy injections to self-propel along the direction of the propulsion vector $\bm{\eta}$ which is given by a set of $d$ independent Ornstein-Uhlenbeck processes with variance $1/d$ and correlation time $\tau$. 
It follows that $\bm{\eta}$ is a zero-mean Gaussian colored noise with autocorrelation function~$\left\langle \eta_\alpha(t) \eta_\beta(s) \right\rangle=(\delta_{\alpha,\beta}/d)\exp\left(-|t-s|/\tau\right)$, where $\delta_{\alpha,\beta}$ denotes Kronecker's delta. This normalization ensures that the average modulus squared of the propulsion vector is $\left< \lVert \bm{\eta} \rVert^2\right>=1$ for all values of $d$. While the time scale $\tau$ sets the persistence of the self-propulsion force, its strength is modulated in space by the activity field $v_{\rm a}$. In order to recover an equilibrium dynamics in the absence of activity $v_{\rm a}=0$, we connect the mobility $\gamma$ and the diffusivity $D$ via the Einstein relation $D=k_{\rm B}T/\gamma$. Moreover, the cargo and the active carrier are assumed to have different friction coefficients, the ratio of which is given by the parameter $q$. In a Newtonian fluid and for spherical colloidal carrier and cargo, $q$ equals the ratio of the radius of the cargo to that of the carrier. \begin{figure}[t] \begin{center} \includegraphics[width=0.49\textwidth]{Figure1} \caption{Sketch of the stochastic model described by eqs.~\eqref{eq:model} in two spatial dimensions. A self-propelled active microswimmer (blue ellipse) in a fluid drags a passive cargo (gray circle) via a harmonic interaction (blue spring). The instantaneous self-propulsion velocity of the microswimmer (blue arrow) is locally controlled by a sinusoidal traveling wave of activity, which propagates through the fluid with phase velocity $v_w$ along the unit vector $\bm{e}_0$. For illustration we sketch here two examples of active-passive dimers, one with a low-friction cargo ($q$ small, left) and the other with a high-friction cargo ($q$ large, right). 
} \label{sketch} \end{center} \end{figure} The Langevin dynamics in eqs.~\eqref{eq:model} can be more conveniently written in terms of the dimer position in the comoving frame, which we identify with the centre of friction $\bm{\chi}=(\bm{r}_1+q\bm{r}_2)/(1+q)-\bm{v}_wt$ and the distance $\bm{r}=\bm{r}_1-\bm{r}_2$. Changing variables $(\bm{r}_1,\bm{r}_2, \bm{\eta})\to (\bm{\chi},\bm{r},\bm{\eta})$ to the new coordinate system, the Fokker-Planck equation for the probability density $P(\bm{\chi},\bm{r},\bm{\eta},t)$ associated with the stochastic dynamics in eqs.~\eqref{eq:model} reads: \begin{equation} \begin{split} &\partial_t P(\bm{\chi},\bm{r},\bm{\eta},t)=\frac{1}{d\tau}\hat{\mathcal{L}}_{\bm{\eta}} P\,+\\&-\nabla_{\bm{\chi}} \cdot \left[ -\bm{v}_w P + \frac{1}{1+q}v_{\rm a}\left(\bm{\chi}'\right)\bm{\eta}P - \frac{D}{1+q}\nabla_{\bm{\chi}}P \right]+\\ &-\nabla_{\bm{r}}\cdot \left[ -\frac{1+q}{q\gamma} \nabla_{\bm{r}} UP+v_{\rm a}\left(\bm{\chi}'\right)\bm{\eta}P - \frac{1+q}{q}D \nabla_{\bm{r}} P \right]\,, \end{split} \end{equation} with $\bm{\chi}'=\bm{\chi}+q\bm{r}/(1+q)$ and $\hat{\mathcal{L}}_{\bm{\eta}} P=\nabla^2_{\bm{\eta}}P+d \nabla_{\bm{\eta}} \cdot \left(\bm{\eta} P \right)$. \section{Transport properties for slow activity waves} In order to estimate the extent to which the propagating tactic signal affects the directed motion of the cargo-carrying microswimmer, we focus on transport properties induced by the activity travelling wave. With the help of a mean-field hydrodynamic theory, we derive an effective dynamics which describes the evolution of the dimer at time scales longer than $\tau$ and length scales larger than the persistence length $l_p\sim v_{\rm a}\tau$ \cite{cates2013when}. In particular, the predictions of such a hydrodynamic theory are expected to be valid for activity fields which are slowly varying on the length scale $l_p$ ({\em large wavelength} approximation).
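Before coarse-graining, the Langevin dynamics in eqs.~\eqref{eq:model} can also be integrated directly. The minimal Euler-Maruyama sketch below (in Python, for $d=1$, with the activity switched off and illustrative parameter values) checks that the stationary variance of the separation $r = r_1 - r_2$ equals the equilibrium value $k_{\rm B}T/\kappa = D\gamma/\kappa$, as required by the Einstein relation built into the model:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, kappa, D, q = 1.0, 4.0, 0.5, 2.0      # illustrative parameter values
dt, nsteps, nens = 1e-3, 40000, 2000         # ensemble of nens independent dimers

r1 = np.zeros(nens)
r2 = np.zeros(nens)
samples = []
for step in range(nsteps):
    F = -kappa*(r1 - r2)                      # force on particle 1; particle 2 feels -F
    r1 = r1 + F/gamma*dt + np.sqrt(2*D*dt)*rng.standard_normal(nens)
    r2 = r2 - F/(q*gamma)*dt + np.sqrt(2*D*dt/q)*rng.standard_normal(nens)
    if step > nsteps//2 and step % 50 == 0:   # sample after relaxation
        samples.append(r1 - r2)

var = np.concatenate(samples).var()
assert abs(var - D*gamma/kappa) < 0.05*D*gamma/kappa
```

With the activity restored ($v_{\rm a}\neq 0$) and $\Delta t = 0.01$, the same scheme underlies the numerical data shown in fig.~2.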
In order to identify all relevant hydrodynamic variables, i.e., those fields the relaxation time of which grows indefinitely upon increasing the wavelength (slow modes), we perform a moment expansion analogous to, e.g., refs.~\cite{cates2013when, Solon2015comp,Adeleke2020}. The evolution of the modes is described by a hierarchical structure, the detailed derivation of which is reported in sec.~1 of Supplementary Material (SM). Importantly, we note that the zeroth order mode $\varphi(\bm{\chi},\bm{r},t)= \int d\bm{\eta}\,P(\bm{\chi},\bm{r},\bm{\eta},t)$, which describes the density related to the spatial marginal variables $\bm{\chi}$ and $\bm{r}$, is the only slow mode of the system. Indeed, $\varphi(\bm{\chi},\bm{r},t)$ is associated with a conservation law and its dynamics has the form of a continuity equation: \begin{equation} \begin{split} &\partial_t \varphi(\bm{\chi},\bm{r},t)= -\partial_\alpha \left[-v_w \delta_{\alpha,0}\varphi + \frac{v_{\rm a}\left(\bm{\chi}'\right) \sigma_\alpha}{(1+q)} -\frac{D}{1+q}\partial_\alpha \varphi \right]\\ &\quad-\partial'_\alpha \left[ -\frac{(1+q)}{q\gamma} \partial'_\alpha U \varphi +v_{\rm a}\left(\bm{\chi}'\right)\sigma_\alpha - \frac{(1+q)D}{q} \partial'_\alpha \varphi \right], \end{split} \label{mode0} \end{equation} where we introduced the shorthand notation $\partial_\alpha \equiv \partial_{\chi_\alpha}$ and $\partial'_\alpha \equiv \partial_{r_\alpha}$, while repeated indices imply summation. Furthermore, $\sigma_\alpha$ is the $\alpha$-th component of the first-order mode $\bm{\sigma}(\bm{\chi},\bm{r},t)=\int d\bm{\eta}\,\bm{\eta}P(\bm{\chi},\bm{r},\bm{\eta},t)$, which is related to the conditional average polarization at fixed spatial variables. 
Its dynamics is governed by \begin{equation} \begin{split} &\partial_t\sigma_\alpha(\bm{\chi},\bm{r},t)= -\frac{\partial_\alpha \left[ v_{\rm a}\left(\bm{\chi}'\right)\varphi \right]}{(1+q)d}-\frac{\partial'_\alpha \left[v_{\rm a}\left(\bm{\chi}'\right)\varphi\right]}{d}\\ &\quad\quad\quad+\frac{(1+q)}{q\gamma}\partial'_\beta \left[ \partial'_\beta U \sigma_\alpha \right] - \tau^{-1}\sigma_\alpha + \mathcal{O}(\partial^2)\,, \end{split} \label{mode1} \end{equation} where dependencies on higher-order modes are included in $\mathcal{O}(\partial^2)$. Notably, the decay rate due to the sink term $- \tau^{-1}\sigma_\alpha$ makes $\sigma_\alpha(\bm{\chi},\bm{r},t)$ a fast mode which does not obey a conservation law and which can be described by a quasi-static approximation. Moreover, the contribution $\mathcal{O}(\partial^2)$ of higher-order gradients is negligible under the assumption of a slowly varying activity field. \begin{figure}[t] \begin{center} \includegraphics[scale=0.3]{Figure2} \caption{Stationary density $\rho(\chi)$ of the dimer (left axis), in the comoving frame of the traveling activity wave $v_{\rm a}(\chi)$ with sinusoidal shape (green dashed line, eq.~\eqref{activity_field}, right axis), as obtained from numerical simulations (symbols) and from analytical predictions (eq.~\eqref{steady_state_density}, solid lines). The latter hold under the assumption of long wavelength and slow traveling wave and they are reported for both a high-friction cargo with $q = q_{\rm high} >q_{\rm th}$ (blue) and a low-friction cargo with $q = q_{\rm low} <q_{\rm th}$ (red). Both analytical and numerical predictions have been obtained by assuming periodic boundary conditions. The numerical data were obtained from a single Langevin-dynamics simulation of duration $10^8$ using the Euler-Maruyama scheme with time step $\Delta t=0.01$. 
Other simulation parameters: $v_w=10^{-2}$, $v_0=1.0$, $\tau=0.1$, $\kappa=5$, $\gamma=1.0$, $D=10^{-3}$, $\lambda=10/(4\pi)$, $q_{\rm high}=4$ and $q_{\rm low}=1$. } \label{stationary_density} \end{center} \end{figure} The combination of the large-wavelength approximation and the quasi-stationarity of $\bm{\sigma}(\bm{\chi},\bm{r},t)$ at time scales longer than $\tau$ provides a closure scheme for the hierarchy without needing information about higher-order modes. In particular, after integrating out the relative coordinate $\bm{r}$, we derive an effective drift-diffusion equation for the marginal density $\rho(\bm{\chi},t)=\int d\bm{r}\varphi(\bm{\chi},\bm{r},t)$ (see sec.~2 of SM for the detailed derivation), which reads: \begin{equation} \partial_t \rho(\bm{\chi},t)=-\nabla_{\bm{\chi}}\cdot \left[\bm{V}_{\rm eff}(\bm{\chi})\rho(\bm{\chi},t) - \nabla_{\bm{\chi}}(D_{\rm eff}(\bm{\chi})\rho(\bm{\chi},t)) \right], \label{drift-diff} \end{equation} where the effective drift and effective diffusivity are given, respectively, by \begin{eqnarray} \bm{V}_{\rm eff}(\bm{\chi})&=&(1-\epsilon/2)\nabla_{\bm{\chi}} D_{\rm eff}(\bm{\chi}) -\bm{v}_w,\\ D_{\rm eff}(\bm{\chi})&=&\frac{D}{1+q}+\frac{\tau v^2_{\rm a}(\bm{\chi})}{d(1+q)^2} \,. \label{effective_terms} \end{eqnarray} This expression of $D_{\rm eff}$ reveals an enhancement of the diffusivity $D/(1+q)$ of the center of friction induced by the activity via a term $\propto v^2_{\rm a}(\bm{\chi})$. Interestingly, the alignment of the effective drift with the activity gradient is controlled by the {\em tactic coupling} \begin{equation} \epsilon=1-\frac{q}{1+\frac{1+q}{q}\frac{\tau}{\tau_{\rm r}}}\,, \label{epsilon_coupling} \end{equation} where $\tau_{\rm r}=\gamma/\kappa$ is the characteristic spring relaxation time. The role of $\epsilon$ can be understood by considering the case of a static activity field. 
In fact, for $v_w=0$, the stationary density obtained from eq.~\eqref{drift-diff} is \begin{equation} \rho(\bm{\chi}) = \mathcal{N}^{-1}\left[ 1+\frac{\tau v^2_{\rm a}\left(\bm{\chi}\right)}{dD(1+q)} \right]^{-\epsilon/2}. \label{eq:densityst} \end{equation} Here, $\mathcal{N}$ is a normalization constant. Accordingly, $\epsilon$ determines the preferential accumulation of the dimer in the regions with high or low activity depending on its sign. Equation~\eqref{epsilon_coupling} implies that for a fixed $\tau/\tau_{\rm r}$, the tactic coupling $\epsilon$ is entirely determined by the friction ratio $q$; in particular, it changes sign at the threshold value \begin{equation} q_{\rm th}=\frac{1}{2}\left[1+\tau/\tau_{\rm r}+\sqrt{\left(1+\tau/\tau_{\rm r} \right)^2 + 4\tau/\tau_{\rm r}}\right] \ge 1. \label{q_threshold} \end{equation} For highly mobile cargoes with $q<q_{\rm th}$ one has $\epsilon>0$ and thus the dimer preferentially accumulates in low-activity regions. For slow cargoes with $q>q_{\rm th}$, instead, $\epsilon<0$ and the dimer preferentially accumulates in high-activity regions. Interestingly, as in the single-particle case (see, e.g., ref.~\cite{caprini2022dynamics}), the equivalence with a cargo-carrying ABP~\cite{vuijk2021chemotaxis} with rotational diffusivity $D_r$ is fully recovered by imposing $\tau^{-1}=(d-1)D_r$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.4]{Figure3} \caption{ (a) Average drift $v_d$ as a function of the phase velocity $v_w$ in the slow-wave regime $v_w<v_0$ (eq.~\eqref{vd_expression}). For low-friction cargoes with $q=q_{\rm low}<q_{\rm th}$ (red line), the microswimmer exhibits a negative tactic behavior. At the threshold value $q_{\rm th}$ (black line), the average drift vanishes for all wave velocities $v_w$, whereas for $q=q_{\rm high}>q_{\rm th}$ (blue solid line), the dimer is characterized by positive taxis. 
Numerical results (symbols) have been obtained by computing the quantity $(\chi(t)+v_w t -\chi(0))/t$ for each of the $N=10^3$ independent stochastic trajectories of length $t$, and averaging over different realizations. The remaining simulation parameters are $v_0=1.0$, $\tau=0.1$, $\kappa=5$, $\gamma=1.0$, $D=10^{-2}$, $\lambda=10/(4\pi)$, $q_{\rm high}=4$ and $q_{\rm low}=1$. In the inset, we report the slope of the linear relation $v_d\approx c v_w$ (blue dashed line) at small wave velocities as a function of $q$, and for thermal diffusivity $D\in \{0.05,\,0.03,\,0.01,\,0.001 \}$ (solid lines from bottom to top). (b) Stochastic trajectory of a cargo-carrying microswimmer in the comoving frame $(\chi_0,\chi_1)$ in two spatial dimensions. For a high-friction cargo $(q=20)$ and small thermal diffusivity $D=10^{-3}$ the dimer ``surfs'' the propagating activity wave by localizing around its maximum while traveling with the same velocity, i.e., $v_d=v_w$. } \label{drift_small} \end{center} \end{figure} In order to analyze the general case of an activity travelling wave ($v_w\neq0$), we assume for simplicity that the activity field $v_{\rm a}$ varies only along $\bm{e}_0$. Accordingly, we denote the effective diffusivity and drift by $D_{\rm eff}(\chi_0)$ and $V_{\rm eff,\alpha}(\chi_0)$, as they now depend only on $\chi_0=\bm{\chi}\cdot \bm{e}_0$. As an example, we hereafter consider the sinusoidal wave \begin{equation} v_{\rm a}(\chi_0)=v_0 \left[1+\sin(\chi_0/\lambda) \right], \label{activity_field} \end{equation} with wavelength $\lambda$. The resulting stationary density $\rho(\bm{\chi})$ in the comoving frame can be determined by considering an ensemble of non-interacting dimers with initial bulk density $\rho_b=L^{-d}$, $L$ being their typical interparticle distance. 
In this way, from eq.~\eqref{drift-diff} we find \begin{equation} \frac{\rho(\bm{\chi})}{\rho_b}=\frac{L\, D_{\rm eff}^{-1}(\chi_0) \int_0^L dx\exp \left\{-\int_{\chi_0}^{\chi_0+x}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}}{\int_0^L\,du\int_0^Ldx\, D_{\rm eff}^{-1}(u)\exp \left\{-\int_{u}^{u+x}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}}, \label{steady_state_density} \end{equation} which is illustrated in fig.~\ref{stationary_density} and which also features the transition in the preferential accumulation illustrated above for $v_w=0$. Moreover, the interaction with a propagating activity field induces a non-trivial tactic response in the microswimmer, which is now able to sustain a non-vanishing stationary flux $J_0$ in the comoving frame, acquiring an average drift velocity $v_d=(\langle\dot{\bm{r}}_1\rangle +q\langle\dot{\bm{r}}_2\rangle)/(1+q) = J_0/\rho_b+v_w$ along $\bm{e}_0$ in the lab frame. This drift is given by~\cite{hanggi1990reaction, goel2016stochastic} \begin{equation} v_d=\frac{L \left[ 1-\exp \left\{-\int_{0}^{L}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}\right]}{\int_0^L\,du\int_0^Ldx\, D_{\rm eff}^{-1}(u)\exp \left\{-\int_{u}^{u+x}dy\frac{V_{\rm eff,0}(y)}{D_{\rm eff}(y)} \right\}}+v_w, \label{vd_expression} \end{equation} and it strongly depends on the tactic coupling $\epsilon$ and therefore on $q$. More precisely, it can be shown analytically that $v_d$ vanishes at the static threshold value $q=q_{\rm th}$ in eq.~\eqref{q_threshold} (see sec.~3 of SM). Additionally, for sufficiently small thermal diffusivity $D$, the threshold value $q=q_{\rm th}$ also separates two distinct tactic regimes with respect to the wave propagation: {\em positive taxis} for $q>q_{\rm th}$, where the microswimmer navigates along the propagating tactic signal with $v_d/v_w>0$, and {\em negative taxis} for $q<q_{\rm th}$, where the microswimmer navigates against it, with $v_d/v_w<0$, see fig.~\ref{drift_small}(a). 
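The nested quadratures in eq.~\eqref{vd_expression} are straightforward to evaluate numerically. The following Python sketch does so for the sinusoidal wave of eq.~\eqref{activity_field} with $d=1$, the parameters of fig.~\ref{drift_small}, and $L$ set to one spatial period $2\pi\lambda$ of the wave (an assumption on our part); for the high-friction case $q=q_{\rm high}>q_{\rm th}$ it yields, consistently with fig.~\ref{drift_small}(a), a positive drift below the wave speed.

```python
import numpy as np

# Numerical evaluation of eq. (vd_expression) for the sinusoidal activity wave,
# with d = 1 and L set to one spatial period of the wave (periodic system).
D, tau, gamma, kappa = 1e-2, 0.1, 1.0, 5.0
v0, vw, lam, q, d = 1.0, 1e-2, 10/(4*np.pi), 4.0, 1
eps = 1.0 - q/(1.0 + (1.0 + q)/q*tau/(gamma/kappa))   # eq. (epsilon_coupling)
L = 2*np.pi*lam

v_a   = lambda x: v0*(1.0 + np.sin(x/lam))            # eq. (activity_field)
D_eff = lambda x: D/(1 + q) + tau*v_a(x)**2/(d*(1 + q)**2)
dDdx  = lambda x: 2*tau*v_a(x)*v0*np.cos(x/lam)/(lam*d*(1 + q)**2)
V_eff = lambda x: (1.0 - eps/2.0)*dDdx(x) - vw        # eq. (effective_terms)

n = 1000                                              # grid points per period
y = np.linspace(0.0, 2*L, 2*n + 1)                    # covers u + x up to 2L
h = y[1] - y[0]
f = V_eff(y)/D_eff(y)
Phi = np.concatenate(([0.0], np.cumsum(0.5*(f[1:] + f[:-1])*h)))

i = np.arange(n + 1)
inner = Phi[i[:, None] + i[None, :]] - Phi[i][:, None]  # int_u^{u+x} V_eff/D_eff
M = np.exp(-inner)/D_eff(y[i])[:, None]
w = np.ones(n + 1); w[0] = w[-1] = 0.5                  # trapezoid weights
denom = h*h*(w @ M @ w)

v_d = L*(1.0 - np.exp(-Phi[n]))/denom + vw              # eq. (vd_expression)
print(v_d, v_d/vw)
```

Repeating the evaluation with $q=q_{\rm low}$ or $q=q_{\rm th}$ reproduces the other two regimes of fig.~\ref{drift_small}(a).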
This predicted negative taxis, as well as the fact that its magnitude decreases upon increasing $D$, is consistent with what occurs for a single active particle \cite{geiseler2017selfpolarizing, geiseler2016chemotaxis}, which is retrieved as the limit $q\to0$ of our model. Conversely, when $q>q_{\rm th}$, the cargo-carrying microswimmer travels along the sinusoidal wave due to its tendency to localize close to the propagating activity crests, performing the \emph{active surfing} shown in fig.~\ref{drift_small}(b). Interestingly, an analogous effect was observed experimentally with single self-polarizing phototactic particles in traveling light pulses~\cite{lozano2019diffusing}. While in ref.~\cite{lozano2019diffusing} this behavior is caused by an aligning torque, in our model it emerges as a cooperative effect between the active carrier and the passive cargo. Note, however, that the ability of the microswimmer to catch up with the travelling wave crests, i.e., $v_d\simeq v_w$, is limited to the case of a slowly propagating activity wave, which explains the non-monotonicity of the blue curve in fig.~\ref{drift_small}(a). In order to quantify the efficiency of this surfing, we determine the slope $c$ of the linear relation $v_d\approx c v_w$, which holds at small wave velocities $v_w$. Its dependence on $q$ and the thermal diffusivity $D$ is reported in the inset of fig.~\ref{drift_small}, which shows, as expected, that $c\leq1$ and that the directed transport is highly efficient (i.e., $c\simeq 1$) for $D\ll\tau v_0^2$. We recall here that the predictions presented above follow from a coarse graining which assumes that the activity field varies slowly on a length scale of the order of $l_p=v_0\tau$. In the static case $v_w=0$, this condition is met for $\lambda \gg l_p$. 
However, for a traveling wave, the coarse graining additionally requires that the distance $\sim v_w\tau$ traveled by the active wave on a time scale $\sim \tau$ does not exceed $\sim l_p$, which happens for $v_w<v_0$. Accordingly, in order to investigate the transport properties in the opposite case $v_w>v_0$, we pursue below an alternative analytical approach. \section{Transport properties for fast activity waves} For simplicity, and without loss of generality, we restrict the analysis of the case $v_w>v_0$ to one-dimensional systems and to a sinusoidal traveling wave as in eq.~\eqref{activity_field}. The main difference compared to the slow-wave approximation discussed above lies in the closure scheme used to combine the mode eqs.~\eqref{mode0} and \eqref{mode1}. More precisely, as the small gradients approximation is no longer applicable for $v_w>v_0$, we explore this regime by considering small self-propulsion forces, keeping in the effective dynamics only contributions of the lowest order in $v_0$~\cite{sharma2016communication,merlitz2018linear,dal2019linear}. To this aim, we rewrite eq.~\eqref{mode1} in the more convenient form \begin{equation} \hat{\mathcal{L}}_{\sigma}\sigma(\chi,r,t)=-\frac{\partial_\chi \left[ v_{\rm a}(\chi')\varphi \right]}{(1+q)}-\partial_r \left[ v_{\rm a}(\chi')\varphi \right]+\Upsilon(\chi,r,t), \label{eq_pol_2} \end{equation} where $\chi'=\chi+qr/(1+q)$ is the position of the active carrier in the comoving frame, $\Upsilon(\chi,r,t)$ includes all contributions of higher-order modes, and the operator $\hat{\mathcal{L}}_{\sigma}$ is defined as \begin{equation} \hat{\mathcal{L}}_{\sigma}=\partial_t + \frac{1}{\tau} -v_w\partial_\chi -\frac{D}{1+q} \partial^2_\chi - \frac{(1+q)D}{q}\left[\partial^2_r + \frac{1}{\ell^2}\partial_r r\right], \label{op_sigma} \end{equation} with the characteristic length $\ell=\sqrt{D \tau_{\rm r}}$. 
\begin{figure}[t] \begin{center} \includegraphics[scale=0.28]{Figura4} \caption{Average drift velocity $v_d$ as a function of the phase velocity $v_w$ of the activity wave for $v_w>v_0$ (eq.~\eqref{vd_predicted}). The cargo-carrying microswimmer acquires a positive drift independently of the value of the friction ratio $q$, which takes here the same values as those of the corresponding curves in fig.~\ref{drift_small}. The results from numerical simulations and analytical predictions have been obtained as described in the caption of fig.~\ref{drift_small}, with the same set of parameters.} \label{drift_big} \end{center} \end{figure} To solve for $\sigma(\chi,r,t)$, we then determine the Green function of $\hat{\mathcal{L}}_{\sigma}$ and compute the convolution with the r.h.s.~of eq.~\eqref{eq_pol_2}. In doing this, we assume that the contribution $\Upsilon(\chi,r,t)$ of higher-order modes is negligible in the limit of small self-propulsion forces, thus closing the hierarchy. Analogously to the previous approach, after integrating over the relative coordinate $r$, we obtain a continuity equation for the marginal density $\rho(\chi,t)$, i.e., \begin{equation} \partial_t \rho(\chi,t)= -\partial_\chi \left[ I(\chi,t) -\frac{D}{1+q}\partial_\chi \rho-v_w \rho \right], \label{density_eq_nosep} \end{equation} where \begin{equation} \!\!I(\chi,t)=\int_{-\infty}^{\infty} \!\!\!dr \,\frac{ v_{\rm a}\left(\chi'\right) \sigma(\chi,r,t)}{(1+q)} =\frac{\left<v_{\rm a}(\chi')\eta\,| \, \chi \right>}{1+q}\rho(\chi,t), \label{I_wave} \end{equation} and $\left<\cdot | \chi \right>$ denotes the conditional average at fixed $\chi$. We derive a closed yet cumbersome analytical expression for $I(\chi,t)$, which is related to the local average swim speed of the center of friction due to self-propulsion (see eq.~\eqref{I_wave} and sec.~4 of SM). 
Similarly, we also derive in the SM analytical expressions for the stationary density and the flux in the comoving frame, which we use to analyze the directed transport in the regime of fast active traveling waves. In particular, for $D \tau_{\rm r} \ll \lambda^2$, the average drift velocity $v_d$ reads \begin{equation} \frac{v_d}{v_0}=\frac{l_p }{2\lambda(1+q)^2}\left[ \frac{\sin \psi_0}{|z_0|} + q \frac{\sin \psi_1}{|z_1|} \right], \label{vd_predicted} \end{equation} where we recall that $l_p = v_{0}\tau$ is the persistence length of the microswimmer, while $\psi_n$ and $|z_n|$ are the phase and the modulus, respectively, of the complex number \begin{equation} z_n=1 + \frac{\tau D}{\lambda^2(1+q)}+\frac{(1+q)\tau D}{q\ell^2} n +\mathrm{i} \frac{\tau v_w}{\lambda}, \end{equation} where $\mathrm{i}$ is the imaginary unit. A general expression of the drift velocity for an arbitrary thermal diffusivity is given in sec.~4 of SM. Figure~\ref{drift_big} shows the behavior of the average drift $v_d$ as a function of the wave velocity $v_w$ in the regime $v_w>v_0$ of fast traveling waves. Unlike the case of $v_w<v_0$ (see fig.~\ref{drift_small}), the tactic behavior of the microswimmer does not exhibit a qualitative change as a function of the friction ratio $q$, with the drift occurring always along the direction of the active wave. However, as $q$ increases, this drift decreases because of the reduced mobility of the dimer. The drift velocity of the microswimmer attains its maximum value at a wave speed which scales as $v_w/v_0 \sim \lambda /l_p$. This can be qualitatively understood as follows. Consider a single pulse of activity of spatial extent $\lambda$ travelling with a speed $v_w$. A microswimmer with its polarization against the direction of the travelling pulse will rapidly exit the pulse from the receding front. 
However, when the polarization is along the direction of the pulse, the microswimmer will be carried along with it until it switches its polarization, which will cause it to exit the pulse. The optimum scenario corresponds to the condition $v_w \tau - v_0 \tau \sim \lambda$ in which the microswimmer effectively traverses the whole pulse before switching polarization. This results in a maximum of the drift speed at $v_w \sim \lambda /\tau$. While the drift velocity of the dimer in fig.~\ref{drift_big} features a single peak, we find both analytically and via numerical simulations that a second peak may appear at larger $v_w$, for large values of $q$ and persistence time $\tau$. The location of this additional peak depends on the spring relaxation time scale $\tau_{\rm r}$ but we defer a thorough investigation of its features and microscopic origin to future work. \section{Discussion} \label{conclusions} Our work shows that self-propelled cargo-carrying microswimmers interacting with a traveling wave of activity display a rich tactic behavior. Their response to such a wave is actually independent of the details of the activity, as evidenced by the equivalence of cargo-carrying AOUPs and ABPs in terms of their coarse-grained dynamics. The tactic transition which emerges in the presence of slowly propagating waves relies on the possibility to control the preferential accumulation of the microswimmer in high/low activity regions, by tuning the friction of its cargo. In particular, we find a surfing effect when the directed migration along the activity wave is induced by an effective localization around the wave maxima. Considering, e.g., the experimental realization of Janus microswimmers as in ref.~\cite{lozano2019diffusing}, eq.~\eqref{q_threshold} implies $q_{\rm th} \simeq \kappa/(0.02 {\rm \,pN/\mu m})$ for $q_{\rm th}\gtrsim 1$. 
Accordingly, assuming for the cargo-carrier binding an elastic constant $\kappa \simeq 0.1 {\rm \,pN/\mu m}$, typical for soft matter, the tactic transition is predicted to occur at a cargo radius $\simeq 8\,{\rm \mu m}$, which is within experimental reach. We speculate that a qualitatively similar tactic behavior may emerge spontaneously in a binary mixture of mutually attractive active and passive particles, upon formation of clusters of different sizes. It has recently been shown that molecules composed of two rigidly connected active particles~\cite{vuijk2022active}, as well as dimers made of two active chiral particles~\cite{muzzeddu2022active}, also exhibit a transition in their effective localization in high/low activity regions. It will be interesting to study such active-matter systems subject to active traveling waves, and in the presence of external potentials~\cite{caprini2019entropy,garcia2021run}. We expect our predictions to have an impact on experimental studies on soft matter, biophysics, and nanotechnology. Important examples include cases in which synthetic Janus particles~\cite{baraban2012catalytic} and bacteria~\cite{akin2007bacteria} have been used to efficiently transport and deliver microscopic objects to specific target sites. Moreover, our investigation could inspire the future optimal design of existing {\em biohybrid} micromachines such as spermbots formed by assembling synthetic materials with sperm cells~\cite{medina2016cellular,magdanz2017spermatozoa}. The taxis transition unveiled by our minimal stochastic model may also have implications in biological processes at the microscale in which traveling waves play a key role, e.g., sound transduction in the cochlea~\cite{roberts1988hair,duke2003active} and signaling waves in cell development~\cite{di2022waves}. \bibliographystyle{eplbib}
\section{Introduction} \label{Sec:Introduction} Astrophysical and cosmological observations indicate that $26 \%$ of the total energy density and $84 \%$ of the total matter content of the Universe is dark matter (DM) \cite{Planck2015}, the identity and properties of which still remain a mystery. One of the leading candidates for cold DM is the axion, a pseudoscalar particle which was originally hypothesised to resolve the strong CP problem of quantum chromodynamics (QCD) \cite{PQ1977A,PQ1977B,Weinberg1978,Wilczek1978,Kim1979,Zakharov1980,Zhitnitsky1980A,*Zhitnitsky1980B,Srednicki1981}. Apart from the canonical QCD axion, various axion-like particles have also been proposed, for example in string compactification models \cite{Witten1984,Conlon2006,Witten2006,Arvanitaki2010,Arias2012,Marsh2015Review}. Low-mass ($m_a \lesssim 0.1~\textrm{eV}/c^2$) axions can be produced efficiently via non-thermal production mechanisms, such as vacuum misalignment \cite{Preskill1983cosmo,Sikivie1983cosmo,Dine1983cosmo} in the early Universe, and subsequently form a coherently oscillating classical field \footnote{Although non-thermal production mechanisms typically impart negligible kinetic energy to the produced axions, the gravitational interactions between axions and ordinary matter during galactic structure formation subsequently virialise galactic axions ($v_{\textrm{vir}}^{\textrm{local}} \sim 300$~km/s), giving an oscillating galactic axion field the finite coherence time:~$\tau_{\textrm{coh}} \sim 2\pi / m_a v_{\textrm{vir}}^2 \sim 2\pi \cdot 10^6 / m_a $, i.e., $\Delta \omega / \omega \sim 10^{-6}$. }:~$a = a_0 \cos(\omega t)$, with the angular frequency of oscillation given by $\omega \approx m_a c^2 / \hbar$, where $m_a$ is the axion mass (henceforth, we shall adopt the units $\hbar = c = 1$). The oscillating axion field carries the energy density $\rho_a \approx m_a^2 a_0^2 /2$. 
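For orientation, these scales can be made concrete with a short numerical estimate. The Python sketch below evaluates the oscillation frequency, the coherence time quoted in the footnote above, and the de Broglie wavelength of galactic axions, for the illustrative values $m_a = 10^{-22}$~eV and $v_{\textrm{vir}} \sim 10^{-3}c$:

```python
import numpy as np

# Order-of-magnitude scales for an ultra-low-mass axion; m_a = 1e-22 eV is an
# illustrative value at the lower end of the mass window discussed in the text.
hbar_eV_s = 6.582e-16        # hbar in eV*s
hbar_c_eV_m = 1.9733e-7      # hbar*c in eV*m
m_a = 1e-22                  # axion mass in eV
v_vir = 1e-3                 # virial velocity ~300 km/s, in units of c
pc = 3.086e16                # parsec in m

nu = m_a/(2*np.pi*hbar_eV_s)                 # oscillation frequency in Hz
tau_coh = 2*np.pi*hbar_eV_s/(m_a*v_vir**2)   # coherence time ~ 2*pi/(m_a v^2), in s
lam_dB = 2*np.pi*hbar_c_eV_m/(m_a*v_vir)     # de Broglie wavelength in m

print(nu, tau_coh/3.15e7, lam_dB/pc)         # Hz, years, parsecs
```

For this mass one finds an oscillation frequency in the $10^{-8}$~Hz range, a coherence time of order $10^6$ years (i.e., $\Delta\omega/\omega \sim 10^{-6}$), and a de Broglie wavelength of a few hundred parsecs, consistent with the structure-formation bounds discussed below.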
Due to its effects on structure formation \cite{Khlopov1985}, ultra-low-mass axion DM in the mass range $10^{-24}~\textrm{eV} \lesssim m_a \lesssim 10^{-20}$ eV has been proposed as a DM candidate that is observationally distinct from, and possibly favourable to, archetypal cold DM \cite{Hu2000,Marsh2014,Schive2014,Marsh2015Review,Hui2017}. The requirement that the axion de Broglie wavelength does not exceed the DM size of the smallest dwarf galaxies and consistency with observed structure formation \cite{Marsh2015B,Schive2015,Marsh2017} give the lower axion mass bound $m_a \gtrsim 10^{-22}$ eV, if axions comprise all of the DM. However, axions with smaller masses can constitute a sub-dominant fraction of DM~\cite{Hlozek15}. It is reasonable to expect that axions interact non-gravitationally with standard-model particles. Direct searches for axions have thus far focused mainly on their coupling to the photon (see the review \cite{Axion-Review2015} and references therein). Recently, however, it has been proposed to search for the interactions of the coherently oscillating axion DM field with gluons and fermions, which can induce oscillating electric dipole moments (EDMs) of nucleons \cite{Graham2011} and atoms \cite{Stadnik2014A,Roberts2014A,Roberts2014B}, and anomalous spin-precession effects \cite{Flambaum2013Patras,Stadnik2014A,Graham2013}. The frequency of these oscillating effects is dictated by the axion mass, and more importantly, these effects scale linearly in a small interaction constant \cite{Graham2011,Stadnik2014A,Roberts2014A,Roberts2014B,Flambaum2013Patras,Graham2013}, whereas in previous axion searches, the sought effects scaled quadratically or quartically in the interaction constant \cite{Axion-Review2015}. 
In the present work, we focus on the axion-gluon and axion-nucleon couplings: \begin{align} \label{Axion_couplings} \mathcal{L}_{\textrm{int}} = \frac{C_G}{f_a} \frac{g^2}{32\pi^2} a G^{b}_{\mu \nu} \tilde{G}^{b \mu \nu} - \frac{C_N}{2f_a} \partial_\mu a ~ \bar{N} \gamma^\mu \gamma^5 N \, , \end{align} where $G$ and $\tilde{G}$ are the gluonic field tensor and its dual, $b=1,2,...,8$ is the color index, $g^2 / 4 \pi$ is the color coupling constant, {\color{black}$N$ and $\bar{N} = N^\dagger \gamma^0$ are the nucleon field and its Dirac adjoint,} $f_a$ is the axion decay constant, and $C_G$ and {\color{black}$C_N$} are model-dependent dimensionless parameters. Astrophysical constraints on the axion-gluon coupling in (\ref{Axion_couplings}) come from Big Bang nucleosynthesis \cite{Blum2014,StadnikThesis,Stadnik2015D}:~$m_a^{1/4} f_a / C_G \gtrsim 10^{10}~\textrm{GeV}^{5/4}$ for $m_a \ll 10^{-16}~\textrm{eV}$ and $m_a f_a / C_G \gtrsim 10^{-9}~\textrm{GeV}^{2}$ for $m_a \gg 10^{-16}~\textrm{eV}$, assuming that axions saturate the present-day DM energy density, and from supernova energy-loss bounds \cite{Graham2013,Raffelt1990Review}:~$f_a / C_G \gtrsim 10^6 ~\textrm{GeV}$ for $m_a \lesssim 3 \times 10^{7}~\textrm{eV}$. {\color{black}Astrophysical constraints on the axion-nucleon coupling in (\ref{Axion_couplings}) come from supernova energy-loss bounds \cite{Raffelt1990Review,Raffelt2008LNP}:~$f_a / C_N \gtrsim 10^9 ~\textrm{GeV}$ for $m_a \lesssim 3 \times 10^{7}~\textrm{eV}$, while existing laboratory constraints come from magnetometry searches for new spin-dependent forces mediated by axion exchange \cite{Romalis2009_NF}:~$f_a / C_N \gtrsim 1 \times 10^4 ~\textrm{GeV}$ for $m_a \lesssim 10^{-7}~\textrm{eV}$. 
} The axion-gluon coupling in (\ref{Axion_couplings}) induces the following oscillating EDM of the neutron via a chirally-enhanced 1-loop process \cite{tuningfootnote,Witten1979,*Witten1979B,Pospelov1999}: \begin{equation} \label{eq:nEDM_axion} d_\mathrm{n}(t) \approx +2.4 \times 10^{-16} ~ \frac{C_G a_0}{f_a} \cos(m_a t) ~ e \cdot \textrm{cm} \, . \end{equation} The axion-gluon coupling in (\ref{Axion_couplings}) also induces oscillating EDMs of atoms via the 1-loop-level oscillating nucleon EDMs and tree-level oscillating P,~T-violating intranuclear forces (which give the dominant contribution) \cite{Stadnik2014A,Flambaum1984EDM,*Flambaum1984EDMB}. In the case of $^{199}$Hg, the oscillating atomic EDM is \cite{Stadnik2014A,StadnikThesis,Flambaum1985EDM,Flambaum1985EDMB,Flambaum2002EDM,Dmitriev2003A,Dmitriev2003B,Dmitriev2005,Engel2005,Engel2010}: \begin{equation} \label{199Hg-EDM_axion} d_{\textrm{Hg}}(t) \approx +1.3 \times 10^{-19} ~ \frac{C_G a_0}{f_a} \cos(m_a t) ~ e \cdot \textrm{cm} \, , \end{equation} which is suppressed compared to the value for a free neutron (\ref{eq:nEDM_axion}), as a consequence of the Schiff screening theorem for neutral atoms \cite{Schiff1963}. The amplitude of the axion DM field, $a_0$, is fixed by the relation $\rho_a \approx m_a^2 a_0^2 /2$. In the present work, we assume that axions saturate the local cold DM energy density $\rho_{\rm DM}^{\rm local} \approx 0.4~\textrm{GeV/cm}^3$ \cite{Catena2010}. The derivative coupling of an oscillating galactic axion DM field, $a = a_0 \cos(m_a t - \v{p}_a \cdot \v{r})$, with spin-polarized nucleons in (\ref{Axion_couplings}) induces time-dependent energy shifts according to: \begin{equation} \label{potential_axion-wind} H_{\textrm{int}} (t) = \frac{C_N a_0}{2 f_a} \sin(m_a t) ~ \v{\sigma}_N \cdot \v{p}_a \, . 
\end{equation} The term $\v{\sigma}_N \cdot \v{p}_a$ is conveniently expressed by transforming to a non-rotating celestial coordinate system (see, e.g., \cite{Kostelecky1999}): \begin{align} \label{sigma-p_a_2} \v{\sigma}_N \cdot \v{p}_a &= \hat{m}_F f(\sigma_N) m_a |\v{v}_a| \notag \\ & \times \left[\cos(\chi) \sin(\delta) + \sin(\chi) \cos(\delta) \cos(\Omega_{\textrm{sid}} t - \eta) \right] \, , \end{align} where $\chi$ is the angle between Earth's axis of rotation and the spin quantization axis ($\chi = 42.5 ^\circ$ at the location of the Paul Scherrer Institute, PSI), $\delta \approx -48 ^\circ$ and $\eta \approx 138 ^\circ$ are the declination and right ascension of the galactic axion DM flux relative to the Solar System \cite{NASA2014web}, $\Omega_{\textrm{sid}} \approx 7.29 \times 10^{-5}~\textrm{s}^{-1}$ is the daily sidereal angular frequency, $\hat{m}_F = m_F / F$ is the normalized projection of the total angular momentum onto the quantization axis, and $f(\sigma_N) = +1$ for the free neutron, while $f(\sigma_N) = -1/3$ for the $^{199}$Hg atom in the Schmidt (single-particle) model. Here, we report on a search for an axion-induced oscillating EDM of the neutron (nEDM) based on an analysis of the ratio of the spin-precession frequencies of stored ultracold neutrons and $^{199}$Hg atoms, which is a system that had previously also been used as a sensitive probe of new non-EDM physics \cite{Altarev2009,Altarev2010,Afach2015_NF}. We divided our analysis into two parts. We first analyzed the Sussex--RAL--ILL nEDM experiment data \cite{Baker2014}, covering oscillation periods longer than days (\emph{long time--base}). Then we extended the analysis to the data of the PSI nEDM experiment \cite{Baker2011}, which allowed us to probe oscillation periods down to minutes (\emph{short time--base}). Our analysis places the first laboratory constraints on the axion-gluon coupling. We also report on a search for an axion-wind spin-precession effect, using the data of the PSI nEDM experiment. 
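To give a sense of the signal magnitude being sought, eq.~\eqref{eq:nEDM_axion} can be combined with $a_0 = \sqrt{2\rho_a}/m_a$ and $\rho_a = \rho_{\rm DM}^{\rm local} \approx 0.4~\textrm{GeV/cm}^3$. The Python sketch below does this for the illustrative values $m_a = 10^{-22}$~eV and $f_a/C_G = 10^{10}$~GeV (chosen for illustration only, not derived from our data):

```python
import numpy as np

# Amplitude of the oscillating nEDM from eq. (eq:nEDM_axion), in natural units
# (hbar = c = 1). The sample values of m_a and f_a/C_G are illustrative only.
hbar_c_GeV_cm = 1.9733e-14                 # conversion factor: 1 = 1.9733e-14 GeV*cm
rho_dm = 0.4*hbar_c_GeV_cm**3              # 0.4 GeV/cm^3 expressed in GeV^4

m_a = 1e-22*1e-9                           # axion mass: 1e-22 eV in GeV
fa_over_CG = 1e10                          # f_a / C_G in GeV (illustrative)

a0 = np.sqrt(2.0*rho_dm)/m_a               # amplitude from rho_a = m_a^2 a0^2 / 2
d_n_amp = 2.4e-16*a0/fa_over_CG            # nEDM oscillation amplitude in e*cm
print(a0, d_n_amp)
```

Because $a_0 \propto 1/m_a$ at fixed $\rho_a$, the sought EDM amplitude grows towards small axion masses, i.e., towards long oscillation periods, which motivates the long time-base analysis below.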
Our analysis places the first laboratory constraints on the axion-nucleon coupling from the consideration of an effect that is linear in the interaction constant. \section{Long time-base analysis} The Sussex--RAL--ILL room-temperature nEDM experiment ran from 1998 to 2002 at the PF2 beamline at the Institut Laue-Langevin (ILL) in Grenoble, France. This experiment set the current world-best limit on the permanent time-independent neutron EDM, published in 2006 \cite{Baker2006, *Baker2006B}. The data were subsequently reanalyzed to give a revised limit in 2015 \cite{Pendlebury2015}. The technical details of the apparatus are described in full in \cite{Baker2014}, but we summarize the main experimental details here for the reader. The experiment was based on Ramsey interferometry \cite{Ramsey1950} of ultracold neutrons \cite{UCN-Review1979,UCN-Review1994}. The neutrons were stored in parallel or antiparallel electric and magnetic fields, where their Larmor precession frequency is given by \begin{equation} \label{eq:Larmor} h\nu_\mathrm{n} = 2 \left| \mu_\mathrm{n} B \pm d_\mathrm{n} E \right| \, , \end{equation} with the sign depending on the field configuration. $E$ and $B$ are the magnitudes of the electric and magnetic fields, respectively. By measuring the frequency difference between the two field configurations, a value for the neutron EDM, $d_\mathrm{n}$, was inferred. The measurement was conducted in a series of \emph{cycles}, each approximately 5 minutes long. A cycle began with the filling of the precession chamber with neutrons, polarized along the fields, from the ultracold neutron source~\cite{Steyerl1986}. Once they were in the chamber, which is enclosed from the top and bottom by electrodes, a \unit[29]{Hz} NMR pulse lasting 2 seconds was applied to rotate the neutron spins into the transverse plane of the electromagnetic fields, where they began to precess. 
Prior to the pulse, a population of polarized $^{199}$Hg atoms was released into the chamber, and another 2-second \unit[29]{Hz} pulse, in phase with the first one, was applied. The neutrons were then emptied into a detector through a spin-analyzing foil. Over 1--2 days, many of these cycles were performed. The polarity of the electric field was reversed every hour. We term one continuous block of data taking in the same magnetic-field configuration, but including both directions of the electric field, a \emph{run}. One run gives a $d_\mathrm{n}$ estimate. In order to suppress cycle--to--cycle changes in the magnetic field, the analysis was performed on the ratio of the neutron and mercury precession frequencies $R$, which, using (\ref{eq:Larmor}), is \cite{Baker2014}: \begin{align} \label{eq:R} R &\equiv \frac{\nu_\mathrm{n}}{\nu_\textrm{Hg}} = \frac{\mu_\mathrm{n}}{\mu_\textrm{Hg}} \pm \left( d_\mathrm{n} - \frac{\mu_\mathrm{n}}{\mu_\textrm{Hg}} \, d_\textrm{Hg} \right) \frac{2 E}{ h \nu_\textrm{Hg}} + \Delta \, , \end{align} where the signs correspond to the parallel and antiparallel field configurations. $\Delta$ encapsulates all higher-order terms and systematic effects, which are corrected for when a run is analyzed~\cite{Pendlebury2015}. This analysis is sensitive to oscillations in the quantity $d_\mathrm{n} - \left( \mu_\mathrm{n} / \mu_\textrm{Hg} \right) \, d_\textrm{Hg}$, with $\mu_\mathrm{n} / \mu_\textrm{Hg} = -3.8424574(30)$~\cite{Afach2014magmoment}. In our analysis, we searched for an oscillating EDM. We performed this search in frequency space by evaluating \emph{periodograms} -- estimators of the power spectrum. An oscillation in the time domain would show up as an excess in the power (or, equivalently, amplitude) relative to the distribution expected from experimental noise.
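To set the scale of the effect, Eq.\,(\ref{eq:R}) maps an EDM into a shift of the frequency ratio through the factor $2E/(h\nu_\textrm{Hg})$. A minimal sketch of this conversion; the field values used below are illustrative assumptions of ours, not numbers quoted in the text:

```python
H_EV_S = 4.135667696e-15  # Planck constant in eV s

def r_shift_from_edm(d_ecm, e_v_per_cm, nu_hg_hz):
    """Shift of R = nu_n/nu_Hg produced by an EDM d, per the ratio formula.
    With d in e*cm and E in V/cm, the product d*E is an energy in eV."""
    return 2.0 * d_ecm * e_v_per_cm / (H_EV_S * nu_hg_hz)

# Assumed illustrative values: E = 10 kV/cm, nu_Hg ~ 7.6 Hz (199Hg in ~1 uT).
delta_r = r_shift_from_edm(1e-25, 1.0e4, 7.6)  # ~6e-8 for these values
```

An EDM at the $10^{-25}~e\,$cm level thus shifts $R$ only in its eighth significant digit, which is why a magnetically stabilized ratio measurement is needed.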
In the case of the long time--base analysis, we considered the time series of $d_\mathrm{n}$ measurements from individual runs (after having corrected for the ``False EDM'' effect \cite{Pendlebury2004} using the crossing-lines procedure \cite{Pendlebury2015}). The measurements were neither evenly spaced in time nor of equal uncertainty. To calculate the periodogram of the data series, we used the Least Squares Spectral Analysis (LSSA) method \cite{Scargle1982,Cumming2004}, where the amplitude at frequency $f$ was estimated by the amplitude of the best-fit oscillation at that frequency. We evaluated the periodogram at a set of 1334 trial frequencies, evenly spaced between \unit[100]{pHz} (arbitrarily chosen; a period of about 300 years, much longer than the four--year span of the data set) and $\unit[10]{\upmu Hz}$ (a period of about a day, the time it typically took to obtain one $d_\mathrm{n}$ estimate). An axion DM signal, with expected coherence set by $\Delta f \sim 10^{-6} f~$\footnotemark[1], is narrower than the spectral resolution ($\unit[7.49]{nHz}$, the inverse span of the data set) in the whole range of frequencies we were sensitive to. In the LSSA fit, we assumed the free offset to be zero on the grounds that the experiment had already delivered a zero-compatible result for the permanent time-independent neutron EDM \cite{Baker2006, *Baker2006B,Pendlebury2015}. The periodogram of the long time--base dataset is shown as a black line in Fig.\,\ref{fig:ILL_detection}. To obtain the expected distribution of the periodogram, we performed Monte Carlo (MC) simulations. At each frequency, we estimated the cumulative distribution function (CDF) of the LSSA power. Extreme events in the tails of the distribution are expensive to access directly with MC.
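The LSSA amplitude estimate described above reduces to a weighted least-squares fit of a single sinusoid at each trial frequency. A minimal sketch for unevenly spaced data with unequal uncertainties, with the free offset fixed to zero as in the text (the function name is ours):

```python
import numpy as np

def lssa_amplitude(t, y, sigma, f):
    """Best-fit amplitude sqrt(A^2 + B^2) of A*sin(2*pi*f*t) + B*cos(2*pi*f*t),
    obtained by weighted least squares on data y(t) with uncertainties sigma."""
    w = 1.0 / sigma**2
    X = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
    # Weighted normal equations: (X^T W X) p = X^T W y.
    a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return np.hypot(a, b)
```

Evaluated on the grid of trial frequencies, these amplitudes (or the corresponding powers) form the periodogram; the distribution of the power under noise alone is what the MC simulations just described are used to characterize.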
For this reason, to the discrete CDF estimates we fitted, at each $i^\textrm{th}$ frequency, the functional form of the LSSA-power CDF~\cite{Scargle1982}: \begin{equation} \label{eq:powerdistribution} F_i(\mathcal{P}) = 1 - A_i\,\exp(-B_i\, \mathcal{P}) \, , \end{equation} where $\mathcal{P}$ is the power, while $A_i$ and $B_i$ are fit parameters. The local $p$-values are given by \begin{equation} \label{eq:localpvalue} p_{\mathrm{local}, i} = 1 - F_i(\mathcal{P}_i) \, , \end{equation} where $\mathcal{P}_i$ is the LSSA power of the measured $d_\mathrm{n}$ time series at the $i^\textrm{th}$ frequency. If the local $p$\-/values at different trial frequencies were uncorrelated, the global $p$\-/value would be given by \cite{Algeri2016}: \begin{equation} \label{eq:pvalues} p_\mathrm{global} = 1 - (1 - p_\mathrm{local})^N \, , \end{equation} where $N$ is the number of trial frequencies. However, we did not need to make this assumption. Instead, we made use of the set of MC datasets. In each, we found the minimal local $p$\-/value and estimated its CDF, assuming it has the form (\ref{eq:pvalues}), but left $N$ as a free parameter. We found the best-fit value $N_\mathrm{effective} = 1026$. For each frequency, we marked the power necessary to reach the global $p$\-/values corresponding to $1,2,\ldots,5\,\sigma$ levels as orange lines in Fig.\,\ref{fig:ILL_detection}. The minimal local $p$\-/value of the dataset translates to a global $p$\-/value of 0.53, consistent with a non-detection. \begin{figure} \centering \includegraphics[width=\columnwidth]{detection_ill.pdf} \caption{The periodogram of the array of neutron EDM ($d_\mathrm{n}$) estimates from the ILL measurement (black line). We are sensitive to oscillations in the quantity $d_\mathrm{n} - \left( \mu_\mathrm{n} / \mu_\textrm{Hg} \right) \, d_\textrm{Hg}$, where $d_\textrm{Hg}$ is the EDM of the $^{199}$Hg atom. The mean of Monte Carlo (MC)-generated periodograms, assuming no signal is present, is depicted in green.
MC simulations are used to derive false--alarm thresholds (global $p$-values), marked in orange for the $1,2,\ldots,5\,\sigma$ levels (from bottom to top). The highest peak has a global $p$\-/value of 0.53, consistent with a non-detection.} \label{fig:ILL_detection} \end{figure} In order to obtain limits on the oscillation amplitude parameter, we again used MC simulations. We discretized the space of possible signals, spanned by their frequency and amplitude. We chose a sparser set of 200 frequencies, as we did not expect the detection sensitivity to vary sharply with frequency. For each discrete point, we generated a set of 200 MC datasets containing the respective, perfectly coherent signal and assumed that the oscillation is averaged over the duration of each run. In general, the sensitivity is phase-dependent, especially for periods comparable with the length of the dataset. For simplicity, we did not investigate the phase dependence and in the simulation took the phase to be random and uniformly distributed. For each fake dataset, we evaluated the LSSA amplitude only at the frequency of the signal, compared its distribution (extrapolated with the functional form of Eq.\,(\ref{eq:powerdistribution})) with the best-fit amplitude in the data, and defined the $p$\-/value to be left--sided. We found the 95\% confidence-level exclusion limit as the 0.05 isocontour of the $\mathrm{CL}_s$ statistic \cite{PDG2016}. The limit is shown as the red curve in Fig.\,\ref{fig:ecm_limits}. We are most sensitive to periods shorter than the timespan of the dataset ($\sim 4$ years), but rapidly lose sensitivity for periods shorter than the temporal spacing between data points ($\sim 2$ days), since the expected signal would essentially average to zero over these short time scales. \begin{figure} \centering \includegraphics[width=\columnwidth]{psi_ill_1e-26ecm.pdf} \caption{The 95\% C.L.
limits on the amplitude of oscillation in the quantity $d_\mathrm{n} - \left( \mu_\mathrm{n} / \mu_\textrm{Hg} \right) \, d_\textrm{Hg}$, as a function of frequency thereof. The limits from the long (ILL data) and short (PSI data) time--base analyses are depicted by the red and blue curves, respectively, with the area above these curves being excluded. The raw limits delivered by the analysis, with substantial noise, are depicted by the light lines, while the smoothed versions are given in bold. } \label{fig:ecm_limits} \end{figure} \section{Short time-base analysis} In 2009, the Sussex--RAL--ILL apparatus was moved to the new ultracold neutron source at the Paul Scherrer Institute (PSI), Villigen, Switzerland~\cite{Anghel2009,Lauss2011,Lauss2012,Lauss2014}, where a number of improvements were made~\cite{Baker2011,Afach2015USSA,Ban2016NANOSC}. In 2015, the apparatus was fully commissioned and began to take high-sensitivity EDM data. The whole data set, taken from August 2015 until the end of 2016, with a higher accumulated sensitivity than the ILL one, was considered in this analysis. For the PSI experiment's data, we performed a lower--level oscillation search on the array of $R$ measurements. Since an $R$ estimate was obtained every cycle ($\unit[\approx 300]{s}$), rather than every 1--2 days as for a $d_\mathrm{n}$ estimate, this search has an increased sensitivity to higher frequencies. Additionally, the analysis could benefit from the addition of 16 atomic cesium vapor magnetometers \cite{Knowles2009, Koch2015}, located directly above and below the precession chamber (inside the electrodes). This made it possible to account for the dominant time--dependent systematic effect on a cycle, rather than run, basis. This effect, encapsulated in $\Delta$ of Eq.\,\eqref{eq:R}, would have given rise to non-statistical temporal fluctuations if left uncorrected. Namely, $R$ is sensitive to drifts in the vertical gradients of the magnetic field.
While the thermal mercury atoms filled the chamber homogeneously, the center of mass of the ultracold neutron population was lower by several millimeters~\cite{Afach2014magmoment, Afach2015, Pendlebury2015}. To evaluate the correction, the drifts of the gradients were estimated on a cycle basis by fitting a second--order parametrization of the magnetic field to the measurements of the cesium magnetometers~\cite{WurstenThesis}. The center-of-mass shift was determined to be \unit[4]{mm} using the method described in \cite{Afach2014magmoment}. The measurement procedure involved working deliberately with gradients affecting $R$ (see the crossing--point method in \cite{Pendlebury2015}). Those intended gradients (up to \unit[60]{pT/cm} in \unit[10]{pT/cm} steps) were much larger than cycle--to--cycle fluctuations (\unit[< 2]{pT/cm} per day). Because higher-order shifts in $R$ were significant at these gradients, the large deliberate shifts could not be corrected using the cesium magnetometers. Additionally, while the cesium magnetometers were precise, their accuracy was limited by the calibration procedure. We defined as a \emph{sequence} a set of data, typically 2--3 days in duration, without a deliberate change in the magnetic-field gradient or a recalibration of the cesium magnetometers. When performing the LSSA fit, we allowed the free offset to be different in each sequence: \begin{equation} A\sin(2 \pi f t) + B\cos(2 \pi f t) + \sum_i C_i\,\Pi_i(t) \, , \end{equation} where $C_i$ is the free offset in the $i^\textrm{th}$ sequence and $\Pi_i(t)$ is a gate function equal to one in the $i^\textrm{th}$ sequence and zero elsewhere. This caused the short time--base analysis to lose sensitivity for periods longer than one sequence. It should also be mentioned that, at the time of this analysis, the PSI data were still blinded, whereby an unknown, but constant, $d_\mathrm{n}$ had been injected into them. This does not influence our analysis, as the free offsets are not considered further.
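In this fit model, the per-sequence offsets simply add one gate-function column per sequence to the least-squares design matrix. A sketch (names are ours):

```python
import numpy as np

def lssa_with_offsets(t, y, sigma, seq_id, f):
    """Weighted least-squares fit of
       A*sin(2*pi*f*t) + B*cos(2*pi*f*t) + sum_i C_i*Pi_i(t),
    where Pi_i(t) is 1 inside the i-th sequence and 0 elsewhere.
    Returns the oscillation amplitude sqrt(A^2 + B^2)."""
    w = 1.0 / sigma**2
    cols = [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
    for s in np.unique(seq_id):
        cols.append((seq_id == s).astype(float))  # gate function Pi_i
    X = np.column_stack(cols)
    p = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return np.hypot(p[0], p[1])
```

For periods much longer than a sequence, the sinusoid columns become nearly degenerate with the offset columns, which is exactly why the sensitivity is lost there.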
We split the $R$ time array into three sets: a control set of data without an applied electric field, and two sets sensitive to an oscillating EDM, namely with parallel and antiparallel applied electric and magnetic fields. A coherent oscillating-EDM signal would have opposite phases in the latter two sets, and be absent in the control set. We did not perform a common fit. Instead, the two sensitive data sets were treated separately in the LSSA fits, and later combined into a limit. Otherwise, the LSSA treatment was the same as in the long time--base analysis. We picked a set of $156\,198$ trial frequencies, spaced apart at intervals determined by the spectral resolution (the inverse of 506 days = $\unit[23]{nHz}$), which here also defines the signal width. The periodogram of the $R$ time array taken with the parallel-field configuration is shown in black in Fig.\,\ref{fig:PSI_detection}. There are two regions of expected rise in the oscillation amplitude due to the time structure of the data collection. The one around $\unit[28]{\upmu Hz}$ (the inverse of 10 hours) corresponds to the period of the electric-field reversal. The very narrow one around $\unit[3.3]{mHz}$ (the inverse of $\unit[300]{s}$) corresponds to the cycle repetition rate. There are five trial frequencies for which the $3\sigma$ false--alarm threshold is exceeded, two of which, including the largest excess with a $6\sigma$ significance, occur in a $\unit[100]{\upmu Hz}$ region around the inverse of \unit[300]{s}, while the other three are in the low-frequency region (inverse days) already excluded by the long time--base analysis. The periodograms for the other two datasets (not shown) are very similar. In the other sensitive set, there are three excesses above the $3\sigma$ threshold (the highest is $5\sigma$), all constrained to the same two regions. In the control dataset, only the $1\sigma$ threshold is exceeded.
The periodogram of the $R$ time array without the gradient-drift correction is shown in pink in Fig.\,\ref{fig:PSI_detection} to visualize the frequencies where the correction has an effect. \begin{figure} \centering \includegraphics[width=\columnwidth]{detection_psi_inset_gc.pdf} \caption{ Periodogram of the $R$ time array of the PSI experiment data, sensitive to oscillations in the quantity $d_\mathrm{n} - \left( \mu_\mathrm{n} / \mu_\textrm{Hg} \right) \, d_\textrm{Hg}$, taken with the $\v{E}$ and $\v{B}$ fields parallel (black line). The mean of MC--generated periodograms, assuming no signal, is depicted in green. MC is used to calculate the $1,2,\ldots,5\,\sigma$ false--alarm thresholds, depicted in light orange. For clarity, we also plot a smoothed version in orange. There are two regions where a rise in the amplitude is expected, namely around $\unit[28]{\upmu Hz}$ (inverse of 10 hours) and $\unit[3.3]{mHz}$ (inverse of 300 seconds), due to the time structure of the data taking (see the main text for more details). The periodogram of the non-gradient-drift-corrected data is shown in pink. } \label{fig:PSI_detection} \end{figure} A non--statistical excess in a periodogram of $R$ may be caused not only by a coherent oscillating signal; for example, fluctuations of a higher--order term in the magnetic field, not compensated by either the mercury or cesium magnetometers, may cause broad--band elevations in LSSA power. We defined strict requirements for an excess to be considered as induced by axion DM, as follows. Firstly, a significant ($>3\sigma$) excess in amplitude had to be observed in both sensitive datasets at the same frequency, but not in the control set. Secondly, the signals had to be in antiphase in the parallel and antiparallel datasets. Lastly, we required high coherence, i.e., a peak width equal to the spectral resolution of the dataset. None of the significant excesses passed our discovery criteria.
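The three discovery criteria can be expressed as a simple predicate; the numerical $3\sigma$ threshold and the phase tolerance below are illustrative choices of ours, not values from the text:

```python
import math

def passes_discovery_criteria(p_par, p_anti, p_control,
                              phase_par, phase_anti,
                              width, resolution,
                              p_3sigma=1.35e-3, phase_tol=0.5):
    """True if a candidate excess satisfies all three criteria in the text."""
    # 1. Significant in both sensitive sets, absent from the control set.
    significant = p_par < p_3sigma and p_anti < p_3sigma
    clean_control = p_control >= p_3sigma
    # 2. Antiphase between the parallel and antiparallel field datasets.
    dphi = (phase_par - phase_anti) % (2.0 * math.pi)
    antiphase = abs(dphi - math.pi) < phase_tol
    # 3. High coherence: peak no wider than the spectral resolution.
    narrow = width <= resolution
    return significant and clean_control and antiphase and narrow
```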
We derived a limit on the oscillation amplitude similarly to the long time--base analysis, with the exception that we required the product of the two sensitive sets' $\mathrm{CL}_s$ statistics to be 0.05. The limit is shown as the blue curve in Fig.\,\ref{fig:ecm_limits}. With the short time--base analysis, we were most sensitive to periods shorter than the timespan of a sequence (2--3 days), and lost sensitivity for periods shorter than the cycle duration ($\approx 5$ minutes). The PSI dataset has a higher accumulated sensitivity than the ILL dataset, so the limit baseline in the sensitive region is slightly better in the case of the PSI dataset. Following Eq.\,(\ref{eq:nEDM_axion}), we can interpret the limit on the oscillating neutron EDM as limits on the axion--gluon coupling in Eq.\,(\ref{Axion_couplings}). We present these limits in Fig.\,\ref{fig:axion_limits_v2}, assuming that axions saturate the local cold DM energy density $\rho_{\rm DM}^{\rm local} \approx 0.4~\textrm{GeV/cm}^3$ \cite{Catena2010}. Our peak sensitivity is $f_a/C_G \approx \unit[1 \times 10^{21}]{GeV}$ for $m_a \lesssim \unit[10^{-23}]{eV}$, which probes super-Planckian axion decay constants ($f_a > M_{\textrm{Planck}} \approx 10^{19}~\textrm{GeV}$), that is, interactions that are intrinsically feebler than gravity. \section{Axion-wind effect} We also perform a search for the axion-wind effect, Eq.\,(\ref{potential_axion-wind}), by partitioning the entire PSI dataset into two sets with opposite magnetic-field orientations (irrespective of the electric field) and then analyzing the ratio $R = \nu_\mathrm{n} / \nu_\textrm{Hg}$ similarly to our oscillating EDM analysis above.
The axion-wind effect would manifest itself through time-dependent shifts in $\nu_\mathrm{n}$ and $\nu_\textrm{Hg}$ (and hence $R$) at three angular frequencies:~$\omega_1 = m_a$, $\omega_2 = m_a + \Omega_{\textrm{sid}}$ and $\omega_3 = |m_a - \Omega_{\textrm{sid}}|$, with the majority of power concentrated in the $\omega_1$ mode. Also, the axion-wind signal would have an opposite phase in the two subsets. We find two overlapping $3\sigma$ excesses in the two subsets (at $\unit[3.42969]{\upmu Hz}$ and $\unit[3.32568]{mHz}$), neither of which has a phase relation consistent with an axion-wind signal. Following Eq.\,(\ref{potential_axion-wind}), we derive limits on the axion-nucleon coupling in Eq.\,(\ref{Axion_couplings}). We present these limits in Fig.\,\ref{fig:axion_limits_v2}, assuming that axions saturate the local cold DM energy density. Our peak sensitivity is $f_a/C_N \approx \unit[4 \times 10^{5}]{GeV}$ for $\unit[10^{-19}]{eV} \lesssim m_a \lesssim \unit[10^{-17}]{eV}$. \section{Conclusions} In summary, we have performed a search for a time-oscillating neutron EDM in order to probe the interaction of axion-like dark matter with gluons. We have also performed a search for an axion-wind spin-precession effect in order to probe the interaction of axion-like dark matter with nucleons. So far, no significant oscillations have been detected, allowing us to place limits on the strengths of such interactions. Our limits improve upon existing astrophysical limits on the axion-gluon coupling by up to 3 orders of magnitude and also improve upon existing laboratory limits on the axion-nucleon coupling by up to a factor of 40. Furthermore, we constrain a region of axion masses that is complementary to proposed ``on-resonance'' experiments in ferroelectrics \cite{CASPEr2014}. Future EDM measurements will allow us to probe even feebler oscillations and longer oscillation periods, which correspond to smaller axion masses.
\begin{figure} \centering \includegraphics[width=\columnwidth]{psi_ill_axion_limits_v7.pdf} \includegraphics[width=\columnwidth]{psi_ill_axion_wind_limits_v1.pdf} \caption{ Limits on the interactions of an axion with the gluons (top) and nucleons (bottom), as defined in Eq.\,(\ref{Axion_couplings}), assuming that axions saturate the local cold DM content. The regions above the thick blue and red lines correspond to the regions of parameters excluded by the present work at the 95\% confidence level (C.L.). The colored regions represent constraints from Big Bang nucleosynthesis (red, 95\% C.L.) \cite{Blum2014,StadnikThesis,Stadnik2015D}, supernova energy-loss bounds (green, order of magnitude) \cite{Graham2013,Raffelt1990Review,Raffelt2008LNP}, consistency with observations of galaxies (orange) \cite{Marsh2015Review,Marsh2015B,Schive2015,Marsh2017}, and laboratory searches for new spin-dependent forces (yellow, 95\% C.L.) \cite{Romalis2009_NF}. The nEDM, {\color{black}$\nu_\mathrm{n} / \nu_\textrm{Hg}$} and Big Bang nucleosynthesis constraints scale as $\propto \sqrt{\rho_a}$, while the constraints from supernovae {\color{black}and laboratory searches for new spin-dependent forces} are independent of $\rho_a$. The constraints from galaxies are relaxed if axions constitute a sub-dominant fraction of DM. We also show the projected reach of the proposed CASPEr experiment (dotted black line)~\cite{CASPEr2014}, and the parameter space for the canonical QCD axion (purple band). } \label{fig:axion_limits_v2} \end{figure} \begin{acknowledgments} We are grateful to Maxim Pospelov for helpful discussions. The experimental data were taken in part at the ILL in Grenoble and at PSI in Villigen. We acknowledge the excellent support by the technical groups of both institutions and by various services of the collaborating universities and research laboratories. Dedicated technical support by M.~Meier and F.~Burri is gratefully acknowledged.
We remember with gratitude the pioneering contributions of Professors K.~Smith and J.~M.~Pendlebury, without whom these experiments could never have taken place. This work was funded in part by the U.~K.~Science and Technology Facilities Council (STFC) through grants ST/N000307/1 and ST/M503836/1, as well as by the School of Mathematical and Physical Sciences at the University of Sussex. The original apparatus at ILL was funded by grants from the U.~K.’s PPARC (now STFC), and we would like to thank the generations of engineers, students and Research Fellows who contributed to its development. We gratefully acknowledge the support of the Swiss National Science Foundation under grant numbers 200020\_172639, 200020\_163413, and 200020\_157079. This work has been supported in part by the National Science Centre, Poland, under grant No. UMO2015/18/M/ST2/00056. This work has been supported by the Research Foundation - Flanders (FWO). The LPC Caen and the LPSC acknowledge the support of the French Agence Nationale de la Recherche under Reference No.~ANR-09-BLAN-0046. M.~F.~was supported partly by the STFC Grant ST/L000326/1 and also by the European Research Council under the European Union's Horizon 2020 programme (ERC project 648680 DARKHORIZONS). V.~V.~F.~was supported by the Gutenberg Research College Fellowship and by the Australian Research Council. D.~J.~E.~M.~was supported by a Royal Astronomical Society postdoctoral fellowship hosted at King's College London. P.~M.~M.~was supported by the State Secretariat for Education, Research and Innovation (SERI) - Federal Commission for Scholarships for Foreign Students (FCS) grant \#2015.0594. Y.~V.~S.~was supported by the Humboldt Research Fellowship and in part by the Australian Research Council. E.~W.~was supported by a PhD Fellowship of the Research Foundation - Flanders (FWO). \end{acknowledgments}
\section{Introduction} This note addresses Girsanov's question\footnote{See the remark on page~296 in \cite{Girsanov_1960}.} in the context of (not necessarily one-dimensional) solutions to stochastic differential equations: Under which conditions is a stochastic exponential a true martingale? The condition provided here is of a probabilistic nature and both sufficient and necessary. It relates the martingale property of a local martingale to the almost sure finiteness of a certain integral functional under a related measure. To illustrate the condition informally, assume for a moment that the stochastic differential equation \begin{align*} \mathrm{d} X_t = b(t,X) \mathrm{d} t + \sigma(t, X) \mathrm{d} W_t, \qquad X_0 = x_0 \end{align*} has a weak solution $X$, defined on some probability space, for some progressively measurable functionals $b$ and ${\sigma}$. Consider a progressively measurable functional $\mu$ and the corresponding nonnegative local martingale $Z$, given by \begin{align*} Z_t &:= \exp\left( \int_{0}^{t}\mu(s,X)^\mathsf{T} \mathrm{d} X_s -\int_{0}^{t} \left(\frac{1}{2} \mu(s,X)^\mathsf{T} a(s,X) \mu(s,X) + \mu(s,X)^\mathsf{T} b(s,X)\right) \mathrm{d} s\right), \end{align*} where $a = \sigma \sigma^\mathsf{T}$. (Below we will generalize this setup to allow $X$ to explode and $Z$ to hit zero.) First, Proposition~\ref{P existence sde} below shows that the stochastic differential equation \begin{align*} \mathrm{d} Y_t = \left(b(t,Y) + a(t, Y) \mu(t,Y)\right)\mathrm{d} t + \sigma(t, Y) \mathrm{d} W_t, \qquad Y_0 = x_0 \end{align*} also has a weak solution $Y$, at least up to the first time that the process $$K := \int_{0}^\cdot \mu(s,Y)^\mathsf{T} a(s,Y) \mu(s,Y) \mathrm{d} s$$ explodes. Indeed, if $Z$ is a uniformly integrable martingale then Girsanov's theorem, applied to the Radon--Nikodym derivative $Z_\infty$, yields directly that a weak solution $Y$ exists and that $K$ explodes with probability zero.
This note shows that the reverse direction also holds; namely, if the process $K$ with an appropriate choice of weak solution $Y$ does not explode, then $Z$ is a uniformly integrable martingale. We refer the reader to Theorem~\ref{T 2} below for the precise statement. The conditions in this note are sharp and hold under minimal assumptions but are purely probabilistic and, in particular, often require additional existence and uniqueness results to be applicable. Two examples in Section~\ref{S:ex} illustrate these subtle points. A third example highlights the relevance of the underlying probability space. \subsection*{Related literature} The conditions in \cite{Kabanov/Liptser/Shiryaev:1979}, \cite{Engelbert_Senf_1991}, \cite{BenAri}, and \cite{Blei_Engelbert_2009} are closely related to those discussed here, as they also involve the explosiveness of the quadratic variation of the local martingale's stochastic logarithm. \cite{CFY} also studies the martingale property in the context of a martingale problem. \cite{McKean_1969}, \cite{Elworthy2010}, and \cite{Karatzas_Ruf_2013} work out a precise relationship between explosions of solutions to stochastic differential equations and the martingale property of related processes. \cite{Engelbert_Schmidt_1984} provides analytic conditions on the functionals $b, \sigma$, and $\mu$ for the martingale property of the local martingale $Z$, in the context of time-homogeneous conditions. \cite{Stummer_1993} provides further analytic conditions if the dispersion function is the identity. In the one-dimensional case, a full analytic characterization of the martingale property of $Z$ is provided by \cite{MU_martingale}. In the specific setup of ``removing the drift,'' \cite{Rydberg_1997} and, in the context of stochastic volatility models, \cite{Sin} give easily verifiable conditions. 
\cite{Blanchet_Ruf_2012} describes a methodology to decide on the martingale property of a nonnegative local martingale, based on weak convergence considerations. For further pointers to the vast literature in this area, we refer the reader to \cite{Ruf_Novikov}. \section{Setup} We now formally introduce the setup of this work. We first consider a specific martingale problem whose solution $\P$ is the starting point of our analysis. We then introduce a nonnegative $\P$--local martingale $Z$. In Section~\ref{S:main} we shall then study a necessary and sufficient condition for $Z$ to be a (uniformly integrable) $\P$--martingale. \subsection{Generalized local martingale problem} \label{S:2.1} Fix $d \in \mathbb{N}$, an open set $E\subset {\mathbb{R}}^{d}$, and a ``cemetery state'' $\Delta \notin {\mathbb{R}}^d$. Let $\Omega$ denote the set of all paths $\omega: [0,\infty) \rightarrow E \bigcup \{\Delta\}$ such that $\omega(t) = \omega(t \wedge \bm{\zeta}(\omega))$ for all $t \geq 0$ and $\omega$ is continuous on $[0, \bm{\zeta}(\omega))$, where $$\bm{\zeta}(\omega) :=\inf \{t \geq 0\mid \omega(t) = \Delta\}.$$ Here and in the following we use the convention $\inf \emptyset := \infty$. Let $X$ denote the canonical process and $\mathbb{M}=(\mathcal{M}_t)_{t \geq 0}$ the right-continuous modification of the natural filtration generated by $X$, and set $\mathcal M := \mathcal M_\infty := \bigvee_{t \geq 0} \mathcal M_t$. For all closed sets $F \subset E$, introduce the stopping times $$\bm \rho_{F}:=\inf \{t \geq 0 \mid X_t \notin F\}.$$ For a probability measure $\P$ on $(\Omega, \mathcal{M})$ and a stopping time $\bm \eta$, the measurable mapping $s: \Omega \rightarrow \Omega,\, \omega \mapsto \omega(\cdot \wedge \bm \eta)$ induces the push-forward measure $\P^{\bm \eta}$, given by $\P^{\bm \eta}(\cdot) = \P(s^{-1}(\cdot))$.
Similarly, for a stochastic process $Y$ and a stopping time ${\bm \eta}$ we write $Y^{\bm \eta}$ to denote the stopped version of $Y$; that is, $Y^{\bm \eta}_t = Y_{{\bm \eta} \wedge t}$ for each $t \geq 0$. Call a function $g: [0,\infty )\times \Omega \rightarrow {\mathbb{R}}^n$, for some $n \in \mathbb{N}$, \emph{progressively measurable} if $g$, restricted to $[0,t] \times \Omega$, is $\mathcal B([0,t]) \otimes \mathcal M_t$--measurable for each $t \geq 0$. For example, the function $g$ is progressively measurable if $g(\cdot,\mathrm x) = \mathbf{g}(\mathrm x(\cdot))$ for all $\mathrm x \in \Omega$, where $\mathbf{g}: E \bigcup \{\Delta\} \rightarrow {\mathbb{R}}$ is measurable. The next definition is in the spirit of Section~1.13 in \cite{Pinsky}: \begin{definition}[Generalized local martingale problem] \label{D generalized} Fix an initial point $x_0 \in E$. Let ${a}: [0,\infty)\times \Omega \rightarrow{\mathbb{R}}^{d \times d}$ and ${b}:[0,\infty)\times \Omega \rightarrow{\mathbb{R}}^{d}$ denote two progressively measurable functions such that the function ${a}$ is symmetric and non-negative definite. \begin{itemize} \item We call a probability measure $\P$ on $(\Omega, \mathcal{M})$ a solution to the generalized local martingale problem corresponding to the quadruple $(E, x_0, {a}, {b})$ if $\P(X_0 = x_0) = 1$ and there exists a nondecreasing sequence $(E_n)_{n \in \mathbb{N}_0}$ of closed subsets of $E$ with $E = \bigcup_{n \in \mathbb{N}_0} E_n$ such that $\P(\bm\rho_{E_n} = \bm\zeta<\infty) = 0$ and \begin{align*} f\left(X^{\bm\rho_{E_{n}}}_\cdot\right) - \int_{0}^{\cdot\wedge \bm\rho_{E_{n}}} \left( \sum_{i=1}^{d} {b}_{i}(t,X) f_{x_{i}}(X_t) + \frac{1}{2} \sum_{i,j=1}^{d} {a}_{i,j}(t,X) f_{x_{i},x_{j}}(X_t) \right) \mathrm{d} t \end{align*} is a $\P$--local martingale for each $n \in\mathbb{N}_0$ and twice continuously differentiable function $f: E \rightarrow {\mathbb{R}}$ with partial derivatives $f_{x_{i}}$ and $f_{x_{i},x_{j}}$. 
\item Given a stopping time ${\bm \eta}$ we say that a probability measure $\P$ is a solution to the generalized local martingale problem corresponding to the quadruple $(E, x_0, {a}, {b})$ on $[\![ 0, {\bm \eta}[\![$ if there exists a nondecreasing sequence of stopping times $({\bm \eta}_n)_{n \in \mathbb{N}}$ with $\lim_{n \uparrow \infty} {\bm \eta}_n = {\bm \eta}$, $\P$--almost surely, such that the push-forward measure $\P^{{\bm \eta}_n}$ is a solution to the generalized martingale problem corresponding to the quadruple $(E, x_0, {a}^n, {b}^n)$, for each $n \in \mathbb{N}$. Here, ${a}^n(t,\mathrm x) := {a}(t,\mathrm x) \mathbf{1}_{t<{\bm \eta}_n(\mathrm{x})}$ and ${b}^n(t,\mathrm x) := {b}(t,\mathrm x) \mathbf{1}_{t<{\bm \eta}_n(\mathrm{x})}$ for all $(t, \mathrm x) \in [0,\infty) \times \Omega$. \qed \end{itemize} \end{definition} Observe that the initial point $x_0$ is fixed in Definition~\ref{D generalized}; in particular, the solution to a generalized local martingale problem here is not a family of probability measures indexed over the initial point, but one probability measure only. See, for example, \cite{Engelbert_2000} for this subtle point. This weaker requirement allows us to apply the characterization of this note to a larger class of processes. Throughout this note, fix $d \in \mathbb{N}$, an open set $E\subset {\mathbb{R}}^{d}$, an initial point $x_0 \in E$, and progressively measurable functions $b: [0,\infty)\times\Omega \rightarrow{\mathbb{R}}^{d}$ and $a: [0,\infty)\times \Omega \rightarrow{\mathbb{R}}^{d \times d}$, such that $a$ is symmetric and nonnegative definite. We shall work under the following assumption: \begin{assumption} There exists a solution $\P$ to the generalized local martingale problem corresponding to $(E,x_0, a,b)$. \qed \end{assumption} Various sufficient conditions for this standing assumption to hold are provided in Section~1.2 of \cite{Cherny_Engelbert} and in Sections~1.7--1.14 of \cite{Pinsky}. 
\subsection{A nonnegative local martingale} \label{S:2.2} In this subsection, we introduce a $\P$--local martingale $Z$ as a stochastic exponential. Towards this end, we fix a progressively measurable function $\mu:[0,\infty)\times\Omega \rightarrow{\mathbb{R}}^{d}$ and make the following assumption: \begin{assumption} \label{A2} We have \[ \pushQED{\qed} \P\left(\text{the function $\, \, [0,\infty) \ni t \mapsto \int_0^{t \wedge \bm \zeta} \mu(s, X)^\mathsf{T} a(s,X) \mu(s,X) \mathrm{d} s\,\,$ jumps to $\infty$} \right) = 0. \qedhere \popQED \] \end{assumption} Recall now the nondecreasing sequence $(E_n)_{n \in \mathbb{N}_0}$ of Definition~\ref{D generalized} and consider the stopping times $$\widetilde{\bm{\tau}}_n :=\inf \left\{t \geq 0 \left| \int_0^{t} \mu(s, X)^\mathsf{T} a(s,X) \mu(s,X) \mathrm{d} s > n\right.\right\}, \qquad \bm \theta_n := \widetilde{ \bm \tau}_n \wedge \bm \rho_{E_n} \wedge n$$ for all $n \in \mathbb{N}_0$, and $\bm\theta := \lim_{n \uparrow \infty} \bm\theta_n$. Observe that $\P(\bm\theta_n < \bm\theta)=1$ for all $n \in \mathbb{N}_0$ thanks to Standing Assumption~\ref{A2}. Therefore, the nondecreasing sequence $( \bm\theta_n)_{n \in \mathbb{N}}$ of stopping times announces $\bm \theta$. Next, the processes $$M^n := \int_0^{\cdot \wedge \bm \theta_n} \mu(s,X) ^\mathsf{T} \mathrm d \left(X(s) - \int_0^s b(t,X) \mathrm d t\right)$$ are well defined and indeed uniformly integrable $\P$--martingales, for all $n \in \mathbb{N}_0$. Moreover, for all $m,n \in \mathbb{N}_0$ with $m \leq n$, we have $M^m \equiv (M^n)^{\bm \theta_m}$, and thus, we may ``stick them together'' to obtain the process $$M := \sum_{n =1}^\infty M^n \mathbf{1}_{[\![ \bm \theta_{n-1}, \bm \theta_n [\![},$$ which satisfies $M^{\bm \theta_n} \equiv M^n$ for all $n \in \mathbb{N}_0$ and thus, is a local martingale on $[\![ 0, \bm \theta[\![$. 
To provide some intuition, the process $M$ is the stochastic integral of the process $\mu(\cdot,X)$ with respect to the local martingale part of $X$ up to the first time that either $X$ or the stochastic integral explodes. We also introduce the process $\langle M \rangle$ by \begin{align*} \langle M\rangle_t &:= \int_0^{t \wedge \bm \theta} \mu(s, X)^\mathsf{T} a(s,X) \mu(s,X) \mathrm{d} s \end{align*} for all $t \geq 0$. Now, define the nonnegative process $Z$ by \begin{align} \label{E T2 M} Z_t &:= \exp\left(M_t -\frac{1}{2} \langle M\rangle_t\right) \,\, \text{for all $t < \bm \theta$} \qquad \text{and} \qquad Z_t := \lim_{s \uparrow \bm \theta } Z_s \,\, \text{for all $t \geq \bm \theta$}. \end{align} By the supermartingale convergence theorem, the limit always exists and the process $Z$ is a nonnegative continuous $\P$--local martingale; see also Lemma~4.14 and Appendix~A in \cite{Ruf_Larsson}. Consider now the stopping times \begin{align*} {\bm{\tau}_n} :=\inf \left\{t \geq 0 \left| \langle M \rangle_t > n\right.\right\} \end{align*} for all $n \in \mathbb{N}_0$. Then we have ${\bm{\tau}_n} \geq \widetilde{\bm{\tau}}_n$ and Novikov's condition yields that the $\P$--local martingale $Z^{\bm{\tau}_n}$ is a uniformly integrable $\P$--martingale for each $n \in \mathbb{N}$. \section{Main result} \label{S:main} We are interested in finding a necessary and sufficient condition for the nonnegative $\P$--local martingale $Z$ to be a true $\P$--martingale. The condition in this note is probabilistic in nature and is formulated under a certain probability measure that is a solution to the generalized local martingale problem corresponding to $(E,x_0,a, \widehat{b})$ on $[\![ 0, \bm{\theta}[\![$, where \begin{align* \widehat{b}(t,\mathrm x) := b(t,\mathrm x) + a(t,\mathrm x) \mu(t,\mathrm x) \end{align*} for all $(t,\mathrm x) \in [0,\infty) \times \Omega$. 
Note that if $Z$ is a uniformly integrable $\P$--martingale then a solution to this generalized local martingale problem is given by ${\mathbf{Q}}$, defined by $\mathrm{d} {\mathbf{Q}} = Z_\infty \mathrm{d} \P$, thanks to Girsanov's theorem. The following result yields that a solution to this generalized local martingale problem exists even if $Z$ is not a $\P$--martingale: \begin{proposition}[Existence of a solution to the related martingale problem] \label{P existence sde} The generalized local martingale problem corresponding to $(E,x_0,a, \widehat{b})$ on $[\![ 0, \bm{\theta}[\![$ has a solution ${\mathbf{Q}}$ that also satisfies $(\mathrm{d} {\mathbf{Q}}|_{\mathcal M_{\bm \tau_n}}) / (\mathrm{d} \P|_{\mathcal M_{\bm \tau_n}}) = Z^{\bm \tau_n}_\infty$ for each $n \in \mathbb{N}$. \end{proposition} \begin{proof} For any stopping time $\bm \eta$, define the sigma algebra \begin{align*} \mathcal M_{\bm \eta-} := \sigma(X_0) \vee \sigma\left\{A \cap \{\bm \eta > t\} \mid A \in \mathcal M_t, t \geq 0\right\}. \end{align*} Here, $\sigma(X_0) \subset \mathcal M_0$ denotes the sigma algebra generated by $X_0$. Define now the sequence $({\mathbf{Q}}_n)_{n \in \mathbb{N}}$ of probability measures by $\mathrm{d}{\mathbf{Q}}_{n}=Z^{\bm{\tau}_n}_\infty\mathrm{d} \P$ and observe that ${\mathbf{Q}}_n(A) = {\mathbf{Q}}_m(A)$ for all $A \in \mathcal{M}_{(\bm{\tau}_n \wedge \bm{\tau}_m)-}$ and $n,m \in \mathbb{N}$. Thus, the set function ${\mathbf{Q}}: \bigcup_{n \in \mathbb{N}} \mathcal{M}_{\bm{\tau}_n-} \rightarrow [0,1]$ with $A \mapsto {\mathbf{Q}}_n(A)$ for all $A \in \mathcal{M}_{\bm{\tau}_n-}$ is well defined. A standard extension theorem, such as Theorem~V.4.1 in \cite{Pa}, then yields that ${\mathbf{Q}}$ can be extended to a probability measure on $ \bigvee_{n \in \mathbb{N}} \mathcal{M}_{\bm{\tau}_n-}$; see also \cite{F1972} or Appendix~B in \cite{CFR2011}. 
We may now extend this measure to a probability measure on $(\Omega,\mathcal{M})$; see Theorem~E.2 in \cite{Perkowski_Ruf_2014} and use $\bigvee_{n \in \mathbb{N}} \mathcal{M}_{\bm{\tau}_n-} = \mathcal{M}_{(\lim_{n \uparrow \infty} \bm{\tau}_n)-}.$ With a slight misuse of notation, we again write ${\mathbf{Q}}$ for this probability measure, constructed via an extension argument. Next, fix $n \in \mathbb{N}$ and $A \in \mathcal M_{\bm \tau_n}$ and note that $\bm \tau_n(\omega) < \bm \tau_{n+1}(\omega) $ on $\{\bm \tau_n<\infty\}$ since $\langle M \rangle(\omega)$ is continuous and does not jump to infinity, for any $\omega \in \Omega$, by construction of the stopping time $\bm \theta$. This then yields \begin{align*} {\mathbf{Q}}(A) &= {\mathbf{Q}}(A \cap \{\bm\tau_n < \infty\}) + {\mathbf{Q}}(A \cap \{\bm \tau_n = \infty\}) \\&= {\mathbb{E}}^{\P}\left[Z^{\bm \tau_{n+1}}_\infty \mathbf{1}_{A \cap \{\bm \tau_n < \infty\}}\right] + {\mathbb{E}}^{\P}\left[Z^{\bm \tau_{n}}_\infty \mathbf{1}_{A \cap \{\bm \tau_n = \infty\}}\right] = {\mathbb{E}}^{\P}\left[Z^{\bm \tau_{n}}_\infty \mathbf{1}_A\right] \end{align*} since $A \cap \{\bm \tau_n = \infty\} \in \mathcal M_{\bm \tau_n-}$. The statement then follows. \end{proof} Note that it is a common approach to use a change of measure to prove the existence of a solution to a given martingale problem, as in the proof of Proposition~\ref{P existence sde}; see, for example, \cite{SV_multi}. However, usually only \emph{equivalent} changes of measures are considered. \begin{remark} \label{R uniqueness} Observe that Proposition~\ref{P existence sde} does not make any assertion concerning the uniqueness of the measure ${\mathbf{Q}}$. In general, such uniqueness does not hold. 
However, after fixing a probability measure $\P$ from the set of solutions to the generalized local martingale problem corresponding to $(E,x_0,a, b)$, the probability measure ${\mathbf{Q}}$ of Proposition~\ref{P existence sde} is uniquely determined on $\bigvee_{n \in \mathbb{N}} \mathcal{M}_{\bm{\tau}_n}$. \qed \end{remark} We are now ready to state a characterization of the martingale property of the $\P$--local martingale $Z$: \begin{theorem}[Characterization of martingale property] \label{T 2} With ${\mathbf{Q}}$ denoting the measure of Proposition~\ref{P existence sde}, the following equivalences hold: The $\P$--local martingale $Z$, given in \eqref{E T2 M}, is a $\P$--martingale if and only if \begin{align} \label{E suffCond} {\mathbf{Q}}\left(\int_{0}^{t \wedge \bm \theta} \mu( s,X)^\mathsf{T} a(s,X) \mu(s,X)\mathrm{d} s < \infty\right) = 1 \end{align} for all $t \geq 0$. The $\P$--local martingale $Z$ is a uniformly integrable $\P$--martingale if and only if \begin{align} \label{E suffCond2} {\mathbf{Q}}\left(\int_{0}^{ \bm\theta} \mu( s,X)^\mathsf{T} a(s,X) \mu(s,X)\mathrm{d} s < \infty\right) = 1. \end{align} \end{theorem} \begin{proof} We start by assuming that $Z$ is a $\P$--martingale. We need to show that ${\mathbf{Q}}(A_n) = 0$ for the nondecreasing sequence of events $(A_n)_{n \in \mathbb{N}}$, defined by \begin{align*} A_n := \left\{ \langle M\rangle_{n \wedge \bm{\theta}} = \infty\right\} = \bigcap_{k \in \mathbb{N}} \{\bm \tau_k < n \wedge \bm \theta \} \in \mathcal{M}_{(n \wedge \bm{\theta})-} \end{align*} for all $n \in \mathbb{N}$. Fix $n \in \mathbb{N}$ and observe that the martingale property of $Z$ yields a measure ${\mathbf{Q}}^Z$, defined by $\mathrm{d} {\mathbf{Q}}^Z = Z_n \mathrm{d} \P= Z_{n \wedge \bm{\theta}} \mathrm{d} \P$. 
Since $ \mathcal{M}_{(n \wedge \bm{\theta})-} = \bigvee_{m \in \mathbb{N}} \mathcal{M}_{(n \wedge \bm{\tau}_m \wedge \bm\theta)-}$, it is easy to see that ${\mathbf{Q}}^Z|_{\mathcal{M}_{(n \wedge \bm{\theta})-}} = {\mathbf{Q}}|_{\mathcal{M}_{(n \wedge \bm{\theta})-}}$. Thus, we have ${\mathbf{Q}}(A_n) = {\mathbf{Q}}^Z(A_n) = {\mathbb{E}}^\P[Z_n \mathbf{1}_{A_n}] = 0$ since $Z_n = 0$ $\P$--almost surely on $A_n$ by the Dambis-Dubins-Schwarz theorem. For the reverse direction, note that \begin{align*} {\mathbf{Q}}(A) = \lim_{n \uparrow \infty} {\mathbf{Q}}(A \cap \{\bm \tau_n > t\wedge \bm \theta\}) \leq \lim_{n \uparrow \infty} {\mathbb{E}}^\P\left[Z^{ \bm \tau_n}_\infty \mathbf{1}_A \right] = 0 \end{align*} for all $t \geq 0$ and $A \in \mathcal M_{t \wedge \bm \theta}$ with $\P(A) = 0$. Here, we have used the assumption, namely that \eqref{E suffCond} holds, in the first equality. Thus, ${\mathbf{Q}}$ is absolutely continuous with respect to $\P$ on $\mathcal M_{t \wedge \bm \theta}$ for each $t \geq 0$. Define now the $\P$--martingale $R$ by $$R_t := \frac{\mathrm{d} {\mathbf{Q}}|_{\mathcal M_{t \wedge \bm \theta}}}{\mathrm{d} \P |_{\mathcal M_{t \wedge \bm \theta}}}$$ for each $t \geq 0$. The fact that $R^{\bm\tau_n \wedge n} \equiv Z^{\bm \tau_n \wedge n}$ for each $n \in \mathbb{N}$ and taking limits then yield $R \equiv Z$. Thus, $Z$ is a $\P$--martingale. The second equivalence is proven in the same way. \end{proof} We refer the reader to \cite{Musiela_1986}, \cite{Engelbert_Senf_1991}, \cite{Khos_Salminen_Yor}, and \cite{MU_integral} for analytic conditions that yield \eqref{E suffCond} in the case $d=1$. Theorem~\ref{T 2} extends Theorem~1 in \cite{BenAri} to a bigger class of stochastic differential equations; moreover, Proposition~\ref{P existence sde} yields that one does not need to assume the existence of the measure ${\mathbf{Q}}$, as it always exists. 
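As a quick numerical illustration of the first equivalence in Theorem~\ref{T 2}, consider the simplest possible parameter constellation (illustrative only, not taken from the examples of this note): $d=1$, $a \equiv 1$, $b \equiv 0$ and a constant $\mu$. Then $M_t = \mu W_t$ for a $\P$--Brownian motion $W$, $\langle M \rangle_t = \mu^2 t$ is finite for every $t$ under any measure, so \eqref{E suffCond} holds trivially and $Z$ must be a true $\P$--martingale. A Monte Carlo estimate of ${\mathbb{E}}^{\P}[Z_t]$ is consistent with this; the following sketch samples $Z_t = \exp(\mu W_t - \mu^2 t/2)$ exactly:

```python
import numpy as np

# Toy check of the martingale property of Z = exp(M - <M>/2) in the
# simplest setting d = 1, a = 1, b = 0 with a constant mu (illustrative
# values, not taken from the text): M_t = mu * W_t for a Brownian motion
# W, so Z_t = exp(mu * W_t - mu**2 * t / 2) and E[Z_t] should equal 1.
rng = np.random.default_rng(0)
mu, t, n_paths = 1.0, 1.0, 200_000

W_t = np.sqrt(t) * rng.standard_normal(n_paths)   # exact law of W_t
Z_t = np.exp(mu * W_t - 0.5 * mu**2 * t)

print(abs(Z_t.mean() - 1.0))   # small: Z is a true martingale here
```

Here no path discretization is needed, since the law of $W_t$ is sampled exactly.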
We remark that in the one-dimensional time-homogeneous case, under some additional regularity conditions, an analytic characterization of the martingale property of $Z$ has been obtained; most notably, by \cite{MU_martingale}. This characterization is given in terms of the behavior of $X$ under $\P$ and ${\mathbf{Q}}$ at the boundary points of the one-dimensional interval $E$. \begin{corollary}[Pathwise integrability] \label{C:1} If \begin{align*} \int_{0}^{{t \wedge \bm \theta}(\mathrm x)} \mu(s, \mathrm x)^\mathsf{T} a(s,\mathrm x) \mu(s,\mathrm x) \mathrm{d} s < \infty \end{align*} holds for all $(t,\mathrm x) \in[0,\infty) \times\Omega$ then $Z$ is a $\P$--martingale. Moreover, if \begin{align*} \int_{0}^{{\bm \theta}(\mathrm x)} \mu(s, \mathrm x)^\mathsf{T} a(s,\mathrm x) \mu(s,\mathrm x) \mathrm{d} s < \infty \end{align*} holds for all $\mathrm x \in \Omega$ then $Z$ is a uniformly integrable $\P$--martingale. \end{corollary} \begin{proof} The statement follows directly from Theorem~\ref{T 2}. \end{proof} \begin{remark} We emphasize certain caveats concerning Theorem~\ref{T 2}: \begin{itemize} \item The choice of a solution to the generalized local martingale problem corresponding to $(E,x_0,a, b)$ matters for the question of whether the local martingale $Z$ is a martingale. Indeed, as Example~\ref{ex1} illustrates, the local martingale $Z$ might be a true martingale under one measure and a strict local martingale under another measure. \item However, the choice of measure ${\mathbf{Q}}$ among the ones that satisfy the conditions in Proposition~\ref{P existence sde}, namely the ones that agree on $\bigvee_{n \in \mathbb{N}} \mathcal M_{\bm \tau_n}$, is not relevant. This is due to the fact that \eqref{E suffCond} and \eqref{E suffCond2} hold either for all such probability measures with the prescribed ``local'' distribution or for none. (See also Remark~\ref{R uniqueness}.)
\item The generalized local martingale problem corresponding to $(E,x_0,a, \widehat{b})$ might have a solution that is unique among the subset of non-explosive solutions, but that is not unique among all solutions. Nevertheless, Theorem~\ref{T 2} may be applied, but the probability measure ${\mathbf{Q}}$ needs to be chosen carefully. See Example~\ref{ex2} for an illustration. \item We have not assumed that the $\P$--local martingale $Z$ is strictly positive. For example, consider the parameter constellation $d=1$, $E = (0,\infty)$, $x_0=1$, and $$a(t,\mathrm x) = 1, \qquad b(t,\mathrm x) = 0, \qquad \mu(t,\mathrm x) = \mathbf{1}_{\bm \zeta(\mathrm x) > t } \frac{1}{\mathrm x(t)}$$ for all $(t, \mathrm x) \in [0,\infty) \times \Omega$. The solution to the generalized local martingale problem corresponding to $(E, x_0, a, b) = ((0,\infty),1,1,0)$ then is Brownian motion killed when hitting zero and is unique. In particular, the stopping time $\bm \theta$ of Subsection~\ref{S:2.2} is the first time that the Brownian motion leaves $E$; that is, $\bm \theta = \bm \zeta$. Note that the $\P$--local martingale $Z$ is a $\P$--Brownian motion stopped in zero. Now, under ${\mathbf{Q}}$, the unique solution to the generalized local martingale problem corresponding to $(E, x_0, a, \mu)$, the process $X$ is a three-dimensional Bessel process. In particular, argued for example via Feller's test of explosions, we have ${\mathbf{Q}}(\bm \zeta = \infty) = 1$ and thus \begin{align*} {\mathbf{Q}}\left(\int_0^t \frac{1}{X_s^2} \mathrm{d} s < \infty\right) = 1 \end{align*} for all $t \geq 0$, which yields \eqref{E suffCond}. However, \eqref{E suffCond2} fails. Thus we obtain the obvious statement that the $\P$--local martingale $Z$ is a true $\P$--martingale, but not uniformly integrable. 
\item The statement of Corollary~\ref{C:1} is wrong, in general, if we replace the underlying filtered space $(\Omega, \mathcal M, \mathbb M)$ by the space of $E$--valued continuous paths, along with the right-continuous modification of the canonical filtration. This is illustrated in Example~\ref{ex3}. \qed \end{itemize} \end{remark} \section{Examples} \label{S:ex} The examples of this section illustrate the subtle points in the application of Theorem~\ref{T 2} and Corollary~\ref{C:1}. \begin{example}[Non-uniqueness] \label{ex1} Let $d=1$, $E = (0,\infty),$ and $x_0 = 1$. Set \begin{align*} a(t,\mathrm x) = \mathbf{1}_{\mathrm x(t) \neq 1}, \qquad b(t, \mathrm x) = \mathbf{1}_{\bm \zeta(\mathrm x) > t } \mathbf{1}_{\mathrm x(t) \neq 1} \frac{1}{\mathrm x(t)}, \qquad \mu(t, \mathrm x) = -b(t,\mathrm x) \end{align*} for all $(t, \mathrm x) \in [0,\infty) \times \Omega$. The generalized local martingale problem corresponding to the quadruple $(E, x_0, a, b)$ has a solution $\P_1$; indeed $\P_1(X_\cdot \equiv 1)=1$ satisfies all conditions. However, the solution is not unique. Another solution $\P_2$ would be the one corresponding to the three-dimensional Bessel process, started in one. Observe that the process $Z$ is a local martingale in each case. In the first case, it is almost surely constant, that is, $\P_1(Z_\cdot\equiv 1) = 1$, and thus the process $Z$ is a (uniformly integrable) $\P_1$--martingale. In the second case, It\^o's formula yields that $Z$ is distributed as the reciprocal of a three-dimensional Bessel process and thus, a strict $\P_2$--local martingale. Consider now the generalized local martingale problem corresponding to the quadruple $(E, x_0, a, b+a \mu) = ((0,\infty), 1, a, 0)$, which also has a solution according to Proposition~\ref{P existence sde}. Indeed, it has several solutions, in particular ${\mathbf{Q}}_1 \equiv \P_1$ and the Brownian motion measure ${\mathbf{Q}}_2$. 
Note that \eqref{E suffCond} with ${\mathbf{Q}}={\mathbf{Q}}_1$ holds but with ${\mathbf{Q}}={\mathbf{Q}}_2$ fails. This observation is consistent with the fact that $Z$ is a $\P_1$--martingale but a strict $\P_2$--local martingale. \qed \end{example} The next example illustrates that the choice of the probability measure ${\mathbf{Q}}$ in Theorem~\ref{T 2} is highly relevant if several solutions exist to the generalized local martingale problem corresponding to the quadruple $(E, x_0, a, b+a \mu)$. \begin{example}[Uniqueness of non-explosive solution] \label{ex2} Let $d=1$, $E = {\mathbb{R}}$, and $x_0 = 0$. Set $$a(t,\mathrm x) =1-\mathbf{1}_{\min_{s \leq t} \{\mathrm x(s)\} = 0 = \max_{s \leq t} \{\mathrm x(s)\}}, \qquad b(t, \mathrm x) = 0, \qquad \mu(t, \mathrm x) = (\mathrm x(t))^2 \mathbf{1}_{\bm \zeta(\mathrm x) > t }$$ for all $(t, \mathrm x) \in [0,\infty) \times \Omega$. Again, the generalized local martingale problem corresponding to the quadruple $(E, x_0, a, b)$ has several solutions; for example $\P_1$ such that $\P_1(X_\cdot \equiv 0) = 1$ and the Brownian motion measure $\P_2$. Consider now the generalized local martingale problem corresponding to the quadruple $(E, x_0, a, b+a \mu) = ({\mathbb{R}}, 0, a, a \mu)$. Clearly, it has several solutions, in particular the constant process with ${\mathbf{Q}}_1 \equiv \P_1$ and, moreover, ${\mathbf{Q}}_2$, under which $X$ satisfies the stochastic differential equation \begin{align*} X_t = \int_0^t X_s^2 \mathrm{d} s + W_t \end{align*} for each $t \geq 0$ for some ${\mathbf{Q}}_2$--Brownian motion $W$ up to an explosion time, which is finite ${\mathbf{Q}}_2$--almost surely by Feller's test of explosions. Indeed, it is easy to see that the choice of parameters in this example implies that any solution to the generalized local martingale problem corresponding to the quadruple $({\mathbb{R}}, 0, a, a \mu)$ is a process that is either constant zero or explodes almost surely. 
Thus, this generalized local martingale problem has a unique non-explosive solution. However, note that Theorem~\ref{T 2} does rely on a certain choice of solution ${\mathbf{Q}}$, which does not always correspond to ${\mathbf{Q}}_1$. In particular, $Z$ here is a (uniformly integrable) $\P_1$--martingale but a strict $\P_2$--local martingale. \qed \end{example} \begin{example}[Role of the underlying probability space]\label{ex3} We consider now, instead of the filtered space $(\Omega, \mathcal M, \mathbb M)$ of Subsection~\ref{S:2.1}, the filtered space $(\Omega', \mathcal M', \mathbb M')$, where $\Omega' = C([0,\infty), E)$ denotes the space of $E$--valued continuous paths with canonical process $X'$, $\mathbb{M}'=(\mathcal{M}_t')_{t \geq 0}$ denotes the right-continuous modification of the natural filtration generated by $X'$, and $\mathcal M' = \bigvee_{t \geq 0} \mathcal M_t'$ denotes the smallest sigma algebra that makes $X'$ measurable. Note that $\Omega' \subsetneq \Omega.$ Exactly as in Subsection~\ref{S:2.1}, we can now introduce the notions of progressive measurability and solutions $\P'$ to the generalized local martingale problem. Moreover, given such a solution $\P'$, we can introduce a $\P'$--local martingale $Z'$ exactly as in Subsection~\ref{S:2.2}. Let now $d=1$, $E={\mathbb{R}}$, and $x_0 =0$. Moreover, for some fixed $T>0$ set $$a(t,\mathrm x') = 1, \qquad b(t,\mathrm x') = 0, \qquad \mu(t,\mathrm x') = (\mathrm x'(t))^2 \mathbf{1}_{t \leq T}$$ for all $(t, \mathrm x') \in [0,\infty) \times \Omega'$. Then, there exists a unique solution $\P'$ to the generalized local martingale problem corresponding to $(E,x_0,a,b) = ({\mathbb{R}}, 0,1,0)$ on $(\Omega',\mathcal M')$. Indeed, $\P'$ corresponds to the Wiener measure on $(\Omega', \mathcal M')$.
Next, the $\P'$--local martingale $Z'$, given by \begin{align*} Z_t' = \exp\left(\int_0^t (X_s')^2 \mathrm{d} X_s' - \frac{1}{2} \int_0^t (X_s')^4 \mathrm{d} s \right) \end{align*} for all $t \geq 0$, is not a $\P'$--martingale (see Section~3.7 in \cite{McKean_1969}). Thus, there exists $u>0$ such that ${\mathbb{E}}^{\P'}[Z'_u]<1$ and we set $T=u$. Note that \begin{align*} \int_0^\infty \mu(s, \mathrm x')^\mathsf{T} a(s,\mathrm x') \mu(s,\mathrm x') \mathrm{d} s = \int_0^T (\mathrm x'(s))^4 \mathrm{d} s < \infty \end{align*} by continuity of the path $\mathrm x'$, for all $\mathrm x' \in \Omega'$. This shows that the assertion of Corollary~\ref{C:1} is wrong, in general, if we replace $(\Omega, \mathcal M, \mathbb M)$ by $(\Omega', \mathcal M', \mathbb M')$. To understand why the assumption of Corollary~\ref{C:1} is not satisfied if we replace $\Omega'$ by $\Omega$, fix the path $\mathrm x \in \Omega \setminus \Omega'$ with $\mathrm x(t) = \tan(t \pi/(2 T)) \mathbf{1}_{t < T} + \Delta \mathbf{1}_{t \geq T}$ for all $t \geq 0$. Then, we have $\int_0^T (\mu(t, \mathrm x))^2 \mathrm{d} t = \infty$. \qed \end{example}
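The divergence in the last display can also be seen numerically: since $\tan(t\pi/(2T)) \sim 2T/(\pi(T-t))$ as $t \uparrow T$, integrating the fourth power up to $T(1-\varepsilon)$ produces partial integrals growing like $\varepsilon^{-3}$. A minimal sketch, with $T=1$ chosen purely for illustration:

```python
import numpy as np

# Partial integrals of tan(t*pi/(2*T))**4 over [0, T*(1 - eps)]; since
# tan(t*pi/(2*T)) ~ 2*T/(pi*(T - t)) near t = T, these grow like
# eps**(-3), consistent with int_0^T x(t)**4 dt being infinite.
T = 1.0

def partial_integral(eps, n=200_000):
    t = np.linspace(0.0, T * (1.0 - eps), n)
    x4 = np.tan(t * np.pi / (2.0 * T)) ** 4
    # trapezoid rule
    return float(np.sum(0.5 * (x4[1:] + x4[:-1]) * np.diff(t)))

vals = [partial_integral(eps) for eps in (1e-1, 1e-2, 1e-3)]
print(vals)  # rapidly growing sequence
```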
\section{Introduction} Large-scale vortical structures are universal features observed in geophysical, astrophysical and laboratory flows (see, e.g., \cite{L83,P87,C94,GLM97,T98,RAO98}). Formation of vortical structures is related to the Prandtl secondary flows (see, e.g., \cite{P52,T56,P70,B87}). A lateral stretching (or ''skewing") by an existing shear generates streamwise vorticity that results in formation of the first kind of the Prandtl secondary flows. In turbulent flow the large-scale vorticity is generated by the divergence of the Reynolds stresses. This mechanism determines the second kind of the Prandtl turbulent secondary flows \cite{B87}. The generation of large-scale vorticity in a homogeneous nonhelical turbulence with an imposed large-scale linear velocity shear has been recently studied in \cite{EKR03}. Let us discuss a mechanism of this phenomenon. The equation for the mean vorticity ${\bf W} = \bec{\nabla} {\bf \times} {\bf U}$ reads \begin{eqnarray} {\partial {\bf W} \over \partial t} = \bec{\nabla} {\bf \times} ({\bf U} {\bf \times} {\bf W} + {\bf F} - \nu \bec{\nabla} {\bf \times} {\bf W}) \;, \label{W10} \end{eqnarray} where ${\bf U}$ is the mean fluid velocity, ${\bf F}_i = - \nabla_j \, \langle u_i u_j \rangle$ is the effective force caused by velocity fluctuations, ${\bf u}$, and $ \nu$ is the kinematic viscosity. The first term, ${\bf U} {\bf \times} {\bf W}$, in Eq.~(\ref{W10}) determines laminar effects of the mean vorticity production caused by the sheared motions, while the effective force ${\bf F}$ determines the turbulent effects on the mean fluid flow. Let us consider a simple large-scale linear velocity shear ${\bf U}^{(s)} = (0, Sx, 0)$ imposed on the small-scale nonhelical turbulence.
The equation for the perturbations of the mean vorticity, $\tilde{\bf W} = (\tilde{W}_x(z), \tilde{W}_y(z), 0)$, reads \begin{eqnarray} {\partial \tilde{W}_x \over \partial t} &=& S \, \tilde{W}_y + \nu_{_{T}} \tilde{W}''_x \;, \label{E2}\\ {\partial \tilde{W}_y \over \partial t} &=& - \beta_0 \, S \, l_0^2 \, \tilde{W}''_x + \nu_{_{T}} \tilde{W}''_y \;, \label{E3} \end{eqnarray} (see \cite{EKR03}), where $\tilde{W}'' = \partial^2 \tilde{W} /\partial z^2$, $\, \nu_{_{T}}$ is the turbulent viscosity, $l_0$ is the maximum scale of turbulent motions and the parameter $\beta_0$ is of the order of 1, and depends on the scaling exponent of the correlation time of the turbulent velocity field (see Sect. II). A solution of Eqs.~(\ref{E2}) and~(\ref{E3}) has the form $ \propto \exp(\gamma t + i K_z z)$, where the growth rate of the large-scale instability is given by $\gamma = \sqrt{\beta_0} \, S \, l_0 \, K_z - \nu_{_{T}} \, K_z^2$ and $K_z$ is the wave number. The maximum growth rate of perturbations of the mean vorticity, $ \gamma_{\rm max} = \beta_0 \, (S \, l_0)^2 / 4 \nu_{_{T}}$, is attained at $ K_z = K_m = \sqrt{\beta_0} \, S \, l_0 /2 \nu_{_{T}}$. This corresponds to the ratio $\tilde{W}_y / \tilde{W}_x = \sqrt{\beta_0} \, l_0 \, K_m \approx S \, \tau_0$, where the time $\tau_0 = l_{0} / u_0$ and $u_0$ is the characteristic turbulent velocity in the maximum scale $l_{0}$ of turbulent motions. Note that in a laminar flow this instability does not occur. The mechanism of this instability is as follows (see \cite{EKR03} for details). The first term, $S \tilde{W}_y = ({\bf W}^{(s)}\cdot\bec{\nabla})~\tilde{U}_x$, in Eq.~(\ref{E2}) determines a ''skew-induced" generation of perturbations of the mean vorticity $\tilde{W}_x$ by stretching of the equilibrium mean vorticity ${\bf W}^{(s)}= (0,0,S)$, where $\tilde{\bf U}$ are the perturbations of the mean velocity. 
In particular, the mean vorticity $\tilde{W}_x {\bf e}_x$ is generated from $\tilde{W}_y {\bf e}_y$ by equilibrium shear motions with the mean vorticity ${\bf W}^{(s)}$, whereby $\tilde{W}_x {\bf e}_x \propto ({\bf W}^{(s)} \cdot \bec{\nabla}) \tilde{U}_x {\bf e}_x \propto \tilde{W}_y {\bf e}_y \times {\bf W}^{(s)} $. Here ${\bf e}_x$, ${\bf e}_y$ and ${\bf e}_z$ are the unit vectors along $x$, $y$ and $z$ axes, respectively. On the other hand, the first term, $- \beta_0 \, S \, l_0^2 \, \tilde{W}''_x$, in Eq.~(\ref{E3}) determines a ''Reynolds stress-induced" generation of perturbations of the mean vorticity $\tilde{W}_y$ by the Reynolds stresses. In particular, this term is determined by $ (\bec{\nabla} {\bf \times} {\bf F})_y$. This implies that the component of the mean vorticity $\tilde{W}_y {\bf e}_y $ is generated by an effective anisotropic viscous term $ \propto - l_0^2 \, \Delta \, (\tilde{W}_x {\bf e}_x \cdot \bec{\nabla}) \, {U}^{(s)}(x) {\bf e}_y \propto - l_0^2 \, S \, \tilde{W}''_x {\bf e}_y .$ This instability is caused by a combined effect of the sheared motions (''skew-induced" generation) and the ''Reynolds stress-induced" generation of perturbations of the mean vorticity. The mechanism for this large-scale instability in a sheared nonhelical homogeneous turbulence is different from that discussed in \cite{MST83,KMT91,CMP94}, where the generation of large-scale vorticity in the helical turbulence occurs due to hydrodynamic alpha effect. The latter effect is associated with the hydrodynamic helicity of turbulent flow. In a nonhelical homogeneous turbulence this effect does not occur. The large-scale instability in a nonhelical homogeneous turbulence has been studied in \cite{EKR03} only for a simple case of unbounded turbulence with an imposed linear velocity shear and when the perturbations of the mean vorticity depend on one spatial variable $z$. 
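The dispersion relation $\gamma = \sqrt{\beta_0} \, S \, l_0 \, K_z - \nu_{_{T}} \, K_z^2$ quoted above is elementary to verify numerically: a grid search over $K_z$ reproduces the stated maximizer $K_m$ and maximum growth rate $\gamma_{\rm max}$. A minimal sketch, with arbitrary illustrative parameter values:

```python
import numpy as np

# Growth rate gamma(K_z) = sqrt(beta0)*S*l0*K_z - nu_T*K_z**2 with its
# stated maximizer K_m = sqrt(beta0)*S*l0/(2*nu_T) and maximum
# gamma_max = beta0*(S*l0)**2/(4*nu_T). Parameter values are arbitrary.
beta0, S, l0, nu_T = 1.0, 2.0, 0.1, 0.05

def gamma(Kz):
    return np.sqrt(beta0) * S * l0 * Kz - nu_T * Kz**2

K_m = np.sqrt(beta0) * S * l0 / (2.0 * nu_T)
gamma_max = beta0 * (S * l0) ** 2 / (4.0 * nu_T)

Kz = np.linspace(0.0, 2.0 * K_m, 100_001)
i = int(np.argmax(gamma(Kz)))
print(Kz[i], K_m)              # grid maximizer agrees with analytic K_m
print(gamma(K_m), gamma_max)   # both equal beta0*(S*l0)**2/(4*nu_T)
```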
In this study the theoretical approach proposed in \cite{EKR03} is further developed and applied to a comprehensive investigation of the large-scale instability for different situations with nonuniform shear, inhomogeneous turbulence and a more general form of the perturbations of the mean vorticity $\tilde{\bf W}({\bf r})$ that depends on three spatial variables. In the present study we consider three types of the background large-scale flows, i.e., the Couette flow (linear velocity shear) and Poiseuille flow (quadratic velocity shear) in a small-scale homogeneous turbulence, and the ``log-linear'' velocity shear in an inhomogeneous turbulence. We have derived new mean-field equations for perturbations of large-scale velocity which depend on three spatial coordinates in a small-scale sheared turbulence, for a nonuniform background large-scale velocity shear and for an arbitrary scaling of the correlation time $\tau(k)$ of the turbulent velocity field. The stability of the laminar Couette and Poiseuille flows in a problem of transition to turbulence has been studied in a number of publications (see, e.g., \cite{DR81,SH01,CJJ03,BOH88,REM03,ESH07}, and references therein). It is known that laminar plane Couette flow and the antisymmetric mode of laminar plane Poiseuille flow are stable with respect to small perturbations for any Reynolds number. A symmetric mode of laminar plane Poiseuille flow is stable when the Reynolds number is less than 5772 \cite{CJJ03}. In laminar flows the Tollmien-Schlichting waves can be excited. The molecular viscosity plays a destabilizing role in laminar flows, which promotes the excitation of the Tollmien-Schlichting waves (see, e.g., \cite{SH01}). These waves are growing solutions of the Orr-Sommerfeld equation. In the present study we have found a turbulent analogue of the Tollmien-Schlichting waves.
These waves are excited by a small-scale sheared turbulence, i.e., by a combined effect of the turbulent Reynolds stress-induced generation of perturbations of the mean vorticity and the background sheared motions. The energy of these waves is supplied by the small-scale sheared turbulence. We demonstrate that the off-diagonal terms in the turbulent viscosity tensor play a crucial role in the excitation of the turbulent Tollmien-Schlichting waves. These waves can be excited even in a plane Couette flow imposed on a small-scale turbulence when perturbations of velocity depend on three spatial coordinates. When perturbations of large-scale velocity depend on one or two spatial coordinates, the turbulent Tollmien-Schlichting waves cannot be excited in a sheared turbulence. In the present study we show that the large-scale Couette and Poiseuille flows imposed on a small-scale turbulence can be unstable with respect to small perturbations. The critical effective Reynolds number (based on turbulent viscosity) required for the excitation of this large-scale instability is of the order of 200. This paper is organized as follows. In Sect. II the governing equations are formulated. In Sect. III we consider a homogeneous turbulence with a large-scale linear velocity shear (Couette flow), while in Sect. IV we study a homogeneous turbulence with a large-scale quadratic velocity shear (Poiseuille flow). In Sect. V we investigate formation of large-scale vortical structures in an inhomogeneous turbulence with an imposed nonuniform velocity shear. Finally, we draw conclusions in Sect.~VI.
\section{Governing equations} The equation for the mean velocity ${\bf U}$ in incompressible flow reads \begin{eqnarray} \bigg( {\partial \over \partial t} + {\bf U} \cdot \nabla \bigg)U_i = - {\nabla_i P \over \rho} - \nabla_j \, \langle u_i u_j \rangle + \nu \Delta U_i \;, \label{B2} \end{eqnarray} where ${\bf U}$ is the mean velocity, $P$ is the mean pressure and $\nu$ is the kinematic viscosity. The effect of turbulence on the mean flow is determined by the Reynolds stresses $\langle u_i u_j \rangle$, where ${\bf u}$ are the fluid velocity fluctuations. We consider a turbulent flow with an imposed mean velocity shear $\nabla_i {\bf U}^{(s)}$, where ${\bf U}^{(s)}$ is the imposed equilibrium mean velocity. In order to study the stability of this equilibrium we consider perturbations $\tilde{\bf U}$ of the mean velocity, i.e., the total mean velocity is ${\bf U} = {\bf U}^{(s)} + \tilde{\bf U}$. Thus, the linearized equation for the small perturbations of the mean velocity is given by \begin{eqnarray} \bigg({\partial \over \partial t} + {\bf U}^{(s)} \cdot \bec{\nabla}\bigg) \tilde{U}_i &+& (\tilde{\bf U} \cdot \bec{\nabla})U^{(s)}_i = -{\nabla_i \tilde{P} \over \rho} + F_i \nonumber\\ &+& \nu \Delta \tilde{U}_i \;, \label{B4} \end{eqnarray} where $F_i = - \nabla_j \, f_{ij}(\tilde{\bf U})$ is the effective force, $f_{ij} = \langle u_i u_j \rangle$, and $\tilde{P}$ is the perturbation of the fluid pressure. Equation~(\ref{B4}) is derived by subtracting Eq.~(\ref{B2}) written for the equilibrium velocity ${\bf U}^{(s)}$ from Eq.~(\ref{B2}) for the mean velocity ${\bf U}$. We consider a simple large-scale velocity shear, so that ${\bf U}^{(s)}$ is directed along the $y$ direction and is non-uniform in the $x$ direction, i.e., $ {\bf U}^{(s)} = (0, U^{(s)}_y(x), 0)$.
In order to obtain a closed system of equations, an equation for the effective force $F_i = - {\nabla}_j f_{ij}(\tilde{\bf U})$ has been derived in \cite{EKR03}, where \begin{eqnarray} f_{ij}(\tilde{\bf U}) &=& - 2 \nu_{_{T}} \, (\partial \tilde U)_{ij} - l_0^2 \, \big[4 C_1 \, M_{ij} + C_2 \, (N_{ij} + H_{ij}) \nonumber\\ & & + C_3 \, G_{ij}\big] \;, \label{B15} \end{eqnarray} where $(\partial \tilde U)_{ij} = (\nabla_i \tilde U_{j} + \nabla_j \tilde U_{i}) / 2$ and $l_0$ is the maximum scale of turbulent motions. The tensors $\, M_{ij},$ $\, N_{ij} ,$ $\, H_{ij}$ and $G_{ij}$ in the expression for the Reynolds stresses~(\ref{B15}) are given by: \begin{eqnarray*} M_{ij} &=& (\partial {U}^{(s)})_{im} ({\partial \tilde U})_{mj} + (\partial {U}^{(s)})_{jm} ({\partial \tilde U})_{mi} \;, \\ N_{ij} &=& \tilde{W}_n [\varepsilon_{nim} (\partial {U}^{(s)})_{mj} + \varepsilon_{njm} (\partial {U}^{(s)})_{mi}] \;, \\ H_{ij} &=& {W}^{(s)}_n [\varepsilon_{nim} (\partial \tilde U)_{mj} + \varepsilon_{njm} (\partial \tilde U)_{mi}] \;, \\ G_{ij} &=& {W}^{(s)}_i \tilde{W}_j + {W}^{(s)}_j \tilde{W}_i \;, \end{eqnarray*} where $\varepsilon_{ijk}$ is the fully antisymmetric Levi-Civita tensor, $\tilde{\bf W} = \bec{\nabla} {\bf \times} \tilde{\bf U}$ and ${\bf W}^{(s)} = \bec{\nabla} {\bf \times} {\bf U}^{(s)}$ are the perturbation of the mean vorticity and the background mean vorticity, respectively, $(\partial {U}^{(s)})_{ij} = (\nabla_i {U}^{(s)}_{j} + \nabla_j {U}^{(s)}_{i}) / 2 $ and the parameters $C_k$ in Eq.~(\ref{B15}) are given below. The effective force $F_i$ depends on the correlation time of the turbulent velocity field $\tau(k)$, where $k$ is the wave number. In the present study we derive a more general form of the effective force $F_i$ for an arbitrary scaling of the correlation time $\tau(k) = C \, \tau_0 \, (k / k_{0})^{-\mu}$ of the turbulent velocity field, where $k_{0} = 1 / l_{0}$. To this end we use Eq.~(20) derived in \cite{EKR03}. 
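As a simple numerical sanity check (not part of the derivation in \cite{EKR03}; the shear value and the test perturbation gradient below are arbitrary), each of the four tensors above is symmetric in $i \leftrightarrow j$, as any contribution to the Reynolds stresses must be:

```python
import numpy as np

# Levi-Civita tensor eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def sym(g):
    """Symmetrized gradient (dU)_{ij} = (grad_i U_j + grad_j U_i)/2."""
    return 0.5 * (g + g.T)

def vorticity(g):
    """W_n = eps_{nij} grad_i U_j for a gradient matrix g[i, j] = grad_i U_j."""
    return np.einsum('nij,ij->n', eps, g)

S = 0.7                                   # arbitrary shear value
g_s = np.zeros((3, 3)); g_s[0, 1] = S     # grad_i U_j for U^(s) = (0, S x, 0)
rng = np.random.default_rng(0)
g_p = rng.standard_normal((3, 3))         # arbitrary test gradient of the perturbation

dUs, dUp = sym(g_s), sym(g_p)
Ws, Wp = vorticity(g_s), vorticity(g_p)   # Ws = (0, 0, S) for the Couette profile

M = dUs @ dUp + (dUs @ dUp).T
N = np.einsum('n,nim,mj->ij', Wp, eps, dUs); N = N + N.T
H = np.einsum('n,nim,mj->ij', Ws, eps, dUp); H = H + H.T
G = np.outer(Ws, Wp) + np.outer(Wp, Ws)

for name, T in [('M', M), ('N', N), ('H', H), ('G', G)]:
    assert np.allclose(T, T.T), name
```

The background vorticity returned for the Couette profile is ${\bf W}^{(s)} = (0, 0, S)$, as expected for $\bec{\nabla} {\bf \times} (0, Sx, 0)$.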
The value of the coefficient $C=(q-1+\mu)/(q-1)$ corresponds to the standard form of the turbulent viscosity in the isotropic turbulence, i.e., $\nu_{_{T}} = \int \tau(k) \, [\langle {\bf u}^2 \rangle \, E(k)] \, dk = \tau_0 \, \langle {\bf u}^2 \rangle /3$. Here $E(k) = (q-1) \, k_{0}^{-1} \, (k / k_{0})^{-q}$ is the energy spectrum of turbulence. For the Kolmogorov-type background turbulence (i.e., for the turbulence with a constant energy flux over the spectrum), the exponent $\mu=q-1$ and the coefficient $C=2$. This case has been studied in \cite{EKR03}. For a turbulence with a scale-independent correlation time, the exponent $\mu=0$ and the coefficient $C=1$. The parameters $C_k$ entering in the Reynolds stresses~(\ref{B15}) are given by $C_1 = 2 C^2 \, (\mu^2 - 11 \, \mu + 28) / 315$, $ \, C_2 = - C^2 \, (7 \, \mu + 1) / 90$ and $C_3 = - C^2 \, (\mu + 3) / 90$. For the derivation of the effective force $F_i$ we use the procedure outlined below (see \cite{EKR03} for details). Using the equation for the velocity fluctuations written in Fourier space, we derive the equation for the two-point second-order correlation function of the velocity fluctuations $\langle u_i \, u_j\rangle$. We introduce a background turbulence with zero gradients of the mean fluid velocity. This background turbulence is determined by a stirring force that is independent of the gradients of the mean velocity. In this study we use a model of an isotropic, homogeneous and nonhelical background turbulence. Then we subtract the equation for the two-point second-order correlation function of the velocity fluctuations $\langle u_i \, u_j\rangle^{(0)}$ written for the background turbulence from the equation for $\langle u_i \, u_j\rangle$. This yields the equation for the deviations from the background turbulence. The obtained second-moment equation includes first-order spatial differential operators $\hat{\cal N}$ applied to the third-order moments $M^{(III)}$. 
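These coefficients follow directly from the formulas above; a minimal check (using only the expressions just quoted) recovers $C = 2$ for the Kolmogorov scaling $q = 5/3$, $\mu = q - 1 = 2/3$, and $C = 1$ for a scale-independent correlation time:

```python
def C_coeff(q, mu):
    """Normalization C = (q - 1 + mu)/(q - 1) of the correlation time."""
    return (q - 1.0 + mu) / (q - 1.0)

def C_params(mu, C):
    """Parameters C_1, C_2, C_3 entering the Reynolds stresses (B15)."""
    C1 = 2.0 * C**2 * (mu**2 - 11.0 * mu + 28.0) / 315.0
    C2 = -C**2 * (7.0 * mu + 1.0) / 90.0
    C3 = -C**2 * (mu + 3.0) / 90.0
    return C1, C2, C3

# Kolmogorov-type background turbulence: q = 5/3, mu = q - 1 = 2/3  ->  C = 2
assert abs(C_coeff(5.0/3.0, 2.0/3.0) - 2.0) < 1e-12
# scale-independent correlation time: mu = 0  ->  C = 1
assert abs(C_coeff(5.0/3.0, 0.0) - 1.0) < 1e-12

C1, C2, C3 = C_params(0.0, 1.0)
assert abs(C1 - 56.0/315.0) < 1e-12 and C2 < 0 and C3 < 0
```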
A problem arises of how to close the equation, i.e., how to express the third-order terms $\hat{\cal N} M^{(III)}$ through the lower moments $M^{(II)}$ (see, e.g., \cite{O70,MY75,Mc90}). To this end we use a spectral $\tau$ approximation which postulates that the deviations of the third-moment terms from the contributions to these terms afforded by the background turbulence are expressed through the similar deviations of the second moments (see, e.g., \cite{O70,PFL76,KRR90,EKRZ02,EKR03}). The $\tau$ approximation has been justified for different situations in numerical simulations and analytical studies in \cite{BF02,FB02,BK04,BSM05,SSB07}. We assume that the characteristic time of variation of the second moment of velocity fluctuations is substantially larger than the correlation time for all turbulence scales. This allows us to obtain a steady-state solution of the second-moment equation for the deviations from the background turbulence. Integration in ${\bf k}$ space allows us to determine the Reynolds stresses in the form of Eq.~(\ref{B15}). Note that this form of the Reynolds stresses in a turbulent flow with a mean velocity shear can be obtained even by simple symmetry reasoning (see \cite{EKR03} for details). In the next Sections we use Eq.~(\ref{B4}) with the derived effective force (see Eq.~(\ref{B15})) to study the dynamics of perturbations of the mean velocity. We show that under certain conditions a large-scale instability can be excited, which causes the formation of large-scale vortical structures. \section{Linear velocity shear (Couette flow) in homogeneous turbulence} We consider a homogeneous turbulence with a mean linear velocity shear, $ {\bf U}^{(s)} = (0, Sx, 0)$. This velocity field is a steady state solution of the Navier-Stokes equation. Let us first study the case when the velocity perturbations $\tilde{\bf U}(t, x, z)$ are independent of $y$. 
The equations for the components $\tilde{U}_x$ and $\tilde{U}_y$ of the velocity perturbations read \begin{eqnarray} \Big[{\partial \over \partial t} - \nu_{_{T}} \Delta \Big] \Delta \, \tilde{U}_x &=& l_0^2 \, S \, \beta_0 \, \Delta \, \nabla_z^2 \, \tilde{U}_y \;, \label{TAA1}\\ \Delta \Big[{\partial \over \partial t} - \nu_{_{T}} \, \Delta \Big] \, \tilde{U}_y &=& - S \, \Delta \, \tilde{U}_x \;, \label{TAA2} \end{eqnarray} and the component $\tilde{U}_z$ is determined by the continuity equation $\bec{\nabla} {\bf \cdot} \tilde{\bf U} = 0$, where $\beta_0=C_1 + C_2 - C_3 = C^2 \, (2 \mu^2 - 43 \, \mu + 63) / 315$. In order to derive Eqs. (\ref{TAA1}) and (\ref{TAA2}) we calculate $\bec{\nabla} {\bf \times} (\bec{\nabla} {\bf \times} \tilde{\bf U})$ using Eq.~(\ref{B4}), which allows us to eliminate the pressure term from this equation. We also use Eq.~(\ref{B15}) for the Reynolds stresses in the sheared turbulence. For simplicity, in Eq.~(\ref{TAA2}) we neglect the small terms $\sim O[(l_0/L_S)^2]$, where $L_S$ is the characteristic scale of the velocity shear. We seek a solution of Eqs. (\ref{TAA1}) and (\ref{TAA2}) in the form \begin{eqnarray} \tilde{U}_{x,y} &=& \exp(\gamma t) \, [A_{x,y} \, \cos(K_x \, x) + B_{x,y} \, \cosh(K_z \, x)] \nonumber\\ &&\times \cos(K_z \, z + \phi) \;, \label{S1} \end{eqnarray} where the coefficients $A_{x,y}$, $B_{x,y}$, the angle $\phi$ and the growth rate $\gamma$ of the instability are determined by the boundary conditions. We choose the symmetric solution (relative to the point $x=0$), because the maximum growth rate of the symmetric mode is higher than that of the antisymmetric mode (see below). Perturbations of the mean velocity grow in time due to the large-scale instability with the growth rate \begin{eqnarray} \gamma = \sqrt{\beta_0} \, S \, l_0 \, K_z - \nu_{_{T}} (K_x^2 + K_z^2) \; . 
\label{N20} \end{eqnarray} The maximum growth rate of perturbations of the mean velocity, \begin{eqnarray} \gamma_{\rm max} = {\beta_0 \, (S \, l_0)^2 \over 4 \nu_{_{T}}} - \nu_{_{T}} \, K_x^2 \;, \label{S3} \end{eqnarray} is attained at $ K_z = K_m = \sqrt{\beta_0} \, S \, l_0 /2 \nu_{_{T}}$. In order to determine the threshold required for the excitation of the large-scale instability, we consider the solution of Eqs.~(\ref{TAA1}) and~(\ref{TAA2}) with the following boundary conditions for a layer of the thickness $L_S$ in the $x$ direction: at $x=\pm \, L_S / 2$ the functions $\tilde{\bf U} = 0$ and $\nabla_x \, (\tilde{U}_{x,y}) = 0$. This yields the threshold value of the wave number $K_x^{\rm cr}$, determined by the equation \begin{eqnarray} \tan(K_x^{\rm cr} L_S/ 2) = - \tanh(K_x^{\rm cr} L_S/2) \; . \label{S2} \end{eqnarray} The condition $\gamma_{\rm max} > 0$ implies that $K_m \geq K_x^{\rm cr}$. Therefore, the large-scale instability is excited when the value of the shear $S$ exceeds the critical value $S_{\rm cr}$ that is given by \begin{eqnarray} S_{\rm cr} \, \tau_0 = {2 \, K_x^{\rm cr} \, l_0 \over 3 \, \sqrt{\beta_0}} \approx 4.7 \, {l_0 \over L_S} \;, \label{N21} \end{eqnarray} where $K_x^{\rm cr} \approx 4.73 / L_S$ is determined by the first positive root of Eq.~(\ref{S2}). Note that the value of $K_x^{\rm cr}$ for the symmetric mode is smaller than that for the antisymmetric mode. This is the reason why the maximum growth rate of the symmetric mode is larger than that of the antisymmetric mode. Note that the parameter $\beta_0$ depends on the scaling exponent $\mu$ of the correlation time of the turbulent velocity field, $\tau(k) \propto k^{-\mu}$. In particular, for the Kolmogorov scaling, $\tau(k) \propto k^{-2/3}$, we arrive at $\beta_0 = 0.45$. This case has been considered in \cite{EKR03}. The necessary condition for the large-scale instability $(\beta_0 > 0)$ reads $2 \, \mu^2 - 43 \, \mu + 63 > 0 $, i.e., the instability is excited when $0 \leq \mu < 1.58$ or $\mu > 19.9$. 
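These estimates are easy to reproduce numerically. The sketch below (with arbitrary illustrative values of $S$, $l_0$, $\nu_{_T}$ and $K_x$) maximizes the growth rate (\ref{N20}) over $K_z$, finds the first positive root of Eq.~(\ref{S2}), and recovers both $S_{\rm cr} \, \tau_0 \approx 4.7 \, l_0 / L_S$ and the instability range of $\mu$:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

beta0, S, l0, nu_T, Kx = 0.45, 1.0, 1.0, 0.3, 0.5   # illustrative values

gamma = lambda Kz: np.sqrt(beta0) * S * l0 * Kz - nu_T * (Kx**2 + Kz**2)

# numerical maximum of gamma(Kz) vs the analytic K_m = sqrt(beta0) S l0 / (2 nu_T)
res = minimize_scalar(lambda Kz: -gamma(Kz), bounds=(0.0, 10.0), method='bounded')
K_m = np.sqrt(beta0) * S * l0 / (2.0 * nu_T)
gamma_max = beta0 * (S * l0)**2 / (4.0 * nu_T) - nu_T * Kx**2
assert abs(res.x - K_m) < 1e-4 and abs(-res.fun - gamma_max) < 1e-6

# first positive root of tan(x) = -tanh(x), with x = K_x^cr L_S / 2
x0 = brentq(lambda x: np.tan(x) + np.tanh(x), 1.6, 3.1)   # root near 2.365
Kcr_LS = 2.0 * x0                                         # K_x^cr L_S, about 4.73

# critical shear: S_cr tau_0 = 2 K_x^cr l_0 / (3 sqrt(beta0)) with beta0 = 0.45
Scr_coef = 2.0 * Kcr_LS / (3.0 * np.sqrt(beta0))          # = S_cr tau_0 * L_S / l_0
assert abs(Scr_coef - 4.7) < 0.05
# e.g. for L_S / l_0 = 30 this gives S_cr tau_0 ~ 0.157

# instability range of the scaling exponent mu (beta0 > 0)
mu_roots = np.sort(np.roots([2.0, -43.0, 63.0]))
assert abs(mu_roots[0] - 1.58) < 0.01 and abs(mu_roots[1] - 19.9) < 0.05
```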
Note that the condition $\mu > 19.9$ is not realistic. In the case of a turbulence with a scale-independent correlation time, the exponent $\mu=0$ and the parameter $\beta_0=0.2$. For small hydrodynamic Reynolds numbers, the scaling of the correlation time is $\tau(k) \sim 1/(\nu k^2)$, i.e., $\mu = 2$, and the parameter $\beta_0 < 0$. This implies that the instability of the perturbations of the mean vorticity does not occur for small Reynolds numbers, in agreement with the recent results obtained in \cite{RUK06}, where no instability of the perturbations of the mean vorticity in a random flow with a large-scale velocity shear was found using the second-order correlation approximation under the assumption that the correlation time $\tau(k) \sim 1/(\nu k^2)$. This approximation is valid only for small Reynolds numbers (see discussion in \cite{RKL06}). Let us consider now a more general case when the velocity $\tilde{\bf U}$ depends on three spatial coordinates, i.e., $\tilde{\bf U}=\tilde{\bf U}(t, x, y, z)$. The equations for the components $\tilde{U}_x$ and $\tilde{U}_y$ of the velocity perturbations read \begin{eqnarray} \bigg({\partial \over \partial \,t} &+& U^{(s)} \, \nabla_y - \nu_{_T} \Delta \bigg) \, \Delta \tilde{U}_x = l_0^2\,S\, \, \Delta\, \big[\beta_0 \, \Delta_{H} \tilde{U}_y \nonumber \\ &+& (\beta_1 - \beta_2) \, \nabla_x\,\nabla_y \tilde{U}_x\big]\;, \label{AA1}\\ \Delta\bigg({\partial \over \partial \,t} &+& U^{(s)} \, \nabla_y - \nu_{_T} \Delta\bigg)\tilde{U}_y = l_0^2\,S \, \Delta \, \big[\beta_2 \, (\Delta - \nabla_y^2) \,\tilde{U}_x \nonumber \\ &+& (\beta_1 - \beta_0) \, \nabla_x \nabla_y \, \tilde{U}_y \big] + S \big(2 \nabla_y^2 - \Delta \big) \, \tilde{U}_x \;, \label{AA2} \end{eqnarray} and the component $\tilde{U}_z$ is determined by the continuity equation $\bec{\nabla} {\bf \cdot} \tilde{\bf U} = 0$. 
Here $\Delta_{H} = \Delta - \nabla_x^2$, $\, \beta_{1} = 2 C_1 - C_2 = C^2 \, (8 \, \mu^2 - 39 \, \mu + 231) / 630$ and $\beta_{2} = C_1 + C_3 = C^2 \, (4 \, \mu^2 - 51 \, \mu + 91) / 630$. In order to derive Eqs.~(\ref{AA1}) and (\ref{AA2}) we calculate $\bec{\nabla} {\bf \times} (\bec{\nabla} {\bf \times} \tilde{\bf U})$ using Eq.~(\ref{B4}), which allows us to eliminate the pressure term from this equation. For the derivation of Eqs.~(\ref{AA1}) and (\ref{AA2}) we also use Eq.~(\ref{B15}) for the Reynolds stresses in the sheared turbulence. Equations~(\ref{AA1}) and (\ref{AA2}) can be reduced to the Orr-Sommerfeld equation if we replace $\nu_{T}$ by $\nu$ and set $\beta_n=0$ (see, e.g., \cite{DR81,SH01,CJJ03}, and references therein). \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig1.eps} \caption{\label{FIG-1} Range of parameters ($L_S / L_{H}$; $\varphi$) for which the large-scale instability occurs for Couette background flow and for different values of the large-scale shear: $S \, \tau_0 = 0.2$ (dashed line), $S \, \tau_0 = 0.4$ (solid line). Here $L_S / l_0 = 30$.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig2.eps} \caption{\label{FIG-2} The growth rate (a) of the large-scale instability $\gamma \, \tau_0\, $ and frequencies $\omega \, \tau_0\, $ of the generated modes (b) versus $L_S / L_{H}$ for Couette background flow and for different angles $\varphi$: $\, \varphi = 85^\circ$ (dashed-dotted line), $\varphi = 87^\circ$ (dashed line), $\varphi = 90^\circ$ (solid line). Here $S \, \tau_0 = 0.4$, $\, L_S / l_0 = 30$ and $\omega(\varphi = 90^\circ)=0 $.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig3.eps} \caption{\label{FIG-3} The growth rate $\gamma \, \tau_0\,$ (a) and the frequency $\omega \, \tau_0\,$ (b) versus $L_S / L_{H}$ of the first (solid line) and the second (dashed line) modes which have the highest growth rates for Couette background flow. 
Here the angle $\varphi = 87^{\circ}$, $\, S \, \tau_0 = 0.4$ and $L_S / l_0 = 30$.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig4.eps} \caption{\label{FIG-4} The growth rate (a) of the large-scale instability $\gamma \, \tau_0\, $ and frequencies $\omega \, \tau_0\, $ of the generated modes (b) versus $L_S / L_{H}$ for Couette background flow and for different angles $\varphi$: $\, \varphi = 87^\circ$ (dashed-dotted line), $\varphi = 88^\circ$ (dashed line), $\varphi = 90^\circ$ (solid line). Here $S \, \tau_0 = 0.2$, $\, L_S / l_0 = 30$ and $\omega(\varphi = 90^\circ)=0$.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig5.eps} \caption{\label{FIG-5} The spatial profiles of the ratios of vorticity components $\tilde W_y/\tilde W_x$ (a) and $\tilde W_z/ \tilde W_x$ (b) for modes with the maximum growth rates of the large-scale instability in Couette background flow and for different angles $\varphi$: $\, \varphi = 85^\circ$ (dashed-dotted line), $\varphi = 87^\circ$ (dashed line), $\varphi = 90^\circ$ (solid line). Here $S \, \tau_0 = 0.4$ and $L_S / l_0 = 30$.} \end{figure} We seek a solution of Eqs.~(\ref{AA1})-(\ref{AA2}) in the form $\propto \Psi(x) \, \exp(\gamma t + i \, \omega t + i \, {\bf K}_H \, {\bf \cdot \, r})$, where ${\bf K}_H$ is the wave vector that is perpendicular to the $x$-axis. After the substitution of this solution into Eqs.~(\ref{AA1})-(\ref{AA2}) we obtain a system of ordinary differential equations which is solved numerically. We consider the solution of Eqs.~(\ref{AA1})-(\ref{AA2}) with the following boundary conditions for a layer of the thickness $L_S$ in the $x$ direction: at $x=\pm \, L_S / 2$ the functions $\tilde{\bf U} = 0$ and $\nabla_x \, (\tilde{U}_{x,y}) = 0$. These boundary conditions with a linear velocity shear correspond to the Couette flow. 
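The parameters $\beta_0$, $\beta_1$ and $\beta_2$ depend only on $\mu$ and $C$; a short check using the expressions quoted above reproduces the values $\beta_0 = 0.45$ (Kolmogorov scaling) and $\beta_0 = 0.2$ (scale-independent correlation time) given in the text:

```python
def betas(mu, C):
    """beta_0, beta_1, beta_2 entering Eqs. (TAA1)-(TAA2) and (AA1)-(AA2)."""
    b0 = C**2 * (2.0 * mu**2 - 43.0 * mu + 63.0) / 315.0
    b1 = C**2 * (8.0 * mu**2 - 39.0 * mu + 231.0) / 630.0
    b2 = C**2 * (4.0 * mu**2 - 51.0 * mu + 91.0) / 630.0
    return b0, b1, b2

b0_kol, b1_kol, b2_kol = betas(2.0/3.0, 2.0)  # Kolmogorov scaling: mu = 2/3, C = 2
b0_si,  b1_si,  b2_si  = betas(0.0, 1.0)      # scale-independent tau: mu = 0, C = 1
b0_2 = betas(2.0, 1.0)[0]                     # small-Re scaling mu = 2 (sign of b0
                                              # does not depend on the value of C)

assert abs(b0_kol - 0.45) < 0.005   # quoted as beta0 = 0.45 in the text
assert abs(b0_si - 0.2) < 1e-12     # quoted as beta0 = 0.2
assert b0_2 < 0                     # no instability for mu = 2 (small Re)
```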
In this Section we show that in a small-scale turbulence the large-scale Couette flow can be unstable under certain conditions. The range of parameters ($L_S / L_{H}$; $\varphi$) for which the large-scale instability occurs is shown in Fig.~\ref{FIG-1}, where $L_H= 2 \pi /K_H$, $\, K_H = (K_y^2 + K_z^2)^{1/2}$ and $\varphi$ is the angle between the wave vector ${\bf K}_H$ and the direction of the mean sheared velocity ${\bf U}^{(s)}$. In Figs.~\ref{FIG-2}-\ref{FIG-4} we show the growth rate of the large-scale instability $\gamma \, \tau_0$ and the frequencies of the generated modes $\omega \, \tau_0$ versus $L_S / L_{H}$. The growth rates of the large-scale instability increase with the increase of the angle $\varphi$, while the frequencies of the generated modes decrease with the angle $\varphi$ so that $\omega(\varphi \to 90^\circ) \to 0$. The growth rate of the large-scale instability reaches the maximum value at $\varphi = 90^\circ$. In addition, the range of angles $\varphi$ for which the large-scale instability occurs is narrow and located in the vicinity of $\varphi = 90^\circ$ (see Fig.~\ref{FIG-1}). Therefore, $K_y \ll K_z$ and since $L_z \sim L_S$, the size of the structures in the direction of ${\bf U}^{(s)}$ is much larger than the sizes of the structures along the $x$ and $z$ directions. This implies that the large-scale structures formed due to this instability are stretched along the mean sheared velocity ${\bf U}^{(s)}$. The curves in Figs.~\ref{FIG-2}-\ref{FIG-4} have a point $L_{\ast}$ at which the first derivative of the growth rate of the large-scale instability with respect to the wave number $K_H$ has a singularity. At this point there is a bifurcation which is illustrated in Fig.~\ref{FIG-3}. In particular, the growth rates and the frequencies of the first and the second modes which have the highest growth rates are shown in Figs.~\ref{FIG-3}a and~\ref{FIG-3}b. 
When the size of perturbations $L_{H} < L_{\ast}$, the frequencies of the first and the second modes are different, but the growth rates are the same. Therefore, at the point $L_{H} = L_{\ast}$, there is a generation of two different modes with the same growth rate. On the other hand, when the size of perturbations $L_{H} > L_{\ast}$, the growth rates of the first and the second modes are different, but the frequencies are the same. The maximum growth rate of perturbations of the mean velocity, $ \gamma_{\rm max}$, is attained at $ K_H = K_m$, and the value $K_m$ increases with the increase of the angle $\varphi$ between the wave vector ${\bf K}_H$ and the direction of the mean sheared velocity ${\bf U}^{(s)}$. The increase of shear $S$ promotes the large-scale instability, i.e., it causes an increase of the range for the instability (see Fig.~\ref{FIG-1}) and of the maximum growth rate (see Figs.~\ref{FIG-2} and \ref{FIG-4}). The characteristic spatial scale $L_m = 2 \pi / K_m$ and the time scale $t_{\rm inst} \sim \gamma_{\rm max}^{-1}$ for the instability are much larger than the characteristic turbulent scales. This justifies the separation of scales which is required for the validity of the mean-field theory applied in the present study. The spatial profiles of the ratios of vorticity components $\tilde W_y/\tilde W_x$ and $\tilde W_z/\tilde W_x$ for perturbations in Couette background flow are shown in Fig.~\ref{FIG-5}. The function $\tilde W_y/\tilde W_x$ is symmetric relative to the center of the flow at $x=0$, while the function $\tilde W_z/\tilde W_x$ is antisymmetric. Since the function $\tilde W_x \to 0$ at the boundaries of the flow, the ratios of vorticity components $\tilde W_y/\tilde W_x$ and $\tilde W_z/\tilde W_x$ tend to $\pm \, \infty$ at the boundaries. The numerical results for the case $\varphi = 90^\circ$ shown in Figs.~\ref{FIG-2}, \ref{FIG-4} and~\ref{FIG-5} coincide with the analytical predictions based on Eqs.~(\ref{S1})-(\ref{N21}). 
For instance, the threshold value of the shear at $L_S / l_0 = 30$ is $S_{\rm cr} \, \tau_0 \approx 0.157$, in agreement with Eq.~(\ref{N21}). The ratio of vorticity components $\tilde W_y/\tilde W_x \approx 0.3$ at $x=0$ for modes with the maximum growth rate of the large-scale instability. This is in agreement with the ratio $\tilde W_y/\tilde W_x$ obtained using Eq.~(\ref{S1}). The maximum growth rates of perturbations of the mean velocity are in agreement with Eqs.~(\ref{S3}) and (\ref{S2}). When the turbulence is switched off, the large-scale instability is not excited. The growing modes with a nonzero frequency discussed in this Section can be regarded as the turbulent analogue of the Tollmien-Schlichting waves. In laminar flows the Tollmien-Schlichting waves are growing solutions of the Orr-Sommerfeld equation, and the molecular viscosity promotes the excitation of the Tollmien-Schlichting waves (see, e.g., \cite{SH01}). On the other hand, the turbulent Tollmien-Schlichting waves are excited by a small-scale sheared turbulence, i.e., by a combined effect of the turbulent Reynolds stress-induced generation of perturbations of the mean vorticity and the background sheared motions. \section{Quadratic velocity shear (Poiseuille flow) in homogeneous turbulence} Now we consider a homogeneous turbulence with an imposed large-scale quadratic velocity shear, ${\bf U}^{(s)} = S_\ast \, x \, (1-x/L_S) \, {\bf e}_y$. 
The equations for the components $\tilde{U}_x$ and $\tilde{U}_y$ of the velocity perturbations read \begin{eqnarray} &&\bigg({\partial \over \partial \,t} + U^{(s)} \, \nabla_y - \nu_{_T} \Delta \bigg)\Delta \tilde{U}_x = l_0^2\,S\, \Delta\, \Big(\beta_0\, \Delta_{H} \tilde{U}_y \nonumber \\ &&\qquad + (\beta_1 - \beta_2) \, \nabla_x\,\nabla_y \, \tilde{U}_x \Big) + S' \, \nabla_y \, \tilde{U}_x\, , \label{AB1}\\ &&\Delta\bigg({\partial \over \partial \,t} + U^{(s)} \, \nabla_y - \nu_{_T} \Delta\bigg) \, \tilde{U}_y = S \big(2 \nabla_y^2 - \Delta \big) \, \tilde{U}_x \nonumber \\ &&\qquad - 2 \, S' \, \nabla_x\, \tilde{U}_x + l_0^2\,S \, \Delta \, \Big[(\beta_1 - \beta_0) \, \nabla_x \nabla_y \, \tilde{U}_y \nonumber \\ &&\qquad + \beta_2 \, (\Delta - \nabla_y^2) \,\tilde{U}_x\Big] + l_0^2 \, S' \, \Big[2 \beta_1 \nabla_x \nabla_y (\nabla_x \tilde{U}_y \nonumber \\ &&\qquad - \nabla_y \tilde{U}_x) + \Delta \, \big[( 2 \beta_2 + \beta_1) \, \nabla_x \tilde{U}_x \nonumber \\ &&\qquad + (\beta_2 - \beta_0) \, \nabla_y \tilde{U}_y \big] \Big] \;, \label{AB2} \end{eqnarray} and the component $\tilde{U}_z$ is determined by the continuity equation $\bec{\nabla} {\bf \cdot} \tilde{\bf U} = 0$, where $S(x) = \nabla_x\, U^{(s)}$ and $S'= \nabla_x\, S$. In order to derive Eqs.~(\ref{AB1}) and (\ref{AB2}) we calculate $\bec{\nabla} {\bf \times} (\bec{\nabla} {\bf \times} \tilde{\bf U})$ using Eq.~(\ref{B4}). We seek a solution of Eqs.~(\ref{AB1}) and (\ref{AB2}) in the form $\propto \Psi(x) \, \exp(\gamma t + i \, \omega t + i \, {\bf K}_H \, {\bf \cdot \, r})$, where ${\bf K}_H$ is the wave vector that is perpendicular to the $x$-axis. After the substitution of this solution into Eqs.~(\ref{AB1}) and (\ref{AB2}) we obtain a system of ordinary differential equations which is solved numerically. 
We consider the solution of Eqs.~(\ref{AB1})-(\ref{AB2}) with the following boundary conditions for a layer of the thickness $L_S$ in the $x$ direction: at $x=\pm \, L_S / 2$ the functions $\tilde{\bf U} = 0$ and $\nabla_x \, (\tilde{U}_{x,y}) = 0$. These boundary conditions with a quadratic large-scale velocity shear correspond to the Poiseuille flow. We show below that in a small-scale turbulence the large-scale Poiseuille flow can be unstable with respect to small perturbations. \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig6.eps} \caption{\label{FIG-6} Range of parameters ($L_S / L_{H}$; $\varphi$) for which the large-scale instability for Poiseuille background flow occurs, and for different values of the large-scale shear: $S_\ast \, \tau_0 = 0.5$ (dashed line) and $S_\ast \, \tau_0 = 0.6$ (solid line). Here $L_S / l_0 = 30$ and $S_\ast=S(x=0)$.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig7.eps} \caption{\label{FIG-7} The growth rate (a) of the large-scale instability $\gamma \, \tau_0\, $ and frequencies $\omega \, \tau_0\, $ of the generated modes (b) versus $L_S / L_{H}$ for Poiseuille background flow and for different angles $\varphi$: $\, \varphi = 84^\circ$ (dashed-dotted line), $\varphi = 87^\circ$ (dashed line), $\varphi = 90^\circ$ (solid line). Here $S_\ast \, \tau_0 = 0.6$, $\, L_S / l_0 = 30$ and $\omega(\varphi = 90^\circ)=0 $.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig8.eps} \caption{\label{FIG-8} The growth rate (a) of the large-scale instability $\gamma \, \tau_0\, $ and frequencies $\omega \, \tau_0\, $ of the generated modes (b) versus $L_S / L_{H}$ for Poiseuille background flow and for different angles $\varphi$: $\, \varphi = 87^\circ$ (dashed-dotted line), $\varphi = 88^\circ$ (dashed line), $\varphi = 90^\circ$ (solid line). 
Here $S_\ast \, \tau_0 = 0.5$, $\, L_S / l_0 = 30$ and $\omega(\varphi = 90^\circ)=0 $.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig9.eps} \caption{\label{FIG-9} The spatial profiles of the ratios of vorticity components $\tilde W_y/\tilde W_x$ (a) and $\tilde W_z/\tilde W_x$ (b) for modes with the maximum growth rates of the large-scale instability in Poiseuille background flow and for different angles $\varphi$: $\, \varphi = 84^\circ$ (dashed-dotted line), $\varphi = 87^\circ$ (dashed line), $\varphi = 90^\circ$ (solid line). Here $S_\ast \, \tau_0 = 0.6$ and $L_S / l_0 = 30$.} \end{figure} The range of parameters ($L_S / L_{H}$; $\varphi$) for which the large-scale instability in the Poiseuille background flow occurs is shown in Fig.~\ref{FIG-6} for different values of the large-scale shear, where $S_\ast=S(x=0)$. The growth rates of this instability and the frequencies of the generated turbulent Tollmien-Schlichting waves are shown in Figs.~\ref{FIG-7} and \ref{FIG-8}. The spatial profiles of the ratios of vorticity components $\tilde W_y/\tilde W_x$ and $\tilde W_z/\tilde W_x$ in Poiseuille background flow for modes with the maximum growth rates of the large-scale instability are shown in Fig.~\ref{FIG-9}. The general behaviour of the large-scale instability in the Poiseuille background flow is similar to that for the Couette background flow. In particular, the growth rates of the large-scale instability increase with the increase of the angle $\varphi$ between the wave vector ${\bf K}_H$ and the direction of the mean sheared velocity ${\bf U}^{(s)}$, reaching the maximum value at $\varphi = 90^\circ$. The frequencies $\omega \, \tau_0\, $ of the turbulent Tollmien-Schlichting waves generated by the large-scale instability decrease with the increase of the angle $\varphi$, and $\omega \to 0$ at $\varphi \to 90^\circ$. 
The values $K_m$ at which the growth rates of the large-scale instability reach the maximum values increase with the increase of the angle $\varphi$. The range for the large-scale instability and the growth rates of perturbations in the Poiseuille background flow increase with the increase of shear. This implies that an increase of shear promotes the large-scale instability. For the Poiseuille flow the large-scale instability can be excited for smaller angles $\varphi$ than for the Couette background flow. On the other hand, the thresholds for the instability in the value of shear and in the value of $L_S / L_{H}$ for Poiseuille background flow are larger than those for the Couette background flow. A difference between the Couette and Poiseuille background flows can also be seen in Figs.~\ref{FIG-5} and~\ref{FIG-9} for the spatial profiles of the ratios of vorticity components $\tilde W_y/\tilde W_x$ and $\tilde W_z/\tilde W_x$. This difference is caused by the different geometries in these flows. In particular, the first spatial derivatives of the flow velocity in the Poiseuille background flow are antisymmetric relative to the center of the flow at $x=0$, while they are symmetric (constant) in the Couette background flow. This is the reason why the spatial profile of $\tilde W_y/\tilde W_x$ is symmetric relative to $x=0$ in the Couette background flow, and it is antisymmetric in the Poiseuille flow. \section{Nonuniform velocity shear in inhomogeneous turbulence} In this Section we consider a more complicated form of nonuniform velocity shear in an inhomogeneous turbulence. For simplicity we consider the case when the small perturbations of the mean velocity $\tilde{\bf U}$ are independent of $y$. 
The equations for the components $\tilde{U}_x$ and $\tilde{U}_y$ of the velocity perturbations in an inhomogeneous turbulence with a nonuniform shear read \begin{eqnarray} \Delta \Big[{\partial \over \partial t} &-& \nu_{_{T}} \, \Delta \Big] \, \tilde{U}_x = \beta_0 \, \Big[l_0^2 \, S \, \Delta - \nabla_x^2 \, \big(l_0^2 \, S\big) \Big] \, \nabla_z^2 \, \tilde U_y \nonumber\\ &-& 2 \, \Big(\nabla_x^2 \, \nu_{_{T}}\Big) \, \nabla_z^2 \, \tilde{U}_x \;, \label{A1}\\ \Big[{\partial \over \partial t} &-& \nu_{_{T}} \Delta \Big] \, \tilde{U}_y = \Big[-S + \beta_{1} \, \nabla_x \big(l_0^2 \, S \, \big) \, \nabla_x \nonumber\\ &+& \beta_{2} \, l_0^2 \, S \, \Delta \Big] \, \tilde{U}_x + \Big(\nabla_x \, \nu_{_{T}} \Big) \, \nabla_x \, \tilde{U}_y \;, \label{A2} \end{eqnarray} and the component $\tilde{U}_z$ is determined by the continuity equation $\bec{\nabla} {\bf \cdot} \tilde{\bf U} = 0$, where $S(x) = \nabla_x\, U^{(s)}$. Equation~(\ref{A2}) is the $y$ component of Eq.~(\ref{B4}) with $\nabla_y \tilde P =0$, while Eq.~(\ref{A1}) is the $x$ component of $\bec{\nabla} {\bf \times} (\bec{\nabla} {\bf \times} \tilde{\bf U})$ determined from Eq.~(\ref{B4}). We consider the solution of Eqs.~(\ref{A1}) and (\ref{A2}) with the following boundary conditions for a layer of the thickness $L_S$ in the $x$ direction: at $x=\pm \, L_S/2$ the functions $\tilde{\bf U} = 0$ and $\nabla_x \, (\tilde{U}_{x,y}) = 0$. 
\begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig10.eps} \caption{\label{FIG-10} The spatial profile of the normalized turbulent viscosity $\nu_{_{T}}^\ast(x)$ for different values of the parameter $\alpha$: $\; \alpha=6$ (solid line), $\alpha=10$ (dashed line), $\alpha=20$ (dotted line), $\alpha=50$ (dashed-dotted line).} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig11.eps} \caption{\label{FIG-11} The mean velocity profile $U^{(s)}(x)/U_{\rm max}$ for different values of the parameter $\alpha$: $\; \alpha=6$ (solid line), $\alpha=10$ (dashed line), $\alpha=20$ (dotted line), $\alpha=50$ (dashed-dotted line), where $U_{\rm max} = u_{\star} / \kappa$.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig12.eps} \caption{\label{FIG-12} The growth rate of the large-scale instability versus $l_{\rm max} / L_z$ in inhomogeneous turbulence with nonuniform velocity shear for different values of the parameter $\alpha$: $\; \alpha = 6$ (solid line), $\alpha = 10$ (dashed line), $\alpha = 20$ (dotted line) and $\alpha = 50$ (dashed-dotted line). Here $l_{\rm max} = l_0(x \to 0.5 \,L_z)$.} \end{figure} We consider a ``log-linear'' velocity profile for the background large-scale flow in an inhomogeneous turbulence. In particular, we use the following relationships for the velocity shear $S(x) = u_{\star}^2 / \nu_{_{T}}(x)$ and the eddy viscosity $\nu_{_{T}}(x) = u_{\star} \, l_0(x)$, where $l_0(x) = \kappa \, \eta(x) \, L_S $ is the turbulence length scale, $\kappa$ is the von K\'arm\'an constant, $u_{\star}$ is the friction velocity, and $\eta(x)$ is the dimensionless function that characterizes the spatial profile of the background velocity shear and the inhomogeneity of small-scale turbulence (see below). These relationships are usually used for the logarithmic boundary layer profiles (see, e.g., \cite{MY75}). 
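As an illustration of these log-layer relations (a standalone sketch: the mixing length is taken here simply proportional to the distance from the wall, not the profile $\eta(x)$ specified below), integrating $S(x) = u_\star^2/\nu_{_T}(x)$ with $\nu_{_T} = u_\star \, \kappa \, x$ recovers the classical logarithmic velocity profile:

```python
import numpy as np

u_star, kappa = 1.0, 0.4
x0 = 1e-3                                    # lower cutoff (roughness height)
x = np.geomspace(x0, 1.0, 2001)              # wall-normal coordinate

nu_T = u_star * kappa * x                    # eddy viscosity with mixing length kappa x
S = u_star**2 / nu_T                         # shear S(x) = u_star^2 / nu_T(x)

# mean velocity from U(x) = int_{x0}^{x} S dx'  (trapezoidal rule)
U = np.concatenate(([0.0], np.cumsum(0.5 * (S[1:] + S[:-1]) * np.diff(x))))
U_log = (u_star / kappa) * np.log(x / x0)    # classical logarithmic profile

assert np.max(np.abs(U - U_log)) < 1e-3 * U_log[-1]
```

The velocity scale $u_\star/\kappa$ that appears in the caption of Fig.~\ref{FIG-11} is exactly the prefactor of the logarithm here.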
The spatial profile $\eta(x)$ for $0 \leq x \leq L_S/2$ is chosen in the form \begin{eqnarray} \eta(x) = a_1 \,\big[1-\exp(-a_0\, \tilde x)\big] + a_2\, \tilde x + a_3\, \tilde x^2 + a_4\, \tilde x^3 \;, \nonumber\\ \label{B30} \end{eqnarray} where $\tilde x= x/L_S-1/2$, and the coefficients $a_k$ are determined by the following conditions: at $x=0$ the functions $\eta = 1$, $\, \nabla_x \eta = 0$, $\, \nabla^2_x \eta = 0$, $\, \nabla^3_x \eta = 0$, and at $x=-L_S/2$ the derivative $\nabla_x \eta = \alpha / L_S$. Here $\alpha$ is a free parameter that characterizes the inhomogeneities of small-scale turbulence. The spatial profile of the normalized turbulent viscosity $\nu_{_{T}}^\ast(x) = \nu_{_{T}}(x) / (\kappa \, u_{\star} \, L_S) \equiv \eta(x)$ is shown in Fig.~\ref{FIG-10} for different values of the parameter $\alpha$. The function $\nu_{_{T}}^\ast(x)$ is chosen to be symmetric relative to the point $x=0$. The minimum possible value of the parameter $\alpha$ is $\alpha = 6$. We have chosen the velocity shear profile $U^{(s)}(x)$ so that the logarithmic velocity profile near the boundaries can be matched with the linear shear velocity for the central part of the background flow. Such a flow is typical of the atmospheric boundary layer. Figure~\ref{FIG-11} shows the mean velocity profile $U^{(s)}(x)/U_{\rm max}$ for different values of the parameter $\alpha$, where $U_{\rm max} = u_{\star} / \kappa$. \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig13.eps} \caption{\label{FIG-13} (a). The range of parameters ($l_{\rm max} / L_z$; $\alpha$) for which the large-scale instability in inhomogeneous turbulence with nonuniform velocity shear occurs. (b). The maximum growth rate $\gamma_{\rm max} \, \tau_0$ of the large-scale instability versus the parameter $\alpha$. 
Here $l_{\rm max} = l_0(x \to 0.5 \,L_z)$.} \end{figure} \begin{figure} \vspace*{2mm} \centering \includegraphics[width=7cm]{Fig14.eps} \caption{\label{FIG-14} The spatial profiles of the ratios of vorticity components $\tilde W_y/\tilde W_x$ (a) and $\tilde W_z/\tilde W_x$ (b) for modes with the maximum growth rates of the large-scale instability in inhomogeneous turbulence with nonuniform velocity shear for different values of the parameter $\alpha$: $\; \alpha = 6$ (solid line), $\alpha = 20$ (dashed line) and $\alpha = 50$ (dashed-dotted line). Here $l_{\rm max} = l_0(x \to 0.5 \,L_z)$.} \end{figure} We seek a solution of Eqs.~(\ref{A1}) and (\ref{A2}) in the form $\propto \Psi(x) \, \exp(\gamma t + i K_z \, z)$. After the substitution of this solution into Eqs.~(\ref{A1}) and (\ref{A2}) we obtain a system of ordinary differential equations, which is solved numerically. The growth rate $\gamma \tau_0$ of the large-scale instability versus $l_{\rm max} / L_z$ is shown in Fig.~\ref{FIG-12}, where $L_z= 2 \pi /K_z$ is the size of perturbations in the $z$ direction and $l_{\rm max} = \kappa \, L_S$ is the maximum value of the turbulent length scale $l_0$ when $\eta \to 1$ $\, (x \to 1)$. The range of parameters ($l_{\rm max} / L_z$; $\alpha$) for which the large-scale instability occurs is shown in Fig.~\ref{FIG-13}a. The vertical dashed line in Fig.~\ref{FIG-13} indicates that the minimum possible value of the parameter $\alpha$ is $\alpha_{\rm min} = 6$. Figure~\ref{FIG-13}b demonstrates that increasing the parameter $\alpha$ increases the maximum growth rate of the large-scale instability. The growth rate of the large-scale instability for the inhomogeneous turbulence with a large-scale nonuniform shear is much larger than those for the Couette and Poiseuille background flows. 
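The normal-mode reduction described above (substituting $\Psi(x)\,\exp(\gamma t + i K_z \, z)$ and discretizing in $x$) can be illustrated on a model problem. The sketch below is not the full system of Eqs.~(\ref{A1}) and (\ref{A2}); it applies the same recipe to a simple one-dimensional diffusion operator with Dirichlet boundary conditions, where the leading eigenvalue is known analytically, and all parameter values are chosen for illustration only.

```python
import numpy as np

# Schematic normal-mode analysis: substituting Psi(x)*exp(gamma*t + i*K_z*z)
# into a model 1-D diffusion equation, gamma*Psi = nu*(Psi'' - K_z^2 * Psi),
# reduces the PDE to a matrix eigenvalue problem after finite-difference
# discretization in x.
nu, L, K_z, N = 1.0, 1.0, 2.0 * np.pi, 200
h = L / (N + 1)                       # grid spacing, interior points only

# Second-derivative matrix (central differences, Dirichlet BCs at x=0, x=L)
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2

A = nu * (D2 - K_z**2 * np.eye(N))    # discretized linear operator
gamma = np.linalg.eigvals(A).real
gamma_max = gamma.max()               # leading growth rate (negative here: decay)

# Analytic leading eigenvalue for this model: -nu*(pi^2/L^2 + K_z^2)
print(gamma_max, -nu * (np.pi**2 / L**2 + K_z**2))
```

In the actual problem the operator contains the nonuniform $\nu_{_{T}}(x)$ and shear terms, so the spectrum must be computed numerically exactly as above, and an instability corresponds to $\mathrm{Re}\,\gamma_{\rm max} > 0$.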
The spatial profiles of the ratios of vorticity components $\tilde W_y/\tilde W_x$ and $\tilde W_z/\tilde W_x$ for modes with the maximum growth rates of the large-scale instability are shown in Fig.~\ref{FIG-14}. These profiles are different from those for the Couette and Poiseuille background flows. The components $\tilde W_y$ and $\tilde W_z$ of perturbations of the mean vorticity in the central part of the flow are usually much smaller than the component $\tilde W_x$. Inspection of Figs.~\ref{FIG-12} and~\ref{FIG-13}a shows that the instability occurs only when the parameter $l_{\rm max} / L_z < 0.17$. The characteristic time scale for the instability is much larger than the characteristic turbulent time. This justifies the separation of scales required for the validity of the mean-field theory used here. Note that in the interval $-L_S / 2 \leq x \leq 0$ the results discussed in this Section provide a stability theory for the turbulent boundary layer. Our study shows that the turbulent boundary layer can be unstable under certain conditions. \section{Discussion} In this study the theoretical approach proposed in \cite{EKR03} is further developed and applied to investigate the large-scale instability in a nonhelical turbulence with a nonuniform shear and a more general form of the perturbations of the mean vorticity. In particular, we consider three types of the background large-scale sheared flows imposed on small-scale turbulence: Couette flow (linear velocity shear) and Poiseuille flow (quadratic velocity shear) in a small-scale homogeneous turbulence, and a more complicated nonuniform velocity shear with the logarithmic velocity profile near the boundaries matched with the linear shear velocity for the central part of the background flow. This nonuniform velocity shear is imposed on an inhomogeneous turbulence. The latter flow is typical for the atmospheric boundary layer. 
We show that the large-scale Couette and Poiseuille flows imposed on a small-scale turbulence are unstable with respect to small perturbations due to the excitation of the large-scale instability. This instability causes the generation of large-scale vorticity and the formation of large-scale vortical structures. The size of the formed vortical structures in the direction of the background velocity shear is much larger than the sizes of the structures in the directions perpendicular to the velocity shear. Therefore, the large-scale structures formed during this instability are stretched along the mean sheared velocity. An increase of shear promotes the large-scale instability. The thresholds for the excitation of the large-scale instability, in the value of shear and in the aspect ratio of structures, for the Poiseuille background flow are larger than those for the Couette background flow. The growth rate of the large-scale instability for the inhomogeneous turbulence with the ``log-linear'' velocity shear is much larger than those for the Couette and Poiseuille background flows. The characteristic spatial and time scales for the instability are much larger than the characteristic turbulent scales. This justifies the separation of scales required for the validity of the mean-field theory applied in the present study. The large-scale instability results in the excitation of the turbulent Tollmien-Schlichting waves. The mechanism for the excitation of these waves is different from that for the Tollmien-Schlichting waves in laminar flows. In particular, the molecular viscosity plays a crucial role in the excitation of the Tollmien-Schlichting waves in laminar flows. By contrast, the turbulent Tollmien-Schlichting waves are excited by a combined effect of the turbulent Reynolds stress-induced generation of perturbations of the mean vorticity and the background sheared motions. 
The energy of these waves is supplied by the small-scale sheared turbulence, and the off-diagonal terms in the turbulent viscosity tensor play a crucial role in the excitation of the turbulent Tollmien-Schlichting waves. Note that this study is principally different from the problems of transition to turbulence, in which the stability of the laminar Couette and Poiseuille flows is investigated (see, e.g., \cite{DR81,SH01,CJJ03,BOH88,REM03,ESH07}, and references therein). Here we do not analyze a transition to turbulence. We study the large-scale instability caused by an effect of the small-scale anisotropic turbulence on the mean flow. This anisotropic turbulence is produced by an interaction of equilibrium large-scale Couette or Poiseuille flows with a small-scale isotropic background turbulence produced by, e.g., a stirring force. The anisotropic velocity fluctuations are generated by tangling of the mean-velocity gradients with the velocity fluctuations of the background turbulence \cite{EKR03,EKRZ02}. The ``tangling'' mechanism is a universal phenomenon that was introduced in \cite{W57,BH59} for a passive scalar and in \cite{G60,M61} for a passive vector (magnetic field). The Reynolds stresses in a turbulent flow with a mean velocity shear are another example of tangling anisotropic fluctuations \cite{L67}. For instance, these velocity fluctuations are anisotropic in the presence of shear and have a steeper spectrum $\propto k^{-7/3}$ than, e.g., a Kolmogorov background turbulence (see, e.g., \cite{L67,WC72,SV94,IY02,EKRZ02}). The anisotropic velocity fluctuations determine the effective force and the Reynolds stresses in Eq.~(\ref{B15}). This is the reason for the new terms $\propto \beta_n \, l_0^2$ appearing in Eqs.~(\ref{AA1})-(\ref{A2}). The results obtained in this study may be relevant to various turbulent astrophysical, geophysical and industrial flows. 
Turbulence with a large-scale velocity shear is a universal feature in astrophysics and geophysics. In particular, the analyzed effects may be important, e.g., in accretion disks, extragalactic clusters, merged protostellar and protogalactic clouds. Sheared motions between interacting clouds can cause an excitation of the large-scale instability, which results in the generation of the mean vorticity and the formation of large-scale vortical structures (see, e.g., \cite{P80,ZN83,C93}). Dust particles can be trapped by the vortical structures, enhancing the agglomeration of material and the formation of particle clusters \cite{BS95,BR98,EKR98,CH00,JAB04}. The suggested mechanism can be used in the analysis of the flows associated with Prandtl's turbulent secondary flows (see, e.g., \cite{P52,B87}). However, in this study we have investigated only simple physical mechanisms to describe an initial (linear) stage of the formation of vortical structures. The simple models considered in this study can only mimic the flows associated with turbulent secondary flows. Clearly, comprehensive numerical simulations of the nonlinear problem are required for a quantitative description of the turbulent secondary flows. \begin{acknowledgments} This research was supported in part by the Israel Science Foundation governed by the Israeli Academy of Science, and by the Israeli Universities Budget Planning Committee (VATAT). \end{acknowledgments}
\section{Introduction} Recent advances in deep learning based visual recognition methods have delivered enormous benefits for computer-assisted interventions (CAI)~\citep{ward2021computer}. CAI tools have primarily focused on gathering specific information, such as the presence or location of lesions. Nonetheless, recent improvements in the accuracy of image recognition tasks have led to an expansion of their scope to several other areas, including intra-operative decision support systems~\citep{bouget2017}. These applications provide contextual information to the surgeon during surgery, post-operative feedback for surgical training~\citep{bhatia2007,sarikaya2017} and video content analysis~\citep{wang2019graph}. More recently, CAI systems capable of effectively performing sub-tasks such as surgical phase recognition and identification of tool presence, as well as tool recognition, localization and instance-based segmentation, have been receiving increased attention~\citep{bouget2017}. The development of these task-based automated approaches can improve surgical care and patient safety and alleviate surgeon fatigue. Deep learning based surgical tool detection has attracted a lot of attention in recent years. However, most state-of-the-art (SOTA) methods employ fully supervised approaches~\citep{jin2018, zhang2020}, and only a few weakly supervised methods, mostly classification models for determining tool presence~\citep{vardazaryan2018}, have been proposed. Nonetheless, training complex deep learning (DL) models in the supervised setting requires difficult-to-acquire, precisely annotated datasets, whose creation is time consuming and susceptible to intra- and inter-observer annotation bias. 
As a result, only a few labeled surgical tool datasets are publicly available~\citep{sarikaya2017,jin2018}, and this lack of annotated datasets has essentially hindered the development of robust and generalizable deep architectures for surgical instrument detection. \begin{figure}% \captionsetup{justification=justified} \subfloat[\centering]{{\includegraphics[scale=0.4]{Comparison.pdf} }}% \subfloat[\centering]{{\includegraphics[scale=0.52]{fig2augmentation.pdf}}}% \caption{\textbf{Comparison at different percentages of supervision and augmentation strategies in a Teacher-Student paradigm.} (a) The proposed approach efficiently leverages unlabeled data and produces substantial improvement over the supervised baseline and over the SOTA Unbiased Teacher (Ubteacher)~\cite{unbiased} framework. (b) Data augmentation workflow of the proposed Teacher-Student mutual learning approach. Unlabeled data are given with weak augmentation to the Teacher and with strong augmentation to the Student. The Student receives pseudo labels from the Teacher after NMS and confidence thresholding. }% \label{Overall result and augmentation}% \end{figure} Alternatively, the annotation cost could be greatly mitigated by exploiting unlabeled data through efficient semi-supervised learning (SSL) frameworks. The core idea of SSL is to extract information from the unlabeled data that is essential for label prediction. One solution is to train a network to solve a pre-defined pretext task (a Teacher model generating pseudo labels) and then use the learned knowledge in the downstream task (the Student network). Recently, SSL has shown promising outcomes in improving model performance and is receiving growing attention from the computer vision research community \citep{van2020,sohn2020fixmatch}. Despite this progress, most of these advances are in the domain of image classification rather than object detection, as bounding box annotations require more time and effort to generate. 
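For classification, the pseudo-labeling idea behind SSL methods such as FixMatch~\citep{sohn2020fixmatch} reduces to a few lines: keep only the Teacher's most confident predictions and use them as hard labels for the Student. The sketch below is a generic illustration with toy probabilities, not part of the detection pipeline proposed later; the threshold value is illustrative.

```python
import numpy as np

def pseudo_labels(teacher_probs, tau=0.95):
    """Keep only unlabeled samples whose teacher confidence exceeds tau.

    teacher_probs: (N, C) class probabilities predicted by the teacher.
    Returns (indices of retained samples, their hard pseudo labels).
    """
    conf = teacher_probs.max(axis=1)
    keep = np.where(conf >= tau)[0]
    return keep, teacher_probs[keep].argmax(axis=1)

# Toy example: 3 unlabeled samples, 2 classes
probs = np.array([[0.97, 0.03],   # confident -> pseudo label 0
                  [0.60, 0.40],   # ambiguous -> discarded
                  [0.02, 0.98]])  # confident -> pseudo label 1
keep, labels = pseudo_labels(probs)
print(keep, labels)  # -> [0 2] [0 1]
```

Object detection needs more than this single threshold, which is exactly the difficulty discussed next.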
SSL for detection can be approached by adapting SOTA image classification methods such as FixMatch~\citep{sohn2020fixmatch} to object detection. However, unique characteristics of detection, such as foreground-background and foreground class imbalance, make it interact poorly with those methods. The class imbalance problem may greatly impede the use of pseudo-labeling based training pipelines, since Teacher-generated pseudo labels will be overly biased towards dominant classes and ignore minority, less dominant classes. As a result, these models in their vanilla arrangement will exacerbate the class-imbalance problem and cause severe overfitting. To overcome these issues, we propose a jointly trained Teacher-Student model on the m2cai16-tool-locations dataset~\citep{jin2018}, initialised by a supervised detector. We argue that slowly updating the Teacher via an exponential moving average (EMA) of the Student can alleviate the pseudo-labeling bias problem and improve pseudo label quality, and hence the overall performance. Additionally, we propose a multi-class distance and margin-based classification loss in the ROI head of the detector network to boost the classification performance. This is achieved by maximising the distance between foreground classes and the background. To the best of our knowledge, our approach is the first effort towards leveraging the Teacher-Student joint training paradigm for addressing the data scarcity problem in surgical tool detection applications. We employ strong and weak augmentation pipelines to improve model robustness (Fig.~\ref{Overall result and augmentation}(b)). Our proposed pipeline outperforms the supervised baseline and other SOTA semi-supervised methods in terms of classification and localisation performance (Fig.~\ref{Overall result and augmentation}(a)). 
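The weak/strong augmentation split of Fig.~\ref{Overall result and augmentation}(b) can be sketched as follows. The actual pipeline follows \cite{unbiased}; this minimal numpy version illustrates only a representative subset (horizontal flip for the weak branch; grayscale conversion and a cutout patch for the strong branch), and the patch size and flip probability are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(img):
    """Weak augmentation: random horizontal flip (fed to the Teacher)."""
    return img[:, ::-1] if rng.random() < 0.5 else img

def strong_augment(img, patch=16):
    """Strong augmentation, illustrative subset (fed to the Student):
    grayscale conversion followed by one cutout patch (DeVries & Taylor)."""
    gray = img @ np.array([0.299, 0.587, 0.114])   # luminance grayscale
    out = np.repeat(gray[..., None], 3, axis=-1)
    y = rng.integers(0, img.shape[0] - patch)
    x = rng.integers(0, img.shape[1] - patch)
    out[y:y + patch, x:x + patch] = 0.0            # zero out a random patch
    return out

img = rng.random((64, 64, 3))     # stand-in for a surgical video frame
teacher_view = weak_augment(img)  # weakly augmented view
student_view = strong_augment(img)  # strongly augmented view
```

The asymmetry is deliberate: the Teacher sees a nearly clean frame so its pseudo labels stay reliable, while the Student must match them under heavy perturbation.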
In the rest of the paper, we discuss related work (section \ref{Literature Review}), materials and method (section \ref{Materials and Method}), quantitative and qualitative results (section \ref{results}), ablation studies (section \ref{ablation study}) and lastly discussion and conclusion (section \ref{conclusion}). \section{Related Work}\label{Literature Review} Some of the early works on surgical tool detection used radio frequency identification tags \citep{kranzfelder2013}, the Viola-Jones detection algorithm~\citep{lalys2011}, and segmentation, contour delineation and three-dimensional modeling \citep{speidel2009}. With the advent of deep learning approaches based on convolutional neural networks, computer vision methods have advanced remarkably and demonstrated promising outcomes~\citep{imagenet}. In the surgical domain, several works have leveraged deep learning to obtain SOTA performance on surgical instrument detection~\citep{jin2018,sahu2016tool,twinanda2016,endonet}. Most studies on surgical tool detection have proposed supervised pipelines or have targeted only frame-level tool presence detection. For example, AGNet \citep{agnet} used global and local prediction networks to obtain visual cues for tool presence detection and showed a significant improvement over the m2cai16-tool challenge \citep{raju2016m2cai} winners. Jin \emph{et al.}~\citep{jin2018} proposed a region-based convolutional neural network for surgical skill assessment, adapted to tool presence detection, spatial localization and tracking. The authors also extended the m2cai16-tool dataset \citep{dataset} to include tool bounding boxes (subsequently named m2cai16-tool-locations), which we have used in this work. Sarikaya \emph{et al.} used image and temporal motion cues to train multi-modal CNN models \citep{sarikaya2017} for tool detection and localization in robotic-assisted surgical training task videos. 
Tool detection and pose estimation were also studied in \citep{reiter2012}, but the approach was limited to robotic arms that return kinematic data. Shi \emph{et al.} proposed a lightweight attention-guided framework \citep{shi2020} for tool detection and conducted an ablation study on three different datasets (two public datasets, EndoVis Challenge \citep{kurmann2017} and ATLAS Dione \citep{sarikaya2017}, and one self-prepared, cholec80-locations). However, their model performed well on all tools except the grasper and irrigator classes. In another study \citep{zhang2020}, the irrigator was the worst performing instrument, with an average precision of 41.6\%, followed by the grasper with 54.1\%, in a supervised setting at an IoU threshold of 50\%. A ghost feature maps-based pipeline was used to reduce the computational burden of tool detection in \citep{yang2021efficient}. A CNN-based hidden Markov model was proposed by Twinanda in \cite{twinanda2016} for surgical tool detection from laparoscopic videos. A combination of a CNN to extract spatial features and a long short-term memory (LSTM) network for temporal cues was proposed for surgical tool detection from laparoscopic videos~\citep{mishra2017}. Although the results of these approaches have been mostly encouraging, they report mAP at only a single IoU threshold \citep{sarikaya2017, shi2020:IEEE}, which is not sufficient to gauge classification and localisation performance. Furthermore, previous approaches require completely labeled datasets to train the model. Such datasets are either scarcely available, or the process of annotating them can introduce other issues such as unintended biases in the trained model. In this work, we aim to demonstrate the advantages of an SSL approach and propose a novel semi-supervised Teacher-Student framework to alleviate the limited data problem and the annotation cost of training on larger datasets. 
Our literature search revealed only two studies on semi-supervised learning in the medical domain: one is based on a cataract surgery dataset \citep{jiang2021semi}, while the other \citep{yoon2020semi} used a tracker to detect instruments from unlabeled private surgery videos. To the best of our knowledge, this is the first approach that investigates the effectiveness of unlabeled data through a Teacher-Student learning pipeline for tool detection on a minimally invasive surgery dataset. We report results from our model in terms of mAP at various IoU thresholds to demonstrate the effectiveness of our approach in detecting and localising surgical tools. \section{Materials and Method}\label{Materials and Method} \subsection{Dataset} \label{dataset} In this work, we use an extended version of the m2cai16-tool dataset, which was originally released for the M2CAI 2016 Tool Presence Detection Challenge \citep{endonet}. This dataset consists of 15 videos, each 20 to 75 minutes long, of cholecystectomy procedures performed at the University Hospital of Strasbourg in France. Downsampling at 1 fps leaves 23,000 frames annotated with tool presence labels. Later, the m2cai16-tool-locations dataset was built with spatial bounding box annotations~\citep{jin2018}. This dataset consists of a total of 2812 frames that were annotated under supervision and spot-checking from clinical experts. We use 80\%, 10\%, and 10\% of the data for the training, validation and test splits, respectively. The annotation breakdown per class is given in the \textbf{supplementary material (Table 1)}, and the tool instances with example box annotations are presented in Fig.~\ref{tools and labels}. We use average precision (AP) computed per class and mean average precision (mAP) over all seven classes, which are the standard object detection evaluation metrics. These metrics are evaluated at different IoU thresholds, usually denoted as \(mAP_{IoU-threshold}\). 
We report results at IoU thresholds of 50, 75 and 50:95 (the average of AP values for IoU thresholds from 50 to 95 with an interval of 5), as well as AP for medium and large objects. \begin{figure}[t!] \captionsetup{justification=justified} \includegraphics[width= \textwidth]{tools_and_labels.pdf} \caption{(Top) Samples from \textit{m2cai16-tool-locations} dataset with representative classes of seven tools. (Bottom) Example frames with bounding box annotations, where the color of the box indicates the tool class} \label{tools and labels} \end{figure} \subsection{Data Augmentation} We use two data augmentation strategies in this work, which we refer to as weak and strong augmentations (Fig.~\ref{Overall result and augmentation}(b)). For the weak augmentation, we apply random horizontal flips, whilst for the strong augmentation, we randomly perform several photometric augmentations such as grayscale conversion, color jittering, Gaussian blur, patch masking and cutout patches~\cite{devries2017}. For the complete description of the data augmentation with parameter values, please refer to \cite{unbiased}. \section{Method} \label{method} In this work we address the multi-instance surgical tool detection problem in a semi-supervised setting. Let the labeled training set be denoted as \(D_s = \{x_i^s, y_i^s\}_{i=1}^{N_s}\) and the unlabeled set as \(D_u = \{x_i^u\}_{i=1}^{N_u}\), where \(N_s\) and \(N_u\) are the numbers of supervised and unsupervised training samples and \(y^s\) represents the bounding box annotations of each labeled image \(x^s\). Here, \(y^s\) consists of the bounding boxes of all object instances, the height and width of the image, and the instance category names. It is important to mention that, since all training samples actually carry labels, during training we remove the labels of the portion we categorise as unlabeled. The overall training pipeline is divided into two stages, as shown in Fig.~\ref{General Block DIagram}. 
The first stage is the initialization stage (section \ref{initial}), while the second is the Teacher-Student joint learning mechanism (section~\ref{joint learning}). In the second stage, the Teacher generates pseudo labels and the Student network is trained on both the pseudo labeled and the supervised data. Each stage is detailed below, along with the Student learning and Teacher update scheme and the margin-based classification loss. \subsection{Initialization stage} \label{initial} The initialization stage acts as a trigger point for Teacher-Student joint learning. It sets the stage for the Teacher model to generate high-quality pseudo labels for better Student learning. In this stage, we exploit the available labeled data \(D_s = \{x_i^s, y_i^s\}_{i=1}^{N_s}\) to train the Faster-RCNN detector model (\(\theta\)) with the supervised loss \( \mathcal{L}_{sup} \). The standard Faster-RCNN model makes use of four losses: the RPN classification loss \( \mathcal{L}_{cls}^{rpn} \), the RPN regression loss \( \mathcal{L}_{reg}^{rpn} \), the ROI classification loss \( \mathcal{L}_{cls}^{roi} \) and the ROI regression loss \( \mathcal{L}_{reg}^{roi} \) (Eq. (1)). \begin{equation} \mathcal{L}_{sup} = \sum_i^{N_s} \mathcal{L}_{cls}^{rpn} (x_i^s, y_i^s) + \mathcal{L}_{reg}^{rpn} (x_i^s, y_i^s) + \mathcal{L}_{cls}^{roi} (x_i^s, y_i^s) + \mathcal{L}_{reg}^{roi} (x_i^s, y_i^s) \end{equation} The weights and architecture of the model trained during this initialization phase are then copied to both the Student and Teacher models \begin{math} (\theta_\mathcal{T} \leftarrow \theta, \theta_\mathcal{S} \leftarrow \theta) \end{math}. The trained detector from this stage provides a good initialization for the next stage, where we further exploit unsupervised data to improve object detection. \begin{figure}[h!] \captionsetup{justification=justified} \includegraphics[width= \textwidth, keepaspectratio]{General_Block_Diagram.pdf} \caption{Overview of the proposed surgical tool detection model. 
It consists of two modules: 1) an initialisation module, where a supervised model makes use of strongly augmented labeled data; 2) a Teacher-Student mutual learning module, where the Student is trained on strongly augmented unlabeled data with Teacher-generated pseudo labels. The Student transfers its learned weights to the Teacher gradually through an Exponential Moving Average (EMA).} \label{General Block DIagram} \end{figure} \subsection{Teacher-Student joint learning stage} \label{joint learning} The proposed knowledge distillation framework leverages Student and Teacher joint training to address the lack-of-data problem. During training, the Teacher generates pseudo labels on unlabeled data and the Student is trained on those labels. Thus, a continuously learning Student passes on the learned knowledge to the Teacher. We posit that this evolving mutual learning results in better detection performance by generating stable and reliable pseudo labels. The weak and strong augmentation pipelines ensure reliable pseudo label generation by the Teacher and diversity in the Student inputs, respectively. \subsection{Student learning and Teacher update scheme} We tackle the pseudo-label noise problem, which may cause severe performance degradation \citep{sohn2020fixmatch}, by confidence thresholding (\(\tau\)). Although this step would suffice for image classification, object detection requires additional measures, since duplicated bounding box predictions and class-imbalanced predictions are typically encountered in this setting. We address the duplicated box prediction problem by applying class-wise non-maximum suppression (NMS) before the confidence thresholding step. 
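The pseudo-label refinement just described, class-wise NMS followed by confidence thresholding, can be sketched as follows. The box format, thresholds and toy inputs are illustrative; the actual implementation lives inside the detectron2 pipeline.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def filter_pseudo_labels(boxes, scores, classes, tau=0.7, nms_iou=0.5):
    """Class-wise NMS, then confidence thresholding (Teacher -> Student labels)."""
    keep = []
    for c in np.unique(classes):
        # candidates of class c, sorted by descending score
        idx = [i for i in np.argsort(-scores) if classes[i] == c]
        while idx:
            best = idx.pop(0)
            keep.append(best)
            # drop duplicates that overlap the kept box too much
            idx = [i for i in idx if iou(boxes[best], boxes[i]) < nms_iou]
    keep = [i for i in keep if scores[i] >= tau]   # confidence threshold
    return sorted(keep)

# Toy Teacher output: two overlapping class-0 boxes and one low-score class-1 box
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.6])
classes = np.array([0, 0, 1])
print(filter_pseudo_labels(boxes, scores, classes))  # -> [0]
```

Here NMS removes the duplicated class-0 box and the threshold $\tau=0.7$ then discards the uncertain class-1 detection, so only one reliable pseudo label reaches the Student.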
Since simple confidence thresholding only removes samples with low confidence on the predicted object categories and does not take into account the quality of the bounding box locations, we do not apply an unsupervised loss to bounding box regression. The unsupervised loss and the update of the Student weights $\theta_{\mathcal{S}}$, which combines the supervised $\mathcal{L}_{sup}$ and unsupervised $\mathcal{L}_{unsup}$ losses, are given by: \begin{align} \mathcal{L}_{unsup} = \sum_i^{N_u} \mathcal{L}_{cls}^{rpn} (x_i^u, \Tilde{y}_i^u) + \mathcal{L}_{cls}^{roi} (x_i^u, \Tilde{y}_i^u)\\ \theta_{\mathcal{S}} \leftarrow \theta_{\mathcal{S}} + \gamma \frac{\partial(\mathcal{L}_{sup} + \lambda_u \mathcal{L}_{unsup})}{\partial\theta_\mathcal{S}}, \end{align} \noindent{where} \(\gamma\) is the learning rate and \(\lambda_u\) is the unsupervised loss weight. The overall unsupervised loss in Eq. (2) consists of the sum of the RPN and ROI head classification losses. Eq. (3) depicts the Student weight update scheme, which includes both supervised and unsupervised losses with a loss weight parameter $\lambda_u$. Finally, we refine the Teacher model by using an EMA, following \emph{Mean Teacher}, to slowly update the Teacher network, which in turn generates stable and reliable pseudo labels. The update can be represented as: \begin{equation} \theta_{\mathcal{T}} \leftarrow \alpha\theta_{\mathcal{T}} + (1- \alpha)\theta_{\mathcal{S}}, \end{equation} \noindent{where} \(\alpha\) is the EMA rate, and \(\theta_\mathcal{T}\), \(\theta_\mathcal{S}\) are the network weights of the Teacher and Student. \subsection{Logistic loss with added margin and distance penalization} In the surgical domain, foreground class imbalance exists in every dataset, since tool usage frequency varies from one tool to another \citep{mishra2017}. 
In this work, we address the foreground-background class imbalance problem by introducing a margin-based multi-class loss function that tries to maximise the foreground-background distance. Unlike the focal or cross-entropy losses, our proposed loss predicts the relative distance between inputs. Specifically, we split the classification logits between foreground and background instances and compute their \textit{sigmoid} probabilities, respectively. We then sum the \textit{softmax} of the probabilities over the whole batch for the foreground \(\rho\) and background \(\beta\) instances. These quantities are then used to maximise the foreground-background distance in the final loss, which takes the form of a logistic classification loss defined as: \begin{equation}{\label{eq:5}} \mathcal{L}_{cls}^{roi} = \sum_n w_l \log(1+ \frac{e^{ \textstyle s\cdot(\beta-\rho+\sigma)}}{s}), \end{equation} \noindent{where} $n$ is the mini-batch size, \(w_l\) is the loss weight, \(s\) is the smoothness parameter and \(\sigma\) denotes the margin. Apart from the multi-class loss, the Teacher update with EMA also helps reduce pseudo label bias, since the new Teacher is regularised by the previous Teacher model, which prevents drastic movement of the decision boundary towards under-represented classes. 
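A minimal numpy reading of Eq.~(\ref{eq:5}): the logits are split into foreground and background parts, their sigmoid probabilities are aggregated over the batch, and a small foreground-background separation is penalized with margin \(\sigma\). Aggregating by a batch mean is our simplifying assumption for this sketch, and all parameter values are illustrative.

```python
import numpy as np

def margin_logistic_loss(logits, targets, w_l=1.0, s=1.0, sigma=0.2):
    """Distance/margin-based classification loss, cf. Eq. (5).

    logits: (n, C) classification logits; targets: (n,) class indices.
    Foreground logits sit at the target class; the rest are background.
    """
    onehot = np.eye(logits.shape[1], dtype=bool)[targets]
    fg_prob = 1.0 / (1.0 + np.exp(-logits[onehot]))   # sigmoid, foreground
    bg_prob = 1.0 / (1.0 + np.exp(-logits[~onehot]))  # sigmoid, background
    rho, beta = fg_prob.mean(), bg_prob.mean()        # batch aggregation (assumption)
    # Loss grows when the foreground-background gap (rho - beta) falls below sigma
    return w_l * np.log1p(np.exp(s * (beta - rho + sigma)) / s)

targets = np.array([0, 1, 2])
good = np.full((3, 3), -4.0)
good[np.arange(3), targets] = 4.0   # well-separated logits: rho >> beta
bad = np.zeros((3, 3))              # no separation: rho == beta
assert margin_logistic_loss(good, targets) < margin_logistic_loss(bad, targets)
```

The comparison at the end illustrates the intended behavior: the loss is strictly smaller once foreground probabilities exceed background probabilities by more than the margin.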
\begin{algorithm} \caption{Multi-class distance and margin based classification loss}\label{loss} \begin{algorithmic}[1] \Procedure{loss}{$logits, targets$} \State $ classes \gets class\_indices$ \State $fg\_logits \gets logits(targets = classes)$ \State $ bg\_logits \gets logits(targets != classes)$ \State $fg\_prob \gets sigmoid(fg\_logits)$ \State $bg\_prob \gets sigmoid(bg\_logits)$ \State $\textcolor{blue}{\rho} \gets \sum softmax(fg\_prob)$ \State $\textcolor{blue}{\beta} \gets \sum softmax(bg\_prob)$ \State $ loss \gets Eq:5 $ \EndProcedure \end{algorithmic} \end{algorithm} \section{Experiments and results}\label{results} \subsection{Implementation Details} The implementation of our proposed framework is based on the Faster-RCNN detector model with a ResNet50-FPN backbone, whose network weights are initialized from an ImageNet pretrained model. We use a confidence threshold (\(\tau\)) of 0.7, a regularization coefficient for the unsupervised loss (\(\mathcal{\lambda}_u\)) of 0.2 and an EMA rate (\(\alpha\)) of 0.9996. We use \textit{WarmupMultiStepLR} as the learning rate (\(\gamma\)) scheduler in the initialization stage and a constant learning rate of 0.01 for the Teacher-Student mutual learning stage. In the initialization stage, we use strong augmentation, while during the Teacher-Student mutual learning, we use weak augmentation for the Teacher and strong augmentation for the Student. We report results in terms of mAP at different IoU thresholds. We use a batch size of 8 (4 labeled images and 4 unlabeled images) throughout the experiments. We trained the networks with the detectron2 \citep{wu2019detectron2} object detection framework using 4 GPUs on an NVIDIA Tesla P100-SXM2-16GB system. We use fixed seed values for generating the data splits to make the results more reproducible. 
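The mutual-learning schedule of Eqs.~(3) and (4) in miniature: small arrays stand in for network weights, the Student takes schematic gradient-descent steps, and the Teacher trails it smoothly through EMA. The quadratic loss surface, learning rate and step count are illustrative, and the EMA rate is deliberately reduced from the 0.9996 used above so the effect is visible in a few thousand toy steps.

```python
import numpy as np

def ema_update(theta_t, theta_s, alpha=0.9996):
    """Teacher refinement, Eq. (4): theta_T <- alpha*theta_T + (1-alpha)*theta_S."""
    return alpha * theta_t + (1.0 - alpha) * theta_s

# Toy joint-training loop: the Student descends toward some target weights
# while the Teacher is updated only through the slow EMA of the Student.
target = np.array([1.0, -2.0, 0.5])  # hypothetical optimum
theta_s = np.zeros(3)                # Student weights after initialization stage
theta_t = theta_s.copy()             # Teacher starts as a copy of the Student
lr = 0.05
for _ in range(2000):
    grad = theta_s - target          # stand-in for the gradient in Eq. (3)
    theta_s = theta_s - lr * grad    # schematic gradient-descent step
    theta_t = ema_update(theta_t, theta_s, alpha=0.999)  # slow Teacher update
print(theta_s, theta_t)
```

Because the Teacher averages over thousands of Student states, a noisy or biased individual update barely moves it, which is exactly why the pseudo labels it produces remain stable.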
\subsection{Results} \subsubsection{Quantitative Results} We evaluate our model with different labeled and unlabeled data protocols and present the results on a 10\% held-out set in Table \ref{Experimental results}. The table also includes results for the supervised baseline, Unbiased Teacher \citep{unbiased} with both cross-entropy and focal losses, and SoftTeacher \citep{softteacher}. Table \ref{perclassAP} shows per-class \(AP_{50:95}\) results in the 1\% labeled data setting. Furthermore, we also conduct a paired t-test between the \(AP_{50}\) obtained by our proposed model and the \(AP_{50}\) obtained by the other SOTA methods. The resulting box-plots for the 1\%, 2\%, 5\% and 10\% labeled data settings are given in Fig.~\ref{fig:box-plot} and the \textit{p}-values are shown in Table \ref{Experimental results}. \begin{table}[t!] \tbl{Experimental results with ResNet50-FPN as backbone.} {\begin{tabular}{lcccccc} \toprule & \multicolumn{5}{c}{ \textbf{1\% Labeled data}}& \textit{p}-{values} \\ \cmidrule{2-6} Method& $mAP_{50}$ & $mAP_{50:95}$& $mAP_{75}$& $mAP_{m}$& $mAP_{l}$ \\ \midrule Supervised & 23.578 & 7.673 & 2.322 & 6.189 & 9.050 & $5.996e-17$ \\ Unbiased Teacher\textsuperscript{$*$} \citep{unbiased} & 34.374&14.145& 7.855 &10.687 & 15.880 & $5.626e-02$ \\ Unbiased Teacher\textsuperscript{$**$} \citep{unbiased} & 42.382 & 18.008 & 11.387 & 13.041 & 20.135 & $6.229e-03$ \\ SoftTeacher \citep{softteacher} & 38.421 & 13.556 & 6.623 & 16.756 & 13.045 & $5.526e-02$ \\ Ours & 50.632 & 20.094 & 12.713 & 15.219 & 21.774 & -- \\ \bottomrule & \multicolumn{5}{c}{ \textbf{2\% Labeled data}} \\ \cmidrule{2-6} Supervised & 47.140 & 18.609 & 9.480 & 24.033 & 18.586 & $2.558e-14$ \\ Unbiased Teacher\textsuperscript{$*$} \citep{unbiased} & 71.608 & 31.752 & 20.479 & 27.871 & 32.430 & $3.975e-04$ \\ Unbiased Teacher\textsuperscript{$**$} \citep{unbiased} & 72.416 & 31.490 & 21.446 & 26.767 & 32.666 & $2.010e-01$ \\ SoftTeacher \citep{softteacher} & 60.366 & 25.421 & 14.767 & 17.991 & 28.323 & 
$2.558e-8$ \\ Ours & 72.341 & 32.311 & 21.614 & 29.780 & 33.556 & -- \\ \bottomrule & \multicolumn{5}{c}{ \textbf{ 5\% Labeled data}} \\ \cmidrule{2-6} Supervised & 71.082 & 32.249 & 21.995 & 29.505 & 35.041 & $5.866e-05$ \\ Unbiased Teacher\textsuperscript{$*$} \citep{unbiased} & 84.721 & 42.269 & 32.826 & 35.697 & 44.204 & $1.298e-01$ \\ Unbiased Teacher\textsuperscript{$**$} \citep{unbiased} & 82.592 & 40.393 & 30.735 & 33.665 & 42.904 & $3.424e-04$ \\ SoftTeacher \citep{softteacher} & 83.211 & 38.857 & 26.643 & 30.567 & 40.718 & $4.566e-04$ \\ Ours & 84.427 & 42.392 & 33.376 & 31.156 & 44.610 & -- \\ \bottomrule & \multicolumn{5}{c}{ \textbf{10\% Labeled data}} \\ \cmidrule{2-6} Supervised & 80.193 & 38.640 & 30.625 & 29.845 & 40.958 & $7.108e-03$ \\ Unbiased Teacher\textsuperscript{$*$} \citep{unbiased} & 92.981 & 47.369 & 41.049 & 41.137 & 48.714 & $2.291e-01$ \\ Unbiased Teacher\textsuperscript{$**$} \citep{unbiased} & 90.353 & 45.972 & 45.103 & 39.247 & 47.787 & $1.00$ \\ SoftTeacher \citep{softteacher} & 89.362 & 42.717 & 41.522 & 38.312 & 43.849 & 1.00 \\ Ours & 90.250 & 46.886 & 46.234 & 42.635 & 48.644 & -- \\ \bottomrule \end{tabular}} \tabnote{\textsuperscript{*} with Focal Loss ; \textsuperscript{**} with Cross Entropy Loss.} \label{Experimental results} \end{table} \begin{table}[t!] 
\centering \begin{threeparttable}[b] \caption{The average precision (\(AP_{50:95}\)) per class on 1\% labeled data.}\label{perclassAP} \footnotesize \begin{tabular}{p{2cm}P{1.3cm}P{1.6cm}P{1.7cm}P{1.5cm}P{1.5cm}} \toprule Class & Supervised & UbTeacher\textsuperscript{*} & UbTeacher\textsuperscript{**}& SoftTeacher & Ours \\ \midrule Grasper & 12.434 & 23.457 & 23.046 & 7.0 & 20.203 \\ \midrule Bipolar & 13.450 & 19.472 & 33.384 & 22.2 & 29.499 \\ \midrule Hook & 11.349 & 38.529 & 44.614 & 29.8 & 43.924 \\ \midrule Scissors & 3.592 & 4.130 & 5.052 & 3.8 & 6.860 \\ \midrule Clipper & 4.273 & 5.045 & 4.800 & 0.0 & 4.970 \\ \midrule Irrigator & 4.022 & 3.393 & 8.468 & 27.9 & 10.331 \\ \midrule SpecimenBag & 4.592 & 4.986 & 6.692 & 4.1 & 24.873 \\ \bottomrule \end{tabular} \begin{tablenotes} \item[*] Unbiased Teacher with focal loss; \item[**] Unbiased Teacher with cross entropy loss; \end{tablenotes} \end{threeparttable} \end{table} \begin{figure}[t!] \captionsetup{justification=justified} \centering \includegraphics[width=\textwidth,height=3in, keepaspectratio]{Q-1.pdf} \includegraphics[width=\textwidth,height=3in, keepaspectratio]{Q-2.pdf} \caption{Qualitative Results: the first row shows images with ground truth. The second, third, fourth and fifth rows present results for the 10\%, 5\%, 2\% and 1\% settings, respectively. Green and red boxes indicate correct and wrong predictions.} \label{qualitative} \end{figure} \subsubsection{Qualitative Results} In this section, we report the qualitative performance of our model as shown in Fig.~\ref{qualitative}. The example surgical scenes are carefully chosen to contain several instances in one frame (second column from left), a partially visible instrument (third column from left), and irregular orientation (fourth column from left). Results for all data settings are presented to show how well the model performs in terms of detection and localisation. \begin{figure}[t!] 
\centering \includegraphics[width= \textwidth]{figure/Mansoor_CAI22_v3.pdf} \caption{Box-plots for the paired t-test on the 1\%, 2\%, 5\% and 10\% labeled data settings. Here, only AP$_{50}$ scores above 1 are shown, to represent the standard deviation in the scores.} \label{fig:box-plot} \end{figure} \subsection{Ablation Study}\label{ablation study} Several ablation studies were conducted to validate the effectiveness of different parameters. We evaluated the effect of initialization, confidence threshold (\(\tau\)), EMA rate and normalization parameter ($s$) on model performance. We trained the model with and without the initialization stage and concluded that this stage improves the overall performance by a substantial margin (\textbf{Supplementary material section 7.1}). We also evaluated the model for different values of \(\tau\), where \(\tau\)=0.7 gives the best performance (\textbf{Supplementary material section 7.2}). We also performed multiple experiments to evaluate the impact of the EMA rate of the Teacher update on model performance, for which an EMA rate of 0.9996 gave the optimum performance (\textbf{Supplementary material section 7.3}). Here, we present ablations for different loss functions and for our proposed loss with different values of the normalization parameter $s$. It can be observed that our proposed loss with $s=5$ provided the best performance, with the highest mAP over all IoU thresholds (see Table~\ref{ab:Grid Search}). \begin{table}[t!] 
\tbl{Normalization parameter $s$ grid search} {\begin{tabular}{lccccc} \toprule Loss & \(mAP_{50:95}\) & \(mAP_{50}\) & \(mAP_{75}\) & \(mAP_{m}\) & \(mAP_{l}\) \\ \midrule Focal & 14.145 & 34.374 & 7.855 & 10.687 & 15.880 \\ \midrule Cross Entropy & 18.008 & 42.382 & 11.387 & 13.041 & 20.135 \\ \midrule Proposed loss (s=3) & 16.260 & 41.438 & 8.260 & 10.801 & 18.848 \\ \midrule Proposed loss (s=4) & 18.475 & 44.534 & 9.252 & 13.246 & 20.483 \\ \midrule Proposed loss (s=5) & \textbf{20.094} & \textbf{50.632} & \textbf{12.713} & \textbf{15.216} & \textbf{21.774} \\ \midrule Proposed loss (s=6) & 18.993 & 47.597 & 11.156 & 10.380 & 22.006\\ \bottomrule \end{tabular}} \label{ab:Grid Search} \end{table} \section{Discussion and conclusion} \label{conclusion} We demonstrate that our proposed approach performs favourably against the SOTA semi-supervised models \citep{unbiased} and \citep{softteacher}. In the 1\% setting, our proposed model outperforms Unbiased Teacher with focal loss by a large margin and with cross entropy loss by 8 points on every evaluation metric, while also outperforming the SoftTeacher \citep{softteacher} model. It is worth noting that our approach achieves 50.632\% \(mAP_{50}\) on 1\% labeled data, which is even higher than the supervised baseline trained on 2\% labeled data, and this trend can be witnessed in all settings. This improvement can be attributed to several crucial factors, such as the gradual improvement in pseudo label quality through EMA training, in contrast to previous approaches in which the Teacher model is frozen after training on labeled data. Another factor is the introduction of a loss function which effectively increases the foreground-background distance and helps in improving detection performance. Furthermore, the proposed framework consistently performs much better on \(mAP_{75}\) in all settings, which indicates improved localisation performance. 
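The EMA training mentioned above can be made concrete with a minimal sketch, using the rate $\alpha$ = 0.9996 from the implementation details; parameters are shown here as plain lists of floats for illustration, whereas in the detector they are network weight tensors.

```python
def ema_update(teacher, student, alpha=0.9996):
    """One exponential-moving-average step: the Teacher drifts slowly
    toward the Student, so pseudo-label quality improves gradually
    instead of freezing the Teacher after supervised training."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]
```

With $\alpha$ this close to 1, each step moves the Teacher by only 0.04\% of the Teacher-Student gap, which is what keeps the pseudo labels stable from iteration to iteration.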
On the 2\% labeled data setting, our model obtained 72.341\% \textit{mAP} at the 50\% IoU threshold, while Unbiased Teacher with focal and cross entropy losses achieved 71.608\% and 72.416\%, the latter only slightly higher than our model. However, if we compare the performance of our model on \textit{mAP} at the 50:95 and 75 IoU thresholds, we observe that our model consistently gives superior performance. Moreover, in the 5\% and 10\% settings, Unbiased Teacher \citep{unbiased} with focal loss achieves slightly higher \(mAP_{50}\), but the proposed method gives superior performance in terms of \(mAP_{50:95}\) and \(mAP_{75}\). This validates the effectiveness of our method for both classification and localization. We also present the per class average precision (\(AP_{50:95}\)) in Table~\ref{perclassAP}. Here we observe significant improvement in average precision for all instruments, especially hard-to-detect classes like Specimen Bag (20 points), Irrigator (6 points) and Bipolar (16 points), against the supervised baseline. The qualitative results also indicate strong performance of our approach, as most of the tools (even when four tools appear in one frame) are detected and localised correctly. The localisation accuracy increases as we add more labeled data, as is evident from Fig.~\ref{qualitative} from bottom to top; the detection performance, however, remains largely unchanged. There are some missed detections in the 1\% labeled data setting (see row 5, column 3) and incorrect class label predictions (see row 5, column 4). The missed detections occurred mostly in the 1\% labeled data setting, where the model did not see enough annotated examples. The incorrect class prediction in the bottom right may be due to less discriminative features between the two instruments. Similarly, the missed detection in the second last image of the bottom row may be because the tool was only partly visible. 
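The paired t-test used above to compare AP$_{50}$ scores between methods reduces to a simple statistic on the per-run score differences; the reported \textit{p}-values then follow from the t distribution with $n-1$ degrees of freedom (e.g. via \texttt{scipy.stats.ttest\_rel}). The sample values below are illustrative only, not scores from the paper.

```python
import math

def paired_t_statistic(a, b):
    """t statistic for paired samples: the mean of the pairwise
    differences divided by its standard error, with n - 1 degrees
    of freedom."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

A large positive statistic indicates the first method consistently outscores the second across paired runs, which is the basis for the significance claims in the box-plots.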
\\ The paired t-test \textit{p}-values computed between the proposed method and the SOTA methods are given in Table \ref{Experimental results}. We also show box-plots with medians, deviations and significance between the SOTA and proposed methods (see Fig.~\ref{fig:box-plot}). We can observe that our proposed approach performs well across the different data settings. From Fig.~\ref{fig:box-plot}, it can be observed that in the 1\% setting our method is significantly different from the other SOTA methods, with the highest median AP$_{50}$ value. Similarly, in the 2\% setting our model and the Unbiased Teacher model with cross entropy loss (UbTeacher\_ce) performed equally well (\textit{p}-value = 0.20), but our model still has the highest median value among the methods. A similar pattern can be observed for the 5\% data, where the Unbiased Teacher model with focal loss (UbTeacher\_focal) has \textit{p}-value = 0.13 (computed at AP$_{50}$), while on mAP$_{75}$ our method is still better. The reason behind the competitive scores in these cases is that the reported APs are computed only at the 50\% IoU threshold, while it is evident from Table \ref{Experimental results} that our method has distinguishable improvements for the mAPs at higher IoU thresholds. However, in the 10\% labeled data setting, the \textit{p}-values indicate no significant difference from the other Unbiased Teacher and SoftTeacher models. This is because 10\% is, in this case, enough data for supervision during training. In this paper, we addressed the lack of annotated data in the surgical domain for the first time by proposing a knowledge distillation framework. We tackle a multi-label, multi-class detection problem by implementing end-to-end Teacher-Student learning with a multi-class foreground-background distance loss. We used strong and weak augmentation strategies to improve model robustness, and class-wise NMS and EMA to improve pseudo label quality. 
Our experiments on the m2cai16-tool dataset show the effectiveness of our model in terms of mAP under various supervision protocols against SOTA semi-supervised models. We also conducted extensive ablation experiments to demonstrate the validity of our proposed framework. \\ \vspace{-0.3cm} \section*{Acknowledgments} \vspace{-0.1cm} The authors wish to thank the AI Hub and the CIIOT at ITESM for their support in carrying out the experiments reported in this paper on their NVIDIA DGX computer. \bibliographystyle{tfcse}
\section{Introduction} \begin{figure}[h] \begin{center} \includegraphics[scale=0.3,angle=270]{serminatopasaY89fig1a.ps} \includegraphics[scale=0.3,angle=270]{serminatopasaBa138fig1b.ps} \includegraphics[scale=0.3,angle=270]{serminatopasaPb208fig1c.ps} \caption{Top panel: theoretical predictions of $^{89}$Y production factors versus metallicity, using AGB models with initial mass $M$ = 1.5 $M_\odot$. Middle and bottom panels: analogous plots for $^{138}$Ba and $^{208}$Pb.} \label{Ba138m1p5} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.3,angle=0]{serminatopasaSFRfig2.eps} \caption{Star formation rate versus metallicity.}\label{sfr} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.3,angle=0]{serminatopasabafefig4a.eps} \includegraphics[scale=0.3,angle=0]{serminatopasalafefig4b.eps} \caption{Top panel: evolution of the s-process fraction of [Ba/Fe] as a function of [Fe/H] in the halo, thick disc and thin disc, shown as dashed lines. Solid lines show the total s+r theoretical expectations for Ba. Spectroscopic observations of [Ba/Fe] versus [Fe/H] in Galactic disc and halo stars are taken from the literature (\cite{Travaglio99}, implemented with more recent observations as detailed in the text). Error bars are shown only when reported for single objects by the authors. The dotted line connects a star observed by different authors. Bottom panel: analogous plot for [La/Fe]. }\label{bafeagbprova3} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.3,angle=0]{serminatopasaeufefig5a.eps} \includegraphics[scale=0.3,angle=0]{serminatopasapbfefig5b.eps} \caption{Top panel: Galactic chemical evolution of [Eu/Fe] versus [Fe/H] compared with spectroscopic observations. Bottom panel: analogous plot for [Pb/Fe] versus [Fe/H]. 
}\label{eufetotprova3} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.3,angle=0]{serminatopasasryzrfefig3.eps} \caption{Galactic chemical evolution of [Sr/Fe] versus [Fe/H] (upper panel), [Y/Fe] versus [Fe/H] (middle panel), and [Zr/Fe] versus [Fe/H] (lower panel) compared with spectroscopic observations. }\label{sryzrfeagbprova3} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[scale=0.3,angle=0]{serminatopasabaeufig6a.eps} \includegraphics[scale=0.3,angle=0]{serminatopasalaeufig6b.eps} \caption{Top panel: Galactic chemical evolution of [Ba/Eu] versus [Fe/H] including both s- and r-process contributions in the thin disc (long-dashed line), thick disc (dotted line) and halo (solid line). Error bars are shown only when reported by the authors. Bottom panel: analogous plot for [La/Eu] versus [Fe/H]. }\label{baeufetotprova3} \end{center} \end{figure} According to the classical analysis of the s-process, the abundance distribution in the solar system was early recognized as the combination of three components (K\"appeler et al. 1982, Clayton and Rassbach 1977): the main component, accounting for s-process isotopes in the range from A $\sim$ 90 to A $<$ 208; the weak component, accounting for s-process isotopes up to A $\sim$ 90; and the strong component, introduced to reproduce about 50\% of the doubly magic $^{208}$Pb. The main component itself cannot be interpreted as the result of a single neutron exposure, but requires a multi-component description, such as an exponential distribution of neutron exposures. It is thus clear that the s-process does not originate in a unique astrophysical environment. In this paper we study the Galactic chemical evolution of the s-process as the outcome of the nucleosynthesis occurring in low to intermediate mass asymptotic giant branch (AGB) stars of various metallicities. These calculations have been performed with an updated network of neutron capture cross sections and $\beta$ decay rates. 
The paper is organized as follows: in \S~2 we briefly introduce the stellar evolutionary model FRANEC and the post-process network we use to compute the nucleosynthesis in AGB stars. In \S~3 we introduce the adopted Galactic chemical evolution (GCE) model. In \S~4 we present the s-element contributions at the epoch of the solar system formation, obtained by introducing in the GCE code the AGB s-process yields only, computed at various metallicities. The corresponding r-process contributions to the solar abundances are then deduced with the r-process residual method. We then recalculate with the GCE model the global s+r contribution to the Galactic chemical evolution of heavy elements as a function of [Fe/H]. Our predictions are compared with spectroscopic data for Sr, Y and Zr, characterising the first s-peak (light-s, ls), for Ba and La, characterising the second s-peak (heavy-s, hs), and for Pb at the third s-process peak, together with Eu, an element of mostly r-process origin. Finally, in \S~5 we summarise the main conclusions and point out a few aspects deserving further analysis. \section{FRANEC and s yields} The FRANEC code (Frascati Raphson Newton Evolutionary Code, Chieffi \& Straniero 1999) self-consistently reproduces the third dredge up episodes in AGB stars and the consequent recurrent mixing of freshly synthesised s-processed material (together with $^{4}$He and $^{12}$C) with the surface of the star. Nucleosynthesis in AGB stars of different masses and metallicities is followed with a post-process code, which uses the pulse by pulse results of the FRANEC code: the mass of the He intershell, the mass involved in the third dredge up (TDU), the envelope mass that is progressively lost by intense stellar winds, and the temporal behaviour of the temperature and density in the various layers of the zones where nucleosynthesis takes place. For numerical details on the key parameters affecting the s-process nucleosynthesis in AGB stars of low mass we refer to Straniero et al. (2003). 
The network contains more than 400 isotopes and is sufficiently extended to take into account all possible branchings that play a role in the nucleosynthesis process. The neutron capture network is updated with the recommended (n,$\gamma$) rates by \cite{Bao00}, complemented by a series of more recent experimental results (for more details see Bisterzo et al. 2006). Stellar $\beta$-decays are treated following Takahashi and Yokoi (1987). The production of s-process elements in AGB stars proceeds from the combined operation of two neutron sources: the dominant reaction $^{13}$C($\alpha$,n)$^{16}$O, which releases neutrons in radiative conditions during the interpulse phase, and the reaction $^{22}$Ne($\alpha$,n)$^{25}$Mg, marginally activated during thermal instabilities. In the model, the formation of the dominant neutron source is not derived from first principles (Gallino et al. 1998): during TDU, a small amount of hydrogen from the envelope may penetrate into the $^{12}$C-rich and $^{4}$He-rich inner zone (the He-intershell). Then, at H-shell reignition, a thin $^{13}$C pocket may form in the top layers of the He-intershell by proton capture on the abundant $^{12}$C. We therefore artificially introduce a $^{13}$C pocket, which is treated as a free parameter. The total mass of the $^{13}$C pocket is kept constant with pulse number, and the concentration of $^{13}$C in the pocket is varied over a large range, from 0.005$-$0.08 up to 2 times the profile indicated as ST by \cite{Gallino98}, which corresponds to a $^{13}$C mass of 3.1 $\times$ 10$^{-6}$ $M_\odot$. Too high a proton concentration would instead favour the production of $^{14}$N by proton capture on $^{13}$C. Note that the minimum $^{13}$C-pocket efficiency decreases with metallicity, since the neutron exposure depends on the ratio of the neutrons released to the Fe seeds. This choice was shown to better reproduce the main component with AGB models of half-solar metallicity (Arlandini et al. 
1999), and is a first approach to the understanding of the solar system s-process abundances. In reality, the solar system composition is the outcome of all previous generations of AGB stars having polluted the interstellar medium up to the moment of condensation of the solar system. A spread of $^{13}$C-pocket efficiencies has been shown to reproduce observations of s-enhanced stars at different metallicities (see, e.g., Busso et al. 1999, 2001; Sneden et al. 2008). In AGB stars of intermediate mass the s-process is less efficient. As for the choice of the $^{13}$C neutron source, because of the much shorter interpulse phases in these stars ($\sim$6500 yr for 5 $M_\odot$ and $\sim$1500 yr for 7 $M_\odot$) with respect to LMS-AGBs ($\sim$3$-$6 $\times$ 10$^{4}$ yr), the He intershell mass involved is smaller by one order of magnitude. Consequently, the TDU of s-process-rich material from the He-intershell into the surface is also reduced, again by roughly one order of magnitude. For these reasons, for the 5 $M_\odot$ and 7 $M_\odot$ cases, as in Travaglio et al. (1999, 2004), we have considered as a standard choice for IMS-AGBs (ST-IMS) a $^{13}$C mass scaled accordingly [$M$($^{13}$C)$_{\rm ST-IMS}$ = 10$^{-7}$ $M_\odot$]. On the other hand, in IMS stars the $^{22}$Ne($\alpha$,n)$^{25}$Mg reaction is activated more efficiently (Iben 1975; Truran \& Iben 1977), since the temperature at the base of the convective pulse reaches values of $T$ = 3.5 $\times$ 10$^{8}$ K. Also, the peak neutron density during the TP phase is considerably higher than in low-mass AGBs ($N_n \sim$ 10$^{11}$ n cm$^{-3}$; see Vaglio et al. 1999; Straniero et al. 2001), overfeeding a few neutron-rich isotopes involved in important branchings along the s-process path, such as $^{86}$Kr, $^{87}$Rb and $^{96}$Zr. 
We computed models for low mass stars (LMS; 1.5 and 3 $M_\odot$) and intermediate mass stars (IMS; 5 and 7 $M_\odot$), for a set of 27 metallicities from [Fe/H] = 0.30 down to [Fe/H] = $-$3.60. \subsection{s-yields} In Fig. \ref{Ba138m1p5} we show the theoretical predictions versus [Fe/H], for AGB stars of initial mass $M$ = 1.5 $M_\odot$, of the production factors in the astrated s-process ejecta of $^{89}$Y, $^{138}$Ba and $^{208}$Pb, taken as representative of the three s-process peaks. Each line corresponds to a given $^{13}$C-pocket efficiency. The production factors are given in terms of the isotope abundance divided by the initial abundance, solar-scaled with metallicity. For low neutron/seed ratios, the neutron fluence mainly feeds the ls nuclei (like $^{89}$Y), whereas for higher exposures the hs peak (like $^{138}$Ba) is favoured. Increasing the neutron exposure further, the neutron flow tends to overcome the first two s-peaks, directly feeding $^{208}$Pb at the termination point of the s-process path. The s-process dependence on metallicity is therefore very complex. \section{Galactic chemical evolution model} The model for the chemical evolution of the Galaxy was described in detail by \cite{Ferrini92} and updated by \cite{Travaglio99,Travaglio01,Travaglio04}. The Galaxy is divided into three zones, halo, thick disc and thin disc, whose composition of stars, gas (atomic and molecular) and stellar remnants is computed as a function of time up to the present epoch $t_{\rm Gal}$ = 13 Gyr. Stars are born with an initial composition equal to the composition of the gas from which they formed. The formation of the Sun takes place 4.5 Gyr ago, at epoch $t_\odot$ = 8.5 Gyr. The matter in the Galactic system has different phases of aggregation, interacting and interchanging one into the other. 
The evolution of the system (the time dependence of the total mass fraction in each phase and of the chemical abundances in the ISM and in stars) is therefore determined by the interaction between these phases. This means that the star formation rate (SFR) $\psi(t)$ (see Fig. \ref{sfr}) is not assumed \textit{a priori}, but is obtained as the result of a self-regulating process occurring in the molecular gas phase, either spontaneous or stimulated by the presence of other stars. The thin disc is divided into concentric annuli, without any radial flow, and is formed from material infalling from the thick disc and the halo. In the present work, as in the previous works of Travaglio et al., we neglect any dependence on Galactocentric radius in the model results as well as in the observational data, and we concentrate on the evolution inside the solar annulus, located at 8.5 kpc from the Galactic center. However, we must point out that the Galactic chemical evolution model by \cite{Ferrini92} that we use is now believed to be incorrect. The main problem is that the thick disc cannot form from gas from the halo, as demonstrated by \cite{WG1992}. These authors showed that the distribution of angular momentum of halo stars differs markedly from that of the thick and thin discs. \cite{Pardi1995} also demonstrated that the scenario we assume cannot reproduce at the same time the stellar metallicity distributions of the halo, thick disc, and thin disc. See also \cite{Pagelbook} and \cite{Matteuccibook}. To overcome the problems of the model by \cite{Ferrini92}, \cite{Cescutti2006} studied the chemical evolution of the heavy elements using the two-infall model proposed by \cite{Chiappini1997}. This model too, although widely adopted, presents some shortcomings; for example, it does not allow one to distinguish the thick disc from the thin disc. 
In the present paper we focus on analysing the changes produced by the updated reaction rates on the chemical evolution of the elements heavier than iron, rather than the changes produced by an updated model of the evolution of the Galaxy. Thus, we use the same model as \cite{Travaglio99,Travaglio01,Travaglio04}, in which we introduce a new and extended grid of AGB yields. \section{Results for the Galactic chemical evolution of s- and r- elements} In this section we present the results for the evolution of Sr, Y, Zr, La, Ba, Eu and Pb in the Galaxy, considering separately the s- and r-contributions. Then we compute the Galactic abundances of these elements resulting from the sum of the two processes, comparing model results with the available spectroscopic observations of field stars at different metallicities. \subsection{Galactic chemical evolution of s-elements} The s-contribution to each isotope at the epoch of the formation of the solar system is determined by following with the GCE model the heavy elements contributed by AGB stars only. Then, using the r-process residual method (r = 1 $-$ s), we determined for each isotope the solar system r-process fraction. As a second step, we recalculate the GCE contribution of the heavy elements accounting for both the s- and the r-process, assuming that the production of r-nuclei is a primary process occurring in Type II supernovae, independent of the metallicity. Galactic chemical s-process expectations depend on several uncertainties, among which are the knowledge of the solar abundances, the neutron capture network, and the choice of the specific stellar evolutionary code. To these one may add the uncertainties connected with the treatment of the Galactic chemical evolution model. 
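The abundance ratios compared with observations throughout this section use the standard bracket notation, $[{\rm X/Fe}] = \log_{10}(N_{\rm X}/N_{\rm Fe})_\star - \log_{10}(N_{\rm X}/N_{\rm Fe})_\odot$. A minimal sketch of this conversion (the input number abundances below are illustrative, not values from the models):

```python
import math

def bracket_ratio(n_x, n_fe, n_x_sun, n_fe_sun):
    """[X/Fe] in dex: 0 for a solar X/Fe ratio, +1 for ten times solar,
    -1 for one tenth of solar."""
    return math.log10(n_x / n_fe) - math.log10(n_x_sun / n_fe_sun)
```

This is why, for instance, an r-process enhancement of 40 times solar corresponds to [Eu/Fe] $\sim$ +1.6 dex in the notation used for the halo stars discussed below.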
Among the most important uncertainties is the evaluation of the global ejecta from the AGB winds of stars of different masses and metallicities, which in turn depend on the mass mixed with the envelope by the various third dredge up episodes, and on the s-process yields averaged over the assumed $^{13}$C-pocket efficiencies. Taken alone, this would provide a very poorly constrained expectation. However, a strong constraint is given by the heavy s-only isotopes, whose solar abundance derives entirely from the s-process in AGB stars. Among the s-only isotopes, the unbranched $^{150}$Sm, whose neutron capture cross section at astrophysical temperatures and solar abundance are very well known, with a total uncertainty of less than 3\% (Arlandini et al. 1999), may be chosen as normalisation. One may then deduce the relative s-process isotope percentage for all heavy elements. For LMS we averaged the s-process yields over 13 $^{13}$C-pocket efficiencies, excluding the case ST $\times$ 2. For IMS, the effect of the $^{13}$C neutron source is negligible with respect to the one induced by the $^{22}$Ne neutron source. In Table \ref{comparisonlmsnorm} we show the LMS and IMS AGB percentage contributions to the solar abundances at $t$ = $t_\odot$ obtained by the present calculations, compared with the results of \cite{Travaglio99}. In Table 1 a choice of selected isotopes is made, among which the s-only isotopes $^{124}$Te, $^{136}$Ba, $^{150}$Sm and $^{204}$Pb, together with $^{89}$Y, $^{138}$Ba, and $^{208}$Pb, which are of major s-process contribution. In turn, $^{151}$Eu is chosen as representative of the r-process, as clearly indicated by the s-process contributing only 6\% of solar $^{151}$Eu. We compare our results with spectroscopic abundances of [Sr,Y,Zr/Fe], [Ba,La/Fe], and [Pb/Fe], which are typical of the s-process peaks, as well as [Eu/Fe], a typical r-process element ratio. Let us first consider [Ba/Fe] and [La/Fe] versus [Fe/H]. Figs. 
\ref{bafeagbprova3} show in the top panel [Ba/Fe] versus [Fe/H], with spectroscopic observations and theoretical s-process curves, and in the bottom panel the analogous plot of [La/Fe] versus [Fe/H]. In this figure and the following ones we compare with the set of stellar observations used by \cite{Travaglio99,Travaglio01,Travaglio04}, implemented with more recent observations of elemental abundances in field stars, as listed below with their associated symbols in the figures: \cite{Mashonkina06} blue asterisks; \cite{Ivans06} cyan full hexagons; \cite{AokiB08} red open squares; \cite{AokiH08} blue asterisks; \cite{Lai08} green full hexagons; \cite{Cohen07} yellow full hexagons; \cite{Norris07} blue full triangles; \cite{Frebel07} full blue squares; \cite{Mashonkina08} red asterisks; \cite{Roederer08} full red hexagons; \cite{AokiH05} red crosses; \cite{Francois07} cyan asterisks; \cite{Cohen08} red open circles; \cite{AokiF06} red open triangles pointing to the right; \cite{Yushch05} blue full triangles; \cite{Vaneck} black open triangles; \cite{Cowan02} green crosses. The dashed lines show the theoretical GCE expectations using only the AGB s-process products for the halo, thick and thin disc separately. Although the s-contributions to solar Ba and La are 78.2\% and 66.3\%, respectively, it is clear that the s-process alone does not explain all spectroscopic observations. In Fig. \ref{eufetotprova3} analogous plots are shown for [Eu/Fe] (top panel) and [Pb/Fe] (bottom panel) versus [Fe/H]. While the s-process contribution to Eu is negligible (5.6\% of solar Eu), the s-contribution to solar Pb is 83.9\%. Compared with the previous plots, spectroscopic [Pb/Fe] observations are scarce because of the difficulty of extracting Pb abundances from unevolved stars. As explained before, the classical analysis of the main component cannot explain the $^{208}$Pb abundance. 
The GCE calculations provide 83.9\% of solar Pb, and 91.1\% of $^{208}$Pb, thanks to the contribution of different generations of AGB stars. In particular, low-metallicity AGB stars are the main contributors to $^{208}$Pb. Finally, in Fig. \ref{sryzrfeagbprova3} we present the analogous plots for [Sr/Fe] versus [Fe/H] (top panel), [Y/Fe] versus [Fe/H] (middle panel), and [Zr/Fe] versus [Fe/H] (lower panel). The GCE calculations provide 64.1\% of solar Sr, 66.5\% of solar Y, and 60.3\% of solar Zr. Note that the classical analysis of the main component would provide 85\%, 92\%, and 83\%, respectively (Arlandini et al. 1999), making clear also in this case that the classical analysis is only a rough approximation. \begin{table}[h] \begin{center} \caption{Galactic LMS-AGB (1.5 to 3 $M_\odot$) and IMS-AGB (5 to 8 $M_\odot$) contributions, at $t$ = $t_\odot$ = 8.5 Gyr, expressed as percentages of the solar abundances.}\label{comparisonlmsnorm} \begin{tabular}{lcc} \\ \hline \hline isotope & Travaglio 99 & our work \\ \hline & LMS-AGB (\% to solar)& \\ \hline $^{89}$Y & 61.5 & 62.7 \\ $^{124}$Te & 72.0 & 70.0 \\ $^{136}$Ba & 92.1 & 85.1 \\ $^{138}$Ba & 84.0 & 82.3 \\ $^{139}$La & 61.4 & 65.5 \\ $^{150}$Sm & 98.1 & 99.1 \\ $^{151}$Eu & 6.4 & 5.7 \\ $^{204}$Pb & 93.8 & 85.1 \\ $^{208}$Pb & 93.6 & 90.7 \\ \\ \hline \hline & IMS-AGB (\% to solar)& \\ \hline $^{89}$Y & 7.5 & 3.8 \\ $^{124}$Te & 4.7 & 2.2 \\ $^{136}$Ba & 4.1 & 2.2 \\ $^{138}$Ba & 2.5 & 1.2 \\ $^{139}$La & 1.7 & 0.8 \\ $^{150}$Sm & 2.8 & 0.9 \\ $^{151}$Eu & 0.2 & 0.06 \\ $^{204}$Pb & 2.5 & 0.9 \\ $^{208}$Pb & 1.2 & 0.4 \\ \hline \end{tabular} \end{center} \end{table} \subsection{The r-process yields and Galactic chemical evolution} From the theoretical point of view, the r-process origin is still a matter of debate. The analytical approach followed here to derive the r-process yields was first presented by \cite{Travaglio99}. 
The enrichment of r-process elements in the interstellar medium (ISM) during the evolution of the Galaxy is quantitatively constrained on the basis of the results for the s-process contribution at $t$ = $t_\odot$. The so-called r-process residual for each isotope is obtained by subtracting the corresponding s-process contribution $N_{s}/N_{\odot}$ from the fractional abundances in the solar system taken from \cite{AndersGrevesse89}: \begin{equation} N_{r}/N_{\odot}= (N_{\odot}-N_{s})/N_{\odot} \end{equation} In the case of Ba, \cite{Travaglio99} obtain an r-residual of 21$\%$. The assumption that the r-process is of primary nature and originates from massive stars allows us to estimate the contribution of this process during the evolution of the Galaxy. In the case of Ba, for example, \begin{equation} \left(\frac{\rm Ba}{\rm O}\right)_{r,\odot}\sim 0.21\left(\frac{\rm Ba}{\rm O}\right)_{\odot}. \end{equation} Since the s-process does not contribute at low metallicity, for Population II stars \begin{equation} \left(\frac{\rm Ba}{\rm O}\right) \sim \left(\frac{\rm Ba}{\rm O}\right)_{r,\odot}, \end{equation} assuming a typical [O/Fe] $\sim$ 0.6 dex for Population II stars. Thus, the r-process contribution for [Fe/H] $\leq$ $-$1.5 dominates over the s-contribution and roughly reproduces the observed values. The procedure followed to extrapolate the r-process yields is independent of the chemical evolution model adopted and has been described above. The solution shown in the plots adopts SNe II in the mass range 8 $\leq$ $M$/$M_\odot$ $\leq$ 10 as primary producers of r-nuclei. The [element/Fe] ratios provide information about the enrichment relative to Fe in the three Galactic zones, making clear that a delay in the r-process production with respect to Fe is needed in order to match the spectroscopic data at [Fe/H] $\leq$ $-$2. 
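The r-process residual method of Eqs. (1)-(3) amounts to simple arithmetic on solar fractions; a minimal sketch, using the $\sim$21\% Ba r-residual quoted in the text (the solar ratio passed in the usage note is illustrative only):

```python
def r_residual(s_fraction):
    """Eq. (1): N_r/N_sun = 1 - N_s/N_sun, the solar fraction not
    accounted for by the s-process."""
    return 1.0 - s_fraction

def pop_ii_ratio(solar_ratio, r_fraction):
    """Eqs. (2)-(3): at low metallicity only the r-process part of the
    solar ratio is present, e.g. (Ba/O) ~ 0.21 (Ba/O)_sun."""
    return r_fraction * solar_ratio
```

For Ba, an s-fraction of 79\% gives `r_residual(0.79)` = 0.21, and `pop_ii_ratio` then scales any assumed solar (Ba/O) down to its pure-r value expected in Population II stars.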
The observations show that [Ba/Fe] begins to decline in metal-poor stars, and this trend can be naturally explained by the finite lifetimes of stars at the lower end of the adopted mass range: massive stars in the early times of evolution of the Galaxy evolve quickly, ending as SNII producing O and Fe. Later, less massive stars explode as SNII, producing r-process elements and causing the sudden increase in [element/Fe]. At [Fe/H] $\sim$ $-$1 halo stars, thick disc stars and thin disc stars are mixed up. The large scatter observed in [Ba/Fe], [La/Fe] and in [Eu/Fe] in halo stars can be ascribed to an incomplete mixing in the Galactic halo. This allows the formation of very metal-poor stars strongly enriched in r-process elements, like CS 22892-052 (Sneden 2000a). This star, with [Fe/H] $\sim$ $-$3.1, shows r-process enhancements of 40 times the solar value ([Eu/Fe] $\sim$ +1.7), and [Ba/Fe] $\sim$ +0.9. Nevertheless, its [Ba/Eu] is in agreement with the typical r-process ratios. \subsection{The s+r process evolution} The global results for the Galactic chemical evolution of heavy elements from iron to lead based on the assumptions discussed before, namely that the s-process contribution of these elements derives from low mass AGB stars and the r-process contribution originates from SNII in the range 8 $\leq$ $M$/$M_\odot$ $\leq$ 10, are shown as solid lines in Figs. 3, 4, 5. Fig. \ref{baeufetotprova3} shows [Ba/Eu] versus [Fe/H] (top panel) and [La/Eu] versus [Fe/H] (bottom panel) for spectroscopic observations and theoretical curves computed by adding the s and r process contributions. Since Eu is mostly produced by r-process nucleosynthesis (94$\%$ at $t$ = $t_\odot$), the [element/Eu] abundance ratios (bottom panel) provide a direct way to judge the relative importance of the s and r channels during the evolution of the Galaxy.
At low metallicity the r-process contribution is dominant, and the [element/Eu] ratio is given by the elemental r-fraction computed with the r-residuals described before. On the other hand, for [Fe/H] $\geq$ $-$1.5, the s-process contribution takes over, and the [element/Fe] ratios rapidly increase, approaching the solar values. For elements from Ba to Pb, the estimated r-process contribution at $t$ = $t_\odot$ has been derived by subtracting the s-fraction from solar abundances (r-process residual method). Instead, for elements lighter than Ba a more complex treatment is needed. In particular for Sr, Y and Zr, besides the s-process component, one has to consider three other components: the weak-s component (which decreases linearly with metallicity), the r-component and the LEPP-component, which are both independent of metallicity (Travaglio et al. 2004). As reported above, the GCE contributions by AGB stars are 64.1\% to solar Sr, 66.5\% to solar Y, and 60.3\% to solar Zr. The weak s-process is estimated to contribute 9\% to solar Sr, 10\% to solar Y, and 0\% to solar Zr. This leaves for the LEPP component a contribution of 17.9\% to solar Sr, 18.5\% to solar Y, and 28.7\% to solar Zr, very close to the expectations of Travaglio et al. (2004). The residual r-process contributions would then be 9\% of Sr, 5\% of Y and 11\% of Zr. Summing up all contributions, the solid lines shown in Fig. 5 give a good explanation of the spectroscopic data, both in the halo and in the Galactic disc. A more refined analysis is difficult to carry out and is still a matter of debate. \section{Conclusions} We have studied the evolution of the heavy elements in the Galaxy, adopting a refined set of models for s-processing in AGB stars of different metallicities, and compared the results with observational constraints from unevolved field stars for Sr, Y, Zr, Ba, La, Eu and Pb.
In the first part stellar yields for s-process elements have been obtained with post-process calculations based on AGB models with different masses and metallicities, computed with FRANEC. In the second part we have adopted a Galactic chemical evolution model in which the Galaxy has been divided into three zones (halo, thick disc and thin disc), whose composition of stars, gas (atomic and molecular) and stellar remnants is computed as a function of time up to the present epoch. Introducing as a first step in the GCE model the AGB s-yields only, we have obtained the s-process enrichment of the Galaxy at the time of formation of the solar system. Major uncertainties connected with the AGB models, with the adopted average of the large spread of $^{13}$C-pocket efficiencies, as well as with the basic parameters introduced in the GCE model, are strongly alleviated once we normalise the s-process isotope abundances computed at the epoch of the solar formation to $^{150}$Sm, an unbranched s-only isotope with both a well-determined solar abundance and neutron capture cross section at astrophysical temperatures. Assuming that the production of r-nuclei is a primary process occurring in SNII of 8$-$10 solar masses, the r-process contribution to each nucleus has then been computed as the difference between its total solar abundance and its s-process abundance. Finally, we have compared our predictions with spectroscopic observations of the above listed elements along the life of the Galaxy. \section*{Acknowledgments} We acknowledge the anonymous referees for their very useful comments. We thank Maria Lugaro for a very careful reading of the manuscript. Work supported by the Italian MIUR-PRIN 2006 Project "Final Phases of Stellar Evolution, Nucleosynthesis in Supernovae, AGB Stars, Planetary Nebulae".\\
\section{The Hamiltonian and the expansion in bosons} We consider the Heisenberg Hamiltonian on the 2D triangular lattice (Eq. (3) of the main text), and expand it to sixth order in Holstein-Primakoff bosons around the ferromagnetic state, which holds at $h > h_{\rm sat}$. We then move to fields below the saturation value by introducing magnon condensates and using the technique of the dilute Bose-gas expansion. The Hamiltonian in terms of Holstein-Primakoff bosons has the form \begin{eqnarray} {\cal H}& = & {\cal H}^{\rm(2)} + {\cal H}^{\rm(4)} + {\cal H}^{\rm(6)},\nonumber\\ \label{eq:H^2} {\cal H}^{\rm(2)} &= &\sum_{\bf k} (\omega_{\bf k} - \mu) a_{\bf k}^\dag a_{\bf k} ,\\ \label{eq:H^4} {\cal H}^{\rm(4)} &= &\frac{1}{2N}\sum_{\bf k,k',q}V_{\bf q}({\bf k,k'}) a_{\bf k+q}^\dag a_{\bf k'-q}^\dag a_{\bf k'} a_{\bf k} ,\\ {\cal H}^{\rm(6)} &= &\frac{1}{16SN^{2}}\sum_{\bf k, k',k'',q,p}U_{\bf q,p}({\bf k,k',k''}) a_{\bf k+q+p}^\dag a_{\bf k'-q}^\dag a_{\bf k''-p}^\dag a_{\bf k''} a_{\bf k'} a_{\bf k} . \label{eq:H^6} \end{eqnarray} Here, $a,a^\dag$ are boson operators, $\omega_{\bf k}$ is the magnon dispersion, $\mu =h_{\rm sat} -h$ is the chemical potential, and $V_{\bf q}({\bf k,k'})$, $U_{\bf q,p}({\bf k,k',k''})$ are two- and three-body interaction potentials, which we list below separately for the isotropic and anisotropic models. Both $\omega_{\bf k}$ and $h_{\rm sat}$ are of order $S$, and we consider $\mu$ also of order $S$.
\subsection{Isotropic Heisenberg Model} \label{sec:A1} In the isotropic case \begin{eqnarray} \label{eq:omega} \omega_{\bf k} & = & S (J_{\bf k} - J_{\bf Q}),\\ \label{eq:V^4} V_{\bf q}({\bf k,k'}) &= &\frac{1}{2}[J_{\bf k-k'+q}+J_{\bf q}-\frac{1}{2}(J_{\bf k+q}+J_{\bf k'-q}+J_{\bf k}+J_{\bf k'})],\\ U_{\bf q,p}({\bf k,k',k''}) &= &\frac{1}{9}\Big (J_{\bf k+q}+J_{\bf k''+q}+J_{\bf k+k''-k'+q}+J_{\bf k+p}+J_{\bf k'+p}+J_{\bf k+k'-k''+p}\nonumber\\ && +J_{\bf k'+k''-k-q-p} +J_{\bf k'-q-p}+J_{\bf k''-q-p}\Big)\nonumber\\ && -\frac{1}{6}\Big(J_{\bf k+q+p}+J_{\bf k'-q}+J_{\bf k''-p}+J_{\bf k}+J_{\bf k'}+J_{\bf k''}\Big), \label{eq: U^6} \end{eqnarray} where $J_{\bf k} = 2J( \cos[k_x] + 2 \cos[\frac{k_x}{2}] \cos[\frac{\sqrt{3} k_y}{2}])$, with its minimum $ J_{\bf Q}$ at $ {\bf Q}=(Q_0,0)$, and $Q_0=4\pi/3$. \subsection{Anisotropic $J$-$J'$ Model} In this model, $\omega_{\bf k}$, $V_{\bf q}({\bf k,k'})$, and $U_{\bf q,p}({\bf k,k',k''})$ all retain the same form as in the isotropic case, with every $J_{\bf k}$ replaced by $\tilde J_{\bf k}$, where $\tilde{J}_{\bf k} = 2(J \cos[k_x] + 2 J'\cos[\frac{k_x}{2}] \cos[\frac{\sqrt{3} k_y}{2}])$. $\tilde{J}_{\bf k}$ has its minimum $\tilde J_{\bf Q}$ at $ {\bf Q}=(Q_{\rm i},0)$, with $ Q_{\rm i}=2\cos^{-1}[-J'/2J]$. \subsection{ XXZ Model} In this model, $\omega_{\bf k}$ is the same as in Eq.\eqref{eq:omega}, and $U_{\bf q,p}({\bf k,k',k''})$ is the same as in Eq.\eqref{eq: U^6}. The difference comes from $V_{\bf q}({\bf k,k'})$, which now contains the exchange anisotropy in the $z$ direction: \begin{equation} V_{{\bf q}}({\bf k,k'})=\frac{1}{2}\Big[J^{ z}_{\bf k-k'+q}+J^{z}_{\bf q}-\frac{1}{2}(J_{\bf k+q}+J_{\bf k'-q}+J_{\bf k}+J_{\bf k'})\Big], \end{equation} where $J^{z}_{\bf k} = 2J^{z}( \cos[k_x] + 2 \cos[\frac{k_x}{2}] \cos[\frac{\sqrt{3} k_y}{2}])$.
The minimum of $J^{z}_{\bf k}$ is at $ {\bf k}=(Q_0,0)$. \section{ Calculation of $\Gamma_1,\Gamma_2,\Gamma_3$} We follow~\cite{griset} and split magnon operators into condensate and non-condensate fractions as \begin {equation} a_{\bf k}=\sqrt{N}\psi_{1}\delta_{\bf k,Q}+\sqrt{N}\psi_{2}\delta_{\bf k,-Q}+\tilde a_{\bf k}, \label{eq:gs} \end {equation} where $\psi_{1,2}$ describe condensates at momenta ${\bf k} = {\bf Q}$ and ${\bf k} = -{\bf Q}$, and $\tilde a_{\bf k}$ describes non-condensate magnons. The ground state energy density reads \begin{eqnarray} E_0/N= -\mu (|\psi_1|^2 + |\psi_2|^2) + \frac{1}{2} \Gamma_1(|\psi_1|^4 + |\psi_2|^4) +\Gamma_2 |\psi_1|^2 |\psi_2|^2 + \Gamma_3 ((\bar{\psi}_1 \psi_2)^3 +{\rm h.c.}) \end{eqnarray} The classical expressions for $\Gamma_1$ and $ \Gamma_2$ (the ones at order $1/S^0$) are obtained by neglecting all non-condensate modes and are shown schematically in Fig.\ref{fig:A1}. These contributions are related to the potential $V_{\bf q}({\bf k, k'})$ via \begin{eqnarray} && \Gamma_{1}^{(0)}= V_{\bf 0}({\bf Q,Q}),\\ && \Gamma_{2}^{(0)}= V_{\bf 0}({\bf Q,-Q})+V_{2\bf Q}({\bf -Q,Q}). \end{eqnarray} The classical expression for $\Gamma_3$ (at order $1/S$) is shown schematically in Fig.\ref{fig:A1} and it is related to the potentials $V_{\bf q}({\bf k, k'})$ and $U_{\bf q,p}({\bf k, k', k''})$ via \begin{equation} \Gamma_{3}^{(1)}= \frac{U_{2\bf Q,2\bf Q}({\bf Q,Q,Q})}{16S}-\frac {[V_{2\bf Q}({\bf Q,Q})]^2}{\omega_{3\bf Q}}. \label{eq:gamma3} \end{equation} Here the first term comes directly from the Hamiltonian \eqref{eq:H^6}, and the second one originates from the condensate $\psi_0 \equiv \langle \tilde a_{0}\rangle \ne 0$, which is induced at the momentum ${\bf k} = 3 {\bf Q} = 0$ in the case of {\em commensurate} ordering at wave vector $ {\bf Q}=(4\pi/3,0)$.
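As a quick numerical check of the second term in Eq.\eqref{eq:gamma3} (a sketch with illustrative values $J=1$, $S=1/2$; the function names are ours), one can verify that $V_{2\bf Q}({\bf Q,Q})=-9J/4$ and $\omega_{3\bf Q}=\omega_{\bf 0}=9JS$ at the commensurate ${\bf Q}=(4\pi/3,0)$, which fixes the coefficient $1/(4S)$ of the induced condensate $\psi_0$:

```python
import numpy as np

J, S = 1.0, 0.5                     # illustrative values
Q = np.array([4 * np.pi / 3, 0.0])  # commensurate ordering vector

def Jk(k):
    """Isotropic triangular-lattice structure factor J_k."""
    kx, ky = k
    return 2 * J * (np.cos(kx) + 2 * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2))

def V(q, k, kp):
    """Two-body vertex V_q(k, k') of the isotropic model."""
    return 0.5 * (Jk(k - kp + q) + Jk(q)
                  - 0.5 * (Jk(k + q) + Jk(kp - q) + Jk(k) + Jk(kp)))

V2Q = V(2 * Q, Q, Q)                     # analytically -9J/4
omega0 = S * (Jk(np.zeros(2)) - Jk(Q))   # analytically  9JS
ratio = -V2Q / omega0                    # analytically  1/(4S)
print(V2Q, omega0, ratio)
```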
This novel condensate adds the term $|\psi_{0}|^2 \omega_{\bf 0}+V_{2\bf Q}({\bf Q,Q})[\psi_{0}(\bar \psi_{1} \psi_{2}^{2}+\psi_{1}^{2}\bar \psi_{2})+{\rm h.c.}]$ to the ground state energy. Minimizing this extra energy contribution, we find the expression for $\psi_{0}$ \begin{equation} \psi_{0}=-\frac{V_{2\bf Q}({\bf Q,Q})}{ \omega_{\bf 0}}(\bar \psi_{1} \psi_{2}^{2}+\psi_{1}^{2}\bar \psi_{2})= \frac{1}{4S} (\bar \psi_{1} \psi_{2}^{2}+\psi_{1}^{2}\bar \psi_{2}). \label{eq:psi 00} \end{equation} It is important to keep in mind that this result is derived for $ {\bf Q}=(4\pi/3,0)$, when $e^{i 3 {\bf Q} \cdot {\bf r}} = 1$ for all sites of the triangular lattice ${\bf r}$. \begin{figure}[htp] \includegraphics[scale=0.25]{Fig-Classical-diagram.pdf} \caption{Diagrams for $\Gamma_1$, $\Gamma_2 $ and $\Gamma_3$ in the classical limit. } \label{fig:A1} \end{figure} The expressions for $\Gamma_{1}^{(0)}, \Gamma_{2}^{(0)}$, and $\Gamma_{3}^{(1)}$ are different in the isotropic case and in the two anisotropic cases.\\ For the isotropic model, \begin{eqnarray} && \Gamma_{1}^{(0)}= J_{\bf 0}-J_{\bf Q}, \quad \Gamma_{2}^{(0)}= J_{\bf 0}+J_{2\bf Q}-2J_{\bf Q},\nonumber\\ && \Gamma_{3}^{(1)}=0. \end{eqnarray} For the $J$-$J'$ model, \begin{eqnarray} &&\Gamma_{2}^{(0)}-\Gamma_{1}^{(0)}= \tilde J_{2\bf Q}-\tilde J_{\bf Q}=J(2+ \frac{J'}{J})^2 (1- \frac{J'}{J})^2 \approx \frac{9(\delta J)^2}{J} ,\nonumber\\ && \Gamma_{3}^{(1)}= 0. \end{eqnarray} For the XXZ model, \begin{eqnarray} &&\Gamma_{2}^{(0)}-\Gamma_{1}^{(0)}= J^{z}_{2 \bf Q}-J_{\bf Q}=3J\Delta,\nonumber\\ &&\Gamma_{3}^{(1)}=\frac{J_{\bf 0}-J_{\bf Q}}{16S}-\frac {(4 J^z_{\bf Q}-3J_{\bf Q}-J_{\bf 0})^2}{16S(J_{\bf 0} -J_{\bf Q})} =\frac{J}{2S}(1+\frac{2J^z}{J})(1-\frac{J^z}{J})\approx \frac{3J\Delta}{2S}. \end{eqnarray} \section {Quantum corrections to $\Gamma_1,\Gamma_2,\Gamma_3$} In this section, we compute quantum corrections to $\Gamma_1,\Gamma_2,\Gamma_3$.
Because these corrections already contain an extra factor of $1/S$, they can be calculated by neglecting the anisotropy. Quantum corrections to $\Gamma_{1,2}$ are of order $1/S$, and quantum corrections to $\Gamma_3$ are of order $(1/S)^2$. In both cases, the quantum term has an extra factor of $1/S$ compared to the classical result. Each quantum correction is a sum of two terms -- one comes from the normal ordering of Holstein-Primakoff bosons, and the other from second and third-order terms in the perturbation expansion in $1/S$. \subsection{Corrections from normal ordering} The Holstein-Primakoff transformation \begin{equation} S^z ({\bf r}) = S - a^+_{\bf r} a_{\bf r}, \quad S^{+}_{\bf r} = \sqrt{2S - a^+_{\bf r} a_{\bf r}}\, a_{\bf r}, \quad S^{-}_{\bf r} = a^+_{\bf r} \sqrt{2S - a^+_{\bf r} a_{\bf r}} \end{equation} contains the square root $\sqrt{2S - a^+_{\bf r} a_{\bf r}}$, which needs to be expanded in the {\em normal-ordered} form to perform the dilute gas analysis (all $a^+_{\bf r}$ have to stand to the left of $a_{\bf r}$). Because $a^+_{\bf r} a_{\bf r} = a_{\bf r} a^+_{\bf r}-1$, i.e., $(a^+_{\bf r} a_{\bf r})^2 = a^+_{\bf r} a^+_{\bf r} a_{\bf r} a_{\bf r} + a^+_{\bf r} a_{\bf r}$, etc., the prefactors in this {\em normal ordering} are not simply powers of $1/S$ but rather contain series of $1/S$ terms.
To order $1/S^3$ we have \begin{eqnarray} \label{eq:HP} S^{-}_{\bf r} = \sqrt{2S} a_{\bf r}^+ \Big\{1 - \frac{1}{4S}(1 + \frac{1}{8S} + \frac{1}{32S^2}) a^+_{\bf r} a_{\bf r} -\frac{1}{32 S^2} (1 + \frac{3}{4S}) a^+_{\bf r} a^+_{\bf r} a_{\bf r} a_{\bf r} - \frac{a^+_{\bf r} a^+_{\bf r} a^+_{\bf r} a_{\bf r} a_{\bf r} a_{\bf r}}{128 S^3} + O(1/S^{4})\Big\}\nonumber \end{eqnarray} The $1/S$ corrections to the prefactors modify Eqs.\eqref{eq:H^4} and \eqref{eq:H^6} to \begin{eqnarray} \label{eq:delta H4} &&\delta{\cal H}^{\rm(4)}=-\frac{J}{32S}\sum_{\bf r,\bf \delta}( a_{\bf r}^\dag a_{\bf r}^\dag a_{\bf r} a_{\bf r+\bf \delta}+\bf h.c),\\ &&\delta{\cal H}^{\rm(6)}=\frac{J}{128S^2}\sum_{\bf r,\bf \delta}( a_{\bf r}^\dag a_{\bf{ r+\bf \delta}}^\dag a_{\bf r+\delta}^\dag a_{\bf r} a_{\bf r} a_{\bf r+\delta}+{\bf h.c})-\frac{3J}{128S^2}\sum_{\bf r,\delta}( a_{\bf r}^\dag a_{\bf r}^\dag a_{\bf r+\delta}^\dag a_{\bf r} a_{\bf r} a_{\bf r}+\bf h.c). \label{eq:delta H6} \end{eqnarray} Substituting the real-space form of the condensate \begin{equation} \langle a_{\bf r}\rangle = \frac{1}{\sqrt{N}}\sum_{\bf k} e^{i {\bf k} \cdot {\bf r}} \langle a_{\pm {\bf Q}}\rangle = \psi_1 e^{i {\bf Q} \cdot {\bf r}} + \psi_2 e^{- i {\bf Q} \cdot {\bf r}} , \label{eq:cond} \end{equation} we obtain the $1/S$ corrections to the classical expressions for $\Gamma_{1,2,3}$: \begin{eqnarray} &&\Delta \Gamma_{a}^{(1)}=\Gamma_{2a}^{(1)}-\Gamma_{1a}^{(1)}=(-\frac{J_{\bf Q}}{4S})-(-\frac{J_{\bf Q}}{8S})=\frac{3J}{8S},\nonumber\\ &&\Gamma_{3a}^{(2)}=\frac{5J_{\bf 0}}{128S^2}+\frac{J_{\bf 0}}{128S^2}=\frac{9J}{32S^2}.
\label{eq:delta gam3} \end{eqnarray} \subsection{Corrections from quantum fluctuations} To find the quantum corrections to the parameters $\Gamma_{1,2,3}$, we evaluate the correction to the ground state energy density $\Delta E$ from the non-condensed modes $\tilde{a}_{\bf k}$ in \eqref{eq:gs} in perturbation theory up to third order, keeping terms up to sixth order in the condensates $\psi_1$ and $\psi_2$. The prefactors of the $\psi^4$ and $\psi^6$ terms in $\Delta E$ yield the quantum corrections to the interaction parameters $\Gamma_{1,2,3}$. Quite generally, under a perturbation $H_i$, the partition function is \begin{eqnarray} Z= \int \prod_{\bf k} da_{\bf k}^{\dag} da_{\bf k} e^{\int_0^\beta d\tau (L_0 - H_i)}= Z_0 \frac{ \int \prod_{\bf k} da_{\bf k}^{\dag} da_{\bf k} e^{\int_0^\beta d\tau (L_0 - H_i)}}{ \int \prod_{\bf k} da_{\bf k}^{\dag} da_{\bf k}e^{\int_0^\beta L_0}} \equiv Z_0\langle e^{-\int_0^\beta H_i}\rangle_0. \end{eqnarray} Here $L_0 = \sum_{k} (a_k^\dag \frac{\partial }{\partial \tau}a_k) - {\cal H}^{(2)}$ represents the Lagrangian of non-interacting magnons described by the quadratic Hamiltonian \eqref{eq:H^2}, and $\beta = 1/T$. The internal energy density is \begin{eqnarray} &&E=-\frac{\partial \ln Z}{\partial \beta}\approx-\frac{\partial \ln Z_0}{\partial \beta}-\frac{\partial (\beta \ln \langle e^{-H_i}\rangle)}{\partial \beta} =E_0+ \Delta E \end{eqnarray} The correction term $\Delta E$ is represented by the standard cumulant expansion, which involves only connected averages of the perturbation $H_i$ \begin{equation} \Delta E= \langle H_{i}\rangle_{0}-\frac{1}{2!}\langle \int_\tau H_{i}^{2}\rangle_{0}+\frac{1}{3!}\langle \int_\tau \int_{\tau'} H_{i}^{3}\rangle_{0}+\dots. \end{equation} In the zero-temperature limit, in which all our calculations are done, $E = E_0 + \Delta E$ determines the ground state energy.
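As a toy illustration of this machinery (not specific to the magnon problem; the two-level Hamiltonian below is entirely hypothetical), the second-order term of the expansion reproduces the familiar perturbative shift $-|V|^2/\epsilon$ of a ground state coupled to a level at energy $\epsilon$:

```python
import numpy as np

# Toy check of the second-order (cumulant) correction against
# exact diagonalization of a hypothetical two-level Hamiltonian.
eps, V = 1.0, 0.01                   # illustrative splitting and coupling

H = np.array([[0.0, V],
              [V, eps]])
E_exact = np.linalg.eigvalsh(H)[0]   # exact ground-state energy
E_pt2 = -V**2 / eps                  # second-order cumulant/PT estimate

# the two agree up to the next order, O(V^4/eps^3)
assert abs(E_exact - E_pt2) < 2 * V**4 / eps**3
print(E_exact, E_pt2)
```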
Integration over relative times $\tau, \tau' \dots$ ensures conservation of frequencies in the internal vertices of the diagrams. The role of the perturbation $H_i$ is played by the interaction Hamiltonians \eqref{eq:H^4}, \eqref{eq:H^6} expressed in terms of the condensates $\psi_{1,2}$ and non-condensed magnons $\tilde{a}_{\bf k}$ after the substitution \eqref{eq:gs}. We remind that the averaging is over the free-boson Hamiltonian for the isotropic system at $h = h_{\rm sat}$. \subsubsection{Quantum corrections to $\Gamma_{1,2}$} Quantum corrections to $\Gamma_{1,2}$ are of order $1/S$, and to get them we only need the fourth-order term in bosons \eqref{eq:H^4}: \begin{equation} {\cal H_ {\rm i,\bf k}}=\sum_{\bf k}\Big[\Big( \frac{1}{2}V_{\bf {k}}({\bf Q,Q})\psi_{1}^{2}a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}+V_{\bf k}({\bf Q,-Q})\psi_{1}\psi_{2}a_{\bf Q+k}^{\dag} a_{\bf -Q-k}^{\dag}+ \frac{1}{2}V_{\bf k}({\bf -Q,-Q})\psi_{2}^{2}a_{\bf -Q+k}^{\dag} a_{\bf -Q-k}^{\dag}\Big)+\bf h.c.\Big], \label{eq:os2} \end{equation} where $V_{\bf q}({\bf k,k'})$ is defined in Eq.\eqref{eq:V^4}. The first-order correction to the energy density obviously vanishes, and the second-order perturbative correction yields \begin{eqnarray} \Delta E&=&-\frac{1}{2}\sum_{\bf k,q}\langle{\cal H_ {\rm i,\bf k}}\cdot {\cal H_ {\rm i,\bf q}}\rangle_0\nonumber\\ &=&-\sum_{\bf k,q}\Big[\frac{1}{4}|\psi_{1}|^{4}V_{\bf k}({\bf Q,Q})V_{\bf q}({\bf Q,Q})\langle a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}a_{\bf Q+q} a_{\bf Q-q}\rangle_0\nonumber\\ &\hspace{8mm}& +\frac{1}{4}|\psi_{2}|^{4}V_{\bf k}({\bf -Q,-Q})V_{\bf q}({\bf -Q,-Q})\langle a_{\bf -Q+k}^{\dag} a_{\bf -Q-k}^{\dag}a_{\bf -Q+q}a_{\bf -Q-q}{\rangle_{0}}\nonumber \\ &\hspace{8mm}& + \frac{1}{2}|\psi_1|^{2}|\psi_2|^{2}V_{\bf k}({\bf Q,-Q})V_{\bf q}({\bf Q,-Q})\langle a_{\bf Q+k}^{\dag} a_{\bf -Q-k}^{\dag}a_{\bf Q+q}a_{\bf -Q-q}\rangle_0\Big].
\end{eqnarray} By Wick's theorem, \begin{equation} \langle a_{k_1}^{\dag}a_{k_2}^{\dag}a_{k_3}a_{k_4}\rangle_0=\langle a_{k_1}^{\dag} a_{k_3}\rangle_0\langle a_{k_2}^{\dag} a_{k_4}\rangle_0+\langle a_{k_1}^{\dag} a_{k_4}\rangle_0\langle a_{k_2}^{\dag} a_{k_3}\rangle_0, \label{Wick} \end{equation} where the pair average is~\cite{popov} \begin{equation} \langle a_{k_1}^{\dag} a_{k_2}\rangle_{0} =-\delta_{k_{1},k_{2}}G_{0}(k_1), \label{pair aver} \end{equation} and $G_{0}(k) \equiv G_{0}(k, \epsilon)$ is the free boson Green's function \begin{equation} G_{0}(k)=(i\omega - \epsilon_k)^{-1}. \end{equation} Using \eqref{Wick} and \eqref{pair aver}, we obtain terms of the form \begin{equation} \sum_{\bf k,q}V_{\bf k}({\bf Q,Q})V_{\bf q}({\bf Q,Q})\langle a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}a_{\bf Q+q} a_{\bf Q-q}\rangle_0=\sum_{\bf k,\rm \omega}\frac{2V_{\bf k}^{2}({\bf Q,Q})}{(i \omega - \epsilon_{\bf Q+k})(-i \omega - \epsilon_{\bf Q-k})}. \end{equation} Using \begin{equation} \sum_{\rm \omega} \frac{1}{(i \omega - \epsilon_{1})(-i \omega - \epsilon_{2})}=\int \frac{d\omega}{2\pi} \frac{1}{(i \omega - \epsilon_{1})(-i \omega - \epsilon_{2})}=\frac{1}{\epsilon_{1}+\epsilon_{2}} \end{equation} and collecting prefactors, we obtain the corrections to $ \Gamma_{1,2}$ in the form \begin{eqnarray} && \Gamma_{1b}^{(1)}=-\sum_{\bf k}\frac{V_{\bf k}^{2}({\bf Q,Q})}{\omega_{\bf Q+k}+\omega_{\bf Q-k}}=-\frac{1}{16S}\sum_{\bf k}\frac{(J_{\bf 0}+5J_{\bf k})^2}{J_{\bf 0}-J_{\bf k}},\nonumber\\ &&\Gamma_{2b}^{(1)}=-\sum_{\bf k}\frac{V_{\bf k}^{2}({\bf Q,-Q})}{\omega_{\bf Q+k}}=-\frac{1}{16S}\sum_{\bf k}\frac{(J_{\bf 0}-4J_{\bf Q+k})^2}{J_{\bf Q+k}-J_{\bf k}}. \label{eq:gamma12-quantum} \end{eqnarray} These corrections can be equally obtained diagrammatically, by evaluating second-order corrections to $\phi^4$ vertices, as in Fig. \ref{fig:A2}.
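The simplification $\omega_{\bf Q+k}+\omega_{\bf Q-k}=S(J_{\bf 0}-J_{\bf k})$ in $\Gamma_{1b}^{(1)}$ relies on the triangular-lattice identity $J_{\bf Q+k}+J_{\bf Q-k}=-J_{\bf k}$ at the commensurate ${\bf Q}=(4\pi/3,0)$. A quick numerical check of the identity (a sketch, with $J=1$):

```python
import numpy as np

J = 1.0
Q = np.array([4 * np.pi / 3, 0.0])
rng = np.random.default_rng(0)

def Jk(k):
    """Isotropic triangular-lattice structure factor J_k."""
    kx, ky = k
    return 2 * J * (np.cos(kx) + 2 * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2))

# J_{Q+k} + J_{Q-k} = -J_k for arbitrary k at the commensurate Q
for _ in range(200):
    k = rng.uniform(-2 * np.pi, 2 * np.pi, size=2)
    assert abs(Jk(Q + k) + Jk(Q - k) + Jk(k)) < 1e-10
print("identity verified on random momenta")
```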
\begin{figure}[b] \begin{center} \includegraphics[scale=0.25]{Fig-correct1,2.pdf} \caption{Diagrammatic representation of perturbative corrections to $\Gamma_1$ and $\Gamma_2 $.} \label{fig:A2} \end{center} \end{figure} Each of the two integrals above is logarithmically divergent, but these divergences cancel out in their difference, leaving the finite result \begin{equation} \Delta \Gamma_{b}^{(1)}=\Gamma_{2b}^{(1)}- \Gamma_{1b}^{(1)}=-\frac{1.97J}{S}. \end{equation} Adding $\Delta \Gamma_{a}^{(1)}$, Eq.\eqref{eq:delta gam3}, to this result we obtain the total quantum correction $\Delta \Gamma^{(1)} = \Delta \Gamma_{a}^{(1)} + \Delta \Gamma_{b}^{(1)} = -1.595 J/S \approx -1.6 J/S$, as quoted in Eq.\eqref{eq:d-gamma} of the main text. \subsubsection{Quantum corrections to $\Gamma_{3}$} The correction to $\Gamma_{3}$ is of order $(1/S)^2$, and to get such a term in the ground state energy density we need to include both the four-boson and six-boson terms in the Hamiltonian, Eqs. \eqref{eq:H^4} and \eqref{eq:H^6}. We have \begin{eqnarray} \cal H_ {\rm i}^{(\rm 4)}&=&\frac{1}{8}\sum_{\bf k}(5J_{\bf k}-2J_{\bf Q})\Big[(\bar{\psi_{1}^{2}}a_{\bf Q+k}a_{\bf Q-k}+\bar{\psi_{2}^{2}}a_{\bf -Q+k} a_{\bf -Q-k})+\bf h.c.\Big]\nonumber\\ &&-\frac{1}{4}\sum_{\bf k}(J_{\bf k}-J_{\bf Q})\Big[(\psi_{0}\psi_{2}a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}+\psi_{0}\psi_{1}a_{\bf -Q+k}^{\dag} a_{\bf -Q-k}^{\dag})+\bf h.c.\Big],\\ \cal H_ {\rm i}^{(\rm 6)}&=&\frac{1}{16S}\sum_{\bf k}(\frac{5}{2}J_{\bf k}-4J_{\bf Q})\Big[(\bar{\psi_{1}}\psi_{2}^{3}a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}+\psi_{1}^{3}\bar{\psi_{2}}a_{\bf -Q+k}^{\dag} a_{\bf -Q-k}^{\dag})+\bf h.c.\Big].
\end{eqnarray} We use the expression for $\psi_0$ in Eq.\eqref{eq:psi 00} to rewrite $\cal H_ {\rm i}^{(\rm 4)}$ as \begin{eqnarray} \cal H_ {\rm i}^{(\rm 4)}&=& \frac{1}{8}\sum_{\bf k}(5J_{\bf k}-2J_{\bf Q})\Big[(\bar{\psi_{1}^{2}}a_{\bf Q+k}a_{\bf Q-k}+\bar{\psi_{2}^{2}}a_{\bf -Q+k} a_{\bf -Q-k})+\bf h.c.\Big]\nonumber\\ &&-\frac{1}{16S}\sum_{\bf k}(J_{\bf k}-J_{\bf Q})\Big[(\bar{\psi_{1}}\psi_{2}^{3}a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}+\psi_{1}^{3}\bar{\psi_{2}}a_{\bf -Q+k}^{\dag} a_{\bf -Q-k}^{\dag})+\bf h.c.\Big]. \end{eqnarray} The total perturbation Hamiltonian is now \begin{eqnarray} \cal H_ {\rm i,\bf k}&=&\cal H_ {\rm i}^{(\rm 4)}+\cal H_ {\rm i}^{(\rm 6)}\nonumber\\ &=&\frac{1}{8}\sum_{\bf k}(5J_{\bf k}-2J_{\bf Q})\Big[(\bar{\psi_{1}^{2}}a_{\bf Q+k}a_{\bf Q-k}+\bar{\psi_{2}^{2}}a_{\bf -Q+k} a_{\bf -Q-k})+{\bf h.c.}\Big]\nonumber\\ &&-\frac{3}{32S}\sum_{\bf k}(J_{\bf k}-2J_{\bf Q})\Big[(\bar{\psi_{1}}\psi_{2}^{3}a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}+\psi_{1}^{3}\bar{\psi_{2}}a_{\bf -Q+k}^{\dag} a_{\bf -Q-k}^{\dag})+\bf h.c.\Big]. \label{ac_1} \end{eqnarray} Because of the two terms in (\ref{ac_1}), there are two contributions to $\Delta E$ to order $\psi^6/S^2$. One comes from taking the product of the $\psi^2$ and $\psi^4$ terms in second-order perturbation theory. This yields \begin{eqnarray} \Delta E_a&=&-\frac{1}{2}\sum_{\bf k,q}\langle{\cal H_ {\rm i,\bf k}}\cdot {\cal H_ {\rm i,\bf q}}\rangle_0 = -\frac{3}{128S}\sum_{\bf k,q}(5J_{\bf k}-2J_{\bf Q})(J_{\bf q}-2J_{\bf Q})\times\nonumber\\ &\times& \Big[\psi_{1}^{3}\bar{\psi_{2}^{3}}\langle a_{\bf Q+k}^{\dag} a_{\bf Q-k}^{\dag}a_{\bf Q+q} a_{\bf Q-q}\rangle_{0}+\bar{\psi_{1}^{3}}\psi_{2}^{3}\langle a_{\bf -Q+k}^{\dag} a_{\bf -Q-k}^{\dag}a_{\bf -Q+q} a_{\bf -Q-q}\rangle_{0}\Big] \label{ac_2} \end{eqnarray} and \begin{equation} \Delta \Gamma_{3,a}^{(2)}=-\frac{3}{64S^2}\sum_{\bf k}\frac{(5J_{\bf k}-2J_{\bf Q})(J_{\bf k}-2J_{\bf Q})}{J_{\bf 0}-J_{\bf k}}.
\label{eq:gam3.1} \end{equation} Diagrammatically, this correction to $\Gamma_{3}$ is given by the first two diagrams in Fig.\ref{fig:A3}. \begin{figure}[b] \begin{center} \includegraphics[scale=0.25]{Fig-correct3.pdf} \caption{Diagrams for $1/S$ corrections to $\Gamma_3 $. The first two diagrams are second-order perturbative corrections from the product of the $\psi^2$ and $\psi^4$ terms in Eq.~(\ref{ac_1}); the last diagram is the third-order perturbative correction from Eq.~\eqref{eq:A33}.} \label{fig:A3} \end{center} \end{figure} Another contribution to $\Delta E$ of order $\psi^6/S^2$ comes from taking the $\psi^2$ term in (\ref{ac_1}) to third order in perturbation theory. The corresponding term in the perturbative Hamiltonian (\ref{ac_1}) comes from the fourth-order term in Holstein-Primakoff bosons, and we write it separately: \begin{eqnarray} \cal H_ {\rm i}^{(\rm 4)}&=&\sum_{\bf k}\Big[\frac{1}{8}(5J_{\bf k}-2J_{\bf Q})(\psi_{1}^{2}a_{\bf Q+k}^{\dag}a_{\bf Q-k}^{\dag}+\bar{\psi_{2}^{2}}a_{\bf -Q+k} a_{\bf -Q-k})+\bf h.c.\Big]\nonumber\\ &&+\sum_{\bf k}\frac{3}{2}J_{\bf Q-k}(\psi_{1}{\bar{\psi_{2}}}a_{\bf k}^{\dag}a_{\bf Q+k}+\bf h.c.). \label{eq:A33} \end{eqnarray} The third-order perturbative correction to the ground state energy density is \begin{eqnarray} \Delta E_b&=&\frac{1}{3!}\sum_{\bf k,q,l}\langle{\cal H_ {\rm i,\bf k}}\cdot {\cal H_ {\rm i,\bf q}}\cdot {\cal H_ {\rm i,\bf l}}\rangle_{0}\nonumber\\ &&= \frac{3}{128}\sum_{\bf k,q,l}\frac{3}{2}J_{\bf Q-k}(5J_{\bf q}-2J_{\bf Q})(5J_{\bf l}-2J_{\bf Q})(\psi_{1}^{3}\bar{\psi_{2}^{3}}+{\bf h.c.})\langle a_{\bf k}^{\dag} a_{\bf Q+q}^{\dag} a_{\bf Q-q}^{\dag}a_{\bf Q+k}a_{\bf -Q+l}a_{\bf -Q-l}\rangle_{0} \end{eqnarray} This leads to the second $1/S^2$ contribution to $\Gamma_3$ in the form \begin{eqnarray} \Gamma_{3b}^{(2)} &=& \frac{3}{32 S^2} \sum_{{\bf k}} \frac{J_{\bf Q-k}(5 J_{\bf k} + J_0)(5 J_{{\bf Q} + {\bf k}} + J_0) } {(J_0 - J_{\bf k}) (J_0 - J_{{\bf Q} + {\bf k}})} .
\label{eq:gam3.2} \end{eqnarray} In the diagrammatic approach, this correction comes from the third diagram in Fig.\ref{fig:A3}. The total $\Gamma_{3}^{(2)}$ is the sum of the terms in Eqs.\eqref{eq:gam3.1} and \eqref{eq:gam3.2}: \begin{eqnarray} \Gamma_{3}^{(2)} &=& \frac{3}{32 S^2} \sum_{{\bf k}} \Big(\frac{J_{\bf Q - k}(5 J_{\bf k} + J_0)(5 J_{{\bf Q} + {\bf k}} + J_0)} {(J_0 - J_{\bf k}) (J_0 - J_{{\bf Q} + {\bf k}})} - \frac{(5 J_{\bf k} + J_0) (J_{\bf k} + J_0)}{2 (J_0 - J_{\bf k})} \Big) = -\frac{0.97J}{ S^2}. \label{eq:gam3} \end{eqnarray} Here again we observe the cancellation of the logarithmic singularities present in the individual integrals. \section{ Intermediate double cone state for $J-J'$ model} In this Section, we analyze the phase transition from the cone to the coplanar state when the magnetic field $ h$ is below $h_{\rm sat}$, i.e., $\mu=h_{\rm sat}-h$ is positive. We remind that at $\mu = 0+$, the cone state is stable at $\delta J = J-J' > \delta J_{c}=0.42J/\sqrt S$. Accordingly, we treat $\delta J \approx \delta J_c$ as a small parameter. Our goal will be to obtain the spin-wave spectrum in the cone state to leading order in $\delta J$ and with quantum corrections. The magnon modes in the cone state are \begin{equation} a_{\bf k}=\sqrt{N}\psi_{1}\delta_{\bf k,Q}+\tilde a_{\bf k}, \label{eq:cone-a} \end{equation} where, we remind, ${\tilde a}_{\bf k}$ describes non-condensed bosons and $\psi_{1} \propto \sqrt{S}$ describes the condensate fraction. We first consider classical spin-wave excitations at the leading order in $1/S$, but at non-zero $\delta J$, and then add quantum $1/S$ corrections to the excitation spectrum. As before, the latter already contain $1/S$ and can be computed in the isotropic $\delta J = 0$ limit.
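Before computing the spectrum, it is useful to check numerically the small-$\delta J$ behaviour of the ordering vector used below, $Q_{\rm i}=2\cos^{-1}[-J'/2J]\approx 4\pi/3-2\delta J/(\sqrt{3}J)$. A short sketch with illustrative values, which also verifies that $Q_{\rm i}$ is a stationary point of $\tilde J_{\bf k}$ along $k_x$:

```python
import numpy as np

J = 1.0
for dJ in (1e-2, 1e-3, 1e-4):
    Jp = J - dJ
    Qi = 2 * np.arccos(-Jp / (2 * J))   # exact position of the minimum
    # stationarity of tilde-J_k along k_y = 0:
    # d/dkx [2 (J cos kx + 2 J' cos(kx/2))] = 0 at kx = Qi
    assert abs(-2 * J * np.sin(Qi) - 2 * Jp * np.sin(Qi / 2)) < 1e-9
    dQ_exact = 4 * np.pi / 3 - Qi
    dQ_lin = 2 * dJ / (np.sqrt(3) * J)  # linearized shift Delta Q
    assert abs(dQ_exact - dQ_lin) < 0.01 * dQ_lin
print("Delta Q = 2 dJ/(sqrt(3) J) to leading order")
```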
\subsection{Classical spin-wave excitations} The spatially anisotropic Hamiltonian to second order in ${\tilde a}_{\bf k}$ reads \begin{eqnarray} {\cal H}_{\rm anis}&=&H_1+H_2\nonumber\\ \label{H1} H_1&=&{\cal H}_{\rm anis}^{(2)}=\sum_{\bf k} \Big[S(\tilde J_{\bf k}-\tilde J_{\bf Q})- \mu\Big] \tilde a_{\bf k}^\dag \tilde a_{\bf k} ,\\ \label{H2} H_2&=&\frac{1}{8}\sum_{\bf q} \Big[(5\tilde J_{\bf q}-2\tilde J_{\bf Q})\psi_{1}^{2}\tilde a_{\bf Q+q}^\dag \tilde a_{\bf Q-q}+\bf h.c.\Big]\nonumber\\ &&+\sum_{\bf k}(\tilde J_{\bf 0}-\tilde J_{\bf Q}+\tilde J_{\bf Q-k}-\tilde J_{\bf k})|\psi_{1}|^2 \tilde a_{\bf k}^\dag \tilde a_{\bf k}, \end{eqnarray} where, we remind, $\tilde{J}_{\bf k} = 2(J \cos[k_x] + 2 J'\cos[\frac{k_x}{2}] \cos[\frac{\sqrt{3} k_y}{2}])$ has its minimum $\tilde J_{\bf Q}$ at $ {\bf Q}=(Q_{\rm i},0)$, with $ Q_{\rm i}=2\cos^{-1}[-J'/2J]$. At small $\delta J \sim \delta J_c$, ${\bf Q}$ is given by ${\bf Q}\approx (4\pi/3-\Delta Q,0)$, where $ \Delta Q= 4\pi/3 - Q_{\rm i} = 2\delta J/(\sqrt{3}J)$. Our goal is to obtain the renormalization of the excitation spectrum $\omega_{\bf k}$ to second order in the condensate, i.e., to order $\psi^2$. The first term in $H_2$ is irrelevant for this purpose, as it describes excitations with momentum transfer $2{\bf Q}$, which can only contribute to $\omega_{\bf k}$ at second order in perturbation theory, and such a term will be of order $\psi^4$. The remaining term in $H_2$ is quadratic in the non-condensed bosons and directly contributes to the spin-wave spectrum at second order in $\psi$. We will be interested in magnon excitations for ${\bf k}$ near $-{\bf Q}=-(Q_{\rm i},0)$. Accordingly, we set ${\bf k}=-{\bf Q}+{\bf p}$ and treat ${\bf p}$ as a small momentum.
Restricting to small ${\bf p}$ and using the approximate form of ${\bf Q}$, we re-write Eqs.\eqref{H1} and \eqref{H2} as \begin{eqnarray} {\cal H}_{\rm anis}=\sum_{\bf p}\Big[ \frac{3}{4}SJ (p_{x}^{2}+p_{y}^{2}) + J |\psi_{1}|^2\Big(\frac{h_{\rm sat}}{SJ}+\frac{9}{2}p_{x}\Delta Q+\frac{27}{4}(\Delta Q)^2\Big)-\mu \Big] \tilde a_{\bf -Q+p}^\dag \tilde a_{\bf -Q+p} , \end{eqnarray} where $h_{\rm sat}=S(\tilde J_{\bf 0}-\tilde J_{\bf Q})=S\Gamma_{1}^{(0)}$. Completing the square and rearranging, and setting ${\bf k}=-{\bf Q}+{\bf p}$ again, we obtain \begin{equation} \label{eff H} {\cal H}_{\rm anis}=\sum_{\bf k}S\omega_{\bf k}^{(1)}\tilde a_{\bf k}^\dag \tilde a_{\bf k}, \end{equation} where \begin{eqnarray} \omega_{\bf k}^{(1)}&=&\frac{3}{4}J\Big[(k_{x}+{\bar Q}_{\rm i})^{2}+k_{y}^{2}+\varepsilon_{\rm min}\Big],\\ \label{energy} \varepsilon_{\rm min}&=&9\frac{|\psi_{1}|^2}{S}(1-\frac{|\psi_{1}|^2}{S})(\Delta Q)^2 + \frac{4}{3}\frac{1}{SJ}(\frac{|\psi_{1}|^2}{S}h_{\rm sat}-\mu). \end{eqnarray} Here ${\bar Q}_{\rm i} =4\pi/3 - \Delta Q +3|\psi_{1}|^2\Delta Q/S$, and the minimum of $\omega_{\bf k}^{(1)}$ is at $(-{\bar Q}_{\rm i}, 0)$. In the classical approximation (leading order in $1/S$), the condensate density is $|\psi_{1}|^2/S = \mu/(S \Gamma_{1}^{(0)}) = \mu/h_{\rm sat}$, and we obtain \begin{equation} \varepsilon_{\rm min, class}=\frac{12\mu}{h_{\rm sat}J^{2}}\frac{h}{h_{\rm sat}} (\delta J)^2. \label{eq:emin0} \end{equation} To this order, the second term in \eqref{energy} vanishes exactly. To the same accuracy, ${\bar Q}_{\rm i} = Q_{\rm i} + (4 \pi/3 -Q_{\rm i}) (3\mu/h_{\rm sat}) + O(1/S)$. \subsection{Quantum corrections} Since at $\mu=0$ the critical value $\delta J_c \sim 1/\sqrt{S}$, we recognize that in fact $\varepsilon_{\rm min, class} \sim 1/S$ in the relevant range of $\delta J$, where the transition between the cone and the coplanar state takes place.
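The algebra behind Eq.\eqref{eq:emin0} can be checked symbolically: substituting $|\psi_1|^2/S=\mu/h_{\rm sat}$ and $\Delta Q=2\delta J/(\sqrt{3}J)$ into Eq.\eqref{energy} makes the second term vanish and reproduces the classical result (a sketch using sympy; the variable names are ours):

```python
import sympy as sp

mu, hsat, J, S, dJ = sp.symbols('mu h_sat J S deltaJ', positive=True)

x = mu / hsat                      # classical condensate density |psi_1|^2 / S
dQ = 2 * dJ / (sp.sqrt(3) * J)     # linearized shift of the ordering vector

# epsilon_min of Eq. (energy) with the classical condensate density
eps_min = 9 * x * (1 - x) * dQ**2 + sp.Rational(4, 3) / (S * J) * (x * hsat - mu)

# claimed classical result, Eq. (emin0), with h = h_sat - mu
eps_class = 12 * mu / (hsat * J**2) * (hsat - mu) / hsat * dJ**2

assert sp.simplify(eps_min - eps_class) == 0
print("epsilon_min,class reproduced")
```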
This means that Eq.~\eqref{eq:emin0} is not complete -- one needs to add to it quantum $1/S$ contributions. These come from several sources, as we now describe. The first quantum correction comes from the fact that the relation between the condensate wave function $\psi_1$ and $\Gamma_1$, \begin{equation} \frac{|\psi_{1}|^2}{S}=\frac{\mu }{S\Gamma_{1}}, \end{equation} contains $1/S$ terms, because $\Gamma_1 = \Gamma_1^{(0)} + \Gamma_1^{(1)}$, where $\Gamma_1^{(0)} = h_{\rm sat}/S = \tilde J_{\bf 0}-\tilde J_{\bf Q}\sim J$ represents the classical ($S=\infty$) contribution already accounted for in deriving \eqref{eq:emin0}, while $\Gamma_1^{(1)} = \Gamma_{1a}^{(1)} + \Gamma_{1b}^{(1)} \sim J/S$ represents the leading $1/S$ correction to it. The term with subindex $a$ describes the contribution from normal ordering, $-J_{\bf Q}/(8S)$ in \eqref{eq:delta gam3}, while the one with subindex $b$ describes the contribution from quantum fluctuations, Eq.~\eqref{eq:gamma12-quantum}. Hence, in the cone state, \begin{equation} \frac{|\psi_{1}|^2}{S}=\frac{\mu }{S\Gamma_{1}}= \frac{\mu }{S(\Gamma_1^{(0)} + \Gamma_1^{(1)})} = \frac{\mu }{h_{\rm sat}}(1 - \frac{\Gamma_1^{(1)}}{\Gamma_1^{(0)}}) \end{equation} contains a quantum correction $\sim \Gamma_1^{(1)}$. Substituting the full form of $\psi_1$ into Eq.~(\ref{energy}) and collecting $1/S$ terms, we obtain the first $1/S$ correction $\Delta \varepsilon_{\rm min,1}$: \begin{equation} \Delta \varepsilon_{\rm min,1}= -\frac{4}{3}\frac{\mu}{h_{\rm sat}J}\Gamma_{1}^{(1)} + O(1/S^2). \label{eq:emin1} \end{equation} The two other quantum corrections are associated with $\Gamma_2$ processes. One $\Gamma_2$ correction comes from Eq.~\eqref{eq:delta H4}, which, we remind, emerges when we normal order bosonic operators in the Holstein-Primakoff transformation.
It is easiest to obtain this contribution via a real-space representation \begin{equation} a_{\bf r}=\psi_1 e^{i {\bf Q} \cdot {\bf r}}+\tilde a_{\bf r}, \end{equation} where, as before, $\tilde a_{\bf r}$ describes non-condensate magnons. Substituting this into \eqref{eq:delta H4} we obtain \begin{equation} \delta{\cal H}^{\rm(4)}=-\frac{|\psi_{1}|^2}{8S}\sum_{\bf k}(\tilde J_{\bf k}+\tilde J_{\bf Q})\tilde a_{\bf k}^\dagger\tilde a_{\bf k}\approx -\frac{|\psi_{1}|^2}{4S}\sum_{\bf p}\tilde J_{\bf Q} \tilde a_{\bf -Q+p}^\dag \tilde a_{\bf -Q+p}, \end{equation} for ${\bf k} \approx - {\bf Q}$. Adding this to \eqref{energy} we obtain a $\Gamma_{2a}^{(1)}$ correction to $\varepsilon_{\rm min}$, \begin{equation} \label{dmin2} \Delta\varepsilon_{\rm min,2}= \frac{4}{3 J}\frac{\mu}{h_{\rm sat}J} (- \frac{\tilde J_{\bf Q}}{4S}) =\frac{4}{3}\frac{\mu}{h_{\rm sat}J}\Gamma_{2a}^{(1)}. \end{equation} Observe that, because we already have $1/S$ in the prefactor, we can neglect the difference between $\tilde J_{\bf Q}$ and $J_{\bf Q}$. The third quantum correction (also associated with $\Gamma_2$) comes from terms cubic in non-condensate magnons ${\tilde a}_{\bf k}$ taken to second order in perturbation theory. The cubic terms are generated from \eqref{eq:H^4} via the substitution \eqref{eq:cone-a}. Such terms are necessarily linear in $\psi_1$: \begin{equation} \label{H3} H_3=\frac{1}{\sqrt N}\sum_{\bf k,q} V_{\bf q}({\bf k,Q})\Big(\psi_1 \tilde a_{\bf Q-q}^\dag \tilde a_{\bf k+q}^\dag \tilde a_{\bf k}+\bf h.c.\Big). 
\end{equation} Second-order perturbation theory in \eqref{H3} produces a $1/S$ correction to the dispersion of ${\tilde a}_{\bf k}$ magnons with ${\bf k} \approx -{\bf Q}$ in the form \begin{eqnarray} \Delta\varepsilon_{\rm min,3}&=&-\frac{4}{3}\frac{1}{JS}\sum_{\bf q,q'} V_{\bf q}({\bf-Q,Q})V_{\bf q'}({\bf-Q,Q})|\psi_{1}|^2\langle \tilde a_{\bf Q-q}^\dag \tilde a_{\bf -Q+q}^\dag \tilde a_{\bf Q-q'}\tilde a_{\bf -Q+q'}\rangle_0,\nonumber\\ \label{dmin3} &=& \frac{4}{3}\frac{|\psi_{1}|^2}{SJ} \Gamma_{2b}^{(1)}\approx\frac{4}{3}\frac{\mu}{h_{\rm sat}J} \Gamma_{2b}^{(1)}. \end{eqnarray} Adding Eqs.~\eqref{eq:emin1}, \eqref{dmin2}, and \eqref{dmin3} to the classical result for $\varepsilon_{\rm min}$, we obtain the final expression for the minimal energy $\varepsilon_{\rm min}$ of the magnons at ${\bf k} \approx -{\bf Q}$: \begin{equation} \varepsilon_{\rm min,tot}=\frac{12\mu}{h_{\rm sat}J^2}\Big[\frac{h}{h_{\rm sat}}(\delta J)^2 + \frac{J(\Gamma_2^{(1)} - {\Gamma_1^{(1)}})}{9}\Big] =\frac{12\mu}{h_{\rm sat}J^2}\Big[\frac{h}{h_{\rm sat}}(\delta J)^2-(\delta J_c)^2\Big] \approx \frac{12\mu}{h_{\rm sat}J^2}\Big[(\delta J)^2-(\delta J_c)^2(1+\frac{\mu}{h_{\rm sat}})\Big]. \end{equation} Observe that $\Gamma_{2}^{(1)}-\Gamma_{1}^{(1)} = \Delta \Gamma^{(1)} = -1.6J/S$ and $\delta J_c = \sqrt{1.6 J^2/(9 S)} \approx 0.42 J/\sqrt{S}$; see Eq.~\eqref{eq:d-gamma} and the description below it in the main text. At $\mu = +0$, the magnon energy at ${\bf k} = - {\bf Q}_{\rm i}$ vanishes at $\delta J = \delta J_c$, as expected. However, at a finite $\mu$, the instability occurs at $\delta J_h = \delta J_{c}\sqrt{h_{\rm sat}/h} > \delta J_{c}$, and the mode that condenses carries momentum ${\bf k} = (-{\bar Q}_{\rm i},0)$ different from $- {\bf Q}_{\rm i}$. This gives rise to the development of a second condensate with momentum $(-{\bar Q}_{\rm i},0)$. The resulting state is the double cone phase described in the main text.
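As a numerical illustration (ours; $J=1$ and an illustrative $S=1/2$ are assumptions), the quoted value of $\delta J_c$ follows from $\Delta\Gamma^{(1)} = -1.6J/S$, and the zero of the bracket $\frac{h}{h_{\rm sat}}(\delta J)^2 - (\delta J_c)^2$ shows how the threshold shifts upward at finite $\mu = h_{\rm sat} - h$:

```python
import math

J, S = 1.0, 0.5                      # illustrative spin value (assumption)
dGamma = -1.6 * J / S                # Gamma_2^(1) - Gamma_1^(1), from the text
dJc = math.sqrt(-J * dGamma / 9.0)   # (delta J_c)^2 = 1.6 J^2 / (9 S)
assert abs(dJc * math.sqrt(S) - 0.4216 * J) < 1e-3   # ~0.42 J / sqrt(S)

# zero of the bracket: the instability threshold grows as h decreases below h_sat
h_sat = 9.0 * J * S
for mu in (0.0, 0.1 * h_sat, 0.2 * h_sat):
    h = h_sat - mu
    dJ_h = dJc * math.sqrt(h_sat / h)
    assert dJ_h >= dJc               # shifted to larger delta J at finite mu
```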
\section{Introduction} One of the famous Ramsey-theoretic results is the so-called Van der Waerden\textquoteright s Theorem, which guarantees that at least one cell of any partition $\{C_{1},C_{2},\ldots,C_{r}\}$ of $\mathbb{N}$ contains arithmetic progressions of arbitrary length. Since arithmetic progressions are invariant under shifts, it follows that every piecewise syndetic set contains arbitrarily long arithmetic progressions. \begin{thm} \label{Thm 1}Given any $r,l\in\mathbb{N}$, there exists $N(r,l)\in\mathbb{N}$ such that for any $r$-partition of $[1,N]$, at least one cell of the partition contains an arithmetic progression of length $l$. \end{thm} Over the years, mathematicians have studied the abundance of progressions in various types of large sets, such as \textbf{syndetic sets, central sets, thick sets, piecewise syndetic sets}, etc.; see, e.g., \cite{key-4}, \cite{key-5}, \cite{key-6}, \cite{key-3}. All of these results show that if $A\subseteq\mathbb{N}$ or $A\subseteq S$ (where $S$ is a countable commutative semigroup) is large in some sense, then certain special configurations contained in $A$ are also large in some sense. However, there remain types of large sets, such as \textbf{C-sets, D-sets, J-sets}, etc., for which abundance is yet to be made explicit. All of the aforementioned sets have a common property: they all contain arithmetic progressions of arbitrary length. Such sets are called \textbf{A.P. rich sets}. Here we give an elementary combinatorial proof of abundance for these types of sets. We also show that if $A$ is an \textbf{A.P. rich set}, then it is an \textbf{A.P. rich set} of all orders. Throughout this paper, $S$ is a countable commutative semigroup, although countability or commutativity is sometimes not needed in the proofs. \section{Main results} \begin{thm} \label{Thm 2} Let $S$ be a semigroup and let $l\in\mathbb{N}$. If $A\subseteq S$ is an A.P. rich set, then \[ B=\:\{(a,b):\:\left\{ a,\:a+b,\:a+2b,\ldots,\:a+lb\right\} \subset A\} \] is an A.P. rich set. \end{thm} \begin{proof} Since $A$ contains arithmetic progressions of arbitrary length, it contains a progression $\left\{ c,\:c+d,\:c+2d,\ldots,\:c+\left((l+1)^{2}-1\right)d\right\}$. For any $0\leq r\leq l$ and $0\leq j\leq l$ we have $(c+rd)+j(d+rd)=c+(r+j+jr)d\in A$, since $r+j+jr\leq(l+1)^{2}-1$. Hence $(c,d)+r(d,d)=(c+rd,\:d+rd)\in B$ for every $0\leq r\leq l$, which shows that $B$ contains the progression \[ \left\{ (c,d),\:(c,d)+(d,d),\:(c,d)+2(d,d),\ldots,\:(c,d)+l(d,d)\right\} . \] This proves the theorem. \end{proof} Now, for any $l\in\mathbb{N}$, let $AP_{l+1}$ be the subsemigroup of $S^{l+1}$ defined by \[ AP_{l+1}=\:\left\{ \left(a,\:a+b,\:a+2b,\ldots,\:a+lb\right):\:a,b\in S\right\} . \] We next derive a result which is one of the main applications of \cite{key-3} for some large sets. \begin{cor} \label{Corollary 3} Let $S$ be a cancellative semigroup. If $A\subseteq S$ is an A.P. rich set, then $A^{l+1}\cap AP_{l+1}$ is also an A.P. rich set. \end{cor} \begin{proof} Consider the semigroup epimorphism $\varphi:\:S\times S\longrightarrow AP_{l+1}$ defined by $\varphi\left(a,b\right)=\:\left(a,\:a+b,\:a+2b,\ldots,\:a+lb\right)$. Since $A\subseteq S$ is A.P. rich, by Theorem \ref{Thm 2} the set $B$ is also A.P. rich. So, for any $l$-length arithmetic progression \[ \left\{ \left(a,b\right),\:\left(a,b\right)+\left(e,f\right),\:\left(a,b\right)+2\left(e,f\right),\ldots,\:\left(a,b\right)+l\left(e,f\right)\right\} \] in $B$, we have \[ \varphi\left(\left(a,b\right)+i\left(e,f\right)\right)=\:\varphi\left(a,b\right)+\:i\varphi\left(e,f\right)\in A^{l+1}\cap AP_{l+1}\text{ for all }1\leq i\leq l\text{,} \] which concludes the proof. \end{proof}
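The combinatorial mechanism behind Theorem \ref{Thm 2} can be checked directly on a small example (our illustration, taking $S=(\mathbb{N},+)$): an arithmetic progression of length $(l+1)^2$ in $A$ yields an $(l+1)$-term progression in $B$ with common difference $(d,d)$.

```python
l = 3                                   # target progression length l+1 in B
c, d = 5, 4
# A contains an AP of length (l+1)^2, plus arbitrary other elements
A = {c + k * d for k in range((l + 1) ** 2)} | {1, 2, 1000}

def in_B(a, b):
    # membership in B = {(a,b): a, a+b, ..., a+lb all lie in A}
    return all(a + j * b in A for j in range(l + 1))

# the progression {(c,d) + r(d,d) : 0 <= r <= l} lies entirely in B
for r in range(l + 1):
    assert in_B(c + r * d, d + r * d)
```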
\section*{Introduction} A transparent polarisation sensitive phase pattern introduces a position-dependent change in the phase of transmitted light depending on the polarisation of incident light. A classical image of such a pattern can be obtained by measuring the position-wise polarisation shift of the transmitted classical electromagnetic field in the image plane by a method known as polarisation contrast imaging \cite{pci1, pci2, pci3, faradayimaging, br_oct, brphase1, brphase2}. However, an image formed by a single photon exposure of the pattern in each execution of the experiment is defined as a quantum image. In this way, one can gain a quantum mechanical advantage over classical imaging. One such advantage is the quantum secure transfer of images. Furthermore, by incorporating quantum entangled photons, one can utilise properties of quantum entanglement and quantum measurements to construct an image of the pattern, where a single photon of an entangled pair of photons interacts with the pattern. However, in contrast to classical polarisation contrast imaging, complete information about the pattern is shared non-locally by both photons as a consequence of quantum entanglement, even if they are separated by a large distance when one photon interacted with the pattern. Here, one cannot obtain complete pattern information just by measuring a single photon. Therefore, a joint measurement becomes necessary, and a quantum image of the pattern can be constructed by correlating the measurement outcomes. Because of quantum entanglement, quantum images can be transferred to another location securely and directly. This paper presents the first free-space long-path experiment of quantum imaging of a transparent polarisation sensitive phase pattern with hyper-entangled photons, where a hyper-entangled state is a product of quantum entangled states involving different degrees of freedom \cite{kwiat_hyp1, kwiat_hyp2}.
In the experiment, the hyper-entangled state consists of a product of polarisation entanglement \cite{chsh} and momentum entanglement of two photons \cite{zeirev1, horne3, shimony, mandelrev, eprboyd, dds_m}, where the momentum entanglement corresponds to an Einstein-Podolsky-Rosen type of quantum entangled state originating from a finite region. Each photon of a hyper-entangled bi-photon state carries information about the polarisation and momentum of the other photon, but an individual photon has no well-defined position, momentum or polarisation. A non-birefringent transparent phase object can be classically imaged with well-known phase-contrast imaging methods, where the phase information is converted to intensity \cite{pc1,pc2,pc3}. In the context of the development of imaging experiments, a phase-contrast imaging method has been applied for non-destructive detection of a Bose-Einstein condensate \cite{nondesbec1, nondesbec2}. Two-photon coincidence quantum imaging of absorptive objects has been studied theoretically and experimentally with spatially correlated photons from the perspective of the foundations of quantum mechanics \cite{gimage1, gimage3, gimage5, ghost4, ghim, qimaging, bar, shapiro2, raza, duality}. Quantum imaging of an absorptive object has been realised experimentally without detecting the photon interacting with the object \cite{zeilinger_1, zeilinger_2}. Bell's inequality violation experiments have been performed with images \cite{bell_i, padgett_rev}. Two-atom ghost imaging of an absorptive pattern for atoms has been experimentally realised with metastable helium atoms \cite{gitruscot1, gitruscot2}. A polarisation-sensitive metasurface has been imaged in the near field with polarisation entangled photons \cite{br_pol}.
In this paper, experiments involve multi-dimensional entanglement in the form of a hyper-entangled state to quantum image a transparent polarisation sensitive phase pattern in free space, where the pattern is positioned at a distance of 16.91~m from a coincidence imaging camera. However, this particular experiment is not aiming to close loopholes. In the experiment, the polarisation as well as the momentum degrees of freedom of the hyper-entangled photons are utilised for quantum imaging. \section{Hyper-entangled photons} \begin{figure*} \centering \includegraphics[scale=1.4]{figures/fig_1.eps} \caption{\label{fig1} \emph{A schematic diagram of a quantum imaging experiment of a transparent polarisation sensitive phase pattern. Photon-2 of a hyper-entangled photon pair is passed through the pattern. The polarisation state of each photon is measured by passing it through a polariser. Photon-$1$ is measured in a particular quantum superposition of its position after its polarisation selection by passing through a lens $L_{o}$ followed by detection by a single photon detector. An event signal produced by the single photon detector is sent to a single photon sensitive camera to detect the position of photon-2 corresponding to a particular measurement outcome of photon-$1$. An image is gradually formed by accumulating photon-$2$ detections on the camera by repeating the same experiment. }} \end{figure*} Consider a hyper-entangled photon pair emitted by a source in opposite directions, as shown in the schematic diagram in Fig.~\ref{fig1}, where photon-$2$ is passed through a transparent polarisation sensitive phase pattern. Since the photons are separately polarisation entangled and momentum entangled, a polarisation sensitive phase pattern induces a position-dependent variation of the quantum entangled state of the photons.
Polarisation measurement outcomes of photon-$1$ and photon-2 are determined by the measurement settings (orientations of the pass axes) of polariser-$1$ and polariser-$2$, respectively. After passing through polariser-$1$, photon-$1$ is focused by a lens $L_{o}$ on a single photon detector after passing through a narrow aperture. The lens $L_{o}$ permits detection in a quantum superposition basis of the position of photon-1. A photon detection produces a pulse, named an event signal, which is sent to a single photon detection camera to allow registration of photon-$2$ corresponding to a selected measurement outcome. A quantum image is gradually formed by accumulating the photon registrations on the camera for each repetition of the experiment with the same measurement setting. In this way, this is coincidence imaging, where an individual photon has no well-defined momentum and polarisation; however, each photon carries the momentum and polarisation information of the other photon through quantum entanglement. Therefore, measurement outcomes of both photons are required to construct a quantum image. Consider the Einstein-Podolsky-Rosen (EPR) state \cite{epr} of two particles in position space, $|\alpha\rangle=\int^{\infty}_{-\infty}|x\rangle_{1}|x+x_{o}\rangle_{2} \mathrm{d}x$, where subscripts $1$ and $2$ are the labels of the particles. In this state, both particles are equally likely to exist at all points in position space with a constant separation $x_{o}$. In this way, they are spatially entangled. The EPR state in momentum space can be written as $|\alpha\rangle=\int^{\infty}_{-\infty}e^{i \frac{p x_{o}}{\hslash}}|p\rangle_{1}|-p\rangle_{2}\mathrm{d}p$, where both particles have opposite momenta and are equally likely to exist at all points in momentum space, such that they are momentum entangled. Here $\hslash=h/2\pi$ is the reduced Planck's constant. In this way, both the position and momentum of each particle are completely unknown.
Consider a source of finite extension producing a pair of photons such that both photons originate from the same position in the source. This is the case if $x_{o}=0$ in the one-dimensional EPR state, and in this paper it is extended to three dimensions by including the polarisation of photons. The quantum state of photons originating from a position vector $\mathbf{r}'$ leads, in the time-independent case, to a finite amplitude to find a photon at a point $o_{a}$ in region-$a$ and another photon at a point $o_{b}$ in region-$b$, as shown in Fig.~\ref{fig1}. Consider points $o_{a}$ and $o_{b}$ located prior to any optical element from the source. A joint amplitude for photons to be at these points in three dimensions is $\frac{e^{ip_{1}|\mathbf{r}_{a}-\mathbf{r}'|/\hslash}}{|\mathbf{r}_{a}-\mathbf{r}'|}\frac{e^{ip_{2}|\mathbf{r}_{b}-\mathbf{r}'|/\hslash}}{|\mathbf{r}_{b}-\mathbf{r}'|}$ \cite{horne1, horne2}, where $p_{1}$ and $p_{2}$ are the magnitudes of the momenta of photon-$1$ and photon-$2$, respectively. Position vectors of points $o_{a}$ and $o_{b}$ from the origin are $\mathbf{r}_{a}$ and $\mathbf{r}_{b}$, respectively. Distances of $o_{a}$ and $o_{b}$ from $\mathbf{r}'$ are $|\mathbf{r}_{a}-\mathbf{r}'|$ and $|\mathbf{r}_{b}-\mathbf{r}'|$, respectively. Here, both photons are produced from the same location within the source. Consider that the polarisation state of photon-$1$ propagating in region-$a$ is horizontal, $|H\rangle_{a}$, and that of photon-$2$ propagating in region-$b$ is vertical, $|V\rangle_{b}$. The total amplitude to find photon-$1$ at $o_{a}$ and photon-2 at $o_{b}$ is a linear quantum superposition of amplitudes originating from all points within the source.
Therefore, a combined two-photon quantum state can be written as \begin{equation}\label{eq1} |\Psi\rangle_{12}=A_{o} \int^{\infty}_{-\infty}\int^{\infty}_{-\infty}\int^{\infty}_{-\infty} \psi(x',y',z') \frac{e^{ip_{1}|\mathbf{r}_{a}-\mathbf{r}'|/\hslash}}{|\mathbf{r}_{a}-\mathbf{r}'|}\frac{e^{ip_{2}|\mathbf{r}_{b}-\mathbf{r}'|/\hslash}}{|\mathbf{r}_{b}-\mathbf{r}'|} \mathrm{d}x'\mathrm{d}y'\mathrm{d}z' \otimes|H\rangle_a|V\rangle_b \end{equation} where $A_{o}$ is a normalisation constant and $\psi(x',y',z')$ is an amplitude of pair creation at a location $r'(x',y',z')$ that is considered to be $ \frac{e^{-\frac{x'^{2}}{2\sigma^2_{x}}}}{(2\pi)^{1/2} \sigma_{x}} \frac{e^{-\frac{y'^{2}}{2\sigma^2_{y}}}}{(2\pi)^{1/2} \sigma_{y}} \frac{e^{-\frac{z'^{2}}{2\sigma^2_{z}}}}{(2\pi)^{1/2} \sigma_{z}}$. For a source extension much smaller than distances of points $o_{a}$ and $o_{b}$ from origin, Eq.~\ref{eq1} is written as \begin{equation} \label{eq3} |\Psi\rangle_{12}=\Phi_{12}(r_{a};r_{b})\otimes|H\rangle_a|V\rangle_b \end{equation} where $\Phi_{12}(r_{a};r_{b})$ is the amplitude to find photon-$1$ at $o_{a}$ and photon-$2$ at $o_{b}$, which is calculated by solving the integral in Eq.~\ref{eq1}, \begin{equation}\label{eq4} \Phi_{12}(r_{a};r_{b})= A_{o} \frac{e^{i(p_{1}r_{a}+p_{2}r_{b})/\hslash}}{r_{a} r_{b}} e^{-((p_{1x}+p_{2x})\sigma_{x})^2/2\hslash^{2}} e^{-((p_{1y}+p_{2y})\sigma_{y})^2/2 \hslash^{2}} e^{-((p_{1z}+p_{2z})\sigma_{z})^2/2\hslash^{2}} \end{equation} where $p_{1x}=p_{1}\sin\theta_{a} \cos\phi_{a}$, $p_{1y}=p_{1}\sin\theta_{a} \sin\phi_{a}$ and $p_{1z}=p_{1}\cos\theta_{a}$ are the $x$, $y$ and $z$ components of momentum of photon-$1$ in the spherical polar coordinate system. Similarly $p_{2x}=p_{2}\sin\theta_{b} \cos\phi_{b}$, $p_{2y}=p_{2}\sin\theta_{b} \sin\phi_{b}$ and $p_{2z}=p_{2}\cos\theta_{b}$ are the $x$, $y$ and $z$ components of momentum of photon-$2$. Photons are momentum entangled due to inseparability of $\Phi_{12}(r_{a};r_{b})$. 
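The structure of Eq.~\ref{eq4} --- momentum sums suppressed beyond $\sim\hslash/\sigma$ of zero --- can be visualised with a discretised one-dimensional toy model (our sketch, not the actual three-dimensional state): a pair wavefunction sharply correlated in position acquires anticorrelated momenta under Fourier transform.

```python
import numpy as np

# 1D toy pair amplitude: photons created at (nearly) the same point of a source
N, L = 512, 20.0
x = (np.arange(N) - N // 2) * (L / N)
sigma, s_rel = 2.0, 0.1               # source size and small relative smearing
x1, x2 = np.meshgrid(x, x, indexing="ij")
psi = (np.exp(-((x1 + x2) ** 2) / (8 * sigma**2))
       * np.exp(-((x1 - x2) ** 2) / (8 * s_rel**2)))

# two-particle momentum distribution via a 2D FFT
phi = np.fft.fft2(psi)
w = np.abs(phi) ** 2
w /= w.sum()
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
p1, p2 = np.meshgrid(p, p, indexing="ij")

# total momentum p1 + p2 is sharply peaked at zero; relative momentum is broad
std_sum = np.sqrt((w * (p1 + p2) ** 2).sum())
std_diff = np.sqrt((w * (p1 - p2) ** 2).sum())
assert std_sum < std_diff / 10
```

The width of the $p_1+p_2$ distribution scales as $1/\sigma$, mirroring the Gaussian factors in Eq.~\ref{eq4}.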
Assume that the photons have the same energy; therefore, the magnitudes of their momenta are equal, $p_{1}=p_{2}$. Since photons are identical particles, there exist different amplitudes that cannot be distinguished if a photon is detected at $o_{a}$ and another photon is detected at $o_{b}$. The amplitude to find photon-$1$ at $o_{a}$ and photon-2 at $o_{b}$ is indistinguishable from that of finding photon-1 at $o_{b}$ and photon-2 at $o_{a}$ after exchange of the photons, if their polarisation is not measured. Therefore, due to the bosonic nature of identical photons, their symmetric quantum state is written as $|\Psi\rangle= \frac{1}{\sqrt{2}}(|\Psi\rangle_{12}+|\Psi\rangle_{21})$, where $|\Psi\rangle_{21}$ is the quantum state Eq.~\ref{eq1} after the exchange of photons. After the exchange of photons, Eq.~\ref{eq3} becomes $|\Psi\rangle_{21}=\Phi_{21}(r_{a};r_{b})\otimes|V\rangle_a|H\rangle_b$. However, from Eq.~\ref{eq4}, $\Phi_{21}(r_{a};r_{b})=\Phi_{12}(r_{a};r_{b})$. Therefore, the combined quantum state becomes \begin{equation}\label{eq5} |\Psi\rangle=\Phi_{12}(r_{a};r_{b})\otimes\frac{1}{\sqrt{2}} \left(|H\rangle_a|V\rangle_b+|V\rangle_a|H\rangle_b\right) \end{equation} This is a hyper-entangled state, since the photons are momentum entangled due to the inseparability of $\Phi_{12}(r_{a};r_{b})$ and polarisation entangled due to the inseparability of $\frac{1}{\sqrt{2}} (|H\rangle_a|V\rangle_b+|V\rangle_a|H\rangle_b)$, where each quantum entangled state is symmetric. However, the combined quantum state is a product of the individual quantum entangled states. From here onwards, a photon in region-$a$ is labeled as photon-1 and a photon in region-$b$ is labeled as photon-2. \section{Imaging with hyper-entangled photons} In a realistic experiment, a source emits entangled photons in particular directions.
To incorporate this directional dependence, the quantum state is multiplied by an additional directional function such that the highest probability of finding photons is around a point $o_{a}$ in region-$a$ and $o_{b}$ in region-$b$. Let $\psi_{a}(x_{1},y_{1}, z_{1})$ and $\psi_{b}(x_{2},y_{2}, z_{2})$ be the envelopes of the wavefunctions for a photon around $o_{a}$ and a photon around $o_{b}$, respectively. From these considerations, a total symmetric quantum state of photons can be written as \begin{equation}\label{eq6} |\Psi\rangle_{s}=\frac{1}{\sqrt{2}}[\psi_{a}(x_{1},y_{1}, z_{1})\psi_{b}(x_{2},y_{2}, z_{2}) \pm \\ \psi_{a}(x_{2},y_{2}, z_{2})\psi_{b}(x_{1},y_{1}, z_{1})]a_{s}\Phi_{12}(r_{a};r_{b})\otimes\frac{1}{\sqrt{2}} \left(|H\rangle_1|V\rangle_2+e^{i\phi}|V\rangle_1|H\rangle_2\right) \end{equation} where the phase $\phi$ is not arbitrary; it is either zero or $\pi$. The total quantum state is symmetrised with a plus sign if $\phi=0$. For $\phi=\pi$, the polarisation entangled state is antisymmetric after the exchange of photons; therefore, the quantum state in the external degrees of freedom has to be antisymmetric. Here, $a_{s}$ is a normalisation constant. In the experiment, a phase difference $\phi$ between the $|H\rangle$ and $|V\rangle$ quantum states of a photon is introduced either by placing a half-wave plate in the path of a photon or by tilting the nonlinear crystal source of entangled photons. In this paper, an antisymmetric polarisation entanglement is produced by the source. Since the photons propagate in different regions $a$ and $b$ such that $\psi_{a}(x_{1},y_{1}, z_{1})$ and $\psi_{b}(x_{2},y_{2}, z_{2})$ are non-overlapping, the second term, $\psi_{a}(x_{2},y_{2}, z_{2})\psi_{b}(x_{1},y_{1}, z_{1})a_{s}\Phi_{21}(r_{a};r_{b})$, in Eq.~\ref{eq6} is negligible.
To form an image with a hyper-entangled state, consider photon-$2$ in region-$b$ passed through a transparent polarisation sensitive phase pattern oriented perpendicular to the $z$-axis, as shown in Fig.~\ref{fig1}. The pattern introduces a position-dependent phase difference between the horizontal and vertical polarisations of a photon passing through it, such that $|H\rangle_{2}\rightarrow e^{i\phi(x_{2}, y_{2})}|H\rangle_{2} $ and $|V\rangle_{2}\rightarrow |V\rangle_{2} $. In the experiment, the phase difference $\phi(x_{2},y_{2})$ is either zero or $\pi$. Consider photon-2 of a hyper-entangled state, with an antisymmetric polarisation entangled state $|\Psi^-\rangle_{p}=\frac{1}{\sqrt{2}}\left(|H\rangle_1|V\rangle_2-|V\rangle_1|H\rangle_2\right)$, incident on the pattern. A polarisation dependent phase is imprinted on photon-2 in region-$b$. Assume that the pattern is located at $z=d_{2}$ in region-$b$. An arbitrary location of photon-2 on the pattern just after the phase imprint is represented by coordinates ($x_{2}$, $y_{2}$) in an $x$-$y$ plane. An arbitrary location of photon-1 in a plane oriented perpendicular to the $z$-axis at $z=-d_{1}$ in region-$a$ is represented by coordinates ($x_{1}$, $y_{1}$). Therefore, after the phase imprint and normalisation, the quantum state of photons is written as \begin{equation}\label{eq7} |\Psi\rangle_{I}=a_{s}\psi_{a}(x_{1},y_{1}, -d_{1})\psi_{b}(x_{2},y_{2}, d_{2})\Phi_{12}(r_{a};r_{b}) \otimes\frac{1}{\sqrt{2}} \left(|H\rangle_1|V\rangle_2-e^{i\phi (x_{2}, y_{2})}|V\rangle_1|H\rangle_2\right) \end{equation} The transparent polarisation sensitive phase pattern is imprinted in the phase of the polarisation entangled state, and the spread of a photon on the pattern is due to the momentum entangled part of the hyper-entangled state. Photons are also spatially correlated due to their spatial entanglement in position space.
The position dependent polarisation entangled state of photons is $|\Psi^-\rangle_{p}$ for $\phi (x_{2}, y_{2})=0$ and $|\Psi^+\rangle_{p}=\frac{1}{\sqrt{2}}\left(|H\rangle_1|V\rangle_2+|V\rangle_1|H\rangle_2\right)$ for $\phi (x_{2}, y_{2})=\pi$. The location and polarisation information of each photon is carried by the other photon because of their quantum entanglement. After the imprint of the pattern, the position coordinates ($x_{2}$, $y_{2}$) at $z=d_{2}$ of photon-$2$ are measured by imaging with a lens on a single photon sensitive camera after its polarisation selection by polariser-$2$, as shown in Fig.~\ref{fig1}. The camera registers photon-$2$ only if photon-$1$ is detected by a single photon detector after passing through polariser-$1$. Consider first the case where the lens $L_{o}$ is not placed, a narrow aperture single photon detector is placed at a location $(x_{o1},y_{o1}, z=-d_{1})$, and the orientation of the pass axis of polariser-$1$ is vertical while that of polariser-$2$ is horizontal. This measurement setting corresponds to a coincidence detection when photon-1 is detected at $(x_{o1},y_{o1}, z=-d_{1})$ in the vertical polarisation and photon-$2$ is detected in the horizontal polarisation state on the camera. The same experiment is repeated for a chosen measurement setting many times; the location of photon-$2$ on the camera varies each time, and gradually an image is formed on the camera. The corresponding probability of coincidence detection is written as $P_{V_{1},H_{2}}(x_{2}, y_{2})=\frac{1}{2}| a_{s}\psi_{a}(x_{o1},y_{o1}, -d_{1})\psi_{b}(x_{2},y_{2}, d_{2})\Phi_{12}(r_{a};r_{b})(-e^{i\phi (x_{2}, y_{2})})|^{2}$, where the subscripts of $P_{V_{1},H_{2}}$ denote the orientations of the pass axes of polariser-$1$ and polariser-$2$. It is clear that the phase information is lost for this polarisation setting. Similarly, the polarisation sensitive phase information cannot be recovered for other combinations of orientations of the polarisers along the horizontal and vertical directions.
Let the orientations of the pass axes of the polarisers now be changed to perform polarisation measurements in the diagonal basis, that is, $|d^{+}\rangle_{j}=\frac{|H\rangle_{j}+|V\rangle_{j}}{\sqrt{2}}$ and $|d^{-}\rangle_{j}=\frac{|H\rangle_{j}-|V\rangle_{j}}{\sqrt{2}}$, where the photon label $j$ is $1$ or $2$. If both polarisers are oriented to pass quantum states $|d^{-}\rangle_{j}$ and photon-$1$ is detected at a location $(x_{o1},y_{o1}, z=-d_{1})$, then the corresponding probability of coincidence detection is \begin{equation}\label{eq8} P_{d^{-}_{1},d^{-}_{2}}(x_{2},y_{2})=\frac{1}{4}| a_{s}\psi_{a}(x_{o1},y_{o1}, -d_{1})\psi_{b}(x_{2},y_{2}, d_{2}) \Phi_{12}(r_{a};r_{b})|^{2} (1-\cos\phi(x_{2},y_{2})) \end{equation} which contains the phase information of the pattern. Since $\phi(x_{2},y_{2})$ is either $\pi$ or zero, $P_{d^{-}_{1},d^{-}_{2}}(x_{2},y_{2})$ is a two-level image; an image of the transparent polarisation sensitive phase pattern is thus obtained by measuring the coincidence detection probability. If, on the other hand, polariser-$1$ is aligned to pass $|d^{+}\rangle$ and polariser-$2$ is aligned to pass $|d^{-}\rangle$, then the corresponding probability of coincidence detection is written as \begin{equation}\label{eq9} P_{d^{+}_{1},d^{-}_{2}}(x_{2},y_{2})=\frac{1}{4}| a_{s}\psi_{a}(x_{o1},y_{o1}, -d_{1})\psi_{b}(x_{2},y_{2}, d_{2}) \Phi_{12}(r_{a};r_{b})|^{2} (1+\cos\phi(x_{2},y_{2})) \end{equation} This image is exactly inverted as compared to Eq.~\ref{eq8}, where the maximum and minimum levels of the image are interchanged. To make a measurement on photon-$1$ in a quantum superposition basis of position $(x_{1}, y_{1})$, a convex lens $L_{o}$ is placed at $z=-d_{1}$ and photon-$1$ is detected close to its focal point by a narrow aperture single photon detector placed on the $z$-axis. The lens transforms a particular quantum superposition of position (or an incident momentum state) in a plane at $z=-d_{1}$ to a single point in its focal plane.
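The polarisation factors $(1\mp\cos\phi)/4$ in Eqs.~\ref{eq8} and \ref{eq9} can be reproduced with a short two-qubit calculation (our sketch, ignoring the common spatial amplitude, which only sets an overall prefactor):

```python
import numpy as np

H = np.array([1.0, 0.0], dtype=complex)
V = np.array([0.0, 1.0], dtype=complex)
d_plus = (H + V) / np.sqrt(2)
d_minus = (H - V) / np.sqrt(2)

def coincidence(phi, pol1, pol2):
    # polarisation part of Eq. (eq7): (|HV> - e^{i phi} |VH>)/sqrt(2)
    state = (np.kron(H, V) - np.exp(1j * phi) * np.kron(V, H)) / np.sqrt(2)
    amp = np.vdot(np.kron(pol1, pol2), state)
    return abs(amp) ** 2

for phi in (0.0, np.pi / 3, np.pi / 2, np.pi):
    # Eq. (eq8): both polarisers pass |d->  ->  (1 - cos phi)/4
    assert abs(coincidence(phi, d_minus, d_minus) - (1 - np.cos(phi)) / 4) < 1e-12
    # Eq. (eq9): polariser-1 passes |d+>, polariser-2 passes |d->  ->  (1 + cos phi)/4
    assert abs(coincidence(phi, d_plus, d_minus) - (1 + np.cos(phi)) / 4) < 1e-12
    # H/V settings are insensitive to phi: the phase information is lost
    assert abs(coincidence(phi, V, H) - 0.5) < 1e-12
```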
Therefore, the probability of coincidence detection of photons for the two different orientations $(d^{+}_{1},d^{-}_{2})$ and $(d^{-}_{1},d^{-}_{2})$ of the polarisers is succinctly written as \begin{equation}\label{eq10} P'_{d^{\pm}_{1},d^{-}_{2}}(x_{2},y_{2})=\frac{1}{4} |\int^{\infty}_{-\infty}\int^{\infty}_{-\infty} a_{s}\psi_{a}(x_{1},y_{1}, -d_{1}) \psi_{b}(x_{2},y_{2}, d_{2})\Phi_{12}(r_{a};r_{b}) \mathrm{d}x_{1} \mathrm{d}y_{1}|^{2} (1\pm\cos\phi(x_{2},y_{2})) \end{equation} It is evident that opposite level images are formed when the orientation of the polarisers is changed from $(d^{-}_{1},d^{-}_{2})$ to $(d^{+}_{1},d^{-}_{2})$. However, no image is formed if only photon-2 is detected without considering the measurement outcome of photon-$1$, or if photon-$1$ is not measured at all. \section{Experiment and discussion} A schematic diagram of the experimental setup is shown in Fig.~\ref{fig2}. Hyper-entangled photons are produced by type-II spontaneous parametric down conversion (SPDC) \cite{mon1, mon2, souto, howell, stch, mandel_th} in a beta-barium-borate (BBO) nonlinear crystal. A linearly polarised pump laser beam of wavelength 405~nm and narrow beam diameter is passed through a BBO crystal. Hyper-entangled photons are produced at wavelength 810~nm in a first crystal, and a second crystal is placed to compensate transverse and longitudinal walk-offs. A polarisation sensitive phase pattern is produced by a reflection type spatial light modulator (SLM). After collimation, photon-$2$ is reflected from the spatial light modulator and a position-dependent phase is imprinted such that $|H\rangle_{2}\rightarrow e^{i\phi(x_{2},y_{2})}|H\rangle_{2}$ and $|V\rangle_{2}\rightarrow|V\rangle_{2}$. Photon-$2$ is imaged on an intensified-charge-coupled-device (ICCD) camera, after passing through polariser-$2$, with a three lens ($L_{1}$, $L_{2}$ and $L_{3}$) telescopic configuration.
The telescope is aligned such that the object plane coincides with the reflecting surface of the spatial light modulator. In this way, the location of photon-2 immediately after the phase imprint is measured by imaging it on the camera. The focal length of lens $L_{1}$ of diameter 5~cm is 40~cm. The focal lengths of lenses $L_{2}$ and $L_{3}$ are 10~cm and 100~cm, respectively. Photon-$1$ is passed through polariser-$1$ to perform a polarisation measurement and through a lens $L_{o}$ to perform a measurement in the quantum superposition basis of position, followed by detection by a narrow area single photon detector of very low dark counts. These are joint measurements. The pass axis of polariser-$1$ subtends an angle $\delta_{1}$, and that of polariser-$2$ an angle $\delta_{2}$, with the horizontal axis. These angles can be varied independently according to a measurement setting. An electrical signal produced by the single photon detector corresponds to a measurement outcome for a chosen measurement setting for photon-$1$. This signal is connected to a direct gate terminal of the ICCD camera. The direct gate terminal activates an electronic shutter of the camera that allows the detection signal of photon-$2$ produced by the imaging sensor of the camera to reach the charge-coupled-device (CCD) sensor of the camera after amplification. This process efficiently detects a coincidence event of photons with a 20~ns insertion delay. The distance of the imaging sensor surface of the ICCD camera from the spatial light modulator is 16.91~m. The spatial light modulator is placed at a distance of 89~cm from the first BBO crystal. The first lens $L_{1}$ of the telescope is placed at a distance of 15.24~m from the spatial light modulator surface. This lens produces an inverted real image of the spatial light modulator surface, which acts as a real object for the second lens $L_{2}$. The second lens is placed such that it produces a magnified virtual image in a virtual image plane, as shown in Fig.~\ref{fig2}.
A third lens $L_{3}$ of focal length $f$~=~100~cm is positioned such that the photon detection plane of the ICCD camera and the virtual image plane lie at $+2f$ and $-2f$, respectively, from lens $L_{3}$. In this $2f$-$2f$ imaging configuration, the image can be fine-tuned on the ICCD camera by displacing only the second lens $L_{2}$ while keeping the other lenses stationary. The image demagnification of the telescope is 0.52 and its spatial resolution at the spatial light modulator surface is about 0.3~mm. In the experiment, the feature size of the pattern is 1~mm, and it is imaged by the ICCD camera from a distance of 16.91~m. \begin{figure} \centering \includegraphics[scale=1.25]{figures/fig_2.eps} \caption{\label{fig2} \emph{Schematic diagram of the experiment. Hyper-entangled photons are produced by a BBO crystal. A polarisation sensitive phase pattern is produced by an SLM. The pattern is imaged in free space by an ICCD camera, located 16.91~m from the pattern, by a lens combination in a telescopic configuration. The ICCD camera is activated by the measurement outcome signal of photon-1 to register photon-2 and form a quantum image.}} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.35]{figures/fig_3.eps} \caption{\label{fig3} \emph{(a) A transparent polarisation sensitive phase pattern; a darker region represents $\phi(x_{2},y_{2})=0$ and a lighter region represents $\phi(x_{2},y_{2})=\pi$. (b) An image captured by the ICCD camera when the measured polarisation states of photon-1 and photon-2 are $|d^{-}\rangle_{1}$ and $|d^{-}\rangle_{2}$. (c) An image with inverted levels, when the measured polarisation state of photon-1 is $|d^{+}\rangle_{1}$ and of photon-2 is $|d^{-}\rangle_{2}$.
Each image is captured with a ten-minute exposure.}} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=0.3]{figures/fig_4.eps} \caption{\label{fig4} \emph{ (a) A transparent polarisation sensitive phase pattern with squares of size 1~mm$\times$1~mm; a darker region represents $\phi(x_{2},y_{2})=0$ and a lighter region represents $\phi(x_{2},y_{2})=\pi$. (b) An image of the pattern captured by the ICCD camera when the measured polarisation states of photon-1 and photon-2 are $|d^{-}\rangle_{1}$ and $|d^{-}\rangle_{2}$. (c) When the measured polarisation state of photon-1 is $|V\rangle_{1}$ and of photon-2 is $|d^{-}\rangle_{2}$; in this case, only the edges of the squares appear, owing to the diffraction limit of the telescope. (d) No image is formed when the measured polarisation state of photon-1 is $|H\rangle_{1}$ and of photon-2 is $|d^{-}\rangle_{2}$.}} \end{figure} Consider photon-$j$, detected after passing through polariser-$j$ with its pass axis inclined at an angle $\delta_{j}$ \emph{w.r.t.} the horizontal axis. Its measured state of polarisation is $|H\rangle_{j}$ for $\delta_{j}=0$, $|V\rangle_{j}$ for $\delta_{j}=90^{o}$, $|d^{+}\rangle_{j}$ for $\delta_{j}=45^{o}$ and $|d^{-}\rangle_{j}$ for $\delta_{j}=-45^{o}$. The polarisation state of photon-$2$ is measured once it is detected by the ICCD camera at any location after passing through polariser-$2$. If the single photon detector detects photon-$1$ after it passes through polariser-$1$ but without the lens $L_{o}$, then this corresponds to a position measurement $(x_{01}, y_{01})$ of photon-$1$ in the plane at $z=-d_{1}$, along with an outcome of its measured state of polarisation. On the other hand, if the single photon detector detects photon-$1$ after it passes through the polariser and the lens $L_{o}$, then this corresponds to a measurement in the quantum superposition basis of position, along with an outcome of the measured state of polarisation of photon-$1$.
For this measurement setting, the coincidence photon detection probability is given by Eq.~\ref{eq10}. A transparent polarisation sensitive phase pattern shown in Fig.~\ref{fig3}(a) is displayed on the spatial light modulator, where the position-dependent grey level of the pattern determines a phase shift, which is either zero or $\pi$. Lighter grey regions represent a $\pi$ phase shift; since the source produces an antisymmetric polarisation entangled state, the polarisation entanglement of the photons, after photon-$2$ passes through such a region, is transformed to $|\Psi^{+}\rangle_{p}=\frac{1}{\sqrt{2}} (|H\rangle_1|V\rangle_2+|V\rangle_1|H\rangle_2)$. Darker grey regions represent a zero phase shift, and the polarisation entangled state of the photons remains the same as produced by the source, that is, $|\Psi^{-}\rangle_{p}=\frac{1}{\sqrt{2}} (|H\rangle_1|V\rangle_2-|V\rangle_1|H\rangle_2)$. Therefore, the information of the two-photon quantum image is carried by the momentum and polarisation entanglement parts of $|\Psi\rangle_{I}$. In the first experiment, to produce a quantum image, both polarisers are aligned at $\delta_{1}=-45^{o}$ and $\delta_{2}=-45^{o}$ to detect the same state of polarisation of the photons in the diagonal basis, that is, $|d^{-}\rangle_{1}$ of photon-$1$ and $|d^{-}\rangle_{2}$ of photon-$2$. Photon-1 is also measured in the quantum superposition basis of position. The experiment is repeated for this setting and an image of the pattern, in the form of gated photon counts, is accumulated over a 10~minute exposure of the ICCD camera. A background image is captured by displaying a uniform grey level corresponding to $\phi=0$ on the spatial light modulator with the same experimental settings. A coincidence photon image is constructed by subtracting the corresponding background image, as shown in Fig.~\ref{fig3}(b). The ICCD camera also captures accidental counts in the coincidence window.
These accidental counts originate from scattered photons and camera noise, and they accumulate together with the actual coincidence photon counts. A typical ratio of coincidence to accidental counts is 0.35-0.45, which is why a background correction is necessary. The noise of the accidental photon counts in the background image, defined as the ratio of the standard deviation to the mean of the counts over twenty-five hundred pixels, is typically 0.12. All images shown in this paper are background corrected. Each image on the ICCD camera is reduced by the 0.52 image demagnification. In a similar experiment, polariser-$1$ is aligned at $\delta_{1}=+45^{o}$ to detect photon-$1$ in $|d^{+}\rangle_{1}$ and polariser-2 is kept at $\delta_{2}=-45^{o}$ to detect photon-$2$ in $|d^{-}\rangle_{2}$. The coincidence image photon counts accumulated by the ICCD camera are shown in Fig.~\ref{fig3}(c), which corresponds to an image with inverted levels as compared with the image shown in Fig.~\ref{fig3}(b). This result is in agreement with Eq.~\ref{eq10}, which predicts an inversion of image levels in the diagonal basis. These images are two-level images corresponding to the two-level phase shift of the pattern. \begin{figure*}[ht] \centering \includegraphics[scale=0.35]{figures/fig_5.eps} \caption{\label{fig5} \emph{(a) Edges of the pattern shown in Fig.~\ref{fig3}(a) appear when the measured polarisation state of photon-1 is $|V\rangle_{1}$ and of photon-2 is $|d^{-}\rangle_{2}$. (b) No image is formed when the measured polarisation state of photon-1 is $|H\rangle_{1}$ and of photon-2 is $|d^{-}\rangle_{2}$. (c) A single photon image of low depth captured by the ICCD camera. Edges of the pattern appear as darker regions of low coincidence photon counts due to the resolution limit of the telescope.}} \end{figure*} In another experiment, a different transparent polarisation sensitive phase pattern, shown in Fig.~\ref{fig4}(a), is displayed on the spatial light modulator.
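The level inversion between Figs.~\ref{fig3}(b) and (c) follows directly from the $(1\pm\cos\phi)$ factor in Eq.~\ref{eq10}. A minimal numerical sketch with a hypothetical binary mask (not the actual pattern):

```python
import numpy as np

# Hypothetical binary phase mask mimicking the SLM pattern: phi = 0 or pi
phi = np.pi * np.array([[0, 1, 0, 1],
                        [1, 0, 1, 0]])

# Envelope of Eq. (10): coincidence rate proportional to (1 +/- cos phi)/4
img_plus = (1 + np.cos(phi)) / 4.0   # polarisers at (d1+, d2-)
img_minus = (1 - np.cos(phi)) / 4.0  # polarisers at (d1-, d2-)

# The two settings yield level-inverted (complementary) images
print(np.allclose(img_plus + img_minus, 0.5))  # True
```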
The grey levels corresponding to the position-dependent phase shifts are the same as in the previous pattern. A background-corrected image of this pattern, shown in Fig.~\ref{fig4}(b), is accumulated with the ICCD camera over a 10~minute exposure in the diagonal basis, for $\delta_{1}=\delta_{2}=-45^{o}$ and with the lens $L_{o}$. This image is then compared with another background-corrected image captured with a different setting of polariser-$1$, $\delta_{1}=+90^{o}$, as shown in Fig.~\ref{fig4}(c), keeping all other settings the same. According to the theoretical analysis of the previous section, the coincidence probability corresponding to a measurement of the $|H\rangle$ or $|V\rangle$ state of photon-$1$ does not contain the phase information of the pattern; therefore, no image should be formed. A measurement of the photon-$1$ polarisation state in $|V\rangle_{1}$ collapses the photon-$2$ polarisation state onto $|H\rangle_{2}$, and this polarisation state is transformed by the spatial light modulator such that $|H\rangle_{2}\rightarrow e^{i\phi(x_{2},y_{2})}|H\rangle_{2}$. This is a pure phase pattern and it diffracts photons. However, the telescope is located at a large distance, 15.24~m, from the pattern, and its spatial resolution at this distance is about 0.3~mm, whereas the size of each square in the pattern is 1~mm$\times$1~mm. Due to the finite lens diameter, the telescope does not capture all the diffracted photons carrying the information of the sharp edges, which leads to the fundamental diffraction limit of resolution. Thus, the edges or phase boundaries in the pattern lead to low two-photon coincidence counts, and the edges appear as low-level regions in the image. If all diffracted photons were also captured, no image of the pattern would result. This effect matches all the sharp edges exactly, as seen by comparing the image shown in Fig.~\ref{fig4}(c) with the corresponding image shown in Fig.~\ref{fig4}(b).
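The quoted resolution of about 0.3~mm at the spatial light modulator plane is consistent with the Rayleigh criterion applied to lens $L_{1}$ (diameter 5~cm) at 15.24~m and $\lambda=810$~nm; a minimal consistency check:

```python
wavelength = 810e-9  # down-converted photon wavelength (m)
D = 0.05             # diameter of telescope lens L1: 5 cm (m)
L = 15.24            # distance from the SLM to lens L1 (m)

# Rayleigh criterion: minimum resolvable angle ~ 1.22 * lambda / D,
# so the smallest resolvable feature in the SLM plane is
resolution = 1.22 * wavelength * L / D
print(round(resolution * 1e3, 2))  # ~0.3 (mm), consistent with the quoted value
```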
Another image is captured for the setting $\delta_{1}=0$, to detect photon-$1$ in $|H\rangle_{1}$, keeping all other settings the same as for the image shown in Fig.~\ref{fig4}(b). Since this setting collapses the polarisation state of photon-$2$ to $|V\rangle_{2}$, which remains unchanged by the spatial light modulator, no pattern is formed on the ICCD camera, as shown in Fig.~\ref{fig4}(d). A similar experiment is repeated with $\delta_{1}=90^{o}$, to detect $|V\rangle_{1}$, and $\delta_{2}=-45^{o}$, to detect $|d^{-}\rangle_{2}$, for the pattern shown in Fig.~\ref{fig3}(a). The corresponding background-corrected image is shown in Fig.~\ref{fig5}(a), where the sharp edges appear due to the diffraction limit of the telescope. As explained, for $\delta_{1}=0$ and $\delta_{2}=-45^{o}$, no pattern is formed, as shown in Fig.~\ref{fig5}(b). A single photon background-corrected image of the same pattern is shown in Fig.~\ref{fig5}(c). In this case, photon-1 is not measured and the ICCD camera shutter is kept open for a continuous ten-minute exposure of photon-2. This image has low depth and only the sharp edges appear, because of the diffraction limit of the telescope. \section{Conclusion} This paper presents quantum imaging experiments on transparent polarisation sensitive phase patterns with hyper-entangled photons. The paper begins with the main concept of the experiment and then shows how a hyper-entangled state arises from the indistinguishability of identical photons. Furthermore, a detailed theoretical analysis of the quantum imaging is presented. In the experiment, hyper-entangled photons are produced by a type-II SPDC source and each polarisation sensitive phase pattern is imaged from a distance of 16.91~m with the ICCD camera. In this type of coincidence imaging, both the polarisation entanglement and the momentum entanglement of the photons are involved.
Images of the patterns are constructed, after background correction, by accumulating coincidence photons on the ICCD camera for different measurement settings for both photons. In the diagonal basis, the image levels are inverted if the measured outcome of the polarisation state of photon-$1$ is changed from $|d^{-}\rangle_{1}$ to $|d^{+}\rangle_{1}$. Edges of the pattern are not captured because of the diffraction limit of the telescope; consequently, low-level regions corresponding to the edges appear in a two-photon captured image, as shown experimentally in detail. If photon-$1$ is measured in the quantum state $|H\rangle_{1}$, then photon-$2$ collapses to the quantum state $|V\rangle_{2}$, which is unaffected by the spatial light modulator, and this leads to no pattern in the image. A single photon accumulation of photon-$2$ on the camera by continuous exposure, without considering photon-$1$, produces a shallow image containing partial information of the pattern, due to the diffraction limit of the telescope. \section*{Methods} \subsection{Experimental setup} Hyper-entangled photons are produced by a type-II SPDC process in the first BBO crystal, which is a negative uniaxial crystal. A focused pump laser beam of extraordinary polarisation is incident on the crystal. The crystal is tuned such that the emission cones of the ordinary and extraordinary polarised photons intersect. Hyper-entanglement is produced in the intersection regions. Down converted photons at wavelength 810~nm are passed through a half-wave plate to interchange their linear polarisations. A second crystal of half the thickness is placed parallel to the first crystal with the same orientation of its optic axis. This crystal compensates for the transverse and longitudinal walk-offs of the down converted photons. The source is aligned to produce an antisymmetric polarisation entangled state $|\Psi^-\rangle_{p}$ and a momentum entangled state.
In a position basis, the momentum entangled photons are spatially entangled. Momentum entanglement is a result of the linear momentum conservation of the photons, which are polarisation entangled and momentum entangled separately. Photon-$1$ and photon-$2$ of an entangled pair are passed through two polarisers aligned with their pass axes subtending angles $\delta_{1}$ and $\delta_{2}$ with the horizontal direction. The photons transmitted by the polarisers are passed through bandpass filters with peak transmission at 810~nm. To check the polarisation entanglement, the photons are then detected by single photon detectors, where the single photon counts of each detector and the coincidence photon counts are measured by a coincidence event counting module. For imaging the pattern, however, the experimental setup is as shown in Fig.~\ref{fig2}. \subsection{Violation of CHSH inequality} Einstein's locality condition strictly forbids faster-than-light influence. Consider two polarisation entangled photons separated by a large distance. If the locality condition is true, any measurement performed on one photon should not immediately affect the measurement outcome of the other photon. Consider two different observables $A_{1}$ and $A'_{1}$ of photon-$1$, of which one is measured for a given photon pair. Similarly, one of the two different observables $B_{2}$ and $B'_{2}$ of photon-$2$ is measured for a photon pair. A measured observable outcome corresponds to a polarisation measurement of a photon; the outcome corresponds to one of two orthogonal polarisations, with measured value $+1$ for one polarisation and $-1$ for the other. In the local realistic model, the values of all such observables are well-defined prior to any measurement.
Consider a correlation function where each observable can take a value $+1$ or $-1$; therefore \begin{equation} \label{eqm1} C_{l}= (A_{1}+A'_{1})B'_{2}+(A_{1}-A'_{1})B_{2}=\pm2 \end{equation} and its expectation value cannot exceed the bound $|\langle C_{l}\rangle|\leq2$. However, quantum mechanical observables cannot always have a well-defined value, and incompatible observables cannot be measured simultaneously. An observable value is obtained by a measurement as a possible outcome that depends on the quantum state and on the observable being measured. Quantum mechanics violates the local realistic prediction, expressed in quantum observables such that \begin{eqnarray} \label{eqm2} S=\langle (\hat{A}_{1}+\hat{A}'_{1})\hat{B}'_{2}+(\hat{A}_{1}-\hat{A}'_{1})\hat{B}_{2} \rangle =\langle \hat{A}_{1}\hat{B}_{2}\rangle-\langle\hat{A}'_{1}\hat{B}_{2}\rangle+\langle\hat{A}_{1}\hat{B}'_{2}\rangle+\langle\hat{A}'_{1}\hat{B}'_{2}\rangle \end{eqnarray} Therefore, the following Clauser, Horne, Shimony and Holt (CHSH) inequality is violated by quantum mechanics \cite{chsh_th} \begin{equation} 0\leq|S|\leq 2 \end{equation} The expectation value of each product of observables in Eq.~\ref{eqm2} can be evaluated from the coincidence photon counts $C(\delta_{1}, \delta_{2})$, measured for the corresponding orientation of the pass axes of the polarisers, together with the coincidence photon counts $C(\delta^{\perp}_{1}, \delta^{\perp}_{2})$, where $\delta^{\perp}_{j}=\delta_{j}+90^{o}$ \cite{chsh}. The CHSH inequality is violated for the polarisation entanglement. A plot of the coincidence photon count measurements is shown in Fig.~\ref{mtfig_1} for different orientations of the polarisers. The CHSH correlation parameter $S$ of Eq.~\ref{eqm2}, expressed in terms of the pass-axis angles of the polarisers, is \begin{equation} \label{eqm3} S=E(\delta_{1},\delta_{2})-E(\delta'_{1},\delta_{2})+E(\delta_{1},\delta'_{2})+E(\delta'_{1},\delta'_{2}) \end{equation} which is evaluated from the coincidence photon measurements.
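The evaluation of Eq.~\ref{eqm3} from count-based correlations (Eq.~\ref{eqm4}) can be sketched numerically. In this sketch the coincidence law $C\propto\sin^{2}(\delta_{1}-\delta_{2})$ is the ideal one for $|\Psi^{-}\rangle_{p}$, and the analyser angles are hypothetical choices that maximise $|S|$; the measured value reported below is $S=-2.66$.

```python
import math

def C(d1, d2):
    # Ideal coincidence law for the singlet: proportional to sin^2(d1 - d2)
    return math.sin(math.radians(d1 - d2)) ** 2

def E(d1, d2):
    # Count-based correlation estimator from the four polariser settings
    cpp, cmm = C(d1, d2), C(d1 + 90, d2 + 90)
    cpm, cmp = C(d1, d2 + 90), C(d1 + 90, d2)
    return (cpp + cmm - cpm - cmp) / (cpp + cmm + cpm + cmp)

# Hypothetical analyser angles (degrees) maximising |S| for this sign convention
d1, d1p, d2, d2p = 0.0, -45.0, 22.5, -22.5
S = E(d1, d2) - E(d1p, d2) + E(d1, d2p) + E(d1p, d2p)
print(round(S, 3))  # -2.828, i.e. |S| = 2*sqrt(2) > 2
```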
Here $E(\delta_{1},\delta_{2})$ denotes the expectation value of the joint observable measurement performed on both photons, which is written as \begin{equation} \label{eqm4} E(\delta_{1},\delta_{2})=\frac{C(\delta_{1}, \delta_{2})+C(\delta^{\perp}_{1}, \delta^{\perp}_{2})-C(\delta_{1}, \delta^{\perp}_{2})-C(\delta^{\perp}_{1}, \delta_{2})}{C(\delta_{1}, \delta_{2})+C(\delta^{\perp}_{1}, \delta^{\perp}_{2})+C(\delta_{1}, \delta^{\perp}_{2})+C(\delta^{\perp}_{1}, \delta_{2})} \end{equation} The measured value of the parameter is $S=-2.66$, whereas $0\leq|S|\leq 2$ for a local realistic model; thus the CHSH inequality is violated. The CHSH inequality is also violated when the coincidence photon measurements of the observables are taken with the ICCD camera gated by the single photon detector signal. \begin{figure} \centering \includegraphics[scale=0.6]{figures/fig_6.eps} \caption{\label{mtfig_1} \emph{Coincidence photon count measurements for $|\Psi^{-}\rangle_{p}$ at different orientations of the pass axes of the polarisers. The solid line is a fit of $\sin^{2}(\delta_{1}-\delta_{2})$ corresponding to the coincidence detection probability.}} \end{figure} \subsection{Spatial correlations} In the regions of intersection of the cones, the down converted photons diverge and are spatially correlated in the transverse planes as a consequence of their spatial entanglement. These correlations are a result of the momentum entanglement of the photons, which is represented by $\Phi_{12}(r_{a};r_{b})$ in Eq.~\ref{eq5}. Spatial entanglement correlates the locations of the photons. Due to the spread of the photon wavefunction, the whole pattern is exposed by a single photon. Once the location of a photon on the pattern is determined, this information is shared with the other photon through their spatial correlations; thus each photon carries the spatial location information of the other. Position correlations are measured in two planes oriented perpendicular to the mean direction of propagation of each photon.
A single-slit of width 0.8~mm is placed at a distance $d_{1}=1.247$~m, immediately in front of a convex lens $L_{o}$, where the lens is placed in front of a narrow-area single photon detector such that the detector lies almost at its focal point. The length of the slit is very large compared to its width. Polariser-$1$ and polariser-$2$ are removed from the paths of the photons and the remaining setup is the same as shown in Fig.~\ref{fig2}. \begin{figure*} \centering \includegraphics[scale=0.60]{figures/fig_7ab.eps} \caption{\label{mtfig_2} \emph{ (a) Vertical position measurements of the photons. Each vertical position of photon-$1$ is measured by passing it through a horizontally aligned narrow single slit placed in front of a lens $L_{o}$. The corresponding position of photon-$2$ is measured by the ICCD camera in the plane of the spatial light modulator. Each data circle represents the mean vertical positions of the photons corresponding to a particular position of the single slit. The straight line is a linear fit, which shows that the vertical positions of the photons are equal and opposite at equal distances from the BBO crystal. (b) Horizontal position measurements of the photons when the narrow single slit is aligned vertically and displaced horizontally. The straight line is a linear fit, which shows that the horizontal positions of the photons are equal and opposite at equal distances from the BBO crystal.}} \end{figure*} If the single-slit is oriented horizontally and photon-$1$ is passed through it and detected, then the vertical position of photon-1 is measured with a precision equal to the slit width, because photon-$1$ is detected only if it is transmitted by the slit. However, this measurement does not determine the horizontal position, because of the large length of the slit in the horizontal direction.
Similarly, the corresponding mean position of photon-$2$ is measured on the ICCD camera; this corresponds to the measured mean position of photon-$2$ on the spatial light modulator, placed at $d_{2}=0.89$~m, reduced by 0.52 due to the image demagnification of the telescope. The mean vertical positions of photon-$1$ and photon-$2$ are measured by repeating the experiment for different vertical displacements of the single-slit. The correlation between the mean vertical positions of photon-$1$ and photon-$2$ is shown in Fig.~\ref{mtfig_2}(a). Similar measurements are performed when the single-slit is oriented vertically and displaced horizontally. This setting performs a horizontal position measurement of photon-$1$ with a resolution equal to the slit width. The mean horizontal positions of photon-$1$ and photon-$2$ are measured by repeating the experiment for each horizontal displacement of the single-slit. The corresponding correlation of the mean positions of the photons is shown in Fig.~\ref{mtfig_2}(b). The telescope demagnification is 0.52 and the distances $d_{1}$ and $d_{2}$ are different in the position correlation plots. For $d_{1}=d_{2}$, and after taking the demagnification into account, the measured mean position of photon-$1$ is equal and opposite to the measured mean position of photon-$2$ in the transverse planes. \subsection{Classical and quantum imaging} Most imaging methods in everyday applications are based on classical imaging. These methods utilise properties of classical fields such as intensity, polarisation, phase, frequency and coherence. In the case of optical imaging, the classical fields are electromagnetic wave fields. If more than one electromagnetic wave field is involved, a classical image can be constructed by correlating their classical observable properties. This type of imaging technique, known as classical coincidence imaging, has many applications in biomedical imaging and astronomy.
Classical ghost imaging is also a coincidence imaging technique, where only one electromagnetic field interacts with the object and its intensity is measured position-wise, while the other electromagnetic wave is detected by a bucket detector. Most of these imaging methods can image absorptive objects only. A pure phase object is transparent and cannot be imaged by such methods. However, the method of phase-contrast imaging, which transforms a phase variation into an intensity variation across the object, can be used to image a pure phase object. A phase-contrast microscope can see transparent micro-organisms and transparent objects, and such a method can also image a Bose-Einstein condensate non-destructively. In addition, a transparent polarisation sensitive phase object can be imaged with a polarisation-contrast microscope \cite{pci1, pci2,pci3}, which measures the polarisation shift caused by the object. Quantum imaging methods, by contrast, are based on the quantum mechanical properties of photons originating from their polarisation quantum states, their quantum entanglement and quantum coherence. A quantum entanglement enhanced resolution microscope can image microscopic structures beyond the diffraction limit \cite{enh1}. Quantum image construction also depends on how measurements are performed on each photon, since incompatible observables cannot be measured simultaneously. In the experiments presented in this paper, two joint measurements of compatible observables are performed on each photon, since the photons are hyper-entangled in two different degrees of freedom. The first measurement is a polarisation measurement in a chosen polarisation basis, and the second corresponds to a measurement of the momentum of the non-interacting photon and of the position of the photon that interacted with the pattern. An individual photon carries no complete image information, and this method can be useful for sending images of a polarisation sensitive phase pattern securely and directly over a large distance.
The experiment shown in this paper is the first demonstration of quantum imaging with hyper-entangled photons over a long path. \subsection*{Acknowledgement} Mandip Singh acknowledges research funding by the Department of Science and Technology, Quantum Enabled Science and Technology grant for project No. Q.101 of theme title ``Quantum Information Technologies with Photonic Devices", {\bf{DST/ICPS/QuST/Theme-1/2019 (General)}}. \subsection*{Author contributions statement} MS conceptualised the idea and set up the experiment; both authors performed the experiment; MK took the data; MS wrote the manuscript and supervised the project.
\section{Introduction and main result}\label{s1} In this paper, we investigate the existence of standing waves $\psi(x,t)=e^{-\frac{iEt}{\hbar}}u(x)$, with $E\in \R$ and $u: \R^{N}\rightarrow\mathbb{C}$, of the time-dependent nonlinear Schr\"{o}dinger equation with an external electromagnetic field \begin{equation}\label{1.1} i \hbar \frac{\partial \psi}{\partial t}=\Big(\ds\frac{\hbar}{i}\nabla -A(x)\Big)^{2}\psi+G(x)\psi-f(x,\psi),\,\,x\in \R^{N}, \end{equation} which arises in various physical contexts, such as nonlinear optics or plasma physics, where the interaction effect among many particles is simulated by introducing a nonlinear term (see \cite{ss}). The function $\psi(x,t)$ takes complex values, $\hbar$ is the Planck constant and $i$ is the imaginary unit. Here $A$ denotes a magnetic potential and the Schr\"{o}dinger operator is defined by $$ \Big(\ds\frac{\hbar}{i}\nabla-A(x)\Big)^{2}\psi: =-\hbar^{2}\Delta \psi-\frac{2\hbar}{i}A\cdot\nabla\psi +|A|^{2}\psi-\frac{\hbar}{i}\psi \,div A. $$ In general dimension, the magnetic field $B$ is a 2-form with $B_{k,j}=\partial_{j}A_{k}-\partial_{k}A_{j}$; in the case $N=3$, $B=curl\, A$. The function $G$ represents an electric potential. Assuming $f(x,e^{i \theta }u)=e^{i \theta}f(x,u)$ for $\theta\in \R^{1}$ and substituting the ansatz $\psi(x,t)=e^{-\frac{iEt}{\hbar}}u(x)$ into \eqref{1.1}, one is led to solve the complex semilinear elliptic equation \begin{equation}\label{1.2} \Big(\ds\frac{\hbar}{i}\nabla-A(x)\Big)^{2}u+(G(x)-E)u=f(x,u),\,\,\,\,x\in\R^{N}. \end{equation} For simplicity, let $V(x)=G(x)-E$ and assume that $V$ is strictly positive on the whole space $\R^{N}.$ The transition from quantum mechanics to classical mechanics can be formally described by letting $\hbar\rightarrow0$, and thus the existence of solutions for $\hbar$ small is of physical interest. Standing waves for $\hbar$ small are usually referred to as semi-classical bound states (see \cite{h}).
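For completeness, the passage from \eqref{1.1} to \eqref{1.2} is the following one-line computation: inserting $\psi=e^{-\frac{iEt}{\hbar}}u$ and using $f(x,e^{i\theta}u)=e^{i\theta}f(x,u)$,

```latex
i\hbar\,\partial_{t}\bigl(e^{-\frac{iEt}{\hbar}}u\bigr)
  = E\,e^{-\frac{iEt}{\hbar}}u
  = e^{-\frac{iEt}{\hbar}}\Big[\Big(\frac{\hbar}{i}\nabla-A(x)\Big)^{2}u+G(x)u-f(x,u)\Big],
```

and cancelling the factor $e^{-\frac{iEt}{\hbar}}$ yields $\big(\frac{\hbar}{i}\nabla-A(x)\big)^{2}u+(G(x)-E)u=f(x,u)$, which is \eqref{1.2}.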
When $A(x)\equiv 0$, problem \eqref{1.2} arises in various applications, such as chemotaxis, population genetics, chemical reactor theory, and the study of standing waves of certain nonlinear Schr\"{o}dinger equations. In recent years, a considerable amount of work has been devoted to the study of standing wave solutions of \eqref{1.2} with $A(x)\equiv 0$; among them, we refer to \cite{bl1,cps,df,df1,dn,fw,l1,l2,o,r,w}. Recently, Ao and Wei \cite{aw}, applying the localized energy method, obtained infinitely many positive solutions of \eqref{1.2} with a non-symmetric potential. By contrast, there are still relatively few papers which deal with the case $A(x)\not\equiv 0$, namely when a magnetic field is present. The first result on the magnetic nonlinear Schr\"{o}dinger equation is due to Esteban and Lions \cite{el}. They obtained the existence of standing waves of \eqref{1.2} for $\hbar$ fixed and for special classes of magnetic fields, by solving an appropriate minimization problem for the corresponding energy functional in the cases $N=2,3.$ In \cite{ct}, Cao and Tang constructed semiclassical multi-peak solutions of \eqref{1.2} with bounded vector potentials. In \cite{cs1}, using a penalization procedure, Cingolani and Secchi extended the result of \cite{cs} to the case of a vector potential $A$ that is possibly unbounded. The penalization approach was also used in \cite{bdp} by Bartsch, Dancer and Peng to obtain multi-bump semiclassical bound states for problem \eqref{1.2} with a more general nonlinear term $f(x,u)$. In \cite{k}, Kurata proved the existence of a least energy solution of \eqref{1.2} for $\hbar>0$ under a condition relating $V(x)$ and $A(x)$. In \cite{h,h1}, Helffer studied the asymptotic behavior of the eigenfunctions of Schr\"{o}dinger operators with magnetic fields in the semiclassical limit. See also \cite{b} for a generalization of these results and \cite{hs} for potentials which degenerate at infinity.
In \cite{lpw}, Li, Peng and Wang applied the finite-dimensional reduction method to obtain infinitely many non-radial complex-valued solutions of \eqref{1.2} with radial electromagnetic fields satisfying certain algebraic decay conditions. Liu and Wang \cite{lw} extended this result to weaker symmetry conditions. In \cite{pw}, Pi and Wang obtained multi-bump solutions of \eqref{1.2} with $\hbar=1$, $f(x,u)= |u|^{p-2}u$ and an electric potential satisfying suitable conditions, by applying the finite-dimensional reduction method. In this paper, inspired by \cite{aw,wy1}, our main idea is to use the Lyapunov-Schmidt reduction method. We point out that the only assumption we need is the non-degeneracy of the bump; we have no requirements on the structure of the nonlinearity. If $\hbar=1$, $A(x)=A_{0}+\epsilon \tilde{A}(x)$, $V(x)=1+\epsilon \tilde{V}(x)$ and $f(x,u)=f(u)$, then \eqref{1.2} reduces to the following complex problem $$ \Big(\ds\frac{\nabla}{i}-A_{0}-\epsilon \tilde{A}(x)\Big)^{2}u+(1+\epsilon \tilde{V}(x))u=f(u),~~~~~~u\in H^{1}(\R^{N},\mathbb{C}). $$ For simplicity of notation, in the sequel we denote $$ A_{\epsilon}(x)=A_{0}+\epsilon \tilde{A}(x)\,\,\,\,\,\text{and}\,\,\,\,V_{\epsilon}(x)=1+\epsilon \tilde{V}(x). $$ Then we are concerned with the following problem \begin{equation}\label{1.3} \Big(\ds\frac{\nabla}{i}-A_{\epsilon}(x)\Big)^{2}u+V_{\epsilon}(x)u=f(u),~~~~~~u\in H^{1}(\R^{N},\mathbb{C}).
\end{equation} In order to state our main result, we give the conditions imposed on $\tilde{A}(x)$, $\tilde{V}(x)$ and $f$: \\ $(A_{1})$ $\lim\limits_{|x|\rightarrow \infty} |\tilde{A}(x)|=0$; \\ $(A_{2})$ $\exists\, 0<\alpha_{1}<1$, $\lim\limits_{|x|\rightarrow \infty} |\tilde{A}(x)|^{2}e^{\alpha_{1}|x|}=+\infty$; \\ $(A_{3})$ $\exists\, 0<\alpha_{2}<1$, $\lim\limits_{|x|\rightarrow \infty} |div \tilde{A}(x)|^{2}e^{\alpha_{2}|x|}=+\infty$; \\ $(A_{4})$ $\lim\limits_{|x|\rightarrow \infty} |\nabla \tilde{A}(x)|=0$;\\ $(V_{1})$ $\tilde{V}(x)\in C(\R^{N},\R)$ and $\lim\limits_{|x|\rightarrow \infty} |\tilde{V}(x)|=0$; \\ $(V_{2})$ $\exists\, 0<\alpha_{3}<1$, $\lim\limits_{|x|\rightarrow \infty} |\tilde{V}(x)|e^{\alpha_{3}|x|}=+\infty$; \\ $(f_{1})$ $f : \mathbb{C} \rightarrow \mathbb{C}$ is of class $C^{1+\delta}$ for some $0 <\delta \leq 1$, and $f'(0)=0$;\\ $(f_{2})$ $f(e^{i\theta}u)=e^{i\theta}f(u)$ for $\theta\in \R^{1}$;\\ $(f_{3})$ the equation \begin{eqnarray}\label{1.4} \left\{ \begin{array}{ll} \ds -\Delta w+w=f(w),& w>0 ~\text{in}~ \mathbb{R}^{N}, \vspace{0.2cm}\\ \ds \lim_{|x|\rightarrow \infty}w(x)=0,& w(0)=\ds\max_{x\in \mathbb{R}^{N}}w(x), \end{array} \right. \end{eqnarray} has a non-degenerate solution $w$, i.e., $$ ker(\Delta-1+f'(w))\cap L^{\infty}(\R^{N})=span\Big\{\frac{\partial w}{\partial x_{1}},\ldots,\frac{\partial w}{\partial x_{N}}\Big\}. $$ In particular, $f(u)=|u|^{p-1}u$ satisfies $(f_{2})$.\\ Under the above assumptions, the spectrum of the linearized eigenvalue problem $$ \Delta \varphi -\varphi + f'(w)\varphi = \lambda\varphi, ~~\varphi \in H^{1}(\mathbb{R}^{N}) $$ admits the following decomposition $$ \lambda_{1}>\lambda_{2}>\ldots>\lambda_{n}>\lambda_{n+1}=0>\lambda_{n+2}, $$ where each eigenfunction corresponding to a positive eigenvalue $\lambda_{j}$ decays exponentially. These eigenfunctions will play an important role in our secondary Lyapunov-Schmidt reduction (see Section 3 below).
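The non-degeneracy in $(f_{3})$ transfers to the magnetic problem through a gauge identity. As a short sketch of the computation behind Remark~\ref{rem1.4} below, for a real-valued function $w$ one has

```latex
\Big(\frac{\nabla}{i}-A_{0}\Big)\bigl(e^{iA_{0}\cdot x}w\bigr)
  = e^{iA_{0}\cdot x}\Big(A_{0}w+\frac{\nabla w}{i}\Big)-A_{0}e^{iA_{0}\cdot x}w
  = e^{iA_{0}\cdot x}\,\frac{\nabla w}{i},
\qquad
\Big(\frac{\nabla}{i}-A_{0}\Big)^{2}\bigl(e^{iA_{0}\cdot x}w\bigr)
  = -\,e^{iA_{0}\cdot x}\Delta w,
```

so that, using $(f_{2})$, the equation $-\Delta w+w=f(w)$ holds if and only if $u=e^{iA_{0}\cdot x}w$ satisfies $\big(\frac{\nabla}{i}-A_{0}\big)^{2}u+u=f(u)$.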
\begin{rem}\label{rem1.4} It is easy to see that $w$ is a solution of \eqref{1.4} if and only if $e^{iA_{0}\cdot x}w$ is a solution of the following problem \begin{eqnarray}\label{ea0} \left\{ \begin{array}{ll} \ds \Big(\frac{\nabla}{i} -A_{0}\Big)^{2}u+u=f(u),& x\in\mathbb{R}^{N}, \vspace{0.2cm}\\ \ds \lim_{|x|\rightarrow \infty}|u(x)|=0,& |u(0)|=\ds\max_{x\in \mathbb{R}^{N}}|u(x)|, \end{array} \right. \end{eqnarray} from which and $(f_{3})$ we can deduce that \eqref{ea0} has a non-degenerate solution $e^{i \sigma+iA_{0}\cdot x}w,$ i.e. $$ ker\Big(-\Big(\frac{\nabla}{i}-A_{0}\Big)^{2}-1+f'(w)\Big)=span\Big\{\frac{\partial (e^{i \sigma+iA_{0}\cdot x}w)}{\partial x_{1}},\ldots,\frac{\partial (e^{i \sigma+iA_{0}\cdot x}w)}{\partial x_{N}},\frac{\partial (e^{i \sigma+iA_{0}\cdot x}w)}{\partial \sigma}\Big\}. $$ \end{rem} In the sequel, the Sobolev space $H^{1}(\mathbb{R}^{N})$ is endowed with the standard norm $$ \|u\|=\Bigl(\int|\nabla u|^{2}+|u|^{2}\Bigl)^{\frac{1}{2}}, $$ which is induced by the inner product $$ \left\langle u,v\right\rangle =\int(\nabla u\nabla v+uv). $$ Denote $\alpha=\min\{\alpha_{1},\alpha_{2},\alpha_{3}\}.$ Our main result of this paper is as follows: \begin{thm}\label{thm1.1} Assume that $(A_{1})$-$(A_{4})$, $(V_{1})$-$(V_{2})$ and $(f_{1})$-$(f_{3})$ hold. Then there exists $\epsilon_{0} > 0$ such that for every $0 < \epsilon < \epsilon_{0}$, problem \eqref{1.3} has infinitely many complex-valued solutions. \end{thm} In the following, we sketch the main idea of the proof of Theorem \ref{thm1.1}. We first introduce some notation. Let $\mu > 0$ be a large real number such that $w(x) \leq ce^{-|x|}$ for $|x| > \mu$, where the constant $c$ is independent of $\mu$. Now we define the configuration space $$ \Omega_{1} = \mathbb{R}^{N}, \quad \Omega_{m}:= \Bigl\{ \textbf{Q}_{m} = (Q_{1}, Q_{2},\ldots, Q_{m})\in\mathbb{R}^{mN}:~\min_{k\neq j}|Q_{k} - Q_{j} |\geq \mu \Bigl\} , \quad\forall m > 1.
$$ Let $w$ be the non-degenerate solution of \eqref{1.4} and let $m \geq1$ be an integer. Define the sum of $m$ spikes as $$ w_{Q_{j}}=w(x-Q_{j}),\,\xi_{j}=e^{i\sigma+iA_{0}\cdot(x-Q_{j})},\, z_{Q_{j}}=\xi_{j}w(x-Q_{j})\,\,\text{and}\,\,z_{ \textbf{Q}_{m}}=\sum_{j=1}^{m}z_{Q_{j}}, $$ where $\sigma\in [0, 2\pi]$. Define the operator $$ \mathcal {S}(u)=-\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}u-V_{\epsilon}(x)u+f(u). $$ Fixing $(\sigma,\textbf{Q}_{m}) = (\sigma,Q_{1},\ldots, Q_{m})\in [0,2\pi]\times\Omega_{m},$ we define the following functions as the approximate kernels: $$ D_{j,k}=\frac{\partial (e^{i\sigma+iA_{0}\cdot(x-Q_{j})}w_{Q_{j}})}{\partial x_{k}}\eta_{j}(x),~\text{for}~j=1,\ldots,m,k=1,\ldots,N$$ and $$ D_{j,N+1}=\frac{\partial (e^{i\sigma+iA_{0}\cdot(x-Q_{j})}w_{Q_{j}})}{\partial \sigma}\eta_{j}(x),j=1,\ldots,m, $$ where $\eta_{j}(x)=\eta(\frac{2|x-Q_{j}|}{\mu-1})$ and $\eta(t)$ is a cut-off function such that $\eta(t)=1$ for $|t|\leq1$ and $\eta(t)=0$ for $|t|\geq \frac{\mu^{2}}{\mu^{2}-1}$. Note that the support of $D_{j,k}$ is contained in $B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})$: indeed, $\eta_{j}(x)\neq0$ forces $\frac{2|x-Q_{j}|}{\mu-1}<\frac{\mu^{2}}{\mu^{2}-1}$, that is, $|x-Q_{j}|<\frac{\mu^{2}}{2(\mu+1)}$. Taking $z_{\textbf{Q}_{m}}$ as the approximate solution and performing the Lyapunov-Schmidt reduction, we can show that there exists a constant $\mu_{0}$ such that for $\mu\geq\mu_{0}$ and $\epsilon<c_{\mu}$, where $c_{\mu}$ is a constant depending on $\mu$ but independent of $m$ and $\textbf{Q}_{m}$, we can find a $\varphi_{\sigma,\textbf{Q}_{m}}$ such that $$ \mathcal {S}(z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}})=\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}, $$ and we can show that $\varphi_{\sigma,\textbf{Q}_{m}}$ is $C^{1}$ in $(\sigma,\textbf{Q}_{m}).$ This is done in Section 2.
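We remark that, since $\frac{\mu^{2}}{2(\mu+1)}<\frac{\mu}{2}$ while $\min_{k\neq j}|Q_{k}-Q_{j}|\geq\mu$ in $\Omega_{m}$, the approximate kernels attached to different spikes have disjoint supports, $$ supp\, D_{j,k}\cap supp\, D_{l,k'}=\emptyset\quad \text{for}~j\neq l, $$ so different spikes do not interact at the level of the approximate kernels; this simple observation will be useful whenever the equation is tested against $\bar{D}_{j,k}$.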
After that, for any $m$, we define a new function \begin{equation}\label{m} \mathcal {M}(\sigma,\textbf{Q}_{m})=J(z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}}) \end{equation} and maximize $\mathcal {M}(\sigma,\textbf{Q}_{m})$ over $[0,2\pi]\times \bar{\Omega}_{m}.$ At the maximum point of $\mathcal {M}(\sigma,\textbf{Q}_{m})$, we show that $c_{j,k}=0$ for all $j,k$; therefore the corresponding $z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}}$ is a solution of \eqref{1.3}. By the arguments above, there exists $\mu_{0}$ large such that for $\mu\geq \mu_{0}$, $\epsilon\leq c_{\mu}$ and any $m$, there exists a solution to \eqref{1.3} with $m$ spikes in $\Omega_{m}$. Since $m$ is arbitrary, there exist infinitely many spike solutions for $\epsilon< c_{\mu_{0}}$, which is independent of $m.$ There are two main difficulties in the maximization process. Firstly, we need to show that the maximum points do not go to infinity. Secondly, we have to detect the difference in the energy when the spikes move to the boundary of the configuration space. In the second step, we use an induction argument and compare the energy of $m$ spikes with that of $m+1$ spikes. A crucial estimate is Lemma 3.2, where we prove that the accumulated error can be controlled from step $m$ to step $m + 1$. To this end, we make a secondary Lyapunov-Schmidt reduction. This is done in Section 3. Compared with \cite{aw}, since there is a magnetic field in our problem, we have to overcome some new difficulties which involve many technical estimates. Our paper is organized as follows. In Section 2, we carry out the Lyapunov-Schmidt reduction. Then we perform a secondary Lyapunov-Schmidt reduction in Section 3. Finally, we prove our main result in Section 4. \textbf{Notations:} 1. We simply write $\int f$ to mean the Lebesgue integral of $f(x)$ in $\R^{N}.$ 2.
The complex conjugate of any number $z\in\mathbb{C}$ will be denoted by $\bar{z}$. 3. The real part of a number $z\in\mathbb{C}$ will be denoted by $Re z$. 4. The ordinary inner product between two vectors $a,b\in \R^{N}$ will be denoted by $a\cdot b$. {\bf Acknowledgements:} This paper was partially supported by NSFC (No.11301204; No.11371159), self-determined research funds of CCNU from the colleges' basic research and operation of MOE (CCNU14A05036). \section{Finite-dimensional reduction}\label{s2} In this section, we perform a finite-dimensional reduction. Let $\gamma\in(0, 1)$ and define \begin{equation}\label{2.1} E(\cdot):= \sum_{j=1}^{m} e^{-\gamma|\cdot-Q_{j}|},\,\,\,\,\text{where}\,\,\,\, \textbf{Q}_{m}\in\Omega_{m}. \end{equation} Consider the norm \begin{equation}\label{2.2} \|f\|_{*}= \sup_{x\in\mathbb{R}^{N}}| E(x)^{-1}f(x)|, \end{equation} which was first introduced in \cite{mpw} and also used in \cite{aw,wy1}. Now we investigate \begin{eqnarray}\label{2.3} \left\{ \begin{array}{ll} L(\varphi_{\sigma,\textbf{Q}_{m}}):=-\Bigl(\ds \frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}\varphi_{\sigma,\textbf{Q}_{m}}-V_{\epsilon}(x)\varphi_{\sigma,\textbf{Q}_{m}} +f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}} \vspace{0.2cm}\\ =h+\ds\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k},~in~\mathbb{R}^{N},\vspace{0.2cm}\\ \ds Re\int\varphi_{\sigma,\textbf{Q}_{m}}\bar{D}_{j,k}=0~for~j=1,\ldots,m,k=1,\ldots,N+1. \end{array} \right. \end{eqnarray} Firstly, we give a result which will be used later.
\begin{lem}\label{number}(\cite{dwy}, Lemma 3.4) There exists a constant $C_{N}=6^{N}$ such that for any $m\in \mathbb{N}^{+}$ and any $\textbf{Q}_{m}=(Q_{1},Q_{2},...,Q_{m})\in \R^{mN},$ \begin{equation}\label{s1.1} \sharp\Big\{Q_{j}\Big| \frac{l}{2}\mu\leq|x-Q_{j}|<\frac{(l+1)}{2}\mu\Big\}\leq C_{N}(l+1)^{N-1} \end{equation} for all $x\in \R^{N}$ and all $l\in \mathbb{N}.$ In particular, we have \begin{equation}\label{s1.2} \sharp\Big\{Q_{j}\Big|0\leq|x-Q_{j}|<\frac{\mu}{2}\Big\}\leq C_{N}. \end{equation} \end{lem} \begin{lem}\label{lem2.1} Let $h$ be such that $\|h\|_{*}$ is bounded, and assume that $(\varphi_{\sigma,\textbf{Q}_{m}}, {c_{j,k}})$ is a solution to \eqref{2.3}. Then there exist positive numbers $\mu_{0}$ and $C$ such that for all $0 < \epsilon < e^{-2\mu},$ $\mu>\mu_{0}$ and $(\sigma,\textbf{Q}_{m})\in [0, 2\pi]\times \Omega_{m},$ one has \begin{equation}\label{2.4} \|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\leq C\|h\|_{*}, \end{equation} where $C$ is a positive constant independent of $\mu,m$ and $\textbf{Q}_{m}\in\Omega_{m}$. \end{lem} \begin{proof} We argue by contradiction: assume that there exist solutions $\varphi_{\sigma,\textbf{Q}_{m}}$ to \eqref{2.3} with $\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}=1$ while $\|h\|_{*}\rightarrow0$. Multiplying the equation in \eqref{2.3} by $\bar{D}_{j,k}$ and integrating in $\mathbb{R}^{N}$, we get \begin{equation}\label{2.5} Re\int L(\varphi_{\sigma,\textbf{Q}_{m}})\bar{D}_{j,k}=Re\int h\bar{D}_{j,k}+c_{j,k}\int|D_{j,k}|^{2}.
\end{equation} Considering the exponential decay at infinity of $\frac{\partial w(x)}{\partial x_{k}}$ and the definition of $D_{j,k}$ $(k=1,\ldots,N+1)$, we have \begin{equation}\label{2.6} \begin{array}{ll} &\ds\int|D_{j,k}|^{2}=\ds\int\Bigl|\bigl(iA_{0,k}z_{Q_{j}}+\frac{\partial w_{Q_{j}}}{\partial x_{k}}\xi_{j}\bigl)\eta_{j}\Bigl|^{2}\vspace{0.2cm}\\ &\ds =\int A_{0,k}^{2}w_{Q_{j}}^{2}\eta^{2}\Bigl(\frac{2|x-Q_{j}|}{\mu-1}\Bigl)+\int \Bigl(\frac{\partial w_{Q_{j}}}{\partial x_{k}}\Bigl)^{2}\eta^{2}\Bigl(\frac{2|x-Q_{j}|}{\mu-1}\Bigl)\vspace{0.2cm}\\ &= \ds A_{0,k}^{2}\int w^{2}+A_{0,k}^{2}\int_{B^{C}_{\frac{\mu-1}{2}}(0)}\Bigl[\eta^{2}\Bigl(\frac{2|x|}{\mu-1}\Bigl)-1\Bigl]w^{2} \vspace{0.2cm}\\ &\quad+\ds\int \Bigl(\frac{\partial w}{\partial x_{k}}\Bigl)^{2}+\ds\int_{B^{C}_{\frac{\mu-1}{2}}(0)}\Bigl[\eta^{2}\Bigl(\frac{2|x|}{\mu-1}\Bigl)-1\Bigl]\Bigl(\frac{\partial w}{\partial x_{k}}\Bigl)^{2}\vspace{0.2cm}\\ &= A_{0,k}^{2}\ds\int w^{2}+\int \Bigl(\frac{\partial w}{\partial x_{k}}\Bigl)^{2}+O(e^{-\mu}),~as~\mu\rightarrow+\infty,~k=1,2,\ldots,N \end{array} \end{equation} and \begin{equation}\label{2.7} \begin{array}{ll} \ds\int|D_{j,N+1}|^{2}&=\ds\int\Bigl|iz_{Q_{j}}\eta\Bigl(\frac{2|x-Q_{j}|}{\mu-1}\Bigl)\Bigl|^{2}=\int\Bigl|w_{Q_{j}}\eta\Bigl(\frac{2|x-Q_{j}|}{\mu-1}\Bigl)\Bigl|^{2}\vspace{0.2cm}\\ & =\ds\int w^{2}+\int_{B^{C}_{\frac{\mu-1}{2}}(0)}\Bigl[\eta^{2}\Bigl(\frac{2|x|}{\mu-1}\Bigl)-1\Bigl]w^{2} =\ds\int w^{2}+O(e^{-\mu}),~as~\mu\rightarrow+\infty.
\end{array} \end{equation} On the other hand, by Lemma \ref{lemw} we have \begin{equation}\label{2.8} \begin{array}{ll} \ds\Bigl|Re\int h\bar{D}_{j,k}\Bigl| &=\ds\Bigl|Re\int h\bigl(-iA_{0,k}\bar{z}_{Q_{j}}+\frac{\partial w_{Q_{j}}}{\partial x_{k}}\bar{\xi}_{j}\bigl)\eta_{j}\Bigl| \vspace{0.2cm}\\ &\leq \ds\int|h||A_{0,k}|w_{Q_{j}}|\eta_{j}|+\int|h|\Big|\frac{\partial w_{Q_{j}}}{\partial x_{k}}\Big||\eta_{j}|\vspace{0.2cm}\\ &\leq C\|h\|_{*}\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}|A_{0,k}|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w(x-Q_{j}) \Bigl|\eta\Bigl(\frac{2|x-Q_{j}|}{\mu-1}\Bigl)\Bigl|\vspace{0.2cm}\\ &\ds\,\,\,\,\,\,+C\|h\|_{*}\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\Bigl|\frac{\partial w(x-Q_{j})}{\partial x_{k}}\Bigl|\Bigl|\eta\Bigl(\frac{2|x-Q_{j}|}{\mu-1}\Bigl)\Bigl|\vspace{0.2cm}\\ \end{array} \end{equation} \begin{equation*} \begin{array}{ll} &\leq C\|h\|_{*}\ds\int_{B_{\frac{\mu}{2}}(Q_{j})}e^{-\gamma|x-Q_{j}|}w(x-Q_{j})+C\|h\|_{*}\int_{B_{\frac{\mu}{2}}(Q_{j})} e^{-\gamma|x-Q_{j}|}\Bigl|\frac{\partial w(x-Q_{j})}{\partial x_{k}}\Bigl|\vspace{0.2cm}\\ &\leq C\|h\|_{*}\ds\int_{0}^{\frac{\mu}{2}}e^{-(1+\gamma) t}t^{N-1}dt \ds\leq C\|h\|_{*},\quad\quad\quad\quad k=1,2,\ldots,N \end{array} \end{equation*} and \begin{equation}\label{2.9} \begin{array}{ll} \ds\Bigl|Re\int h\bar{D}_{j,N+1}\Bigl|&\ds\leq\int|h||\bar{D}_{j,N+1}|=\int|h||iz_{Q_{j}}\eta_{j}|\vspace{0.2cm}\\ &\leq C\|h\|_{*}\ds\int\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}w(x-Q_{j})\eta\Bigl(\frac{2|x-Q_{j}|}{\mu-1}\Bigl) \vspace{0.2cm}\\ &\leq C\|h\|_{*}\ds\int_{B_{\frac{\mu}{2}}(Q_{j})} e^{-\gamma|x-Q_{j}|}w(x-Q_{j}) \vspace{0.2cm}\\ &\leq C\|h\|_{*}\ds\int_{0}^{\frac{\mu}{2}}e^{-(1+\gamma) t}t^{N-1}dt \leq C\|h\|_{*}. \end{array} \end{equation} Here and in what follows, $C$ stands for a positive constant independent of $\epsilon$ and $\mu$, as $\epsilon\rightarrow0$.
Now if we write $\widetilde{D}_{j,k} = \frac{\partial (e^{i\sigma+iA_{0}\cdot(x-Q_{j})}w_{Q_{j}})}{ \partial x_{k}}$ , then we have \begin{equation}\label{2.10} \begin{array}{ll} &\ds Re\int L(\varphi_{\sigma,\textbf{Q}_{m}})\bar{D}_{j,k}=Re\int L(D_{j,k})\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &= Re\ds\int\Bigl[-\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}-V_{\epsilon}(x) D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}+f'(z_{\textbf{Q}_{m}})D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl]\vspace{0.2cm}\\ &= Re\ds\int\Bigl[-\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)^{2} D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}-D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}+f'(z_{Q_{j}})D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl]\vspace{0.2cm}\\ &\,\,\,\,\,\, -Re\ds\int\epsilon \tilde{V} D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}+Re\int[f'(z_{\textbf{Q}_{m}}) -f'(z_{Q_{j}})]D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &\,\,\,\,\,\, +Re\ds\int\Bigl(\frac{\epsilon}{i}div\tilde{A}-2\epsilon A_{0}\cdot \tilde{A}-\epsilon^{2}|\tilde{A}|^{2}\Bigl)D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}+Re\int\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla D_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &\leq Re\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\Bigl[-\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)^{2} \widetilde{D}_{j,k}-\widetilde{D}_{j,k}+f'(z_{Q_{j}})\widetilde{D}_{j,k}\Bigl]\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &\,\,\,\,\,\, +Re\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})\backslash B_{\frac{\mu-1}{2}}(Q_{j})} \Bigl[\widetilde{D}_{j,k}\Delta\eta_{j}+2\nabla\eta_{j}\cdot\nabla\widetilde{D}_{j,k} +\frac{2}{i}A_{0}\cdot\nabla\eta_{j}\widetilde{D}_{j,k}\Bigl]\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &\,\,\,\,\,\, -Re\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\epsilon \tilde{V} 
\widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}+Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}[f'(z_{\textbf{Q}_{m}}) -f'(z_{Q_{j}})]\widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &\,\,\,\,\,\, +Re\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\Bigl(\frac{\epsilon}{i}div\tilde{A}-2\epsilon A_{0}\cdot \tilde{A}-\epsilon^{2}|\tilde{A}|^{2}\Bigl)\widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\\ &\,\,\,\,\,\, +Re\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla \widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}} +Re\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla \eta_{j}\widetilde{D}_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}. \end{array} \end{equation} Since $$-\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)^{2} \widetilde{D}_{j,k}-\widetilde{D}_{j,k}+f'(z_{Q_{j}})\widetilde{D}_{j,k}=0,$$ we have \begin{equation}\label{2.11} Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\Bigl[-\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)^{2} \widetilde{D}_{j,k}-\widetilde{D}_{j,k}+f'(z_{Q_{j}})\widetilde{D}_{j,k}\Bigl]\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}=0. 
\end{equation} Moreover, by Lemma \ref{lemw} we have \begin{equation}\label{2.12} \begin{array}{ll} &\ds\Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})\backslash B_{\frac{\mu-1}{2}}(Q_{j})}\Bigl(\widetilde{D}_{j,k}\Delta\eta_{j}+2\nabla\eta_{j}\cdot\nabla\widetilde{D}_{j,k} +\frac{2}{i}A_{0}\cdot\nabla\eta_{j}\widetilde{D}_{j,k}\Bigl)\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl| \vspace{0.2cm}\\ &\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\ds\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})\backslash B_{\frac{\mu-1}{2}}(Q_{j})} \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\Big(\Big|\frac{\partial w_{Q_{j}}}{\partial x_{k}}\Big|+w_{Q_{j}}+\Big|\nabla w_{Q_{j}}\Big|+\Big|\nabla \frac{\partial w_{Q_{j}}}{\partial x_{k}}\Big|\Big) \vspace{0.2cm}\\ &\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\ds\int^{\frac{\mu^{2}}{2(\mu+1)}}_{\frac{\mu-1}{2}}e^{-(1+\gamma)s}s^{N-1}ds\leq Ce^{-(1+\beta)\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*} \end{array} \end{equation} for some $\beta>0$. Observing that $$ \Bigl|f'(z_{\textbf{Q}_{m}}) -f'(z_{Q_{j}})\Bigl|\leq C\Bigl|\sum_{k\neq j}z_{Q_{k}}\Bigl|^{\delta}, $$ by $(f_{1})$ we have \begin{equation}\label{2.13} \begin{array}{ll} &\ds \Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}(f'(z_{\textbf{Q}_{m}}) -f'(z_{Q_{j}}))\widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\vspace{0.2cm}\\ &\ds\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\Bigl|\sum_{k\neq j}z_{Q_{k}}\Bigl|^{\delta}\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\Big(\Big|\frac{\partial w_{Q_{j}}}{\partial x_{k}}\Big|+w_{Q_{j}}\Big)\vspace{0.2cm}\\ &\ds\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\sum_{k\neq j}|w_{Q_{k}}|^{\delta}\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\Big(\Big|\frac{\partial w_{Q_{j}}}{\partial x_{k}}\Big|+w_{Q_{j}}\Big)\vspace{0.2cm}\\ &\ds\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}e^{-\frac{\delta}{2}\mu}\sum_{j=1}^{m} 
e^{-\gamma|x-Q_{j}|}\Big(\Big|\frac{\partial w_{Q_{j}}}{\partial x_{k}}\Big|+w_{Q_{j}}\Big)\vspace{0.2cm}\\ &\ds\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}e^{-\frac{\delta}{2}\mu}\int^{\frac{\mu^{2}}{2(\mu+1)}}_{0}e^{-(1+\gamma)s}s^{N-1}ds\leq Ce^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*} \end{array} \end{equation} and \begin{equation}\label{2.14} \begin{array}{ll} &\ds \Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\epsilon \tilde{V} \widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\leq\epsilon \int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})} |\tilde{V}| |\widetilde{D}_{j,k}||\bar{\varphi}_{\sigma,\textbf{Q}_{m}}|\vspace{0.2cm}\\ &\ds\leq Ce^{-2\mu}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}|\tilde{V}| |\widetilde{D}_{j,k}|\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\vspace{0.2cm}\\ &\ds\leq Ce^{-2\mu}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\Big(\Big|\frac{\partial w_{Q_{j}}}{\partial x_{k}}\Big|+w_{Q_{j}}\Big)\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\vspace{0.2cm}\\ &\ds\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}e^{-\frac{\delta}{2}\mu}\int^{\frac{\mu^{2}}{2(\mu+1)}}_{0}e^{-(1+\gamma)s}s^{N-1}ds\leq Ce^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}. 
\end{array} \end{equation} Similarly, we can get \begin{equation}\label{2.15} \Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\frac{\epsilon}{i}div\tilde{A} \widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\leq C\epsilon\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}|\widetilde{D}_{j,k}||\bar{\varphi}_{\sigma,\textbf{Q}_{m}}|\leq Ce^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}, \end{equation} \begin{equation}\label{2.16} \Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}2\epsilon A_{0}\cdot \tilde{A }\widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\leq C\epsilon\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}|\widetilde{D}_{j,k}||\bar{\varphi}_{\sigma,\textbf{Q}_{m}}|\leq Ce^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}, \end{equation} \begin{equation}\label{2.17} \Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\epsilon^{2}|\tilde{A}|^{2}\widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\leq C\epsilon\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}|\widetilde{D}_{j,k}||\bar{\varphi}_{\sigma,\textbf{Q}_{m}}|\leq Ce^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}, \end{equation} \begin{equation}\label{2.18} \Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla\eta_{j}\widetilde{D}_{j,k}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\leq C\epsilon\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}|\widetilde{D}_{j,k}||\bar{\varphi}_{\sigma,\textbf{Q}_{m}}|\leq Ce^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*} \end{equation} and \begin{equation}\label{2.19} \Bigl|Re\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla\widetilde{D}_{j,k}\eta_{j}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\leq C\epsilon\int_{B_{\frac{\mu^{2}}{2(\mu+1)}}(Q_{j})}|\nabla\widetilde{D}_{j,k}||\bar{\varphi}_{\sigma,\textbf{Q}_{m}}|\leq Ce^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}, \end{equation} for some 
$\beta>0$. It follows from \eqref{2.5} to \eqref{2.19} that \begin{equation}\label{2.20} |c_{j,k}| \leq C(e^{-\beta\frac{\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}+\|h\|_{*}). \end{equation} Let now $\theta\in(\gamma, 1)$. It is easy to check that the function $E(x)$ in \eqref{2.1} satisfies \begin{equation}\label{l} |-L(E(x))| \geq\frac{1}{2}(1-\theta^{2})E(x), ~in ~\mathbb{R}^{N}\backslash\bigcup_{j=1}^{m}B_{\bar{\mu}}(Q_{j}) \end{equation} provided $\bar{\mu}$ is large enough and $\bar{\mu}\leq \frac{\mu}{2}.$ Indeed, writing $w_{\textbf{Q}_{m}}:=\sum_{j=1}^{m}w_{Q_{j}}$, by Lemma \ref{number} we have, in $\R^{N}\backslash \cup_{j=1}^{m}B_{\bar{\mu}}(Q_{j})$, \begin{eqnarray*} w_{\textbf{Q}_{m}}&\leq&\sum_{|x-Q_{j}|<\frac{1}{2}\mu}w(x-Q_{j}) +\sum_{l=1}^{\infty}\sum_{\frac{l}{2}\mu\leq|x-Q_{j}|<\frac{l+1}{2}\mu}w(x-Q_{j})\\ &\leq&Cw(\bar{\mu})+C\sum_{l=1}^{\infty}l^{N-1}e^{-\frac{l}{2}\mu} \leq Cw(\bar{\mu}). \end{eqnarray*} Then \begin{equation}\label{ll1} |f'(z_{\textbf{Q}_{m}})|\leq C(w_{\textbf{Q}_{m}})^{\delta} \leq Cw^{\delta}(\bar{\mu})\leq \frac{1-\theta^{2}}{4},\,\,\,\text{in}\,\,\,\R^{N}\backslash \cup_{j=1}^{m}B_{\bar{\mu}}(Q_{j}). \end{equation} From \eqref{ll1} and a direct computation, taking real parts, we have \begin{eqnarray*} |-L(E(x))|&=&\Big|\Big(\frac{\nabla}{i}-A_{\epsilon}(x)\Big)^{2}E(x)+V_{\epsilon}E(x)- f'(z_{\textbf{Q}_{m}})E(x)\Big| \\ &\geq&\sum_{j=1}^{m} Re\Big[-\gamma^{2}+1+|A_{0}|^{2}+\gamma\frac{N-1}{|x-Q_{j}|}-2i\gamma A_{\epsilon}\cdot \frac{x-Q_{j}}{|x-Q_{j}|} \\ &&\quad\quad +\epsilon (\tilde{V}(x)+2A_{0}\cdot \tilde{A}+idiv\tilde{A}+\epsilon|\tilde{A}|^{2})-f'(z_{\textbf{Q}_{m}})\Big]e^{-\gamma|x-Q_{j}|}\\ &\geq & \frac{1}{2}(1-\theta^{2})E(x),\,\,\,\,\text{in}\,\,\,\R^{N}\backslash \cup_{j=1}^{m}B_{\bar{\mu}}(Q_{j}), \end{eqnarray*} which yields \eqref{l}.
Hence the function $E(x)$ can be used as a barrier to prove the pointwise estimate \begin{equation}\label{2.21} |\varphi_{\sigma,\textbf{Q}_{m}}(x)| \leq C\bigl(\|L\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}+\sup_{j}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{L^{\infty}(\partial B_{\bar{\mu}}(Q_{j}))}\bigl)E(x), \end{equation} for all $x\in\mathbb{R}^{N}\backslash\bigcup_{j=1}^{m}B_{\bar{\mu}}(Q_{j})$. We now derive a contradiction. Assume that there exist a sequence of $\epsilon$ tending to $0$, a sequence of $\mu$ tending to $\infty$ and a sequence of solutions of \eqref{2.3} for which the inequality \eqref{2.4} does not hold. The problem being linear, we can reduce to the case where we have a sequence $\epsilon^{(n)}$ tending to $0$, $\mu^{(n)}$ tending to $\infty$ and sequences $h^{(n)}, \varphi^{(n)}, {c^{(n)}_{j,k}}$ such that $$ \|h^{(n)}\|_{*}\rightarrow0\,\,\,\text{and} \,\,\, \|\varphi_{\sigma,\textbf{Q}_{m}}^{(n)}\|_{*} = 1. $$ By \eqref{2.20}, we have $$\Bigl\|\sum_{j,k}c^{(n)}_{j,k}D_{j,k}\Bigl\|_{*}\rightarrow0.$$ Then \eqref{2.21} implies that there exist points $Q^{(n)}_{j}$ such that \begin{equation}\label{2.22} \|\varphi^{(n)}_{\sigma,\textbf{Q}_{m}}\|_{L^{\infty}(B_{\frac{\mu}{2}}(Q_{j}^{(n)}))}\geq C \end{equation} for some fixed constant $C > 0$. Applying elliptic estimates together with the Arzel\`{a}-Ascoli theorem, we can extract from the sequence $\varphi^{(n)}(\cdot+Q^{(n)}_{j})$ a subsequence which converges to $\varphi^{\infty}$, a solution of $$\Bigl[-\bigl(\frac{\nabla}{i}-A_{0}\bigl)^{2}-1+f'(e^{i\sigma+iA_{0}\cdot x}w)\Bigl]\varphi^{\infty} = 0,~ x\in~ \mathbb{R}^{N},$$ which is bounded by a constant times $e^{-\gamma|x|}$, with $\gamma> 0$. Moreover, recall that $\varphi^{(n)}_{\sigma,\textbf{Q}_{m}}$ satisfies the orthogonality conditions in \eqref{2.3}.
Therefore, the limit function $\varphi^{\infty}$ also satisfies $$Re\int\varphi^{\infty}\overline{\frac{\partial z}{\partial x_{j}}}=0,~j=1,\ldots,N,~\text{~and~}Re\int\varphi^{\infty}\overline{\frac{\partial z}{\partial \sigma}}=0,$$ where $z=e^{i\sigma+iA_{0}\cdot x}w(x)$. By the non-degeneracy of $w$ (see Remark \ref{rem1.4}), $\varphi^{\infty}$ must be a linear combination of $\frac{\partial z}{\partial x_{1}},\ldots,\frac{\partial z}{\partial x_{N}}$ and $\frac{\partial z}{\partial \sigma}$, so the above orthogonality conditions give $\varphi^{\infty}\equiv0$, which contradicts \eqref{2.22}. \end{proof} From Lemma \ref{lem2.1}, we can obtain the following result. \begin{prop}\label{prop2.2} There exist $\gamma\in(0, 1)$, $\mu_{0}> 0$ and $C > 0$, such that for all $0 < \epsilon< e^{-2\mu}, \mu > \mu_{0}$ and for any given $h$ with $\|h\|_{*}$ bounded, there is a unique solution $(\varphi_{\sigma,\textbf{Q}_{m}}, {c_{j,k}})$ to problem \eqref{2.3}. Moreover, \begin{equation}\label{2.23} \|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\leq C\|h\|_{*}. \end{equation} \end{prop} \begin{proof} Here we consider the space $$\mathcal {H}= \Bigl\{ u \in H^{1}(\mathbb{R}^{N}) : Re\int u\bar{D}_{j,k} = 0, (\textbf{Q}_{m},\sigma)\in\Omega_{m}\times[0,2\pi]\Bigl\} .$$ Problem \eqref{2.3} can be rewritten as \begin{equation}\label{2.24} \varphi_{\sigma,\textbf{Q}_{m}} + \mathcal {K}(\varphi_{\sigma,\textbf{Q}_{m}}) = \bar{h}, ~in~ \mathcal {H}, \end{equation} where $\bar{h}$ is defined by duality and $\mathcal {K}: \mathcal {H} \rightarrow \mathcal {H}$ is a linear compact operator. By Fredholm's alternative, equation \eqref{2.24} is uniquely solvable for every $\bar{h}$ provided the homogeneous equation (the case $\bar{h}=0$) has only the trivial solution, which in turn follows from Lemma \ref{lem2.1}. The estimate \eqref{2.23} follows directly from \eqref{2.4} in Lemma \ref{lem2.1}. The proof is complete. \end{proof} In the sequel, if $\varphi_{\sigma,\textbf{Q}_{m}}$ is the unique solution given by Proposition \ref{prop2.2}, we denote \begin{equation}\label{2.25} \varphi_{\sigma,\textbf{Q}_{m}} = \mathcal {A}(h). \end{equation} By \eqref{2.23}, we have \begin{equation}\label{2.26} \|\mathcal {A}(h)\|_{*}\leq C\|h\|_{*}.
\end{equation} Now, we consider \begin{eqnarray}\label{2.27} \left\{ \begin{array}{ll} \ds -\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}(z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}})-V_{\epsilon}(x)(z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}})+f(z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}}) \vspace{0.2cm}\\ \ds=\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k},~in~\mathbb{R}^{N},\vspace{0.2cm}\\ \ds Re\int\varphi_{\sigma,\textbf{Q}_{m}}\bar{D}_{j,k}=0~for~j=1,\ldots,m,k=1,\ldots,N+1. \end{array} \right. \end{eqnarray} We come to the main result in this section. \begin{prop}\label{prop2.3} Let $\gamma\in(0, 1)$ be given. There exist positive numbers $\mu_{0}$, $C$ and $\beta > 0$ such that for all $\mu > \mu_{0}$, for any $(\sigma,\textbf{Q}_{m})\in [0, 2\pi]\times \Omega_{m}$ and $\epsilon< e^{-2\mu},$ there is a unique solution $(\varphi_{\sigma,\textbf{Q}_{m}}, {c_{j,k}})$ to problem \eqref{2.27}. Furthermore, $\varphi_{\sigma,\textbf{Q}_{m}}$ is $C^{1}$ in $(\sigma,\textbf{Q}_{m})$ and we have \begin{equation}\label{2.28} \|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\leq Ce^{-\beta\mu},~|c_{j,k}|\leq Ce^{-\beta\mu}. \end{equation} \end{prop} Note that the first equation in \eqref{2.27} can be rewritten as \begin{equation}\label{2.29} L(\varphi_{\sigma,\textbf{Q}_{m}})=-\mathcal {S}(z_{\textbf{Q}_{m}})+\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}}) +\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}, \end{equation} where \begin{equation}\label{2.30} L(\varphi_{\sigma,\textbf{Q}_{m}})=-\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}\varphi_{\sigma,\textbf{Q}_{m}} -V_{\epsilon}(x)\varphi_{\sigma,\textbf{Q}_{m}}+f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}} \end{equation} and \begin{equation}\label{2.31} \mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}}) =-\big[f(z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}})-f(z_{\textbf{Q}_{m}})-f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}}\big].
\end{equation} In order to use the contraction mapping theorem to prove that \eqref{2.29} is uniquely solvable in the set that $\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}$ is small, we need to estimate $\|\mathcal {S}(z_{\textbf{Q}_{m}})\|_{*}$ and $\|\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})\|_{*}$ respectively. \begin{lem}\label{lem2.4} Given $\gamma\in(0, 1)$. For $\mu$ large enough, and any $(\sigma,\textbf{Q}_{m})\in [0,2\pi]\times \Omega_{m},$ $\epsilon< e^{-2\mu}$, we have \begin{equation}\label{2.32} \|\mathcal {S}(z_{\textbf{Q}_{m}})\|_{*}\leq Ce^{-\beta\mu}, \end{equation} for some constant $\beta > 0$ and $C$ independent of $\mu,m$, $\textbf{Q}_{m}$ and $\sigma$. \end{lem} \begin{proof} Note that \begin{equation}\label{2.33} \begin{array}{ll} \mathcal {S}(z_{\textbf{Q}_{m}})&=\ds-\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}z_{\textbf{Q}_{m}}-V_{\epsilon}(x)z_{\textbf{Q}_{m}}+f(z_{\textbf{Q}_{m}})\vspace{0.2cm}\\ &=\ds-\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)^{2}z_{\textbf{Q}_{m}}-z_{\textbf{Q}_{m}}-\epsilon \tilde{V}(x)z_{\textbf{Q}_{m}}+f(z_{\textbf{Q}_{m}})\vspace{0.2cm}\\ &\,\,\,\,\,\,+\ds\frac{\epsilon}{i}div\tilde{A}(x)z_{\textbf{Q}_{m}}+\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla z_{\textbf{Q}_{m}}-2\epsilon A_{0}\cdot \tilde{A} z_{\textbf{Q}_{m}}-\epsilon^{2}|\tilde{A}(x)|^{2}z_{\textbf{Q}_{m}}\vspace{0.2cm}\\ &=\ds-\epsilon \tilde{V}(x)z_{\textbf{Q}_{m}}+f(z_{\textbf{Q}_{m}})-\sum_{j=1}^{m}f(z_{Q_{j}})\vspace{0.2cm}\\ &\,\,\,\,\,\,+\ds\frac{\epsilon}{i}div\tilde{A}(x)z_{\textbf{Q}_{m}}+\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla z_{\textbf{Q}_{m}}-2\epsilon A_{0}\cdot \tilde{A} z_{\textbf{Q}_{m}}-\epsilon^{2}|\tilde{A}(x)|^{2}z_{\textbf{Q}_{m}}\vspace{0.2cm}\\ &=\ds-\epsilon \tilde{V}(x)z_{\textbf{Q}_{m}}+f(z_{\textbf{Q}_{m}})-\sum_{j=1}^{m}f(z_{Q_{j}})\vspace{0.2cm}\\ &\,\,\,\,\,\,+\ds\frac{\epsilon}{i}div\tilde{A}(x)z_{\textbf{Q}_{m}}+\frac{2\epsilon}{i}\sum_{j=1}^{m}\xi_{j}\tilde{A}(x)\cdot\nabla 
w_{Q_{j}}-\epsilon^{2}|\tilde{A}(x)|^{2}z_{\textbf{Q}_{m}}. \end{array} \end{equation} By (2.5) and (2.6) of Section 2.1 in \cite{aw}, it follows that \begin{equation}\label{2.34} \Bigl|f(z_{\textbf{Q}_{m}})-\sum_{j=1}^{m}f(z_{Q_{j}})\Bigl|=|e^{i\sigma}|\Bigl|f(w_{\textbf{Q}_{m}})-\sum_{j=1}^{m}f(w_{Q_{j}})\Bigl|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|} \end{equation} for a proper choice of $\beta > 0$. Moreover, by the assumption on $\epsilon$, we can prove that \begin{equation}\label{2.35} |\epsilon \tilde{V}(x)z_{\textbf{Q}_{m}}|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|} \end{equation} for some $\beta> 0$. In fact, on the one hand, fix $j\in\{1, 2, \ldots,m\}$ and consider the region $|x -Q_{j}| \leq \frac{\mu}{2} $. In this region, we have $$|\epsilon \tilde{V}(x)z_{\textbf{Q}_{m}}|\leq Ce^{-2\mu}\leq Ce^{-\mu}e^{-2|x -Q_{j}|}\leq Ce^{-\beta\mu}\sum_{j}e^{-\gamma|x-Q_{j}|}.$$ On the other hand, in the region $|x -Q_{j} | > \frac{\mu}{2} $ for all $j$, we have $$|\epsilon \tilde{V}(x)z_{\textbf{Q}_{m}}|\leq Ce^{-2\mu}|w_{\textbf{Q}_{m}}|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}.$$ By the same arguments as for \eqref{2.35}, we can prove \begin{equation}\label{2.36} \bigl|\frac{\epsilon}{i}div\tilde{A}(x)z_{\textbf{Q}_{m}}\bigl|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}, \end{equation} \begin{equation}\label{2.37} \Bigl|\frac{2\epsilon}{i}\sum_{j=1}^{m}\xi_{j}\tilde{A}(x)\cdot\nabla w_{Q_{j}}\Bigl|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|} \end{equation} and \begin{equation}\label{2.38} \Bigl|\epsilon^{2}|\tilde{A}(x)|^{2}z_{\textbf{Q}_{m}}\Bigl|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}. \end{equation} It follows from \eqref{2.33} to \eqref{2.38} that $$\|\mathcal {S}(z_{\textbf{Q}_{m}})\|_{*}\leq Ce^{-\beta\mu}$$ for some $\beta > 0$ independent of $\mu, m$ and $ \textbf{Q}_{m}$.
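For the reader's convenience, we record the elementary expansion used in the second equality of \eqref{2.33} (and, in an equivalent form, in \eqref{2.10}); since $A_{0}$ is a constant vector, $div A_{\epsilon}=\epsilon\, div\tilde{A}$, and hence for any $u$, $$ \Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}u=\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)^{2}u-\frac{\epsilon}{i}div\tilde{A}(x)\,u-\frac{2\epsilon}{i}\tilde{A}(x)\cdot\nabla u+2\epsilon A_{0}\cdot\tilde{A}(x)\,u+\epsilon^{2}|\tilde{A}(x)|^{2}u. $$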
\end{proof} \begin{lem}\label{lem2.5} For any $(\sigma,\textbf{Q}_{m})\in[0,2\pi]\times\Omega_{m}$ and any $\varphi_{\sigma,\textbf{Q}_{m}}$ satisfying $\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\leq 1$, we have \begin{equation}\label{2.39} \|\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})\|_{*}\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}^{1+\delta} \end{equation} and \begin{equation}\label{2.40} \|\mathcal {N}(\varphi^{1}_{\sigma,\textbf{Q}_{m}})-\mathcal {N}(\varphi^{2}_{\sigma,\textbf{Q}_{m}})\|_{*}\leq C(\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta}+\|\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta})\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*}. \end{equation} \end{lem} \begin{proof} By direct computation and applying the mean value theorem (for some $\vartheta\in(0, 1)$), we have \begin{equation}\label{2.41} \begin{array}{ll} |\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})|& =\ds\bigl|f(z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}})-f(z_{\textbf{Q}_{m}})-f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}}\bigl|\vspace{0.2cm}\\ &=\ds\bigl|f'(z_{\textbf{Q}_{m}}+\vartheta\varphi_{\sigma,\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}}-f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}}\bigl|\vspace{0.2cm}\\ &\leq\ds C|\varphi_{\sigma,\textbf{Q}_{m}}|^{1+\delta}\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}^{1+\delta}\Bigl(\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\Bigl)^{1+\delta}\vspace{0.2cm}\\ &\ds\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}^{1+\delta}\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|} \end{array} \end{equation} and \begin{equation}\label{2.42} \begin{array}{ll} &|\mathcal {N}(\varphi^{1}_{\sigma,\textbf{Q}_{m}})-\mathcal {N}(\varphi^{2}_{\sigma,\textbf{Q}_{m}})|\vspace{0.2cm}\\ &=\ds\bigl|f(z_{\textbf{Q}_{m}}+\varphi^{1}_{\sigma,\textbf{Q}_{m}})-f(z_{\textbf{Q}_{m}}+\varphi^{2}_{\sigma,\textbf{Q}_{m}}) -f'(z_{\textbf{Q}_{m}})\varphi^{1}_{\sigma,\textbf{Q}_{m}}+f'(z_{\textbf{Q}_{m}})\varphi^{2}_{\sigma,\textbf{Q}_{m}}\bigl|\vspace{0.2cm}\\ &\ds
=\bigl|f'(z_{\textbf{Q}_{m}}+\varphi^{2}_{\sigma,\textbf{Q}_{m}}+\vartheta(\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}}))(\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}}) -f'(z_{\textbf{Q}_{m}})(\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}})\bigl|\vspace{0.2cm}\\ &\ds\leq C(|\varphi^{1}_{\sigma,\textbf{Q}_{m}}|^{\delta}+|\varphi^{2}_{\sigma,\textbf{Q}_{m}}|^{\delta})|\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}}|\vspace{0.2cm}\\ &\ds\leq C(\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta}+\|\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta})\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*} \Big(\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\Big)^{1+\delta}\vspace{0.2cm}\\ &\ds\leq C(\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta}+\|\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta})\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*} \Big(\sum_{j=1}^{m} e^{-\gamma|x-Q_{j}|}\Big). \end{array} \end{equation} From \eqref{2.41} and \eqref{2.42}, we obtain $$ \|\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})\|_{*}\leq C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}^{1+\delta} $$ and $$\|\mathcal {N}(\varphi^{1}_{\sigma,\textbf{Q}_{m}})-\mathcal {N}(\varphi^{2}_{\sigma,\textbf{Q}_{m}})\|_{*}\leq C(\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta}+\|\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta}) \|\varphi^{1}_{\sigma,\textbf{Q}_{m}}-\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*}. $$ \end{proof} Now, we are ready to prove Proposition \ref{prop2.3}. \begin{proof}[\textbf{Proof of Proposition \ref{prop2.3}.}] We will use the contraction mapping theorem to prove it.
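Before carrying out the fixed point argument, we record the elementary inequality that makes both the self-mapping and the contraction estimates below close: since $\delta>0$ and $\tau>0$ is chosen with $\beta-\tau>0$, for $\mu$ large enough
$$Ce^{-\beta\mu}+Ce^{-(1+\delta)(\beta-\tau)\mu}=\bigl(Ce^{-\tau\mu}+Ce^{-\delta(\beta-\tau)\mu}\bigr)e^{-(\beta-\tau)\mu}\leq e^{-(\beta-\tau)\mu},$$
because both prefactors tend to $0$ as $\mu\rightarrow\infty$; similarly, $C\bigl(\|\varphi^{1}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta}+\|\varphi^{2}_{\sigma,\textbf{Q}_{m}}\|_{*}^{\delta}\bigr)\leq 2Ce^{-\delta(\beta-\tau)\mu}\leq\frac{1}{2}$ for $\mu$ large.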
Observe that $\varphi_{\sigma,\textbf{Q}_{m}}$ solves \eqref{2.27} if and only if \begin{equation}\label{2.43} \varphi_{\sigma,\textbf{Q}_{m}}= \mathcal {A}(-\mathcal {S}(z_{\textbf{Q}_{m}}) + \mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})) \end{equation} where $\mathcal {A}$ is the operator introduced in \eqref{2.25}. In other words, $\varphi_{\sigma,\textbf{Q}_{m}}$ solves \eqref{2.27} if and only if $\varphi_{\sigma,\textbf{Q}_{m}}$ is a fixed point for the operator $$\mathcal {T}(\varphi_{\sigma,\textbf{Q}_{m}}):= \mathcal {A}(-\mathcal {S}(z_{\textbf{Q}_{m}}) + \mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})). $$ Define $$\mathcal {B}=\Bigl\{\varphi_{\sigma,\textbf{Q}_{m}}\in H^{1}(\mathbb{R}^{N},\mathbb{C}): \|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}\leq e^{-(\beta-\tau)\mu}, ~Re\int\varphi_{\sigma,\textbf{Q}_{m}}\bar{D}_{j,k}=0\Bigl\},$$ where $\tau > 0$ is small enough. We will prove that $\mathcal {T}$ is a contraction mapping from $\mathcal {B}$ to itself. On the one hand, for any $\varphi_{\sigma,\textbf{Q}_{m}}\in\mathcal {B}$, it follows from Lemmas \ref{lem2.4} and \ref{lem2.5} that \begin{eqnarray*} &&\|\mathcal {T}(\varphi_{\sigma,\textbf{Q}_{m}})\|_{*}\leq C\|-\mathcal {S}(z_{\textbf{Q}_{m}}) + \mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})\|_{*}\\ &\leq &Ce^{-\beta\mu}+C\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}^{1+\delta} \leq Ce^{-\beta\mu}+Ce^{-(1+\delta)(\beta-\tau)\mu}\leq e^{-(\beta-\tau)\mu}.
\end{eqnarray*} On the other hand, taking $\varphi_{\sigma,\textbf{Q}_{m}}^{1}$ and $\varphi_{\sigma,\textbf{Q}_{m}}^{2}$ in $\mathcal {B}$, by Lemma \ref{lem2.5} we have \begin{eqnarray*} &&\|\mathcal {T}(\varphi_{\sigma,\textbf{Q}_{m}}^{1}) -\mathcal {T}(\varphi_{\sigma,\textbf{Q}_{m}}^{2})\|_{*}\leq C\|\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}}^{1}) - \mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}}^{2})\|_{*} \\&\leq& C(\|\varphi_{\sigma,\textbf{Q}_{m}}^{1}\|_{*}^{\delta}+\|\varphi_{\sigma,\textbf{Q}_{m}}^{2}\|_{*}^{\delta})\|\varphi_{\sigma,\textbf{Q}_{m}}^{1} - \varphi_{\sigma,\textbf{Q}_{m}}^{2}\|_{*} \leq \frac{1}{2}\|\varphi_{\sigma,\textbf{Q}_{m}}^{1} - \varphi_{\sigma,\textbf{Q}_{m}}^{2}\|_{*}. \end{eqnarray*} Hence by the contraction mapping theorem, for any $(\sigma, \textbf{Q}_{m})\in [0,2\pi]\times \Omega_{m},$ there exists a unique $\varphi_{\sigma,\textbf{Q}_{m}}\in\mathcal {B}$ such that \eqref{2.43} holds. So $$\| \varphi_{\sigma,\textbf{Q}_{m}}\|_{*}=\|\mathcal {T}(\varphi_{\sigma,\textbf{Q}_{m}})\|_{*}\leq Ce^{-\beta\mu}.$$ Now we need to prove that $\varphi_{\sigma,\textbf{Q}_{m}}$ is $2\pi$-periodic with respect to $\sigma.$ Replacing $\sigma$ by $\sigma+2\pi$ in the above reduction process, we get $\varphi_{\sigma+2\pi,\textbf{Q}_{m}}.$ Since $z_{\textbf{Q}_{m}}$ is $2\pi$-periodic in $\sigma$, by the uniqueness of $\varphi_{\sigma,\textbf{Q}_{m}}$, we see $\varphi_{\sigma,\textbf{Q}_{m}}=\varphi_{\sigma+2\pi,\textbf{Q}_{m}}.$ Combining \eqref{2.20}, \eqref{2.32}, \eqref{2.39} and \eqref{2.40} we have $$|c_{j,k}|\leq C(e^{-\frac{\beta\mu}{2}}\|\varphi_{\sigma,\textbf{Q}_{m}}\|_{*}+\|\mathcal {S}(z_{\textbf{Q}_{m}})\|_{*}+\|\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})\|_{*})\leq Ce^{-\beta\mu}.$$ \end{proof} \section{A secondary Lyapunov-Schmidt reduction} In this section, we present a key estimate on the difference between the solutions in the $m$-th step and the $(m+1)$-th step.
This second Lyapunov-Schmidt reduction has been used in the papers \cite{aw,lw,w}. For $(\sigma,\textbf{Q}_{m})\in [0,2\pi]\times\Omega_{m}$, we write $u_{\textbf{Q}_{m}}=z_{\textbf{Q}_{m}}+\varphi_{\sigma,\textbf{Q}_{m}}$, where $\varphi_{\sigma,\textbf{Q}_{m}}$ is the unique solution given by Proposition \ref{prop2.3}. The main estimate below states that the difference between $u_{\textbf{Q}_{m+1}}$ and $u_{\textbf{Q}_{m}}+z_{Q_{m+1}}$ is globally small in the $H^{1}(\mathbb{R}^{N},\mathbb{C})$ norm. For this purpose, we now write \begin{equation}\label{3.1} u_{\textbf{Q}_{m+1}}=u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\phi_{m+1}=:\bar{u}+\phi_{m+1}. \end{equation} By Proposition \ref{prop2.3}, we can easily obtain that \begin{equation}\label{3.2} \|\phi_{m+1}\|_{*}\leq Ce^{-\beta\mu}. \end{equation} However, the estimate \eqref{3.2} is not sufficient: we need a more precise estimate for $\phi_{m+1}$, which will be given later. (In the following we will always assume that $\gamma > \frac{1}{2}$.) In order to obtain this crucial estimate, we will need the following lemma. \begin{lem}\label{lem3.1} (Lemma 2.3, \cite{bl}) For $|Q_{j}-Q_{k}|\geq\mu$ large, it holds that \begin{equation}\label{3.3} \int f(w(x-Q_{j}))w(x-Q_{k})dx=(\vartheta+O(e^{-\beta\mu}))w(|Q_{j}-Q_{k}|) \end{equation} for some $\beta > 0$ independent of large $\mu$, where \begin{equation}\label{3.4} \vartheta=\int f(w)e^{-x_{1}}dx >0. \end{equation} \end{lem} \begin{lem}\label{lem3.2} Let $\mu$, $\epsilon$ be as in Proposition \ref{prop2.3}.
Then it holds \begin{equation}\label{3.5} \begin{array}{ll} &\|\phi_{m+1}\|_{H^{1}(\mathbb{R}^{N})}\vspace{0.2cm}\\ & \leq \ds C\Bigl[\epsilon\int|\tilde{V}(x)||w_{Q_{m+1}}|+2\epsilon\int|\tilde{A}(x)||\nabla w_{Q_{m+1}}|+\epsilon\int|div\tilde{A}(x)||w_{Q_{m+1}}| \vspace{0.2cm}\\ &\quad\quad\ds +\epsilon^{2}\int|\tilde{A}(x)|^{2}|w_{Q_{m+1}}|+e^{-\beta\mu}\bigl(\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\bigl)^{\frac{1}{2}} +\epsilon\bigl(\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\vspace{0.2cm}\\ &\quad\quad +\epsilon\bigl(\ds\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}+\epsilon\bigl(\ds\int|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}} +\epsilon^{2}\bigl(\ds\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\Bigl] \end{array} \end{equation} for some constants $C > 0$, $\beta> 0$ independent of $\mu,$ $m,$ $\gamma$ and $\textbf{Q}_{m+1} \in\Omega_{m+1}$. \end{lem} \begin{proof} To prove \eqref{3.5}, we need to perform a further decomposition. As we mentioned before, the eigenvalue problem $$\Delta \varphi -\varphi + f'(w)\varphi = \lambda\varphi, ~~\varphi \in H^{1}(\mathbb{R}^{N}),$$ admits a set of eigenvalues $$ \lambda_{1}>\lambda_{2}>\ldots>\lambda_{n}>\lambda_{n+1}=0>\lambda_{n+2}\ldots. $$ We denote by $\varphi_{j}$, $j=1,\ldots,n,$ the eigenfunctions corresponding to the positive eigenvalues $\lambda_{j}$. For each $k=1,\ldots,n$, the eigenvalue $\lambda_{k}$ then has the eigenfunction $\tilde{\varphi}_{0,k}=e^{i\sigma+iA_{0}\cdot x}\varphi_{k}$ for the linearized problem \begin{equation}\label{3.6} -\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)^{2}\varphi-\varphi+f'(w)\varphi=\lambda\varphi. \end{equation} We fix $\tilde{\varphi}_{0,k}$ such that $\max_{x\in\mathbb{R}^{N}} |\tilde{\varphi}_{0,k}| = 1$. Denote by $\tilde{\varphi}_{j,k} =\eta_{j}\tilde{\varphi}_{0,k}(x -Q_{j})$, where $\eta_{j}$ is the cut-off function introduced in section 1.
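That $\tilde{\varphi}_{0,k}=e^{i\sigma+iA_{0}\cdot x}\varphi_{k}$ is indeed an eigenfunction of \eqref{3.6} with the same eigenvalue $\lambda_{k}$ can be checked directly from the gauge identity
$$\Bigl(\frac{\nabla}{i}-A_{0}\Bigr)\bigl(e^{i\sigma+iA_{0}\cdot x}\varphi\bigr)=e^{i\sigma+iA_{0}\cdot x}\frac{\nabla\varphi}{i},\qquad\text{hence}\qquad -\Bigl(\frac{\nabla}{i}-A_{0}\Bigr)^{2}\bigl(e^{i\sigma+iA_{0}\cdot x}\varphi\bigr)=e^{i\sigma+iA_{0}\cdot x}\Delta\varphi,$$
so that
$$-\Bigl(\frac{\nabla}{i}-A_{0}\Bigr)^{2}\tilde{\varphi}_{0,k}-\tilde{\varphi}_{0,k}+f'(w)\tilde{\varphi}_{0,k}=e^{i\sigma+iA_{0}\cdot x}\bigl(\Delta\varphi_{k}-\varphi_{k}+f'(w)\varphi_{k}\bigr)=\lambda_{k}\tilde{\varphi}_{0,k}.$$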
By the equation satisfied by $\phi_{m+1}$, we have \begin{equation}\label{3.7} \bar{L}\phi_{m+1}=-\bar{\mathcal {S}}+\sum_{j=1}^{m+1}\sum_{k=1}^{N+1}c_{j,k}D_{j,k} \end{equation} for some constants $c_{j,k}$, where $$\bar{L}=-\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)^{2}-V_{\epsilon}(x)+f'(\tilde{u}),$$ where \begin{eqnarray*}f'(\tilde{u})= \left\{ \begin{array}{ll} \ds \frac{f(\bar{u}+\phi_{m+1})-f(\bar{u})}{\phi_{m+1}},~~&\text{if}~~\phi_{m+1}\neq0, \vspace{0.2cm}\\ \ds f'(\bar{u}),~~~~~~~~~~~~~~~~~~~~~~&\text{if}~~\phi_{m+1}=0, \end{array} \right. \end{eqnarray*} and \begin{equation}\label{3.8} \begin{array}{ll} \bar{\mathcal {S}}&=\ds f(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})-f(u_{\textbf{Q}_{m}})-(1+\epsilon \tilde{V}(x))z_{Q_{m+1}}-\Big(\frac{\nabla}{i}-A_{\epsilon}(x)\Big)^{2}z_{Q_{m+1}}\vspace{0.2cm}\\ &=\ds f(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})-f(u_{\textbf{Q}_{m}})-f(z_{Q_{m+1}})-\epsilon \tilde{V}(x)z_{Q_{m+1}}+\frac{\epsilon}{i}div\tilde{A}(x)z_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+2\frac{\epsilon}{i}\tilde{A}(x)\cdot\nabla z_{Q_{m+1}}-2\epsilon A_{0}\cdot \tilde{A}z_{Q_{m+1}}-\epsilon^{2}|\tilde{A}(x)|^{2}z_{Q_{m+1}}\vspace{0.2cm}\\ &=\ds f(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})-f(u_{\textbf{Q}_{m}})-f(z_{Q_{m+1}})-\epsilon \tilde{V}(x)z_{Q_{m+1}}+\frac{\epsilon}{i}div\tilde{A}(x)z_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+2\frac{\epsilon}{i}\xi_{j}\tilde{A}(x)\cdot\nabla w_{Q_{m+1}}-\epsilon^{2}|\tilde{A}(x)|^{2}z_{Q_{m+1}}. \end{array} \end{equation} We now proceed in a few steps. First we estimate the $L^{2}$-norm of $\bar{\mathcal {S}}$. By the estimate in Proposition \ref{prop2.3}, we have \begin{equation}\label{3.9} \int|f(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})-f(u_{\textbf{Q}_{m}})-f(z_{Q_{m+1}})|^{2}\leq Ce^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|).
\end{equation} We also have $$\int|\epsilon \tilde{V}(x)z_{Q_{m+1}}|^{2}\leq C\epsilon^{2}\int \tilde{V}(x)^{2}w_{Q_{m+1}}^{2},$$ $$\int \Big|\frac{\epsilon}{i} div\tilde{A} z_{Q_{m+1}} \Big|^{2}\leq C\epsilon^{2}\int |div\tilde{A}|^{2}w_{Q_{m+1}}^{2},$$ $$\int \Big|2\frac{\epsilon}{i}\xi_{j} \tilde{A}(x)\cdot\nabla w_{Q_{m+1}} \Big|^{2}\leq C\epsilon^{2}\int |\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}$$ and \begin{equation}\label{3.10} \int \bigl|\epsilon^{2}|\tilde{A}(x)|^{2}z_{Q_{m+1}}\bigl|^{2}\leq C\epsilon^{4}\int |\tilde{A}(x)|^{4}w_{Q_{m+1}}^{2}. \end{equation} It follows from \eqref{3.8} to \eqref{3.10} that \begin{equation}\label{3.11} \begin{array}{ll} \|\bar{\mathcal {S}}\|^{2}_{L^{2}}&\leq \ds Ce^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)+C\epsilon^{2}\int \tilde{V}(x)^{2}w_{Q_{m+1}}^{2}+C\epsilon^{2}\int |div\tilde{A}|^{2}w_{Q_{m+1}}^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+C\epsilon^{2}\int |\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}+C\epsilon^{4}\int |\tilde{A}(x)|^{4}w_{Q_{m+1}}^{2}. \end{array} \end{equation} By the estimate \eqref{3.2}, we have the following estimate \begin{equation}\label{3.12} \Bigl|\tilde{u}-\sum_{j=1}^{m+1}z(x-Q_{j})\Bigl|=O(e^{-\beta\mu}). \end{equation} Decompose $\phi_{m+1}$ as \begin{equation}\label{3.13} \phi_{m+1}=\psi+\sum_{j=1}^{m+1}\sum_{l=1}^{n}g_{j,l}\tilde{\varphi}_{j,l}+\sum_{j=1}^{m+1}\sum_{k=1}^{N+1}d_{j,k}D_{j,k} \end{equation} for some $g_{j,l}$ , $d_{j,k}$ such that \begin{equation}\label{3.14} Re\int\psi\bar{\tilde{\varphi}}_{j,l}=Re\int\psi \bar{D}_{j,k}=0,j=1,\ldots,m+1,k=1,\ldots,N+1,l=1,\ldots,n. 
\end{equation} Since \begin{equation}\label{3.15} \phi_{m+1}=\varphi_{\sigma,\textbf{Q}_{m+1}}-\varphi_{\sigma,\textbf{Q}_{m}}, \end{equation} we have for $j = 1, \ldots, m,$ \begin{equation}\label{3.16} d_{j,k}=Re\int\phi_{m+1}\bar{D}_{j,k}=Re\int(\varphi_{\sigma,\textbf{Q}_{m+1}}-\varphi_{\sigma,\textbf{Q}_{m}})\bar{D}_{j,k}=0 \end{equation} and \begin{equation}\label{3.17} d_{m+1,k}=Re\int\phi_{m+1}\bar{D}_{m+1,k}=Re\int(\varphi_{\sigma,\textbf{Q}_{m+1}} -\varphi_{\sigma,\textbf{Q}_{m}})\bar{D}_{m+1,k}=-Re\int\varphi_{\sigma,\textbf{Q}_{m}}\bar{D}_{m+1,k}, \end{equation} where we use the orthogonality conditions satisfied by $\varphi_{\sigma,\textbf{Q}_{m}}$ and $\varphi_{\sigma,\textbf{Q}_{m+1}}$. Hence by Proposition \ref{prop2.3}, we have \begin{equation}\label{3.18} \ds d_{j,k}=0,~for~ j = 1, \ldots, m,\,\,\,\text{and}\,\,\, \ds |d_{m+1,k}|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|Q_{j}-Q_{m+1}|}. \end{equation} By \eqref{3.13}, we can rewrite \eqref{3.7} as \begin{equation}\label{3.19} \begin{array}{ll} \bar{L}(\psi)+\ds\sum_{j=1}^{m+1}\sum_{l=1}^{n}g_{j,l}\bar{L}(\tilde{\varphi}_{j,l})+\ds\sum_{j=1}^{m+1}\sum_{k=1}^{N+1}d_{j,k}\bar{L}(D_{j,k}) =-\bar{\mathcal {S}}+\ds\sum_{j=1}^{m+1}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}. \end{array} \end{equation} In order to estimate the coefficients $g_{j,l}$, we use the equation \eqref{3.19}. 
First, multiplying \eqref{3.19} by $\tilde{\varphi}_{j,l}$ and integrating over $\mathbb{R}^{N}$, we have \begin{equation}\label{3.20} \begin{array}{ll} \ds~Re~g_{j,l}\int\bar{L}(\tilde{\varphi}_{j,l})\bar{\tilde{\varphi}}_{j,l}&=-\ds\sum_{j=1}^{m+1}\sum_{k=1}^{N+1}Re~d_{j,k}\int\bar{L}(D_{j,k})\bar{\tilde{\varphi}}_{j,l} -\sum_{k\neq l}Re~g_{j,k}\int\bar{L}(\tilde{\varphi}_{j,k})\bar{\tilde{\varphi}}_{j,l} -Re\int\bar{\mathcal {S}}\bar{\tilde{\varphi}}_{j,l}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-Re\int\bar{L}(\psi)\bar{\tilde{\varphi}}_{j,l}, \end{array} \end{equation} where \begin{eqnarray}\label{3.21} \left\{ \begin{array}{ll} \ds \Bigl|Re\int\bar{\mathcal {S}}\bar{\tilde{\varphi}}_{j,l}\Bigl|\leq Ce^{-\beta\mu}e^{-\gamma|Q_{j}-Q_{m+1}|}+\epsilon\Bigl|Re\int \tilde{V}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl|+2\epsilon\Bigl|Re\int \frac{1}{i}\xi_{j}\tilde{A}(x)\cdot\nabla w_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl| \vspace{0.2cm}\\ \quad\quad\quad\quad\quad\quad +\epsilon\Bigl|Re\ds\int \frac{1}{i} div \tilde{A}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl|+\epsilon^{2}\Bigl|Re\int |\tilde{A}(x)|^{2}z_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl|,~j=1,\ldots,m,\vspace{0.2cm}\\ \ds \Bigl|Re\int\bar{\mathcal {S}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|Q_{j}-Q_{m+1}|}+\epsilon\Bigl|Re\int \tilde{V}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl| \vspace{0.2cm}\\ \quad\quad\quad\quad\quad\quad\quad +2\epsilon\Bigl|Re\ds\int\frac{1}{i}\xi_{j} \tilde{A}(x)\cdot\nabla w_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl|+\epsilon\Bigl|Re\int\frac{1}{i} div \tilde{A}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl|\vspace{0.2cm}\\ \quad\quad\quad\quad\quad\quad\quad +\epsilon^{2}\Bigl|Re\ds\int |\tilde{A}(x)|^{2}z_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl|. \end{array} \right.
\end{eqnarray} By the definition of $\tilde{\varphi}_{j,l}$, we have $$\bar{L}(\tilde{\varphi}_{j,k}) = \lambda_{k}\tilde{\varphi}_{j,k} + O ( e^{-\beta\mu} ),$$ thus one has \begin{equation}\label{3.22} Re\int\bar{L}(\tilde{\varphi}_{j,k})\bar{\tilde{\varphi}}_{j,l} = \delta_{k,l}\lambda_{k} \int|\tilde{\varphi}_{0,k}|^{2} + O ( e^{-\beta\mu} ). \end{equation} Recalling the definition of $\psi$ and the self-adjointness of $\bar{L}$, we have \begin{equation*} \begin{array}{ll} \ds~Re~\int\bar{L}(\psi)\bar{\tilde{\varphi}}_{j,l}&=\ds~Re~\int\psi\overline{\bar{L}(\tilde{\varphi}_{j,l})}=\ds \lambda_{l}~Re\int\psi\bar{\tilde{\varphi}}_{j,l} + O ( e^{-\beta\mu} )\|\psi\|_{H^{1}(B_{\frac{\mu}{2}} (Q_{j}))}\vspace{0.2cm}\\ &=\ds O ( e^{-\beta\mu} )\|\psi\|_{H^{1}(B_{\frac{\mu}{2}} (Q_{j}))}. \end{array} \end{equation*} Combining \eqref{3.18}, \eqref{3.20}, \eqref{3.21} and \eqref{3.22} with the orthogonality conditions satisfied by $\psi$, we obtain \begin{eqnarray}\label{3.23} \left\{ \begin{array}{ll} \ds |g_{j,l}|\leq Ce^{-\beta\mu}e^{-\gamma|Q_{j}-Q_{m+1}|}+\epsilon\Bigl|Re\int \tilde{V}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl|+2\epsilon\Bigl|Re\int \frac{1}{i}\xi_{j}\tilde{A}(x)\cdot\nabla w_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl| \vspace{0.2cm}\\ \quad\quad\quad+\epsilon\Bigl|Re\ds\int \frac{1}{i} div \tilde{A}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl|+\epsilon^{2}\Bigl|Re\int |\tilde{A}(x)|^{2}z_{Q_{m+1}}\bar{\tilde{\varphi}}_{j,l}\Bigl|\vspace{0.2cm}\\ \quad\quad\quad+e^{-\beta\mu}\|\psi\|_{H^{1}(B_{\frac{\mu}{2}} (Q_{j}))},~j=1,\ldots,m,\vspace{0.2cm}\\ \ds |g_{m+1,l}|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|Q_{j}-Q_{m+1}|}+\epsilon\Bigl|Re\int \tilde{V}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl| \vspace{0.2cm}\\ \quad\quad\quad+2\epsilon\Bigl|Re\ds\int\frac{1}{i} \xi_{j}\tilde{A}(x)\cdot\nabla w_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl|+\epsilon\Bigl|Re\int\frac{1}{i} div \tilde{A}(x)z_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl|\vspace{0.2cm}\\
\quad\quad\quad+\epsilon^{2}\Bigl|Re\ds\int |\tilde{A}(x)|^{2}z_{Q_{m+1}}\bar{\tilde{\varphi}}_{m+1,l}\Bigl|+e^{-\beta\mu}\|\psi\|_{H^{1}(B_{\frac{\mu}{2}} (Q_{m+1} ))}. \end{array} \right. \end{eqnarray} Next, we estimate $\psi$. Multiplying \eqref{3.19} by $\psi$ and integrating over $\mathbb{R}^{N}$, we find \begin{equation}\label{3.24} \begin{array}{ll} Re\ds\int\bar{L}(\psi)\bar{\psi}&=-Re\ds\int\bar{\mathcal {S}}\bar{\psi}-\sum_{j=1}^{m+1}\sum_{k=1}^{N+1}d_{j,k}Re\int\bar{L}(D_{j,k})\bar{\psi} -\ds\sum_{j=1}^{m+1}\sum_{l=1}^{n}g_{j,l}Re\int\bar{L}(\tilde{\varphi}_{j,l})\bar{\psi}. \end{array} \end{equation} We claim that \begin{equation}\label{3.25} Re\int(-\bar{L}(\psi)\bar{\psi})\geq c_{0}\|\psi\|^{2}_{H^{1}} \end{equation} for some constant $c_{0} > 0$. Since the approximate solution decays exponentially away from the points $Q_{j}$, we have \begin{equation}\label{3.26} Re\int_{\mathbb{R}^{N}\backslash\cup_{j}B_{\frac{\mu}{2}}(Q_{j})}(-\bar{L}(\psi)\bar{\psi})\geq \frac{1}{2}\int_{\mathbb{R}^{N}\backslash\cup_{j}B_{\frac{\mu}{2}}(Q_{j})}(|\nabla\psi|^{2}+|\psi|^{2}). \end{equation} Now we only need to prove the above estimate in the domain $\cup_{j}B_{\frac{\mu}{2}}(Q_{j})$. We prove it by contradiction. Otherwise, there exists a sequence $\mu_{n}\rightarrow\infty$, and $Q^{(n)}_{j}$ such that $$ \int_{B_{\frac{\mu_{n}}{2}}(Q_{j}^{(n)})}(|\nabla\psi_{n}|^{2}+|\psi_{n}|^{2})=1,~Re\int_{B_{\frac{\mu_{n}}{2}}(Q_{j}^{(n)})}(-\bar{L}(\psi_{n})\bar{\psi}_{n})\rightarrow0,~as ~n\rightarrow\infty.
$$ Then we can extract from the sequence $\psi_{n}(\cdot-Q_{j}^{(n)})$ a subsequence which converges weakly in $H^{1}(\mathbb{R}^{N})$ to $\psi_{\infty}$; since $\mu_{n}\rightarrow\infty$, we have \begin{equation}\label{3.27} \int\Bigl|\bigl(\frac{\nabla}{i}-A_{0}\bigl)\psi_{\infty}\Bigl|^{2}+|\psi_{\infty}|^{2}-f'(e^{i\sigma+iA_{0}\cdot x}w)|\psi_{\infty}|^{2}=0 \end{equation} and \begin{equation}\label{3.28} Re\int\psi_{\infty}\bar{\tilde{\varphi}}_{0,l}=Re\int\psi_{\infty}\frac{\partial(\overline{e^{i\sigma+iA_{0}\cdot x}w})}{\partial x_{j}}=0,~~ j=1,\ldots,N,l=1,\ldots,n. \end{equation} It follows from \eqref{3.27} and \eqref{3.28} that $\psi_{\infty}= 0$. Therefore \begin{equation}\label{3.29} \psi_{n}\rightharpoonup0 ~weakly ~in ~H^{1}(\mathbb{R}^{N}). \end{equation} Hence, we have \begin{equation}\label{3.30} \int_{B_{\frac{\mu_{n}}{2}}(Q_{j}^{(n)})}f'(\tilde{u})|\psi_{n}|^{2}\rightarrow0,~as ~n\rightarrow\infty. \end{equation} Then $$\|\psi_{n}\|_{H^{1}(B_{\frac{\mu_{n}}{2}}(Q_{j}^{(n)}))}\rightarrow0,~as~ n\rightarrow\infty,$$ which contradicts the normalization $\int_{B_{\frac{\mu_{n}}{2}}(Q_{j}^{(n)})}(|\nabla\psi_{n}|^{2}+|\psi_{n}|^{2})=1$. Therefore \eqref{3.25} holds. It follows from \eqref{3.24} and \eqref{3.25} that \begin{equation}\label{3.31} \begin{array}{ll} &\|\psi\|^{2}_{H^{1}(\mathbb{R}^{N})}\vspace{0.2cm}\\&\leq \ds C\Bigl(\sum_{j,k}|d_{j,k}| \Bigl|Re\int\bar{L}(D_{j,k})\bar{\psi}\Bigl|+\sum_{j,l}|g_{j,l}| \Bigl|Re\int\bar{L}(\tilde{\varphi}_{j,l})\bar{\psi}\Bigl| \ds +\Bigl|Re\int\bar{\mathcal {S}}\bar{\psi}\Bigl| \Bigl) \vspace{0.2cm}\\ &\leq \ds C\Bigl(\sum_{j,k}|d_{j,k}| \|\psi\|_{H^{1}}+\sum_{j,l}|g_{j,l}| \|\psi\|_{H^{1}(B_{\frac{\mu}{2}}(Q_{j}))}+\|\bar{\mathcal {S}}\|_{L^{2}}\|\psi\|_{H^{1}}\Bigl).
\end{array} \end{equation} By \eqref{3.23} and \eqref{3.31}, we have \begin{equation}\label{3.32} \begin{array}{ll} \|\psi\|_{H^{1}(\mathbb{R}^{N})}&\leq \ds C\Bigl(\sum_{j,k}|d_{jk}|+e^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|Q_{j}-Q_{m+1}|}+\|\bar{\mathcal {S}}\|_{L^{2}}+\epsilon\int|\tilde{V}(x)||w_{Q_{m+1}}|\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds +2\epsilon\int|\tilde{A}(x)||\nabla w_{Q_{m+1}}|+\epsilon\int|div\tilde{A}(x)||w_{Q_{m+1}}|+\epsilon^{2}\int|\tilde{A}(x)|^{2}|w_{Q_{m+1}}|\Bigl). \end{array} \end{equation} From \eqref{3.11}, \eqref{3.18} and \eqref{3.32}, recalling that $ \gamma> \frac{1}{2}$ , we get \begin{equation}\label{3.33} \begin{array}{ll} &\|\phi_{m+1}\|_{H^{1}(\mathbb{R}^{N})}\\&\leq \ds C\Bigl[e^{-\beta\mu}\sum_{j=1}^{m}e^{-\gamma|Q_{j}-Q_{m+1}|}+\epsilon\int|\tilde{V}(x)||w_{Q_{m+1}}|+2\epsilon\int|\tilde{A}(x)||\nabla w_{Q_{m+1}}|\vspace{0.2cm}\\ &\quad\quad\ds +\epsilon\int|div\tilde{A}(x)||w_{Q_{m+1}}|+\epsilon^{2}\int|\tilde{A}(x)|^{2}|w_{Q_{m+1}}|\vspace{0.2cm}\\ &\quad\quad\ds +e^{-\beta\mu}\bigl(\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\bigl)^{\frac{1}{2}}+\epsilon\bigl(\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\vspace{0.2cm}\\ &\quad\quad\ds +\epsilon\bigl(\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}+\epsilon\bigl(\int|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}+\epsilon^{2}\bigl(\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\Bigl]. \end{array} \end{equation} Since we choose $ \gamma> \frac{1}{2}$, by the definition of the configuration space, we have \begin{equation}\label{3.34} \Bigl(\sum_{j=1}^{m}e^{-\gamma|Q_{j}-Q_{m+1}|}\Bigl)^{2}\leq C\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|). 
\end{equation} It follows from \eqref{3.33} and \eqref{3.34} that \begin{equation}\label{3.35} \begin{array}{ll} &\|\phi_{m+1}\|_{H^{1}(\mathbb{R}^{N})}\\ &\leq \ds C\Bigl[\epsilon\int|\tilde{V}(x)||w_{Q_{m+1}}|+2\epsilon\int|\tilde{A}(x)||\nabla w_{Q_{m+1}}|+\epsilon\int|div\tilde{A}(x)||w_{Q_{m+1}}|\vspace{0.2cm}\\ &\quad\quad\ds +\epsilon^{2}\int|\tilde{A}(x)|^{2}|w_{Q_{m+1}}|+e^{-\beta\mu}\bigl(\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\bigl)^{\frac{1}{2}}+\epsilon\bigl(\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\vspace{0.2cm}\\ &\quad\quad\ds +\epsilon\bigl(\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}+\epsilon\bigl(\int|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}+\epsilon^{2}\bigl(\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\Bigl]. \end{array} \end{equation} Hence \eqref{3.5} holds. Moreover, from the estimates \eqref{3.18} and \eqref{3.23}, and taking into consideration that $\eta_{j}$ is supported in $B_{\frac{\mu}{2}}(Q_{j})$, using H\"{o}lder's inequality, we can get a more accurate estimate on $\phi_{m+1}$, \begin{equation}\label{3.36} \begin{array}{ll} &\|\phi_{m+1}\|_{H^{1}(\mathbb{R}^{N})}\\ &\leq \ds C\Bigl[\epsilon\sum_{j=1}^{m+1}\bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}} +2\epsilon\sum_{j=1}^{m+1}\bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\vspace{0.2cm}\\ &\quad\quad\ds+\epsilon\sum_{j=1}^{m+1}\bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}} +\epsilon^{2}\sum_{j=1}^{m+1}\bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\vspace{0.2cm}\\ &\quad\quad\ds+e^{-\beta\mu}\bigl(\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\bigl)^{\frac{1}{2}}+\epsilon\bigl(\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}+\epsilon^{2}\bigl(\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\vspace{0.2cm}\\
&\quad\quad\ds +\epsilon\bigl(\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}+\epsilon\bigl(\int|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\bigl)^{\frac{1}{2}}\Bigl]. \end{array} \end{equation} \end{proof} \section{Proof of the main result}\label{s1} In this section, first we study a maximization problem. Then we prove our main result. For fixed $(\sigma,\textbf{Q}_{m})\in [0,2\pi]\times\Omega_{m}$, we define a new functional \begin{equation}\label{4.1} \mathcal {M}(\sigma,\textbf{Q}_{m}) = J(u_{\textbf{Q}_{m}}) = J(z_{\textbf{Q}_{m}} + \varphi_{\sigma,\textbf{Q}_{m}}) : [0,2\pi]\times\Omega_{m}\rightarrow \mathbb{R}. \end{equation} Since $z_{\textbf{Q}_{m}}$ and $\varphi_{\sigma,\textbf{Q}_{m}}$ are both $2\pi$-periodic with respect to $\sigma$, we only need to consider the maximization problem of $\mathcal {M}(\sigma,\textbf{Q}_{m})$ with respect to $\textbf{Q}_{m}$ in $\Omega_{m}$. So in the sequel, for simplicity we denote $\mathcal {M}(\sigma,\textbf{Q}_{m})$ by $\mathcal {M}(\textbf{Q}_{m})$. Define \begin{equation}\label{4.2} \mathcal {C}_{m}=\sup_{\textbf{Q}_{m}\in\Omega_{m}}\mathcal {M}(\textbf{Q}_{m}). \end{equation} Note that $\mathcal {M}(\textbf{Q}_{m})$ is continuous in $\textbf{Q}_{m}$. We will show below that the maximization problem has a solution. Let $\mathcal {M}(\bar{\textbf{Q}}_{m})$ be the maximum, where $\bar{\textbf{Q}}_{m} = (\bar{Q}_{1}, \ldots, \bar{Q}_{m}) \in \bar{\Omega}_{m}$, that is, \begin{equation}\label{4.3} \mathcal {M}(\bar{\textbf{Q}}_{m})=\max_{\textbf{Q}_{m}\in\Omega_{m}}\mathcal {M}(\textbf{Q}_{m}), \end{equation} and we denote the corresponding solution by $u_{\bar{\textbf{Q}}_{m}}$. First we prove that each $\mathcal {C}_{m}$ is attained at finite points. \begin{lem}\label{lem4.1} Let assumptions $(A1)-(A4)$, $(V1)-(V2)$ and the assumptions in Proposition 2.4 be satisfied.
Then, for all $m$:\\ (i) There exists $\textbf{Q}_{m} \in\Omega_{m}$ such that \begin{equation}\label{4.4} \mathcal {C}_{m} =\mathcal {M}(\textbf{Q}_{m}); \end{equation} (ii) There holds \begin{equation}\label{4.5} \mathcal {C}_{m+1} > \mathcal {C}_{m} + I(z), \end{equation} where $I(z)$ is the energy of the solution $z$ of \eqref{ea0}: \begin{equation}\label{4.6} I(z)=\frac{1}{2}\int\Bigl|\Big(\frac{\nabla}{i}-A_{0}\Big)z\Bigl|^{2}+|z|^{2}-\int F(z). \end{equation} \end{lem} \begin{proof} We divide the proof into the following two steps. $\textbf{Step 1}$: $\mathcal {C}_{1} > I(z)$, and $\mathcal {C}_{1}$ can be attained at a finite point. First, applying the standard Lyapunov-Schmidt reduction, we have \begin{equation}\label{4.7} \|\varphi_{\sigma,Q}\|_{H^{1}}\leq C\|\epsilon \tilde{V}z_{Q}\|_{L^{2}}+C\bigl\|\epsilon^{2} |\tilde{A}|^{2}z_{Q}\bigl\|_{L^{2}}. \end{equation} Assume that $|Q| \rightarrow\infty$; then we have \begin{equation}\label{4.8} \begin{array}{ll} J(u_{Q})&=\ds\frac{1}{2}\int\Bigl|\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)u_{Q}\Bigl|^{2}+V_{\epsilon}(x)|u_{Q}|^{2}-\int F(u_{Q}) \vspace{0.2cm}\\ &=\ds\frac{1}{2}\int\Bigl|\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) z_{Q}\Bigl|^{2}+\frac{1}{2}\int\Bigl|\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) \varphi_{\sigma,Q}\Bigl|^{2} +Re\int\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) z_{Q}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,Q}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\frac{1}{2}\int\epsilon^{2}|\tilde{A}(x)|^{2}|z_{Q}|^{2}+\frac{1}{2}\int\epsilon^{2}|\tilde{A}(x)|^{2}|\varphi_{\sigma,Q}|^{2}+Re\int\epsilon^{2}|\tilde{A}(x)|^{2}z_{Q}\bar{\varphi}_{\sigma,Q}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-Re\int \varepsilon \tilde{A}(x)\Bigl(\frac{\nabla w_{Q}}{i}\xi_{Q}\bar{\varphi}_{\sigma,Q}+\frac{\nabla \varphi_{\sigma,Q}}{i}\bar{z}_{Q}+\frac{\nabla \varphi_{\sigma,Q}}{i}\bar{\varphi}_{\sigma,Q} -A_{0}\varphi_{\sigma,Q}\bar{z}_{Q}-A_{0}\varphi_{\sigma,Q}\bar{\varphi}_{\sigma,Q}\Bigl)\vspace{0.2cm}\\
&\,\,\,\,\,\,\ds+\frac{1}{2}\int|z_{Q}|^{2}+\frac{1}{2}\int|\varphi_{\sigma,Q}|^{2}+Re\int z_{Q}\bar{\varphi}_{\sigma,Q}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\frac{1}{2}\int\epsilon\tilde{V}(x)\bigl(|z_{Q}|^{2}+|\varphi_{\sigma,Q}|^{2}+2Rez_{Q}\bar{\varphi}_{\sigma,Q}\bigl)-\int F(u_{Q})\vspace{0.2cm}\\ &=\ds\frac{1}{2}\int\Bigl|\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) z_{Q}\Bigl|^{2}+\frac{1}{2}\int|z_{Q}|^{2}-\int F(z_{Q})+\frac{1}{2}\int\Bigl|\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) \varphi_{\sigma,Q}\Bigl|^{2}+\frac{1}{2}\int|\varphi_{\sigma,Q}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\int F(z_{Q})-\int F(u_{Q})+Re\int\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) z_{Q}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,Q}}+Re\int z_{Q}\bar{\varphi}_{\sigma,Q}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\frac{1}{2}\int\epsilon\tilde{V}(x)|z_{Q}|^{2}+\frac{1}{2}\int\epsilon^{2}|\tilde{A}(x)|^{2}|z_{Q}|^{2}+\frac{1}{2}\int\epsilon^{2}|\tilde{A}(x)|^{2}|\varphi_{\sigma,Q}|^{2} \vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+Re\int\epsilon^{2}|\tilde{A}(x)|^{2}z_{Q}\bar{\varphi}_{\sigma,Q}+\frac{1}{2}\int\epsilon\tilde{V}(x)|\varphi_{\sigma,Q}|^{2}+Re\int\epsilon\tilde{V}(x)z_{Q}\bar{\varphi}_{\sigma,Q}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-Re\int \varepsilon \tilde{A}(x)\Bigl(\frac{\nabla w_{Q}}{i}\xi_{Q}\bar{\varphi}_{\sigma,Q}+\frac{\nabla \varphi_{\sigma,Q}}{i}\bar{z}_{Q}+\frac{\nabla \varphi_{\sigma,Q}}{i}\bar{\varphi}_{\sigma,Q} -A_{0}\varphi_{\sigma,Q}\bar{z}_{Q}-A_{0}\varphi_{\sigma,Q}\bar{\varphi}_{\sigma,Q}\Bigl)\vspace{0.2cm}\\ &\geq\ds I(z)+\frac{\epsilon}{4}\int \tilde{V}(x)|w_{Q}|^{2}+\frac{\epsilon^{2}}{4}\int |\tilde{A}(x)|^{2}|w_{Q}|^{2}-C\|\varphi_{\sigma,Q}\|_{H^{1}}^{2}-\delta\epsilon\int|div\tilde{A}(x)|^{2}|w_{Q}|^{2}\vspace{0.2cm}\\ &\geq\ds I(z)+\frac{\epsilon}{4}\int \tilde{V}(x)|w_{Q}|^{2}+\frac{\epsilon^{2}}{4}\int |\tilde{A}(x)|^{2}|w_{Q}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\delta\epsilon\int|div\tilde{A}(x)|^{2}|w_{Q}|^{2}-\int\epsilon^{2} \tilde{V}^{2}(x)|w_{Q}|^{2}-\int\epsilon^{4} 
|\tilde{A}(x)|^{2}|w_{Q}|^{2}\vspace{0.2cm}\\ &\geq\ds I(z)+\frac{1}{8}\Bigl(\int_{B_{\frac{\rho}{2}}(Q)} \epsilon \tilde{V}(x)|w_{Q}|^{2}-\sup_{B_{\frac{|Q|}{4}}(0)}|w_{Q}|^{\frac{3}{2}}\int_{supp \tilde{V}^{-}} \epsilon|\tilde{V}(x)|w_{Q}^{\frac{1}{2}}\Bigl)\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\frac{1}{8}\Bigl(\int_{B_{\frac{\rho}{2}}(Q)} \epsilon^{2} |\tilde{A}|^{2}|w_{Q}|^{2}-\sup_{B_{\frac{|Q|}{4}}(0)}|w_{Q}|^{\frac{3}{2}}\int_{supp \tilde{A}^{-}} \epsilon^{2}|\tilde{A}(x)|^{2}w_{Q}^{\frac{1}{2}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\sup_{B_{\frac{|Q|}{4}}(0)}|w_{Q}|^{\frac{3}{2}}\int_{supp \tilde{A}^{-}} \epsilon^{2}|div\tilde{A}(x)|^{2}w_{Q}^{\frac{1}{2}}\Bigl)\vspace{0.2cm}\\ \end{array} \end{equation} \begin{equation*} \begin{array}{ll} &\geq\ds I(z)+\frac{1}{8}\int_{B_{\frac{\rho}{2}}(Q)} \epsilon \tilde{V}(x)|w_{Q}|^{2}+\frac{1}{8}\int_{B_{\frac{\rho}{2}}(Q)} \epsilon^{2} |\tilde{A}|^{2}|w_{Q}|^{2}-O(e^{-\frac{9}{8}|Q|}), \end{array} \end{equation*} where we use the fact that \begin{equation}\label{4.9} \begin{array}{ll} &\ds \frac{1}{2}\int\epsilon\tilde{V}(x)|z_{Q}|^{2}+\frac{1}{2}\int\epsilon^{2}|\tilde{A}(x)|^{2}|z_{Q}|^{2} +Re\int\epsilon^{2}|\tilde{A}(x)|^{2}z_{Q}\bar{\varphi}_{\sigma,Q}+Re\int\epsilon\tilde{V}(x)z_{Q}\bar{\varphi}_{\sigma,Q}\vspace{0.2cm}\\ &\ds -Re\int \varepsilon \tilde{A}(x)\Bigl(\frac{\nabla w_{Q}}{i}\xi_{Q}\bar{\varphi}_{\sigma,Q}+\frac{\nabla \varphi_{\sigma,Q}}{i}\bar{z}_{Q}+\frac{\nabla \varphi_{\sigma,Q}}{i}\bar{\varphi}_{\sigma,Q} -A_{0}\varphi_{\sigma,Q}\bar{z}_{Q}-A_{0}\varphi_{\sigma,Q}\bar{\varphi}_{\sigma,Q}\Bigl)\vspace{0.2cm}\\ &\ds\leq \frac{\epsilon}{4}\int \tilde{V}(x)|w_{Q}|^{2}+\frac{\epsilon^{2}}{4}\int |\tilde{A}(x)|^{2}|w_{Q}|^{2}-C\|\varphi_{\sigma,Q}\|_{H^{1}}^{2}-\delta\epsilon\int|div\tilde{A}(x)|^{2}|w_{Q}|^{2}. 
\end{array} \end{equation} By the slow decay assumption on the potentials $\tilde{V}(x)$ and $\tilde{A}(x)$, we have $$\frac{1}{8}\int_{B_{\frac{\rho}{2}}(Q)} \epsilon \tilde{V}(x)|w_{Q}|^{2}+\frac{1}{8}\int_{B_{\frac{\rho}{2}}(Q)} \epsilon^{2} |\tilde{A}|^{2}|w_{Q}|^{2}-O(e^{-\frac{9}{8}|Q|})>0,~for~|Q|~large.$$ So \begin{equation}\label{4.10} \mathcal {C}_{1}\geq J(u_{Q})>I(z). \end{equation} Now we will prove that $\mathcal {C}_{1}$ can be attained at a finite point. Let $\{Q_{j}\}$ be a sequence such that $\lim\limits_{j\rightarrow\infty}\mathcal {M}(Q_{j}) = \mathcal {C}_{1}$, and assume by contradiction that $|Q_{j}|\rightarrow+\infty$. Then \begin{equation}\label{4.11} \begin{array}{ll} J(u_{Q_{j}})&=\ds\frac{1}{2}\int\Bigl|(\frac{\nabla}{i}-A_{\epsilon}(x))u_{Q_{j}}\Bigl|^{2}+V_{\epsilon}(x)|u_{Q_{j}}|^{2}-\int F(u_{Q_{j}}) \vspace{0.2cm}\\ &=\ds I(z)+\frac{1}{2}\int\Bigl|\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) \varphi_{\sigma,Q_{j}}\Bigl|^{2}+\frac{1}{2}\int|\varphi_{\sigma,Q_{j}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+Re\int\Bigl(\frac{\nabla}{i}-A_{0}\Bigl) z_{Q_{j}}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,Q_{j}}}+Re\int z_{Q_{j}}\bar{\varphi}_{\sigma,Q_{j}}-Re\int f(z_{Q_{j}})\bar{\varphi}_{\sigma,Q_{j}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\int(F(u_{Q_{j}})-F(z_{Q_{j}}))+Re\int f(z_{Q_{j}})\bar{\varphi}_{\sigma,Q_{j}}+\frac{\epsilon}{2}\int \tilde{V}(x)|u_{Q_{j}}|^{2}+\frac{\epsilon}{2}\int |\tilde{A}(x)|^{2}|u_{Q_{j}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-Re\int \varepsilon \tilde{A}(x)\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)(z_{Q_{j}}+\varphi_{\sigma,Q_{j}})\overline{(z_{Q_{j}}+\varphi_{\sigma,Q_{j}})}\vspace{0.2cm}\\ &\leq\ds I(z)+C\|\varphi_{\sigma,Q_{j}}\|^{2}+\frac{\epsilon}{2}\int \tilde{V}(x)|u_{Q_{j}}|^{2}+\frac{\epsilon}{2}\int |\tilde{A}(x)|^{2}|u_{Q_{j}}|^{2}\vspace{0.2cm}\\ &\leq\ds I(z)+O\Big(\int\epsilon^{2}\tilde{V}^{2}|z_{Q_{j}}|^{2}\Big)+\frac{\epsilon}{2}\int \tilde{V}(x)|u_{Q_{j}}|^{2}+\frac{\epsilon}{2}\int |\tilde{A}(x)|^{2}|u_{Q_{j}}|^{2}.
\end{array} \end{equation} Since $|\tilde{A}(x)| \rightarrow 0$ and $\tilde{V}(x) \rightarrow 0$ as $|x|\rightarrow \infty$, we have $$O\Big(\int\epsilon^{2}\tilde{V}^{2}|z_{Q_{j}}|^{2}\Big)+\frac{\epsilon}{2}\int \tilde{V}(x)|u_{Q_{j}}|^{2}+\frac{\epsilon}{2}\int |\tilde{A}(x)|^{2}|u_{Q_{j}}|^{2}\rightarrow0,$$ so we have $$\mathcal {C}_{1}=\lim\limits_{j\rightarrow\infty}J(u_{Q_{j}})\leq I(z),$$ which contradicts \eqref{4.10}. Thus $\mathcal {C}_{1}$ can be attained at a finite point. $\textbf{Step 2}$: Assume that there exists $\bar{\textbf{Q}}_{m} = (\bar{Q}_{1},\ldots, \bar{Q}_{m}) \in\Omega_{m}$ such that $\mathcal {C}_{m} =\mathcal {M}(\bar{\textbf{Q}}_{m})$, and denote the corresponding solution by $u_{\bar{\textbf{Q}}_{m}}$. Next we prove that there exists $\textbf{Q}_{m+1} \in\Omega_{m+1}$ such that $\mathcal {C}_{m+1}$ can be attained. Let $\textbf{Q}_{m+1}^{(n)}$ be a sequence such that \begin{equation}\label{4.12} \mathcal {C}_{m+1}=\lim\limits_{n\rightarrow\infty}\mathcal {M}(\textbf{Q}_{m+1}^{(n)}). \end{equation} We claim that $\textbf{Q}_{m+1}^{(n)}$ is bounded, and prove this by contradiction. In the following we omit the index $n$ for simplicity. 
By direct computation, we have \begin{equation}\label{4.13} \begin{array}{ll} &J(u_{\textbf{Q}_{m+1}})=J(u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\phi_{m+1}) \vspace{0.2cm}\\ &=\ds\frac{1}{2}\int\Bigl|\Big(\frac{\nabla}{i}-A_{\epsilon}(x)\Big)(u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\phi_{m+1})\Bigl|^{2}+V_{\epsilon}(x)|u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\phi_{m+1}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\int F(u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\phi_{m+1}) \vspace{0.2cm}\\ &=\ds J(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})+Re\int\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)u_{\textbf{Q}_{m}}\overline{\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)\phi_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds +Re\int V_{\epsilon}(x)u_{\textbf{Q}_{m}}\bar{\phi}_{m+1}-Re\int f(u_{\textbf{Q}_{m}})\bar{\phi}_{m+1}\ds+Re\int f(u_{\textbf{Q}_{m}})\bar{\phi}_{m+1}-Re\int f(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})\bar{\phi}_{m+1}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\int F(u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\phi_{m+1})+\int F(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})+Re\int f(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})\bar{\phi}_{m+1}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+Re\int\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)z_{Q_{m+1}}\overline{\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)\phi_{m+1}}+\frac{1}{2}\int V_{\epsilon}(x)|\phi_{m+1}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds +\frac{1}{2}\int\Bigl|(\frac{\nabla}{i}-A_{\epsilon}(x))\phi_{m+1}\Bigl|^{2}+Re\int V_{\epsilon}(x)z_{Q_{m+1}}\bar{\phi}_{m+1}\vspace{0.2cm}\\ &=\ds J(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})-\int\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{jk}D_{jk}\phi_{m+1}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+Re\int f(u_{\textbf{Q}_{m}})\bar{\phi}_{m+1}-Re\int f(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})\bar{\phi}_{m+1}+Re\int f(z_{Q_{m+1}})\bar{\phi}_{m+1}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\int f'(u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\vartheta\phi_{m+1})|\phi_{m+1}|^{2}+\frac{1}{2}\int\Bigl|\Big(\frac{\nabla}{i}-A_{\epsilon}(x)\Big)\phi_{m+1}\Bigl|^{2}\vspace{0.2cm}\\ 
&\,\,\,\,\,\,\ds+Re\int\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)z_{Q_{m+1}}\overline{\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)\phi_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds +\frac{1}{2}\int V_{\epsilon}(x)|\phi_{m+1}|^{2}+Re\int V_{\epsilon}(x)z_{Q_{m+1}}\bar{\phi}_{m+1}-Re\int f(z_{Q_{m+1}})\bar{\phi}_{m+1}\vspace{0.2cm}\\ &=\ds J(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})+O(\|\phi_{m+1}\|^{2}+\|\mathcal {\bar{S}}(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})\|\|\phi_{m+1}\|)-\ds\int\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}\phi_{m+1}\vspace{0.2cm}\\ &=\ds J(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})+O\Bigl[e^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)+\epsilon^{2}\bigl(\int|\tilde{V}(x)||w_{Q_{m+1}}|\bigl)^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{4}\bigl(\int|\tilde{A}(x)|^{2}|w_{Q_{m+1}}|\bigl)^{2}+\epsilon^{2}\bigl( \int|\tilde{A}(x)||\nabla w_{Q_{m+1}}|\bigl)^{2}+\epsilon^{2}\bigl(\int|div\tilde{A}(x)||w_{Q_{m+1}}| \bigl)^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{2}\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}+\epsilon^{2}\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}+\epsilon^{2}\int|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{4}\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\Bigl]. 
\end{array} \end{equation} Moreover, we have \begin{equation}\label{4.14} \begin{array}{ll} &J(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})\vspace{0.2cm}\\ &=\ds\frac{1}{2}\int\Bigl|\Big(\frac{\nabla}{i}-A_{\epsilon}(x)\Big)(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})\Bigl|^{2} +V_{\epsilon}(x)|u_{\textbf{Q}_{m}}+z_{Q_{m+1}}|^{2} -\int F(u_{\textbf{Q}_{m}}+z_{Q_{m+1}}) \vspace{0.2cm}\\ &\leq\ds \mathcal {C}_{m}+\frac{1}{2}\int|z_{Q_{m+1}}|^{2}+\frac{1}{2}\int\Bigl|(\frac{\nabla}{i}-A_{0})z_{Q_{m+1}}\Bigl|^{2}-\int F(z_{Q_{m+1}})\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds +Re\int(1+\epsilon \tilde{V}(x))u_{\textbf{Q}_{m}}\bar{z}_{Q_{m+1}} +Re\int\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)u_{\textbf{Q}_{m}}\overline{\Bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\Bigl)z_{Q_{m+1}}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\int F(u_{\textbf{Q}_{m}}+z_{Q_{m+1}})+\int F(u_{\textbf{Q}_{m}})+\int F(z_{Q_{m+1}})+\frac{1}{2}\int\epsilon \tilde{V}(x)|z_{Q_{m+1}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\frac{1}{2}\int\epsilon^{2} |\tilde{A}(x)|^{2}|z_{Q_{m+1}}|^{2}-Re\int\epsilon \tilde{A}(x)\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)z_{Q_{m+1}} \bar{z}_{Q_{m+1}}\vspace{0.2cm}\\ &\leq\ds \mathcal {C}_{m}+I(z)+\frac{1}{2}\int\epsilon \tilde{V}(x)|z_{Q_{m+1}}|^{2}+\frac{1}{2}\int\epsilon^{2} |\tilde{A}(x)|^{2}|z_{Q_{m+1}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+Re\int \bigl(f(u_{\textbf{Q}_{m}})-\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{jk}D_{j,k}\bigl)\bar{z}_{Q_{m+1}}-Re\int f(u_{\textbf{Q}_{m}})\bar{z}_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-Re\int f(z_{Q_{m+1}})\bar{u}_{\textbf{Q}_{m}}+O\Big(e^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\Big)\vspace{0.2cm}\\ &\leq\ds \mathcal {C}_{m}+I(z)+\frac{1}{2}\int\epsilon \tilde{V}(x)|z_{Q_{m+1}}|^{2}+\frac{1}{2}\int\epsilon^{2} |\tilde{A}(x)|^{2}|z_{Q_{m+1}}|^{2} -Re\int f(z_{Q_{m+1}})\bar{u}_{\textbf{Q}_{m}}\vspace{0.2cm}\\ &\quad-Re\ds\int \sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}\bar{z}_{Q_{m+1}} +O\Big(e^{-\beta\mu}\ds\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\Big). 
\end{array} \end{equation} By estimate \eqref{2.28} in Proposition \ref{prop2.3} and the definition of $D_{j,k}$, we have \begin{equation}\label{4.15} \Bigl|Re\int\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}\bar{z}_{Q_{m+1}}\Bigl|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|). \end{equation} By the equation satisfied by $\varphi_{\sigma,\textbf{Q}_{m}}$, \begin{equation}\label{4.16} L(\varphi_{\sigma,\textbf{Q}_{m}})=-\mathcal {S}(z_{\textbf{Q}_{m}})+\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})+\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}, \end{equation} we have \begin{equation}\label{4.17} \begin{array}{ll} &\ds Re\int f(z_{Q_{m+1}})\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &=\ds Re\int \Bigl(\frac{\nabla}{i}-A_{0}\Bigl)z_{Q_{m+1}}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,\textbf{Q}_{m}}}+Re\int z_{Q_{m+1}}\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\vspace{0.2cm}\\ &=\ds Re\int \Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,\textbf{Q}_{m}}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)z_{Q_{m+1}}}+Re\int \varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}\vspace{0.2cm}\\ &=\ds Re\int\Bigl(\mathcal {S}(z_{\textbf{Q}_{m}})-\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})-\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}-\epsilon \tilde{V}\varphi_{\sigma,\textbf{Q}_{m}} \vspace{0.2cm}\\ \end{array} \end{equation} \begin{equation*} \begin{array}{ll} &\,\,\,\,\,\,\ds+f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}}\Bigl)\bar{z}_{Q_{m+1}}+Re\int \Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,\textbf{Q}_{m}}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)z_{Q_{m+1}}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-Re\int\bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\bigl)\varphi_{\sigma,\textbf{Q}_{m}}\overline{\bigl(\frac{\nabla}{i}-A_{\epsilon}(x)\bigl)z_{Q_{m+1}}}\vspace{0.2cm}\\ &=\ds Re\int\Bigl(\mathcal {S}(z_{\textbf{Q}_{m}})-\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})-\sum_{j=1}^{m}\sum_{k=1}^{N+1}c_{j,k}D_{j,k}-\epsilon \tilde{V}\varphi_{\sigma,\textbf{Q}_{m}} 
\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}}\Bigl)\bar{z}_{Q_{m+1}}+Re\int\epsilon \tilde{A}\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+Re\int\epsilon \tilde{A}\varphi_{\sigma,\textbf{Q}_{m}}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)z_{Q_{m+1}}}-Re\int\epsilon^{2} |\tilde{A}|^{2}\varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}. \end{array} \end{equation*} Moreover, we can choose $\gamma$ such that $\gamma+\delta>1$ and $(1+\delta)\gamma>1$. Then we easily get \begin{equation}\label{4.18} \Bigl|Re\int(\mathcal {N}(\varphi_{\sigma,\textbf{Q}_{m}})-f'(z_{\textbf{Q}_{m}})\varphi_{\sigma,\textbf{Q}_{m}})\bar{z}_{Q_{m+1}}\Bigl|\leq Ce^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|) \end{equation} and \begin{equation}\label{4.19} \begin{array}{ll} &\ds \Bigl|Re\int(\mathcal {S}(z_{\textbf{Q}_{m}})-\epsilon \tilde{V}\varphi_{\sigma,\textbf{Q}_{m}})\bar{z}_{Q_{m+1}}+Re\int\epsilon \tilde{A}\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)\varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds +Re\int\epsilon \tilde{A}\varphi_{\sigma,\textbf{Q}_{m}}\overline{\Bigl(\frac{\nabla}{i}-A_{0}\Bigl)z_{Q_{m+1}}}-Re\int\epsilon^{2} |\tilde{A}|^{2}\varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}\Bigl|\vspace{0.2cm}\\ &=\ds \Bigl|Re\int\Bigl(f(z_{\textbf{Q}_{m}})-\sum_{j=1}^{m}f(z_{Q_{j}})-\epsilon \tilde{V}\varphi_{\sigma,\textbf{Q}_{m}}-\epsilon \tilde{V}z_{\textbf{Q}_{m}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\frac{\epsilon}{i}div\tilde{A}(x)z_{\textbf{Q}_{m}}+\frac{2\epsilon}{i}\sum_{j=1}^{m}\xi_{j}\tilde{A}(x)\cdot\nabla w_{Q_{j}}-\epsilon^{2}|\tilde{A}(x)|^{2}z_{\textbf{Q}_{m}}\Bigl)\bar{z}_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+Re\int\frac{\epsilon}{i}\tilde{A}(x)\cdot\nabla\varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}-Re\int\epsilon A_{0}\cdot \tilde{A}(x) \varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}\vspace{0.2cm}\\ 
&\,\,\,\,\,\,\ds-Re\int\frac{\epsilon}{i}\varphi_{\sigma,\textbf{Q}_{m}}\overline{\xi}_{\textbf{Q}_{m}}\tilde{A}(x)\cdot\nabla w_{Q_{m+1}}-Re\int\epsilon^{2}|\tilde{A}(x)|^{2}\varphi_{\sigma,\textbf{Q}_{m}}\bar{z}_{Q_{m+1}}\Bigl|\vspace{0.2cm}\\ &\leq\ds C\Bigl(\epsilon \int \tilde{V}w_{\textbf{Q}_{m}}w_{Q_{m+1}}+ \epsilon e^{-\beta\mu} \int \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\tilde{V}w_{Q_{m+1}}+e^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon \int |div\tilde{A}(x)|w_{\textbf{Q}_{m}}w_{Q_{m+1}} + \epsilon \int |\tilde{A}(x)||\nabla w_{\textbf{Q}_{m}}|w_{Q_{m+1}}+ \epsilon^{2} \int |\tilde{A}(x)|^{2}w_{\textbf{Q}_{m}}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}} + \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}|\nabla w_{Q_{m+1}}|+ \epsilon^{2} e^{-\beta\mu}\int |\tilde{A}(x)|^{2}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\Bigl). 
\end{array} \end{equation} From \eqref{4.16} to \eqref{4.19}, we obtain \begin{equation}\label{4.20} \begin{array}{ll} &\ds \Bigl|Re\int f(z_{Q_{m+1}})\bar{\varphi}_{\sigma,\textbf{Q}_{m}}\Bigl|\vspace{0.2cm}\\ &\leq\ds C\Bigl(\epsilon \int \tilde{V}w_{\textbf{Q}_{m}}w_{Q_{m+1}}+ \epsilon e^{-\beta\mu} \int \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\tilde{V}w_{Q_{m+1}}+e^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon \int |div\tilde{A}(x)|w_{\textbf{Q}_{m}}w_{Q_{m+1}} + \epsilon \int |\tilde{A}(x)||\nabla w_{\textbf{Q}_{m}}|w_{Q_{m+1}}+ \epsilon^{2} \int |\tilde{A}(x)|^{2}w_{\textbf{Q}_{m}}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}} + \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}|\nabla w_{Q_{m+1}}|+ \epsilon^{2} e^{-\beta\mu}\int |\tilde{A}(x)|^{2}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\Bigl). 
\end{array} \end{equation} Hence by Lemma \ref{lem3.1}, we have \begin{equation}\label{4.21} \begin{array}{ll} &\ds Re\int f(z_{Q_{m+1}})\bar{u}_{\textbf{Q}_{m}}=Re\int f(z_{Q_{m+1}})(\bar{z}_{\textbf{Q}_{m}}+\bar{\varphi}_{\sigma,\textbf{Q}_{m}})\vspace{0.2cm}\\ &\geq\ds\frac{1}{4}\vartheta\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\vspace{0.2cm}\\ &\quad\ds+ O\Bigl(\epsilon \int \tilde{V}w_{\textbf{Q}_{m}}w_{Q_{m+1}}+ \epsilon e^{-\beta\mu} \int \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\tilde{V}w_{Q_{m+1}}+e^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|) \vspace{0.2cm}\\ &\quad\ds+ \epsilon \int |div\tilde{A}(x)|w_{\textbf{Q}_{m}}w_{Q_{m+1}} + \epsilon \int |\tilde{A}(x)||\nabla w_{\textbf{Q}_{m}}|w_{Q_{m+1}}+ \epsilon^{2} \int |\tilde{A}(x)|^{2}w_{\textbf{Q}_{m}}w_{Q_{m+1}}\vspace{0.2cm}\\ &\quad\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}} + \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ &\quad\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}|\nabla w_{Q_{m+1}}|+ \epsilon^{2} e^{-\beta\mu}\int |\tilde{A}(x)|^{2}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\Bigl). 
\end{array} \end{equation} Combining \eqref{4.13}, \eqref{4.14}, \eqref{4.15} and \eqref{4.21}, we obtain \begin{equation}\label{4.22} \begin{array}{ll} &\ds J(u_{\textbf{Q}_{m+1}})=J(u_{\textbf{Q}_{m}}+z_{Q_{m+1}}+\phi_{m+1}) \vspace{0.2cm}\\ &\leq\ds \mathcal {C}_{m}+I(z)+\frac{1}{2}\int\epsilon \tilde{V}(x)|z_{Q_{m+1}}|^{2}+\frac{1}{2}\int\epsilon^{2} |\tilde{A}(x)|^{2}|z_{Q_{m+1}}|^{2}-\frac{1}{4}\vartheta\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\vspace{0.2cm}\\ &+\ds O\Bigl[\epsilon \int \tilde{V}w_{\textbf{Q}_{m}}w_{Q_{m+1}}+ \epsilon e^{-\beta\mu} \int \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\tilde{V}w_{Q_{m+1}}+e^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\vspace{0.2cm}\\ &\quad\quad\ds+ \epsilon \int |div\tilde{A}(x)|w_{\textbf{Q}_{m}}w_{Q_{m+1}} + \epsilon \int |\tilde{A}(x)||\nabla w_{\textbf{Q}_{m}}|w_{Q_{m+1}}+ \epsilon^{2} \int |\tilde{A}(x)|^{2}w_{\textbf{Q}_{m}}w_{Q_{m+1}}\vspace{0.2cm}\\ &\quad\quad\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}} + \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ \end{array} \end{equation} \begin{equation*} \begin{array}{ll} &\,\,\,\,\,\,\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}|\nabla w_{Q_{m+1}}|+ \epsilon^{2} e^{-\beta\mu}\int |\tilde{A}(x)|^{2}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{4}\Bigl(\int|\tilde{A}(x)|^{2}|w_{Q_{m+1}}|\Bigl)^{2}+\epsilon^{2}\Bigl( \int|\tilde{A}(x)||\nabla w_{Q_{m+1}}|\Bigl)^{2}+\epsilon^{2}\Bigl(\int|div\tilde{A}(x)||w_{Q_{m+1}}| \Bigl)^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{2}\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}+\epsilon^{2}\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}+\epsilon^{2}\int|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{4}\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}+\epsilon^{2}\bigl(\int|\tilde{V}(x)||w_{Q_{m+1}}|\bigl)^{2}\Bigl]. 
\end{array} \end{equation*} By the assumption that $|Q^{(n)}_{m+1}|\rightarrow+\infty$, \begin{equation}\label{4.23} \begin{array}{ll} &\ds\epsilon \int \tilde{V}w_{\textbf{Q}_{m}}w_{Q^{(n)}_{m+1}}+ \epsilon e^{-\beta\mu} \int \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\tilde{V}w_{Q^{(n)}_{m+1}}\vspace{0.2cm}\\ &\ds+ \epsilon \int |div\tilde{A}(x)|w_{\textbf{Q}_{m}}w_{Q^{(n)}_{m+1}} + \epsilon \int |\tilde{A}(x)||\nabla w_{\textbf{Q}_{m}}|w_{Q^{(n)}_{m+1}}+ \epsilon^{2} \int |\tilde{A}(x)|^{2}w_{\textbf{Q}_{m}}w_{Q^{(n)}_{m+1}}\vspace{0.2cm}\\ &\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q^{(n)}_{m+1}} + \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q^{(n)}_{m+1}}\vspace{0.2cm}\\ &\ds+ \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}|\nabla w_{Q^{(n)}_{m+1}}|+ \epsilon^{2} e^{-\beta\mu}\int |\tilde{A}(x)|^{2}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q^{(n)}_{m+1}}\vspace{0.2cm}\\ &\ds+\epsilon^{4}\bigl(\int|\tilde{A}(x)|^{2}|w_{Q^{(n)}_{m+1}}|\bigl)^{2}+\epsilon^{2}\bigl( \int|\tilde{A}(x)||\nabla w_{Q^{(n)}_{m+1}}|\bigl)^{2}+\epsilon^{2}\bigl(\int|div\tilde{A}(x)||w_{Q^{(n)}_{m+1}}| \bigl)^{2}\vspace{0.2cm}\\ &\ds+\epsilon^{2}\int|\tilde{V}(x)|^{2}|w_{Q^{(n)}_{m+1}}|^{2}+\epsilon^{2}\int|\tilde{A}(x)|^{2}|\nabla w_{Q^{(n)}_{m+1}}|^{2}+\epsilon^{2}\int|div\tilde{A}(x)|^{2}|w_{Q^{(n)}_{m+1}}|^{2}\vspace{0.2cm}\\ &\ds+\epsilon^{4}\int|\tilde{A}(x)|^{4}|w_{Q^{(n)}_{m+1}}|^{2}+\epsilon^{2} \Bigl(\int|\tilde{V}(x)||w_{Q^{(n)}_{m+1}}|\Bigl)^{2}\rightarrow0 \quad \text{as } n\rightarrow+\infty \end{array} \end{equation} and \begin{equation}\label{4.24} -\frac{1}{4}\vartheta\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)+O\Bigl(e^{-\beta\mu}\sum_{j=1}^{m}w(|Q_{m+1}-Q_{j}|)\Bigl)<0. \end{equation} Combining \eqref{4.12}, \eqref{4.22}, \eqref{4.23} and \eqref{4.24}, we have \begin{equation}\label{4.25} \mathcal {C}_{m+1}\leq\mathcal {C}_{m} + I(z). 
\end{equation} On the other hand, since by assumption $\mathcal {C}_{m}$ is attained at $(\bar{Q}_{1},\ldots, \bar{Q}_{m})$, we may choose another point $Q_{m+1}$, far away from these $m$ points, to be determined later. Consider the solution concentrated at the points $(\bar{Q}_{1},\ldots, \bar{Q}_{m},Q_{m+1})$, which we denote by $u_{\bar{\textbf{Q}}_{m},Q_{m+1}}$. Then, arguing as above but applying the estimate \eqref{3.36} of $\phi_{m+1}$ instead of \eqref{3.5}, we have the following estimate: \begin{equation}\label{4.26} \begin{array}{ll} &\ds J(u_{\bar{\textbf{Q}}_{m},Q_{m+1}})\vspace{0.2cm}\\ &=\ds J(u_{\bar{\textbf{Q}}_{m}})+I(z)+\frac{1}{2}\int\epsilon \tilde{V}(x)|z_{Q_{m+1}}|^{2}+\frac{1}{2}\int\epsilon^{2} |\tilde{A}(x)|^{2}|z_{Q_{m+1}}|^{2}-O\Big(\sum_{j=1}^{m}w(|Q_{m+1}-\bar{Q}_{j}|)\Big)\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ O\Bigl[\epsilon \int \tilde{V}w_{\textbf{Q}_{m}}w_{Q_{m+1}}+ \epsilon e^{-\beta\mu} \int \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\tilde{V}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon \int |div\tilde{A}(x)|w_{\textbf{Q}_{m}}w_{Q_{m+1}} + \epsilon \int |\tilde{A}(x)||\nabla w_{\textbf{Q}_{m}}|w_{Q_{m+1}}+ \epsilon^{2} \int |\tilde{A}(x)|^{2}w_{\textbf{Q}_{m}}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon e^{-\beta\mu}\int |div\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}} + \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+ \epsilon^{2} e^{-\beta\mu}\int |\tilde{A}(x)|^{2}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}+\epsilon^{4}\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{2}\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}+\epsilon^{2}\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}+\epsilon^{2}\int|div\tilde{A}(x)|^{2}|z_{Q_{m+1}}|^{2}\Bigl)\vspace{0.2cm}\\ 
&\,\,\,\,\,\,\ds+O\Bigl(\epsilon^{4}\Bigl(\sum_{j=1}^{m+1}\Bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}}\Bigl)^{2}+\epsilon^{2}\Bigl( \sum_{j=1}^{m+1}\Bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}}\Bigl)^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds+\epsilon^{2}\Bigl(\sum_{j=1}^{m+1}\Bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}} \Bigl)^{2}+\epsilon^{2}\Bigl(\sum_{j=1}^{m+1}\Bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}}\Bigl)^{2}\Bigl]. \end{array} \end{equation} By the asymptotic behavior of $V$, $A$ and $\nabla A$ at infinity, for some $\alpha < 1$ we choose $\gamma >\alpha$; we can then choose $Q_{m+1}$ such that \begin{equation}\label{4.27} |Q_{m+1}|\gg\frac{\max_{j=1}^{m}|\bar{Q}_{j}|+\ln\epsilon}{\gamma-\alpha}, \end{equation} and we get \begin{equation}\label{4.28} \begin{array}{ll} &\ds\frac{1}{2}\int\epsilon \tilde{V}(x)|w_{Q_{m+1}}|^{2}+\frac{1}{2}\int\epsilon^{2} |\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}-O\Big(\sum_{j=1}^{m}w(|Q_{m+1}-\bar{Q}_{j}|)\Big)\vspace{0.2cm}\\ &+O\Bigl(\epsilon \ds\int \tilde{V}w_{\textbf{Q}_{m}}w_{Q_{m+1}}+ \epsilon e^{-\beta\mu} \int \sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}\tilde{V}w_{Q_{m+1}}+ \epsilon^{2} e^{-\beta\mu}\int |\tilde{A}(x)|^{2}\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ &+ \epsilon \ds\int |div\tilde{A}(x)|w_{\textbf{Q}_{m}}w_{Q_{m+1}} + \epsilon \int |\tilde{A}(x)||\nabla w_{\textbf{Q}_{m}}|w_{Q_{m+1}}+ \epsilon^{2} \int |\tilde{A}(x)|^{2}w_{\textbf{Q}_{m}}w_{Q_{m+1}}\vspace{0.2cm}\\ &+ \epsilon e^{-\beta\mu}\ds\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}} + \epsilon e^{-\beta\mu}\int |\tilde{A}(x)|\ds\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}w_{Q_{m+1}}\vspace{0.2cm}\\ &+ \epsilon e^{-\beta\mu}\ds\int |\tilde{A}(x)|\sum_{j=1}^{m}e^{-\gamma|x-Q_{j}|}|\nabla 
w_{Q_{m+1}}|+\epsilon^{4}\int|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\vspace{0.2cm}\\ \end{array} \end{equation} \begin{equation*} \begin{array}{ll} &+\epsilon^{2}\ds\int|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}+\epsilon^{2}\int|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}+\epsilon^{2}\ds\int|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\Bigl)\vspace{0.2cm}\\ &+O\Bigl[\epsilon^{4}\Bigl(\ds\sum_{j=1}^{m+1}\Bigl(\ds\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{A}(x)|^{4}|w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}}\Bigl)^{2}+\epsilon^{2}\Bigl( \sum_{j=1}^{m+1}\Bigl(\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{A}(x)|^{2}|\nabla w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}}\Bigl)^{2}\vspace{0.2cm}\\ &+\epsilon^{2}\Bigl(\ds\sum_{j=1}^{m+1}\Bigl(\ds\int_{B_{\frac{\mu}{2}}(Q_{j})}|div\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}} \Bigl)^{2}+\epsilon^{2}\Bigl(\ds\sum_{j=1}^{m+1}\Bigl(\ds\int_{B_{\frac{\mu}{2}}(Q_{j})}|\tilde{V}(x)|^{2}|w_{Q_{m+1}}|^{2}\Bigl)^{\frac{1}{2}}\Bigl)^{2}\Bigl]\vspace{0.2cm}\\ &\geq\ds C\epsilon e^{-\alpha|Q_{m+1}|}-O\Big(\sum_{j=1}^{m}e^{-\eta|\bar{Q}_{j}-Q_{m+1}|}\Big)>0. \end{array} \end{equation*} So \begin{equation}\label{4.29} \mathcal {C}_{m+1}\geq J(u_{\bar{\textbf{Q}}_{m},Q_{m+1}})>\mathcal {C}_{m}+I(z). \end{equation} It follows from \eqref{4.25} and \eqref{4.29} that \begin{equation}\label{4.30} \mathcal {C}_{m}+I(z)< \mathcal {C}_{m+1}\leq\mathcal {C}_{m}+I(z), \end{equation} which is impossible. Hence $\mathcal {C}_{m+1}$ can be attained at finite points in $\Omega_{m+1}$. \end{proof} Now we are in a position to prove our main result. \begin{proof}[\textbf{Proof of Theorem 1.2.}] In order to prove our main result, we only need to prove that the maximization problem \begin{equation}\label{4.31} \ds \max_{\textbf{Q}_{m}\in\bar{\Omega}_{m}}\mathcal {M}(\textbf{Q}_{m}) \end{equation} has a solution $\textbf{Q}_{m}\in\Omega^{o}_{m}$, i.e., in the interior $\Omega^{o}_{m}$ of $\Omega_{m}$. We argue by contradiction. 
Assume that $\bar{\textbf{Q}}_{m}=(\bar{Q}_{1},\ldots,\bar{Q}_{m})\in\partial\Omega_{m}$. Then there exists $(j,k)$ such that $|\bar{Q}_{j}-\bar{Q}_{k}|=\mu$. Without loss of generality, we assume $k = m$. Then, following the estimates \eqref{4.13}, \eqref{4.14}, \eqref{4.15} and \eqref{4.21}, we have \begin{equation}\label{4.32} \begin{array}{ll} &\ds \mathcal {C}_{m}=J(u_{\bar{\textbf{Q}}_{m}}) \vspace{0.2cm}\\ &\leq\ds \mathcal {C}_{m-1}+I(z)+\frac{\epsilon}{2}\int \tilde{V}(x)|w_{Q_{m+1}}|^{2}+\frac{1}{2}\int\epsilon^{2} |\tilde{A}(x)|^{2}|w_{Q_{m+1}}|^{2}\vspace{0.2cm}\\ &\,\,\,\,\,\,\ds-\frac{\vartheta}{4}\sum_{j=1}^{m-1}w(|\bar{Q}_{m}-\bar{Q}_{j}|)+O\Big(e^{-\beta\mu}\sum_{j=1}^{m-1}e^{-|\bar{Q}_{m}-\bar{Q}_{j}|}\Big)+O(\epsilon)\vspace{0.2cm}\\ &\leq\ds \mathcal {C}_{m-1}+I(z)-\frac{\vartheta}{4}\sum_{j=1}^{m-1}w(|\bar{Q}_{m}-\bar{Q}_{j}|)+O\Big(e^{-\beta\mu}\sum_{j=1}^{m-1}e^{-|\bar{Q}_{m}-\bar{Q}_{j}|}\Big)+O(\epsilon). \end{array} \end{equation} By the definition of the configuration set, we observe that given a ball of size $\mu$, there are at most $C_{N} := 6^{N}$ non-overlapping balls of size $\mu$ surrounding this ball. Since $|\bar{Q}_{j}-\bar{Q}_{k}|=\mu$, we have $$\sum_{j=1}^{m-1}w(|\bar{Q}_{m}-\bar{Q}_{j}|)= w(|\bar{Q}_{m}-\bar{Q}_{j}|) + \sum_{k\neq j}w(|\bar{Q}_{m}-\bar{Q}_{k}|) $$ and \begin{equation}\label{4.33} \begin{array}{ll} \ds \sum_{k\neq j}w(|\bar{Q}_{m}-\bar{Q}_{k}|)&\leq Ce^{-\mu} + C_{N}e^{-\mu-\frac{\mu}{2}} + \ldots + C_{N}^{k}e^{-\mu-\frac{k\mu}{2}} \vspace{0.2cm}\\ &\leq\ds Ce^{-\mu}\sum_{j=1}^{\infty}e^{j(\ln C_{N}-\frac{\mu}{2})} \leq\ds Ce^{-\mu}, \end{array} \end{equation} if $C_{N}<e^{\frac{\mu}{2}}$, which is true for $\mu$ large enough. Hence, we have \begin{equation}\label{4.34} \mathcal {C}_{m} \leq \mathcal {C}_{m-1} + I(z) + C\epsilon-\frac{\vartheta}{4}w(\mu) + O(e^{-(1+\beta)\mu}) < \mathcal {C}_{m-1} + I(z), \end{equation} which contradicts \eqref{4.5} in Lemma \ref{lem4.1}. \end{proof}
\section{Introduction}\label{sec:introduction} Over the past ten years, blockchains have generated tremendous interest by enabling decentralized payment systems. The term blockchain, first introduced by Satoshi Nakamoto in his design of Bitcoin \cite{nakamotobitcoin}, refers to the distributed data structure at the heart of these systems. Consensus protocols are used to ensure the consistency of blockchains among different parties. Although the data structure itself has remained fairly standard, associated consensus protocols have proliferated. We refer the reader to \cite{bano2019sok} and \cite{garay2020sok} for surveys on different blockchain consensus protocols. In this work, we restrict our attention to the {\em longest-chain protocol}, or Nakamoto consensus, which forms the backbone of popular cryptocurrencies such as Bitcoin and Ethereum. Many papers have formally studied this protocol's security under a variety of modeling assumptions. These modeling assumptions vary, among other things, with respect to the nature of the leader election mechanism (Proof of Work, or PoW \cite{garay2015bitcoin, pass2017analysis}, versus Proof of Stake, or PoS \cite{kiayias2017ouroboros, pass2017sleepy}) and the timing assumptions (continuous time \cite{li2020close, dembo2020everything} versus discrete time \cite{blum2020combinatorics, gazi2020tight}). Notwithstanding these modeling differences, some basic principles behind the security of the protocol have emerged. One fundamental principle is that the longest-chain protocol is secure in the synchronous network model, under sufficient honest representation. In this model, a message sent at time $\tau$ will be delivered by time $\tau + \Delta$, where $\Delta$ is a system parameter. 
Under this assumption, \cite{dembo2020everything, gazi2020tight} show that the protocol is secure if and only if \begin{equation}\label{eq:adv_threshold} \beta < \frac{1-\beta}{1 + (1-\beta)f \Delta} \end{equation} where $\beta$ is the fraction of adversarial power and $f$ is the mining rate (or block production rate). The term $1/(1 + (1-\beta) f \Delta)$ can be thought of as a {\em discount factor} in the honest power, capturing the effect of the message delays. Succinctly put, \eqref{eq:adv_threshold} states that the security threshold of the protocol degrades with $f\Delta$. A second principle is that when the tuple $(\beta, f, \Delta)$ satisfies \eqref{eq:adv_threshold}, the protocol satisfies both safety (all honest parties have consistent chains, except for the last few blocks) and liveness (new honest blocks are included in all parties' chains at a regular rate) security properties with high probability. In fact, the probability that these properties are violated decreases exponentially with a parameter $k$. Recent works (e.g., \cite{li2020close, blum2020combinatorics}) state security properties in a form such that the probability of violations remains negligible even for infinite horizon executions. The security statements in this form are more general, and imply bounds for the statements given in other works such as \cite{garay2015bitcoin, kiayias2017ouroboros}. The aforementioned principles hold for both PoW and PoS versions of the protocol. In real-world conditions, worst-case message delays may be much larger than typical delays. Therefore, the longest-chain protocol may have better security guarantees than those suggested by analysis which sets $\Delta$ to the maximum possible delay. The use of the random delay model, as proposed in this paper, formalizes this intuition. \subsection{Our Contributions}\label{sec:our_contributions} This paper studies the security of the longest chain protocol in a network with random, possibly unbounded, delays. 
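The threshold in \eqref{eq:adv_threshold} is easy to evaluate numerically. The following sketch (the function name is ours) checks the condition and illustrates how the tolerable adversarial fraction degrades as $f\Delta$ grows.

```python
# Security-region check for the longest-chain protocol, following Eq. (1):
# the protocol is secure iff  beta < (1 - beta) / (1 + (1 - beta) * f * delta),
# where beta is the adversarial fraction, f the mining rate, delta the delay.

def is_secure(beta: float, f: float, delta: float) -> bool:
    """Return True if (beta, f, delta) satisfies the threshold of Eq. (1)."""
    honest = 1.0 - beta
    return beta < honest / (1.0 + honest * f * delta)

# With instantaneous delivery (f * delta = 0) the condition reduces to beta < 1/2:
assert is_secure(0.49, f=0.0, delta=0.0)
# A larger f * delta shrinks the tolerable adversarial fraction:
assert not is_secure(0.40, f=1.0, delta=1.0)   # 0.40 vs 0.60 / 1.60 = 0.375
assert is_secure(0.30, f=1.0, delta=1.0)       # 0.30 vs 0.375
```

The factor $1/(1 + (1-\beta) f \Delta)$ appears in the code as the denominator applied to the honest fraction, matching the discount-factor interpretation above.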
Briefly, each peer-to-peer communication is subject to an independent and identically distributed (i.i.d.) delay. Thus, different recipients of a broadcast may receive the message at different times. This communication model is a generalization of the synchronous model, and has not been studied in prior work on blockchain security. Drawing inspiration from statistical physics, this paper states and distinguishes between two forms of security properties: \textit{intensive} and \textit{extensive}. Intensive security properties capture the security of localized portions of blockchains, whereas extensive security properties provide global security guarantees. Prior works typically state properties in only one of these forms; those works that state properties in both forms do not formally distinguish them. We show that guarantees for the intensive forms imply guarantees for the extensive forms. Our main result, Theorem \ref{thm:main}, states that the longest-chain protocol satisfies the settlement and chain quality properties in the random delay model, except with probability that decays exponentially in a wait-time (or security parameter) $k$. These properties are intensive forms of safety and liveness, and pertain to an infinite-horizon execution. We provide explicit error bounds. As in the synchronous model, the security guarantees hold under appropriate bounds on the adversarial power and the mining rate. Our work highlights the dual role of communication delays: these delays have both global and local effects. Delays in messages from past leaders to future leaders have global effect: they influence the growth of the longest chain and impact the security of all honest parties. 
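To make the communication model concrete, the sketch below samples i.i.d. delays for each recipient of a single broadcast. The exponential distribution is only an illustrative choice here; the model allows general, possibly unbounded, delay distributions.

```python
import random

# Random delay model: each point-to-point delivery of a broadcast incurs an
# i.i.d. delay, so different recipients may receive the same message at
# different times (unlike the synchronous model's uniform bound Delta).

def broadcast_arrivals(send_slot, n_recipients, delay_sampler):
    """Arrival times of one broadcast at each of n_recipients parties."""
    return [send_slot + delay_sampler() for _ in range(n_recipients)]

random.seed(0)  # for a reproducible illustration
arrivals = broadcast_arrivals(send_slot=10, n_recipients=5,
                              delay_sampler=lambda: random.expovariate(1.0))
assert all(t >= 10 for t in arrivals)   # delays are nonnegative
assert len(set(arrivals)) > 1           # recipients hear at different times
```

Replacing `delay_sampler` with a constant function recovers the synchronous model as a special case, which is the sense in which this model generalizes it.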
We generalize the analysis tools developed in the Ouroboros line of papers \cite{kiayias2017ouroboros, david2018ouroboros, badertscher2018ouroboros, blum2020combinatorics} to handle these delays (e.g., see Section \ref{sec:special_honest_slots}, which describes a generalization of characteristic strings). In contrast, delays in messages from leaders to a given honest observer $h$ have local impact: they affect the length of the chain held by $h$. We define a new local metric called $\mathsf{Unheard}_h$ (see Section \ref{sec:unheard}) to handle these delays. Theorem \ref{thm:main} reflects this dual role of delays. The error bounds of the security statements include two terms: one is a bound on atypical behavior of the characteristic string; the other is a bound on atypical behavior of $\mathsf{Unheard}_h$ for every honest party $h$. Note that a given party may be a leader in one context and an observer in another. \subsection{Comparison with Prior Work}\label{sec:related_work} \paragraph{Communication Model} We compare the random delay model of this work to the \textit{partially synchronous model} \cite{dwork1988consensus} and the \textit{sleepy model} \cite{pass2017sleepy} of communication. The partially synchronous model assumes that message delays are unbounded until an adversarially chosen time $T^*$, and are bounded thereafter. In this model, the longest-chain protocol is secure only after $O(T^*)$ time, as shown in \cite{neu2020ebb}. In the sleepy model, the adversary can put an honest party to sleep for an arbitrary period of time. Sleepy honest parties have unbounded delay (equivalently, they do not communicate), while the awake parties have bounded delays. The longest-chain protocol is secure in the sleepy model, provided the fraction of awake honest parties exceeds the fraction of corrupt parties (see \cite{pass2017sleepy}). 
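To illustrate the idea behind a local metric such as $\mathsf{Unheard}_h$, the sketch below counts, at a time $t$, the broadcasts that an observer $h$ has not yet received. This operationalization is ours and only illustrative; the paper's formal definition in Section \ref{sec:unheard} may differ in detail.

```python
# Illustrative "unheard" count for an observer h: the number of messages that
# were already broadcast by time t but have not yet arrived at h. Large values
# mean h's local view (and hence its chain length) lags behind.

def unheard(t, deliveries):
    """deliveries: list of (send_time, arrival_time_at_h) pairs."""
    return sum(1 for send, arrive in deliveries if send <= t < arrive)

deliveries = [(1, 3), (2, 6), (4, 5)]
assert unheard(0, deliveries) == 0   # nothing broadcast yet
assert unheard(2, deliveries) == 2   # messages from times 1 and 2 in flight
assert unheard(6, deliveries) == 0   # everything has arrived
```

In the synchronous model this count is trivially bounded by the number of broadcasts in the last $\Delta$ time units; under random unbounded delays it can spike, which is why its atypical behavior enters the error bounds of Theorem \ref{thm:main}.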
In both models, unbounded delays are localized--to select period(s) of time in the partially synchronous model and to select parties in the sleepy model. In comparison, the random delay model conveys a more homogeneous network setting. Here, message delays from any honest party at any time may be large--across parties and time simultaneously. Although each of these models describes communication settings with sporadic large delays, they capture different facets of sub-optimal network behavior. Studying the same protocol in different models provides a better understanding of its real-world performance. Two other works \cite{fanti2019barracuda, gopalan2020stability} study the longest-chain protocol under random, unbounded delay. These works model the network as a graph of inter-connected nodes, and assume that delays between two neighboring nodes in the network are exponentially distributed and i.i.d. However, neither of these works analyzes an adversary trying to disrupt the security of the protocol. In this work, we model point-to-point communication instead of communication over a graph. We also allow for general delay distributions. \paragraph{Security Analysis} Our statement, and proof, of the settlement (safety) property draw inspiration from Blum et al. \cite{blum2020combinatorics}. Blum et al. show that PoS longest-chain protocols satisfy safety with an error probability that decays exponentially in the wait-time $k$. Moreover, the proof in \cite{blum2020combinatorics} yields explicit expressions for the constants in the error bounds. For PoW models, \cite{li2020close} provides explicit, exponentially decaying error bounds for intensive security properties. The analysis of \cite{blum2020combinatorics}, which is for the special case $\Delta = 0$, can be extended to any constant $\Delta$ as shown in \cite{david2018ouroboros, badertscher2018ouroboros}. Similarly, we adapt the analysis to the random delay model. 
We generalize the notion of $\Delta$-isolated slots in \cite{david2018ouroboros, badertscher2018ouroboros} to that of \textit{special honest slots}, retaining the property that blocks from these slots must be at different heights. The statement and proof of the chain quality property in this work are inspired by \cite{badertscher2018ouroboros}. Our work analyzes the intensive form of this property, from which a guarantee for the extensive form follows, whereas the analysis in \cite{badertscher2018ouroboros} addresses only the extensive form. The works of Dembo et al. \cite{dembo2020everything} and Gazi et al. \cite{gazi2020tight} give a tight characterization of the security regime of the longest-chain protocol via \eqref{eq:adv_threshold}. The security threshold is obtained by comparing the growth rate of the adversarial chain with that of the honest tree (the private attack). Obtaining an expression for the growth rate of the honest tree in the random delay model, and extending the results of \cite{dembo2020everything, gazi2020tight} to this model are directions for future research. \section{The System Model}\label{sec:model} \subsection{Preliminaries}\label{sec:preliminaries} The protocol proceeds in discrete time slots that are indexed by $\mathbb{N}$ and runs for an infinite duration. We assume that clocks of all parties are perfectly synchronized. Blocks are treated as abstract data structures containing an integer timestamp, a hash pointer to a parent block with a smaller timestamp, a cryptographic signature of the block's proposer, some transactions and other relevant information. A special genesis block, with timestamp $0$ and no parent, is known to all parties at the start of the protocol. We assume the existence of a leader election mechanism which selects a subset of parties in each time slot to be leaders for that slot. Only leaders can propose blocks with the corresponding timestamp. 
This mechanism is an abstraction of the mining process in PoW systems or the leader election protocol in PoS systems. \paragraph{Parties in the protocol} The parties in the protocol comprise honest parties and a single adversary $\mathcal{A}$. (Replacing all corrupt parties by a single one is done for simplicity.) The set of honest parties is represented by $\mathcal{H}$ and may be finite or infinite. Arbitrary honest parties are denoted by $h, h_1, h_2,$ etc. In our model, the adversary can never corrupt an honest party and the honest parties never go offline. Honest parties follow the longest-chain protocol, while the adversary can deviate from the protocol arbitrarily. The precise differences between honest and adversarial actions are given in Section \ref{sec:slot}. \paragraph{Blockchains} From any block, a unique sequence of blocks leading up to the genesis block can be identified via the hash pointers. We call this sequence a \textit{blockchain}, or simply a \textit{chain}. The convention is that the genesis block is the first block of the chain, and the terminating block is called the \textit{tip}. The timestamps of blocks in a blockchain must strictly increase, going from the genesis to the tip. At any given slot, honest parties store a single chain in their memory. We use $\mathcal{C}^h_i$ to denote the chain held by an honest party $h$ at (the end of) slot $i$. We use $\mathcal{C}[i_1:i_2]$ to represent the portion of a chain $\mathcal{C}$ consisting of blocks with timestamps in the interval $\{i_1, \ldots, i_2\}$. \paragraph{Blocktrees} The set of all blocks generated up to a given slot $i$ forms a directed tree. Let $\mathcal{F}_i$ be the directed graph $(V, E)$, where $V$ is the set of blocks generated up to slot $i$ and $E$ is the set of parent-child block pairs. These edges point from parent to child, in the opposite direction of the hash pointers. The genesis block is the root of the tree, with no parent. 
In addition, the timestamp of block $v$ is denoted by $\ell(v)$. Every blockchain $\mathcal{C}^h_i$ is a directed path in $\mathcal{F}_i$ that begins at the genesis block and ends at the chain's tip. $\mathcal{F}_i$ includes blocks held privately by the adversary. \subsection{Details of a Slot}\label{sec:slot} Within a slot, the following events occur in the given order. This describes the prescribed honest protocol, and also specifies the adversary's powers. \begin{itemize} \item (\textbf{Leader Election Phase}) All parties learn the slot leaders through the leader election mechanism. \item (\textbf{Honest Send Phase}) Honest leaders create a new block, append it to their chain, and broadcast this new chain to all parties. The communication network assigns a random delay to each point-to-point message. \item (\textbf{Adversarial Send Phase}) $\mathcal{A}$ receives all chains sent (if any) in the Honest Send Phase, along with their respective message delays. $\mathcal{A}$ may then create some new blocks with timestamps of any slot for which it was elected a leader and may create multiple blocks with the same timestamp. It sends each new block (along with the preceding blockchain) to an arbitrary subset of honest parties. \item (\textbf{Deliver Phase}) Messages from honest parties slated for delivery in the current slot and $\mathcal{A}$'s messages from the current slot are delivered to the appropriate honest parties. $\mathcal{A}$ can also choose to deliver any honest messages ahead of schedule. \item (\textbf{Adopt Phase}) Each honest party updates its chain if it receives any chain strictly longer than the one it holds. If an honest party receives multiple longer chains, it chooses the longest one, with $\mathcal{A}$ breaking any ties. 
\end{itemize} \subsection{Leader Election}\label{sec:leader_election} We model the leader election mechanism such that the sets of leaders in different slots are independent and identically distributed subsets of $\mathcal{H}\cup \{ \mathcal{A}\}.$ For example, the leader election process in the first few slots may be: $\{h_1\}$, $\emptyset$, $\{\mathcal{A}\}$, $\{h_2, h_3\}$, $\emptyset$, $\{h_4, \mathcal{A}\}$, $\{h_1, h_5, \mathcal{A}\}.$ Let $\mathcal{L}_s$ denote the set of leaders in slot $s$. The adversary cannot influence the leader election mechanism. Let $A_s=1$ if $\mathcal{A} \in \mathcal{L}_s$ and $A_s=0$ otherwise, and let $N_s = |\mathcal{L}_s \cap \mathcal{H}|$ be the number of honest leaders in slot $s$. Note that $(N_s, A_s)$ may have any possible joint distribution, but the process $\{(N_s, A_s)\}_{s \geq 1}$ is i.i.d. Let $(N, A)$ denote a representative random tuple of the aforementioned process. Define \begin{itemize} \item $f \triangleq \mathbb{P}(A + N > 0)$. $f$ is the probability of a \textit{non-empty slot}, i.e., a slot with one or more leaders. In a sense, it is the mining rate of the protocol. \item $\alpha \triangleq \mathbb{P}(N = 1 \ \& \ A = 0 \, \vert \, A + N > 0)$. $\alpha$ is the probability of having a unique honest leader in a slot, given that the slot is non-empty. \end{itemize} \subsection{The Communication Model}\label{sec:communication_network} We now describe our model of the communication network, the \textit{random delay model}. Every message sent by one honest party to another is subject to a random delay, which can take any value in $\mathbb{Z}_+$. Note that a broadcast is a set of different point-to-point messages, each of which is subject to an independent delay. We adopt the convention that the minimum possible delay is zero; in this case, a message sent in a time slot is received by the end of that slot. 
The delays of different messages are i.i.d.; let $\Delta$ denote a random variable with this distribution, called the \textit{delay distribution}. The synchronous model is a special case with a constant $\Delta$. For technical reasons, we require that the delay distribution has a \textit{non-decreasing failure rate function}. The failure rate function of the delay distribution $\Delta$ is defined as \[ \textsf{Failure Rate}(s) = \begin{cases}\mathbb{P}(\Delta = s \vert \Delta \geq s) & \text{if } \mathbb{P}(\Delta \geq s) > 0\\ 1 & \text{if } \mathbb{P}(\Delta \geq s) = 0\end{cases}\] A geometric random variable has a constant failure rate. A constant $\Delta$ has a failure rate function that is $0$ up to the constant and $1$ thereafter. Therefore, they are both admissible in our model. A consequence of a non-decreasing failure rate is that, for all $i \geq 0$ and $s \geq 0$ such that $\mathbb{P}(\Delta \geq s) > 0$, $\mathbb{P}(\Delta \geq s + i \vert \Delta \geq s) \leq \mathbb{P}(\Delta \geq i)$. Given the power of the adversary to deliver honest messages earlier than scheduled, a system with a given delay distribution $\Delta$ can be subsumed by a system that has a different delay distribution $\Tilde{\Delta}$, provided the latter stochastically dominates the former. If $\Tilde{\Delta}$ satisfies the non-decreasing failure rate restriction, guarantees for a system with delay $\Delta$ can be given in terms of the distribution $\Tilde{\Delta}$. The non-decreasing failure rate restriction is not a fundamental limitation of the model, but rather of the method of analysis. One technique for removing this restriction is to assume a slightly different form of the leader election process. This is described next. \subsection{One-Time Leader Model} \label{sec:infinite_users} Consider an alternate model of the leader election process in which each honest party can be chosen as a leader at most once. 
We call this model the \textit{one-time leader model} to distinguish it from the \textit{i.i.d. leader model} described in Section \ref{sec:leader_election}. Let the set of honest parties $\mathcal{H}$ be divided into two groups, leaders $\mathcal{M}$ and observers $\mathcal{O}$. The set $\mathcal{M}$ is countably infinite, with its members indexed $m_1, m_2, \ldots$. The set of observers may be finite or infinite. Leaders are chosen among parties in $\mathcal{M}$ in the order of their indexing. In this model, the sets of leaders in different slots are no longer independent and identically distributed. However, the tuples $\{(N_s, A_s)\}_{s \geq 1}$ are i.i.d., where $N_s$ and $A_s$ have the same interpretation as before. The parameters $f$ and $\alpha$ are also defined in the same manner as before. This alternate leader election model allows us to extend the security analysis to any delay distribution. In particular, $\Delta$ can now be infinity with some probability. A delay of infinity for a message implies that the adversary can choose to deliver the message at any time of its choice, or never at all. \begin{comment} \paragraph{The Honest Users} The set of honest parties $\mathcal{H}$ is divided into two groups, miners $\mathcal{M}$ and observers $\mathcal{O}$. The set of miners is countably infinite and are indexed $m_1, m_2, \ldots$. They are chosen as leaders of a slot in the order of their indexing. The set of observers may be finite or infinite. An observer is never chosen as a leader. \paragraph{The Leader Election Process} For example, the leader election process in the first few slots may be: $\{m_1\}$, $\emptyset$, $\{\mathcal{A}\}$, $\{m_2, m_3\}$, $\emptyset$, $\{m_4, \mathcal{A}\}$, $\{m_5, m_6, m_7, \mathcal{A}\}, \ldots$. \paragraph{Permissible Delay distributions} In this model, we allow $\Delta$ to be any distribution on $\mathbb{Z}_+ \cup \{\infty\}$. A delay of infinity for a message implies that the adversary can choose to deliver the message at any time of its choice. 
A particular delay distribution to consider is \[\Delta = \begin{cases} \Delta_0 & \text{w.p. } d \\ \infty & \text{w.p. } 1 - d \end{cases}\] Note that any finite delay distribution can be stochastically dominated by a distribution of the form above, for appropriate values of $\Delta_0$ and $d$. A system with such a delay distribution can also be thought of as a deviation from a synchronous system with delay $\Delta_0$, such that each message can be lost with probability $d$. \subsection{Discussion on Modeling Choices}\label{sec:discussion_model} {\color{blue}Review this subsection. Maybe, omit it.} As mentioned at the beginning of this section, our model is an idealized one, where we abstract away real-world implementation details. Existing works in the literature, e.g. \cite{kiayias2017ouroboros}, \cite{david2018ouroboros} and \cite{badertscher2018ouroboros}, highlight how to realize these abstractions using cryptographic tools. The idealized model allows us to focus on certain key aspects of security that are affected by communication delays. This is similar to the strategy adopted in \cite{blum2020combinatorics}. In a sense, the leader election mechanism and the communication network act as `oracles', that generate random values that cannot be influenced by the adversary. The security of the protocol is characterized by the distribution of these two components, i.e., the leader election distribution and the delay distribution. Essentially, the only constraint we have is that $\alpha \mathbb{P}(\Delta < G) > 0.5$, where $G \sim \textsf{Geom}(f)$ and is independent of $\Delta$. A similar constraint is encountered in pretty much all existing works, where $\Delta$ is a constant. \textbf{Implementing the leader election process:} We do not specify the implementation details of the leader election scheme in our paper. 
The exact mechanism does not have any bearing on our results, as long as the leader election process can be abstractly modeled as described in Section \ref{sec:leader_election}. The leader election process could be implemented, e.g., via Verifiable Random Functions (VRFs) as described in \cite{david2018ouroboros}. The adversary's powers in our model is quite general and encompasses the powers of many typical Proof of Stake (PoS) leader election mechanisms, such as those outlined in \cite{kiayias2017ouroboros, bentov2016snow}. We also remark that our leader election process models a Proof of Work (PoW) blockchain such as Bitcoin by discretizing real time into slots whose duration is much smaller than the typical block inter-arrival time. In other words, we assume $f \ll 1$. Under this regime, it is practically impossible to have a slot with two or more leaders. In PoW, an adversary is weaker than in a PoS blockchain, in the sense it can create only one block each time it is a leader, as opposed to many. Therefore, such an adversary fits into our model. Note that if the slots are not small, it is possible in the real-world that the adversary is chosen as a leader twice in the same slot, and builds two blocks in succession. This action of the adversary is not allowed in our model. \textbf{The honest majority assumption:} In all blockchain protocols, some form of the `honest majority' criterion is required for the protocol to be secure, be it that the honest users users own majority of the stake or computational power. In our protocol, the corresponding criterion is stated as $\alpha > 1/2$. In general, this is a stronger condition than the honest majority criterion. However, if most honest leaders are chosen in uniquely honest slots, then the condition is not much stronger than the honest majority assumption. The structure would be $\mathbb{E}(N_\mathcal{H}) > p_\ensuremath{\mathcal{A}}{}$. 
Owing to our proof techniques, we shall require a different condition: $\alpha > 1/2$. This condition is a strictly stronger condition than $(1 - q_0) > p_\ensuremath{\mathcal{A}}{}$, as we show below: \begin{align*} \alpha > \frac{1}{2} &\Leftrightarrow \frac{\overline{p}_\ensuremath{\mathcal{A}}{} q_1}{f} > \frac{1}{2} \\ &\Leftrightarrow \overline{p}_\ensuremath{\mathcal{A}}{} q_1 > \frac{1}{2}(1 - \overline{p}_\ensuremath{\mathcal{A}}{} q_0) \\ &\Leftrightarrow \overline{p}_\ensuremath{\mathcal{A}}{} q_1 + \overline{p}_\ensuremath{\mathcal{A}}{}(q_0 + q_1) > 1 \\ &\Rightarrow \overline{p}_\ensuremath{\mathcal{A}}{}(1 + q_1) > 1 \\ &\Leftrightarrow (1 + q_1) > 1/(1 - p_\ensuremath{\mathcal{A}}{}) > 1 + p_\ensuremath{\mathcal{A}}{} \\ &\Leftrightarrow q_1 > p_\ensuremath{\mathcal{A}}{} \\ &\Rightarrow \mathbb{E}(N_\mathcal{H})(1 - q_0) > p_\ensuremath{\mathcal{A}}{} \end{align*} \textbf{Setting the mining rate:} Existing works have shown that the mining rate $f$ must be chosen such that the typical gap between two leaders is larger than the typical delay. In earlier works that assumed a $\Delta$-synchronous condition, this would be expressed as $\mathbb{P}(G > \Delta)$ is close to one, where $G \sim \textsf{Geom} (f)$ is a random variable representing the gap between two non-empty slots. In our work, we see that the same condition appears, with the difference being $\Delta$ now represents a typical delay random variable, independent of $G$. \textbf{Sleepy User Model:} We noted in Section \ref{sec:honest_protocol} that honest players broadcast their messages. We reiterate here that each point-to-point communication is subject to an independent delay. Therefore, some honest users would hear of a particular honest broadcast quickly while others would not. It is interesting to consider an alternate model where for each honest leader in each slot, a random delay is chosen and assigned to \textit{all} of its outgoing messages. 
Let us call this alternate model, the \textit{uniform delay model}. Such a model would be similar to the notion of \textit{sleepy users} (studied in \cite{pass2017sleepy}), where an honest party can be in one of two states: asleep or awake. A sleepy party does not send any messages, even when chosen as a leader. One can model a sleepy party as an honest leader whose outgoing messages suffers a very large delay (more than the length of the execution). Although the model in \cite{pass2017sleepy} allows the adversary to put honest parties to sleep arbitrarily, it does so without knowledge of the future leader election process. In effect, whether a (honest) leader of a slot is sleepy or not is purely random. The protocol can be accurately captured by the uniform delay model. \end{comment} \section{The Desired Security Properties}\label{sec:desired_properties} The security properties defined in this section, and the guarantees for them, hold for both models introduced in Section \ref{sec:model}: the i.i.d. leader model with delays having a non-decreasing failure rate function, and the one-time leader model with general delay distributions. Each security property refers to a desirable condition over an \textit{execution}. Formally, an execution of the protocol refers to a particular instantiation of the random components (i.e., leader election and communication delays) and the actions of the adversary. Whether a certain property holds or not in an execution depends on both these factors. The adversary's actions can be arbitrary, and our theorems are stated for the worst-case scenario over all possible adversarial actions. \subsection{Property Definitions}\label{sec:property_definitions} We first define the settlement property, which is an intensive form of safety (see \cite{blum2020combinatorics} for the original definition). 
\begin{definition}[Settlement]\label{def:settlement} In an execution, the \emph{settlement property with parameters $s, k \in \mathbb{N}$ and $\mathcal{I} \subseteq \mathcal{H}$} holds if, for any pair of honest parties $h_1, h_2 \in \mathcal{I}$ and slots $i_1, i_2$ such that $s + k \leq i_1 \leq i_2$, it holds that $\mathcal{C}^{h_1}_{i_1}[1:s] = \mathcal{C}^{h_2}_{i_2}[1:s]$. \end{definition} We refer to the settlement property with parameters $s, k$, and $\mathcal{I}$ as the ($s, k, \mathcal{I}$)-settlement property for brevity. We use a similar convention for other properties too. The ($s, k, \mathcal{I}$)-settlement property, roughly speaking, means that parties in $\mathcal{I}$ will agree on the order of blocks mined up to slot $s$ after $k$ more slots. We now state the common prefix property, an extensive form of safety. \begin{definition}[Common Prefix]\label{def:common_prefix} In an execution, the \emph{common prefix property with parameters $T, k \in \mathbb{N}$ and $\mathcal{I} \subseteq \mathcal{H}$} holds if, for any pair of honest players $h_1, h_2 \in \mathcal{I}$ and slots $s, i_1, i_2$ such that $s \leq T$ and $s + k \leq i_1 \leq i_2$, it holds that $\mathcal{C}^{h_1}_{i_1}[1:s] = \mathcal{C}^{h_2}_{i_2}[1:s]$. \end{definition} The intensive and extensive forms of safety have a subtle difference, which we illustrate with an example. Let $T$ be some large number. The ($T, k, \mathcal{I}$)-settlement property means the parties in $\mathcal{I}$ agree forever after slot $T+k$ about the chain up to slot $T$. This immediately implies that all parties in $\mathcal{I}$ agree forever about the chain up to slot $s$, after slot $T+k$, for any $s \leq T$. This does not, however, imply that all parties in $\mathcal{I}$ agree forever about the chain up to time $s$, after slot $s+k$, for all $s$ with $s \leq T$. This latter statement is captured by the extensive form given by the common prefix property. 
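The prefix comparison underlying these definitions can be made concrete with a short sketch. The chain representation and helper names below are illustrative, not part of the protocol: a chain is a list of blocks ordered from genesis to tip, and the $(s, k, \mathcal{I})$-settlement condition asks that all sufficiently late views of the prefix up to slot $s$ coincide.

```python
# Illustrative sketch (hypothetical names): checking the (s, k, I)-settlement
# condition on the recorded chains of one execution. A chain is a list of
# (timestamp, block_id) pairs ordered from genesis to tip.

def prefix(chain, s):
    """C[1:s]: blocks of `chain` with timestamps in {1, ..., s}."""
    return [blk for blk in chain if 1 <= blk[0] <= s]

def settlement_holds(chains, s, k, parties):
    """chains[h][i] = chain held by honest party h at the end of slot i.
    Returns True iff all parties in `parties` agree on C[1:s] at every
    recorded slot i >= s + k."""
    views = [prefix(chains[h][i], s)
             for h in parties
             for i in chains[h] if i >= s + k]
    return all(view == views[0] for view in views)

# Two parties agreeing on the prefix up to slot 3, despite differing tips:
chains = {
    "h1": {5: [(1, "a"), (3, "b"), (5, "c")]},
    "h2": {5: [(1, "a"), (3, "b"), (5, "d")]},
}
print(settlement_holds(chains, s=3, k=2, parties=["h1", "h2"]))  # True
```

The extensive (common prefix) form corresponds to requiring this check for every $s \leq T$ rather than a single $s$.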
Formally, the intensive and extensive forms of common prefix are related as follows: \begin{lemma}\label{lem:settlement_CP} Fix a set of honest players $\mathcal{I} \subseteq \mathcal{H}$ and parameters $T, k \in \mathbb{N}$. If the settlement property holds with parameters $s, k$ and $\mathcal{I}$ for all $s \leq T$, then the common prefix property holds with parameters $T, k$ and $\mathcal{I}$. \end{lemma} \begin{proof} Pick any pair of honest parties $h_1, h_2 \in \mathcal{I}$ and any slot $s \leq T$. Pick any $i_1, i_2$ satisfying $s+k \leq i_1 \leq i_2$. Consider the chains held by $h_1, h_2$ at slots $i_1, i_2$ respectively: $\mathcal{C}^{h_1}_{i_1}, \mathcal{C}^{h_2}_{i_2}$. We wish to show that $\mathcal{C}^{h_1}_{i_1}[1:s] = \mathcal{C}^{h_2}_{i_2}[1:s]$. But this follows directly from the settlement property with parameters $s, k$ and $\mathcal{I}$. \end{proof} We next state the chain quality property, first in its intensive form and then in its extensive form. \begin{definition}[Intensive Chain Quality]\label{def:chain_quality_intensive} In an execution, the \emph{intensive chain quality property} with parameters $\mu \in (0, 1)$, $s, k \in \mathbb{N}$ and $\mathcal{I} \subseteq \mathcal{H}$ holds if, for any honest player $h \in \mathcal{I}$ and slot $i \geq s + k$, $\mathcal{C}^{h}_{i}[s+1:s+k]$ contains greater than $k f \mu $ honestly mined blocks. \end{definition} \begin{definition}[Extensive Chain Quality]\label{def:chain_quality_extensive} In an execution, the \emph{extensive chain quality property} with parameters $\mu \in (0, 1)$, $T, k \in \mathbb{N}$ and $\mathcal{I} \subseteq \mathcal{H}$ holds if, for any honest player $h \in \mathcal{I}$ and slots $s, i$ such that $s \leq T$ and $i \geq s + k$, $\mathcal{C}^{h}_{i}[s+1:s+k]$ contains greater than $k f \mu $ honestly mined blocks. 
\end{definition} The relation between the intensive and extensive versions of chain quality parallels that between the settlement and common prefix property noted in Lemma \ref{lem:settlement_CP}. We state the relation formally in Lemma \ref{lem:intensive_extensive_CQ}, but omit the proof. \begin{lemma}\label{lem:intensive_extensive_CQ} Fix a set of honest players $\mathcal{I} \subseteq \mathcal{H}$ and parameters $\mu \in (0, 1)$, $T, k \in \mathbb{N}$. If the intensive chain quality property holds with parameters $\mu, s, k$ and $\mathcal{I}$ for all $s \leq T$, then the extensive chain quality property holds with parameters $\mu, T, k$ and $\mathcal{I}$. \end{lemma} \subsection{Main Result}\label{sec:main_results} \begin{definition}[$\epsilon$-honest majority]\label{def:eps_honest_maj} Consider a blockchain protocol where the leader election process has parameters $\alpha$ and $f$, and the communication network's typical delay is represented by a random variable $\Delta$. Let $G \sim \mathsf{geom} (f)$ be a random variable that is independent of $\Delta$. Let $p \triangleq \alpha\, \mathbb{P}(\Delta < G).$ Suppose the system's parameters are such that $p > 0.5$. Let $\epsilon$ be such that $p = (1+\epsilon)/2$. We say that such a protocol has \emph{$\epsilon$-honest majority}. \end{definition} Note that for any $\alpha > 0.5$ and $\Delta$ such that $\mathbb{P}(\Delta < \infty) = 1$, one can choose $f > 0$ such that $p > 0.5$. Our main result states that the intensive safety and liveness properties hold with high probability, irrespective of the behavior of the adversary. \begin{theorem}[Main Result]\label{thm:main} Consider a blockchain protocol with $\epsilon$-honest majority. 
Then for any $\mathcal{I} \subseteq \mathcal{H}$, $s\in \mathbb{N}$ and $k \in \mathbb{N}$, \iftoggle{arxiv} { \begin{equation} \label{eq:main_settlement_bnd} \mathbb{P}((s, k, \mathcal{I})\text{-settlement property is violated}) \leq p_{\textsf{settlement}} + |\mathcal{I}| p_{\textsf{unheard}} \end{equation} } { \begin{multline} \label{eq:main_settlement_bnd} \mathbb{P}((s, k, \mathcal{I})\text{-settlement property is violated}) \\ \leq p_{\textsf{settlement}} + |\mathcal{I}| p_{\textsf{unheard}} \end{multline} } where \begin{align*} p_{\textsf{settlement}} &= \exp{(-kf \epsilon^3/12)} + 3\exp{(-kf\epsilon^2/32)} \\ p_{\textsf{unheard}} & = \left[\frac{2}{ 1-(1/2)^{\epsilon/2}}\right] \exp(-kf \epsilon/16) \end{align*} Further, for any $\mu < \epsilon$, \iftoggle{arxiv} { \begin{equation} \label{eq:main_quality_bnd} \mathbb{P}((\mu, s, k, \mathcal{I})\text{-intensive chain quality property is violated}) \leq p_{\textsf{CQ}} + |\mathcal{I}| \Tilde{p}_{\textsf{unheard}} \end{equation} } { \begin{multline} \label{eq:main_quality_bnd} \mathbb{P}((\mu, s, k, \mathcal{I})\text{-intensive chain quality property is violated}) \\ \leq p_{\textsf{CQ}} + |\mathcal{I}| \Tilde{p}_{\textsf{unheard}} \end{multline} } where \begin{align*} p_{\textsf{CQ}} &= 4 \exp(-kf (\epsilon-\mu)^2 /48) \\ \tilde{p}_{\textsf{unheard}} & = \left[\frac{2}{ 1-(1/2)^{\epsilon/2}}\right] \exp(-kf(\epsilon - \mu) /8) \end{align*} \end{theorem} As a corollary, we get the following result about the extensive safety and liveness properties. This statement follows from Theorem \ref{thm:main}, Lemmas \ref{lem:settlement_CP} and \ref{lem:intensive_extensive_CQ}, and the union bound. We omit a formal proof. \begin{corollary}\label{thm:corollary} Consider a blockchain protocol with $\epsilon$-honest majority. 
Then for any $\mathcal{I} \subseteq \mathcal{H}$, $T\in \mathbb{N}$ and $k \in \mathbb{N}$, \iftoggle{arxiv} { \begin{equation*} \mathbb{P}((T, k, \mathcal{I})\text{-common prefix property is violated}) \leq T\,(p_{\textsf{settlement}} + |\mathcal{I}| p_{\textsf{unheard}}) \end{equation*} } { \begin{multline*} \mathbb{P}((T, k, \mathcal{I})\text{-common prefix property is violated}) \\ \leq T\,(p_{\textsf{settlement}} + |\mathcal{I}| p_{\textsf{unheard}}) \end{multline*} } Further, for any $\mu < \epsilon$, \iftoggle{arxiv} { \begin{equation*} \mathbb{P}((\mu, T, k, \mathcal{I})\text{-extensive chain quality property is violated})\leq T\,(p_{\textsf{CQ}} + |\mathcal{I}| \Tilde{p}_{\textsf{unheard}}) \end{equation*} } { \begin{multline*} \mathbb{P}((\mu, T, k, \mathcal{I})\text{-extensive chain quality property is violated}) \\ \leq T\,(p_{\textsf{CQ}} + |\mathcal{I}| \Tilde{p}_{\textsf{unheard}}) \end{multline*} } \end{corollary} The key difference between intensive and extensive properties can be seen from the guarantees on them. The probability that an intensive property, say the ($T, k, \mathcal{I}$)-settlement property, is violated is independent of $T$ (Theorem \ref{thm:main}). The probability that an extensive property, say the ($T, k, \mathcal{I}$)-common prefix property, is violated grows linearly with $T$ (Corollary \ref{thm:corollary}). 
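To illustrate the scale of these bounds, the sketch below evaluates $p$, $\epsilon$, and the settlement bound of Theorem \ref{thm:main} for a constant delay $\Delta \equiv d$, for which $\mathbb{P}(\Delta < G) = (1-f)^d$ when $G \sim \mathsf{geom}(f)$. The numeric parameter values are hypothetical examples chosen for illustration, not values taken from the paper.

```python
import math

# Hypothetical numeric illustration of the error bounds in the main theorem.
# For a constant delay Delta = d and G ~ Geom(f) on {1, 2, ...}:
#   P(Delta < G) = P(G > d) = (1 - f)^d.

def honest_majority_eps(alpha, f, d):
    p = alpha * (1 - f) ** d            # p = alpha * P(Delta < G)
    assert p > 0.5, "epsilon-honest majority requires p > 1/2"
    return 2 * p - 1                     # from p = (1 + eps) / 2

def settlement_violation_bound(k, f, eps, n_parties):
    # p_settlement + |I| * p_unheard, with the theorem's explicit constants.
    p_settlement = (math.exp(-k * f * eps**3 / 12)
                    + 3 * math.exp(-k * f * eps**2 / 32))
    p_unheard = (2 / (1 - 0.5 ** (eps / 2))) * math.exp(-k * f * eps / 16)
    return p_settlement + n_parties * p_unheard

eps = honest_majority_eps(alpha=0.9, f=0.05, d=2)   # ~0.62
bound = settlement_violation_bound(k=20_000, f=0.05, eps=eps, n_parties=100)
print(f"eps = {eps:.3f}, settlement bound = {bound:.2e}")
```

As the sketch suggests, with roughly $kf = 1000$ expected leaders in the wait window, the violation probability is already far below $10^{-4}$ for these parameters.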
\begin{figure}[htbp] \centering \iftoggle{arxiv} {\includegraphics[width = 0.7\textwidth]{beta.pdf}} {\includegraphics[width = \columnwidth]{formal-writeup/figures/beta.pdf}} \caption{Comparison of the security threshold guaranteed by this work for exponentially distributed delays with the tight threshold for deterministic delays (i.e., \eqref{eq:adv_threshold}).} \label{fig:plot_exponential_delay} \end{figure} Theorem \ref{thm:main} and its corollary prove security properties under the $\epsilon$-honest majority assumption for some $\epsilon > 0.$ While the results are stated for discrete time, they imply corresponding results in continuous time by taking a limit, as described in \cite{gazi2020tight}. Also, in continuous time, the probability of more than one party mining at a time is zero. Leaders are elected at times of a Poisson process of rate $f$, with a leader being the adversary with probability $\beta$ and honest with probability $1-\beta.$ The honest majority condition reduces to $(1-\beta)\mathbb{P}(\Delta < \mathsf{Exp}(f)) > \frac 1 2,$ where $\mathsf{Exp}(f)$ denotes an exponentially distributed random variable with rate parameter $f$ (mean $1/f$). In case $\Delta$ has the $\mathsf{Exp}(1/\eta)$ distribution (with mean $\eta$), the honest majority condition becomes $\beta < \frac{1-\eta f} 2.$ Figure \ref{fig:plot_exponential_delay} displays the boundary of the security region we have established (i.e. $\beta = \frac{1-\eta f} 2$), and for comparison, the boundaries of the security region guaranteed for bounded delay by \eqref{eq:adv_threshold} for $\Delta\equiv \eta,$ $\Delta\equiv 4\eta,$ and $\Delta\equiv 16 \eta.$ Consider a delay distribution that is identical to $\mathsf{Exp}(1/\eta)$ on $[0, 4\eta)$ and concentrates the rest of the mass at $4 \eta$. Such a distribution is stochastically dominated by both $\mathsf{Exp}(1/\eta)$ and the constant delay $4\eta$. 
Figure \ref{fig:plot_exponential_delay} shows that the adversarial tolerance guarantees provided by this work with $\mathsf{Exp}(1/\eta)$ delays are comparable to the best possible guarantees with constant delays $4 \eta$, for the range $f\eta < 0.2$. \section{Definitions and Preliminary Results}\label{sec:definitions} In this section, we define new terms pertaining to our model that are key to the proof of Theorem \ref{thm:main}. The two most important terms are $\mathsf{CharString}$ and $\mathsf{Unheard}$, defined in Sections \ref{sec:special_honest_slots} and \ref{sec:unheard} respectively. \subsection{Notation}\label{sec:notation} All random processes in our model are discrete-time processes, indexed by $\mathbb{N}$, $\mathbb{Z}_{+}$ or $\mathbb{Z}$ (the relevant indexing will be specified when the process is defined). For a random process $\textsf{Process}$, the notation for the $i\textsuperscript{th}$ variable is $\textsf{Process}[i]$. The portion of the process from index $i_1$ to $i_2$, both inclusive, is denoted by $\textsf{Process}[i_1:i_2]$. If $i_2 < i_1$, this denotes an empty string. The process from index $i$ onward (including $i$) is denoted by $\textsf{Process}[i: \ ]$, and the process up to index $i$ (including $i$) is denoted by $\textsf{Process}[\ : i]$. In our analysis, we often consider processes taking values in $\ensuremath{\{\perp, 0, 1\}}{}$. For such processes, define the following sets of time slots: \begin{align*} \mathcal{N}_0(\textsf{Process}[i_1: i_2]) &\triangleq \{i \in \mathbb{N} : i_1 \leq i \leq i_2, \textsf{Process}[i] = 0\} \\ \mathcal{N}_1(\textsf{Process}[i_1: i_2]) &\triangleq \{i \in \mathbb{N} : i_1 \leq i \leq i_2, \textsf{Process}[i] = 1\} \\ \mathcal{N}(\textsf{Process}[i_1: i_2]) &\triangleq \{i \in \mathbb{N} : i_1 \leq i \leq i_2, \textsf{Process}[i] \neq \perp\} \end{align*} We denote the cardinality of these sets by using $N$ instead of $\mathcal{N}$. 
For example, $N_0(\textsf{Process}[i_1 : i_2]) = |\mathcal{N}_0(\textsf{Process}[i_1 : i_2])|.$ \subsection{\textsf{LeaderString} and the compressed time scale}\label{sec:compressed_time_scale} We start by defining a representation of the leader election process---$\mathsf{LeaderString}$---that we use in our analysis. \begin{definition}[LeaderString]\label{def:leader_sequence} \emph{$\mathsf{LeaderString}$} is a process taking values in $\ensuremath{\{\perp, 0, 1\}}$, defined as follows. For each $i \geq 1$, \begin{equation} \label{eq:leader_sequence_dist} \mathsf{LeaderString}[i] = \left\{ \begin{array}{lll} \perp & \text{if } N_i = 0,\, A_i = 0 &\mbox{(prob. } 1-f) \\ 0 & \text{if } N_i = 1,\, A_i = 0 &\mbox{(prob. } \alpha f) \\ 1 & \text{if } N_i > 1 \text{ or } A_i = 1 &\mbox{(prob. } (1-\alpha)f ) \\ \end{array} \right. \end{equation} \end{definition} By the properties of the leader election process in Section \ref{sec:leader_election}, $\mathsf{LeaderString}$ is an i.i.d. process with the probabilities shown in \eqref{eq:leader_sequence_dist}. We call a slot $i$ \textit{empty} if $\mathsf{LeaderString}[i] = \,\perp$ (and \textit{non-empty} otherwise). We call a slot $i$ \textit{uniquely honest} if $\mathsf{LeaderString}[i] = 0$. Let $(\mathsf{LeaderString}[i]: i \leq 0)$ be a sequence of i.i.d. random variables with the same distribution as given in \eqref{eq:leader_sequence_dist}. With this extension, the set of non-empty slots forms a \emph{stationary renewal process} with lifetime distribution $\mathsf{geom}(f).$ Given the locations of all the renewal points, the labels at the renewal points are i.i.d. Bernoulli random variables with $\mathbb{P}(0) = \alpha$. Let $1\leq T_1 < T_2 < \ldots$ denote the non-empty slots of $\mathsf{LeaderString}$ from slot $1$ onward. Similarly, let $0\geq T_0 > T_{-1} > T_{-2} > \ldots$ index the non-empty slots before or up to slot zero, going backwards in time. 
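As an illustrative aside (the code and function names below are ours, not part of the formal development), the distribution in \eqref{eq:leader_sequence_dist} and the geometric gaps between consecutive renewal times can be checked by a short simulation, with \texttt{None} standing for $\perp$:

```python
import random

def sample_leader_string(n, f, alpha, seed=0):
    """Draw n i.i.d. symbols: P(bot) = 1-f, P(0) = alpha*f, P(1) = (1-alpha)*f.
    None stands for the empty symbol (bot)."""
    rng = random.Random(seed)
    string = []
    for _ in range(n):
        u = rng.random()
        if u < 1 - f:
            string.append(None)   # empty slot
        elif u < 1 - f + alpha * f:
            string.append(0)      # uniquely honest slot
        else:
            string.append(1)      # adversarial or multi-leader slot
    return string

def renewal_gaps(string):
    """Gaps T_j - T_{j-1} between consecutive non-empty slots (slots are 1-indexed)."""
    times = [i + 1 for i, x in enumerate(string) if x is not None]
    return [b - a for a, b in zip(times, times[1:])]
```

With $f=0.25$ and $\alpha=0.6$, the empirical mean gap concentrates near $1/f = 4$ and the fraction of $0$-labels among non-empty slots near $\alpha$, consistent with the renewal description above.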
For any $j \neq 1$, $T_j - T_{j-1}$ has distribution $\mathsf{geom}(f)$, while $T_1-T_0 = T_1 + (1-T_0) - 1,$ so that $T_1-T_0$ is the sum of two independent $\mathsf{geom}(f)$ random variables minus one. In the terminology of renewal theory, $T_1 - T_0$ is the {\em sampled lifetime} sampled at time 0. Suppose slot $T_j$ is uniquely honest, for some $j \in \mathbb{N}$. The leader of the slot, denoted by $h_j$, broadcasts a message to all other honest parties, each of which receives it after an independent random delay. Let $\textsf{delay}(T_j \rightarrow h)$ denote the delay from the leader of $T_j$ to an honest party $h \in \mathcal{H}$. Strictly speaking, $\textsf{delay}(T_j \rightarrow h)$ has distribution $\Delta$ for all $h \neq h_j$, and is equal to $0$ for $h = h_j$. For the sake of homogeneity, however, we pretend that honest leaders send themselves a message that is subject to random delay. We thus extend the notation $\textsf{delay}(T_j \rightarrow h)$ to all slots $T_j$, $j \in \mathbb{Z}$, and assign independent delay random variables to them. Then $\{\textsf{delay}(T_j \rightarrow h): h \in \mathcal{H}, j \in \mathbb{Z}\}$ are i.i.d. delay random variables. The renewal points of $\mathsf{LeaderString}$ define a new time scale: the clock ticks by one whenever a new non-empty slot occurs. Call this event-driven time scale the \textit{compressed time scale}. The following notation is used to define processes on the compressed time scale. For $s \geq 0$ and $j \geq 1$, let \begin{equation}\label{eq:def_T_s_j} T^s_j \triangleq \min\{i: N(\mathsf{LeaderString}[s+1:s+i]) = j\} \end{equation} In other words, $T^s_j$ is the $j\textsuperscript{th}$ renewal point strictly after time $s.$ Clearly, $T^0_j = T_j$. Note that, for any $s \geq 0,$ $T^s_1$ and the random variables $\{T^s_j - T^s_{j-1}\}_{j \geq 2}$ are i.i.d.
with distribution $\mathsf{geom}(f).$ Given any process $\textsf{Process}$ on the original time scale, denote its time-shifted, compressed version relative to reference slot $s$ as $\textsf{CompressedProcess}_s$, defined by \begin{align} \textsf{CompressedProcess}_s[0] &\triangleq \textsf{Process}[s] \nonumber \\ \label{eq:def_compressed_process_s} \textsf{CompressedProcess}_s[j] & \triangleq \textsf{Process}[s+T^s_j] \ \quad \text{for} \ j \geq 1 \end{align} For example, $\textsf{CompressedLeaderString}_s [1 : \ ]$ is an i.i.d. $0$-$1$ valued process with probability of $0$ equal to $\alpha$. In other words, it is a Bernoulli process with parameter $1-\alpha$. \subsection{Special Honest Slots and \textsf{CharString}}\label{sec:special_honest_slots} We now introduce a new concept called \textit{special honest slots}. We also introduce our definition of \emph{the characteristic string}, denoted by $\mathsf{CharString}$. Special honest slots are a subset of uniquely honest slots and play a role similar to {\em $\Delta$-isolated slots} in \cite{david2018ouroboros}. Namely, the blocks mined in special honest slots must be at distinct heights in $\mathcal{F}_i$, irrespective of the actions of the adversary. The process $\mathsf{CharString}$ is defined such that it marks special honest slots with symbol $0$, other non-empty slots with symbol $1$, and empty slots with symbol $\perp$. Thus, defining $\mathsf{CharString}$ is equivalent to identifying special honest slots among uniquely honest slots. We first define $\mathsf{CharString}$ in the one-time leader model and then in the i.i.d. leader model. \subsubsection{\textsf{CharString} for one-time leader model} The process $\mathsf{CharString}$ is a process indexed by $\mathbb{Z}$, taking values in $\ensuremath{\{\perp, 0, 1\}}{}$. We describe its construction, conditioned on the entire leader election process being known. 
Let $\mathsf{CharString}[i] = \perp$ for all $i$ such that $\mathsf{LeaderString}[i] = \perp$ and let $\mathsf{CharString}[i] = 1$ for all $i$ such that $\mathsf{LeaderString}[i] = 1.$ It remains to select special honest slots among uniquely honest slots. We first do so for negative time, where special honest slots have no real interpretation. For $j \leq 0$, if $\mathsf{LeaderString}[T_j] = 0$, randomly set $\mathsf{CharString}[T_j]=0$ with probability $\mathbb{P}(\Delta<T_j-T_{j-1}|T_j-T_{j-1})$, and let $\mathsf{CharString}[T_j]=1$ otherwise. The aforementioned choices are conditionally independent across all $j\leq 0.$ For positive time, special honest slots are labeled sequentially as follows. For $j \geq 1$, let $T_{j^*}$ denote the last special honest slot at or before slot $T_{j-1}$. Define slot $T_j$ to be special honest if it is uniquely honest and $R_j < T_j - T_{j-1}$, where $R_j \triangleq \textsf{delay}(T_{j^*}\to h_j)$. Note that $h_j$ receives the message from the previous special honest slot at time $T_{j^*} + R_j$, which, if $T_j$ is special honest, satisfies $T_{j^*} + R_j < T_{j^*} + T_j - T_{j-1} \leq T_j.$ Thus, the condition for $T_j$ to be a special honest slot is sufficient, but not necessary, for $h_j$ to have received the message from the previous special honest slot before slot $T_j$. \subsubsection{Internal representation and refreshed residuals} The definitions in this section are used below to define special honest slots for the i.i.d. leader model. Consider a probability mass function (pmf) $\textsf{f}$ on $\mathbb{Z}_+$, and let $X$ be a random variable with this pmf ($\mathbb{P}(X = i) = \textsf{f}[i]$).
The {\em failure rate function} of the distribution, $\textsf{FailureRate}$, is defined by \[\textsf{FailureRate}[i] \triangleq \frac{\textsf{f}[i]}{\sum_{j \geq i} \textsf{f}[j]} = \frac{\mathbb{P}(X = i)}{\mathbb{P}(X \geq i)} \quad \text{for each} \ i \geq 0\] with the convention that $\textsf{FailureRate}[i]=1$ if $\mathbb{P}(X \geq i)=0$. A random variable with pmf $\textsf{f}$ can be constructed as follows. Let $D=\min\{i\geq 0 : \textsf{U}[i]\leq \textsf{FailureRate}[i]\}$, where $\textsf{U}= (\textsf{U}[0], \textsf{U}[1], \ldots )$ is a sequence of independent random variables, each uniformly distributed on the interval $[0,1]$. We call $(\textsf{FailureRate},\textsf{U})$ the {\em internal representation} of $D$. If $D_1$ and $D_2$ are random variables with independent internal representations, then $D_1$ and $D_2$ are independent as well. Given $d\geq 0$, define the {\em refreshed residual} of $D$ at elapsed time $d$ by $\mathsf{refresh}_d(D)=\min\{i\geq 0: \textsf{U}[i+d]\leq \textsf{FailureRate}[i]\}.$ Although $\mathsf{refresh}_d(D)$ depends on the internal representation of $D$, the internal representation is suppressed in the notation. \begin{lemma} \label{lem:refresh_property} Let $D$ be a $\mathbb{Z}_+$-valued random variable with an internal representation and let $d\geq 0.$ The following hold. \begin{description} \item (a) $\mathsf{refresh}_d(D) \stackrel{d.}{=} D$. \item (b) The random variable $\min \{d, D\}$ is independent of $\mathsf{refresh}_d(D)$. More generally, if $0 = d_0 < d_1 < \cdots < d_n$, then the random variables $\min \{d_j - d_{j-1}, \mathsf{refresh}_{d_{j-1}}(D)\}$, $j \in [n]$, and $\mathsf{refresh}_{d_n}(D)$ are mutually independent.
\item (c) If $D$ has a non-decreasing failure rate function, then $D\leq d + \mathsf{refresh}_d(D).$ \end{description} \end{lemma} \begin{proof} Statement (a) follows from $\textsf{U} \stackrel{d.}{=} \textsf{U}[d: \ ].$ The first statement in (b) follows from the facts that $\min \{d, D\}$ is determined by $\textsf{U}[0 : d-1]$ and $\mathsf{refresh}_d(D)$ is determined by $\textsf{U}[d: \ ]$. The generalization in (b) follows similarly: the indicated random variables are functions of disjoint subsets of $\textsf{U}$. Statement (c) is proved as follows. \begin{align*} D & \leq \min\{i\geq d: \textsf{U}[i] \leq \textsf{FailureRate}[i]\}\\ & = d + \min\{i\geq 0: \textsf{U}[i+d] \leq \textsf{FailureRate}[i+d]\} \\ & \leq d + \min\{i\geq 0: \textsf{U}[i+d] \leq \textsf{FailureRate}[i]\} \\ & = d + \mathsf{refresh}_d(D). \end{align*} \end{proof} \subsubsection{\textsf{CharString} for i.i.d. leader model} \label{sec:SHS} In this section we define $\mathsf{CharString}$ in the i.i.d. leader model. Without loss of generality, we assume all message delays have independent internal representations. The definition of $\mathsf{CharString}$ is the same as in the one-time leader model, except that the variables $R_j$ are defined differently. For each $j\geq 1,$ let $R_j \triangleq \mathsf{refresh}_{T_{j-1} - T_{j^*}}(\mathsf{delay}(T_{j^*} \to h_j))$. Just as before, define $T_j$ to be a special honest slot (i.e. $\mathsf{CharString}[T_j]=0$) if $\mathsf{LeaderString}[T_j]=0$ and $ R_j < T_j - T_{j-1}.$ Note that $h_j$ receives the message from the previous special honest slot at time $T_{j^*} + \mathsf{delay}(T_{j^*} \to h_j)$. If $T_j$ is special honest, then by Lemma \ref{lem:refresh_property}(c), \iftoggle{arxiv} { \begin{equation*} T_{j^*} + \mathsf{delay}(T_{j^*} \to h_j) \leq T_{j^*} + (T_{j-1}-T_{j^*}) + R_j < T_{j-1} + T_{j} - T_{j-1} = T_j.
\end{equation*} } { \begin{multline*} T_{j^*} + \mathsf{delay}(T_{j^*} \to h_j) \leq T_{j^*} + (T_{j-1}-T_{j^*}) + R_j \\ < T_{j-1} + T_{j} - T_{j-1} = T_j. \end{multline*} } Thus, just as for the one-time leader model, the condition for $T_j$ to be a special honest slot is sufficient, but not necessary, for $h_j$ to have received the message from the previous special honest slot. \subsubsection{The distribution of \textsf{CharString}} The second lemma in this section characterizes the distribution of the random process $\mathsf{CharString}.$ Some preliminaries are given first. All results in this section hold for both the i.i.d. leader model and the one-time leader model. For $j\geq 2$, define the following information set (i.e. $\sigma$-algebra generated by the set of random variables shown): \begin{align*} \mathsf{Info}_j = \sigma \left\{ \begin{array}{c} \mbox{leader election process from slot 1 up to slot } T_{j-1} \\ \mathsf{CharString}[:T_{j-1}] , ~ h_j \end{array} \right\} \end{align*} Note that the leader election process specifies the identities of the leaders of each slot. \begin{lemma} \label{lem:CharStringProperties} For any $j\geq 2,$ $ \mathsf{Info}_{j},$ $R_j,$ and $T_j-T_{j-1}$ are mutually independent, $T_j- T_{j-1}$ has the $\mathsf{geom}(f)$ probability distribution, and $R_j$ has the same distribution as $\Delta.$ \end{lemma} \begin{proof} In the one-time leader model, the lemma is true by the construction of $\mathsf{CharString}.$ The lemma is true in the i.i.d. leader model by the construction of $\mathsf{CharString}$ and Lemma \ref{lem:refresh_property} (a) and (b). \end{proof} The main result of this section is the following lemma. 
\begin{lemma} [Renewal structure of $\mathsf{CharString}$] \label{lem:renewal_prop_CharString} The sequence of non-empty slots of $\mathsf{CharString}$ forms a stationary renewal process with lifetime distribution $\mathsf{geom}(f).$ Conditioned on the renewal times $(T_j: j\in \mathbb{Z}),$ the labels $(\mathsf{CharString}[T_j]: j\in \mathbb{Z})$ are independent and for all $j\in \mathbb{Z},$ \begin{align} \label{eq:CharStringConditional} \mathbb{P}(\mathsf{CharString}[T_j]=0|T_{j''}:j''\in \mathbb{Z}) = \alpha \mathbb{P}(\Delta < T_j - T_{j-1}|T_j - T_{j-1}) \end{align} \end{lemma} \begin{proof} The first sentence is true because $\mathsf{CharString}$ has the same set of non-empty slots as $\mathsf{LeaderString}$. Equation \eqref{eq:CharStringConditional} is true by construction for $j\leq 1$. Consider the following statement for $j\geq 1:$ ${\mathcal{S}}_j:$ The sequence of non-empty slots up to time $T_j$, $(T_{j''}: j'' \leq j),$ forms a stationary renewal process with lifetime distribution $\mathsf{geom}(f)$ and conditioned on such process, the labels $(\mathsf{CharString}[T_{j'}]: j'\leq j)$ are conditionally independent, and for any $j' \leq j$, \iftoggle{arxiv} { \begin{equation*} \mathbb{P}(\mathsf{CharString}[T_{j'}]=0|T_{j''}:j''\leq j) = \alpha \mathbb{P}(\Delta < T_{j'} - T_{j'-1}|T_{j'} - T_{j'-1}) \end{equation*} } { \begin{multline*} \mathbb{P}(\mathsf{CharString}[T_{j'}]=0|T_{j''}:j''\leq j) = \\ \alpha \mathbb{P}(\Delta < T_{j'} - T_{j'-1}|T_{j'} - T_{j'-1}) \end{multline*} } It is shown next that ${\mathcal{S}}_j$ is true for all $j\geq 1$ by induction on $j.$ The base case $j=1$ is true by the construction of $\mathsf{CharString}.$ Suppose ${\mathcal{S}}_{j-1}$ is true for some $j\geq 2.$ Note that $\textsf{Info}_j$ includes the information in $(T_{j''}: j'' \leq j-1),$ so Lemma \ref{lem:CharStringProperties} shows that the next lifetime is independent of the previous ones and the probability the renewal point at the end of the next lifetime is 
labeled 0 depends on the lifetime in the appropriate way. Therefore, ${\mathcal{S}}_j$ is true, completing the proof by induction that ${\mathcal{S}}_j$ holds for all $j\geq 1.$ The statement of Lemma \ref{lem:renewal_prop_CharString} pertains to the joint distribution of $((T_j, \mathsf{CharString}[T_j]) : j\in \mathbb{Z}),$ which by definition is a statement about any finite sub-collection of the variables involved. For any finite sub-collection of the variables, the truth of ${\mathcal{S}}_j$ for $j$ sufficiently large implies that the finite sub-collection of variables have the joint distribution specified by the lemma, completing the proof of the lemma. \end{proof} Properties of $\mathsf{CompressedCharString}$ follow as a corollary of Lemma \ref{lem:renewal_prop_CharString}. \begin{lemma} \label{lem:CCS} For any $s\geq 0$, \[\mathbb{P}(\mathsf{CompressedCharString}_s[1]=0 \vert \mathsf{CharString}[\ :s]) \geq p.\] Further, for any $s\geq 0$ and $j \geq 2$, \[\mathbb{P}(\mathsf{CompressedCharString}_s[j]=0 \vert \mathsf{CharString}[\ :T^s_{j-1}]) = p.\] Here, $p$ is the parameter defined in Definition \ref{def:eps_honest_maj}. \end{lemma} \begin{proof} Lemma \ref{lem:renewal_prop_CharString} implies that the right-hand sides of the two statements to be proved are the same for all $s\geq 0,$ and the statements follow for $s=0$ by Lemma \ref{lem:renewal_prop_CharString} as well. \end{proof} \subsection{The \textsf{Unheard} process}\label{sec:unheard} Every honest party suffers some delay in receiving messages from special honest slots, and therefore may not have heard of all the special honest broadcasts. To prove security guarantees for a certain party $h \in \mathcal{H}$, we consider the delays suffered by $h$ alone in receiving messages from the leaders of special honest slots. 
Likewise, if we wish to prove security guarantees for a group of honest parties $\mathcal{I} \subset \mathcal{H}$, then we consider the delays suffered by all the parties in $\mathcal{I}$, but not other honest parties. The only other relevant delays for the security guarantees for $\mathcal{I}$ are the delays among the honest leaders, and these are appropriately incorporated into the definition of special honest slots. \begin{definition}[LatestHeard and Unheard] For an honest party $h$ and $i\geq 1,$ let $\mathsf{LatestHeard}_h[i]$ denote the special honest slot with greatest index that $h$ has heard by the end of slot $i$. That is, \iftoggle{arxiv} { \begin{equation*} \mathsf{LatestHeard}_h[i] = \max\{i': 1\leq i' \leq i, \mathsf{CharString}[i']=0, i' + \mathsf{delay}(i'\to h) \leq i\}, \end{equation*} } { \begin{multline*} \mathsf{LatestHeard}_h[i] = \\ \max\{i': 1\leq i' \leq i, \mathsf{CharString}[i']=0, i' + \mathsf{delay}(i'\to h) \leq i\}, \end{multline*} } with the convention that the maximum of an empty set is $-\infty.$ Let $\mathsf{Unheard}_h[i]$ denote the number of special honest slots after the slot containing the most recent special honest broadcast heard by $h$ by slot $i$. That is, \iftoggle{arxiv} { \begin{equation*} \mathsf{Unheard}_h[i] = |\{i'': \max\{0,\mathsf{LatestHeard}_h[i]\} < i'' \leq i, \mathsf{CharString}[i'']=0\}|. \end{equation*} } { \begin{multline*} \mathsf{Unheard}_h[i] = \\ |\{i'': \max\{0,\mathsf{LatestHeard}_h[i]\} < i'' \leq i, \mathsf{CharString}[i'']=0\}|. \end{multline*} } Additionally, for any honest party $h,$ let $\mathsf{CompressedUnheard}_{h,s}$ be the compressed process corresponding to process $\mathsf{Unheard}_h$ and reference slot $s$, as in Section \ref{sec:compressed_time_scale}.
Finally, given a set of honest parties $\mathcal{I}$, let \[\mathsf{LatestHeard}_\mathcal{I}[i]= \min_{h\in \mathcal{I}} \mathsf{LatestHeard}_h[i],\] \[\mathsf{Unheard}_\mathcal{I}[i] = \max_{h\in \mathcal{I}}\mathsf{Unheard}_h[i],\] \[\mathsf{CompressedUnheard}_{\mathcal{I},s}[j] = \max_{h\in \mathcal{I}}\mathsf{CompressedUnheard}_{h,s}[j].\] \end{definition} For example, $\mathsf{Unheard}_h[i] = 2$ means that by the end of slot $i$, $h$ had not heard the last two special honest slots occurring before or at $i$, and it either heard the third most recent special honest slot before slot $i$ or there were only two special honest slots during $[1:i].$ \begin{restatable}[]{lemma}{distUnheard}\label{lem:dist_unheard} Let $q = \mathbb{P}(\Delta \leq \mathsf{geom}(f)).$ Then the following statements hold: \\ (a) For any $i\geq 1$, $\mathbb{P}(\mathsf{Unheard}_h[i] > a) \leq (1-q)^a$ for all integers $a\geq 0.$ \\ (b) For any $s\geq 1$, and $j\geq 1$, $\mathbb{P}(\mathsf{CompressedUnheard}_{h,s}[j] > a) \leq (1-q)^a$ for all integers $a\geq 0.$ \end{restatable} The proof of this lemma is given in Appendix \ref{app:lemma_unheard}. The following lemma is a consequence of Lemma \ref{lem:dist_unheard} and the union bound. 
\begin{lemma} \label{lem:Uheard_line_bnd} For any $k' \in \mathbb{N}$, $B \geq 0$ and $c > 0$, \iftoggle{arxiv} { \begin{equation}\label{eq:RHS_comp} \mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq B + c(j-k')\text{ for some } j \geq k') \leq \left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-Bq) \end{equation} } { \begin{multline}\label{eq:RHS_comp} \mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq B + c(j-k')\text{ for some } j \geq k') \\ \leq \left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-Bq) \end{multline} } \end{lemma} \begin{proof} Lemma \ref{lem:dist_unheard} implies $ \mathbb{P}(\mathsf{CompressedUnheard}_{h,s}[j] \geq t) \leq (1 - q)^{t-1} \mbox{ for } t \in \mathbb{R}_+.$ Substituting $B + c(j-k')$ for $t$ and using the union bound by summing over the possible values of $j-k'$ yields that the left-hand side of \eqref{eq:RHS_comp} is bounded from above by \begin{align*} \sum_{d=0}^{\infty}(1 - q)^{B+cd-1} = \left[\frac{1}{(1-q)(1-(1-q)^c)}\right] (1-q)^B. \end{align*} Since $(1-q)^B \leq \exp(-B q)$, the bound in equation \eqref{eq:RHS_comp} follows. \end{proof} \section{Lemmas on Deterministic Properties}\label{sec:deterministic_lemmas} In this section, we deduce some necessary conditions for violations of settlement and chain quality. The main tool is the notion of a {\em fork}, which describes some constraints on the possible blocktrees in an execution. We then define {\em reach} and {\em margin}, which are functions of a characteristic string and its associated fork. These metrics, first introduced in the Ouroboros line of work \cite{kiayias2017ouroboros, david2018ouroboros, blum2020combinatorics}, prove useful for analyzing settlement and chain quality violations, and we adapt the Ouroboros definitions to our setting. We introduce the basic terminology used for analyzing forks in Sections \ref{sec:forks}-\ref{sec:viable_tines_balanced_forks}.
Sections \ref{sec:settlement_balanced_forks}-\ref{sec:settlement_margin} then focus on the settlement and Section \ref{sec:intensive_chain_quality} focuses on chain quality. \subsection{Forks}\label{sec:forks} Recall from Section \ref{sec:preliminaries} that $\mathcal{F}_i$ is a labeled, directed tree, representing the set of all blocks produced until the end of slot $i$. $\mathcal{F}_i$ depends on two factors: the adversary's actions and the random components of the protocol beyond the adversary's control. The characteristic string separates these two factors, by capturing all components beyond the adversary's control. The characteristic string, thus, imposes constraints on the possible $\mathcal{F}_i$ that the adversary can construct. These constraints are aptly described by the notion of a fork, which we define next. \begin{definition}[Fork]\label{def:fork} Let $w \in \ensuremath{\{\perp, 0, 1\}}{}^*$ be a finite string. A \emph{fork with respect to $w$} is a directed, rooted tree $F = (V,E)$ with a labeling $\ell:V \rightarrow \{0\} \cup \mathcal{N}(w)$ that satisfies the following properties. \begin{itemize} \item each edge of $F$ is directed away from the root \item the root $r \in V$ is given the label $\ell(r) = 0$ \item the labels along any directed path are strictly increasing \item each index $s \in \mathcal{N}_0(w)$ is the label of exactly one vertex of $F$ \item the function $\mathbf{d} : \mathcal{N}_0(w) \rightarrow \mathbb{N}$, defined so that $\mathbf{d}(s)$ is the depth in $F$ of the unique vertex $v$ for which $\ell(v) = s$, satisfies the following monotonicity property: if $s_1 < s_2$, then $\mathbf{d}(s_1) < \mathbf{d}(s_2)$ \end{itemize} We use the notation $F \vdash w$ if $F$ is a fork with respect to $w$. 
\end{definition} We now show that in any execution, irrespective of the adversary's actions and the instantiations of the random components, $\mathcal{F}_i$ is a fork with respect to $\mathsf{CharString}[1:i]$ (i.e., it satisfies the five properties listed in Definition \ref{def:fork}). The first three properties follow from the basic properties of blockchains described in Section \ref{sec:preliminaries}. The fourth property is immediate given that special honest slots are a subset of uniquely honest slots, and that every honest leader proposes exactly one block when it is chosen as a leader. The last property is implied by the following two facts. First, every honest leader builds a chain that is strictly longer than any of the chains it has heard previously. Second, every special honest slot's leader has heard of the previous special honest slot's broadcast in a previous slot (see Section \ref{sec:special_honest_slots}). All honestly held chains, $\mathcal{C}^h_s$, $s \leq i$, are considered to be \textit{tines} in $\mathcal{F}_i$, where tines are defined as follows: \begin{definition}[Tine]\label{def:tine} Let $w \in \ensuremath{\{\perp, 0, 1\}}{}^*$ be a finite string. Let $F \vdash w$ be a fork. A \emph{tine} $t$ of $F$ is a directed path starting from the root. This is denoted by $t \in F$. For any tine $t$ define \emph{$\text{length}(t)$} to be the number of edges in the path, and for any vertex $v$ define its depth to be the length of the unique tine that ends at $v$. Also define $\ell(t)$ to be the label of the vertex at the end of $t$. \end{definition} In Section \ref{sec:viable_tines_balanced_forks}, we further characterize honestly held chains by defining \emph{viable tines}. We end this section with an important note about terminology.
If $w \in \ensuremath{\{\perp, 0, 1\}}{}^*$ is a finite string and $F=(V,E)$ is a fork with respect to $w$, we call a slot $i$ an {\em adversarial slot} if $w[i]=1$ and we call a vertex in $V$ an \emph{adversarial block} if its label is an adversarial slot. In particular, consider $\mathsf{CharString}$. We treat a slot $i$ with $\mathsf{CharString}[i] = 1$ as adversarial, \emph{even if} $\mathsf{LeaderString}[i] = 0$. In other words, we treat uniquely honest slots that are not special honest as adversarial. \subsection{Reach and Margin}\label{sec:reach_margin} In this subsection, we define the terms reach and margin. These were introduced in earlier works (\cite{kiayias2017ouroboros, david2018ouroboros, blum2020combinatorics}). For a single point of comparison, we refer to \cite{blum2020combinatorics}. Our definitions are different from those in \cite{blum2020combinatorics} in two minor respects. First, our definitions are with respect to characteristic strings in $\ensuremath{\{\perp, 0, 1\}}{}^*$, instead of $\{0, 1\}^*$ as in \cite{blum2020combinatorics}. Second, in the following definition, $t_1 \nsim_s t_2$ or being $s$-disjoint means the tines do not share any nodes with label greater than or equal to $s$, whereas in \cite{blum2020combinatorics} it means the tines do not share any nodes with label (strictly) greater than $s.$ The version we use is more natural for considering violations of the $s,k$ settlement property. In what follows, $i \in \mathbb{N}$ and $w \in \ensuremath{\{\perp, 0, 1\}}{}^i$ are arbitrary. \begin{definition}[The $\sim$ relation]\label{def:disjoint_tines} Let $F \vdash w$. For two tines $t_1$ and $t_2$ of $F$, write $t_1 \sim t_2$ if $t_1$ and $t_2$ share an edge; otherwise write $t_1 \nsim t_2$ and refer to them as \emph{disjoint tines}.
For any $s \leq i$, write $t_1 \sim_s t_2$ if $t_1$ and $t_2$ share a node with a label greater than or equal to $s$; otherwise, write $t_1 \nsim_s t_2$ and call such tines \emph{$s$-disjoint}. \end{definition} \begin{definition}[Closed fork]\label{def:closed_fork} A fork $F \vdash w$ is closed if every leaf in $F$ is special honest. In other words, every leaf in $F$ has a label from the set $\mathcal{N}_0(w)$. \end{definition} \begin{definition}[Closure of a fork]\label{def:closure} Given a fork $F \vdash w$, the closure of $F$, $\overline{F} \vdash w$ is a closed fork obtained from $F$ by trimming all trailing adversarial blocks from all tines of $F$. \end{definition} \begin{definition}[Gap, Reserve, Reach]\label{def:gap_reserve_reach} For a closed fork $F \vdash w$ and its unique longest tine $\hat{t}$, define the \emph{gap of a tine $t\in F$} by $ \text{gap}(t) \triangleq \text{length}(\hat{t}) - \text{length}(t).$ Define the \emph{reserve of $t$}, denoted $\text{reserve}(t)$, to be the number of adversarial indices in $w$ that appear after the terminating vertex of $t$. In other words, if $v$ is the last vertex of $t$, then $\text{reserve}(t) \triangleq \left \vert \{ i > \ell(v)\,|\, w[i] = 1\}\right \vert.$ These quantities are used to define the \emph{reach of a tine $t$}: $\text{reach}(t) \triangleq \text{reserve}(t) - \text{gap}(t).$ \end{definition} For the intuition behind these definitions, we refer the reader to \cite{kiayias2017ouroboros, blum2020combinatorics}. \begin{definition}[Reach of a fork or string]\label{def:max_reach} For a closed fork $F \vdash w$, define $\mathsf{Reach}(F,w)$ to be the largest reach attained by any tine of $F$ (i.e., $\mathsf{Reach}(F,w) \triangleq \max_{t \in F} \text{reach}(t)$). 
We overload this notation to denote the maximum reach over all closed forks with respect to a finite-length characteristic string $w$: \[\mathsf{Reach}(w) \triangleq \max_{F \vdash w,\, F \text{ closed}} \mathsf{Reach}(F,w).\] \end{definition} Note that $\mathsf{Reach}(F,w)$ is non-negative, because the longest tine of any fork always has non-negative reach. \begin{definition}[Margin of a fork or string]\label{def:margin} For a closed fork $F \vdash w$ and $s < i$, define the margin of $(F,w)$ relative to $s$ by: \emph{\[\mathsf{Margin}_s(F,w) \triangleq \max_{t_1 \nsim_s t_2} \min\{\textnormal{reach}(t_1), \textnormal{reach}(t_2)\}.\]} Once again, we overload notation to denote the relative margin of a string. \emph{\[\mathsf{Margin}_s(w) \triangleq \max_{F \vdash w,\, F \text{ closed}} \mathsf{Margin}_s(F,w).\]} \end{definition} For an infinite string $w \in \ensuremath{\{\perp, 0, 1\}}{}^\mathbb{N}$, $\mathsf{Reach}$ and $\mathsf{Margin}_s$ obey the recursive formulae we state below in equations (\ref{eq:reach_recursive}) and (\ref{eq:margin_recursive}). These are similar to those in Lemmas 2 and 3 in \cite{blum2020combinatorics}, with minor differences accounting for the two factors mentioned at the beginning of this section. The inclusion of $\perp$'s is inconsequential, as we show here. Suppose, for some $i$, $w[i] = \perp$. Then $F \vdash w[1:i-1] $ if and only if $ F \vdash w[1:i]$. It follows from Definitions \ref{def:gap_reserve_reach}, \ref{def:max_reach} and \ref{def:margin} that $\mathsf{Reach}(w[1:i]) = \mathsf{Reach}(w[1:i-1])$ and $\mathsf{Margin}_s(w[1:i]) = \mathsf{Margin}_s(w[1:i-1])$. For the complete proof of the following recursions, we refer the reader to \cite{kiayias2017ouroboros, blum2020combinatorics}. For the sake of defining the recursions, we define these quantities for an empty string as well.
Let $\mathsf{Reach}(w[1:0]) = 0$, and for $i \geq 1$, \begin{equation}\label{eq:reach_recursive} \mathsf{Reach}(w[1:i]) = \begin{cases} \mathsf{Reach}(w[1:i-1]) &\text{if } w[i] = \,\perp \\ \mathsf{Reach}(w[1:i-1]) + 1 &\text{if } w[i] = 1 \\ (\mathsf{Reach}(w[1:i-1])-1)_+ &\text{if } w[i] = 0 \end{cases} \end{equation} Let $s \in \mathbb{N}$ and $i \in \mathbb{Z}_+$. Then $\mathsf{Margin}_s(w[1:i]) = \mathsf{Reach}(w[1:i]) \mbox{ for } i < s$, and for $i \geq s$, \iftoggle{arxiv} { \begin{equation}\label{eq:margin_recursive} \mathsf{Margin}_s(w[1:i]) = \begin{cases} \mathsf{Margin}_s(w[1:i-1]) &\text{if } w[i] = \,\perp \\ \mathsf{Margin}_s(w[1:i-1]) + 1 &\text{if } w[i] = 1 \\ \mathsf{Margin}_s(w[1:i-1]) &\text{if } w[i] = 0 \text{ and }\\ & \mathsf{Reach}(w[1:i-1]) > \mathsf{Margin}_s(w[1:i-1]) = 0 \\ \mathsf{Margin}_s(w[1:i-1]) - 1 &\text{if } w[i] = 0 \text{ and }\\ & (\mathsf{Reach}(w[1:i-1]) = 0 \text{ or } \mathsf{Margin}_s(w[1:i-1]) \neq 0) \end{cases} \end{equation} } { \begin{multline}\label{eq:margin_recursive} \mathsf{Margin}_s(w[1:i]) = \\ \begin{cases} \mathsf{Margin}_s(w[1:i-1]) &\text{if } w[i] = \,\perp \\ \mathsf{Margin}_s(w[1:i-1]) + 1 &\text{if } w[i] = 1 \\ \mathsf{Margin}_s(w[1:i-1]) &\text{if } w[i] = 0 \text{ and }\\ & \mathsf{Reach}(w[1:i-1]) \\ & > \mathsf{Margin}_s(w[1:i-1]) = 0 \\ \mathsf{Margin}_s(w[1:i-1]) - 1 &\text{if } w[i] = 0 \text{ and }\\ & (\mathsf{Reach}(w[1:i-1]) = 0 \\ & \text{ or } \mathsf{Margin}_s(w[1:i-1]) \neq 0) \end{cases} \end{multline} } The fact that $\mathsf{Margin}_s(w[1:i]) = \mathsf{Reach}(w[1:i])$ for $i < s$ is not explicitly shown in \cite{blum2020combinatorics}, so we prove it here. It suffices to show that $\mathsf{Margin}_s(F,w[1:i]) = \mathsf{Reach}(F,w[1:i])$ for $F \vdash w[1:i]$, $F$ closed, and $i < s$. The desired property holds because no tine in $F$ contains a block with a label greater than or equal to $s$, and every tine is therefore $s$-disjoint with itself.
Since every tine of $F$ is $s$-disjoint with itself, the maximizing pair in Definition \ref{def:margin} may take $t_1 = t_2$ to be the tine of largest reach; $\mathsf{Margin}_s(F)$, in general the second largest reach among $s$-disjoint pairs of tines, is therefore equal to the largest reach among all tines, $\mathsf{Reach}(F)$. \subsection{Viable Tines and Balanced Forks}\label{sec:viable_tines_balanced_forks} We now introduce the terms \textit{viable tines} and \textit{balanced forks}, which are borrowed from \cite{kiayias2017ouroboros, blum2020combinatorics} but modified appropriately to suit our analysis. \begin{definition}[Viable Tine]\label{def:viable_tine} Let $i \in \mathbb{N}$ and $w \in \ensuremath{\{\perp, 0, 1\}}{}^i$ be given. Let $F \vdash w$ be a fork and let $t$ be a tine of $F$. Say that $t$ is \emph{viable} if for all \emph{$s \in \mathcal{N}_0(w), \ \mathbf{d}(s) \leq \text{length}(t)$}. Similarly, $t$ is \emph{$l$-viable} if for all \emph{$s \in \mathcal{N}_0(w[1 : l]), \ \mathbf{d}(s) \leq \text{length}(t)$}. \end{definition} Note that viability of a tine $t$ is defined in the context of a fixed fork $F$ and characteristic string $w$ with $F \vdash w$ ($i$ is implicit; it is the length of $w$). When specializing to $\mathcal{F}_i \vdash \mathsf{CharString}[1:i]$, $l$-viable tines have the following interpretation. For an honest party $h$, let $l = \mathsf{LatestHeard}_h[i]$. Then $\mathcal{C}^h_i$ is an $l$-viable tine in $\mathcal{F}_i$. We note some useful facts concerning viable tines. These facts are used in the proofs of the subsequent lemmas. \begin{itemize} \item Given $i \in \mathbb{N}, w \in \ensuremath{\{\perp, 0, 1\}}{}^i$ and $F \vdash w$, a viable tine in $F$ is equivalent to an $i$-viable tine. If a tine is $l_1$-viable, it is also $l_2$-viable for every $l_2 < l_1$. \item If $t_1$ is an $l$-viable tine in $F$, and $t_2 \in F$ is a tine that is at least as long as $t_1$, then $t_2$ is also an $l$-viable tine. \item If $t \in F$ is at least as long as the longest tine in $\Bar{F}$, $t$ is viable in $F$.
\end{itemize} \begin{definition}[Balanced Forks]\label{def:balanced_forks} Let $i \in \mathbb{N}, w \in \ensuremath{\{\perp, 0, 1\}}{}^i$, and $s \in \mathbb{N}$ such that $s \leq i$. A fork \emph{$F \vdash w$} is \emph{$s$-balanced} if it contains two tines $t_1, t_2$ s.t. both tines are viable and $t_1 \nsim_s t_2$. Similarly, $F$ is \emph{($s, l$)-balanced} if it contains two tines $t_1, t_2$ s.t. both tines are $l$-viable and $t_1 \nsim_s t_2$. \end{definition} In principle, we could allow for $s > i$ in the above definition. However, all forks $F \vdash w$ are $s$-balanced if $s > i$. This is because the longest tine in a fork is always viable, and it is $s$-disjoint with itself if $s > i$. Similarly, for any $l < s$, any fork is ($s, l$)-balanced. For any $l$, there is always an $l$-viable tine composed of blocks with labels $\leq l$ (the longest tine ending at a vertex with label in $\mathcal{N}_0(w[1:l])$); such a tine is $s$-disjoint with itself. Next, we introduce the notion of fork prefixes as they appear frequently in our proofs. \begin{definition}[Fork Prefixes]\label{def:fork_prefixes} Let $i \in \mathbb{N}, w \in \ensuremath{\{\perp, 0, 1\}}{}^i$ and $i' \in \mathbb{N}$ such that $i' \leq i$ be given. For two forks $F \vdash w$, $F' \vdash w[1:i']$, say that $F'$ is a prefix of $F$ if $F'$ is a consistently labeled sub-graph of $F$. This is written as $F' \sqsubseteq F$. \end{definition} For every tine $t \in F$, there is a unique tine $t' \in F'$ with the vertices of $t'$ being the vertices of $t$ that are in $F'.$ Note that $\mathcal{F}_{i'} \sqsubseteq \mathcal{F}_{i}$ for any $i' < i$. In addition, for any $w \in \ensuremath{\{\perp, 0, 1\}}{}^*$ and any $F \vdash w$, $\Bar{F} \sqsubseteq F$. If $F'$ is a prefix of $F$, we say that $F$ is a suffix of $F'$. The notion of disjoint tines carries across forks that are prefixes of each other.
Suppose $i \in \mathbb{N}, w \in \ensuremath{\{\perp, 0, 1\}}{}^i$ and $s \in \mathbb{N}$ such that $s \leq i$ are given. Let $F \vdash w$ be a fork containing two tines $t_1, t_2$ such that $t_1 \nsim_s t_2$. For some $i' \leq i$, let $F' \vdash w[1:i']$ be a prefix of $F$, and let $t'_1$, $t'_2$ be tines corresponding to $t_1$ and $t_2$ respectively. Then $t'_1 \nsim_s t'_2$. A slightly technical point to note is that this statement holds irrespective of whether $i' \geq s$ or $i' < s$; in the latter case, it is trivial as any tine $t$ such that $\ell(t) < s$ satisfies $t \nsim_s t$. \subsection{Settlement and Balanced Forks} \label{sec:settlement_balanced_forks} We first introduce some terminology to reason about events concerning the settlement property for a given execution. Given $s, k \geq 1$, and a subset $\mathcal{I}$ of honest parties, we define the event: \[\mathcal{E}_{\text{settlement}} \triangleq \{\forall \ h_1, h_2 \in \mathcal{I}, \ \forall i_1, i_2 \geq s+k, \mathcal{C}^{h_1}_{i_1}[1:s] = \mathcal{C}^{h_2}_{i_2}[1:s]\}\] and, for $i\geq 1$, we define the event: \iftoggle{arxiv} { \begin{equation} \mathcal{E}_{i\text{-settlement}} \triangleq \{\forall \ h_1, h_2 \in \mathcal{I}, \ \mathcal{C}^{h_1}_{i}[1:s] = \mathcal{C}^{h_2}_{i}[1:s]\} \cap \{\forall \ h \in \mathcal{I}, \mathcal{C}^{h}_{i}[1:s] = \mathcal{C}^{h}_{i+1}[1:s]\} \label{eq:def_i_settlement} \end{equation} } { \begin{multline} \mathcal{E}_{i\text{-settlement}} \triangleq \{\forall \ h_1, h_2 \in \mathcal{I}, \ \mathcal{C}^{h_1}_{i}[1:s] = \mathcal{C}^{h_2}_{i}[1:s]\} \\ \cap \{\forall \ h \in \mathcal{I}, \mathcal{C}^{h}_{i}[1:s] = \mathcal{C}^{h}_{i+1}[1:s]\} \label{eq:def_i_settlement} \end{multline} } From these definitions, we deduce that \iftoggle{arxiv} { \begin{equation}\label{eq:def_i_settlement_complement} \mathcal{E}^c_{i\text{-settlement}} = \{\exists \ h_1, h_2 \in \mathcal{I} \text{ such that } \mathcal{C}^{h_1}_{i}[1:s] \neq \mathcal{C}^{h_2}_{i}[1:s]\} \cup \{\exists \ h
\in \mathcal{I} \text{ such that } \mathcal{C}^{h}_{i}[1:s] \neq \mathcal{C}^{h}_{i+1}[1:s]\} \end{equation} } { \begin{multline}\label{eq:def_i_settlement_complement} \mathcal{E}^c_{i\text{-settlement}} = \{\exists \ h_1, h_2 \in \mathcal{I} \text{ such that } \mathcal{C}^{h_1}_{i}[1:s] \neq \mathcal{C}^{h_2}_{i}[1:s]\} \\ \cup \{\exists \ h \in \mathcal{I} \text{ such that } \mathcal{C}^{h}_{i}[1:s] \neq \mathcal{C}^{h}_{i+1}[1:s]\} \end{multline} } Say that the settlement property with parameters $s, k, \mathcal{I}$ is \emph{violated at slot $i$} if $\mathcal{E}^c_{i\text{-settlement}}$ occurs. In words, this means that there exist two different honest parties who hold chains at slot $i$ that do not agree on slots up to $s$, or there exists an honest party whose chain at slot $i+1$ does not agree with its chain at slot $i$ on slots up to $s$. Suppose the $i$-settlement property is \textit{not} violated for any slot $i$ such that $i \geq s + k$. Then all honestly held chains (among those in $\mathcal{I}$) agree up to slot $s$, from slot $s+k$ onward. This can be argued by induction. Therefore, $ \mathcal{E}_{\text{settlement}} = \bigcap_{i \geq s + k} \mathcal{E}_{i\text{-settlement}}$ or, equivalently, \begin{equation}\label{eq:settlement_splitting} \mathcal{E}^c_{\text{settlement}} = \bigcup_{i \geq s + k} \mathcal{E}^c_{i\text{-settlement}} \end{equation} We now state a relation between balanced forks and settlement violation. \begin{restatable}[Settlement Violation and Balanced Forks]{lemma}{settleBalanceFork}\label{lem:settle_balance_fork} Suppose, in an execution, the settlement property for some $(s, k, \mathcal{I})$ is violated at slot $i$ (i.e., \emph{$\mathcal{E}^c_{i\text{-settlement}}$} occurs). Let \emph{$l = \mathsf{LatestHeard}_\mathcal{I}[i]$}. Then \emph{$ F \vdash \mathsf{CharString}[1:i]$} for some $(s, l)$-balanced fork $F.$ \end{restatable} The proof of this lemma is given in Appendix \ref{app:deterministic_lemmas}. 
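The string-level quantities $\mathsf{Reach}$ and $\mathsf{Margin}_s$ that connect such fork-based conditions to the characteristic string are straightforward to evaluate directly from the recursions \eqref{eq:reach_recursive} and \eqref{eq:margin_recursive}, without enumerating forks. The following Python sketch is purely illustrative and plays no role in the proofs; the symbol $\perp$ is encoded as the character \texttt{p}.

```python
# Illustrative sketch only: evaluates the Reach and Margin_s recursions
# (eq:reach_recursive and eq:margin_recursive) on a characteristic string
# over the alphabet {'p' (empty slot), '0' (special honest), '1' (adversarial)}.

def reach(w):
    r = 0
    for c in w:
        if c == '1':
            r += 1                 # adversarial slot increments Reach
        elif c == '0':
            r = max(r - 1, 0)      # special honest slot: (x)_+ truncation
        # 'p' (empty slot) leaves Reach unchanged
    return r

def margin(w, s):
    r = m = 0
    for i, c in enumerate(w, start=1):
        r_prev = r                 # Reach(w[1:i-1]) before this symbol
        if c == '1':
            r += 1
            if i >= s:
                m += 1
        elif c == '0':
            r = max(r - 1, 0)
            if i >= s and not (r_prev > 0 and m == 0):
                m -= 1             # otherwise Margin_s stays at zero
        if i < s:
            m = r                  # Margin_s equals Reach before slot s
    return m

w = '1p011001'
print(reach(w), margin(w, s=4))
```

For instance, on the string \texttt{1p011001} with $s = 4$, both quantities evaluate to $1$.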
\subsection{Balanced Forks and Margin} Lemma \ref{lem:settle_balance_fork} shows that settlement violations imply the existence of a balanced fork with respect to the characteristic string. We now derive an implication about the characteristic string alone. Towards this end, we first recall a lemma from \cite{blum2020combinatorics}. \begin{restatable}[from \cite{blum2020combinatorics}]{lemma}{balanceMargin}\label{lem:balanced_fork_mu} Let $i \in \mathbb{N}$, $w \in \ensuremath{\{\perp, 0, 1\}}{}^i$ and $s \in \mathbb{N}$ such that $s \leq i$. There exists an $s$-balanced fork $F \vdash w$ if and only if $\mathsf{Margin}_s(w) \geq 0$. \end{restatable} For completeness, we provide the proof in Appendix \ref{app:deterministic_lemmas}. The above lemma provides a characterization for the existence of $s$-balanced forks $F \vdash w$. However, we are interested in characterizing a more general form of balanced forks, i.e., $(s, l)$-balanced forks. We show that every $(s, l)$-balanced fork can be mapped to an $s$-balanced fork and vice-versa (Lemma \ref{lem:balanced_fork_equivalence}). First define a useful transformation on strings in $\ensuremath{\{\perp, 0, 1\}}{}^*$ that will be used in this lemma. \begin{definition}[$O_l(w)$]\label{def:observer_char_string} Let $i \in \mathbb{N}$, $w \in \ensuremath{\{\perp, 0, 1\}}{}^i$, and $l \in \mathbb{Z}_+$ such that $l \leq i$. Then $O_l(w) \in \{0, 1, \perp\}^i$ is a string obtained from $w$ by replacing each 0 in $w[l+1:i]$ by $\perp$. \end{definition} $O_l(\cdot)$ is a map from $\ensuremath{\{\perp, 0, 1\}}{}^* \rightarrow \ensuremath{\{\perp, 0, 1\}}{}^*$. It has the following interpretation. For any $i \in \mathbb{N}$, let $l = \mathsf{LatestHeard}_h[i]$. Then $O_l(\mathsf{CharString}[1:i])$ is effectively the characteristic string observed by the honest party $h$, assuming the adversary delays all messages maximally. 
Since $h$ has not heard the broadcasts from the special honest slots after $l$, those slots are seen as empty slots by $h$. Note that this interpretation works only by assuming a certain adversarial action; the adversary may choose to reveal blocks from special honest slots in $[l+1:i]$ if it so wishes. The notion of fork prefixes can be extended naturally to forks $F \vdash w$, $F' \vdash w'$, where $w' = O_l(w)$. Given $i \in \mathbb{N}, w \in \ensuremath{\{\perp, 0, 1\}}{}^i, F \vdash w$ and $l \leq i$, drop all blocks with labels in $\mathcal{N}_0(w[l+1:i])$ and their descendants to obtain $F'$. It can be verified that such an $F'$ satisfies the rules of a fork with respect to $w'$. Clearly, $F'$ is a sub-tree of $F$ and we therefore say $F' \sqsubseteq F$. \begin{restatable}{lemma}{equivalence}\label{lem:balanced_fork_equivalence} Let $i \in \mathbb{N}$, $w \in \ensuremath{\{\perp, 0, 1\}}{}^i$, $s \in \mathbb{N}$ such that $s \leq i$, and $l \in \mathbb{Z}_+$ such that $l \leq i$. Let $w' = O_l(w)$. There exists an $(s, l)$-balanced fork $F \vdash w$ if and only if there exists an $s$-balanced fork $F' \vdash w'$. \end{restatable} The proof of this lemma is given in Appendix \ref{app:deterministic_lemmas}. Combining Lemma \ref{lem:balanced_fork_equivalence} with Lemma \ref{lem:balanced_fork_mu} gives us the following corollary: \begin{lemma}\label{lem:balanced_fork_mu_2} Let $i \in \mathbb{N}$, $w \in \ensuremath{\{\perp, 0, 1\}}{}^i$, $s \in \mathbb{N}, s \leq i$ and $l \in \mathbb{Z}_+, l \leq i$ be given. Then $\exists \, (s, l)$-balanced fork $F \vdash w$ if and only if \emph{$\mathsf{Margin}_s(O_l(w)) \geq 0$}. \end{lemma} \begin{proof} By Lemma \ref{lem:balanced_fork_equivalence}, $\exists \, (s, l)$-balanced fork $F \vdash w$ if and only if $\exists \, s$-balanced fork $F' \vdash O_l(w)$. By Lemma \ref{lem:balanced_fork_mu}, $\exists \, s$-balanced fork $F' \vdash O_l(w)$ if and only if $\mathsf{Margin}_s(O_l(w)) \geq 0$.
Together, they imply $\exists \, (s, l)$-balanced fork $F \vdash w$ if and only if $\mathsf{Margin}_s(O_l(w)) \geq 0$. \end{proof} \subsection{Settlement and Margin} \label{sec:settlement_margin} Lemmas \ref{lem:settle_balance_fork} and \ref{lem:balanced_fork_mu_2} give the following necessary condition for violations of settlement: \begin{lemma}[Settlement Violation] \label{lem:settlement_violation_margin} If the settlement property with parameters $(s, k, \mathcal{I})$ is violated in an execution at slot $i$ (i.e., \emph{$\mathcal{E}^c_{i\text{-settlement}}$} occurs), then \emph{$\mathsf{Margin}_s(O_l(\mathsf{CharString}[1:i])) \geq 0$}, where \emph{$l = \mathsf{LatestHeard}_\mathcal{I}[i]$}. \end{lemma} \begin{proof} By Lemma \ref{lem:settle_balance_fork}, if the settlement property with parameters $(s, k, \mathcal{I})$ is violated at slot $i$, then there exists an $(s, l)$-balanced fork $F \vdash \mathsf{CharString}[1:i]$. By Lemma \ref{lem:balanced_fork_mu_2}, $\exists \, (s, l)$-balanced fork $F \vdash \mathsf{CharString}[1:i]$ if and only if $\mathsf{Margin}_s(O_l(\mathsf{CharString}[1:i])) \geq 0$. Thus, the statement of the lemma follows. \end{proof} The following lemma helps relate $\mathsf{Margin}_s(O_l(\mathsf{CharString}[1:i]))$ to $\mathsf{Margin}_s(\mathsf{CharString}[1:i])$: \begin{lemma}\label{lem:reach_margin_unheard_bound} Let $w \in \ensuremath{\{\perp, 0, 1\}}{}^\mathbb{N}$ and $l, s \in \mathbb{N}$. Then, for any $i \geq l$, \begin{align*} \mathsf{Reach}(O_l(w[1:i])) &= \mathsf{Reach}(w[1:l]) + N_1(w[l+1:i])\\ & \leq \mathsf{Reach}(w[1:i]) + N_0(w[l+1:i]) \\ \mathsf{Margin}_s(O_l(w[1:i])) &= \mathsf{Margin}_s(w[1:l]) + N_1(w[l+1:i]) \\ &\leq \mathsf{Margin}_s(w[1:i]) + N_0(w[l+1:i]) \end{align*} \end{lemma} \begin{proof} We prove the result for $\mathsf{Reach}$ by induction; the result for $\mathsf{Margin}_s$ can be proven in an identical fashion.
By re-arranging terms, the desired result can be restated as follows: \begin{align*} \mathsf{Reach}(O_l(w[1:i])) &= \mathsf{Reach}(w[1:l]) + N_1(w[l+1:i]) \\ \mathsf{Reach}(w[1:i]) &\geq \mathsf{Reach}(w[1:l]) + N_1(w[l+1:i])\iftoggle{arxiv}{}{\\ &~~~} - N_0(w[l+1:i]) \end{align*} For the base case with $i = l$, we observe that $O_l(w[1:l]) = w[1:l]$, which implies $\mathsf{Reach}(O_l(w[1:l])) = \mathsf{Reach}(w[1:l])$; this is precisely the desired statement with $i = l$. For any $i > l$, assume the desired statements hold for all $i' < i$. The key observation here is that for a fixed $l$, $\mathsf{Reach}(O_l(w[1:i]))$ satisfies \eqref{eq:reach_recursive}. This is because $O_l(w[1:i])$ is a string that is obtained by concatenating one additional symbol to $O_l(w[1:i-1])$. \begin{itemize} \item If $w[i] = \perp$, $\mathsf{Reach}(O_l(w[1:i])) = \mathsf{Reach}(O_l(w[1:i-1]))$ and $\mathsf{Reach}(w[1:i]) = \mathsf{Reach}(w[1:i-1])$. \item If $w[i] = 1$, $\mathsf{Reach}(O_l(w[1:i])) = \mathsf{Reach}(O_l(w[1:i-1])) + 1$ and $\mathsf{Reach}(w[1:i]) = \mathsf{Reach}(w[1:i-1]) + 1$. \item If $w[i] = 0$, $\mathsf{Reach}(O_l(w[1:i])) = \mathsf{Reach}(O_l(w[1:i-1]))$ and $\mathsf{Reach}(w[1:i]) = \mathsf{Reach}(w[1:i-1])$ or $\mathsf{Reach}(w[1:i]) = \mathsf{Reach}(w[1:i-1]) - 1$. We can therefore say $\mathsf{Reach}(w[1:i]) \geq \mathsf{Reach}(w[1:i-1]) - 1$. \end{itemize} (Crucially, these equations hold for $\mathsf{Margin}_s$ also, irrespective of the value of $s$.)
These equations can be summarized as: \begin{align*} \mathsf{Reach}(O_l(w[1:i])) &= \mathsf{Reach}(O_l(w[1:i-1])) + N_1(w[i]) \\ \mathsf{Reach}(w[1:i]) &\geq \mathsf{Reach}(w[1:i-1]) + N_1(w[i])\iftoggle{arxiv}{}{\\ &~~~}- N_0(w[i]) \end{align*} By the induction hypothesis, \begin{align*} \mathsf{Reach}(O_l(w[1:i-1])) &= \mathsf{Reach}(w[1:l]) + N_1(w[l+1:i-1]) \\ \mathsf{Reach}(w[1:i-1]) &\geq \mathsf{Reach}(w[1:l]) + N_1(w[l+1:i-1])\iftoggle{arxiv}{}{\\ &~~~} - N_0(w[l+1:i-1]) \end{align*} Combining these equations, we get the desired result. \end{proof} We now obtain the main lemma, which states a necessary condition for $\mathcal{A}$ to violate the settlement property. \begin{lemma}[Settlement Violation -- Necessary Condition]\label{lem:settlement_necessary} Suppose, in an execution, the settlement property with parameters $(s, k, \mathcal{I})$ is violated (i.e., \emph{$\mathcal{E}^c_{\text{settlement}}$} occurs). Then, for some $i \geq s+k$, \[\mathsf{Margin}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_\mathcal{I}[i] \geq 0.\] \end{lemma} \begin{proof} Suppose $\mathcal{E}^c_{\text{settlement}}$ occurs. Then, by \eqref{eq:settlement_splitting}, there exists $i \geq s + k$ such that $\mathcal{E}^c_{i\text{-settlement}}$ occurs. By Lemma \ref{lem:settlement_violation_margin}, $\mathsf{Margin}_s(O_l(\mathsf{CharString}[1:i])) \geq 0,$ where $l = \mathsf{LatestHeard}_\mathcal{I}[i].$ By Lemma \ref{lem:reach_margin_unheard_bound}, $\mathsf{Margin}_s(\mathsf{CharString}[1:i]) + N_0(\mathsf{CharString}[l+1:i]) \geq 0.$ Since $N_0(\mathsf{CharString}[l+1:i]) = \mathsf{Unheard}_\mathcal{I}[i],$ the result follows.
\end{proof} It is interesting to contrast Lemma \ref{lem:settlement_necessary} with the corresponding statement in \cite{blum2020combinatorics}, which is given below in our notation: \[\mathsf{Margin}_s(\mathsf{CharString}[1:i]) \geq 0 \text{ for some } i \geq s+k\] Clearly, the delay model places a more stringent condition on $\mathsf{Margin}_s[i]$ for settlement to hold. \subsection{Intensive Chain Quality}\label{sec:intensive_chain_quality} In this section, we derive a necessary condition for violations of intensive chain quality. Recall the definition of intensive chain quality with parameters $s, k, f, \mu,$ and $\mathcal{I}$ from Definition \ref{def:chain_quality_intensive}: this property holds if any chain held by an honest party in $\mathcal{I}$ after slot $s + k$ has at least a fraction $\mu$ of honest blocks from the interval $\{s+1, \ldots, s+k\}$. We shall work with a stronger property, by replacing honest blocks with special honest blocks. So given $s, k \in \mathbb{N}$, $f, \mu > 0$ and a set of honest parties $\mathcal{I},$ let $\mathcal{E}_{\text{cq}}$ be the event that $\mathcal{C}^h_i[s+1:s+k]$ contains more than $k \mu f$ special honest blocks for all $i \geq s + k$ and all $h \in \mathcal{I}.$ $\mathcal{E}_{\text{cq}}$ implies intensive chain quality with the same parameters. The main result of this section, Lemma \ref{lem:chain_quality_necessary}, gives a necessary condition for $\mathcal{E}_{\text{cq}}^c$, the event of an intensive chain quality violation, in terms of the characteristic string and related quantities. Section \ref{sec:reach_margin} defines $\mathsf{Reach}$ as a mapping from strings to $\mathbb{Z}_+.$ If the string is $\mathsf{CharString},$ let $\mathsf{Reach}$ denote the random process defined by $\mathsf{Reach}[s]=\mathsf{Reach}(\mathsf{CharString}[1:s]).$ For any slot $i \in \mathbb{N}$, let $\mathcal{C}^*_{i}$ denote the chain broadcast by the leader of the last special honest slot at or before slot $i$.
Since these chains must have strictly increasing lengths, \[|\mathcal{C}^*_{i_2}| \geq |\mathcal{C}^*_{i_1}| + N_0(\mathsf{CharString}[i_1+1:i_2]), \forall i_1 \leq i_2.\] The following lemma provides an upper bound on the length of any prefix of an honestly held chain. \begin{lemma} \label{lem:ICQ_lower} For any $h \in \mathcal{H}$, for any $i,s \in \mathbb{N},$ \[|\mathcal{C}^h_{i}[1:s]| \leq |\mathcal{C}^*_{s}| + \mathsf{Reach}[s].\] \end{lemma} \begin{proof} We first prove a more general result, stated for any (string, fork, tine) tuple. Let $i \in \mathbb{N}$, $w \in \ensuremath{\{\perp, 0, 1\}}{}^i$, a fork $F \vdash w$ and a tine $t \in F$ be given. Let $\Bar{F} \vdash w$ be the closure of $F$, and let $\Bar{t} \in \Bar{F}$ be the tine corresponding to $t$. Let $\hat{t}$ be the longest tine in $\Bar{F}$. Then, \begin{align} \text{length}(t) &\leq \text{length}(\hat{t}) + \text{reach}(\Bar{t}) \leq \text{length}(\hat{t}) + \mathsf{Reach}(\Bar{F}) \nonumber \\ & \leq \text{length}(\hat{t}) + \mathsf{Reach}(w) \label{eq:length_bound} \end{align} The first inequality follows from Definition \ref{def:gap_reserve_reach}, while the second and third follow from Definition \ref{def:max_reach}. To complete the proof of the lemma we explain why the claimed result is a special case of \eqref{eq:length_bound}. Let $w = \mathsf{CharString}[1:s]$, and let $F$ be the prefix of $\mathcal{F}_i$ obtained by dropping all blocks with label greater than $s$. Since $\mathcal{C}^h_{i}$ is a tine in $\mathcal{F}_i$, $\mathcal{C}^h_{i}[1:s]$ is a tine in $F$; denote it by $t$. Further, the longest tine in $\Bar{F}$ is the tine ending in the block labeled with the last special honest slot at or before $s$, which is precisely the tine $\mathcal{C}^*_s$. With this mapping, the desired inequality follows. 
\end{proof} We now define $\mathsf{Advantage}_s$ as follows: \begin{align} & \mathsf{Advantage}_s(\mathsf{CharString}[1:i]) \triangleq N_1(\mathsf{CharString}[s+1:i]) \iftoggle{arxiv}{}{\nonumber \\ & ~~~~~ }- N_0(\mathsf{CharString}[s+1:i]) + k f \mu + \mathsf{Reach}[s] \label{eq:def_advantage} \end{align} $\mathsf{Advantage}_s$ is used in the lemma below. \begin{lemma}[Intensive chain quality violation -- necessary condition] \label{lem:chain_quality_necessary} Suppose intensive chain quality with parameters $s, k, f, \mu$ and $\mathcal{I}$ is violated in an execution (i.e., $\mathcal{E}_{\text{cq}}^c$ occurs). Then, for some $i \geq s+k,$ \begin{align} \label{eq:CQV} \mathsf{Advantage}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_{\mathcal{I}}[i] \geq 0. \end{align} \end{lemma} \begin{proof} Consider an execution where $\mathcal{E}_{\text{cq}}^c$ occurs. There exist a slot $i \geq s + k$ and an honest party $h \in \mathcal{I}$ such that $N_0(\mathcal{C}^h_i[s+1:s+k]) \leq k \mu f.$ First, consider the case that $\mathsf{LatestHeard}_h[i] < s$. Then, \begin{align*} N_0(\mathsf{CharString}[s+1:i]) \leq \mathsf{Unheard}_h[i] \leq \mathsf{Unheard}_{\mathcal{I}}[i]. \end{align*} Combining this inequality with \eqref{eq:def_advantage} yields \eqref{eq:CQV}. Next, consider the case that $\mathsf{LatestHeard}_h[i] \geq s.$ Let $i^*$ be the largest integer such that $s+k \leq i^* \leq i$ and there are no special honest slots in $\mathcal{C}^h_i[s+k+1:i^*].$ We now show that \begin{align} \label{eq:lowerICQ} |\mathcal{C}^h_{i}[1:i^*]| \geq |\mathcal{C}^*_{s}| + N_0(\mathsf{CharString}[s+1:i^*]) - \mathsf{Unheard}_h[i^*]. \end{align} The proof of \eqref{eq:lowerICQ} is divided into the cases $i^*<i$ and $i^*=i.$ ($i^* < i$) Suppose $i^* < i.$ Then $i^*+1$ is a special honest slot and the message sent by the leader $h'$ of slot $i^*+1$, $\mathcal{C}^{h'}_{i^*+1}$, is a prefix of $\mathcal{C}^h_i$.
Therefore, $\mathcal{C}^{h'}_{i^*+1}[1:i^*] = \mathcal{C}^h_i[1:i^*].$ Since, by the end of slot $i^*$, $h'$ received messages sent by the leaders of all special honest slots in $[1:i^*],$ it follows that \begin{align*} |\mathcal{C}^h_i[1:i^*]| = |\mathcal{C}^{h'}_{i^*+1}[1:i^*] | \geq |\mathcal{C}^*_{s}| + N_0(\mathsf{CharString}[s+1:i^*]), \end{align*} which implies \eqref{eq:lowerICQ}. ($i^* = i$) Suppose $i^* = i.$ Let $l = \mathsf{LatestHeard}_h[i].$ Then $l\leq i$ and, by our prior assumption, $l \geq s.$ Therefore \begin{align*} |\mathcal{C}^h_{i}|& \geq |\mathcal{C}^*_{l}| \geq |\mathcal{C}^*_{s}| + N_0(\mathsf{CharString}[s+1:l]) \iftoggle{arxiv}{}{\\ &}= |\mathcal{C}^*_{s}| + N_0(\mathsf{CharString}[s+1:i]) - \mathsf{Unheard}_h[i], \end{align*} which, together with the fact $i=i^*$ (so $\mathcal{C}^h_{i} =\mathcal{C}^h_{i}[1:i^*]$), proves \eqref{eq:lowerICQ}. This completes the proof of \eqref{eq:lowerICQ} in either case. We now find an upper bound for $|\mathcal{C}^h_{i}[1:i^*]|.$ We know that: \begin{itemize} \item $|\mathcal{C}^h_i[1:s]| \leq |\mathcal{C}^*_{s}| + \mathsf{Reach}[s]$, by Lemma \ref{lem:ICQ_lower}. \item $|\mathcal{C}^h_i[s + 1:s + k]| \leq k \mu f + N_1(\mathsf{CharString}[s + 1:s + k])$, because, by assumption, at most $ k \mu f$ blocks in $\mathcal{C}^h_i[s + 1:s + k]$ are from special honest slots; the rest must have labels in $\mathcal{N}_1(\mathsf{CharString}[s + 1:s + k])$. \item $|\mathcal{C}^h_i[s + k + 1:i^*]| \leq N_1(\mathsf{CharString}[s + k + 1:i^*])+1$, because none of the blocks in $\mathcal{C}^h_i[s + k + 1:i^*]$ are from special honest slots.
\end{itemize} Together, we get \begin{align} |\mathcal{C}^h_{i}[1:i^*]| &= |\mathcal{C}^h_i[1:s]| + |\mathcal{C}^h_i[s + 1:s + k]| + |\mathcal{C}^h_i[s + k + 1:i^*]| \nonumber \\ &\leq |\mathcal{C}^*_{s}| + \mathsf{Reach}[s] + k f \mu + N_1(\mathsf{CharString}[s + 1:s + k])\iftoggle{arxiv}{}{ \nonumber \\ &~~~~~} + N_1(\mathsf{CharString}[s + k + 1:i^*]) \nonumber \\ &= |\mathcal{C}^*_{s}| + \mathsf{Reach}[s] + k f \mu + N_1(\mathsf{CharString}[s + 1:i^*]) \label{eq:upperICQ} \end{align} Combining \eqref{eq:def_advantage}, \eqref{eq:lowerICQ}, and \eqref{eq:upperICQ} yields \eqref{eq:CQV}. Thus, the lemma holds. \end{proof} \section{Proof Sketch of Theorem \ref{thm:main}}\label{sec:probabilistic_lemmas} We provide a proof sketch in this section and defer the full proof to Appendix \ref{app:probabilistic_lemmas}. The proof of Theorem \ref{thm:main} relies primarily on the properties of $\mathsf{CharString}$ (Lemmas \ref{lem:renewal_prop_CharString} and \ref{lem:CCS}) and the bounds on $\mathsf{Unheard}$ (Lemmas \ref{lem:dist_unheard} and \ref{lem:Uheard_line_bnd}). As Lemmas \ref{lem:settlement_necessary} and \ref{lem:chain_quality_necessary} provide necessary conditions for violations of settlement and chain quality, bounding their probabilities is sufficient to prove security. 
In other words, it suffices to prove the following two statements: \iftoggle{arxiv} { \begin{align*} \mathbb{P} \left( \mathsf{Margin}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_\mathcal{I}[i] \geq 0 \text{ for some } i \geq s+k \right) &\leq p_{\textsf{settlement}} + |\mathcal{I}|p_{\textsf{unheard}} \\ \mathbb{P}\left(\mathsf{Advantage}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_{\mathcal{I}}[i] \geq 0 \text{ for some } i \geq s+ k\right) &\leq p_{\textsf{CQ}} + |\mathcal{I}| \tilde{p}_{\textsf{unheard}} \end{align*} } { \begin{align*} \mathbb{P} \left( \begin{array}{c} \mathsf{Margin}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_\mathcal{I}[i] \geq 0 \\ \text{ for some } i\geq s+k \end{array} \right) \nonumber \\ \leq p_{\textsf{settlement}} + |\mathcal{I}|p_{\textsf{unheard}} \\ \mathbb{P}\left( \begin{array}{c} \mathsf{Advantage}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_{\mathcal{I}}[i] \geq 0 \\ \text{ for some } i \geq s+ k \end{array} \right) \nonumber \\ \leq p_{\textsf{CQ}} + |\mathcal{I}| \tilde{p}_{\textsf{unheard}} \end{align*} } In Appendix \ref{sec:time_reduction}, we derive events on the compressed time scale that are implied by the events on the left-hand sides of these two statements. Analyzing these new events is therefore sufficient to prove security. In Appendix \ref{sec:reach}, we use Lemma \ref{lem:renewal_prop_CharString} to show that $\mathsf{Reach}[s]$ is stochastically dominated by a geometric random variable. By Lemma \ref{lem:CCS}, $\mathsf{CompressedCharString}_s$ is (nearly) a Bernoulli process. The difference between the number of adversarial blocks and special honest blocks as a function of time behaves, therefore, like a random walk with negative drift. In Appendix \ref{sec:random_walk}, we bound such a process from above by an affine function with negative slope. 
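As a purely illustrative sanity check of this negative-drift picture (with hypothetical parameters; the simulation plays no role in the formal argument), $\mathsf{Reach}$ can be simulated as a reflected random walk that steps up on adversarial slots and down on special honest slots; when special honest slots are more frequent, its tail decays geometrically, matching the stochastic domination by a geometric random variable.

```python
# Illustrative Monte Carlo sketch with hypothetical parameters (not part of
# the formal argument). Reach evolves as a reflected random walk: up on an
# adversarial slot ('1'), down (truncated at 0) on a special honest slot
# ('0'), unchanged on an empty slot. With honest slots more likely, the
# walk has negative drift and its stationary tail decays geometrically.
import random

def simulate_reach(n_slots, p_one, p_zero, rng):
    r = 0
    for _ in range(n_slots):
        u = rng.random()
        if u < p_one:
            r += 1                  # adversarial slot
        elif u < p_one + p_zero:
            r = max(r - 1, 0)       # special honest slot
        # else: empty slot, Reach unchanged
    return r

rng = random.Random(0)
p_one, p_zero = 0.04, 0.06          # hypothetical rates; honest slots dominate
samples = [simulate_reach(2000, p_one, p_zero, rng) for _ in range(2000)]
empirical_tail = sum(x >= 5 for x in samples) / len(samples)
geometric_tail = (p_one / p_zero) ** 5   # heuristic for P(Reach >= 5)
print(empirical_tail, geometric_tail)
```

With the hypothetical rates above, the empirical tail probability of the event $\{\mathsf{Reach} \geq 5\}$ is close to the geometric heuristic $(p_1/p_0)^5$, consistent with the geometric domination used in the proof.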
The results of Appendices \ref{sec:reach} and \ref{sec:random_walk} translate into affine bounds for the compressed time scale analogues of $\mathsf{Margin}_s$ and $\mathsf{Advantage}_s$. In the case of $\mathsf{Margin}_s$, we extend a result of \cite{blum2020combinatorics}. In Appendices \ref{sec:margin} and \ref{sec:advantage}, we combine these affine bounds with Lemma \ref{lem:Uheard_line_bnd} to prove the desired statements on settlement and chain quality. \iftoggle{arxiv}{}{\clearpage} \section{Proof of Lemma \ref{lem:dist_unheard}} \label{app:lemma_unheard} \distUnheard* \begin{proof} Fix $i\geq 1.$ It is possible that $i$ itself is a special honest slot and $h$ has not heard it by slot $i$. In any case, $\mathsf{Unheard}_h[i] \leq 1 + N_0(\mathsf{CharString}[\mathsf{LatestHeard}_h[i]:i])$, i.e., $\mathsf{Unheard}_h[i]$ is less than or equal to one plus the number of consecutive special honest slots from strictly before slot $i$ that $h$ has not heard by slot $i$. The non-empty slots of $\mathsf{CharString}$ form both a Bernoulli process with parameter $f$ and a renewal process. Let $D_1, D_2, \ldots $ denote the lifetimes of the renewal process going backwards from slot $i$. Thus, $i-D_1 - \cdots - D_j$ is the $j\textsuperscript{th}$ non-empty slot of $\mathsf{CharString}$ (strictly) before $i$. The random variables $D_j$ are independent with the $\mathsf{geom}(f)$ distribution. The last special honest slot before slot $i$ must be at least $D_1$ slots before slot $i$, so the probability $h$ has heard that special honest slot is at least $q$.
In general, for $j\geq 1$, the $j\textsuperscript{th}$ from the last special honest slot before slot $i$ must be at least $D_j$ slots before slot $i.$ (Here $D_j$ is used as a lower bound on $D_1 + \cdots + D_j.$) Thus, no matter which of the last $j-1$ special honest slots before slot $i$ $h$ has heard, the probability $h$ hears the $j\textsuperscript{th}$ from last special honest slot before $i$ is at least $q.$ Therefore, $\mathsf{Unheard}_h[i]$ can be viewed as at most one plus the number of consecutive failures in a sequence of trials, such that each successive trial is successful with probability at least $q$. Thus, $\mathsf{Unheard}_h[i]$ is stochastically dominated by the $\mathsf{geom}(q)$ distribution, which is the conclusion of (a). The proof of (b) is similar. Fix $s\geq 1$ and $j\geq 1.$ By the nature of the same renewal process considered in the previous paragraph, the lifetime that begins at the last renewal point less than or equal to $s$ has the sampled lifetime distribution, equivalent to the sum of two $\mathsf{geom}(f)$ random variables minus one. This sampled lifetime distribution is stochastically greater than the typical lifetime distribution, $\mathsf{geom}(f).$ All the other lifetimes of the renewal process going forwards or backwards from $s$ have the $\mathsf{geom}(f)$ probability distribution. Thus, if we consider the renewal process from the perspective of slot $s + T^s_j,$ which is the $j\textsuperscript{th}$ renewal point after slot $s$, the $j\textsuperscript{th}$ lifetime going backwards has the sampled lifetime distribution and all the other lifetimes have the $\mathsf{geom}(f)$ distribution. Furthermore, these lifetimes are mutually independent. Thus, the same proof as in part (a), with $i$ there replaced by $s + T^s_j$, holds to prove (b).
\end{proof} \section{Proofs of Lemmas \ref{lem:settle_balance_fork}, \ref{lem:balanced_fork_mu}, and \ref{lem:balanced_fork_equivalence}} \label{app:deterministic_lemmas} \settleBalanceFork* \begin{proof} By equation \eqref{eq:def_i_settlement_complement}, we know that $\mathcal{E}^c_{i\text{-settlement}}$ implies one of two events. We show that the lemma holds in each case. Let us first consider the case where $\exists \ h_1, h_2 \in \mathcal{I}$ such that $\mathcal{C}^{h_1}_{i}[1:s] \neq \mathcal{C}^{h_2}_{i}[1:s]$. Both $\mathcal{C}^{h_1}_{i}$ and $\mathcal{C}^{h_2}_{i}$ are tines in $\mathcal{F}_i$. By the definition of $l$, both parties have heard of a special honest broadcast at slot $l$ or later. That is, $\mathsf{LatestHeard}_{h_1}[i] \geq l$ and $\mathsf{LatestHeard}_{h_2}[i] \geq l$. Therefore, both tines are $l$-viable. Finally, since these tines diverge at a block with label $< s$, they must have completely different blocks with labels (timestamps) $s$ onwards. Therefore these tines are $s$-disjoint ($t_1 \nsim_s t_2$). Together, we deduce that $\mathcal{F}_i \vdash \mathsf{CharString}[1:i]$ is an ($s, l$)-balanced fork. Now, consider the case that $\mathcal{C}^{h}_{i}[1:s] \neq \mathcal{C}^{h}_{i+1}[1:s]$ for some $h \in \mathcal{I}.$ Consider the fork $\mathcal{F}_{i+1} \vdash \mathsf{CharString}[1:i+1]$. Let $t_1$ and $t_2$ be the tines in $\mathcal{F}_{i+1}$ that represent the chains $\mathcal{C}^{h}_{i}$ and $\mathcal{C}^{h}_{i+1}$ respectively. Let $F$ be the directed tree obtained by dropping all blocks with label $i+1$ from $\mathcal{F}_{i+1}$. We now show that $F$ is an ($s, l$)-balanced fork. To prove this, we first note the following properties of $F$. \begin{itemize} \item $F \vdash \mathsf{CharString}[1:i]$. This follows from the construction of $F$ from $\mathcal{F}_{i+1}$ and $\mathcal{F}_{i+1} \vdash \mathsf{CharString}[1:i+1]$. 
\item $\mathcal{F}_i \sqsubseteq F \sqsubseteq \mathcal{F}_{i+1}$ ($F$ potentially contains some adversarial blocks not in $\mathcal{F}_i$). \item If a tine $t \in \mathcal{F}_i$ is $l$-viable in $\mathcal{F}_i$, then $t \in F$ is $l$-viable in $F$. \item If $t \in \mathcal{F}_{i+1}$, there is a corresponding tine $\Tilde{t} \in F$ that includes all but possibly the last block of $t$. This is because $t$ may contain at most one block with label $i+1$, which would be the only block not common between $\Tilde{t}$ and $t$. In particular, $\text{length}(\Tilde{t}) \geq \text{length}(t) - 1$. \end{itemize} We know that $t_1$ is an $l$-viable tine in $\mathcal{F}_i$, because it was held by an honest party in slot $i$ and $l \leq \mathsf{LatestHeard}_h[i]$. By the properties of $F$ above, $t_1$ is an $l$-viable tine in $F$. Further, there is a tine $\Tilde{t}_2 \in F$ corresponding to $t_2$. Since $t_2$ is a tine in $\mathcal{F}_{i+1}$ that is strictly longer than $t_1 \in \mathcal{F}_i$, the tine $\Tilde{t}_2 \in F$ must be at least as long as $t_1 \in F$. Therefore $\Tilde{t}_2$ is also an $l$-viable tine in $F$. Lastly, $t_1 \nsim_s t_2$, because they represent chains that diverge prior to slot $s$ (here, $t_1, t_2$ are tines in $\mathcal{F}_{i+1}$). Therefore, $t_1 \nsim_s \Tilde{t}_2$ in the fork $F$. Thus, $F \vdash \mathsf{CharString}[1:i]$ is an ($s, l$)-balanced fork. \end{proof} \balanceMargin* \begin{proof} (if) The proof relies on the definitions of margin, reach, reserve and gap (Definitions \ref{def:gap_reserve_reach} and \ref{def:margin}). Suppose $\mathsf{Margin}_s(w) \geq 0$. Then there exists a closed fork $\Bar{F} \vdash w$ such that $\mathsf{Margin}_s(\Bar{F}) \geq 0$. We shall construct $F \vdash w$ such that $\Bar{F} \sqsubseteq F$ and $F$ is $s$-balanced. Note that $\mathsf{Margin}_s(\Bar{F}) \geq 0$ implies $\Bar{F}$ has two tines $\Bar{t}_1$, $\Bar{t}_2$ such that $\Bar{t}_1 \nsim_s \Bar{t}_2$ and reach($\Bar{t}_j$) $\geq 0$, $j \in \{1, 2\}$.
(In what follows, any statement with subscript $j$ holds for $j \in \{1, 2\}$). It follows that reserve($\Bar{t}_j$) $\geq$ gap($\Bar{t}_j$). Recall that reserve($\Bar{t}_j$) is the number of adversarial slots in $w$ whose label is strictly greater than $\ell(\Bar{t}_j)$. This implies we can construct a fork $F \vdash w$ from $\Bar{F}$ by extending each tine $\Bar{t}_j$ by reserve($\Bar{t}_j$) adversarial blocks. Let $t_j$ denote the tine in $F$ extending $\Bar{t}_j$. Then $t_1 \nsim_s t_2$. By the definition of gap, tine $\Bar{t}_j$ is shorter than the longest tine in $\Bar{F}$ by gap($\Bar{t}_j$). Since reserve($\Bar{t}_j$) $\geq$ gap($\Bar{t}_j$), both tines $t_j$ are now at least as long as the longest tine in $\Bar{F}$. From the third observation made following Definition \ref{def:viable_tine}, and the fact that $\Bar{F}$ is the closure of $F$, we conclude that both $t_j$ are viable in $F$. Thus $F$ is an $s$-balanced fork. (only if) For this portion, we work with the definition of viable tines and $s$-balanced forks (Definitions \ref{def:viable_tine} and \ref{def:balanced_forks}). Let $F \vdash w$ be an $s$-balanced fork. Then there exist two tines $t_1, t_2 \in F$ such that $t_1 \nsim_s t_2$ and they are both viable. Let $\Bar{F}$ be the closure of $F$, and let $\Bar{t}_1, \Bar{t}_2$ be the trimmed versions of $t_1$ and $t_2$ in $\Bar{F}$. It is sufficient to show that $\Bar{t}_1 \nsim_s \Bar{t}_2$ and reach($\Bar{t}_1$), reach($\Bar{t}_2$) $\geq 0$. Together, these imply \begin{align*} \mathsf{Margin}_s(w) & \geq \mathsf{Margin}_s(\Bar{F}) = \max_{t' \nsim_s t''} \min \left\{\text{reach}(t'), \text{reach}(t'') \right\} \\ & \geq \min \{\text{reach}(\Bar{t}_1), \text{reach}(\Bar{t}_2) \} \geq 0. \end{align*} The first point, $\Bar{t}_1 \nsim_s \Bar{t}_2$, follows from the observation after Definition \ref{def:fork_prefixes}. We now show reach($\Bar{t}_j$) $\geq 0$, $j \in \{1, 2\}$.
First, we note that $\text{length}(t_j) \leq \text{length}(\Bar{t}_j) + \text{reserve}(\Bar{t}_j)$; this follows from the definition of reserve. Rearranging this inequality, we get $\text{reserve}(\Bar{t}_j) \geq \text{length}(t_j) - \text{length}(\Bar{t}_j)$. Second, let $t$ be the longest tine in $\Bar{F}$. By definition, $\text{gap}(\Bar{t}_j) = \text{length}(t) - \text{length}(\Bar{t}_j)$. Third, we note that $t$ is also the longest tine in $F$ that ends in a vertex with a label in $\mathcal{N}_0(w)$. By the definition of viability, $\text{length}(t_j) \geq \text{length}(t), j \in \{1, 2\}$. Putting these terms together, we get reach($\Bar{t}_j$) $=$ reserve($\Bar{t}_j$) $-$ gap($\Bar{t}_j$) $\geq \text{length}(t_j) - \text{length}(\Bar{t}_j) - (\text{length}(t) - \text{length}(\Bar{t}_j)) = \text{length}(t_j) - \text{length}(t) \geq 0$. \end{proof} \equivalence* \begin{proof} (if) Let $F' \vdash w'$. Let $t$ denote the longest tine in $F'$ ending at a block with label in $\mathcal{N}_0(w')$. We create a fork $F \vdash w$ by extending $t$ with a string of special honest nodes corresponding to slots in $\mathcal{N}_0(w[l+1:i])$. If $t'_1 \nsim_{s} t'_2$ are two viable tines in $F'$, then length($t'_j$) $\geq$ length($t$). Since $t$, $t'_1$, and $t'_2$ remain valid tines in $F$, these inequalities hold in $F$ as well. This implies $t'_1$, $t'_2$ are $l$-viable tines in $F$. The property $t'_1 \nsim_{s} t'_2$ trivially extends from $F'$ to $F$. Thus $F \vdash w$ is an $(s, l)$-balanced fork. (only if) Let $F \vdash w$ be an $(s, l)$-balanced fork. We know there exist tines $t_1$ and $t_2 \in F$ such that $t_1 \nsim_s t_2$ and both $t_1$ and $t_2$ are $l$-viable in $F$. Let $t$ be the longest tine in $F$ that ends at a block with a label in $\mathcal{N}_0(w[1:l])$ (such a tine is unique in $F$). Then $\text{length}(t_1)$, $\text{length}(t_2)$ $\geq$ $\text{length}(t)$.
Let $F' \vdash w'$ be a prefix of $F$, obtained by dropping all blocks with labels in $\mathcal{N}_0(w[l+1:i])$ and their descendants. Let $t'_1$ and $t'_2$ be the tines in $F'$ corresponding to $t_1$ and $t_2$. To show $F'$ is an $s$-balanced fork, it is sufficient to show that $t'_1 \nsim_s t'_2$ and $t'_1$ and $t'_2$ are viable tines in $F'$. The first point, $t'_1 \nsim_s t'_2$, follows from the first observation after Definition \ref{def:fork_prefixes}. Note that $t$ is the longest tine in $F'$ ending at a block with label in $\mathcal{N}_0(w')$. To establish viability, it suffices to show that length($t'_j$) $\geq$ length$(t)$ for $j \in \{1, 2\}$. Note that if any block from tine $t_j$ was dropped to obtain $t'_j$, it must have been at a depth strictly greater than length($t$). This is because any block with a label in $\mathcal{N}_0(w[l+1:i])$ must be at a depth strictly greater than length($t$), by the fifth property of forks (see Definition \ref{def:fork}). Therefore, length($t'_j$) $\geq$ length$(t)$ for $j \in \{1, 2\}$, which is what we wish to prove. 
\end{proof} \section{Proof of Theorem \ref{thm:main}}\label{app:probabilistic_lemmas} By Lemmas \ref{lem:settlement_necessary} and \ref{lem:chain_quality_necessary}, it suffices to prove the following: \iftoggle{arxiv} { \begin{align} \mathbb{P} \left( \mathsf{Margin}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_\mathcal{I}[i] \geq 0 \text{ for some } i \geq s+k \right) &\leq p_{\textsf{settlement}} + |\mathcal{I}|p_{\textsf{unheard}} \label{eq:bound_settlement} \\ \mathbb{P}\left(\mathsf{Advantage}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_{\mathcal{I}}[i] \geq 0 \text{ for some } i \geq s+ k\right) &\leq p_{\textsf{CQ}} + |\mathcal{I}| \tilde{p}_{\textsf{unheard}} \label{eq:bound_chain_quality} \end{align} } { \begin{align} \mathbb{P} \left( \begin{array}{c} \mathsf{Margin}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_\mathcal{I}[i] \geq 0 \\ \text{ for some } i\geq s+k \end{array} \right) \nonumber \\ \leq p_{\textsf{settlement}} + |\mathcal{I}|p_{\textsf{unheard}} \label{eq:bound_settlement}\\ \mathbb{P}\left( \begin{array}{c} \mathsf{Advantage}_s(\mathsf{CharString}[1:i]) + \mathsf{Unheard}_{\mathcal{I}}[i] \geq 0 \\ \text{ for some } i \geq s+ k \end{array} \right) \nonumber \\ \leq p_{\textsf{CQ}} + |\mathcal{I}| \tilde{p}_{\textsf{unheard}} \label{eq:bound_chain_quality} \end{align} } \subsection{Reduction to compressed time scale} \label{sec:time_reduction} We defined compressed time-scale processes in Section \ref{sec:compressed_time_scale}. In this section, we specify events on the compressed time scale implied by the events on the left-hand sides of \eqref{eq:bound_settlement} and \eqref{eq:bound_chain_quality}. First, we establish some notation. 
Recall that $\mathsf{Reach}$ denotes both a mapping of strings to $\mathbb{Z}_+$ and the random process $\mathsf{Reach}[i]=\mathsf{Reach} (\mathsf{CharString} [1:i]).$ We define similar random processes for $\mathsf{Margin}_s$ and $\mathsf{Advantage}_s.$ Fix $s\geq 1.$ Then $\mathsf{Margin}_s$ is a mapping of strings to $\mathbb{Z}$, where $s$ relates to $s$-disjoint tines. We now apply this mapping to $\mathsf{CharString}$ and define a random process with the same name: $$\mathsf{Margin}_s[i] \stackrel{\triangle}{=} \mathsf{Margin}_s(\mathsf{CharString}[1:i]).$$ We now define a random process on the compressed time scale based on $\mathsf{Margin}_s$ by using the same value $s$ for both the parameter in defining disjoint tines and the reference slot for the compressed process. Thus, $\mathsf{CompressedMargin}_s[0]=\mathsf{Margin}_s[s]$ and for $j\geq 1,$ \begin{align*} \mathsf{CompressedMargin}_s[j] \stackrel{\triangle}{=} \mathsf{Margin}_s(\mathsf{CharString}[1:s+T_j^s]). \end{align*} We define $\mathsf{Advantage}_s$ and $\mathsf{CompressedAdvantage}_s$ similarly. Finally, recall the processes $\mathsf{CompressedUnheard}_{h,s}$ and $\mathsf{CompressedUnheard}_{\mathcal{I},s}$ from Section \ref{sec:unheard}. Now, given $k' \geq 1$, consider the following three events: \begin{align*} F_0 & = \{ T^s_{k'} > k \} \\ F_1 & = \{ \mathsf{CompressedMargin}_s[j] + \mathsf{CompressedUnheard}_{\mathcal{I},s}[j] \geq 0 \iftoggle{arxiv}{}{\\& \hspace{10pt}} \text{ for some } j\geq k' \} \\ F_2 & = \{ \mathsf{CompressedAdvantage}_s[j] + \mathsf{CompressedUnheard}_{\mathcal{I},s}[j] \iftoggle{arxiv}{}{\\& \hspace{10pt}} \geq 0 \text{ for some } j \geq k' \} \end{align*} We claim that the event on the left-hand side of \eqref{eq:bound_settlement} implies $F_0 \cup F_1$. The event on the left-hand side of \eqref{eq:bound_settlement} implies that $\mathsf{Margin}_s[i'] +\mathsf{Unheard}_\mathcal{I}[i'] \geq 0$ for some $i' \geq s+k$.
The process $\mathsf{Margin}_s$ is constant over intervals of the form $[s+T^s_j: s+T^s_{j+1}-1]$ and the process $\mathsf{Unheard}_\mathcal{I}[i]$ is non-increasing over such intervals. So if $j'$ is such that $s+ T^s_{j'}$ is the last renewal time less than or equal to $i'$, then $\mathsf{CompressedMargin}_s[j']+\mathsf{CompressedUnheard}_{\mathcal{I},s}[j'] \geq 0.$ If $F_0$ does not hold, then $s+ T^s_{k'} \leq s+ k$, implying that $j' \geq k'$, and hence $F_1$ is true. This completes the proof of the claim. Similarly, the event on the left-hand side of \eqref{eq:bound_chain_quality} implies $F_0 \cup F_2.$ Thus, to prove \eqref{eq:bound_settlement} and \eqref{eq:bound_chain_quality}, it suffices to obtain upper bounds on $\mathbb{P}(F_0 \cup F_1)$ and $\mathbb{P}(F_0 \cup F_2),$ respectively. The following lemma will be used to help bound $\mathbb{P}(F_0).$ \begin{lemma} \label{lemma:time_scales} Suppose $k' = \lceil rkf \rceil$ for some $0 < r < 1.$ Then $\mathbb{P}(T^s_{k'} > k) \leq \exp(-kf(1-r)^2/2).$ \end{lemma} \begin{proof} Note that $\{T^s_{k'} > k\} = \{N(\mathsf{CharString}[s+1:s+k])\leq k'-1\},$ and $N(\mathsf{CharString}[s+1:s+k])$ has the binomial distribution with parameters $k$ and $f.$ Thus \begin{align*} \mathbb{P}(T^s_{k'} > k) & = \mathbb{P}(\mathsf{binom}(k,f) \leq k'-1) \\ &\leq \mathbb{P}(\mathsf{binom}(k,f) \leq rkf ) \leq \exp(-kf(1-r)^2/2), \end{align*} where we use the bound $\mathbb{P}(\mathsf{binom}(n,p) \leq rnp)\leq \exp(-np(1-r)^2/2).$ \end{proof} \subsection{On \textsf{Reach}}\label{sec:reach} In this section, we show that the marginal distribution of $\mathsf{Reach}$ is stochastically dominated by a geometric random variable. This result is used to bound both $\mathsf{Margin}_s$ and $\mathsf{Advantage}_s$ in later sections. Let $B$ denote the backwards residual lifetime process for the locations of the non-empty slots in $\mathsf{CharString},$ counting from zero.
In other words, $B_t = \min\{i\geq 0: \mathsf{CharString}[t-i]\neq \perp\}.$ \begin{lemma}\label{lem:reach_stationary} The process $(B_t, \mathsf{Reach}[t])$ is a discrete-time Markov process with equilibrium probability mass function given by $\pi(b,r) = f(1-f)^b \left(1-\frac{1-p}{p} \right) \left( \frac{1-p}{p}\right)^r.$ In other words, under the equilibrium distribution, $B_t$ is independent of $\mathsf{Reach}[t],$ $B_t$ has the $\mathsf{geom}(f)-1$ distribution, and $\mathsf{Reach}[t]$ has the $\mathsf{geom}\left(\frac{1-p}{p}\right)-1$ distribution. \end{lemma} \begin{proof} The Markov property follows from (i) the recursion \eqref{eq:reach_recursive} for determining $\mathsf{Reach}$ from $\mathsf{CharString}$ and (ii) the renewal structure of $\mathsf{CharString}$ described in Lemma \ref{lem:renewal_prop_CharString}. The nonzero transition probabilities out of any given state $(b,r) \in \mathbb{Z}_+^2$ are given by (with $F_b=\mathbb{P}(\Delta \leq b)$): \begin{align*} \mathbb{P}((b,r)\to(b+1,r)) &= 1-f \\ \mathbb{P}((b,r)\to(0,(r-1)_+))&=f\alpha F_b \\ \mathbb{P}((b,r)\to(0,r+1)) &= f(1-\alpha F_b) \end{align*} To verify $\pi$ is the equilibrium distribution, it suffices to check that if the state of the process at one time has distribution $\pi$, then in one step of the process, the probability of jumping out of any given state is equal to the probability of jumping into the state. For a state of the form $(b,r)$ with $b\geq 1$, the probability of jumping into the state is $\pi(b-1,r)(1-f),$ which is equal to $\pi(b,r),$ the probability of jumping out of the state. 
For a state of the form $(0,r)$ with $r\geq 1$, the probability of jumping into the state satisfies the following: \begin{align*} &\sum_{b=0}^\infty \pi(b,r-1)f(1-\alpha F_b) + \sum_{b=0}^\infty \pi(b,r+1)f\alpha F_b \\ &~~~ = \pi(0,r) \left[ \sum_{b=0}^\infty \frac{p}{1-p}(1-f)^b f(1-\alpha F_b) + \sum_{b=0}^\infty \frac{1-p}{p}(1-f)^b f\alpha F_b \right] \\ & = \pi(0,r), \end{align*} where we used the fact $\alpha \sum_{b=0}^\infty (1-f)^b f F_b = p.$ Thus, the probability of jumping into the state $(0,r)$ is equal to $\pi(0,r)$, which is the probability of jumping out of state $(0,r).$ It remains to show probabilities of jumping into and out of state (0,0) are the same, but that follows from the fact it is true for all other states. \end{proof} \begin{lemma}\label{lem:reach_dist_bound} For all integers $i \geq 0$, $\mathbb{P}(\mathsf{Reach}[i] \geq a) \leq \left(\frac{1-p}{p}\right)^a$ for all $a\in \mathbb{R}_+.$ \end{lemma} \begin{proof} By the renewal structure of $\mathsf{CharString}$ described in Lemma \ref{lem:renewal_prop_CharString}, the sequence of non-empty slots is a Bernoulli process with parameter $f$, so the distribution of $B_0$ is $\mathsf{geom}(f)-1.$ The initialization of $\mathsf{Reach}$ is $\mathsf{Reach}[0]=0.$ Consider a comparison system such that $\mathsf{Reach}[0]$ is a random variable independent of $\mathsf{CharString}$ with the $\mathsf{geom}\left(\frac{1-p}{p}\right)-1$ distribution. 
Then in the comparison system, $((B_t, \mathsf{Reach}[t]): t\geq 0)$ is a stationary Markov process, and in particular, $\mathsf{Reach}[t]$ has the $\mathsf{geom}\left(\frac{1-p}{p}\right)-1$ distribution for all $t.$ Note that, for $\mathsf{CharString}$ fixed, all the variables $((B_t, \mathsf{Reach}[t]): t\geq 0)$ are nondecreasing functions of the initial state $(B_0,\mathsf{Reach}[0])$, as can be readily shown by induction on $t.$ Since the actual initial state of the original system is no greater than the initial state of the comparison system, it follows that $\mathsf{Reach}[t]$ in the original system is stochastically dominated by the $\mathsf{geom}\left(\frac{1-p}{p}\right)-1$ distribution, as promised by the lemma. \end{proof} \subsection{A bound on a random walk}\label{sec:random_walk} Let $\textsf{W}$ denote a simple integer-valued random walk with drift $-\epsilon.$ In other words, $\textsf{W}[0] = 0$ and \begin{equation}\label{eq:random_walk_defn} \textsf{W}[j + 1] = \begin{cases} \textsf{W}[j] + 1 & \text{w.p. } \frac{1-\epsilon}{2} \\ \textsf{W}[j] - 1 & \text{w.p. } \frac{1+\epsilon}{2} \\ \end{cases} \end{equation} Here, $\epsilon$ can be any value in $[-1,1]$, but in our application, $0 < \epsilon < 1.$ The purpose of this section is to prove the following lemma. \begin{lemma}\label{lem:random_walk_bound} Let $W$ be the simple random walk defined in \eqref{eq:random_walk_defn}. For any $c < \epsilon$ and any $k \in \mathbb{N}$, \begin{align} \label{eq:to_prove} \mathbb{P}(W[j] \geq -cj \text{ for some } j \geq k) \leq 2\exp\left(-k(\epsilon - c)^2/3 \right) \end{align} \end{lemma} \begin{proof} Let $b>0,$ to be determined below. Observe that the event on the left-hand side of \eqref{eq:to_prove} is contained in $G_1\cup G_2$ where $G_1=\{W[k] \geq -ck - b\}$ and $G_2= \{ \max_{i \geq 0} (W[i+k] - W[k] + ci) \geq b\}.$ Since $W[k] + k\epsilon$ is the sum of $k$ i.i.d.
random variables with $0$ mean, each taking values in an interval of length two, Hoeffding's inequality implies that for any $\delta > 0$, $ \mathbb{P}\left(W[k] + k\epsilon \geq k \delta \right) \leq \exp(-k\delta^2/2)$. Setting $\delta = \epsilon - c -(b/k)$ yields $\mathbb{P}(G_1) \leq \exp(-k\delta^2/2).$ Let $Y$ be a random variable such that \begin{equation*} Y = \begin{cases} ~~ 1 + c & \text{ w.p. } \frac{1 - \epsilon}{2} \\ -1 + c & \text{ w.p. } \frac{1 + \epsilon}{2}, \end{cases} \end{equation*} and let $Y_1, Y_2, \ldots$ be i.i.d. copies of $Y$. Kingman's tail bound \cite{Kingman64} states that, for $\theta^* = \sup\{\theta > 0 : \mathbb{E}\left[e^{\theta Y}\right] \leq 1\},$ \begin{align*} \mathbb{P}\left(\max_{i \geq 0} \sum_{i' = 1}^{i} Y_{i'} \geq b \right) \leq e^{-\theta^* b }. \end{align*} To obtain a bound on $\theta^*,$ note that Hoeffding's lemma for bounded random variables \cite{Hoeffding} implies that $\mathbb{E}\left[e^{\theta (Y-(c-\epsilon))}\right] \leq e^{\theta^2/2}$. Taking $\theta=-2(c-\epsilon)$ shows that $\mathbb{E}\left[e^{2(\epsilon - c) Y}\right] \leq 1.$ Therefore $\theta^* \geq 2(\epsilon - c)$. Thus, for any $b \geq 0$, \begin{equation}\label{eq:kingman_bound} \mathbb{P}\left(\max_{i \geq 0} \sum_{i' = 1}^{i} Y_{i'} \geq b \right) \leq e^{-\theta^* b } \leq e^{-2(\epsilon - c) b}. \end{equation} For any $k \in \mathbb{N}$, we note that the random processes $(\sum_{i' = 1}^i Y_{i'}: i\geq 0)$ and $(W[i + k] - W[k] + ci : i \geq 0)$ have the same distribution. Therefore, \eqref{eq:kingman_bound} implies $\mathbb{P}(G_2) \leq \exp\left(-2(\epsilon - c) b\right)$. Thus $\mathbb{P}(G_1\cup G_2) \leq \exp\left(-k\delta^2/2\right) + \exp\left(-2(\epsilon - c) b\right).$ Setting $b = k(\epsilon - c)\left(1-\sqrt{\frac 2 3}\right)$ gives $\delta^2/2 = (\epsilon - c)^2/3$ and using $2\left(1-\sqrt{\frac 2 3} \right) \geq 0.367$ yields \begin{align*} \mathbb{P}(G_1\cup G_2)& \leq \exp\left(-k(\epsilon - c)^2/3\right) + \exp\left(-k(0.367)(\epsilon - c)^2 \right) \\ &\leq 2\exp\left(-k(\epsilon - c)^2/3\right) \end{align*} which proves the lemma. \end{proof} \subsection{On \textsf{Margin} and proof of settlement bound} \label{sec:margin} We prove bound \eqref{eq:main_settlement_bnd} in this section. By Section \ref{sec:time_reduction}, it suffices to prove $\mathbb{P}(F_0\cup F_1) \leq p_{\textsf{settlement}} + |\mathcal{I}| p_{\textsf{unheard}}.$ Recall that $\mathsf{Unheard}_{\mathcal{I}}$ is the maximum over the $|\mathcal{I}|$ processes $\mathsf{Unheard}_h$ with $h\in\mathcal{I}.$ It thus suffices to prove the following bounds, where $c$ is a constant determined below such that $0 < c < \epsilon$, $k' = \lceil 3kf/4 \rceil,$ and $h$ denotes an arbitrary special honest user.
\begin{align} \iftoggle{arxiv}{}{&}\mathbb{P}(T^s_{k'} > k ) + \mathbb{P}(\mathsf{CompressedMargin}_s[j] \geq -cj + \frac{ck'} 2 \text{ for some } j \geq k') \iftoggle{arxiv}{}{\nonumber \\} & \leq p_{\textsf{settlement}} \label{eq:bound_margin} \\ \iftoggle{arxiv}{}{&}\mathbb{P}(\mathsf{CompressedUnheard}_{h,s}[j] \geq \iftoggle{arxiv}{}{~~~} cj - ck'/2 \text{ for some } j \geq k')\iftoggle{arxiv}{}{\nonumber \\} &\leq p_{\textsf{unheard}} \label{eq:bound_unheard} \end{align} Lemma \ref{lemma:time_scales} with $r=3/4$ yields that $ \mathbb{P}(T^s_{k'} > k ) \leq \exp(-kf/32).$ The recursions \eqref{eq:reach_recursive} and \eqref{eq:margin_recursive} imply $\mathsf{Margin}_s[i] = \mathsf{Reach}[i]$ for $1 \leq i \leq s-1,$ and $\mathsf{Margin}_s[i] \leq \mathsf{Reach}[i]$ for all $i\geq s.$ In particular, $\mathsf{CompressedMargin}_s[0] \leq \mathsf{CompressedReach}_s[0] = \mathsf{Reach}[s]$. The following lemma is adapted from \cite{blum2020combinatorics}: \begin{lemma}\label{lem:margin_ineq2} For any $s, k \in \mathbb{N}$, \begin{equation}\label{eq:margin_ineq2} \mathbb{P}(\mathsf{CompressedMargin}_s[j] \geq 0 \text{ for some } j \geq k) \leq \exp{(-k \epsilon^3/3)} \end{equation} \end{lemma} \begin{proof} The lemma is a slight modification of the first corollary at the beginning of Section 6 of \cite{blum2020combinatorics}, which in turn is based on the theorem in that section. We explain why these results can be adapted to our model, and some differences in the form of the bound on the right-hand side of \eqref{eq:margin_ineq2}. In \cite{blum2020combinatorics}, these results are stated for the quantity $\mu_x(y)$, which roughly maps to the quantity $\mathsf{CompressedMargin}_s[j]$. 
A subtle difference between the two quantities is that $\mathsf{CompressedMargin}_s[j]$ is a metric concerning tines diverging prior to a reference slot $s$ on the \textit{original time scale}, whereas $\mu_x(y)$ corresponds to a reference slot $|x|$ on the \textit{compressed time scale}. Nevertheless, $(\mathsf{CompressedReach}_s[j], \mathsf{CompressedMargin}_s[j])$ satisfy the same recursions as $(\rho(xy), \mu_x(y))$ (see \eqref{eq:margin_recursive} and the two Lemmas in Section 5 of \cite{blum2020combinatorics}), and are `driven' by a $\{0, 1\}$ valued process satisfying the $\epsilon$-martingale condition. Moreover, the initial values satisfy: $\mathsf{CompressedMargin}_s[0] \leq \mathsf{CompressedReach}_s[0] = \mathsf{Reach}[s] \preceq R^*$; the same holds for $(\rho(x), \mu_x(\varepsilon))$ (see Lemmas in Section 5 and 6.2 of \cite{blum2020combinatorics}). The proof of the theorem of \cite{blum2020combinatorics} depends only on the fact that $(\rho(xy), \mu_x(y))$ satisfy these properties, and thus can be adapted to $\mathsf{CompressedMargin}_s$ as well. The right-hand side of the inequality of the result in \cite{blum2020combinatorics} (the corollary) is stated as $O(1) \exp(-\Omega(k))$, while we use the expression $\exp(-\epsilon^3k/3)$. The difference in the expression comes from two factors. Firstly, the proof of the theorem in Section 6 involves analyzing the (random) time after which $\mu_x(\cdot)$ is negative forever. Put differently, the proof of the theorem actually proves the stronger statement of the corollary in \cite{blum2020combinatorics}. Thus, the bound presented in the corollary can be obtained without a union bound argument. We therefore omit the factor of $O(1)$ of \cite{blum2020combinatorics}. Secondly, the proofs in Section 6 of \cite{blum2020combinatorics} provide exact expressions for the constants in the error exponent. 
In particular, any bound of the form $\exp(-ak)$ can be used, if $a$ satisfies \[1 + a < \sqrt{\frac{1}{1+\epsilon}\left(\frac{2}{\sqrt{1-\epsilon^2}} - \frac{1}{1+\epsilon}\right)}\] The proof of \cite{blum2020combinatorics} concludes with the expression $a = \epsilon^3(1 - O(\epsilon))/2$; however, it can be analytically verified, using the Maclaurin series for $1/\sqrt{1-\epsilon^2}$ and $1/(1+\epsilon)$, that $a = \epsilon^3/3$ satisfies the above inequality for all $\epsilon \in (0, 1)$. \end{proof} Let $T$ be a stopping time with respect to $\mathsf{CompressedMargin}_s$, defined as follows: \begin{equation}\label{eq:margin_stoppingtime} T = \min\{j \geq k: \mathsf{CompressedMargin}_s[j] \geq 0\}, \end{equation} with the convention that the minimum of the empty set is $\infty.$ Therefore $T < \infty$ is equivalent to the event $\mathsf{CompressedMargin}_s[j] \geq 0$ for some $j \geq k$. From Lemma \ref{lem:margin_ineq2}, $\mathbb{P}(T < \infty) \leq \exp(-k \epsilon^3/3).$ Over the period $[k, T)$, the behavior of $\mathsf{CompressedMargin}_s$ is identical to that of the simple random walk $\textsf{W}$ defined in \eqref{eq:random_walk_defn}. More precisely, writing $N_0-N_1(w)$ as short for $N_0(w) - N_1(w)$, for any $j \in \{k, \ldots, T\}$, \begin{align*} & \mathsf{CompressedMargin}_s[j] - \mathsf{CompressedMargin}_s[k] \iftoggle{arxiv}{}{\\ &} = N_0 - N_1 (\mathsf{CompressedCharString}_s[k+1:j]) \end{align*} and, as random processes, \begin{align*} N_0-N_1 (\mathsf{CompressedCharString}_s[k+1:j]) \stackrel{d.}{=} (W[j-k]: j\geq k). \end{align*} If $T = \infty$, $\mathsf{CompressedMargin}_s[k] < 0$. 
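The final claim in the proof of Lemma \ref{lem:margin_ineq2} above, that $a = \epsilon^3/3$ satisfies the displayed inequality for all $\epsilon \in (0,1)$, can also be spot-checked numerically. The sketch below (the grid resolution is an arbitrary choice) evaluates both sides of the inequality:

```python
import math

# Numerical spot check (illustrative only): verify that
#   1 + eps^3/3 < sqrt( (1/(1+eps)) * ( 2/sqrt(1-eps^2) - 1/(1+eps) ) )
# on a grid of eps values in (0, 1). The analytic argument uses the
# Maclaurin series of 1/sqrt(1-eps^2) and 1/(1+eps).

def rhs(eps):
    inner = (1.0 / (1.0 + eps)) * (2.0 / math.sqrt(1.0 - eps * eps) - 1.0 / (1.0 + eps))
    return math.sqrt(inner)

holds = all(1.0 + eps**3 / 3.0 < rhs(eps) for eps in (i / 1000 for i in range(1, 1000)))
print(holds)  # True on this grid
```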
Putting the above facts together, we get the following result due to the union bound and Lemma \ref{lem:random_walk_bound}: \begin{align*} &\mathbb{P}(\mathsf{CompressedMargin}_s[j] \geq -c(j - k) \text{ for some } j \geq 2k) \\ &~~~~~~ \leq \mathbb{P}(T < \infty) + \mathbb{P}(W[j-k] \geq -c(j - k) \text{ for some } j \geq 2k)\\ &~~~~~~ \leq \exp{(-k\epsilon^3/3)} + 2\exp{(-k(\epsilon - c)^2/3)} \end{align*} Replacing $k$ by $k'/2$ in the above equation yields: \begin{align*} &\mathbb{P}(\mathsf{CompressedMargin}_s[j] \geq -cj + ck'/2 \text{ for some } j \geq k') \iftoggle{arxiv}{\leq}{\\ \leq &}\exp{(-k' \epsilon^3/6)} + 2\exp{(-k'(\epsilon - c)^2/6)} \end{align*} By Lemma \ref{lem:Uheard_line_bnd}, \begin{align*} &\mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq cj - \frac{ ck'} 2 \text{ for some } j \geq k') \\ = &\mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq \frac{ ck'} 2 + c(j-k') \text{ for some } j \geq k') \\ \leq &\left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-k'cq/2) \end{align*} Combining the bounds in this section shows that \eqref{eq:bound_margin} and \eqref{eq:bound_unheard} and thus also \eqref{eq:main_settlement_bnd} hold if \begin{align*} \exp(-kf/32) + \exp{(-k' \epsilon^3/6)} + 2\exp{(-k'(\epsilon - c)^2/6)} & \leq p_{\textsf{settlement}} \\ \left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-k'cq/2) & \leq p_{\textsf{unheard}} \end{align*} for some choice of $c.$ Let $c=\epsilon/2$ and use the fact $k' \geq 3kf/4$ to get that the following is sufficient. 
\begin{align*} \exp(-kf/32) + \exp{(-kf \epsilon^3/12)} + 2\exp{(-kf\epsilon^2/32)} & \leq p_{\textsf{settlement}} \\ \left[\frac{1}{(1-q)(1-(1-q)^{\epsilon/2})}\right] \exp(-kf \epsilon q/8) & \leq p_{\textsf{unheard}} \end{align*} Also, $q \geq p >0.5.$ Thus, \eqref{eq:main_settlement_bnd} holds for \begin{align*} p_{\textsf{settlement}} &= \exp{(-kf \epsilon^3/12)} + 3\exp{(-kf\epsilon^2/32)} \\ p_{\textsf{unheard}} & = \left[\frac{2}{ 1-(1/2)^{\epsilon/2}}\right] \exp(-k f \epsilon/16) \end{align*} \subsection{On \textsf{Advantage} and proof of chain quality bound} \label{sec:advantage} We prove bound \eqref{eq:main_quality_bnd} in this section. By Section \ref{sec:time_reduction}, it suffices to prove $\mathbb{P}(F_0\cup F_2) \leq p_{\textsf{CQ}} + |\mathcal{I}| \Tilde{p}_{\textsf{unheard}}.$ Let $\gamma, r,$ and $c$ be positive constants, to be specified below, such that $\gamma+\mu < cr < c < \epsilon.$ We use the fact that $\mathsf{Unheard}_{\mathcal{I}}$ is the maximum over the $|\mathcal{I}|$ processes $\mathsf{Unheard}_h$ with $h\in\mathcal{I}.$ It suffices to prove the following bounds, where $k' = \lceil rkf \rceil,$ and $h$ denotes an arbitrary special honest user. 
\begin{align} \iftoggle{arxiv}{}{&} \mathbb{P}(T^s_{k'} > k ) \iftoggle{arxiv}{}{\nonumber \\} + \iftoggle{arxiv}{}{&} \mathbb{P}(\mathsf{CompressedAdvantage}_s[j] \geq -cj + kf (\gamma + \mu) \text{ for some } j \geq k') \iftoggle{arxiv}{}{\nonumber \\ }& \leq p_{\textsf{CQ}} \label{eq:bound_advantage} \\ \iftoggle{arxiv}{}{&}\mathbb{P}(\mathsf{CompressedUnheard}_{h,s}[j] \geq \iftoggle{arxiv}{}{~~~} cj - kf (\gamma + \mu) \text{ for some } j \geq k') \iftoggle{arxiv}{}{ \nonumber \\} &\leq \Tilde{p}_{\textsf{unheard}} \label{eq:bound_unheard_a} \end{align} Lemma \ref{lemma:time_scales} shows that $ \mathbb{P}(T^s_{k'} > k ) \leq \exp(-kf(1-r)^2/2).$ Next, note that \iftoggle{arxiv} { \[\mathsf{CompressedAdvantage}_s[j] = N_0 - N_1(\mathsf{CompressedCharString}_s[1:j]) + kf \mu + \mathsf{Reach}[s].\] } { $ \mathsf{CompressedAdvantage}_s[j]$ is equal to: \[N_0 - N_1(\mathsf{CompressedCharString}_s[1:j]) + kf \mu + \mathsf{Reach}[s].\] } Therefore, \begin{align*} &\mathbb{P}(\mathsf{CompressedAdvantage}_s[j] \geq -cj + kf (\gamma + \mu) \text{ for some } j \geq k') \\ \leq & \mathbb{P}(N_0 - N_1(\mathsf{CompressedCharString}_s[1:j]) \geq - cj \text{ for some } j \geq k') \iftoggle{arxiv}{}{\\ &~~~~~~ } + \mathbb{P}(\mathsf{Reach}[s] \geq kf \gamma ) \end{align*} Lemma \ref{lem:CCS} implies that for any $s \in \mathbb{N}$ and any $j \geq 2$, \[N_0 - N_1(\mathsf{CompressedCharString}_s[2:j]) \stackrel{d.}{=} \textsf{W}[j] - \textsf{W}[1] \] where $\textsf{W}$ is a simple random walk as defined in \eqref{eq:random_walk_defn}. 
Moreover, $N_0(\mathsf{CompressedCharString}_s[1:j]) - N_1(\mathsf{CompressedCharString}_s[1:j])$ is stochastically dominated by $\textsf{W}$, because its value at $j=1$ is minus one with probability greater than $p.$ By Lemma \ref{lem:random_walk_bound}, \iftoggle{arxiv} { \begin{equation*} \mathbb{P}(N_0 - N_1(\mathsf{CompressedCharString}_s[1:j]) \geq - cj \text{ for some } j \geq k') \leq 2\exp(-k'(\epsilon - c)^2/3) \end{equation*} } { \begin{multline*} \mathbb{P}(N_0 - N_1(\mathsf{CompressedCharString}_s[1:j]) \geq - cj \text{ for some } j \geq k') \\ \leq 2\exp(-k'(\epsilon - c)^2/3) \end{multline*} } We next bound $\mathsf{Reach}[s]$ as follows: \begin{align*} \mathbb{P}(\mathsf{Reach}[s] \geq kf \gamma ) &\leq \left(\frac{1 - p}{p}\right)^{kf \gamma } \quad \text{by Lemma \ref{lem:reach_dist_bound}} \\ &= \left(\frac{1 + \epsilon}{1 - \epsilon}\right)^{-kf \gamma}\quad \text{by Definition \ref{def:eps_honest_maj}} \\ &\leq \exp(-2kf \gamma \epsilon), \end{align*} where the final step follows because $\log((1+\epsilon)/(1-\epsilon)) \geq 2 \epsilon$ for $\epsilon \in [0, 1).$ By Lemma \ref{lem:Uheard_line_bnd}, \iftoggle{arxiv} { \begin{align*} &\mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq cj - kf (\gamma + \mu) \text{ for some } j \geq k') \\ = &\mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq kf(cr-\gamma-\mu) + c(j-k') \text{ for some } j \geq k') \\ \leq &\left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-kf(cr-\gamma-\mu)q) \end{align*} } { \begin{align*} &\mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq cj - kf (\gamma + \mu) \text{ for some } j \geq k') \\ &= \mathbb{P}(\textsf{CompressedUnheard}_{h,s}[j] \geq kf(cr-\gamma-\mu) + c(j-k') \\ &\hspace{1in} \text{ for some } j \geq k') \\ & \leq \left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-kf(cr-\gamma-\mu)q) \end{align*} } Combining the bounds in this section shows that \eqref{eq:bound_advantage} and \eqref{eq:bound_unheard_a} and thus also \eqref{eq:main_quality_bnd} hold if \begin{align*} &
\exp(-kf(1-r)^2/2) + 2\exp(-k'(\epsilon - c)^2/3) + \exp(-2kf \epsilon \gamma ) \leq p_{\textsf{CQ}} \\ & \left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-kf(cr-\gamma - \mu)q) \leq \tilde{p}_{\textsf{unheard}} \end{align*} for some choice of $\gamma$, $c$, and $r.$ Select these constants so that the five values, $\mu, \gamma+ \mu, cr, c, \epsilon,$ form an arithmetic sequence, i.e. the consecutive values each differ by $\gamma = \frac{\epsilon - \mu} 4.$ Observe that $1 - r = 1 - \frac{c - \gamma } c = \gamma/c \geq \gamma$ and use the fact $k' \geq rk.$ So it is sufficient that: \begin{align*} & \exp(-kf \gamma^2/2) + 2\exp(-kf \gamma^2 /3) + \exp(-2kf \gamma \epsilon ) \leq p_{\textsf{CQ}} \\ & \left[\frac{1}{(1-q)(1-(1-q)^c)}\right] \exp(-kf \gamma q) \leq \tilde{p}_{\textsf{unheard}} \end{align*} Also, $q \geq p >0.5,$ $c\geq \epsilon/2,$ and $\gamma = \frac{\epsilon - \mu} 4$ can be used. Thus, \eqref{eq:main_quality_bnd} holds for \begin{align*} p_{\textsf{CQ}} &= 4 \exp(-kf (\epsilon-\mu)^2 /48) \\ \tilde{p}_{\textsf{unheard}} & = \left[\frac{2}{ 1-(1/2)^{\epsilon/2}}\right] \exp(-kf(\epsilon- \mu) /8) \end{align*}
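To illustrate how these closed-form expressions behave, the sketch below evaluates $p_{\textsf{settlement}}$, $p_{\textsf{unheard}}$, $p_{\textsf{CQ}}$, and $\tilde{p}_{\textsf{unheard}}$ numerically. The parameter values for $k$, $f$, $\epsilon$, and $\mu$ are hypothetical choices made only for illustration, not values taken from the analysis:

```python
import math

# Evaluate the failure-probability bounds derived above. All parameter
# values here are hypothetical, chosen only to illustrate the exponential
# decay in the confirmation depth k.

def settlement_bounds(k, f, eps):
    p_settlement = math.exp(-k * f * eps**3 / 12) + 3 * math.exp(-k * f * eps**2 / 32)
    p_unheard = (2 / (1 - 0.5 ** (eps / 2))) * math.exp(-k * f * eps / 16)
    return p_settlement, p_unheard

def chain_quality_bounds(k, f, eps, mu):
    p_cq = 4 * math.exp(-k * f * (eps - mu) ** 2 / 48)
    p_unheard_tilde = (2 / (1 - 0.5 ** (eps / 2))) * math.exp(-k * f * (eps - mu) / 8)
    return p_cq, p_unheard_tilde

k, f, eps, mu = 50_000, 0.05, 0.4, 0.1  # hypothetical parameters
ps, pu = settlement_bounds(k, f, eps)
pcq, put = chain_quality_bounds(k, f, eps, mu)
print(f"p_settlement <= {ps:.3e}  p_unheard <= {pu:.3e}")
print(f"p_CQ         <= {pcq:.3e}  p_unheard~ <= {put:.3e}")
```

Since every term decays exponentially in $kf$, doubling the confirmation depth $k$ roughly squares each exponential factor in the bounds.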
\section{Introduction} Conventional single-reference coupled cluster theory provides a robust and systematically improvable treatment of electron correlation effects.\cite{Coester1958,Cizek1966,Cizek1971,Paldus1972,Bartlett1981,MEST,MBM,Bartlett2007} The wave function ansatz features an exponential cluster operator that ensures size-extensivity, while the straightforward truncation scheme of the cluster operator based on the excitation level yields an ordered hierarchy of approximations that converges toward the full configuration interaction limit. The coupled cluster model in its standard single-reference formulation is one of the most accurate tools for describing dynamical electron correlation, but it fails when the electronic system under study has multireference character. In such cases, the hierarchy of approximations breaks down and the truncation of the cluster operator provides incorrect approximations to the exact electronic wave function with unphysical coupling between cluster amplitudes.\cite{paldus1999,cc-piecuch,cc-scuseria} One possible remedy dedicated to capturing strong electron correlation effects is the family of externally corrected or tailored coupled cluster methods\cite{externally_corrected_cc, externally_corrected_cc_2,externally_corrected_cc_3, externally_corrected_cc_4, externally_corrected_cc_5, externally_corrected_cc_6, externally_corrected_cc_8, externally_corrected_cc_9, dmrg-tcc-2016, dmrg-tcc-2019, dmrg-tcc-2020}. In this methodology, a subset of cluster amplitudes is extracted from an external model that guarantees the proper description of the multireference nature of the molecular system under study. Popular wave function approaches from which to extract such external coupled cluster amplitudes include the multireference configuration interaction (MRCI) and complete active space self-consistent field (CASSCF) methods. However, these approaches are computationally infeasible for large molecules and preclude black-box computational setups.
As an alternative to conventional electronic structure methods, the density matrix renormalization group (DMRG) algorithm\cite{white, dmrg-4, dmrg-5, dmrg-6, dmrg-7, dmrg-8, dmrg-9} and various geminal-based approaches\cite{geminal1971, geminal1972, gvb-1973, surjan1999, apsg-2002, geminals2007, surjan2012, pccd-2013-limacher, scuseria2013, pccd-2014-prb, pccd-2014-jcp, pccd-2014-jpca, pccd-2014-jctc, pccd-2014-prc, pccd-2014-stein, seniority-cc-2014, apsg-pt-2014, bytautas2015, geminals_lcc_2015, pccd-2015, erpa-2016, geminals-2016, Nowak2019, brzek2019} offer a computationally less complex way to model strongly-correlated electrons. The DMRG algorithm represents a computationally efficient method to optimize matrix product state (MPS) wave functions. The evaluation of the electronic energy scales only polynomially with system size. Therefore, DMRG allows us to efficiently handle large active orbital spaces. Due to its favorable computational scaling, the DMRG algorithm has found numerous applications in the study of strongly-correlated systems, including transition metal-\cite{marti2008, cr2_2011, fenoDMRG, kurashige2013, Corinne_2015, Zhao2015, freitag2015, freitag2015errata} or actinide-containing molecules.~\cite{cuo_dmrg, boguslawski2017, ola-neptunyl-cci-2018} Despite its computational efficiency, DMRG still requires us to select active space orbitals.
This can be done efficiently by exploiting, for instance, tools based on quantum information theory\cite{Ziesche1995, legeza2006, rissler2006, qi-2012, dmrg-2013, vedral2014, dmrg-2015, freitag2015, freitag2015errata, dmrg-2015-ijqc,dmrg-2016, Schilling2016, autocas2016, autocas2016-2, ijqc-eratum, autocas2017, boguslawski2017,ding2020, ding2021}, fractional occupancies of unrestricted natural orbitals\cite{unocas1989, unocas1998}, or high-spin-state unrestricted Hartree--Fock (UHF) natural orbitals.~\cite{abccas2018,abccas2019} In this work, we benchmark various coupled cluster singles and doubles models tailored by unconventional electronic structure approaches. Specifically, we focus on orbital-optimized pair coupled cluster doubles (pCCD)~\cite{pccd-2014-prb,pccd-2014-stein} and DMRG wave functions. The optimization of these wave functions scales only polynomially with system size. Hence, these approaches provide an efficient way to capture strong electron correlation effects (within an active orbital space in the case of DMRG). Most importantly, both methods allow for an accurate description of static/nondynamic electron correlation effects~\cite{qi-2012,dmrg-2016} and thus represent a promising choice for reference coupled cluster amplitudes in externally corrected coupled cluster methods. Furthermore, the quality of DMRG calculations is affected by the type of molecular orbitals used in the active orbital space. Recent studies report that local pair natural orbitals and domain-based local pair natural orbitals perform better than canonical RHF orbitals in DMRG-tCCSD.\cite{dmrg-tcc-2016, dmrg-tcc-2019-jcp, dmrg-tcc-2020-jctc, dmrg-tcc-2020-4c} Here, we benchmark another type of orbitals of localized nature, namely pCCD-optimized orbitals, as they have not yet been combined with coupled cluster theory tailored by DMRG wave functions.
Furthermore, the optimal active orbital space used in DMRG calculations that are then employed in the tailored coupled cluster flavour can be chosen according to the selection protocol presented by some of us~\cite{dmrg-tcc-2019}. This active space selection protocol provides accurate values for the correlation energy. In this work, however, we focus on a different approach. We aim at constructing a minimal but optimal active orbital space that captures the dominant part of static/nondynamic electron correlation using a one-step procedure exploiting tools based on quantum information theory. Such an active-space selection scheme will be beneficial for large-scale modelling or the accurate and efficient prediction of potential energy surfaces as it reduces the number of DMRG calculations to be performed on a daily basis. Besides, active orbital spaces can change along the reaction pathway. Thus, ensuring the same composition of active orbitals comprised in a large active space calculation might be difficult to achieve along the reaction coordinate. This work is organized as follows. In Section 2, we briefly review the main ideas of tailored coupled cluster methods, followed by the pCCD- and DMRG-tailored flavours. Computational details are presented in Section 3. We discuss the results and performance of these methods in Section 4. Finally, we conclude in Section 5. \section{Tailored coupled cluster theory} \label{section:theory} The core of coupled cluster theory is the exponential parametrization of the wave function. Tailored coupled cluster approaches take advantage of this ansatz as the exponential form allows us to utilize the Baker--Campbell--Hausdorff (BCH) formula and to factorize any operator of the form $e^{\hat{T}} = e^{\hat{T}_\mathrm{a}+\hat{T}_\mathrm{b}}$ into $e^{\hat{T}_\mathrm{a}}e^{\hat{T}_\mathrm{b}}$ if and only if the operators $\hat{T}_\mathrm{a}$ and $\hat{T}_\mathrm{b}$ commute.
Furthermore, the particular partitioning scheme of the cluster operator (here into $\hat{T}_\mathrm{a}$ and $\hat{T}_\mathrm{b}$) depends on the external model for strong correlation. The tailored coupled cluster wave function is thus expressed as \begin{equation}\label{eq:tcc} \ket{\Phi_{\rm tCC}} = e^{\hat{T}} \ket{\Phi_0} = e^{\hat{T}_\mathrm{a}} e^{\hat{T}_\mathrm{b}} \ket{\Phi_0}, \end{equation} where $\ket{\Phi_0}$ is a reference Slater determinant and $\hat{T}$ is the cluster operator that is partitioned into a sum of two cluster operators, $\hat{T}_\mathrm{a}$ and $\hat{T}_\mathrm{b}$. Note that we assume that $\hat{T}_\mathrm{a}$ and $\hat{T}_\mathrm{b}$ do commute. The cluster amplitudes of one part of the composite cluster operator $\hat{T}$, say $\hat{T}_\mathrm{b}$, are then derived from some external calculation that provides a proper treatment of strong correlation. With the $\hat{T}_\mathrm{b}$ amplitudes being frozen, the remaining cluster amplitudes of $\hat{T}_\mathrm{a}$ can be obtained using optimization algorithms that are analogous to those of single-reference coupled cluster methods. That is, the cluster amplitudes of $\hat{T}_\mathrm{a}$ can be obtained using projection techniques, where the Schr\"{o}dinger equation for this particular wave function ansatz reads \begin{align}\label{eq:hcc} e^{-\hat{T}_\mathrm{b}} e^{-\hat{T}_\mathrm{a}}\hat{H} e^{\hat{T}_\mathrm{a}} e^{\hat{T}_\mathrm{b}} \ket{ \Phi_0} &= E \ket{\Phi_0}. \end{align} In the above equation, the $\hat{T}_\mathrm{b}$ amplitudes are kept fixed during the optimization. Thus, the projection manifold of the tailored coupled cluster wave function contains only the set of determinants $\{\hat{T}_\mathrm{a} \ket{\Phi_0}\}$. \subsection{Frozen-pair coupled cluster theory}\label{s:fpcc} Frozen-pair coupled cluster theory originates from the idea of seniority-based coupled cluster approaches, in which the wave function components that differ in the number of unpaired electrons are treated separately.
The most significant part of the wave function is the seniority-zero sector, which includes only those amplitudes for which the number of unpaired electrons is zero \cite{pccd-2014-jcp}. The pCCD wave function is an example of such an ansatz. That is, the pCCD wave function is constructed from two-electron functions, also called geminals, using an exponential cluster operator of the form, \begin{equation} \ket{\Phi_{\rm pCCD}} = e^{\hat{T}_\mathrm{p}} \ket{\Phi_0}, \end{equation} where the cluster operator $\hat{T}_\mathrm{p}$ contains only electron-pair excitations (geminals), \begin{equation}\label{eq:Tp} \hat{T}_\mathrm{p} = \sum_i^{\rm occ}\sum_{a}^{\rm virt} t_{ii}^{aa} a_a^{\dagger}a_{\bar {a}}^{\dagger} a_{\bar{i}} a_i \end{equation} with $a_p$ ($a_p^\dagger$) and $a_{\bar{p}}$ ($a_{\bar{p}}^\dagger$) being electron annihilation (creation) operators for $\alpha$- and $\beta$-spin electrons, respectively. If combined with an orbital optimization protocol, the pCCD method is size-consistent and provides a proper qualitative description of the (exact) electronic wave function for strongly-correlated electronic systems \cite{pccd-2013-limacher, pccd-2014-prb, pccd-2014-jcp, pccd-2014-jctc, pccd-2014-prc, pccd-2014-stein}. However, energetics and other properties cannot be described quantitatively (for instance, to within chemical accuracy) by restricting the wave function to the seniority-zero sector alone. To reach quantitative accuracy, we need to go beyond the electron-pair approximation and extend the electronic wave function with components that account for unpaired electrons, so-called broken-pairs~\cite{seniority-cc-2014,pccd-pt-2014,geminals_lcc_2015,garza2015actinide,pccd-PTX}. Frozen-pair coupled cluster (fpCC) theory chooses the pCCD wave function as the fixed reference function \cite{seniority-cc-2014}.
That is, the cluster operator $\hat{T}_\mathrm{b}$ in eq.~\eqref{eq:tcc} is equivalent to the electron-pair cluster operator of eq.~\eqref{eq:Tp} and the cluster amplitudes are thus divided into pair amplitudes ($\hat{T}_\mathrm{b}$) and non-pair amplitudes ($\hat{T}_\mathrm{a}$). The fpCC ansatz therefore reads \begin{equation} \ket{\Phi_{\rm fpCC}} = e^{\hat{T}'} \ket{\Phi_{\rm pCCD}} = e^{\hat{T}'} e^{\hat{T}_\mathrm{p}} \ket{\Phi_0}, \end{equation} where $\hat{T}'$ is a cluster operator that is restricted to contain electron excitations (singles, broken-pair doubles, etc.) beyond electron-pair excitations. In the fpCCD method, the cluster operator is defined as $\hat{T}' = \hat{T}_2 - \hat{T}_{\rm p}$, while for fpCCSD, the cluster operator also includes single excitations, $\hat{T}' = \hat{T}_1 + \hat{T}_2 - \hat{T}_{\rm p}$. The geminal amplitudes $\{t_{ii}^{aa}\}$ account for strong electron correlation effects, while the remaining amplitudes introduce broken-pair components to complement the wave function. The difficulties in coupled cluster theory arise from the non-linearity of the amplitude equations that have to be solved to obtain the cluster amplitudes. This technical obstacle can be bypassed by truncating/neglecting the non-linear terms in the BCH expansion. Although linearized coupled cluster (LCC) theory did not gain popularity due to its poor performance, the linearized version of pCCD-tailored coupled cluster approaches allows us to reach chemical accuracy for many challenging systems.~\cite{geminals_lcc_2015,pccd-PTX,filip-jctc-2019,Nowak2019,pawel-yb2} Specifically, it has been shown that the pCCD-tailored LCC method is an efficient and reliable alternative to conventional electronic structure methods for both ground- \cite{geminals_lcc_2015,pccd-PTX,filip-jctc-2019} and excited states \cite{Nowak2019,pawel-yb2}.
The ansatz is given by \begin{equation} \ket{\Phi_{\rm fpLCC}} \approx (1+\hat{T}') \ket{\Phi_{\rm pCCD}} = (1+\hat{T}') e^{\hat{T}_\mathrm{p}} \ket{\Phi_0}, \end{equation} where $\hat{T}'$ is again some cluster operator that is restricted to contain non-pair electron excitations. The coupled cluster equations are linear with respect to the non-pair amplitudes, but the coupling between all pair and non-pair amplitudes is included (in addition to all non-linear terms originating from the pCCD reference function). In this work, we use truncated coupled cluster models that include either only double excitations (fpLCCD) or single and double excitations (fpLCCSD). \subsection{The DMRG-tailored coupled cluster method} The DMRG algorithm provides qualitatively correct solutions within some active orbital space, while the wave function components that include external or inactive orbitals can be included \textit{a posteriori} using, for example, DMRG-tailored CC approaches.~\cite{dmrg-tcc-2016} In these externally corrected CC flavours, the matrix product state ansatz, which is optimized by the DMRG algorithm, is translated to a CI-type wave function~\cite{dmrg-casci} with some specific reference determinant. Spin-dependent coupled cluster amplitudes can then be evaluated from the reconstructed CI coefficients using the standard equations, \begin{gather} t_i^a = c_{i}^{a} / c_0 \\ t_{ij}^{ab} = c_{ij}^{ab} / c_0 - (c_i^a c_j^b - c_i^b c_j^a) / c_0^2, \end{gather} where $c_0$ is the CI coefficient of the chosen reference determinant and the indices indicate spin orbitals. The spin-free amplitudes optimized by solving the spin-summed CC amplitude equations can be deduced from the spin-dependent ones as $t_I^A = t_{I_\alpha}^{A_\alpha}$ and $t_{IJ}^{AB} = t_{I_\alpha J_\beta}^{A_\alpha B_\beta}$, where the spin degree of freedom is labeled as a subscript and capital letters indicate spatial orbitals.
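As an illustration, the CI-to-CC amplitude conversion of the equations above can be sketched in a few lines of Python; the array names (\texttt{c0}, \texttt{c1[i,a]}, and \texttt{c2[i,j,a,b]} for the reference, singles, and doubles CI coefficients in a spin-orbital basis) are our own illustrative conventions and do not correspond to any particular program's API.

```python
import numpy as np

def ci_to_cc_amplitudes(c0, c1, c2):
    """Convert CI coefficients to CC amplitudes (spin-orbital basis).

    t_i^a   = c_i^a / c0
    t_ij^ab = c_ij^ab / c0 - (c_i^a c_j^b - c_i^b c_j^a) / c0^2

    c1[i, a] and c2[i, j, a, b] hold the singles/doubles CI coefficients
    relative to the chosen reference determinant (array layout assumed).
    """
    t1 = c1 / c0
    # disconnected singles contribution: c_i^a c_j^b - c_i^b c_j^a
    disc = np.einsum("ia,jb->ijab", c1, c1) - np.einsum("ib,ja->ijab", c1, c1)
    t2 = c2 / c0 - disc / c0**2
    return t1, t2
```

In a DMRG-tailored calculation, the resulting \texttt{t1} and \texttt{t2} for active-space indices would then be frozen while the remaining amplitudes are optimized.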
Within the DMRG-tCCSD formalism, the wave function is optimized in its split-amplitude form, \begin{equation} \label{eq:wcc} \ket{\Phi_{\rm DMRG-tCCSD}} = e^{\hat{T}_{\rm CAS}} e^{\hat{T}_{\rm ext}} \ket{\Phi_0}, \end{equation} where the $\hat{T}_{\rm CAS}$ cluster operator includes amplitudes comprising excitations within the active space orbitals, while the operator $\hat{T}_{\rm ext}$ incorporates all excitations beyond the active space \cite{kinoshita2005}. The $\hat{T}_{\rm CAS}$ amplitudes are extracted from DMRG calculations and kept frozen during the optimization of the remaining amplitudes. That is, the $\hat{T}_{\rm ext}$ amplitudes are obtained from the solution of the conventional CCSD equations in which the $\hat{T}_{\rm CAS}$ amplitudes are fixed. Preventing the relaxation of the $\hat{T}_{\rm CAS}$ amplitudes allows us to capture strong electron correlation effects within the CCSD framework, while the relaxed $\hat{T}_{\rm ext}$ amplitudes are optimized to supplement the wave function with the missing dynamical electron correlation effects. \section{Computational Details} \label{section:details} \subsection{Electronic structure calculations} All pCCD and (tailored) coupled cluster calculations (using the spin-summed equations) were performed in a developer version of the PyBEST software package\cite{pybest-2021,pybest-1.0.0}. We used the aug-cc-pVDZ and aug-cc-pVTZ basis sets for the F atom, the cc-pCVDZ basis set for the benzene molecule, and Dunning's cc-pVDZ and cc-pVTZ basis sets for all other atoms. For the N$_2$ dimer, we performed additional calculations with the cc-pVQZ basis set.~\cite{basis-cc-pvdz,basis-cc-pvtz,basis-cc-pvqz,basis-cc-pcvdz} The depths of the potential energy wells were obtained from a generalized Morse function\cite{morse_potential} fit.
The vibrational frequencies and equilibrium bond lengths were calculated numerically from a polynomial fit of sixth order around the equilibrium distance, where we used $M_{\textrm{B}} = 11.0093\,u$ for the B atom, $M_{\textrm{C}} = 12\,u$ for the C atom, $M_{\textrm{N}} = 14.0031\,u$ for the N atom, $M_{\textrm{O}} = 15.9949\,u$ for the O atom, and $M_{\textrm{F}} = 18.9984\,u$ for the F atom. The CCSD(T) and CCSDT calculations for ammonia, ethylene, and cyclobutadiene have been performed with the Molpro 2012.1.12 software package.\cite{molpro2012, molpro2012_2} The spin-free DMRG calculations were performed using the Budapest QC-DMRG program.~\cite{dmrg_ors} Two different sets of molecular orbitals were investigated, namely canonical restricted Hartree--Fock (RHF) orbitals and pCCD-optimized orbitals. Furthermore, we aimed at constructing small but chemically reasonable active orbital spaces as they allow us to maintain the same level of approximation for orbitals of similar weight and correlation strength. Specifically, this allows us to avoid errors driven by an unbalanced composition of the active space. The active space selection for RHF orbitals was based on quantum information measures obtained from DMRG calculations with $m \in \{64, 128\}$, since DMRG calculations exploiting even small bond dimensions provide robust single-orbital entropies and orbital-pair mutual information profiles. In all calculations involving active orbital spaces, we performed calculations with different values of the bond dimension $m$ to ensure that the DMRG energy converged with respect to $m$; that is, we used $m \in \{32, 64, 128\}$ for all first-row atom dimers, $m \in \{128, 256, 512\}$ for Cr$_2$, ammonia, ethylene, and benzene, and $m \in \{256, 512, 1024\}$ for the cyclobutadiene molecule.
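The numerical extraction of $r_\mathrm{e}$ and $\omega_\mathrm{e}$ from a polynomial fit, as described above, can be sketched as follows; the function name, grid layout, and unit conventions (energies in hartree, distances in \AA{}ngstr\"om, masses in $u$) are our own illustrative choices, not the actual implementation used here.

```python
import numpy as np

HARTREE_TO_J = 4.3597447e-18   # approximate CODATA value
AMU_TO_KG = 1.66053907e-27
C_CM_PER_S = 2.99792458e10     # speed of light in cm/s

def fit_re_omega(r, energies, mass1, mass2, deg=6):
    """Fit E(r) (hartree) on a grid r (angstrom) with a polynomial of
    degree `deg`, locate the minimum r_e, and convert the curvature at
    r_e into a harmonic frequency omega_e in cm^-1."""
    p = np.polynomial.Polynomial.fit(r, energies, deg)
    roots = p.deriv().roots()
    # keep (numerically) real stationary points and pick the one
    # closest to the grid minimum
    real = roots.real[np.abs(roots.imag) < 1e-8]
    r_guess = r[np.argmin(energies)]
    r_e = real[np.argmin(np.abs(real - r_guess))]
    k = p.deriv(2)(r_e)                     # hartree / angstrom^2
    k_si = k * HARTREE_TO_J / (1e-10) ** 2  # N / m
    mu = mass1 * mass2 / (mass1 + mass2) * AMU_TO_KG
    omega_e = np.sqrt(k_si / mu) / (2.0 * np.pi * C_CM_PER_S)
    return r_e, omega_e
```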
The DMRG energies of all first-row atom dimers, ammonia, and benzene were converged up to $\Delta E = 10^{-8}$ with respect to $m$, while for the Cr$_2$ and ethylene molecules and for cyclobutadiene the convergence threshold was relaxed to $\Delta E = 10^{-5}$ and $\Delta E = 10^{-4}$, respectively. The converged DMRG wave functions were first used to reconstruct the CI and then the CC coefficients. The active space of the N$_2$ and F$_2$ molecules consists of one $3\sigma_g$, two $\pi_u$, two $\pi_g$, and one $3 \sigma_u$ orbital. In the case of the carbon dimer, the active orbital space was extended by the $2\sigma_g$ and $2\sigma_u$ orbitals as the single-orbital entropy and orbital-pair mutual information in the pCCD-optimized orbital basis suggested that these orbitals might have a non-negligible impact on the balanced description of electron correlation effects in the dissociation region. The orbital interactions are similar for the C$_2$ isoelectronic analogues --- BN, BO$^+$, and CN$^+$ --- and, therefore, their active orbital spaces were composed of two $\sigma$, two $\sigma^*$, two $\pi$, and two $\pi^*$ orbitals occupied by eight electrons. For the chromium dimer, we used all twelve valence orbitals (4s and 3d) following the recommendations of Refs.~\citenum{cr2_2016, dmrg-tcc-2016}. We studied two active spaces in the case of the ammonia compound, which were selected solely based on the values of the single-orbital entropies and the orbital-pair mutual information. Specifically, we looked for pronounced gaps in the single-orbital entropy and orbital-pair mutual information distributions to find reasonable cutoff values. This procedure resulted in a small CAS(6,6) containing orbitals with $s_i > 0.04$ and $I_{ij} > 0.07$, while a slightly bigger CAS(8,8) was derived from the decreased cutoff values $s_i > 0.03$ and $I_{ij} > 0.01$.
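The one-step, threshold-based selection described above can be sketched as follows; the function name, the array layout, and the way the two cutoffs are combined are illustrative assumptions rather than the actual implementation employed in this work.

```python
import numpy as np

def select_active_orbitals(s, I, s_cut, i_cut):
    """Pick orbital indices whose single-orbital entropy s[i] exceeds
    s_cut and whose largest orbital-pair mutual information I[i, j]
    (j != i) exceeds i_cut, mimicking cutoffs such as
    s_i > 0.04 and I_ij > 0.07."""
    s = np.asarray(s, dtype=float)
    I = np.asarray(I, dtype=float)
    off_diag = I - np.diag(np.diag(I))  # ignore the diagonal I[i, i]
    strongly_entangled = off_diag.max(axis=1) > i_cut
    return np.where((s > s_cut) & strongly_entangled)[0]
```

In practice, the cutoffs are system-dependent and are read off from pronounced gaps in the entropy and mutual information distributions, as described in the text.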
A similar active space selection protocol was used to obtain a CAS(12,12) for the ethylene molecule and a CAS(20,20) for the cyclobutadiene complex. For benzene, the thresholds were tightened ($I_{ij} > 0.1$), resulting in a CAS(6,6). The diagrams for the single-orbital entropy and orbital-pair mutual information of selected systems (molecules and bond lengths) are summarized in the Supporting Information (SI). The coupled cluster amplitudes were reconstructed from matrix product state wave functions using the method described by some of us.\cite{dmrg-tcc-2016} All tailored coupled cluster calculations were performed using the CC amplitude equations in the spin-summed form. \subsection{Abbreviations of method names} In all (conventional and tailored) coupled cluster calculations, we used two different reference wave functions and molecular orbitals: (a) canonical RHF and (b) (variationally) orbital-optimized pCCD. In this work, all coupled cluster methods were truncated at the doubles (CCD) and singles and doubles (CCSD) levels. Thus, CCD$^a$ and CCSD$^a$ represent the traditional coupled cluster methods with a canonical RHF reference function, while the abbreviations CCD$^b$ and CCSD$^b$ indicate that the reference determinant (and hence the molecular orbitals) of the orbital-optimized pCCD wave function was selected as the reference determinant in conventional, that is untailored, coupled cluster calculations. In these flavours, all cluster amplitudes are thus optimized and all information about the electron-pair amplitudes is lost. We should note that the linearized CC corrections with a pCCD reference function are labeled as fpLCCD and fpLCCSD, respectively, while they were originally introduced using the acronyms pCCD-LCCD and pCCD-LCCSD. All fpCC calculations, including the linearized variants, have been performed in the pCCD-optimized orbital basis only.
\section{Results} \label{section:results} \begin{table}[H] \caption{Spectroscopic constants for the dissociation of homonuclear main-group diatomic molecules for different quantum chemistry methods and basis sets. Errors with respect to MRCI, FCIQMC, or DMRG are given in parentheses. The reference data has no error given in parentheses. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ stands for pCCD-optimized orbitals. The ``$*$'' denotes the lowest-lying singlet excited state. } \label{tab:3-1} \begin{tiny} \begin{tabular}{p{0.01\textwidth}p{0.15\textwidth} p{0.02\textwidth}r p{0.02\textwidth}r p{0.02\textwidth}r p{0.02\textwidth}r p{0.02\textwidth}r p{0.02\textwidth}r} & & \multicolumn{2}{l}{ r$_\mathrm{e}$ [\r{A}] } & \multicolumn{2}{l}{ D$_\mathrm{e}$ [$\rm \frac{kcal}{mol}$] } & \multicolumn{2}{l}{ $\omega_\mathrm{e}$ [cm$^{-1}$] } & \multicolumn{2}{l}{ r$_\mathrm{e}$ [\r{A}] } & \multicolumn{2}{l}{ D$_\mathrm{e}$ [$\rm \frac{kcal}{mol}$] } & \multicolumn{2}{l}{ $\omega_\mathrm{e}$ [cm$^{-1}$] } \\ \hline & & \multicolumn{6}{l}{aug-cc-pVDZ} & \multicolumn{4}{l}{aug-cc-pVTZ} \\ \hline F$_2$ & RHF & 1.338 & ( 0.115 ) & 181.0 & ( -152.5 ) & 1216 & ( -415 ) & 1.328 & ( 0.092 ) & & & 1271 & ( -379 ) \\ & CCD$^a$ & 1.414 & ( 0.039 ) & 69.1 & ( -40.6 ) & 964 & ( -163 ) & 1.383 & ( 0.037 ) & 82.2 & ( -48.3 ) & 1053 & ( -161 ) \\ & CCSD$^a$ & 1.425 & ( 0.028 ) & 57.0 & ( -28.5 ) & 922 & ( -121 ) & 1.392 & ( 0.028 ) & 70.1 & ( -36.2 ) & 1018 & ( -126 ) \\ & pCCD & 1.502 & ( -0.049 ) & 12.6 & ( 15.9 ) & 622 & ( 179 ) & 1.466 & ( -0.046 ) & 16.2 & ( 17.7 ) & 708 & ( 184 ) \\ & CCD$^b$ & 1.425 & ( 0.028 ) & 55.5 & ( -27.0 ) & 924 & ( -123 ) & 1.391 & ( 0.029 ) & 68.8 & ( -34.9 ) & 1021 & ( -129 ) \\ & CCSD$^b$ & 1.424 & ( 0.029 ) & 55.9 & ( -27.4 ) & 926 & ( -125 ) & 1.391 & ( 0.029 ) & 69.3 & ( -35.4 ) & 1022 & ( -130 ) \\ & fpLCCD & 1.473 & ( -0.020 ) & 37.5 & ( -9.0 ) & 776 & ( 25 ) & 1.434 
& ( -0.014 ) & 44.3 & ( -10.4 ) & 866 & ( 26 ) \\ & fpLCCSD & 1.469 & ( -0.016 ) & 38.1 & ( -9.6 ) & 788 & ( 13 ) & 1.431 & ( -0.011 ) & 45.2 & ( -11.3 ) & 879 & ( 13 ) \\ & fpCCD & 1.473 & ( -0.020 ) & 36.7 & ( -8.2 ) & 773 & ( 28 ) & 1.433 & ( -0.013 ) & 43.3 & ( -9.4 ) & 863 & ( 29 ) \\ & fpCCSD & 1.469 & ( -0.016 ) & 37.1 & ( -8.6 ) & 782 & ( 19 ) & 1.431 & ( -0.011 ) & 44.2 & ( -10.3 ) & 873 & ( 19 ) \\ & DMRG(8,8)-tCCSD$^a$ & 1.493 & ( -0.040 ) & 37.7 & ( -9.2 ) & 728 & ( 73 ) & 1.454 & ( -0.034 ) & 43.7 & ( -9.8 ) & 758 & ( 134 ) \\ & DMRG(8,8)-tCCSD$^b$ & 1.468 & ( -0.015 ) & 38.4 & ( -9.9 ) & 728 & ( 73 ) & 1.427 & ( -0.007 ) & 45.9 & ( -12.0 ) & 875 & ( 17 ) \\ & MRCI\cite{peterson1993} & 1.453 & & 28.5 & & 801 & & 1.420 & & 33.9 & & 892 & \\ & exp.\cite{MolSpectraStruct, irikura2007} & 1.412 & ( 0.041 ) & 37.7 & ( -9.2 ) & 917 & ( -116 ) & 1.412 & ( 0.008 ) & 37.7 & ( -3.8 ) & 917 & ( -25 ) \\ \cline{2-14} & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \cline{2-14} & pCCD & 1.471 & ( -0.006 ) & 15.5 & ( 10.8 ) & 693 & ( 64 ) & 1.471 & ( -0.052 ) & 15.5 & ( 17.0 ) & 693 & ( 198 ) \\ & CCD$^b$ & 1.392 & ( 0.074 ) & 66.1 & ( -39.8 ) & 1017 & ( -261 ) & 1.392 & ( 0.028 ) & 66.2 & ( -33.7 ) & 1016 & ( -126 ) \\ & CCSD$^b$ & 1.391 & ( 0.074 ) & 66.5 & ( -40.2 ) & 1019 & ( -263 ) & 1.391 & ( 0.028 ) & 66.6 & ( -34.1 ) & 1018 & ( -128 ) \\ & fpLCCD & 1.432 & ( 0.034 ) & 42.0 & ( -15.7 ) & 868 & ( -112 ) & 1.432 & ( -0.013 ) & 42.1 & ( -9.6 ) & 867 & ( 23 ) \\ & fpLCCSD & 1.429 & ( 0.036 ) & 43.1 & ( -16.8 ) & 879 & ( -123 ) & 1.429 & ( -0.010 ) & 43.2 & ( -10.7 ) & 878 & ( 13 ) \\ & fpCCD & 1.432 & ( 0.034 ) & 41.0 & ( -14.7 ) & 864 & ( -108 ) & 1.432 & ( -0.012 ) & 41.2 & ( -8.7 ) & 864 & ( 27 ) \\ & fpCCSD & 1.429 & ( 0.036 ) & 41.9 & ( -15.6 ) & 874 & ( -117 ) & 1.429 & ( -0.010 ) & 42.1 & ( -9.6 ) & 872 & ( 18 ) \\ & DMRG(8,8)-tCCSD$^a$ & 1.471 & ( -0.006 ) & 35.0 & ( -8.7 ) & 765 & ( -9 ) & 1.426 & ( -0.007 ) & 43.8 & ( -11.3 ) & 878 & ( 
12 ) \\ & DMRG(8,8)-tCCSD$^b$ & 1.474 & ( -0.006 ) & 35.8 & ( -9.5 ) & 742 & ( 15 ) & 1.426 & ( -0.007 ) & 43.4 & ( -10.9 ) & 872 & ( 19 ) \\ & MRCI\cite{peterson1993} & 1.465 & & 26.3 & & 756 & & 1.419 & & 32.5 & & 891 & \\ & exp.\cite{MolSpectraStruct, irikura2007} & 1.412 & ( 0.053 ) & & & 917 & ( -161 ) & 1.412 & ( 0.007 ) & & & 917 & ( -27 ) \\ \hline & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline N$_2$ & pCCD & 1.099 & ( 0.020 ) & 239.6 & ( -37.6 ) & 2482 & ( -153 ) & 1.085 & ( 0.019 ) & 244.8 & ( -26.9 ) & 2517 & ( -176 ) \\ & CCD$^b$ & 1.111 & ( 0.008 ) & & & 2415 & ( -86 ) & 1.093 & ( 0.011 ) & & & 2447 & ( -106 ) \\ & CCSD$^b$ & 1.112 & ( 0.008 ) & & & 2412 & ( -84 ) & 1.093 & ( 0.011 ) & & & 2444 & ( -104 ) \\ & fpLCCD & 1.119 & ( 0.001 ) & & & 2320 & ( 8 ) & 1.101 & ( 0.003 ) & & & 2347 & ( -7 ) \\ & fpLCCSD & 1.120 & ( 0.000 ) & & & 2315 & ( 14 ) & 1.102 & ( 0.002 ) & & & 2340 & ( 0 ) \\ & fpCCD & 1.118 & ( 0.002 ) & & & 2335 & ( -7 ) & 1.100 & ( 0.004 ) & & & 2365 & ( -24 ) \\ & fpCCSD & 1.118 & ( 0.002 ) & & & 2336 & ( -8 ) & 1.100 & ( 0.004 ) & & & 2365 & ( -24 ) \\ & DMRG(6,6)-tCCSD$^a$ & 1.118 & ( 0.002 ) & 208.1 & ( -6.1 ) & 2327 & ( 2 ) & 1.100 & ( 0.004 ) & 227.4 & ( -9.5 ) & 2356 & ( -15 ) \\ & DMRG(6,6)-tCCSD$^b$ & 1.122 & ( -0.002 ) & 219.0 & ( -17.0 ) & 2383 & ( -55 ) & 1.102 & ( 0.002 ) & 239.3 & ( -21.4 ) & 2387 & ( -46 ) \\ & MRCI \cite{peterson1993} & 1.120 & & 202.0 & & 2329 & & 1.104 & & 217.9 & & 2341 & \\ \cline{2-14} & & \multicolumn{6}{l}{cc-pVQZ} & \multicolumn{4}{l}{aug-cc-pVTZ} \\ \cline{2-14} & pCCD & 1.077 & ( 0.023 ) & 252.8 & ( -28.6 ) & 2723 & ( -371 ) & 1.087 & ( 0.011 ) & 256.1 & ( -31.0 ) & 2450 & ( -91 ) \\ & CCD$^b$ & 1.090 & ( 0.010 ) & & & 2447 & ( -95 ) & 1.092 & ( 0.006 ) & & & 2455 & ( -96 ) \\ & CCSD$^b$ & 1.090 & ( 0.010 ) & & & 2450 & ( -98 ) & 1.092 & ( 0.006 ) & & & 2448 & ( -89 ) \\ & fpLCCD & 1.098 & ( 0.003 ) & & & 2370 & ( -18 ) & 1.100 & ( -0.002 ) & & & 2357 & ( 2 ) \\ & 
fpLCCSD & 1.098 & ( 0.002 ) & & & 2373 & ( -21 ) & 1.101 & ( -0.003 ) & & & 2344 & ( 15 ) \\ & fpCCD & 1.096 & ( 0.005 ) & & & 2394 & ( -43 ) & 1.099 & ( -0.001 ) & & & 2374 & ( -15 ) \\ & fpCCSD & 1.096 & ( 0.005 ) & & & 2402 & ( -51 ) & 1.099 & ( -0.001 ) & & & 2369 & ( -10 ) \\ & DMRG(6,6)-tCCSD$^a$ & 1.097 & ( 0.004 ) & 230.9 & ( -6.7 ) & 2353 & ( -2 ) & 1.094 & ( 0.004 ) & 222.1 & ( 3.0 ) & 2280 & ( 79 ) \\ & DMRG(6,6)-tCCSD$^b$ & 1.099 & ( 0.001 ) & 244.4 & ( -20.2 ) & 2392 & ( -40 ) & 1.117 & ( -0.019 ) & 237.0 & ( -11.9 ) & 2232 & ( 127 ) \\ & MRCI\cite{peterson1993} & 1.101 & & 224.2 & & 2352 & & & & & & & \\ & exp.\cite{MolSpectraStruct, shimanouchi1997} & 1.098 & ( 0.002 ) & 225.1 & ( -0.9 ) & 2359 & ( -7 ) & 1.098 & & 225.1 & & 2359 & \\ \hline & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline C$_2$ & CCD$^b$ & 1.263 & ( 0.009 ) & 136.0 & ( -9.2 ) & 1873 & ( -62 ) & 1.240 & ( 0.013 ) & 156.1 & ( -20.8 ) & 1911 & ( -76 ) \\ & CCSD$^b$ & 1.263 & ( 0.009 ) & 136.5 & ( -9.7 ) & 1873 & ( -62 ) & 1.240 & ( 0.013 ) & 156.6 & ( -21.2 ) & 1915 & ( -80 ) \\ & fpLCCD & 1.264 & ( 0.008 ) & 131.3 & ( -4.5 ) & 1901 & ( -90 ) & 1.241 & ( 0.011 ) & 136.7 & ( -1.3 ) & 1871 & ( -36 ) \\ & fpLCCSD & 1.265 & ( 0.008 ) & 134.2 & ( -7.4 ) & 1890 & ( -79 ) & 1.241 & ( 0.011 ) & 140.0 & ( -4.6 ) & 1862 & ( -27 ) \\ & fpCCD & 1.261 & ( 0.012 ) & 130.4 & ( -3.6 ) & 1905 & ( -94 ) & 1.237 & ( 0.015 ) & 143.4 & ( -8.0 ) & 1901 & ( -67 ) \\ & fpCCSD & 1.260 & ( 0.013 ) & 131.7 & ( -4.9 ) & 1906 & ( -95 ) & 1.236 & ( 0.017 ) & 145.1 & ( -9.7 ) & 1905 & ( -70 ) \\ & DMRG(8,8)-tCCSD$^a$ & 1.266 & ( 0.007 ) & 134.3 & ( -7.5 ) & 1850 & ( -39 ) & 1.242 & ( 0.011 ) & 147.5 & ( -12.1 ) & 1893 & ( -58 ) \\ & DMRG(8,8)-tCCSD$^b$ & 1.265 & ( 0.008 ) & 136.0 & ( -9.2 ) & 1899 & ( -88 ) & 1.249 & ( 0.004 ) & 146.3 & ( -11.0 ) & 1885 & ( -51 ) \\ & MRCI\cite{peterson1995} & 1.251 & ( 0.021 ) & 135.4 & ( -8.6 ) & 1873 & ( -62 ) & 1.252 & ( 0.001 ) & 140.4 & ( -5.0 ) & 1840 & 
( -5 ) \\ & DMRG(12,28)\cite{wouters2014} & 1.272 & ( 0.001 ) & 130.1 & ( -3.3 ) & 1816 & ( -5 ) & & & & & & \\ & FCIQMC\cite{booth2011} & 1.273 & & 126.8 & & 1811 & & 1.253 & & 135.4 & & 1835 & \\ & exp.\cite{MolSpectraStruct} & 1.243 & ( 0.030 ) & 147.8 & ( -21.0 ) & 1855 & ( -44 ) & 1.243 & ( 0.010 ) & 147.8 & ( -12.4 ) & 1855 & ( -20 ) \\ \hline & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline C$_2^*$ & CCD$^b$ & 1.415 & ( -0.005 ) & 111.3 & ( -23.8 ) & 1641 & ( -274 ) & 1.393 & ( -0.016 ) & 133.4 & ( -32.2 ) & 1970 & ( -546 ) \\ & CCSD$^b$ & 1.416 & ( -0.006 ) & 111.3 & ( -23.8 ) & 1648 & ( -281 ) & 1.320 & ( 0.057 ) & 131.9 & ( -30.7 ) & 1387 & ( 37 ) \\ & fpLCCD & 1.466 & ( -0.056 ) & 87.5 & ( 0.0 ) & 1421 & ( -54 ) & 1.404 & ( -0.026 ) & 94.9 & ( 6.3 ) & 1389 & ( 35 ) \\ & fpLCCSD & 1.449 & ( -0.039 ) & 91.1 & ( -3.6 ) & 1317 & ( 50 ) & 1.406 & ( -0.028 ) & 98.4 & ( 2.8 ) & 1348 & ( 76 ) \\ & fpCCD & 1.425 & ( -0.015 ) & 91.9 & ( -4.4 ) & 1286 & ( 80 ) & 1.394 & ( -0.016 ) & 106.7 & ( -5.5 ) & 1379 & ( 45 ) \\ & fpCCSD & 1.426 & ( -0.016 ) & 94.4 & ( -6.9 ) & 1301 & ( 66 ) & 1.395 & ( -0.018 ) & 109.4 & ( -8.2 ) & 1370 & ( 54 ) \\ & DMRG(12,28)\cite{wouters2014} & 1.410 & & 87.6 & & 1367 & & & & & & & \\ & exp.\cite{douay1988} & 1.377 & ( 0.033 ) & 101.2 & ( -13.6 ) & 1424 & ( -57 ) & 1.377 & & 101.2 & & 1424 & \\ \hline \end{tabular} \end{tiny} \end{table} \begin{table}[tpb] \caption{Spectroscopic constants for the dissociation of main-group heteronuclear diatomic molecules for different quantum chemistry methods and basis sets. Errors with respect to MRCI, CMRCI+Q, FCIQMC, or experiment are given in parentheses. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ stands for pCCD-optimized orbitals. The ``$*$'' denotes the lowest-lying singlet excited state that could be optimized within pCCD. 
} \label{tab:3-2} \begin{tiny} \begin{tabular}{p{0.025\textwidth}p{0.15\textwidth} p{0.03\textwidth}r p{0.03\textwidth}r p{0.03\textwidth}r p{0.03\textwidth}r p{0.03\textwidth}r p{0.03\textwidth}r} \hline & & \multicolumn{2}{l}{ r$_\mathrm{e}$ [\r{A}] } & \multicolumn{2}{l}{ D$_\mathrm{e}$ [$\rm \frac{kcal}{mol}$] } & \multicolumn{2}{l}{ $\omega_\mathrm{e}$ [cm$^{-1}$] } & \multicolumn{2}{l}{ r$_\mathrm{e}$ [\r{A}] } & \multicolumn{2}{l}{ D$_\mathrm{e}$ [$\rm \frac{kcal}{mol}$] } & \multicolumn{2}{l}{ $\omega_\mathrm{e}$ [cm$^{-1}$] } \\ \hline & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline BO$^+$ & CCD$^a$ & 1.199 & ( 0.027 ) & 114.8 & ( 3.2 ) & 1901 & ( -168 ) & 1.186 & ( 0.030 ) & 126.5 & ( -0.5 ) & 1966 & ( -182 ) \\ & CCSD$^a$ & 1.205 & ( 0.021 ) & 119.6 & ( -1.6 ) & 1883 & ( -150 ) & 1.191 & ( 0.025 ) & 130.6 & ( -4.6 ) & 1949 & ( -165 ) \\ & CCD$^b$ & 1.198 & ( 0.028 ) & 108.5 & ( 9.6 ) & 1903 & ( -169 ) & 1.184 & ( 0.031 ) & 122.8 & ( 3.2 ) & 2005 & ( -221 ) \\ & CCSD$^b$ & 1.205 & ( 0.021 ) & 111.5 & ( 6.6 ) & 1894 & ( -160 ) & 1.191 & ( 0.025 ) & 124.3 & ( 1.7 ) & 1952 & ( -168 ) \\ & fpLCCD & 1.205 & ( 0.021 ) & 107.5 & ( 10.6 ) & 1854 & ( -121 ) & 1.191 & ( 0.025 ) & 116.3 & ( 9.7 ) & 1942 & ( -158 ) \\ & fpLCCSD & 1.232 & ( -0.006 ) & 120.5 & ( -2.5 ) & 2264 & ( -530 ) & 1.245 & ( -0.029 ) & 132.0 & ( -6.0 ) & 1865 & ( -81 ) \\ & fpCCD & 1.203 & ( 0.023 ) & 106.1 & ( 12.0 ) & 1874 & ( -140 ) & 1.189 & ( 0.026 ) & 114.8 & ( 11.3 ) & 1958 & ( -174 ) \\ & fpCCSD & 1.205 & ( 0.021 ) & 107.0 & ( 11.0 ) & 1925 & ( -191 ) & 1.193 & ( 0.023 ) & 114.4 & ( 11.6 ) & 1936 & ( -152 ) \\ & DMRG(8,8)-tCCSD$^a$ & 1.230 & ( -0.004 ) & 130.8 & ( -12.8 ) & 1823 & ( -89 ) & 1.218 & ( -0.002 ) & 141.7 & ( -15.7 ) & 1868 & ( -84 ) \\ & DMRG(8,8)-tCCSD$^b$ & 1.188 & ( 0.038 ) & 102.0 & ( 16.0 ) & 1676 & ( 58 ) & 1.166 & ( 0.049 ) & 93.3 & ( 32.7 ) & 1718 & ( 66 ) \\ & CASSCF\cite{peterson1995} & 1.211 & ( 0.016 ) & 133.7 & ( -15.7 ) & 1815 & ( -82 ) 
& 1.205 & ( 0.011 ) & 138.5 & ( -12.4 ) & 1835 & ( -52 ) \\ & CMRCI\cite{peterson1995} & 1.225 & ( 0.001 ) & 118.6 & ( -0.5 ) & 1741 & ( -7 ) & 1.214 & ( 0.001 ) & 126.7 & ( -0.7 ) & 1792 & ( -8 ) \\ & CMRCI+Q\cite{peterson1995} & 1.226 & & 118.0 & & 1734 & & 1.216 & & 126.0 & & 1784 & \\ \hline & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline BN & pCCD & 1.269 & ( 0.030 ) & 100.3 & ( 48.2 ) & 1776 & ( -125 ) & & ( 1.285 ) & & ( 154.4 ) & & ( 1682 ) \\ & CCD$^b$ & 1.293 & ( 0.005 ) & 150.8 & ( -2.2 ) & 1689 & ( -39 ) & 1.279 & ( 0.006 ) & 164.4 & ( -10.1 ) & 1683 & ( -1 ) \\ & CCSD$^b$ & 1.288 & ( 0.010 ) & 153.3 & ( -4.7 ) & 1714 & ( -63 ) & 1.271 & ( 0.014 ) & 167.7 & ( -13.3 ) & 1759 & ( -77 ) \\ & fpLCCD & 1.294 & ( 0.004 ) & 148.5 & ( 0.1 ) & 1683 & ( -33 ) & 1.281 & ( 0.004 ) & 187.7 & ( -33.3 ) & 1461 & ( 221 ) \\ & fpLCCSD & 1.293 & ( 0.005 ) & 154.3 & ( -5.8 ) & 1695 & ( -45 ) & 1.297 & ( -0.012 ) & 193.9 & ( -39.5 ) & 1595 & ( 87 ) \\ & fpCCD & 1.291 & ( 0.008 ) & 146.2 & ( 2.4 ) & 1700 & ( -49 ) & 1.282 & ( 0.003 ) & 184.0 & ( -29.7 ) & 1589 & ( 93 ) \\ & fpCCSD & 1.287 & ( 0.012 ) & 147.8 & ( 0.8 ) & 1724 & ( -74 ) & 1.276 & ( 0.009 ) & 185.7 & ( -31.4 ) & 1606 & ( 76 ) \\ & DMRG(8,8)-tCCSD$^a$ & 1.293 & ( 0.005 ) & 159.6 & ( -11.0 ) & 1787 & ( -136 ) & 1.282 & ( 0.003 ) & 169.7 & ( -15.4 ) & 1836 & ( -154 ) \\ & DMRG(8,8)-tCCSD$^b$ & 1.297 & ( 0.001 ) & 138.7 & ( 9.9 ) & 1742 & ( -91 ) & 1.298 & ( -0.013 ) & 159.2 & ( -4.8 ) & 1852 & ( -170 ) \\ & CASSCF\cite{peterson1995} & 1.294 & ( 0.004 ) & 158.8 & ( -10.2 ) & 1681 & ( -30 ) & 1.288 & ( -0.003 ) & 160.8 & ( -6.4 ) & 1686 & ( -4 ) \\ & CMRCI\cite{peterson1995} & 1.298 & ( 0.001 ) & 149.8 & ( -1.2 ) & 1655 & ( -4 ) & 1.284 & ( 0.001 ) & 155.9 & ( -1.5 ) & 1687 & ( -5 ) \\ & CMRCI+Q\cite{peterson1995} & 1.298 & & 148.6 & & 1651 & & 1.285 & & 154.4 & & 1682 & \\ \hline & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline CN$^+$ & CCD$^b$ & 1.186 & ( 0.012 ) & 
173.6 & ( -9.8 ) & & & 1.186 & ( 0.012 ) & 173.6 & ( -9.8 ) & 2106 & ( -127 ) \\ & CCSD$^b$ & 1.184 & ( 0.014 ) & 173.0 & ( -9.2 ) & & & 1.184 & ( 0.014 ) & 173.0 & ( -9.2 ) & 2121 & ( -141 ) \\ & fpLCCD & 1.174 & ( 0.024 ) & 179.5 & ( -15.7 ) & 2068 & ( -88 ) & 1.174 & ( 0.024 ) & 179.5 & ( -15.7 ) & 2068 & ( -88 ) \\ & fpLCCSD & 1.173 & ( 0.025 ) & 176.3 & ( -12.5 ) & 2080 & ( -100 ) & 1.173 & ( 0.025 ) & 176.3 & ( -12.5 ) & 2080 & ( -100 ) \\ & fpCCD & 1.169 & ( 0.029 ) & 167.5 & ( -3.7 ) & 2075 & ( -95 ) & 1.169 & ( 0.029 ) & 167.5 & ( -3.7 ) & 2070 & ( -91 ) \\ & fpCCSD & 1.181 & ( 0.017 ) & 176.8 & ( -13.0 ) & 2081 & ( -101 ) & 1.160 & ( 0.038 ) & 176.8 & ( -13.0 ) & 2174 & ( -194 ) \\ & DMRG(8,8)-tCCSD$^a$ & 1.191 & ( 0.007 ) & 171.2 & ( -7.4 ) & 2013 & ( -34 ) & 1.179 & ( 0.019 ) & 155.3 & ( 8.5 ) & 2156 & ( -177 ) \\ & DMRG(8,8)-tCCSD$^b$ & 1.190 & ( 0.008 ) & 178.5 & ( -14.7 ) & 1986 & ( -7 ) & 1.170 & ( 0.028 ) & 170.9 & ( -7.0 ) & 2042 & ( -62 ) \\ & CASSCF\cite{peterson1995} & 1.191 & ( 0.007 ) & 178.8 & ( -14.9 ) & 2030 & ( -51 ) & 1.182 & ( 0.016 ) & 181.0 & ( -17.2 ) & 2018 & ( -39 ) \\ & CMRCI\cite{peterson1995} & 1.197 & ( 0.001 ) & 165.2 & ( -1.4 ) & 1985 & ( -6 ) & 1.182 & ( 0.016 ) & 170.6 & ( -6.8 ) & 2006 & ( -26 ) \\ & CMRCI+Q\cite{peterson1995} & 1.198 & & 163.8 & & 1979 & & 1.198 & & 163.8 & & 1979 & \\ & exp.\cite{MolSpectraStruct} & 1.173 & ( 0.025 ) & & & 2033 & ( -54 ) & 1.173 & ( 0.025 ) & & & 2033 & ( -54 ) \\ \hline & & \multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline (CN$^+$)$^*$ & CCD$^b$ & 1.335 & & 160.7 & & 1962 & & 1.335 & & 160.7 & & 1962 & \\ & CCSD$^b$ & 1.339 & & 162.8 & & 2084 & & 1.339 & & 162.8 & & 2084 & \\ & fpLCCD & 1.389 & & 123.0 & & 1099 & & 1.389 & & 123.0 & & 1099 & \\ & fpLCCSD & 1.391 & & 142.8 & & 1407 & & 1.391 & & 142.8 & & 1407 & \\ & fpCCD & 1.375 & & 128.0 & & 1264 & & 1.375 & & 128.0 & & 1363 & \\ & fpCCSD & 1.373 & & 144.7 & & 1515 & & 1.373 & & 144.7 & & 1510 & \\ \hline & & 
\multicolumn{6}{l}{cc-pVDZ} & \multicolumn{4}{l}{cc-pVTZ} \\ \hline CO & CCD$^a$ & 1.135 & ( 0.010 ) & 326.7 & ( -85.2 ) & 2248 & ( -104 ) & 1.104 & ( 0.031 ) & 307.0 & ( -55.1 ) & 2419 & ( -265 ) \\ & CCSD$^a$ & 1.138 & ( 0.007 ) & 329.0 & ( -87.4 ) & 2212 & ( -68 ) & 1.123 & ( 0.013 ) & 334.0 & ( -82.1 ) & 2264 & ( -110 ) \\ & CCSD(T)$^a$ & 1.145 & & 241.5 & & 2144 & & 1.136 & & 251.9 & & 2154 & \\ & pCCD & 1.117 & ( 0.027 ) & 224.8 & ( 16.7 ) & 2295 & ( -151 ) & 1.116 & ( 0.020 ) & 237.9 & ( 14.1 ) & 2317 & ( -163 ) \\ & CCD$^b$ & 1.132 & ( 0.013 ) & 262.8 & ( -21.3 ) & 2246 & ( -102 ) & 1.125 & ( 0.011 ) & 288.8 & ( -36.9 ) & 2257 & ( -103 ) \\ & CCSD$^b$ & 1.137 & ( 0.007 ) & 264.0 & ( -22.4 ) & 2219 & ( -75 ) & 1.125 & ( 0.011 ) & 289.7 & ( -37.8 ) & 2238 & ( -84 ) \\ & fpLCCD & 1.134 & ( 0.011 ) & 229.6 & ( 11.9 ) & 2171 & ( -26 ) & 1.129 & ( 0.006 ) & 244.3 & ( 7.6 ) & 2218 & ( -64 ) \\ & fpLCCSD & 1.143 & ( 0.001 ) & 236.2 & ( 5.3 ) & 2105 & ( 39 ) & 1.130 & ( 0.006 ) & 253.4 & ( -1.5 ) & 2155 & ( -2 ) \\ & fpCCD & 1.133 & ( 0.011 ) & 235.1 & ( 6.5 ) & 2178 & ( -33 ) & 1.128 & ( 0.008 ) & 254.3 & ( -2.4 ) & 2228 & ( -74 ) \\ & fpCCSD & 1.137 & ( 0.007 ) & 240.3 & ( 1.2 ) & 2178 & ( -34 ) & 1.128 & ( 0.008 ) & 260.7 & ( -8.8 ) & 2218 & ( -64 ) \\ & exp.\cite{NIST, carbon_monoxide, irikura2007} & 1.128 & ( 0.016 ) & 255.8 & ( -14.3 ) & 2170 & ( -26 ) & 1.128 & ( 0.007 ) & 255.8 & ( -3.9 ) & 2170 & ( -16 ) \\ \hline \end{tabular} \end{tiny} \end{table} \subsection{Diatomic molecules} As a first test case, we selected seven diatomic molecules containing only main-group elements (namely B, C, N, O, and F), which feature complex electronic structures driven by quasi-degenerate 2p orbitals. The dissociation process of these main-group dimers highlights the disparate interplay of nondynamic/static and dynamic electron correlation. 
Specifically, the dissociation of the fluorine dimer, despite its single bond, cannot be reliably modeled with the gold standard of quantum chemistry, that is, CCSD(T), as it produces an unphysical shape of the potential energy surface (PES).\cite{li1998,bytautas2007,evangelista2007,bytautas2009,geminals_lcc_2015} A similar outcome is observed for the CCSD model in the case of the nitrogen dimer. This molecule is known as one of the most challenging (diatomic) systems due to the triple bonding mechanism, which requires higher excitation operators in the theoretical model (as well as high angular momenta in the basis set) to reach spectroscopic accuracy.~\cite{deegan1994,li2001,li2008,wilson2011,csontos2013,seniority-cc-2014,bytautas2015,pccd-PTX} The PESs of the carbon dimer, cyano cation, and boron nitride feature energetically close electron configurations and avoided crossings. \cite{martin1992,peterson1995,wulfov1996,peterson1997,abrams2004,sherril2005,shi2011,booth2011,wilson2011,wouters2014,pccd-2014-jcp,geminals_lcc_2015,sharma2015,gulania2018} We also included the carbon monoxide molecule and the boron monoxide cation in our test set, which are less affected by electron correlation effects.\cite{peterson1995,geminals_lcc_2015} The spectroscopic constants obtained for our diatomic test set by various coupled-cluster models (and basis sets) are summarized in Tables \ref{tab:3-1} and \ref{tab:3-2}. For the F$_2$ molecule, coupled cluster methods with an RHF reference result in potential energy wells that are noticeably too deep. Specifically, CCD(RHF) considerably overestimates the dissociation energy D$_{\rm e}$ with an error of 44.5 kcal/mol. The inclusion of single excitations only slightly improves D$_{\rm e}$. Changing the reference determinant (and hence the molecular orbital basis) to the pCCD-optimized solution results in PES shapes similar to those obtained in conventional CCSD calculations.
Thus, switching to the pCCD reference determinant allows us to obtain CCSD accuracy by only solving the CCD equations. The DMRG-tailored and pCCD-tailored CCD and CCSD flavours substantially improve the accuracy of the predicted potential energy well depths and vibrational frequencies. While single-reference CCSD overestimates the potential energy well depth by 28--40 kcal/mol, this error drops to 8--17 kcal/mol if the ansatz is tailored by multireference wave functions. A similar gain is observed for vibrational frequencies, where tailoring decreases the difference with respect to MRCI results below the basis set error threshold. The equilibrium bond lengths are estimated with the highest accuracy by the DMRG-tCCSD method in the pCCD-optimized orbital basis and differ from reference MRCI results by 0.007--0.015 \AA{}, which is less than the difference between the MRCI result and the experimental value. DMRG-tCCSD in the RHF orbital basis provides similar results for correlation-consistent basis sets (cc-pVDZ, cc-pVTZ), but the errors increase when augmented basis sets (aug-cc-pVDZ, aug-cc-pVTZ) are used. For the singlet ground-state N$_2$ molecule, all tested coupled cluster flavours achieve good accuracy in the near-equilibrium region and predict accurate bond lengths and vibrational frequencies. However, most coupled-cluster methods fail to accurately describe the region with a stretched N--N bond and the vicinity of dissociation. This also holds for pCCD-tailored coupled cluster methods, where the linearized models feature particularly large divergences in the dissociation limit. This problem is cured by both DMRG-tCCSD models in the RHF and pCCD orbital basis, which are able to (indirectly) include triple excitations in the active space spanned by the 2p orbitals.
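The tabulated r$_\mathrm{e}$ and $\omega_\mathrm{e}$ values are extracted from pointwise PES energies. As a minimal, self-contained sketch of such an extraction (the harmonic model data, the assumed force constant, and the quartic polynomial fit below are illustrative choices only, not the fitting protocol actually used for Tables \ref{tab:3-1} and \ref{tab:3-2}):

```python
import numpy as np

# Hypothetical PES samples: a purely harmonic model whose force constant and
# minimum roughly mimic a CO-like potential (illustration only; the actual
# PES points and fitting protocol behind the tables may differ).
k_model = 2739.0                      # assumed force constant [kcal/mol/A^2]
r_model = 1.128                       # assumed equilibrium distance [Angstrom]
r = np.linspace(1.03, 1.23, 11)
E = 0.5 * k_model * (r - r_model) ** 2

# Quartic polynomial fit around the minimum, one common way to extract
# spectroscopic constants from pointwise energies.
p = np.poly1d(np.polyfit(r, E, 4))
roots = np.roots(p.deriv())
real_roots = roots[np.abs(roots.imag) < 1e-8].real
r_e = real_roots[(real_roots > r.min()) & (real_roots < r.max())][0]
k = p.deriv(2)(r_e)                   # second derivative d2E/dr2 at the minimum

# Harmonic frequency omega_e = sqrt(k/mu) / (2 pi c), here for 12C-16O.
KCAL_MOL = 4184.0 / 6.02214076e23     # J per (kcal/mol)
AMU = 1.66053907e-27                  # kg
C_CM = 2.99792458e10                  # speed of light [cm/s]
mu = 12.0 * 15.995 / (12.0 + 15.995) * AMU   # reduced mass of CO
k_si = k * KCAL_MOL / 1e-10 ** 2      # force constant in N/m
omega_e = np.sqrt(k_si / mu) / (2.0 * np.pi * C_CM)   # [cm^-1], ~2170 here
```

Note that D$_\mathrm{e}$ is the well depth relative to the dissociated fragments and therefore cannot be obtained from such a local fit alone; it requires the asymptotic energies as well.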
\begin{figure}[tpb] \includegraphics[width=\textwidth]{figure1} \caption{The potential energy curves for the two singlet states of the carbon dimer using the cc-pVDZ basis set compared to DMRG\cite{wouters2014} reference data. The left upper panel corresponds to the ground state pCCD orbitals, while the left bottom panel was obtained for the excited-state pCCD orbitals. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ stands for pCCD-optimized orbitals. The right panels show two sets of (valence) pCCD orbitals for the C$_2$ molecule with occupation numbers in parentheses.\label{fig:3}} \end{figure} The carbon dimer, cyano cation, boron nitride, and boron monoxide cation are isoelectronic diatomic species where a balanced description of electron correlation effects is required to obtain accurate energetics and spectroscopic properties.\cite{martin1992,peterson1995,pccd-2014-jcp,geminals_lcc_2015,gulania2018} The challenge in modelling their PESs arises from the near-degeneracies of the valence orbitals and the energetic proximity of electronic configurations.\cite{wulfov1996,peterson1997,abrams2004,sherril2005,shi2011,booth2011,wilson2011,wouters2014,sharma2015,geminals_lcc_2015} For example, the two lowest $^1\Sigma_g^+$ states of C$_2$ are affected by an avoided crossing since the $\ket{1\pi_u^4}$ configuration is favoured around the equilibrium bond length while the $\ket{3 \sigma_g^2 1\pi_u^2}$ configuration dominates in the ground state wave function in the dissociation limit. The CN$^+$ and BN molecules exhibit an analogous bonding pattern to the carbon dimer~\cite{murrell1979,wulfov1996,peterson1997}, while BO$^+$ features a more single-reference character.\cite{peterson1997} For the carbon dimer and cyano cation, we obtained two sets of pCCD-optimized orbitals corresponding to pCCD solutions dominated by either the $\ket{1\pi^4}$ or $\ket{3 \sigma^2 1\pi_u^2}$ determinant.
The latter causes symmetry breaking of the orbitals since the pCCD model does not describe two equivalent $\ket{3 \sigma_g^2 1\pi_u^2}$ determinants on an equal footing. The adiabatic excitation energies of these two states are presented in Table \ref{tab:1}, while the PESs of the C$_2$ molecule obtained with the two different sets of pCCD-optimized orbitals are presented in Figure~\ref{fig:3}. Only for DMRG-tCCSD were we unable to optimize a different PES using the second set of pCCD orbitals, as we obtained the same total energies for both orbital sets. Our results are consistent with DMRG(12,28)\cite{wouters2014} and FCIQMC\cite{booth2011} reference data, but the PESs around the avoided-crossing region are not smooth in the case of fpCCD, fpCCSD, fpLCCD, and fpLCCSD. \begin{table}[t] \caption{Adiabatic excitation energies [eV] between the singlet ground state and the first excited state of the C$_2$ and CN$^+$ molecules. The acronym in parentheses indicates the molecular orbital basis employed in calculations. The superscript $b$ denotes that calculations have been performed in the pCCD-optimized orbital basis. } \label{tab:1} \centering \begin{tabular}{lllll} \hline & \multicolumn{2}{c}{C$_2$} & \multicolumn{2}{c}{CN$^+$}\\ & cc-pVDZ & cc-pVTZ & cc-pVDZ & cc-pVTZ \\\hline CCD$^b$ & 1.064 & 0.997 & 1.567 & 1.567 \\ CCSD$^b$& 1.080 & 1.031 & 1.578 & 1.578 \\ fpLCCD & 1.811 & 1.792 & 2.643 & 2.643 \\ fpLCCSD & 1.819 & 1.774 & 2.669 & 2.669 \\ fpCCD & 1.652 & 1.591 & 2.426 & 2.425 \\ fpCCSD & 1.601 & 1.549 & 2.533 & 2.533 \\ DMRG(12,28)\cite{wouters2014} & 1.913 & & & \\\hline \end{tabular} \end{table} The electron-pair cluster (geminal) amplitudes $t_{ii}^{aa}$ of the C$_2$ molecule obtained with DMRG-tCCSD, pCCD, and (conventional) CCSD are presented in Figure~\ref{fig:4}.
The pair-amplitudes for other diatomic molecules are presented in the SI.$^\mathsection$ For the near-equilibrium geometry, the largest cluster amplitudes (in absolute value) are found for the space spanned by six 2p-type orbitals. Two additional orbitals participate in the bond-breaking process. In the canonical RHF basis, the electron-pair amplitudes of the conventional CCD and CCSD model agree well with the DMRG-tCCSD reference amplitudes in the equilibrium region, but disagree in the vicinity of dissociation. For the pCCD-optimized orbital basis, the differences between pCCD and DMRG electron-pair amplitudes are small and rather of quantitative nature, while the general structure of the wave function is similar in both cases. The CCD and CCSD electron-pair amplitudes (in both the canonical RHF and pCCD orbital basis) tend to substantially differ from the pCCD and DMRG-tailored amplitudes, which is particularly pronounced for the BO$^+$ molecule in the dissociation limit. \begin{figure}[tb] \centering \includegraphics[width=0.8\textwidth]{figure2} \caption{Mean errors with respect to accurate theoretical reference data (MRCISD+Q for BN, BO$^+$, CN$^+$, MRCI for C$_2$, CN$^+$, N$_2$, F$_2$, and CCSD(T) for CO molecule) including the standard deviation (black lines) determined for all fitted spectroscopic constants for our test set of main-group diatomics (BN, BO$^+$, C$_2$, CN$^+$, N$_2$, F$_2$, CO). The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ indicates pCCD-optimized orbitals. 
See also Table \ref{tab:err} for the definition of the corresponding error measures.} \label{fig:errors} \end{figure} \begin{table}[t] \caption{Error measures determined for all fitted spectroscopic parameters (equilibrium bond lengths, potential energy well depths, and harmonic vibrational frequencies) of our test set containing main-group diatomics (BN, BO$^+$, C$_2$, CN$^+$, CO, N$_2$, F$_2$) with respect to accurate multireference methods (MRCISD+Q for BN, BO$^+$, CN$^+$, MRCI for C$_2$, CN$^+$, N$_2$, F$_2$, and CCSD(T) for the CO molecule). ME: mean error, MAE: mean absolute error, RMSD: root mean square deviation. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ indicates pCCD-optimized orbitals.} \label{tab:err} \begin{scriptsize} \begin{tabular}{llrrrrrrrrr} \hline & & \multicolumn{3}{c}{ $\delta$r$_\mathrm{e}$ [\r{A}] } & \multicolumn{3}{c}{ $\delta$D$_\mathrm{e}$ [$\rm \frac{kcal}{mol}$] } & \multicolumn{3}{c}{ $\delta\omega_\mathrm{e}$ [cm$^{-1}$] } \\ & & ME & MAE & RMSD & ME & MAE & RMSD & ME & MAE & RMSD \\ \hline \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{cc-pVDZ}}} & CCD$^b$ & -0.021 & 0.021 & 0.031 & 2.3 & 9.1 & 11.3 & 112 & 112 & 141 \\ & CCSD$^b$ & -0.021 & 0.021 & 0.030 & 11.8 & 14.0 & 19.3 & 130 & 130 & 159 \\ & fpLCCD & -0.015 & 0.015 & 0.018 & 4.9 & 13.8 & 18.9 & 68 & 68 & 79 \\ & fpLCCSD & -0.010 & 0.012 & 0.017 & 5.0 & 7.1 & 8.9 & 129 & 142 & 227 \\ & fpCCD & -0.017 & 0.017 & 0.020 & -0.9 & 7.7 & 9.2 & 76 & 76 & 87 \\ & fpCCSD & -0.015 & 0.015 & 0.018 & 1.8 & 7.4 & 9.3 & 92 & 92 & 107 \\ & DMRG-tCCSD$^a$ & -0.002 & 0.005 & 0.005 & 8.6 & 9.0 & 10.2 & 40 & 49 & 69 \\ & DMRG-tCCSD$^b$ & -0.007 & 0.010 & 0.016 & 2.5 & 11.2 & 12.5 & 11 & 40 & 50 \\ \hline \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{cc-pVTZ}}} & CCD$^b$ & -0.016 & 0.016 & 0.018 & 17.2 & 18.2 & 22.2 & 108 & 108 & 127 \\ & CCSD$^b$ & -0.017 & 0.017 & 0.018 & 18.1 & 18.7 & 22.8 
& 112 & 112 & 118 \\ & fpLCCD & -0.009 & 0.012 & 0.015 & 6.3 & 13.3 & 16.4 & 16 & 98 & 121 \\ & fpLCCSD & 0.001 & 0.014 & 0.016 & 11.6 & 11.8 & 17.7 & 18 & 51 & 64 \\ & fpCCD & -0.011 & 0.014 & 0.017 & 6 & 9.8 & 13.6 & 47 & 87 & 98 \\ & fpCCSD & -0.013 & 0.016 & 0.019 & 9.3 & 13.2 & 15.7 & 64 & 95 & 112 \\ & DMRG-tCCSD$^a$ & -0.005 & 0.008 & 0.010 & 8.4 & 11.3 & 11.7 & 78 & 82 & 104 \\ & DMRG-tCCSD$^b$ & -0.011 & 0.017 & 0.024 & 2.9 & 13.8 & 17.1 & 40 & 68 & 83 \\\hline \end{tabular} {\raggedright \\ ME (mean error) $= \frac{1}{N} \sum_i^N (x_i^{\mathrm{method}} - x_i^{\mathrm{reference}}) $ \\ MAE (mean absolute error) $= \frac{1}{N} \sum_i^N |x_i^{\mathrm{method}} - x_i^{\mathrm{reference}}| $ \\ RMSD (root mean square deviation) $= \sqrt{\frac{1}{N} \sum_i^N (x_i^{\mathrm{method}} - x_i^{\mathrm{reference}})^2} $ \par} \end{scriptsize} \end{table} \begin{figure}[tpb] \includegraphics[width=\columnwidth]{figure3} \caption{Electron-pair amplitudes represented as a matrix for the C$_2$ molecule obtained by CCD, CCSD, pCCD, and DMRG-tCCSD. The horizontal axis denotes occupied orbitals, while the vertical axis stands for virtual orbitals. The value of each amplitude is color-coded. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ indicates pCCD-optimized orbitals.} \label{fig:4} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figure4} \caption{The potential energy curves for the CO molecule (cc-pVTZ basis set). The superscript $b$ indicates pCCD-optimized orbitals.} \label{fig:6} \end{figure} Figure~\ref{fig:6} summarizes the dissociation path of the CO molecule obtained by various CC methods. Although the reference wave functions (RHF and pCCD) provide smooth Morse potential-shaped plots, fpLCCSD diverges in the region between 1.5--2.3 \r{A}, while CCSD in the RHF orbital basis fails in predicting a smooth PES. 
Inspecting the nature and composition of the pCCD wave function, we observe that the pCCD solution becomes multireference for those bond lengths. When the C--O distance reaches 1.48 \r{A}, the geminal coefficients for two dominant excited determinants are about $-0.1$. They gradually increase in absolute value when the molecule is stretched, reaching values of $-0.95$ and $-0.8$ for a distance of 3.17 \r{A}. The failure of fpLCCSD may be further attributed to the linearized nature of the coupled-cluster amplitude equations, which may feature divergences and poles in their solutions.~\cite{geminals_lcc_2017} Figure~\ref{fig:errors} and Table~\ref{tab:err} show the mean errors, including the standard deviation, of the fitted spectroscopic constants (equilibrium distance, dissociation energy, and harmonic vibrational frequencies) for our test set of main-group diatomic molecules with respect to reference data. All studied coupled cluster methods, including CCD and CCSD with pCCD-optimized orbitals, tailored CCSD approaches with both RHF and pCCD-optimized orbitals, fpCCD, and fpCCSD, overestimate equilibrium bond lengths. In general, tailoring selected coupled-cluster amplitudes reduces the errors in all spectroscopic constants. DMRG-tCC is the most accurate method in predicting dissociation energies (in the cc-pVTZ basis), but lies in between fpCC and fpLCC quality for equilibrium distances and vibrational frequencies. Furthermore, pCCD orbitals are slightly better in the dissociation region and thus allow us to predict more precise D$_e$ and $\omega$ values. In general, DMRG-tCCSD(RHF) provides the smallest errors only for equilibrium bond lengths, while its performance strongly deteriorates for dissociation energies and harmonic vibrational frequencies. Moreover, the addition of single excitations in the fpCCSD model does not improve the accuracy compared to fpCCD.
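The error statistics in Figure~\ref{fig:errors} and Table~\ref{tab:err} follow the standard definitions given in the table footnote; a short sketch of the three measures (the input values below are made up for illustration and are not taken from the tables):

```python
import numpy as np

def error_measures(method, reference):
    """ME, MAE, and RMSD of method values against reference values."""
    d = np.asarray(method, float) - np.asarray(reference, float)
    me = d.mean()                    # mean error (signed)
    mae = np.abs(d).mean()           # mean absolute error
    rmsd = np.sqrt((d ** 2).mean())  # root mean square deviation
    return me, mae, rmsd

# Illustrative equilibrium bond lengths [Angstrom]; hypothetical values.
method_re = [1.265, 1.137, 1.099]
reference_re = [1.252, 1.128, 1.101]
me, mae, rmsd = error_measures(method_re, reference_re)
```

The signed ME reveals systematic over- or underestimation, while MAE and RMSD quantify the typical and outlier-weighted error magnitudes, respectively, which is why all three are reported together.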
Comparing the mean errors and mean absolute errors suggests that the error measures provided by tailored coupled cluster approaches are not as systematic as those obtained by single-reference coupled cluster theory, where equilibrium bond lengths are systematically too large and vibrational frequencies are systematically underestimated. Finally, our statistical analysis of the bond-breaking process of selected main-group diatomics suggests that the results of the linearized pCCD-tCC models are of similar quality to those of their non-linear counterparts. However, pCCD-tailored wave functions (restricted to at most double excitations) are insufficient to accurately describe the dissociation pathway of molecules featuring a triple bond. As expected, the fpLCCSD method may show unphysical features in the PESs such as divergences or poles. These divergences can be cured by including non-linear terms in the CC amplitude equations, resulting in the fpCCSD framework. \subsection{The dissociation pathway of the Cr$_2$ dimer} The chromium dimer and its dissociation process are widely used as a benchmark problem in quantum chemistry primarily because its complicated electronic structure and formal sextuple bond pose a remarkable challenge for present-day quantum chemical methods.\cite{cr2-1993, cr2_2009, dmrg-caspt2, cr2_2011, cr2_2016, vancoillie2016} Even excitations of fourth order are insufficient to accurately capture electron correlation effects within a single-reference framework.\cite{dmrg2015} Multi-reference methods represent the most robust and trustworthy approach to study the electronic structure of Cr$_2$. For instance, Veis \textit{et al.} report that DMRG-tCCSD effectively describes the Cr$_2$ energy around the equilibrium geometry and outperforms the conventional CCSDTQ method in terms of total energies.\cite{dmrg-tcc-2016} Figure~\ref{fig:7} summarizes the PESs obtained with different flavours of conventional and tailored coupled-cluster theory.
pCCD provides a smooth curve, albeit overestimating the equilibrium bond length (r$_e=1.881$ \AA). Note that pCCD does not converge for bond lengths r$_{\rm Cr-Cr} > 2.5$ \AA{}. CCD$^b$ and CCSD$^b$ yield potential energy curves with too short bond lengths (r$_e=1.533$ \AA) and too large slopes. fpCCD reproduces the proper shape of the PES around the equilibrium and predicts a bond length of 1.641 \AA{}, which is closest to the experimental value of 1.6788 \AA. However, pCCD, and thus pCCD-tailored CC theory, fails in the description of the dissociation path and dissociation limit. We should note that we encountered convergence problems for stretched Cr--Cr distances in all frozen-pair coupled-cluster calculations, while the linearized pCCD-tailored coupled cluster flavours (both fpLCCD and fpLCCSD) fail due to divergences near the equilibrium. Thus, those points are not shown in Figure~\ref{fig:7}. DMRG(12,12) calculations in the RHF orbital basis do not yield a bound PES and diverge for r$_{\rm Cr-Cr} > 1.8$ \AA{}, while DMRG(12,12) in the pCCD orbital basis provides an unphysical PES similar to CASSCF(12,12).\cite{cr2_2011, cr2_2016, vancoillie2016} Note that the poor performance of minimal active space calculations for Cr$_2$, that is, 12 electrons in 12 orbitals, is a well-known problem in computational chemistry~\cite{cr2_2009, dmrg-caspt2, cr2_2016, cr2_2011, vancoillie2016}. All investigated DMRG-tailored coupled cluster methods perform well in the near-equilibrium region. However, for $r_{\rm Cr-Cr} > 1.9$ \AA{}, we observe overcorrelation in the RHF orbital basis, while for pCCD-optimized orbitals the DMRG-tCCSD equations do not converge. Thus, DMRG-tCCSD exploiting the minimal active space in DMRG calculations cannot be used to model the dissociation of the sextuple bond of the chromium dimer.
Further investigations are required to determine whether extending the active space can cure the problems related to overcorrelation, divergences, and convergence difficulties in the CC amplitude equations. \begin{table} \caption{Computed spectroscopic constants of the chromium dimer for various coupled cluster methods and atomic basis sets. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ indicates pCCD-optimized orbitals.} \begin{tabular}{lllll} & r$_\mathrm{e}$ [\r{A}] & $\omega_\mathrm{e}$ [cm$^{-1}$] & r$_\mathrm{e}$ [\r{A}] & $\omega_\mathrm{e}$ [cm$^{-1}$] \\ \hline & \multicolumn{2}{l}{cc-pVDZ} & \multicolumn{2}{l}{cc-pVTZ} \\ \hline pCCD & 1.884 & 264 & 1.881 & 244 \\ CCSD$^b$& 1.542 & 950 & 1.535 & 943 \\ fpCCD & 1.676 & 397 & 1.637 & 425 \\ fpCCSD & 1.651 & 525 & 1.623 & 567 \\ DMRG(12,12)-tCCSD$^a$ & 1.584 & 785 & 1.576 & 790 \\ DMRG(12,12)-tCCSD$^b$ & 1.626 & 635 & 1.627 & 602 \\ exp. & 1.6788 \cite{bondybey1983} & 481 \cite{casey1993} & 1.679 \cite{bondybey1983} & 481 \cite{casey1993} \\ \hline \end{tabular} \end{table} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figure5} \caption{The potential energy curves of Cr$_2$ (cc-pVTZ basis set). The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ stands for pCCD orbitals. Note that DMRG(12,12)$^{\rm b}$ yields an unbound PES and is hence not shown here.} \label{fig:7} \end{figure} \subsection{Umbrella inversion of ammonia} Theoretical models have struggled for many years to obtain spectroscopic accuracy for the six vibrational modes of the NH$_3$ molecule. The main reason for this struggle is an inversion mode that is characterized by a high amplitude but a low frequency.
Since this system does not feature strong electron correlation and the energy converges fast with respect to the order of the cluster operator, single-reference coupled cluster methods are sufficient to approach spectroscopic accuracy.\cite{pastorczak2015} To assess the performance of various tailored coupled cluster flavours, we modeled the path of the conversion of the ground-state equilibrium pyramidal-shaped molecule to the planar complex. Furthermore, the accuracy of tCC theory is benchmarked against theoretical results rather than experimental ones due to the non-monotonic behavior of structural and spectroscopic properties with respect to the basis set.\cite{pesonen2001} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figure6} \caption{Ammonia umbrella inversion -- energy difference compared to CCSDT data (cc-pVTZ basis set). The type of orbitals used in DMRG calculations is denoted by a superscript, with $a$ standing for RHF orbitals and $b$ standing for pCCD orbitals.} \label{fig:ammonia} \end{figure} Figure~\ref{fig:ammonia} shows the potential energy curves for the umbrella inversion of the NH$_3$ complex, while Table \ref{tab:6} presents the equilibrium angles, barrier heights, and non-parallelity errors. Both DMRG featuring a small active space and orbital-optimized pCCD (that is, models capturing mostly static/nondynamic electron correlation) greatly overestimate the barrier height of the umbrella inversion process. Specifically, the small DMRG active space comprises six valence electrons distributed in six hybridized bonding and antibonding sp$^3$ orbitals. We also studied a CAS that was extended by two additional sp$^3$ orbitals that feature significant values of the single orbital entropy and orbital-pair mutual information. This overestimation originates from the fact that pCCD, DMRG(6,6), and DMRG(8,8) are insufficient to capture dynamical electron correlation effects.
Note that the PES obtained by DMRG(8,8)$^{\mathrm{b}}$ becomes unphysical around $\alpha=90^\circ$ and diverges. Thus, the corresponding value of D$_e$ is not shown in Table \ref{tab:6}. For DMRG-tCCSD, we observe that augmenting the active space deteriorates the accuracy of the results compared to CCSDT reference energies. Nonetheless, a large fraction of the missing dynamical correlation energy can be recovered by various tailored coupled cluster flavours. Most importantly, the differences in the shape of the PESs, the equilibrium angles, and the barrier height are small with respect to CCSDT reference data for all investigated tailored CC approaches and lie within chemical accuracy as long as dynamical correlation effects have been accounted for in the theoretical model. \begin{table*} \caption{Equilibrium angles and barrier heights for the umbrella inversion of NH$_3$. The difference with respect to CCSDT results is given in parentheses. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ indicates pCCD-optimized orbitals.} \label{tab:6} \begin{scriptsize} \begin{tabular}{lrrrrlrrrrl} \hline & \multicolumn{5}{c}{ cc-pVDZ } & \multicolumn{5}{c}{cc-pVTZ } \\ & \multicolumn{2}{c}{ $\alpha_{\rm e}$} & \multicolumn{2}{c}{ D$_e$ [$\rm \frac{kcal}{mol}$]} & NPE$^a$ [$\rm \frac{kcal}{mol}$] & \multicolumn{2}{c}{ $\alpha_{\rm e}$} & \multicolumn{2}{c}{ D$_e$ [$\rm \frac{kcal}{mol}$]} & NPE$^a$ [$\rm \frac{kcal}{mol}$] \\ \hline RHF & 67.1 & ( 1.1 ) & 7.9 & ( -0.7 ) & 1.9 & 68.1 & ( 1.0 ) & 6.4 & ( -0.4 ) & 1.7 \\ CCD & 66.1 & ( 0.2 ) & 8.4 & ( -0.2 ) & 0.4 & 67.4 & ( 0.4 ) & 6.5 & ( -0.4 ) & 0.9 \\ CCSD & 66.1 & ( 0.2 ) & 8.4 & ( -0.2 ) & 0.4 & 67.4 & ( 0.4 ) & 6.5 & ( -0.4 ) & 0.9 \\ CCSD(T) & 65.9 & ( 0.0 ) & 8.6 & ( 0.0 ) & 0.0 & 67.1 & ( 0.0 ) & 6.9 & ( 0.0 ) & 0.0 \\ CCSDT & 65.9 & ( 0.0 ) & 8.6 & ( 0.0 ) & & 67.1 & ( 0.0 ) & 6.9 & ( 0.0 ) & \\ pCCD & 66.7 & ( 0.8 ) &
10.0 & ( 1.4 ) & 1.6 & 67.9 & ( 0.8 ) & 8.9 & ( 2.0 ) & 2.1 \\ fpCCD & 66.5 & ( 0.6 ) & 8.6 & ( 0.0 ) & 0.5 & 67.6 & ( 0.5 ) & 6.6 & ( -0.3 ) & 1.3 \\ fpCCSD & 66.3 & ( 0.3 ) & 8.8 & ( 0.2 ) & 0.6 & 67.5 & ( 0.4 ) & 6.9 & ( 0.0 ) & 0.9 \\ fpLCCD & 66.3 & ( 0.4 ) & 8.6 & ( 0.0 ) & 0.7 & 67.6 & ( 0.5 ) & 6.5 & ( -0.3 ) & 1.2 \\ fpLCCSD & 66.2 & ( 0.3 ) & 8.8 & ( 0.2 ) & 0.5 & 67.4 & ( 0.3 ) & 6.9 & ( 0.1 ) & 0.7 \\ DMRG(6,6)$^{\mathrm{a}}$ & 68.1 & ( 2.2 ) & 8.7 & ( 0.1 ) & 2.2 & 66.8 & ( -0.3 ) & 6.8 & ( 0.0 ) & 1.1 \\ DMRG(6,6)-tCCSD$^{\mathrm{a}}$ & 67.2 & ( 1.3 ) & 8.4 & ( -0.2 ) & 0.3 & 67.2 & ( 0.1 ) & 6.6 & ( -0.3 ) & 0.7 \\ DMRG(6,6)$^{\mathrm{b}}$ & 65.8 & ( -0.1 ) & 10.9 & ( 2.2 ) & 0.6 & 65.7 & ( -1.4 ) & 9.0 & ( 2.2 ) & 2.3 \\ DMRG(6,6)-tCCSD$^{\mathrm{b}}$ & 65.8 & ( -0.1 ) & 8.5 & ( -0.1 ) & 0.2 & 66.1 & ( -1.0 ) & 6.4 & ( -0.5 ) & 0.8 \\ DMRG(8,8)$^{\mathrm{a}}$ & 65.4 & ( -0.6 ) & 9.7 & ( 1.0 ) & 1.0 & 67.2 & ( 0.1 ) & 7.2 & ( 0.4 ) & 0.6 \\ DMRG(8,8)-tCCSD$^{\mathrm{a}}$ & 65.4 & ( -0.5 ) & 8.7 & ( 0.0 ) & 0.0 & 67.1 & ( 0.0 ) & 6.6 & ( -0.3 ) & 0.6 \\ DMRG(8,8)$^{\mathrm{b}}$ & 66.5 & ( 0.5 ) & - & ( - ) & 2.5 & 68.0 & ( 0.9 ) & - & ( - ) & 2.8 \\ DMRG(8,8)-tCCSD$^{\mathrm{b}}$ & 65.6 & ( -0.3 ) & 8.4 & ( -0.2 ) & 0.4 & 67.4 & ( 0.3 ) & 6.1 & ( -0.8 ) & 1.4 \\ \hline \end{tabular} {\raggedright \\ $^a$ NPE (non-parallelity error) $= \max\limits_{\alpha}(|\Delta E_{\alpha}|) - \min\limits_{\alpha} (|\Delta E_{\alpha}|) $ and $\Delta E_{\alpha} = E^{\rm CC}_{\alpha}-E^{\rm CCSDT}_{\alpha}$ \par} \end{scriptsize} \end{table*} \subsection{Ethylene twist} By twisting the dihedral angle in the ethylene molecule, we can scrutinize the flexibility of various tCC models to describe varying degrees of strong and weak electron correlation effects. The ground state molecule in its equilibrium geometry features D$_{2h}$ symmetry and its electronic wave function is dominated by a single Slater determinant. 
The orbital interaction picture changes when one CH$_2$ group is rotated by 90$^\circ$ and the molecular point group is hence reduced to D$_{2d}$ symmetry. For this twisted geometry, the $\pi$ and $\pi^*$ orbitals become degenerate and multi-reference approaches are required to capture static/non-dynamical electron correlation effects. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figure7} \caption{Potential energy surfaces for the ethylene twist (cc-pVTZ basis set) obtained by various coupled cluster methods. The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ stands for pCCD orbitals.} \label{fig:8} \end{figure} Figure~\ref{fig:8} shows the potential energy curves of the ethylene torsion for various electronic structure methods. Specifically, RHF as well as single-reference CCD and CCSD (with canonical RHF orbitals) feature an unphysical cusp in the PES for a dihedral angle of 90$^\circ$. This cusp is a common problem encountered in the transition state when RHF orbitals are used together with some post-HF treatment that is not sufficiently accurate to account for (static/nondynamic and dynamic) electron correlation effects in a balanced way.\cite{musial2011} A smooth potential energy curve can be obtained either by including triple excitations, by complete active space calculations containing at least two electrons and two orbitals in the active space, or by pCCD-tailored coupled cluster approaches. Although limited to pair excitations, the pCCD model correctly predicts that both configurations with (occupied) $\pi$ and $\pi^*$ orbitals are degenerate. Therefore, the orbital-optimized pCCD model is sufficient to provide a smooth reaction pathway, that is, without a cusp. A qualitatively correct PES is also provided by DMRG(12,12) calculations for both a canonical RHF and orbital-optimized pCCD molecular orbital basis.
In general, all tCC flavours improve the shape of the PES and the barrier height for the twist without introducing unphysical effects as observed in standard single-reference approaches. \subsection{Automerization of cyclobutadiene} The cyclobutadiene molecule features a significant multi-reference character even in its rectangular equilibrium geometry. The self-automerization of cyclobutadiene is a process where the carbon atoms are rearranged so that the final geometry is equivalent to the initial one, albeit rotated by 90$^\circ$. During the self-automerization process, cyclobutadiene passes through a transition state where all C--C bonds have equal lengths and the HOMO and LUMO orbitals are exactly degenerate. For the square geometry, the symmetry breaking of the RHF wave function affects post-HF treatments.\cite{li2009} Specifically, single-reference coupled cluster methods tend to underestimate the weight of one of the two equivalent $\ket{\pi^1 \pi^{*1}}$ determinants associated with the C--C double bonds. Previous works demonstrate \cite{maksic2006, li2009, lyakh2011} that multireference approaches, like state-specific MRCCSD, outperform the so-called ``gold standard'' of quantum chemistry, CCSD(T). The selection of the reference wave function is, however, crucial since increasing the size of the active orbital space can deteriorate the quality of the results.\cite{pccd-2014-jctc} Table \ref{tab:cbt} summarizes the automerization barrier heights obtained from various conventional and unconventional electronic structure methods. The considerable difference in the barrier heights predicted by a perturbative treatment of triple excitations in CCSD(T) compared to full-T calculations indicates that triple excitations are important and a perturbative treatment is not sufficient. Thus, in order to achieve reliable results using single-reference coupled cluster methods, triple excitations must be included in the cluster operator.
Although it remains difficult to assess whether the CCSDT results are already converged with respect to the truncation order of the cluster operator as the differences between CCSD, CCSD(T), and CCSDT energies are significant, CCSDT provides a reliable description of the automerization process, yielding barrier heights that agree well with experimental results and MkCCSD reference calculations. In general, all tailored CC flavours restricted to at most double excitations overestimate the barrier height significantly, with predicted values lying up to 15 kcal/mol above the experimental range of 1.6--10 kcal/mol.\cite{cyclobutadiene-exp,cyclobutadiene-exp2} The best performance is observed for DMRG-tCCSD, which predicts a barrier height with an accuracy between that of CCSD(T) and CCSDT. Nonetheless, DMRG and DMRG-tCCSD feature a strong dependence on the choice of both the atomic basis set size and the molecular orbitals (differences amount to approximately 5--9 kcal/mol). The method- and basis-set-dependence is smallest for (localized) pCCD-optimized orbitals (less than 1 kcal/mol). Note that all pCCD-tCC methods provide similar barrier heights of approximately 23 kcal/mol. \begin{table} \caption{Barrier heights in kcal/mol for the automerization of cyclobutadiene obtained for various quantum chemistry methods and basis sets.
The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ stands for pCCD orbitals.} \label{tab:cbt} \begin{tabular}{lrr} \hline & cc-pVDZ & aug-cc-pVDZ \\\hline RHF & 28.5 & 27.2 \\ CCSD$^{\mathrm{a}}$ & 20.3 & 20.0 \\ CCSD(T)$^{\mathrm{a}}$ & 15.4 & 15.7 \\ CCSDT$^{\mathrm{a}}$ & 7.3 & 8.1 \\ pCCD & 23.8 & 23.1 \\ fpLCCD & 24.4 & 23.6 \\ fpLCCSD & 24.5 & 23.8 \\ fpCCD & 22.2 & 21.4 \\ fpCCSD & 22.4 & 21.6 \\ DMRG(20,20)$^{\mathrm{a}}$ & 10.3 & 19.0 \\ DMRG(20,20)-tCCSD$^{\mathrm{a}}$ & 11.1 & 14.9 \\ DMRG(20,20)$^{\mathrm{b}}$ & 15.7 & 16.2 \\ DMRG(20,20)-tCCSD$^{\mathrm{b}}$ & 16.8 & 17.2 \\ NEVPT2/CAS(20,16)\cite{pccd-2014-jctc} & 41.2 & \\ MkCCSD\cite{Bhaskaran-Nair2008} & 7.8 & \\ exp.\cite{cyclobutadiene-exp} & \multicolumn{2}{r}{ 1.6 -- 10} \\\hline \end{tabular} \end{table} \subsection{Benzene distortion} The equilibrium structure of the benzene molecule features D$_{6h}$ point group symmetry where all carbon--carbon bond lengths are equivalent. Some approximate wave function models (including orbital-optimized pCCD \cite{pccd-2014-jctc}) tend to distort this symmetry and thus break aromaticity due to the formation of three partial double bonds caused by the localization of the three $\pi$- and $\pi^*$-orbitals. Here, we will assess whether frozen-pair coupled cluster methods can restore the proper symmetry and hence aromaticity when imposed on top of the pCCD ansatz. For this purpose, we scrutinize the deformation pathway of benzene as depicted in Ref. \citenum{Pierrefixe2008}, where the distortion is determined by the difference between the equilibrium and distorted angle defined by a carbon atom, the center of the molecule, and the neighboring carbon atom. \begin{figure}[tb] \centering \includegraphics[width=0.5\columnwidth]{figure8} \caption{Potential energy surfaces for the distortion of benzene using different tailored and untailored coupled cluster methods (cc-pVTZ basis set).
The superscript $a$ denotes that calculations have been performed in the canonical RHF orbital basis, while the superscript $b$ stands for pCCD orbitals.} \label{fig:benzene} \end{figure} Figure~\ref{fig:benzene} shows the PESs of the deformation process obtained from pCCD, pCCD-tLCC, pCCD-tCC, DMRG-tCC, and CCSD exploiting pCCD-optimized orbitals. Similar to conventional CCSD (with a canonical RHF reference), which does not break the symmetry of benzene, CCSD, frozen-pair linearized CC, and DMRG-tCCSD calculations performed on pCCD-optimized orbitals correctly predict the aromatic structure (D$_{6h}$ point group symmetry) as the minimum geometry. Although the DMRG(6,6) potential energy curves differ, we obtained almost exactly the same energies in DMRG(6,6)-tCCSD calculations exploiting both the pCCD and canonical RHF orbital basis. In contrast, the fpCCD and fpCCSD methods predict the minimum energy to be associated with a slightly deformed molecular structure, thus breaking the aromaticity of benzene. Specifically, fpCCD and fpCCSD predict that the most stable molecular structure is distorted by approximately 0.2--0.3$^{\circ}$ and lies about 0.09--0.17 kcal/mol lower in energy than the D$_{6h}$ structure. Most importantly, the predicted symmetry breaking is reduced compared to the pure pCCD ansatz, where the distortion angle reaches 1.8$^{\circ}$ and the energy difference between the aromatic and distorted structures amounts to 3.01 kcal/mol. \section{Conclusions} \label{section:conclusions} In this work, we scrutinized the performance of various coupled cluster methods tailored by pCCD and DMRG wave functions for various small- and medium-sized molecules (F$_2$, C$_2$, CN$^+$, BN, BO$^+$, CO, Cr$_2$, ammonia, ethylene, cyclobutadiene, and benzene) and analyzed the limitations of these computationally cheap and conceptually simple approaches.
Specifically, all conventional and DMRG-tailored CC calculations were performed for two different molecular orbital basis sets and thus reference determinants: (i) delocalized canonical RHF orbitals and (ii) localized pCCD-optimized molecular orbitals. The active spaces in all DMRG calculations were further selected using a black-box orbital-selection protocol based on the single-orbital entropy and orbital-pair mutual information. If pCCD-optimized orbitals are chosen as the molecular orbital basis in DMRG calculations, an entropy- and/or correlation-based active space can furthermore be selected straightforwardly and cheaply from the complete set of orbitals as these measures are readily available at the end of an orbital-optimized pCCD calculation~\cite{dmrg-2016}. This feature greatly facilitates and speeds up DMRG calculations with respect to selecting proper active orbital spaces, DMRG convergence, and computational cost (that is, the maximal bond dimension $m$ required for energy convergence). Tailored CC approaches noticeably improve the quality of spectroscopic properties, that is, equilibrium bond lengths, harmonic vibrational frequencies, and dissociation energies, in comparison to their single-reference counterparts CCD and CCSD. Although restricting the cluster operator to at most double excitations is insufficient to reach the accuracy of the more expensive CCSDT, MRCC, or MRCI methods, the mean errors in spectroscopic constants can be reduced to 0.005 \AA{} for bond lengths, 2.5 kcal/mol for dissociation energies, and 40 cm$^{-1}$ for harmonic vibrational frequencies. Thus, tailored coupled cluster theory constitutes a promising alternative to the conventional CCSD approach if the inclusion of triple and higher excitation operators is computationally too demanding.
Furthermore, for F$_2$, C$_2$, CN$^+$, BN, BO$^+$, CO, ammonia, and ethylene, we obtained congruent coupled cluster electron-pair amplitudes and spectroscopic constants for both DMRG- and pCCD-tailored approaches. This indicates that electron-pair amplitudes are qualitatively well described by the rather simple and cheap pCCD approach. Statistically, the performance of DMRG-tCCSD in predicting reliable spectroscopic constants lies between fpCCSD and its linearized version fpLCCSD, while fpCC(S)D and fpLCC(S)D typically provide results of similar accuracy (like relative energy differences and spectroscopic constants). The major drawback of linearized fpCC methods is that divergences and poles can be encountered when solving the set of (linearized) amplitude equations. These divergences and poles disappear if non-linear terms are added to the amplitude equations, thus resulting in fpCC-type methods. Finally, the matrix product state ansatz optimized by the DMRG algorithm is a more flexible reference wave function for externally-corrected CC methods than pCCD as it correctly describes the dissociation of the triple bond of the nitrogen dimer and does not break down in the region of avoided crossings. The performance of DMRG-tCC is, however, dependent on the choice of the molecular orbital basis: while for some cases (like equilibrium bond lengths, automerization of cyclobutadiene, benzene distortion) delocalized canonical RHF orbitals yield results that agree better with reference data (both from experiment and multireference calculations), DMRG-tCC exploiting localized pCCD-optimized orbitals performs better for others (like vibrational frequencies, dissociation energies, Cr$_2$). However, none of the studied tailored CC methods was able to predict the correct barrier height for the automerization of cyclobutadiene or to reliably describe the complete potential energy surface of the chromium dimer.
Most likely, full triple excitations are required to reach chemical accuracy for such challenging systems. \begin{acknowledgement} A.~L.~and K.~B.~acknowledge financial support from the National Science Centre, Poland (SONATA BIS 5 Grant No.~2015/18/E/ST4/00584). A.~L. acknowledges funding from the Interdisciplinary Doctoral School \textit{Academia Copernicana}. M.~M. and \"{O}.~L.~acknowledge financial support from the Hungarian National Research, Development and Innovation Office (Grant Nos. K120569 and K134983), the Hungarian Quantum Technology National Excellence Program (Grant No.~2017-1.2.1-NKP-2017-00001), and the Hungarian Quantum Information National Laboratory (QNL). \"{O}.~L.~acknowledges financial support from the Alexander von Humboldt foundation. Calculations have been carried out using resources provided by the Wroclaw Centre for Networking and Supercomputing (http://wcss.pl), Grant No.~412. The development of the DMRG libraries was supported by the Center for Scalable and Predictive methods for Excitation and Correlated phenomena (SPEC), which is funded from the Computational Chemical Sciences Program by the U.S. Department of Energy (DOE), at Pacific Northwest National Laboratory.
\end{acknowledgement} \providecommand{\latin}[1]{#1} \makeatletter \providecommand{\doi} {\begingroup\let\do\@makeother\dospecials \catcode`\{=1 \catcode`\}=2 \doi@aux} \providecommand{\doi@aux}[1]{\endgroup\texttt{#1}} \makeatother \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{152} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\unskip.}{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\unskip.}\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\unskip.}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[Coester(1958)]{Coester1958} Coester,~F. Bound states of a many-particle system. \emph{Nucl. Phys.} \textbf{1958}, \emph{7}, 421--424\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[{\v{C}}{\'\i}{\v{z}}ek(1966)]{Cizek1966} {\v{C}}{\'\i}{\v{z}}ek,~J. On the correlation problem in atomic and molecular systems. calculation of wavefunction components in ursell type expansion using quantum field theoretical methods. \emph{J.~Chem.~Phys.} \textbf{1966}, \emph{45}, 4256--4266\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[{\v{C}}{\'\i}{\v{z}}ek and Paldus(1971){\v{C}}{\'\i}{\v{z}}ek, and Paldus]{Cizek1971} {\v{C}}{\'\i}{\v{z}}ek,~J.; Paldus,~J. Correlation problems in atomic and molecular systems {III}. 
Rederivation of the coupled-pair many-electron theory using the traditional quantum chemical methods. \emph{Int. J. Quantum Chem.} \textbf{1971}, \emph{5}, 359--379\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Paldus \latin{et~al.}(1972)Paldus, {\v{C}}{\'\i}{\v{z}}ek, and Shavitt]{Paldus1972} Paldus,~J.; {\v{C}}{\'\i}{\v{z}}ek,~J.; Shavitt,~I. Correlation problems in atomic and molecular systems. {IV}. Extended coupled-pair many-electron theory and its application to the {BH$_3$} molecule. \emph{Phys. Rev. A} \textbf{1972}, \emph{5}, 50--67\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bartlett(1981)]{Bartlett1981} Bartlett,~R.~J. Many-body perturbation theory and coupled cluster theory for electron correlation in molecules. \emph{Annu. Rev. Phys. Chem.} \textbf{1981}, \emph{32}, 359--401\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Helgaker \latin{et~al.}(2000)Helgaker, J{\o}rgensen, and Olsen]{MEST} Helgaker,~T.; J{\o}rgensen,~P.; Olsen,~J. \emph{Molecular Electronic Structure Theory}; Wiley: Chichester, 2000\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Shavitt and Bartlett(2009)Shavitt, and Bartlett]{MBM} Shavitt,~I.; Bartlett,~R.~J. \emph{Many-body methods in chemistry and physics}; Cambridge University Press: New York, 2009\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bartlett and Musia\l{}(2007)Bartlett, and Musia\l{}]{Bartlett2007} Bartlett,~R.~J.; Musia\l{},~M. Coupled-cluster theory in quantum chemistry. \emph{Rev. Mod.
Phys.} \textbf{2007}, \emph{79}, 291--350\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Paldus and Li(1999)Paldus, and Li]{paldus1999} Paldus,~J.; Li,~X. \emph{Advances in Chemical Physics}; John Wiley and Sons, Ltd, 1999; pp 1--175\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Fan and Piecuch(2006)Fan, and Piecuch]{cc-piecuch} Fan,~P.-D.; Piecuch,~P. The usefulness of exponential wave function expansions employing one- and two-body cluster operators in electronic structure theory: the extended and generalized coupled-cluster methods. \emph{Adv. Quantum Chem} \textbf{2006}, \emph{51}, 1--57\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Henderson \latin{et~al.}(2014)Henderson, Scuseria, Dukelsky, Signoracci, and Duguet]{cc-scuseria} Henderson,~T.~M.; Scuseria,~G.~E.; Dukelsky,~J.; Signoracci,~A.; Duguet,~T. Quasiparticle coupled cluster theory for pairing interactions. \emph{Phys. Rev. C} \textbf{2014}, \emph{89}, 054305\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Paldus \latin{et~al.}(1984)Paldus, {\v{C}}{\'\i}{\v{z}}ek, and Takahashi]{externally_corrected_cc} Paldus,~J.; {\v{C}}{\'\i}{\v{z}}ek,~J.; Takahashi,~M. Approximate account of the connected quadruply excited clusters in the coupled-pair many-electron theory. 
\emph{Phys.~Rev.~A} \textbf{1984}, \emph{30}, 2193\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Piecuch \latin{et~al.}(1993)Piecuch, Tobo{\l}a, and Paldus]{externally_corrected_cc_2} Piecuch,~P.; Tobo{\l}a,~R.; Paldus,~J. Approximate account of connected quadruply excited clusters in multi-reference Hilbert space coupled-cluster theory. Application to planar H4 models. \emph{Chem.~Phys.~Lett.} \textbf{1993}, \emph{210}, 243--252\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Paldus \latin{et~al.}(1994)Paldus, Planelles, and Li]{externally_corrected_cc_3} Paldus,~J.; Planelles,~J.; Li,~X. Valence bond corrected single reference coupled cluster approach II. Application to PPP model systems. \emph{Theor. Chim. Acta} \textbf{1994}, \emph{89}, 33--57\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Stolarczyk(1994)]{externally_corrected_cc_4} Stolarczyk,~L.~Z. Complete active space coupled-cluster method. Extension of single-reference coupled-cluster method using the CASSCF wavefunction. \emph{Chem.~Phys.~Lett.} \textbf{1994}, \emph{217}, 1--6\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li \latin{et~al.}(1997)Li, Peris, Planelles, Rajadall, and Paldus]{externally_corrected_cc_5} Li,~X.; Peris,~G.; Planelles,~J.; Rajadall,~F.; Paldus,~J. Externally corrected singles and doubles coupled cluster methods for open-shell systems. 
\emph{J.~Chem.~Phys.} \textbf{1997}, \emph{107}, 90--98\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li and Paldus(1997)Li, and Paldus]{externally_corrected_cc_6} Li,~X.; Paldus,~J. {Reduced multireference CCSD method: An effective approach to quasidegenerate states}. \emph{J.~Chem.~Phys.} \textbf{1997}, \emph{107}, 6257--6269\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li and Paldus(1998)Li, and Paldus]{externally_corrected_cc_8} Li,~X.; Paldus,~J. {Dissociation of N$_2$ triple bond: a reduced multireference CCSD study}. \emph{Chem.~Phys.~Lett.} \textbf{1998}, \emph{286}, 145--154\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li \latin{et~al.}(2000)Li, Grabowski, Jankowski, and Paldus]{externally_corrected_cc_9} Li,~X.; Grabowski,~I.; Jankowski,~K.; Paldus,~J. Approximate coupled cluster methods: combined reduced multireference and almost--linear coupled cluster methods with singles and doubles. \emph{Adv.~Quantum ~Chem.} \textbf{2000}, \emph{36}, 231--251\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Veis \latin{et~al.}(2016)Veis, Antal\'{i}k, Brabec, Neese, \"{O}rs Legeza, and Pittner]{dmrg-tcc-2016} Veis,~L.; Antal\'{i}k,~A.; Brabec,~J.; Neese,~F.; \"{O}rs Legeza,; Pittner,~J. Coupled cluster method with single and double excitations tailored by Matrix Product State wave functions. \emph{J. Phys. Chem. 
Lett.} \textbf{2016}, \emph{7}, 4072--4078\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Faulstich \latin{et~al.}(2019)Faulstich, M\'{a}t\'{e}, Laestadius, Csirik, Veis, Antalik, Brabec, Schneider, Pittner, Kvaal, and \"{O}rs Legeza]{dmrg-tcc-2019} Faulstich,~F.~M.; M\'{a}t\'{e},~M.; Laestadius,~A.; Csirik,~M.~A.; Veis,~L.; Antalik,~A.; Brabec,~J.; Schneider,~R.; Pittner,~J.; Kvaal,~S.; \"{O}rs Legeza, Numerical and theoretical aspects of the {DMRG-TCC} method exemplified by the nitrogen dimer. \emph{J.~Chem.~Theory~Comput.} \textbf{2019}, \emph{15}, 2206--2220\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[M\"{o}rchen \latin{et~al.}(2020)M\"{o}rchen, Freitag, and Reiher]{dmrg-tcc-2020} M\"{o}rchen,~M.; Freitag,~L.; Reiher,~M. Tailored coupled cluster theory in varying correlation regimes. \emph{J.~Chem.~Phys.} \textbf{2020}, \emph{153}, 244113\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[White(1992)]{white} White,~S.~R. Density matrix formulation for quantum renormalization groups. \emph{Phys.~Rev.~Lett.} \textbf{1992}, \emph{69}, 2863--2866\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[{U. Schollw{\"o}ck}(2005)]{dmrg-4} {U. Schollw{\"o}ck}, The density-matrix renormalization group. \emph{Rev. Mod. Phys.} \textbf{2005}, \emph{77}, 259--315\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Marti and Reiher(2010)Marti, and Reiher]{dmrg-5} Marti,~K.~H.; Reiher,~M. 
The density matrix renormalization group algorithm in quantum chemistry. \emph{Z. Phys. Chem.} \textbf{2010}, \emph{224}, 583--599\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Chan and Sharma(2011)Chan, and Sharma]{dmrg-6} Chan,~G. K.-L.; Sharma,~S. The density matrix renormalization group in quantum chemistry. \emph{Annu. Rev. Phys. Chem.} \textbf{2011}, \emph{62}, 465--481\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wouters and {Van Neck}(2014)Wouters, and {Van Neck}]{dmrg-7} Wouters,~S.; {Van Neck},~D. The density matrix renormalization group for ab initio quantum chemistry. \emph{Eur. Phys. J. D} \textbf{2014}, \emph{68}, 272\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Szalay \latin{et~al.}(2015)Szalay, Pfeffer, Murg, Barcza, Verstraete, Schneider, and Legeza]{dmrg-8} Szalay,~S.; Pfeffer,~M.; Murg,~V.; Barcza,~G.; Verstraete,~F.; Schneider,~R.; Legeza,~{\"O}. Tensor product methods and entanglement optimization for ab initio quantum chemistry. \emph{Int. J. Quantum Chem.} \textbf{2015}, \emph{115}, 1342--1391\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Yanai \latin{et~al.}(2015)Yanai, Kurashige, Mizukami, Chalupsk\'{y}, Lan, and Saitow]{dmrg-9} Yanai,~T.; Kurashige,~Y.; Mizukami,~W.; Chalupsk\'{y},~J.; Lan,~T.~N.; Saitow,~M. Density matrix renormalization group for ab initio calculations and associated dynamic correlation methods: A review of theory and applications. \emph{Int. J. 
Quantum Chem.} \textbf{2015}, \emph{115}, 283--299\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Paldus \latin{et~al.}(1971)Paldus, {\v{C}}{\'\i}{\v{z}}ek, and Sengupta]{geminal1971} Paldus,~J.; {\v{C}}{\'\i}{\v{z}}ek,~J.; Sengupta,~S. Geminal localization in the separated-pair $\pi$-electronic model of benzene. \emph{J.~Chem.~Phys.} \textbf{1971}, \emph{55}, 2452--2462\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Paldus \latin{et~al.}(1972)Paldus, Sengupta, and {\v{C}}{\'\i}{\v{z}}ek]{geminal1972} Paldus,~J.; Sengupta,~S.; {\v{C}}{\'\i}{\v{z}}ek,~J. Diagrammatical method for geminals. {II}. {A}pplications. \emph{J.~Chem.~Phys.} \textbf{1972}, \emph{57}, 652--666\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Goddard \latin{et~al.}(1973)Goddard, {Dunning Jr.}, Hunt, and Hay]{gvb-1973} Goddard,~W.~A.; {Dunning Jr.},~T.~H.; Hunt,~W.~J.; Hay,~P.~J. Generalized valence bond description of bonding in low-lying states of molecules. \emph{Acc. Chem. Res.} \textbf{1973}, \emph{6}, 368--376\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Surj{\'a}n(1999)]{surjan1999} Surj{\'a}n,~P.~R. \emph{Correlation and localization}; Springer Berlin Heidelberg: Berlin, Heidelberg, 1999; pp 63--88\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Rassolov(2002)]{apsg-2002} Rassolov,~V.~A. A geminal model chemistry. 
\emph{J.~Chem.~Phys.} \textbf{2002}, \emph{117}, 5978--5987\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Rassolov and Xu(2007)Rassolov, and Xu]{geminals2007} Rassolov,~V.~A.; Xu,~F. Geminal model chemistry. {IV}. {V}ariational and size consistent pure spin states. \emph{J.~Chem.~Phys.} \textbf{2007}, \emph{127}, 044104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Surj{\'a}n \latin{et~al.}(2012)Surj{\'a}n, Szabados, Jeszenszki, and Zoboki]{surjan2012} Surj{\'a}n,~P.~R.; Szabados,~{\'A}.; Jeszenszki,~P.; Zoboki,~T. Strongly orthogonal geminals: size-extensive and variational reference states. \emph{J. Math. Chem.} \textbf{2012}, \emph{50}, 534--551\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Limacher \latin{et~al.}(2013)Limacher, Ayers, Johnson, {De Baerdemacker}, {Van Neck}, and Bultinck]{pccd-2013-limacher} Limacher,~P.~A.; Ayers,~P.~W.; Johnson,~P.~A.; {De Baerdemacker},~S.; {Van Neck},~D.; Bultinck,~P. A new mean-field method suitable for strongly correlated electrons: computationally facile antisymmetric products of nonorthogonal geminals. \emph{J.~Chem.~Theory~Comput.} \textbf{2013}, \emph{9}, 1394--1401\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ellis \latin{et~al.}(2013)Ellis, Martin, and Scuseria]{scuseria2013} Ellis,~J.~K.; Martin,~R.~L.; Scuseria,~G.~E. On pair functions for strong correlations. 
\emph{J.~Chem.~Theory~Comput.} \textbf{2013}, \emph{9}, 2857--2869\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2014)Boguslawski, Tecmer, Ayers, Bultinck, {De Baerdemacker}, and {Van Neck}]{pccd-2014-prb} Boguslawski,~K.; Tecmer,~P.; Ayers,~P.~W.; Bultinck,~P.; {De Baerdemacker},~S.; {Van Neck},~D. {Efficient description of strongly correlated electrons with mean-field cost}. \emph{Phys. Rev. B} \textbf{2014}, \emph{89}, 201106\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2014)Boguslawski, Tecmer, Limacher, Johnson, Ayers, Bultinck, {De Baerdemacker}, and {Van Neck}]{pccd-2014-jcp} Boguslawski,~K.; Tecmer,~P.; Limacher,~P.~A.; Johnson,~P.~A.; Ayers,~P.~W.; Bultinck,~P.; {De Baerdemacker},~S.; {Van Neck},~D. Projected seniority-two orbital optimization of the antisymmetric product of one-reference orbital geminal. \emph{J.~Chem.~Phys.} \textbf{2014}, \emph{140}, 214114\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Tecmer \latin{et~al.}(2014)Tecmer, Boguslawski, Johnson, Limacher, Chan, Verstraelen, and Ayers]{pccd-2014-jpca} Tecmer,~P.; Boguslawski,~K.; Johnson,~P.~A.; Limacher,~P.~A.; Chan,~M.; Verstraelen,~T.; Ayers,~P.~W. Assessing the accuracy of new geminal-based approaches. 
\emph{J.~Phys.~Chem.~A} \textbf{2014}, \emph{118}, 9058--9068\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2014)Boguslawski, Tecmer, Bultinck, {De Baerdemacker}, {Van Neck}, and Ayers]{pccd-2014-jctc} Boguslawski,~K.; Tecmer,~P.; Bultinck,~P.; {De Baerdemacker},~S.; {Van Neck},~D.; Ayers,~P.~W. Nonvariational orbital optimization techniques for the {AP1roG} wave function. \emph{J.~Chem.~Theory~Comput.} \textbf{2014}, \emph{10}, 4873--4882\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Henderson \latin{et~al.}(2014)Henderson, Dukelsky, Scuseria, Signoracci, and Duguet]{pccd-2014-prc} Henderson,~T.~M.; Dukelsky,~J.; Scuseria,~G.~E.; Signoracci,~A.; Duguet,~T. Quasiparticle coupled cluster theory for pairing interactions. \emph{Phys. Rev. C} \textbf{2014}, \emph{89}, 054305\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Stein \latin{et~al.}(2014)Stein, Henderson, and Scuseria]{pccd-2014-stein} Stein,~T.; Henderson,~T.~M.; Scuseria,~G.~E. Seniority zero pair coupled cluster doubles theory. \emph{J. Chem. Phys.} \textbf{2014}, \emph{140}, 214113\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Henderson \latin{et~al.}(2014)Henderson, Bulik, Stein, and Scuseria]{seniority-cc-2014} Henderson,~T.~M.; Bulik,~I.~W.; Stein,~T.; Scuseria,~G.~E. Seniority-based coupled cluster theory. 
\emph{J.~Chem.~Phys.} \textbf{2014}, \emph{141}, 244104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jeszenszki \latin{et~al.}(2014)Jeszenszki, Nagy, Zoboki, Szabados, and Surj\'{a}n]{apsg-pt-2014} Jeszenszki,~P.; Nagy,~P.~R.; Zoboki,~T.; Szabados,~A.; Surj\'{a}n,~P.~R. Perspectives of {APSG}-based multireference perturbation theories. \emph{Int. J. Quantum Chem.} \textbf{2014}, \emph{114}, 1048--1052\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bytautas \latin{et~al.}(2015)Bytautas, Scuseria, and Ruedenberg]{bytautas2015} Bytautas,~L.; Scuseria,~G.~E.; Ruedenberg,~K. Seniority number description of potential energy surfaces: Symmetric dissociation of water, {N$_2$}, {C$_2$}, and {Be$_2$}. \emph{J.~Chem.~Phys.} \textbf{2015}, \emph{143}, 094105\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski and Ayers(2015)Boguslawski, and Ayers]{geminals_lcc_2015} Boguslawski,~K.; Ayers,~P.~W. Linearized coupled cluster correction on the antisymmetric product of 1-reference orbital geminals. \emph{J. Chem. Theory Comput.} \textbf{2015}, \emph{11}, 5252--5261\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Henderson \latin{et~al.}(2015)Henderson, Bulik, and Scuseria]{pccd-2015} Henderson,~T.~M.; Bulik,~I.~W.; Scuseria,~G.~E. Pair extended coupled cluster doubles. \emph{J. Chem. 
Phys.} \textbf{2015}, \emph{142}, 214116\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Garza \latin{et~al.}(2016)Garza, Bulik, Alencar, Sun, Perdew, and Scuseria]{erpa-2016} Garza,~A.~J.; Bulik,~I.~W.; Alencar,~A. G.~S.; Sun,~J.; Perdew,~J.~P.; Scuseria,~G.~E. Combinations of coupled cluster, density functionals, and the random phase approximation for describing static and dynamic correlation, and van der {Waals} interactions. \emph{Mol. Phys.} \textbf{2016}, \emph{114}, 997--1018\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Limacher(2016)]{geminals-2016} Limacher,~P.~A. A new wavefunction hierarchy for interacting geminals. \emph{J.~Chem.~Phys.} \textbf{2016}, \emph{145}, 194102\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Nowak \latin{et~al.}(2019)Nowak, Tecmer, and Boguslawski]{Nowak2019} Nowak,~A.; Tecmer,~P.; Boguslawski,~K. {Assessing the accuracy of simplified coupled cluster methods for electronic excited states in f0 actinide compounds}. \emph{Phys.~Chem.~Chem.~Phys.} \textbf{2019}, \emph{21}, 19039--19053\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Brz\k{e}k \latin{et~al.}(2019)Brz\k{e}k, Boguslawski, Tecmer, and \.{Z}uchowski]{brzek2019} Brz\k{e}k,~F.; Boguslawski,~K.; Tecmer,~P.; \.{Z}uchowski,~P.~S. Benchmarking the accuracy of seniority-zero wave function methods for noncovalent interactions. 
\emph{J.~Chem.~Theory~Comput.} \textbf{2019}, \emph{15}, 4021--4035\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Marti \latin{et~al.}(2008)Marti, Ond\'{i}k, Moritz, and Reiher]{marti2008} Marti,~K.~H.; Ond\'{i}k,~I.~M.; Moritz,~G.; Reiher,~M. Density matrix renormalization group calculations on relative energies of transition metal complexes and clusters. \emph{J.~Chem.~Phys.} \textbf{2008}, \emph{128}, 014104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kurashige and Yanai(2011)Kurashige, and Yanai]{cr2_2011} Kurashige,~Y.; Yanai,~T. Second-order perturbation theory with a density matrix renormalization group self-consistent field reference function: Theory and application to the study of chromium dimer. \emph{J.~Chem.~Phys.} \textbf{2011}, \emph{135}, 094104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2012)Boguslawski, Marti, Legeza, and Reiher]{fenoDMRG} Boguslawski,~K.; Marti,~K.~H.; Legeza,~{\"O}.; Reiher,~M. {Accurate $ab$ $initio$ spin densities}. \emph{J.~Chem.~Theory~Comput.} \textbf{2012}, \emph{8}, 1970--1982\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kurashige \latin{et~al.}(2013)Kurashige, Chan, and Yanai]{kurashige2013} Kurashige,~Y.; Chan,~G. K.-L.; Yanai,~T. {Entangled quantum electronic wavefunctions of the Mn$_4$CaO$_5$ cluster in photosystem II}. 
\emph{Nature Chem.} \textbf{2013}, \emph{5}, 660--666\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Duperrouzel \latin{et~al.}(2015)Duperrouzel, Tecmer, Boguslawski, Barcza, Legeza, and Ayers]{Corinne_2015} Duperrouzel,~C.; Tecmer,~P.; Boguslawski,~K.; Barcza,~G.; Legeza,~{\"O}.; Ayers,~P.~W. A quantum informational approach for dissecting chemical reactions. \emph{Chem. Phys. Lett.} \textbf{2015}, \emph{621}, 160--164\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Zhao \latin{et~al.}(2015)Zhao, Boguslawski, Tecmer, Duperrouzel, Barcza, Legeza, and Ayers]{Zhao2015} Zhao,~Y.; Boguslawski,~K.; Tecmer,~P.; Duperrouzel,~C.; Barcza,~G.; Legeza,~{\"{O}}.; Ayers,~P.~W. {Dissecting the bond-formation process of d$^{10}$-metal--ethene complexes with multireference approaches}. \emph{Theor. Chem. Acc.} \textbf{2015}, \emph{134}, 120\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Freitag \latin{et~al.}(2015)Freitag, Knecht, Keller, Delcey, Aquilante, Pedersen, Lindh, Reiher, and Gonz\'{a}lez]{freitag2015} Freitag,~L.; Knecht,~S.; Keller,~S.~F.; Delcey,~M.~G.; Aquilante,~F.; Pedersen,~T.~B.; Lindh,~R.; Reiher,~M.; Gonz\'{a}lez,~L. Orbital entanglement and {CASSCF} analysis of the {Ru}--{NO} bond in a Ruthenium nitrosyl complex.
\emph{Phys.~Chem.~Chem.~Phys.} \textbf{2015}, \emph{17}, 14383--14392\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Freitag \latin{et~al.}(2015)Freitag, Knecht, Keller, Delcey, Aquilante, Pedersen, Lindh, Reiher, and Gonz\'{a}lez]{freitag2015errata} Freitag,~L.; Knecht,~S.; Keller,~S.~F.; Delcey,~M.~G.; Aquilante,~F.; Pedersen,~T.~B.; Lindh,~R.; Reiher,~M.; Gonz\'{a}lez,~L. Correction: Orbital entanglement and {CASSCF} analysis of the {Ru}--{NO} bond in a Ruthenium nitrosyl complex. \emph{Phys.~Chem.~Chem.~Phys.} \textbf{2015}, \emph{17}, 13769--13769\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Tecmer \latin{et~al.}(2014)Tecmer, Boguslawski, Legeza, and Reiher]{cuo_dmrg} Tecmer,~P.; Boguslawski,~K.; Legeza,~{\"O}.; Reiher,~M. Unravelling the quantum-entanglement effect of noble gas coordination on the spin ground state of {CUO}. \emph{Phys.~Chem.~Chem.~Phys.} \textbf{2014}, \emph{16}, 719--727\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2017)Boguslawski, R\'{e}al, Tecmer, Duperrouzel, Gomes, Legeza, Ayers, and Vallet]{boguslawski2017} Boguslawski,~K.; R\'{e}al,~F.; Tecmer,~P.; Duperrouzel,~C.; Gomes,~A. S.~P.; Legeza,~{\"O}.; Ayers,~P.~W.; Vallet,~V. On the multi-reference nature of plutonium oxides: {PuO$_2^{2+}$}, {PuO$_2$}, {PuO$_3$}, and {PuO$_2$(OH)$_2$}.
\emph{Phys.~Chem.~Chem.~Phys.} \textbf{2017}, \emph{19}, 4317--4329\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[\L{}achmanska \latin{et~al.}(2019)\L{}achmanska, Tecmer, Legeza, and Boguslawski]{ola-neptunyl-cci-2018} \L{}achmanska,~A.; Tecmer,~P.; Legeza,~{\"O}.; Boguslawski,~K. {Elucidating cation--cation interactions in neptunyl dications using multireference ab initio theory}. \emph{Phys.~Chem.~Chem.~Phys.} \textbf{2019}, \emph{21}, 744--759\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ziesche(1995)]{Ziesche1995} Ziesche,~P. Correlation strength and information entropy. \emph{Int. J. Quantum Chem.} \textbf{1995}, \emph{56}, 363--369\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Legeza and S\'olyom(2006)Legeza, and S\'olyom]{legeza2006} Legeza,~O.; S\'olyom,~J. Two-site entropy and quantum phase transitions in low-dimensional models. \emph{Phys.~Rev.~Lett.} \textbf{2006}, \emph{96}, 116401\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Rissler \latin{et~al.}(2006)Rissler, Noack, and White]{rissler2006} Rissler,~J.; Noack,~R.~M.; White,~S.~R. Recent developments in the {DMRG} applied to quantum chemistry. \emph{AIP Conference Proceedings} \textbf{2006}, \emph{816}, 186--197\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2012)Boguslawski, Tecmer, Legeza, and Reiher]{qi-2012} Boguslawski,~K.; Tecmer,~P.; Legeza,~{\"O}.; Reiher,~M.
Entanglement measures for single- and multireference correlation effects. \emph{J. Phys. Chem. Lett.} \textbf{2012}, \emph{3}, 3129--3135\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2013)Boguslawski, Tecmer, Barcza, Legeza, and Reiher]{dmrg-2013} Boguslawski,~K.; Tecmer,~P.; Barcza,~G.; Legeza,~{\"O}.; Reiher,~M. Orbital entanglement in bond-formation processes. \emph{J. Chem. Theory Comput.} \textbf{2013}, \emph{9}, 2959--2973\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Vedral(2014)]{vedral2014} Vedral,~V. Quantum entanglement. \emph{Nature Phys.} \textbf{2014}, \emph{10}, 256--258\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Keller \latin{et~al.}(2015)Keller, Boguslawski, Janowski, Reiher, and Pulay]{dmrg-2015} Keller,~S.; Boguslawski,~K.; Janowski,~T.; Reiher,~M.; Pulay,~P. Selection of active spaces for multiconfigurational wavefunctions. \emph{J. Chem. Phys.} \textbf{2015}, \emph{142}, 244104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski and Tecmer(2015)Boguslawski, and Tecmer]{dmrg-2015-ijqc} Boguslawski,~K.; Tecmer,~P. Orbital entanglement in quantum chemistry. \emph{Int. J. Quantum Chem.} \textbf{2015}, \emph{115}, 1289--1295\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2016)Boguslawski, Tecmer, and Legeza]{dmrg-2016} Boguslawski,~K.; Tecmer,~P.; Legeza,~O. 
Analysis of two-orbital correlations in wave functions restricted to electron-pair states. \emph{Phys. Rev. B} \textbf{2016}, \emph{94}, 155126\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Schilling and Schilling(2016)Schilling, and Schilling]{Schilling2016} Schilling,~C.; Schilling,~R. Number-parity effect for confined fermions in one dimension. \emph{Phys. Rev. A} \textbf{2016}, \emph{93}, 021601\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Stein and Reiher(2016)Stein, and Reiher]{autocas2016} Stein,~C.~J.; Reiher,~M. Automated selection of active orbital spaces. \emph{J. Chem. Theory Comput.} \textbf{2016}, \emph{12}, 1760\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Stein \latin{et~al.}(2016)Stein, von Burg, and Reiher]{autocas2016-2} Stein,~C.~J.; von Burg,~V.; Reiher,~M. The delicate balance of static and dynamic electron correlation. \emph{J. Chem. Theory Comput.} \textbf{2016}, \emph{12}, 3764\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski and Tecmer(2017)Boguslawski, and Tecmer]{ijqc-eratum} Boguslawski,~K.; Tecmer,~P. Erratum: {O}rbital entanglement in quantum chemistry. \emph{Int.~J.~Quantum~Chem.} \textbf{2017}, \emph{117}, e25455\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Stein and Reiher(2017)Stein, and Reiher]{autocas2017} Stein,~C.~J.; Reiher,~M. Measuring multi-configurational character by orbital entanglement. \emph{Mol.
Phys.} \textbf{2017}, \emph{115}, 2110\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ding and Schilling(2020)Ding, and Schilling]{ding2020} Ding,~L.; Schilling,~C. Correlation paradox of the dissociation limit: A quantum information perspective. \emph{J.~Chem.~Theory~Comput.} \textbf{2020}, \emph{16}, 4159--417\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ding \latin{et~al.}(2020)Ding, Mardazad, Das, Szalay, Schollw\"{o}ck, Zimbor\'{a}s, and Schilling]{ding2021} Ding,~L.; Mardazad,~S.; Das,~S.; Szalay,~S.; Schollw\"{o}ck,~U.; Zimbor\'{a}s,~Z.; Schilling,~C. Concept of orbital entanglement and correlation in quantum chemistry. \emph{arXiv:2006.00961 [quant-ph]} \textbf{2020}, \relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bofill and Pulay(1989)Bofill, and Pulay]{unocas1989} Bofill,~J.~M.; Pulay,~P. The unrestricted natural orbital-complete active space ({UNO}--{CAS}) method: An inexpensive alternative to the complete active space-self-consistent-field ({CAS}--{SCF}) method. \emph{J.~Chem.~Phys.} \textbf{1989}, \emph{90}, 3637--3646\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kozlowski and Pulay(1998)Kozlowski, and Pulay]{unocas1998} Kozlowski,~P.; Pulay,~P. The unrestricted natural orbital-restricted active space method: methodology and implementation. \emph{Theor. Chem. 
Acc.} \textbf{1998}, \emph{100}, 12--20\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sharma \latin{et~al.}(2018)Sharma, Truhlar, and Gagliardi]{abccas2018} Sharma,~P.; Truhlar,~D.~G.; Gagliardi,~L. Active space dependence in multiconfiguration pair-density functional theory. \emph{J.~Chem.~Theory~Comput.} \textbf{2018}, \emph{14}, 660--669\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bao and Truhlar()Bao, and Truhlar]{abccas2019} Bao,~J.~J.; Truhlar,~D.~G. Automatic active space selection for calculating electronic excitation energies based on high-spin unrestricted Hartree--Fock orbitals. \emph{J.~Chem.~Theory~Comput.} \relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Antal\'{i}k \latin{et~al.}(2019)Antal\'{i}k, Veis, Brabec, Demel, Legeza, and Pittner]{dmrg-tcc-2019-jcp} Antal\'{i}k,~A.; Veis,~L.; Brabec,~J.; Demel,~O.; Legeza,~O.; Pittner,~J. Toward the efficient local tailored coupled cluster approximation and the peculiar case of oxo-Mn(Salen). \emph{J.~Chem.~Phys.} \textbf{2019}, \emph{151}, 084112\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lang \latin{et~al.}(2020)Lang, Antal\'{i}k, Veis, Brandejs, Brabec, Legeza, and Pittner]{dmrg-tcc-2020-jctc} Lang,~J.; Antal\'{i}k,~A.; Veis,~L.; Brandejs,~J.; Brabec,~J.; Legeza,~{\"O}.; Pittner,~J. Near-linear scaling in DMRG-based tailored coupled clusters: an implementation of {DLPNO}-{TCCSD} and {DLPNO}-{TCCSD(T)}.
\emph{J.~Chem.~Theory~Comput.} \textbf{2020}, \emph{16}, 3028--3040\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Brandejs \latin{et~al.}(2020)Brandejs, Vi\v{s}\v{n}\'{a}k, Veis, Mat\'{e}, Legeza, and Pittner]{dmrg-tcc-2020-4c} Brandejs,~J.; Vi\v{s}\v{n}\'{a}k,~J.; Veis,~L.; Mat\'{e},~M.; Legeza,~O.; Pittner,~J. Toward {DMRG}-tailored coupled cluster method in the 4c-relativistic domain. \emph{J.~Chem.~Phys.} \textbf{2020}, \emph{152}, 174107\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Limacher \latin{et~al.}(2014)Limacher, Ayers, Johnson, {De Baerdemacker}, {Van Neck}, and Bultinck]{pccd-pt-2014} Limacher,~P.; Ayers,~P.; Johnson,~P.; {De Baerdemacker},~S.; {Van Neck},~D.; Bultinck,~P. Simple and inexpensive perturbative correction schemes for antisymmetric products of nonorthogonal geminals. \emph{Phys.~Chem.~Chem.~Phys.} \textbf{2014}, \emph{16}, 5061--5065\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Garza \latin{et~al.}(2015)Garza, Sousa~Alencar, and Scuseria]{garza2015actinide} Garza,~A.~J.; Sousa~Alencar,~A.~G.; Scuseria,~G.~E. Actinide chemistry using singlet-paired coupled cluster and its combinations with density functionals. \emph{J.~Chem.~Phys.} \textbf{2015}, \emph{143}, 244106\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski and Tecmer(2017)Boguslawski, and Tecmer]{pccd-PTX} Boguslawski,~K.; Tecmer,~P. Benchmark of dynamic electron correlation models for seniority-zero wavefunctions and their application to thermochemistry. 
\emph{J.~Chem.~Theory~Comput.} \textbf{2017}, \emph{13}, 5966--5983\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Brz\k{e}k \latin{et~al.}(2019)Brz\k{e}k, Boguslawski, Tecmer, and \.{Z}uchowski]{filip-jctc-2019} Brz\k{e}k,~F.; Boguslawski,~K.; Tecmer,~P.; \.{Z}uchowski,~P.~S. Benchmarking the accuracy of seniority-zero wave function methods for noncovalent interactions. \emph{J.~Chem.~Theory~Comput.} \textbf{2019}, \emph{15}, 4021--4035\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Tecmer \latin{et~al.}(2019)Tecmer, Boguslawski, Borkowski, \.{Z}uchowski, and K\k{e}dziera]{pawel-yb2} Tecmer,~P.; Boguslawski,~K.; Borkowski,~M.; \.{Z}uchowski,~P.~S.; K\k{e}dziera,~D. Modeling the electronic structures of the ground and excited states of the ytterbium atom and the ytterbium dimer: A modern quantum chemistry perspective. \emph{Int.~J.~Quantum~Chem.} \textbf{2019}, \emph{119}, e25983\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2011)Boguslawski, Marti, and Reiher]{dmrg-casci} Boguslawski,~K.; Marti,~K.~H.; Reiher,~M. Construction of CASCI-type wave functions for very large active spaces. \emph{J.~Chem.~Phys.} \textbf{2011}, \emph{134}, 224101\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kinoshita \latin{et~al.}(2005)Kinoshita, Hino, and Bartlett]{kinoshita2005} Kinoshita,~T.; Hino,~O.; Bartlett,~R.~J. Coupled-cluster method tailored by configuration interaction. 
\emph{J.~Chem.~Phys.} \textbf{2005}, \emph{123}, 074106\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2021)Boguslawski, Leszczyk, Nowak, Brz\k{e}k, \.{Z}uchowski, K\k{e}dziera, and Tecmer]{pybest-2021} Boguslawski,~K.; Leszczyk,~A.; Nowak,~A.; Brz\k{e}k,~F.; \.{Z}uchowski,~P.~S.; K\k{e}dziera,~D.; Tecmer,~P. {Pythonic Black-box Electronic Structure Tool (PyBEST). An open-source Python platform for electronic structure calculations at the interface between chemistry and physics}. \emph{Comput. Phys. Commun.} \textbf{2021}, \emph{264}, 107933\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski \latin{et~al.}(2020)Boguslawski, Leszczyk, Nowak, Brz\k{e}k, \.{Z}uchowski, K\k{e}dziera, and Tecmer]{pybest-1.0.0} Boguslawski,~K.; Leszczyk,~A.; Nowak,~A.; Brz\k{e}k,~F.; \.{Z}uchowski,~P.~S.; K\k{e}dziera,~D.; Tecmer,~P. Pythonic Black-box Electronic Structure Tool (PyBEST). 2020; \url{https://doi.org/10.5281/zenodo.3925278}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[{Dunning Jr.}(1989)]{basis-cc-pvdz} {Dunning Jr.},~T.~H. {Gaussian basis sets for use in correlated molecular calculations. I. The atoms boron through neon and hydrogen}. \emph{J. Chem. Phys.} \textbf{1989}, \emph{90}, 1007--1023\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kendall \latin{et~al.}(1992)Kendall, {Dunning Jr.}, and Harrison]{basis-cc-pvtz} Kendall,~R.~A.; {Dunning Jr.},~T.~H.; Harrison,~R.~J. Electron affinities of the first-row atoms revisited. {S}ystematic basis sets and wave functions.
\emph{J.~Chem.~Phys.} \textbf{1992}, \emph{96}, 6796--6806\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Woon and {Dunning Jr.}(1993)Woon, and {Dunning Jr.}]{basis-cc-pvqz} Woon,~D.~E.; {Dunning Jr.},~T.~H. Gaussian basis sets for use in correlated molecular calculations. {III}. {T}he atoms aluminum through argon. \emph{J.~Chem.~Phys.} \textbf{1993}, \emph{98}, 1358--1371\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Woon and Dunning(1995)Woon, and Dunning]{basis-cc-pcvdz} Woon,~D.~E.; Dunning,~T.~H. Gaussian basis sets for use in correlated molecular calculations. {V}. {C}ore-valence basis sets for boron through neon. \emph{J.~Chem.~Phys.} \textbf{1995}, \emph{103}, 4572--4585\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Coxon(1992)]{morse_potential} Coxon,~J.~A. The radial Hamiltonian operator for {LiH} {X$^1\Sigma^+$}. \emph{J. Mol. Spectrosc.} \textbf{1992}, \emph{152}, 274--282\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Werner \latin{et~al.}(2012)Werner, Knowles, Knizia, Manby, Sch\"{u}tz, and {et al.}]{molpro2012} Werner,~H.; Knowles,~P.~J.; Knizia,~G.; Manby,~F.~R.; Sch\"{u}tz,~M.; {et al.}, MOLPRO, version 2012.1, a package of \emph{ab initio} programs. 2012; see http://www.molpro.net\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Werner \latin{et~al.}(2012)Werner, Knowles, Knizia, Manby, and Sch\"{u}tz]{molpro2012_2} Werner,~H.; Knowles,~P.~J.; Knizia,~G.; Manby,~F.~R.; Sch\"{u}tz,~M.
Molpro: a general-purpose quantum chemistry program package. \emph{WIREs Comput. Mol. Sci.} \textbf{2012}, \emph{2}, 242--253\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Legeza \latin{et~al.}()Legeza, Veis, and Mosoni]{dmrg_ors} Legeza,~{\"O}.; Veis,~L.; Mosoni,~T. \textsc{QC-DMRG-Budapest}, a program for quantum chemical {DMRG} calculations. { \rm Copyright 2000--2018, HAS RISSPO Budapest}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Guo \latin{et~al.}(2016)Guo, Watson, Hu, Sun, and Chan]{cr2_2016} Guo,~S.; Watson,~M.~A.; Hu,~W.; Sun,~Q.; Chan,~G. K.-L. N-electron valence state perturbation theory based on a density matrix renormalization group reference function, with applications to the chromium dimer and a trimer model of poly($p$-phenylenevinylene). \emph{J.~Chem.~Theory~Comput.} \textbf{2016}, \emph{12}, 1583--1591\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li and Paldus(1998)Li, and Paldus]{li1998} Li,~X.; Paldus,~J. Reduced multireference coupled cluster method. {II}. {A}pplication to potential energy surfaces of {HF, F$_2$, and H$_2$O}. \emph{J.~Chem.~Phys.} \textbf{1998}, \emph{108}, 637--648\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bytautas \latin{et~al.}(2007)Bytautas, Nagata, Gordon, and Ruedenberg]{bytautas2007} Bytautas,~L.; Nagata,~T.; Gordon,~M.~S.; Ruedenberg,~K. {Accurate ab initio potential energy curve of F$_2$. I. Nonrelativistic full valence configuration interaction energies using the correlation energy extrapolation by intrinsic scaling method}.
\emph{J.~Chem.~Phys.} \textbf{2007}, \emph{127}, 164317\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Evangelista \latin{et~al.}(2007)Evangelista, Allen, and Schaefer]{evangelista2007} Evangelista,~F.~A.; Allen,~W.~D.; Schaefer,~H.~F. Coupling term derivation and general implementation of state-specific multireference coupled cluster theories. \emph{J.~Chem.~Phys.} \textbf{2007}, \emph{127}, 024102\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bytautas and Ruedenberg(2009)Bytautas, and Ruedenberg]{bytautas2009} Bytautas,~L.; Ruedenberg,~K. {Ab initio potential energy curve of {F$_2$}. {IV}. {T}ransition from the covalent to the van der Waals region: Competition between multipolar and correlation forces}. \emph{J.~Chem.~Phys.} \textbf{2009}, \emph{130}, 204101\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Deegan and Knowles(1994)Deegan, and Knowles]{deegan1994} Deegan,~M. J.~O.; Knowles,~P.~J. Perturbative corrections to account for triple excitations in closed and open shell coupled cluster theories. \emph{Chem. Phys. Lett.} \textbf{1994}, \emph{227}, 321--326\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li and Paldus(2001)Li, and Paldus]{li2001} Li,~X.; Paldus,~J. Energy versus amplitude corrected coupled-cluster approaches. {II}. {B}reaking the triple bond. 
\emph{J.~Chem.~Phys.} \textbf{2001}, \emph{115}, 5774--5783\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li and Paldus(2008)Li, and Paldus]{li2008} Li,~X.; Paldus,~J. Full potential energy curve for {N$_2$} by the reduced multireference coupled-cluster method. \emph{J.~Chem.~Phys.} \textbf{2008}, \emph{129}, 054104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jiang and Wilson(2011)Jiang, and Wilson]{wilson2011} Jiang,~W.; Wilson,~A.~K. Multireference composite approaches for the accurate study of ground and excited electronic states: {C$_2$, N$_2$, and O$_2$}. \emph{J.~Chem.~Phys.} \textbf{2011}, \emph{134}, 034101\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Csontos \latin{et~al.}(2013)Csontos, Nagy, Csontos, and K\'{a}llay]{csontos2013} Csontos,~B.; Nagy,~B.; Csontos,~J.; K\'{a}llay,~M. Dissociation of the fluorine molecule. \emph{J.~Phys.~Chem.~A} \textbf{2013}, \emph{117}, 5518--5528\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Martin \latin{et~al.}(1992)Martin, Lee, Scuseria, and Taylor]{martin1992} Martin,~J. M.~L.; Lee,~T.~J.; Scuseria,~G.~E.; Taylor,~P.~R. Ab initio multireference study of the {BN} molecule. \emph{J.~Chem.~Phys.} \textbf{1992}, \emph{97}, 6549--6556\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Peterson(1995)]{peterson1995} Peterson,~K.~A. 
Accurate multireference configuration interaction calculations on the lowest {$^1\Sigma^+$} and {$^3\Pi$} electronic states of {C$_2$}, {CN$^+$}, {BN}, and {BO$^+$}. \emph{J.~Chem.~Phys.} \textbf{1995}, \emph{102}, 262--277\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wulfov(1996)]{wulfov1996} Wulfov,~A.~L. Approximate full configuration interaction calculations of total energies, harmonic vibrational frequencies and equilibrium bond distances on {F$_2$, BF, C$_2$, CN$^+$ and NO$^+$} molecules in a {DZ + P} basis set. \emph{Chem. Phys. Lett.} \textbf{1996}, \emph{263}, 79--83\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Peterson \latin{et~al.}(1997)Peterson, Wilson, Woon, and {Dunning Jr.}]{peterson1997} Peterson,~K.~A.; Wilson,~A.~K.; Woon,~D.~E.; {Dunning Jr.},~T.~H. {Benchmark calculations with correlated molecular wave functions {XII}. {Core} correlation effects on the homonuclear diatomic molecules B$_2$--F$_2$}. \emph{Theor. Chem. Acc.} \textbf{1997}, \emph{97}, 251--259\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Abrams and Sherrill(2004)Abrams, and Sherrill]{abrams2004} Abrams,~M.~L.; Sherrill,~C.~D. Full configuration interaction potential energy curves for the {X$^1\Sigma_g^+$}, {B$^1\Delta_g$}, and {B'$\Sigma_g^+$} states of {C$_2$}: {A} challenge for approximate methods. \emph{J.~Chem.~Phys.} \textbf{2004}, \emph{121}, 9211--9219\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sherrill and Piecuch(2005)Sherrill, and Piecuch]{sherril2005} Sherrill,~C.~D.; Piecuch,~P. 
The {X$^1\Sigma_g^+$}, {B$^1\Delta_g$}, and {B'$\Sigma_g^+$} states of {C$_2$}: A comparison of renormalized coupled-cluster and multireference methods with full configuration interaction benchmarks. \emph{J.~Chem.~Phys.} \textbf{2005}, \emph{122}, 124104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Shi \latin{et~al.}(2011)Shi, Zhang, Sun, and Zhu]{shi2011} Shi,~D.; Zhang,~X.; Sun,~J.; Zhu,~Z. {MRCI study on spectroscopic and molecular properties of B$^1\Delta_g$ , B$^{'1}\Sigma^+_g$, C$^1\Pi_g$ , D$^1\Sigma^+_u$ , E$^1\Sigma^+_g$ and 1$^1\Delta_u$ electronic statesof the C$_2$ radical}. \emph{Mol. Phys.} \textbf{2011}, \emph{109}, 1453--1465\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Booth \latin{et~al.}(2011)Booth, Cleland, Thom, and Alavi]{booth2011} Booth,~G.~H.; Cleland,~D.; Thom,~A. J.~W.; Alavi,~A. Breaking the carbon dimer: The challenges of multiple bond dissociation with full configuration interaction quantum {M}onte {C}arlo methods. \emph{J.~Chem.~Phys.} \textbf{2011}, \emph{135}, 084104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wouters \latin{et~al.}(2014)Wouters, Poelmans, Ayers, and {Van Neck}]{wouters2014} Wouters,~S.; Poelmans,~W.; Ayers,~P.~W.; {Van Neck},~D. {CheMPS2: A free open-source spin-adapted implementation of the density matrix renormalization group for ab initio quantum chemistry}. \emph{Comput. Phys. Commun.} \textbf{2014}, \emph{185}, 1501--1514\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sharma(2015)]{sharma2015} Sharma,~S. 
A general non-abelian density matrix renormalization group algorithm with application to the {C$_2$} dimer. \emph{J.~Chem.~Phys.} \textbf{2015}, \emph{142}, 024107\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Gulania \latin{et~al.}(2019)Gulania, Jagau, and Krylov]{gulania2018} Gulania,~S.; Jagau,~T.-C.; Krylov,~A.~I. {EOM-CC} guide to {F}ock-space travel: the {C}$_2$ edition. \emph{Faraday Discuss.} \textbf{2019}, \emph{217}, 514--532\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Peterson \latin{et~al.}(1993)Peterson, Kendall, and {Dunning Jr}]{peterson1993} Peterson,~K.~A.; Kendall,~R.~A.; {Dunning Jr},~T.~H. {Benchmark calculations with correlated molecular wave functions. III. Configuration interaction calculations on first row homonuclear diatomics}. \emph{J.~Chem.~Phys.} \textbf{1993}, \emph{99}, 9790--9805\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Huber and Herzberg(1979)Huber, and Herzberg]{MolSpectraStruct} Huber,~K.; Herzberg,~G. \emph{Molecular spectra and molecular structure. {IV}. {C}onstants of diatomic molecules}; Springer US, 1979\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Irikura(2007)]{irikura2007} Irikura,~K.~K. Experimental vibrational zero-point energies: Diatomic molecules. \emph{J. Phys. Chem. Ref. Data} \textbf{2007}, \emph{36}, 389--397\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Shimanouchi(1977)]{shimanouchi1997} Shimanouchi,~T. {Tables of molecular vibrational frequencies. 
Consolidated volume II}. \emph{J. Phys. Chem. Ref. Data} \textbf{1977}, \emph{6}, 993--1102\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Douay \latin{et~al.}(1988)Douay, Nietmann, and Bernath]{douay1988} Douay,~M.; Nietmann,~R.; Bernath,~P.~F. The discovery of two new infrared electronic transitions of {C$_2$}: {B$^1\Delta_g$--A$^1\Pi_u$} and {B'$^1\Sigma_g^+$--A$^1\Pi_u$}. \emph{J. Mol. Spectrosc.} \textbf{1988}, \emph{131}, 261--271\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lovas \latin{et~al.}(2005)Lovas, Tiemann, Coursey, Kotochigova, Chang, Olsen, and Dragoset]{NIST} Lovas,~F.~J.; Tiemann,~E.; Coursey,~J.~S.; Kotochigova,~S.~A.; Chang,~J.; Olsen,~K.; Dragoset,~R.~A. NIST Standard Reference Database. 2005; \url{https://www.nist.gov/pml/diatomic-spectral-database}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Krupenie(1966)]{carbon_monoxide} Krupenie,~P.~H. \emph{The band spectrum of carbon monoxide}; U.S. Department of Commence National Bureau of Standards, 1966\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Murrell \latin{et~al.}(1979)Murrell, Al-Derzi, Tennyson, and Guest]{murrell1979} Murrell,~J.; Al-Derzi,~A.; Tennyson,~J.; Guest,~M. Potential energy curves of the lower states of {CN$^+$}. \emph{Mol. Phys.} \textbf{1979}, \emph{38}, 1755--1760\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Boguslawski and Tecmer(2017)Boguslawski, and Tecmer]{geminals_lcc_2017} Boguslawski,~K.; Tecmer,~P. 
Benchmark of dynamic electron correlation models for seniority-zero wave functions and their application to thermochemistry. \emph{J.~Chem.~Theory~Comput.} \textbf{2017}, \emph{13}, 5966--5983\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Casey and Leopold(1993)Casey, and Leopold]{cr2-1993} Casey,~S.~M.; Leopold,~D.~G. Negative ion photoelectron spectroscopy of {Cr$_2$}. \emph{J.~Phys.~Chem.} \textbf{1993}, \emph{97}, 816--830\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Brynda \latin{et~al.}(2009)Brynda, Gagliardi, and Roos]{cr2_2009} Brynda,~M.; Gagliardi,~L.; Roos,~B.~O. Analysing the chromium--chromium multiple bonds using multiconfigurational quantum chemistry. \emph{Chem. Phys. Lett.} \textbf{2009}, \emph{471}, 1--10\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kurashige and Yanai(2011)Kurashige, and Yanai]{dmrg-caspt2} Kurashige,~Y.; Yanai,~T. Second-order perturbation theory with density matrix renormalization group self-consistent field reference function: Theory and application to the study of chromium dimer. \emph{J.~Chem.~Phys.} \textbf{2011}, \emph{135}, 094104\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Vancoillie \latin{et~al.}(2016)Vancoillie, \r{A}ke Malmqvist, and Veryazov]{vancoillie2016} Vancoillie,~S.; \r{A}ke Malmqvist,~P.; Veryazov,~V. Potential energy surface of the chromium dimer re-re-revisited with multiconfigurational perturbation theory. 
\emph{J.~Chem.~Theory~Comput.} \textbf{2016}, \emph{12}, 1647--1655\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Olivares-Amaya \latin{et~al.}(2015)Olivares-Amaya, Hu, Nakatani, Sharma, Yang, and Chan]{dmrg2015} Olivares-Amaya,~R.; Hu,~W.; Nakatani,~N.; Sharma,~S.; Yang,~J.; Chan,~G. K.-L. The ab-initio density matrix renormalization group in practice. \emph{J.~Chem.~Phys.} \textbf{2015}, \emph{142}, 034102\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bondybey and English(1983)Bondybey, and English]{bondybey1983} Bondybey,~V.~E.; English,~J.~H. Electronic structure and vibrational frequency of Cr$_2$. \emph{Chem. Phys. Lett.} \textbf{1983}, \emph{94}, 443--447\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Casey and Leopold(1993)Casey, and Leopold]{casey1993} Casey,~S.~M.; Leopold,~D.~G. Negative ion photoelectron spectroscopy of chromium dimer. \emph{J.~Phys.~Chem.} \textbf{1993}, \emph{97}, 816--830\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Pastorczak and Pernal(2015)Pastorczak, and Pernal]{pastorczak2015} Pastorczak,~E.; Pernal,~K. {ERPA-APSG}: a computationally efficient geminal-based method for accurate description of chemical systems. \emph{Phys.~Chem.~Chem.~Phys.} \textbf{2015}, \emph{17}, 8622--8626\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Pesonen \latin{et~al.}(2001)Pesonen, Miani, and Halonen]{pesonen2001} Pesonen,~J.; Miani,~A.; Halonen,~L. 
New inversion coordinate for ammonia: Application to a CCSD(T) bidimensional potential energy surface. \emph{J.~Chem.~Phys.} \textbf{2001}, \emph{115}, 1243\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Musia\l{} \latin{et~al.}(2011)Musia\l{}, Perera, and Bartlett]{musial2011} Musia\l{},~M.; Perera,~A.; Bartlett,~R.~J. Multireference coupled-cluster theory: The easy way. \emph{J.~Chem.~Phys.} \textbf{2011}, \emph{134}, 114108\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Li and Paldus(2009)Li, and Paldus]{li2009} Li,~X.; Paldus,~J. Accounting for the exact degeneracy and quasidegeneracy in the automerization of cyclobutadiene via multireference coupled-cluster methods. \emph{J.~Chem.~Phys.} \textbf{2009}, \emph{131}, 114103\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Eckert-Maksi\'{c} \latin{et~al.}(2006)Eckert-Maksi\'{c}, Vazdar, Barbatti, Lischka, and Maksi\'{c}]{maksic2006} Eckert-Maksi\'{c},~M.; Vazdar,~M.; Barbatti,~M.; Lischka,~H.; Maksi\'{c},~Z.~B. Automerization reaction of cyclobutadiene and its barrier height: An ab initio benchmark multireference average quadratic coupled cluster study. \emph{J.~Chem.~Phys.} \textbf{2006}, \emph{125}, 064310\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lyakh \latin{et~al.}(2011)Lyakh, Lotrich, and Bartlett]{lyakh2011} Lyakh,~D.~I.; Lotrich,~V.~F.; Bartlett,~R.~J. The 'tailored' CCSD(T) description of the automerization of cyclobutadiene. \emph{Chem. Phys. 
Lett.} \textbf{2011}, \emph{501}, 166--171\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Whitman and Carpenter(1980)Whitman, and Carpenter]{cyclobutadiene-exp} Whitman,~D.~W.; Carpenter,~B.~K. Experimental evidence for nonsquare cyclobutadiene as a chemically significant intermediate in solution. \emph{J. Am. Chem. Soc} \textbf{1980}, \emph{102}, 4272--4274\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Carpenter(1983)]{cyclobutadiene-exp2} Carpenter,~B.~K. Heavy-atom tunneling as the dominant pathway in a solution-phase reaction? Bond shift in antiaromatic annulenes. \emph{J. Am. Chem. Soc.} \textbf{1983}, \emph{105}, 1700--1701\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bhaskaran-Nair \latin{et~al.}(2008)Bhaskaran-Nair, Demel, and Pittner]{Bhaskaran-Nair2008} Bhaskaran-Nair,~K.; Demel,~O.; Pittner,~J. Multireference state-specific {M}ukherjee’s coupled cluster method with noniterative triexcitations. \emph{J.~Chem.~Phys.} \textbf{2008}, \emph{129}, 184105\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Pierrefixe and Bickelhaupt(2008)Pierrefixe, and Bickelhaupt]{Pierrefixe2008} Pierrefixe,~S. C. A.~H.; Bickelhaupt,~F. M.~J. Aromaticity and antiaromaticity in 4-, 6-, 8-, and 10-membered conjugated hydrocarbon rings. \emph{J.~Phys.~Chem.~A} \textbf{2008}, \emph{112}, 12816--12822\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \end{mcitethebibliography} \end{document}
\section*{Introduction} We are first interested in singular analytic curves on a real analytic surface. We describe and then enumerate their topological configurations, first in the neighbourhood of the singularities and then globally. In a second step we adapt this topological description to singular algebraic curves on a smooth real algebraic surface. Finally, we bound the number of topological types of singular algebraic curves in the projective plane as a function of the degree. The topological characterizations are formulated in terms of combinatorial objects such as chord diagrams or graphs, and the enumeration results concern the nature of the generating series, their first terms, and the asymptotics of the sequences counting the combinatorial objects in question.

\subsection*{Topology and combinatorics of analytic singularities} In the neighbourhood of a singularity, an analytic curve is a union of branches passing through the singular point. Draw a small circle centred at the singularity: each of these arcs enters the disc, passes through the singularity and exits it, thus meeting the circle in exactly two points. The local structure of a plane analytic singularity is therefore encoded by a chord diagram: distinct points on the circle paired two by two, or equivalently a cyclic word in which every letter appears exactly twice. The diagrams obtained in this way, called analytic, form a tiny proportion of all chord diagrams. The first section recalls several of their characterizations proved in \cite{GhySim:2020}. In the second section, we develop a further description of these diagrams and exploit it to enumerate them by methods of analytic combinatorics. Here is one of the three main enumeration results for the local configurations.

\begin{thm} The ordinary generating series $A$ counting the number $A_n$ of rooted analytic diagrams is algebraic: $(z^3+z^2)A^6-z^2A^5-4zA^4+(8z+2)A^3-(4z+6)A^2+6A-2=0$. Hence $A_n\sim a_0\,n^{-\frac{3}{2}}\alpha^{-n}$ where $4<10^3a_0<5$ and $15.792395< \alpha^{-1}< 15.792396$. The first terms of the sequence $A_n$ are: $1,1,3,15,105,923,9417,105815,1267681,15875631,205301361$. \end{thm}

\subsection*{Global topology and enumeration of analytic curves} Globally, a real analytic curve on a compact connected analytic surface consists of smooth pieces joining its singularities. In the third section we first build a model for this structure, called a combinatorial curve: a graph embedded in the surface whose vertices, associated with the singularities, are decorated with their chord diagrams, and whose edges correspond to the smooth arcs between the singularities. We then show that there is no global obstruction restricting the topology of analytic curves. In other words, in an arbitrary analytic surface, every embedding of a graph of analytic chord diagrams comes from an analytic curve. The proof, which relies on results of Grauert and Cartan on the representability of analytic varieties, proceeds in two steps: the singularities are resolved by blow-ups, and then the smooth curves are approximated in the Whitney sense by analytic curves. This description finally allows us to combine the enumeration of analytic chord diagrams with that of Tutte's slicings \cite{Tutte:1962}, in order to count, as a function of the number of edges, the combinatorial curves of the sphere coming from singular analytic curves.

\begin{prop} The number of analytic combinatorial curves of the sphere with $c$ edges and $s$ indexed, marked vertices of respective sizes $k_1,\dots, k_s$ equals: \[ \frac{(c-1)!}{(c-s-2)!} \: \prod_{v=1}^{s}{k_v \binom{2k_v}{k_v}A_{k_v}} .\] \end{prop}

\subsection*{Global topology and enumeration of algebraic curves} In Section 4, we first explain some notions concerning algebraic curves and surfaces, in order to state the Nash-Tognoli theorem on the approximation of smooth curves by algebraic curves. By blowing up the singularities and approximating the curves, we determine which global obstructions restrict the topology of these algebraic curves. These obstructions, of a homological nature, depend on the algebraic structure of the underlying surface and vanish for a rational surface: on a sphere or a projective plane, every combinatorial curve whose vertices are decorated with analytic chord diagrams comes from an algebraic curve. Finally, combining the previous formula with the Pl\"ucker formulas, we establish an upper bound on the number of topological types of real algebraic curves of degree $d$ in the projective plane whose real locus is connected.

\begin{thm} The number $Cal_{\,\mathbb{R}\P^2\,}(d)$ of topological types of connected real algebraic curves of degree $d$ in the projective plane satisfies: \[Cal_{\,\mathbb{R}\P^2\,}(d)=o\left(12^{d^2}\right)\] \end{thm}

\paragraph{Acknowledgements.} This article is the fruit of my research carried out under the supervision of \'Etienne Ghys during the summer of 2018; it is rooted in the problems presented throughout his promenade \cite{Ghys:2017}. His availability, encouragement and exacting standards give me the desire to get to the bottom of things and allowed me to bring this work to maturity. The combinatorial discussions with Mickaël Maazoun guided and unblocked me on many occasions; I warmly thank him for his time and good humour. I am also indebted to David Coulette for optimizing my symbolic implementation of Lagrange inversion and for neatly plotting the algebraic curves of this document. Finally, I thank Patrick Popescu-Pampu for his careful proofreading, and Olivier Benoist for explaining the Nash-Tognoli theorem to me.

\section{Preliminary notions} Let us first summarize the topological description of a real analytic curve $\{f(x,y)=0\}$ of the plane $\mathbb{R}^2$ in the neighbourhood of a singularity. It is formulated in \cite{GhySim:2020} by means of the associated chord diagram and the interlacement graph of its branches (see also \cite{Ghys:2017}). We will then sketch the Cunningham factorization, used in the second section to deepen the combinatorial study of these two families of invariants: interlacement graphs and chord diagrams.

\subsection{Local topology: the interlacement graph of the branches} \paragraph{From the curve to the chord diagram.} Let $\gamma$ be a germ of real analytic curve in the oriented plane at a point $o$, neither empty nor reduced to a point, and let $\mathbb{D}_\epsilon$ be the disc of radius $\epsilon$ centred at $o$. For $\epsilon$ small enough all the intersections $\gamma \cap \mathbb{D}_\epsilon$ are homeomorphic, so the topology of $\gamma$ in a neighbourhood of the origin is well defined. More precisely, the curve $(\gamma,o)$ is locally homeomorphic to the cone over its intersection with a small circle $\gamma \cap \partial \mathbb{D}_\epsilon$, called its \emph{real halo}. Moreover, the curve $\gamma$ is the union of a finite number of branches, each branch giving rise to exactly two points of the halo. Consequently, the halo defines a \emph{chord diagram}: an even number $2c$ of distinct points on the oriented circle, paired two by two by chords as in Figure \ref{fig:cordiag}. Formally, it is a cyclic word on an alphabet of $c$ letters in which every letter appears exactly twice. The chord diagrams coming from germs of analytic curves will be called \emph{analytic}.

\begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{alg_singularity_cheat.pdf} \includegraphics[width=0.38\textwidth]{cordiag.pdf} \caption{\label{fig:cordiag} Representation of an analytic singularity by a chord diagram.} \end{figure}

The article \cite{GhySim:2020} gives a recursive way to decide the analyticity of a diagram. Suppose that a chord diagram $c$ contains one of the following patterns: \begin{itemize} \item an isolated chord: $\dots aa \dots $ \item a fork: $\dots b \dots aba $ \item a pair of true twins: $\dots ab \dots ab$ \item a pair of false twins: $\dots ab \dots ba$ \end{itemize} We call \emph{simplification} the diagram $c'$ obtained by removing the chord $a$ from such a pattern.

\begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{isolee_fourche_jumeaux.pdf} \caption{\label{fig:isole_fourche_jumeaux} Isolated chord, fork, true and false twins in the diagram.} \end{figure}

\begin{Thm}[Recursive description of analytic diagrams \cite{Ghys:2017}] The empty word is an analytic diagram. If a non-empty diagram contains no isolated chord, no fork and no pair of twins, then it is not analytic. If on the contrary it contains such a pattern, then it is analytic if and only if any one of its simplifications is. \end{Thm}

\begin{rem}(Analytic versus algebraic) The chord diagrams coming from algebraic singularities and from analytic singularities are the same. \end{rem}

\paragraph{From the chord diagram to the interlacement graph.} In this section, a \emph{graph} is the data of a set $V$ of vertices and a set $E$ of edges among the two-element subsets of $V$. There are therefore neither loops nor multiple edges. The set $V$ carries the \emph{graph metric} $d_G(v,v')$, with values in $\mathbb{N}\cup\{\infty \}$, computing the infimum of the number of edges over all paths between two vertices $v$ and $v'$. We write $N_G(v)$ for the set of neighbours of $v$, defined as the vertices at distance one from $v$. The \emph{interlacement graph} of a chord diagram (associated with a germ of curve or not) is built by choosing a vertex for each chord, and joining two distinct vertices whenever the associated chords intersect, that is, whenever their endpoints interlace in the cyclic order. The interlacement graph of the chord diagram on the right of Figure \ref{fig:cordiag} is shown on the right of Figure \ref{fig:acessibility}.

\begin{rem}[Interlacement graphs of chord diagrams] Two graphs are locally equivalent if one can pass from one to the other by a sequence of local transformations, each consisting in inverting the incidence relation around a given vertex. Bouchet showed in \cite{Bouchet:1994} that the interlacement graphs of chord diagrams are precisely those no local equivalent of which contains one of the following three patterns as an induced subgraph. \end{rem}

\begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{bouchet_local_motif.pdf} \caption{\label{fig:bouchet_local_motifs} Local transformation inverting the incidence at the central vertex. Forbidden patterns. } \end{figure}

The analyticity of a chord diagram can be read off its interlacement graph, and the following theorem gives various characterizations of these graphs.
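As a computational aside (a minimal sketch, not part of the original toolset), the interlacement graph just described is easy to build from the cyclic word: two chords interlace exactly when one endpoint of the second lies on the cyclic arc cut out by the first and the other endpoint does not.

```python
from itertools import combinations

def interlacement_graph(word):
    """Interlacement graph of a chord diagram given as a cyclic word
    (a list of labels, each label appearing exactly twice).
    Returns the vertex set (chord labels) and the edge set."""
    n = len(word)
    pos = {}
    for k, label in enumerate(word):
        pos.setdefault(label, []).append(k)
    edges = set()
    for a, b in combinations(pos, 2):
        i, j = pos[a]
        # b interlaces a iff exactly one endpoint of b lies on the
        # cyclic arc going from i to j
        inside = sum(1 for x in pos[b] if (x - i) % n < (j - i) % n)
        if inside == 1:
            edges.add(frozenset((a, b)))
    return set(pos), edges
```

For instance the diagram $abab$ yields a single edge between the two interlacing chords, while $aabb$ yields no edge.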
Pour une discussion plus exhaustive des liens entre ces propriétés on pourra consulter l'article \cite{GhySim:2020} et ses références. \begin{Thm}[Diagrammes de cordes analytiques \cite{GhySim:2020}] Les graphes d'entrelacement $G=(V,E)$ provenant d'un diagramme de cordes analytique sont caractérisés par chacune des propriétés équivalentes suivantes: \begin{itemize} \item \emph{Repliable}: se réduit à un ensemble de sommets isolés par suppression itérée: \begin{itemize} \item de sommets pendants: incidents à une seule arête, \item de sommets $v$ ayant un jumeau $v'$: c'est-à-dire qui a les mêmes voisins pourvu que l'on ignore toute arête éventuelle entre eux. \end{itemize} \item \emph{Distance-héréditaire}: pour tout sous-graphe induit $G'$ par un ensemble $V'\subset V$, la métrique induite par $d_G$ sur $V'$ est égale à $d_{G'}$. \item \emph{Buisson}: pour quatre sommets $a_i,a_j,a_k,a_l$ quelconques, deux des trois sommes des longueurs de paires de diagonales opposées: $d_G(a_i,a_j)+d_G(a_k,a_l)$ sont égales. \item Ne contient pas de maison, gemme, domino ou $(n\geq5)$-cycle comme graphe induit. \end{itemize} \end{Thm} \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{house-gem-domino.pdf} \caption{\label{fig:house-gem-domino} Maison, Gemme, Domino.} \end{figure} \begin{rem}[terminologique] Le nom buisson semble approprié pour décrire la troisième propriété ainsi que pour suggérer l'allure des graphes qu'elle définit ; c'est désormais ainsi que nous les désignerons. En effet, la définition métrique ci-dessus est similaire à celle des espaces $0$-hyperboliques au sens de Gromov. En effet, pour ces derniers on requiert l'égalité des deux plus grandes parmi les trois quantités $d_G(a_i,a_j)+d_G(a_k,a_l)$. Or toute partie finie d'un espace $0$-hyperbolique est isométrique à celle d'un arbre métrique et réciproquement. 
Par ailleurs l'apparence géométrique que peuvent prendre ces graphes fait penser à des buissons: certaines parties sont arborescentes, tandis que d'autres sont plus touffues. Comme nous le verrons plus loin, cette structure est plus apparente après leur avoir appliqué la décomposition en arbre-de-graphes. \end{rem} Les seuls diagrammes de cordes dont le graphe d'entrelacement est une maison, une gemme, un domino et un $n>4$-cycle, sont ceux de la figure suivante. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{cordiag_motifs_interdits.pdf} \caption{\label{fig:cordiag_motifs_interdits} Diagrammes dont l'entrelacement est une maison, gemme, domino, $n>4$ cycle.} \end{figure} \begin{Cor}[Analycité: motifs interdits] Un diagramme de cordes est analytique si, et seulement si, il ne contient pas comme sous-diagramme l'un de ceux de la figure \ref{fig:cordiag_motifs_interdits}. \end{Cor} \subsection{La décomposition de Cunningham} Rappelons brièvement le théorème de décomposition de Cunningham revisité par Gioan et Paul \cite{GioPau:2012} (voir aussi \cite[section 4.8.5]{ChDuMo:2012}). Dans ce paragraphe nous ne considérons que des graphes connexes. Une \emph{décomposition} d'un graphe $G$ est une partition de l'ensemble de ses sommets en deux ensembles $A_1$ et $A_2$ ayant au moins deux éléments chacun, et contenant tous les deux des ensembles de sommets $B_j\subset A_j$ tels que les arêtes entre les $A_j$ soient exactement les arêtes du graphe biparti complet sur les $B_j$. \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{split_2.pdf} \caption{\label{fig:split} Un graphe décomposable.} \end{figure} Une telle décomposition permet de factoriser $G$ en séparant les graphes induits par les ensembles $A_j$, que l'on assemble dans une même structure en ajoutant une arête $\{x_1,x_2\}$ d'un autre type, dont chaque extrémité $x_j$ est connectée aux sommets de $B_j$ (figure \ref{fig:split_facto} gauche). 
Adding a leaf connected to each vertex of $G$ (figure \ref{fig:split_facto}, middle), this yields a \emph{graph-labelled tree}, that is, a tree $T$ together with, at each internal node $k$, a connected graph $G_k$ equipped with a bijection $\varphi_k$ from its vertex set to the vertices $N_T(k)$ of the tree adjacent to the node $k$. After the first factorization, the tree has two internal nodes joined by an edge and decorated by the graphs $G_k$ induced by the $A_k\cup \{x_k\}$. One can then pursue the factorization by trying to split the $G_k$, and so on, yielding a sequence of graph-labelled trees. The factorization stops when the nodes are indecomposable or degenerate; let us explain these terms. \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{split_factorisation_2.pdf} \caption{\label{fig:split_facto} Factorization into a graph-labelled tree with $2$ nodes $G_k$ joined by an edge.} \end{figure} A graph is \emph{indecomposable} if it has at least four vertices and admits no split. For instance the house, the gem, the domino and the $(n>4)$-cycles are indecomposable. At the other extreme, a graph is \emph{degenerate} if every bipartition of its vertices into parts of cardinality at least two is a split: such a graph, if connected, is either a clique $K_n$ or a star $S_n$, that is, the complete bipartite graph $K_{1,n}$. In particular every graph on at most three vertices is degenerate. \begin{figure}[H] \includegraphics[width=0.5\textwidth]{accessibility_tree_flat.pdf} \hfill \includegraphics[width=0.4\textwidth]{accessibility_graph_relabeled.pdf} \caption{\label{fig:acessibility} A graph-labelled tree and the accessibility graph of its leaves.} \end{figure} Conversely, $G$ is recovered from a graph-labelled-tree factorization $T$ as the \emph{accessibility graph} of its leaves, which we now define.
Its vertex set $V_G$ corresponds to the leaves of $T$. Two distinct leaves $v_1$, $v_2$ of $T$ determine a shortest path in $T$. Suppose that at each internal node $k\in V_T$ visited by this path, the vertices of $G_k$ sent by $\varphi_k$ to the edges of the path incident to $k$ are joined by an edge. Then, and only then, the vertices $v_1,v_2\in V_G$ are joined by an edge. Figure \ref{fig:acessibility} shows the accessibility graph of a graph-labelled tree. This factorization procedure depends on the splits chosen at each step, but it turns out that the resulting graph-labelled tree is essentially unique in the following sense. Say a graph-labelled tree is \emph{reduced} if its internal nodes have degree at least three and if it moreover satisfies the following two conditions. First, its internal nodes are decorated by indecomposable or degenerate graphs. Second, two cliques are never joined, and an edge of the tree between two stars joins their centres or two non-centres. \begin{figure}[H] \centering \includegraphics[width=1.0\textwidth]{clique_star_join.pdf} \caption{\label{fig:clique_star_join} Merging stars and cliques} \end{figure} This last condition is needed to ensure the uniqueness of the factorization in the theorem below. Indeed, two cliques $K_n$ and $K_m$ decorating adjacent nodes of the tree can be merged into a clique $K_{m+n-1}$, leaving the accessibility graph unchanged. The same holds for merging two stars $S_n$ and $S_m$, joined through a centre and a non-centre, into a star $S_{n+m-1}$. It might seem more natural to pursue the decomposition of degenerate graphs into trees of $K_3$'s and $S_3$'s, but uniqueness is then lost since, for example, any tree of $K_3$'s with $n$ leaves merges into a $K_n$.
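Coming back to the folding characterization of the theorem above, it can be sketched directly: greedily delete pendant vertices and twins and check whether only isolated vertices remain. Since distance-hereditary graphs are closed under induced subgraphs, any greedy deletion order succeeds on them, so the test below is order-independent; the graphs and names are illustrative.

```python
def foldable(adj):
    """Greedy folding: repeatedly delete a pendant vertex or a twin
    (same neighbours as some other vertex, ignoring any edge between
    the two); succeed if only isolated vertices remain."""
    adj = {u: set(nb) for u, nb in adj.items()}  # local mutable copy

    def removable(u):
        if len(adj[u]) == 1:                       # pendant vertex
            return True
        return any(v != u and adj[u] - {v} == adj[v] - {u}
                   for v in adj)                   # twin of some v

    while True:
        u = next((u for u in adj if adj[u] and removable(u)), None)
        if u is None:
            break
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]
    return all(not nb for nb in adj.values())

# the 4-cycle folds (opposite vertices are twins); the forbidden
# house and gem have neither a pendant vertex nor a pair of twins
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
house = {0: {1, 3, 4}, 1: {0, 2, 4}, 2: {1, 3}, 3: {0, 2}, 4: {0, 1}}
gem = {0: {1, 4}, 1: {0, 2, 4}, 2: {1, 3, 4}, 3: {2, 4}, 4: {0, 1, 2, 3}}
```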
\begin{thm}[Cunningham \cite{Cunning:1982}, Gioan--Paul \cite{GioPau:2012}]\label{thm_cunningham} Every connected graph is the accessibility graph of a unique reduced graph-labelled tree. \end{thm} \begin{rem}[an operad structure on graphs] This graph-labelled-tree decomposition reveals an operad structure on graphs (labelled and rooted at a vertex). We shall see below that bushes behave well under the composition operation: a composition of bushes is a bush; in other words, they form a suboperad. It is this structure that we shall exploit to enumerate them. A chapter of \cite{Ghys:2017} introduces operads through several examples related to this context; the reference \cite{Fresse:2017} offers an in-depth study. \end{rem} \section{Combinatorics: bushes and analytic diagrams} \subsection{Counting labelled connected bushes} A graph is \emph{completely decomposable} if every connected induced subgraph with at least four vertices is decomposable. A graph with fewer than four vertices trivially satisfies this property. It implies that the graph admits a graph-labelled-tree factorization all of whose nodes are decorated by $K_3$'s or $S_3$'s. After merging, the reduced graph-labelled tree therefore has its nodes decorated by degenerate graphs. Conversely, the accessibility graph of such a degenerate-graph-labelled tree is completely decomposable. Moreover, an easy induction in \cite{GhySim:2020} shows that the connected completely decomposable graphs are precisely the connected bushes. We deduce the following description of bushes, which will allow us to enumerate them. \begin{Cor}\label{buisson_decomposition} A connected bush with at least three vertices is the accessibility graph of a unique reduced degenerate-graph-labelled tree, and conversely.
We shall call these $SK$-trees (clique-star graphs in \cite{GioPau:2012}), and we identify the two structures; figure \ref{fig:acessibility} illustrates the correspondence. \end{Cor} \begin{rem}[Separable permutations] This characterization of bushes is analogous to that of separable permutations in terms of $(\oplus,\ominus)$-trees. We have thus generalized the fact, proved by \'Etienne Ghys in \cite{Ghys:2017}, that separable permutations describe precisely the possible combinatorics of a germ of polynomial graphs in the real plane. \end{rem} Consider the combinatorial class $\mathcal{B}$ of connected bushes whose vertices are labelled by the integers of an initial segment of $\mathbb{N}$, the vertex $0$ being designated as the root. Call the \emph{size} of the bush the number of non-root vertices (equal to the maximal label), and assume it is nonzero; our class thus contains zero such bushes of size $0$, a single one of size $1$, called elementary, and four of size $2$. As soon as the size is at least $2$, its factorization yields an $SK$-tree with one rooted leaf (the others labelled); the \emph{root node} is the one incident to the root leaf. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{buisson_operad_roots_3.pdf} \caption{\label{fig:buisson_operad_root} Elementary bush and types of root nodes with the allowed branchings.} \end{figure} Let $B(z)$ be the exponential generating function of the class $\mathcal{B}$, and let $B_K$, $B_{S^*}$ and $B_{S'}$ be those counting the $SK$-trees whose root node is, respectively, not a clique, not a star whose centre is joined to the root leaf, and not a star with a non-centre joined to the root leaf.
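As a sanity check, the coefficients of these series can be computed with exact arithmetic. The sketch below anticipates the reduction carried out in the proof of the theorem that follows, namely the fixed point $B_K=z+f(B_K)$ with $f(w)=2e^w+e^{-w}-w-3$, together with $B=(g(B_K)-z)/2$ where $g(w)=2w+1-e^{-w}$, iterated on truncated exponential generating functions:

```python
from fractions import Fraction
from math import factorial

N = 6  # truncation order

def smul(a, b):
    """Product of two power series truncated at order N."""
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def sexp(a):
    """exp of a power series with zero constant term, truncated at N."""
    r = [Fraction(0)] * (N + 1)
    r[0] = Fraction(1)
    term = list(r)
    for k in range(1, N + 1):
        term = [t / k for t in smul(term, a)]
        r = [x + y for x, y in zip(r, term)]
    return r

BK = [Fraction(0)] * (N + 1)
for _ in range(N + 2):              # contraction: iterate B_K = z + f(B_K)
    e = sexp(BK)
    em = sexp([-t for t in BK])
    new = [2 * e[i] + em[i] - BK[i] for i in range(N + 1)]
    new[0] -= 3                     # f(w) = 2 e^w + e^{-w} - w - 3
    new[1] += 1                     # + z
    BK = new

em = sexp([-t for t in BK])
# B = (g(B_K) - z)/2 with g(w) = 2w + 1 - e^{-w}
B = [(2 * BK[i] + (1 if i == 0 else 0) - em[i] - (1 if i == 1 else 0)) / 2
     for i in range(N + 1)]
counts = [int(B[n] * factorial(n)) for n in range(N + 1)]
```

The resulting counts reproduce the first terms listed in the theorem below.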
The uniqueness in Cunningham's theorem, in the version of Gioan and Paul, makes it possible to count labelled connected bushes by translating the iterated construction of a rooted $SK$-tree into equations on these generating functions. First observe that such a tree, if not elementary, has as root node a clique $K$, a star $S^*$ or a star $S'$, whence $B(z)=\frac{1}{2}(B_K+B_{S^*}+B_{S'}-z)$. Moreover, an $SK$-tree whose root is not (say) a clique is elementary, or else has as root a star $S^*$ or $S'$ with $n>1$ free stems. The stems of an $S^*$ carry trees with no $S^*$ at the root, while among those of an $S'$ one carries a tree with no $S'$ at the root and the others carry trees with no $S^*$ at the root. Taking care to divide by the factorial of the number of stems playing the same role, one obtains the first equation of the following system; the others are derived similarly: \begin{align} \label{systemB1} &B_K(z)=z+\sum_{n>1}{\frac{B_{S^*}^n}{n!}}+\sum_{n>1}{B_{S'}\frac{B_{S^*}^{n-1}}{(n-1)!}}=z+(1+B_{S'})\exp{B_{S^*}}-B_{S^*}-B_{S'}-1 \\ \label{systemB2} &B_{S^*}(z)=z+\sum_{n>1}{\frac{B_K^n}{n!}}+\sum_{n>1}{B_{S'}\frac{B_{S^*}^{n-1}}{(n-1)!}}=z+\exp{B_K}+B_{S'}\exp{B_{S^*}}-B_K-B_{S'}-1 \\ \label{systemB3} &B_{S'}(z)=z+\sum_{n>1}{\frac{B_K^n}{n!}}+\sum_{n>1}{\frac{B_{S^*}^{n}}{n!}}=z+\exp{B_K}+\exp{B_{S^*}}-B_K-B_{S^*}-2 \end{align} \begin{Thm} The number of labelled connected bushes of size $n$ is asymptotically \begin{equation}\label{asymptoB}\tag{$\sim_B$} \frac{b_0}{2\sqrt{\pi n^3}}\,\beta^{-n}\,n! \end{equation} where $\beta=2\log{\frac{1+\sqrt{3}}{2}}+3-2\sqrt{3}$, whose inverse satisfies $6.26<\beta^{-1}<6.27$, and $b_0=\frac{1+\sqrt{3}}{2}\sqrt{\frac{\beta}{\sqrt{3}}}$.
Here are the first terms: \begin{gather*} 0,\, 1,\, 4,\, 38,\, 596,\, 13072,\, 368488,\, 12693536,\, 516718112,\, 24268858144,\\ 1291777104256,\, 76845808729472,\, 5052555752407424 \end{gather*} \end{Thm} \begin{proof} Expressions \ref{systemB1} and \ref{systemB2} give $B_K=B_{S^*}$; then \ref{systemB2} and \ref{systemB3} give $B_{S'}=1-\exp(-B_K)$, and substituting into \ref{systemB2} we obtain $B_K=z+f(B_K)$ where $f(w)=2\exp(w)+\exp(-w)-w-3$. We recognize a ``smooth implicit-function schema'', so by \cite[Thm VII.3]{FlajoSedge:2009} the coefficients of $B_K$ satisfy (\ref{asymptoB}), where $\beta$, its radius of convergence, satisfies $f'(B_K(\beta))=1$. Setting $s=B_K(\beta)$ we find $2=2e^s-e^{-s}$, a quadratic equation in $e^s$ whose positive solution is sought: $s=\log{\frac{1+\sqrt{3}}{2}}$. Evaluating $B_K=z+f(B_K)$ at $z=\beta$ gives $\beta=s-f(s)=2s+3-2e^{s}-e^{-s}$. Then \cite[Thm VII.3]{FlajoSedge:2009} gives the value of $b_0$ in terms of $\beta$ and $s$. Finally, $B=(g(B_K)-z)/2$ with $g(w)=2w+1-\exp(-w)$, so its coefficients satisfy (\ref{asymptoB}); their exact expression follows from Lagrange inversion: \[ g(B_K(z))=g(z)+\sum_{k=1}^\infty{\frac{1}{k!}\left(\frac{\partial}{\partial z}\right)^{k-1}\left( g'(z)f(z)^k\right)} .\] Since $f(z)=\frac{3}{2}z^2+o(z^2)$, the sum up to rank $n$ yields an effective computation of the first $n$ terms of the expansion. \end{proof} \begin{rem}[consistency with the OEIS] The sequence of first terms is moreover known to the OEIS, recorded under the reference \href{http://oeis.org/A277869}{A277869} \cite{OEIS}. \end{rem} \begin{rem}[Unlabelled bushes and automorphisms] To deduce an asymptotic for the number of unlabelled bushes one would need to know the typical size of their automorphism groups. To this end it would be useful to study the typical shape of the graph-labelled tree and the degree distribution of the vertices, so as to decompose the symmetries of bushes.
\end{rem} \subsection{Structure and enumeration of analytic diagrams} \subsubsection{The operad of rooted connected analytic chord diagrams} Degenerate graphs describe the interlacement of a unique chord diagram (figure \ref{fig:degenrate_cordiag_root}, ignoring the marking). Bouchet showed in \cite{Bouchet:1987} that the same holds for indecomposable graphs. Cunningham's decomposition therefore shows, as in \cite[section 4.8.5]{ChDuMo:2012}, that two chord diagrams with the same interlacement graph differ by a sequence of mutations. A \emph{mutation} consists in applying a symmetry of the rectangular group $(\mathbb{Z}/2\mathbb{Z})^2$ to a subdiagram of chords determined by two intervals on the circle (which may be empty or consecutive). Indeed, the only ambiguity arising when reconstructing a chord diagram from the factorization of the bush describing its interlacement comes from the choice of orientation when a chord is replaced by a chord subdiagram. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{cordiag_mutation.pdf} \caption{\label{fig:cordiag_mutation} Mutations of a subdiagram determined by two intervals.} \end{figure} Consider the combinatorial class $\mathcal{C}$ of \emph{connected} rooted analytic chord diagrams. Connectedness of a diagram is defined as that of its interlacement graph. Rooting here means the choice of a distinguished chord $v$, which is oriented: this amounts to marking a single character $v^+$ of the associated cyclic word. The chords are then ordered and oriented by travelling in the positive direction from the head $v^+$ of the root. Call the \emph{size} of the diagram the number of non-root chords, and assume it is nonzero.
Our aim is to describe the combinatorics of this class and to estimate the number of such diagrams as the size tends to infinity. To this end, define a \emph{pulley} as one of the three rooted diagrams of figure \ref{fig:degenrate_cordiag_root}: $T_n$, $D^*_n$ and $D'_{k,l}$ for $n>1$ and $k+l>0$. A pulley rooted at $v$ has a left side from $v^+$ to $v^-$ and a right side from $v^-$ to $v^+$. \begin{figure}[H] \centering \includegraphics[width=1.0\textwidth]{degenerate_cordiag_root.pdf} \caption{\label{fig:degenrate_cordiag_root} The three types of pulleys and their associated degenerate graphs.} \end{figure} Recall that rooting a tree defines a partial order on its vertices $V_T$, called \emph{genealogical}, whose initial segments are the geodesics starting from the root. If moreover the tree is planar, this yields a preferred linear order on each set of neighbours of a given node. Call a \emph{rigging} the data of a planar tree $T$ rooted at a leaf, each internal node $x$ of which is decorated by a pulley $P_x$. The planar structure induces a bijection $\varphi_x$ between the set of chords of $P_x$ and the set of neighbours of $x$, sending the root chord to the immediate predecessor of $x$ for the genealogical order just mentioned. A rigging is called \emph{reduced} if it satisfies the following conditions on the pulley types of two nodes of the tree joined by an edge: two pulleys of type $T$ are never connected, two pulleys of type $D^*$ are never connected, and if $P_x$ is of type $D'$, we require that $\varphi_x$ neither send the chord crossing the root onto a diagram of type $D'$, nor send the chords parallel to the root onto diagrams of type $D^*$. The leaves are decorated by the unique connected rooted diagram of size $1$.
A rigging is \emph{contracted} by inserting into each non-root chord $c$ of $P_x$ the pulley $P_{\varphi_x(c)}$, mapping its left interval onto $c^-$ and its right interval onto $c^+$, as shown in figure \ref{fig:cord_operad}. This contraction yields a rooted analytic chord diagram. \begin{Lem}\label{unique_cordage} A connected rooted analytic diagram comes from a unique rigging. \end{Lem} \begin{proof}[Proof] \emph{Existence:} The factorization $T(G)$ of its interlacement graph $G$ is an $SK$-tree rooted at a leaf. Root the decorations $G_x$ of its internal nodes at the vertex closest to the root leaf. We then obtain a rigging $T(D)$ associated with $D$ by replacing the labels of the internal nodes as in figure \ref{fig:degenrate_cordiag_root}. \emph{Uniqueness:} A rigging $T(D)$ associated with $D$ immediately yields the factorization $T(G)$. Uniqueness therefore follows from that of Corollary \ref{buisson_decomposition}, from the uniqueness of the diagrams associated with degenerate graphs, and from the orientations imposed when the pulleys are inserted into the chords. \end{proof} \begin{figure}[H] \centering \includegraphics[width=0.6\textwidth]{acd_operad_composition_2.pdf} \hspace{0.5cm} \includegraphics[width=0.25\textwidth]{acd_operad_composition.pdf} \caption{\label{fig:cord_operad} Composition in the operad $\mathcal{C}$: inserting pulleys into chords.} \end{figure} \subsubsection{The number of connected rooted analytic chord diagrams} This lemma allows us to enumerate the objects of the class $\mathcal{C}$ by size. Let $C(z)$ be the ordinary generating function of the class $\mathcal{C}$, and let $C_T$, $C_{D^*}$ and $C_{D'}$ be those counting the riggings whose root node is not a pulley of the type indicated by their index.
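The enumeration that follows can once more be cross-checked numerically: iterate the system below on truncated power series and compare with the closed formula obtained by Lagrange inversion in the proposition. A sketch in exact integer arithmetic (variable names are ours):

```python
from math import comb

N = 6  # truncation order

def smul(a, b):
    """Product of two power series truncated at order N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def geom(a):
    """1/(1 - a) for a series with zero constant term, truncated at N."""
    r = [0] * (N + 1)
    r[0] = 1
    p = list(r)
    for _ in range(N):
        p = smul(p, a)
        r = [x + y for x, y in zip(r, p)]
    return r

z = [1 if i == 1 else 0 for i in range(N + 1)]
CT, CDs, CDp = [0] * (N + 1), [0] * (N + 1), [0] * (N + 1)
for _ in range(N + 2):                       # fixed-point iteration
    gs, gt = geom(CDs), geom(CT)
    tail_s = smul(smul(CDs, CDs), gs)        # sum_{n>1} C_{D*}^n
    tail_t = smul(smul(CT, CT), gt)          # sum_{n>1} C_T^n
    star = smul(CDp, [a - (1 if i == 0 else 0)
                      for i, a in enumerate(smul(gs, gs))])
    CT, CDs, CDp = ([z[i] + tail_s[i] + star[i] for i in range(N + 1)],
                    [z[i] + tail_t[i] + star[i] for i in range(N + 1)],
                    [z[i] + tail_s[i] + tail_t[i] for i in range(N + 1)])
C = [(CT[i] + CDs[i] + CDp[i] - z[i]) // 2 for i in range(N + 1)]

def closed(n):
    """Lagrange-inversion formula for C_n."""
    return sum(comb(n - 1 + k, n - 1) * comb(2 * n + k, n - 1 - k) * 2 ** k
               for k in range(n)) // n
```

Both computations agree with the first terms listed in the proposition.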
The preceding lemma guarantees the uniqueness of the iterated construction, hence $C(z) = \frac{1}{2}(C_T+C_{D^*} + C_{D'}-z)$ and: \begin{align}\label{systemC1} &C_T(z) = z + \sum_{n>1}{C_{D^*}^n} + \sum_{k+l>0}{C_{D'}C_{D^*}^{k+l}} = z + \frac{C_{D^*}^2}{1-C_{D^*}} + C_{D'} \left[ \frac{1}{(1-C_{D^*})^2}-1\right] \\ \label{systemC2} &C_{D^*}(z) = z + \sum_{n>1}{C_{T}^n} + \sum_{k+l>0}{C_{D'}C_{D^*}^{k+l}} = z + \frac{C_{T}^2}{1-C_{T}} + C_{D'} \left[ \frac{1}{(1-C_{D^*})^2}-1 \right] \\ \label{systemC3} &C_{D'}(z) = z + \sum_{n>1}{C_{D^*}^n} + \sum_{n>1}{C_{T}^n} = z + \frac{C_{D^*}^2}{1-C_{D^*}} + \frac{C_{T}^2}{1-C_{T}} \end{align} From \ref{systemC1} and \ref{systemC2} we deduce $ C_T = C_{D^*} $. Substituting into \ref{systemC3} and then computing $ C = \frac{C_T}{1-C_T} $, equation \ref{systemC1} gives after simplification $ C_T^3 - 4 C_T^2 + (1+z) C_T - z= 0$. We deduce the following proposition. \begin{Prop} The ordinary generating function $C$ of the number $C_n$ of connected rooted analytic chord diagrams of size $n$ is algebraic: $C=z+2zC+(z+2)C^2+2C^3$. Consequently $C_n \sim c_0\,n^{-\frac{3}{2}} \,\gamma^{-n}$ for some $c_0>0$, where $\gamma$ is the real root of $4x^3-49x^2+164x-12$: \[\gamma =\frac{1}{12}\left(49-\frac{433}{\sqrt[3]{24407-1272\sqrt{318}}}-\sqrt[3]{24407-1272\sqrt{318}} \right).\] The number $C_n$ of connected rooted analytic diagrams is given by the formula: \[ C_n=\frac{1}{n}\sum_{k=0}^{n-1}{\binom{n-1+k}{n-1}\binom{2n+k}{n-1-k}2^k} .\] The first terms are: $1, 4, 27, 226, 2116, 21218, 222851, 2420134, 26954622, 306203536$. \end{Prop} \begin{proof} The equation for $C$ is obtained by substituting $C_T=\frac{C}{1+C}$ into the one for $C_T$. The series $C$ and $C_T$ have the same radius of convergence, since $C_T$ does not reach the value $1$ before its first singularity. This singularity $\gamma$ governs the exponential growth of the coefficients of the series $C_T$ and $C$.
It is the smallest root of the discriminant, with respect to the variable $y$, of the polynomial $y^3-4y^2+(1+x)y-x$ defining the cubic equation for $C_T$. Indeed, it corresponds to the first point where this cubic has a vertical tangent and the implicit function theorem ceases to apply. The discriminant in question is the equation appearing in the statement. Note in passing that $\gamma$ is also the unique root of smallest modulus of the discriminant of the polynomial $2y^3+(x+2)y^2+(2x-1)y+x \in \mathbb{C}(x)[y]$ defining the equation for $C$, confirming that $C$ and $C_T$ have the same radius. The stated asymptotics again follow from the ``smooth implicit-function schema'' \cite[Thm VII.3]{FlajoSedge:2009}. On a neighbourhood of the origin the equation for $C$ can be written $z=\frac{C}{\varphi(C)}$ with $\varphi(v)=\frac{(1+v)^2}{1-2v-2v^2}$; hence by Lagrange inversion $C_{n}=\frac{1}{n}[v^{n-1}]\varphi(v)^n$. To compute $\varphi^n$, after expanding its denominator as a power series, we distribute the numerator over the terms of the sum and expand the binomial: \begin{align*} \varphi(v)^n &=(1+v)^{2n}\cdot \left( \frac{1}{1-2v(1+v)} \right)^n = (1+v)^{2n}\cdot \sum_{k\in\mathbb{N}}{\binom{n-1+k}{n-1}(2v)^k(1+v)^k} \\ &= \sum_{k\in\mathbb{N}}{\binom{n-1+k}{n-1}(2v)^k(1+v)^{2n+k}} = \sum_{j,k\in\mathbb{N}}{\binom{n-1+k}{n-1}2^k\binom{2n+k}{j}v^{k+j}} \:.\end{align*} We then extract the coefficient of $v^{n-1}$ and eliminate the index $j$ from the sum: \[ C_n=\frac{1}{n}\sum_{k+j=n-1}{\binom{n-1+k}{n-1}\binom{2n+k}{j}2^k} =\frac{1}{n}\sum_{k=0}^{n-1}{\binom{n-1+k}{n-1}\binom{2n+k}{n-1-k}2^k} .\] \end{proof} \begin{rem}[Reformulation in terms of context-free grammars] Let us recast this paragraph in terms of algebraic (also called context-free) grammars. The set of connected rooted analytic diagrams forms a language.
The description of its (operad) structure in terms of pulleys and riggings provides a generating grammar for it, which is context-free: \begin{itemize} \item $S \;\longrightarrow \quad z \;\mid\; T_{1} \;\mid\; D^*_{1} \;\mid\; D'_{1}$ \item $T_{1} \longrightarrow \quad z \;\mid\; t_l D^*D^*t_r \;\mid\; t_l D^*D't_r \;\mid\; t_l D'D't_r$ \item $T \;\longrightarrow \quad z \;\mid\; t_l D^*D^*t_r \;\mid\; t_l D^*D't_r \;\mid\; t_l D'D't_r \;\mid\; TT$ \item $D^*_{1} \longrightarrow \quad z \;\mid\; d^*_l TTd^*_r \;\mid\; d^*_l TD'd^*_r \;\mid\; d^*_l D'D'd^*_r$ \item $D^* \longrightarrow \quad z \;\mid\; d^*_l TTd^*_r \;\mid\; d^*_l TD'd^*_r \;\mid\; d^*_l D'D'd^*_r \;\mid\; D^*D^*$ \item $D'_{1} \longrightarrow \quad z \;\mid\; d'_l Ld'_mMd'_nRd'_r$ \hspace*{3.45cm} $ \forall M \in \{T_1,D^*\},\; \forall L,R \in \{T,D'\}$ \item $D' \,\longrightarrow \quad z \;\mid\; d'_l Ld'_mMd'_nRd'_r \;\mid\; D'D'$ \hspace*{2cm} $\forall M \in \{T_1,D^*\},\; \forall L,R \in \{T,D'\}$ \end{itemize} The indices $1$ mark the places where the letter in question may not be duplicated: at the root node as well as in the chord of $D'$ crossing the root chord. Lemma \ref{unique_cordage} means that this grammar is unambiguous: each diagram is generated only once. One can therefore derive from it a system of algebraic equations (larger than the system (\ref{systemC1},\ref{systemC2},\ref{systemC3}) used above, which is not algebraic), whose dependency graph shows that it is irreducible (see \cite[VII.6.3]{FlajoSedge:2009} for the notions used in this paragraph). The solutions are aperiodic series, since their coefficients are positive from some rank on, and the theorem \cite[VII.5]{FlajoSedge:2009} then yields the asymptotics of their coefficients.
Similarly, the system of equations for the series $C_T$, $C_{D^*}$, $C_{D'}$ satisfies the hypotheses of the Drmota--Lalley--Woods theorem \cite[VII.6]{FlajoSedge:2009}. \end{rem} \begin{rem}[Asymptotic expansion of the coefficients] This Drmota--Lalley--Woods theorem provides a full asymptotic expansion of the $C_n$ of the following form: \[ \gamma^{-n}n^{-\frac{3}{2}}\left( \sum_{k\in \mathbb{N}}{c_k\,n^{-k}}\right) .\] Expanding the equation near the critical point would make it possible to compute the $c_k$. \end{rem} \begin{quest}[Combinatorial interpretation of the equation for $C$] The equation for the ordinary generating function of connected rooted analytic chord diagrams can be written in the form $C=z+2zC+(z+2)C^2+2C^3$. The left-hand side is the series $C$ we seek, while on the right we see a polynomial with nonnegative integer coefficients in the variable $z$ and the series $C$. This is the kind of identity that follows from a context-free grammar \cite{BanDrmota:2015}: it is therefore natural to hope for a combinatorial interpretation from which it could be read off immediately. \end{quest} \subsubsection{Counting rooted analytic chord diagrams} Let us now enumerate the combinatorial class $\mathcal{A}$ of analytic chord diagrams rooted, as before, at a \emph{ray}, that is, at a letter of the cyclic word. Note that this rooting amounts to considering \emph{linear diagrams}, in other words sequences of $2n$ letters in which every letter appears twice. The \emph{size} is still defined as the number of chords, and this time we allow the size to be zero. The diagrams with zero chords and with one chord are both included.
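For small sizes the counts of this class can be reproduced by brute force: enumerate all pairings of $2n$ points, build the interlacement graph (two chords interlace when their endpoints alternate), and keep the diagrams whose interlacement graph folds by pendant/twin deletions, which by the forbidden-pattern corollary characterizes analyticity. A sketch:

```python
def matchings(points):
    """All perfect matchings of an (even) list of points."""
    if not points:
        yield []
        return
    a, rest = points[0], points[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(a, rest[i])] + m

def interlacement(chords):
    """Two chords interlace when their endpoints alternate on the line."""
    adj = {i: set() for i in range(len(chords))}
    for i, (a, b) in enumerate(chords):
        for j, (c, d) in enumerate(chords):
            if i < j and (a < c < b < d or c < a < d < b):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def foldable(adj):
    """Pendant/twin folding, as in the bush characterization."""
    adj = {u: set(nb) for u, nb in adj.items()}
    def removable(u):
        return len(adj[u]) == 1 or any(
            v != u and adj[u] - {v} == adj[v] - {u} for v in adj)
    while True:
        u = next((u for u in adj if adj[u] and removable(u)), None)
        if u is None:
            return all(not nb for nb in adj.values())
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]

counts = [sum(foldable(interlacement(m))
              for m in matchings(list(range(2 * n)))) for n in range(6)]
```

The sketch reproduces the first coefficients stated in the theorem below; in particular all $105$ diagrams with four chords are analytic, while $22$ of the $945$ diagrams with five chords are not.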
\begin{rem}[rooted versus linear diagrams] For $n>1$ we have a bijection between linear diagrams of size $n$ and rooted diagrams of size $n-1$, obtained by rooting at the first letter after the marked point. Rooted and linear diagrams are thus the same objects, except that their size functions are shifted by one and the conventions for small objects differ. Depending on the context and on the type of recursive construction, it can be convenient to switch between the two points of view: the first lends itself well to inserting diagrams in place of the chords of another (operad structure), while the second is suited to concatenating diagrams one after another (sequential, or connected-sum, structure). \end{rem} Let $\mathcal{L}$ be the combinatorial class of connected linear analytic diagrams, that is, diagrams marked at a point of the circle, whose size is the number of chords. This time we include the one-chord diagram but not the zero-chord one. Its generating function is $L(z)=z+zC(z)$, again by the preceding remark. Consequently $L$ is also algebraic, satisfying $2L^3+(z^2-4z)L^2+z^2L+z^3=0$, and its coefficients are $\Theta(n^{-\frac{3}{2}} \gamma^{-n})$. \begin{Lem} The generating functions $A$ and $L$ are related by the equation $A=1+L(zA^2)$. \end{Lem} \begin{proof} Say a chord $a$ of a linear diagram is \emph{covered} by the chord $b$ if $b^+$ comes before $a^+$ and $b^-$ comes after $a^-$. This is a partial order on the set of chords. By a \emph{connected component} of the diagram we mean the chords corresponding to a connected component of the interlacement graph. Call the \emph{ground floor} of a linear diagram the sequence of diagrams formed by the connected components of the uncovered chords, that is, of the chords maximal for the covering relation.
A linear analytic chord diagram consists of its ground floor together with linear diagrams nested in the gaps delimited by the chords of a single connected component. \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{linear_diagram_first_floor_decomposition.pdf} \caption{\label{fig:linear_diagram_first_floor_decomposition} Ground floor of the diagram in bold; subdiagrams in the gaps} \end{figure} \noindent An element of the class $\mathcal{A}$ may have zero chords; otherwise its ground floor has $k>0$ nonempty connected components, which are elements of the class $\mathcal{L}$. Each connected component covers a certain number of elements of the class $\mathcal{A}$ (depending on its size, as explained above). We use here in a crucial way the conventions adopted for the small objects of our classes. We deduce the relation: \[A=1+\sum_{k\in \mathbb{N}^*}{ \sum_{(l_1,\dots l_k)\in\mathcal{L}^k}{ \prod_{j=1}^{k}{ z^{\lvert l_j\rvert}A^{2\lvert l_j\rvert-1} } } } .\] The series $A$ is invertible for multiplication in the ring of formal power series (or of convergent series), since its constant coefficient is $1$. Using this fact, we may factor each term of the sum over $k$ and then recognize the evaluation of the series $L$ at $zA(z)^2$: \[ A-1=\sum_{k\in \mathbb{N}^*}{\frac{1}{A^k}\sum_{(l_1,\dots l_k)\in\mathcal{L}^k}{ \prod_{j=1}^{k}{ (zA^2)^{\lvert l_j\rvert} } } } =\sum_{k\in \mathbb{N}^*}{\frac{1}{A^k}\left[ \sum_{l\in\mathcal{L}}{ (zA^2)^{\lvert l\rvert} } \right]^k }= \sum_{k\in \mathbb{N}^*}{\frac{1}{A^k}L(zA^2)^k } .\] Consequently, \[ A=\sum_{k\in \mathbb{N}}{\left(\frac{L(zA^2)}{A}\right)^k} = \frac{A}{A-L(zA^2)} .\] Using the invertibility of $A$ once more, we indeed obtain $A=1+L(zA^2)$.
\end{proof} \begin{Thm}\label{diaganalin} The ordinary generating function $A$ counting the number $A_n$ of rooted analytic diagrams is algebraic: \[(z^3+z^2)A^6-z^2A^5-4zA^4+(8z+2)A^3-(4z+6)A^2+6A-2=0 .\] Hence $A_n\sim a_0\,n^{-\frac{3}{2}}\,\alpha^{-n}$ with $4<10^3\,a_0<5$ and $\alpha$ the smallest root of the discriminant of this expression viewed as an element of $\mathbb{R}(z)[A]$. It satisfies $0.063321613 < \alpha < 0.063321614$, and its inverse $15.792395< \alpha^{-1}< 15.792396$. The first terms of the sequence $A_n$ are: \begin{gather*} 1,1,3,15,105,923,9417,105815,1267681,15875631,205301361 \end{gather*} \end{Thm} \begin{proof} Evaluating the equation satisfied by $L$ at $zA^2$ yields the one for $A$. The rest of the proof is similar to the two previous ones. Computer assistance provides bounds for the smallest root $\alpha$ of the discriminant of this equation. The branch of the curve corresponding to the series $A$ is identified thanks to the first terms obtained from the equation by the method of undetermined coefficients. This time we apply \cite[Theorem VII.8]{FlajoSedge:2009} to obtain an asymptotic of the form $A_n\sim a_0\,n^{-1-k_0/\kappa}\alpha^{-n}$, where in our case $k_0=1$ is the order of $y(z)$ at $z=0$ and $\kappa=2$ is one plus the multiplicity of $\alpha$ as a root of the discriminant. Finally, theorem \cite[Thm VII.3]{FlajoSedge:2009} allows us to compute $a_0$ as a function of $\alpha$ and $A(\alpha)$, and computer computation gives the bounds $4<10^3\,a_0<5$. \end{proof} \begin{rem}[Algorithmic check of the first nine coefficients] After all these combinatorial manipulations it is natural to check whether the first terms of the generating function do coincide with the number of linear analytic diagrams. The recursive description of analytic diagrams allowed us to generate all of them and to confirm algorithmically the first nine coefficients stated above.
This constitutes an experimental verification of the work leading to the equations for the series $C$ and $A$. \end{rem} \subsubsection{Asymptotics of the number of analytic chord diagrams} Let us now study the class $\Tilde{\mathcal{A}}$ of analytic chord diagrams, that is, of words in which every letter appears twice, considered up to cyclic permutation, which describe the topology of singularities of analytic curves in the oriented plane. The cyclic group with $2n$ elements acts by rotation on the set $\mathcal{A}_n$ of linear analytic diagrams with $n$ chords, and analytic chord diagrams correspond to the orbits. Let $\mathcal{A}^d_n$ denote the set of linear diagrams with $n$ chords whose stabilizer for the cyclic action has cardinality $d$, and $A^d_n$ its cardinality. By the orbit-counting formula: \begin{equation}\label{formule_classes} \Tilde{A}_n=\sum_{w\in \mathcal{A}_n}{\frac{1}{\lvert \omega(w)\rvert}} =\sum_{w\in \mathcal{A}_n}{\frac{\lvert \mathrm{Stab}(w)\rvert}{2n}} =\frac{1}{2n}\sum_{d\mid 2n}{d\, A^d_n} \:.\end{equation} The goal is now to estimate the quantities $A^d_n$ according to the value of $d\mid 2n$, in order to show that $A_n\sim A^1_n$. To this end, let us first discuss the notion of quotient diagram. \paragraph{Chord inversions, quotient diagram, monodromies.} If a linear diagram $w$ has a rotational symmetry $d\in \mathrm{Stab}(w)$ sending one endpoint of a chord, that is, one letter of the cyclic word, to the other endpoint of the same chord, we say that this symmetry \emph{inverts a chord} of the diagram. Note that it is then necessarily of order two: $d=n\in \mathbb{Z}/2n\mathbb{Z}$. Suppose on the contrary that $d$ inverts no chord; then \emph{the quotient of $w$ by the action of $d$}, in other words its orbit under $d$, is identified with a linear diagram $\overline{w}=w \pmod{d}$ of size $t=n/d$.
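The orbit-counting formula can be checked directly for small sizes, where every diagram is analytic (the forbidden patterns require at least five chords). The sketch below encodes a linear diagram by its pairing involution $i\mapsto p(i)$, computes stabilizers under the rotation action of $\mathbb{Z}/2n\mathbb{Z}$, and counts orbits in two ways:

```python
def matchings(points):
    """All perfect matchings of an (even) list of points."""
    if not points:
        yield []
        return
    a, rest = points[0], points[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(a, rest[i])] + m

def rotate(p, d):
    """Action of the rotation by d on a pairing involution p."""
    m = len(p)
    return tuple((p[(i + d) % m] - d) % m for i in range(m))

def orbit_count(n):
    """Number of n-chord diagrams up to rotation, two ways."""
    ps = set()
    for m in matchings(list(range(2 * n))):
        p = list(range(2 * n))
        for a, b in m:
            p[a], p[b] = b, a
        ps.add(tuple(p))
    group = range(2 * n)
    # orbit-counting: (1/|G|) * sum over words of |Stab(w)|
    stab_sum = sum(sum(rotate(p, d) == p for d in group) for p in ps)
    burnside = stab_sum // (2 * n)
    # direct count via a canonical representative of each orbit
    orbits = len({min(rotate(p, d) for d in group) for p in ps})
    return burnside, orbits
```

For instance the three linear diagrams with two chords fall into two rotation classes, and the fifteen with three chords into five.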
One can then reconstruct $w$ from its quotient $\overline{w}$, each chord of which is decorated with an integer modulo $\frac{n}{t}$: its \emph{monodromy}. Concretely (but abusively), consider the integers numbered from $0$ to $2n-1$, and pair the points $i<j$ whenever $i \pmod{t}$ and $j \pmod{t}$ are paired in $\overline{w}$ and $\lfloor \frac{j-i}{t} \rfloor$ equals the monodromy of the corresponding chord of $\overline{w}$. \begin{Lem}[The quotient preserves analyticity] If $w\in \mathcal{A}_n$ admits a symmetry $d$ that inverts no chord, then $w \pmod{d} \in \mathcal{A}_{t}$. \end{Lem} \begin{proof} Fix the rotation $d$ and argue by induction on the number of chords of the quotient diagram. If it has fewer than four, it is analytic. Otherwise, the diagram $w$ contains an isolated chord, a fork, or a pair of twins as in Figure \ref{fig:isole_fourche_jumeaux}. This pattern is preserved by the symmetry and therefore descends to a similar pattern in $\overline{w}$. We may thus simplify it to obtain $\overline{w}'$. But this diagram is also a quotient of the analytic diagram $w'$ obtained by simplifying all the patterns lying in one orbit of the symmetry. The induction hypothesis implies that $\overline{w}'$ is analytic, hence so is $\overline{w}$ by the recursive characterization of analyticity. \end{proof} \begin{Lem} \[\sum_{2<d\leq 2n}{A^d_n} \leq \sum_{2<d\leq 2n}{d \, A^d_n} =o(8^n) .\] \end{Lem} \begin{proof} For $d>2$, the quotient of $w\in \mathcal{A}^d_n$ is well defined and is an analytic chord diagram. Since $w$ is then uniquely determined by its quotient $w \pmod{d}$ decorated with the monodromies of its chords, we have $A_n^d\leq \left(\frac{n}{t}\right)^{t} A_{t}$. Moreover $\sup \{\left(\frac{n}{t}\right)^{t}\vert 0<t\}=e^{\frac{n}{e}}<2^n$.
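This supremum is the classical one: $\log (n/t)^t = t\log(n/t)$ is maximized at $t=n/e$, with value $n/e$, and $e^{1/e}<2$. A quick numerical confirmation (a sketch, not part of the proof):

```python
from math import e, exp

# Check sup_t (n/t)^t = e^(n/e) < 2^n on a fine grid of real t > 0.
for n in range(1, 60):
    grid = (x / 100 for x in range(1, 100 * n))
    best = max((n / t) ** t for t in grid)
    # The grid maximum never exceeds the analytic supremum e^(n/e),
    # which in turn is below 2^n since 1/e < log(2).
    assert best <= exp(n / e) + 1e-9 < 2 ** n
```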
Hence, recalling that $\alpha^{-1}<16$: \[ \sum_{2<d\leq 2n}{A^d_n} \leq \sum_{2<d\leq 2n}{d \cdot A^d_n} = O\left( \sum_{2<d\leq 2n}{n 2^n n^{-\frac{3}{2}} \alpha^{-t}}\right) = O\left(n^{\frac{1}{2}} 2^n \alpha^{-\frac{n}{2}} \right) =o\left( 8^n \right) .\] \end{proof} \begin{Lem} $A^2_n=o(12^n)$. \end{Lem} \begin{proof} We partition the elements of $\mathcal{A}^2_n$ according to the number $k$ of chords left fixed by the symmetry $n$ of order $2$. Removing these chords, we may pass to the quotient and obtain an analytic diagram $\overline{w}$ of size at most $\frac{n}{2}$. The initial diagram $w$ is uniquely determined by the position of the $k$ chords fixed by the involution, together with the quotient $\overline{w}$ equipped with the monodromies (modulo $2$) of its chords. The symmetry of the diagram implies that the fixed chords are determined by their $k$ endpoints lying among the first $n$ letters of $w$. Consequently, recalling again that $\alpha^{-1}<16$, we indeed have: \[ A^2_n \leq \sum_{k=0}^{n} \binom{n}{k}2^{\frac{n}{2}}A_{\frac{n}{2}} = O\left( 2^n 2^{\frac{n}{2}} \alpha^{-\frac{n}{2}} \right) =o(12^n) .\] \end{proof} \begin{Prop} The number $\Tilde{A}_n$ of analytic chord diagrams satisfies: \[ \Tilde{A}_n \sim \frac{A_n}{n} \sim a_0 \, n^{-\frac{5}{2}}\, \alpha^{-n} .\] \end{Prop} \begin{proof} By the two previous lemmas, $A_n=\sum_{d}{A^d_n}=A^1_n+o(12^n)$; since $12^n = o(A_n)$ we get $A^1_n\sim A_n$. Applying all of the above to identity \ref{formule_classes}: \[\Tilde{A}_n =\frac{1}{n} \sum_{d\mid 2n}{d\cdot A^d_n} =\frac{A^1_n}{n}+o(12^n) \sim \frac{A^1_n}{n} \sim \frac{A_n}{n} .\] \end{proof} \begin{rem}[Dihedral analytic diagrams] One proves in the same way that the number of analytic diagrams up to dihedral symmetry is asymptotically equivalent to $\frac{\Tilde{A}_n}{2}$.
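As a sanity check on this kind of orbit counting, the brute-force sketch below (plain Python, not the authors' code) treats the smallest interesting case $n=3$, where all $15$ linear diagrams are analytic by the recursive characterization. It compares a direct orbit count under the $2n$ rotations with the standard Burnside average over that group; both give the $5$ chord diagrams with three chords.

```python
def matchings(points):
    """All perfect matchings of a list of points, as frozensets of chords."""
    if not points:
        yield frozenset()
        return
    a, rest = points[0], points[1:]
    for i, b in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield m | {frozenset((a, b))}

n = 3                                   # every diagram with <= 3 chords is analytic
pts = list(range(2 * n))
diags = list(matchings(pts))
assert len(diags) == 15

def rotate(m, r):
    return frozenset(frozenset((x + r) % (2 * n) for x in c) for c in m)

def canon(m):
    """Canonical representative of the rotation orbit of a matching."""
    return min(tuple(sorted(tuple(sorted(c)) for c in rotate(m, r)))
               for r in range(2 * n))

orbit_reps = {canon(m) for m in diags}  # direct orbit count

# Burnside: average the number of fixed diagrams over the 2n rotations.
burnside = sum(sum(1 for m in diags if rotate(m, r) == m)
               for r in range(2 * n)) // (2 * n)

assert len(orbit_reps) == burnside == 5
```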
\end{rem} \section{Singular real analytic curves} Recall that a real analytic surface $S$ is a $2$-dimensional manifold given by an atlas whose transition maps are analytic, and that an analytic curve is the vanishing locus of an analytic function $f\colon S \to \mathbb{R}$. In this section we first describe the global topology of a singular real analytic curve on a real analytic surface: after introducing the concept of a combinatorial curve, we will see that there are no global obstructions to its realization by analytic curves. We then enumerate the possible topological types of a real analytic curve on the sphere as a function of the topological degree of its singularities. Finally, we bound this quantity from above by a function of the number of edges only. \subsection{Global topology: in the analytic world there are no obstructions} \subsubsection{Guiding question.} A real analytic curve $\gamma$ on the sphere has finitely many singularities, to each of which is associated a chord diagram. The topological picture is therefore that of a number of singular points whose rays are connected by disjoint smooth arcs. Moreover, if we stand at a non-singular point of the curve and choose a local orientation of the curve at this point, there is a unique way to continue the journey, including through the singularities. Indeed, a branch arriving at a singular point is paired with another branch leaving it. After some time we return to the starting point. One can then travel along other strands of the curve $\gamma$. Conversely, imagine a set of germs of singular analytic curves on the sphere whose rays are connected pairwise by disjoint smooth arcs. Under what conditions does this picture come from an analytic curve?
Let us stress that we require (implicitly, in the expression ``topological type'') not only that the analytic curve be homeomorphic to this picture but also that the crossings of the singularities be the same, in other words that the chord diagrams be the same. Let us formalize this $\dots$ \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{cardio_coeur_ellipse_corde.pdf} \includegraphics[width=0.6\textwidth]{cardio_coeur_ellipse_py.pdf} \caption{\label{fig:cardio_coeur_ellipse_py} A topological picture coming from an algebraic curve.} \end{figure} \subsubsection{Combinatorial formulation.} \label{formulation_combinatoire} In this section we adopt an equivalent definition of a chord diagram $C$. It is the data of a set $R$ of cardinality $2c$, called its \emph{rays}, equipped with a cyclic permutation $\sigma$ and a fixed-point-free involution $\tau$. The orbits under the involution are called its \emph{chords}. Recall that a chord diagram is analytic if it is the halo of a singularity of a real plane analytic curve. We also use a more general notion of \emph{graph}. It is a pair of sets of vertices and half-edges $(V,E)$, equipped with endpoint maps $e\in E \mapsto e_\pm \in V$ and with a fixed-point-free involution $\alpha \colon E \to E$ such that $\alpha(e)_-=e_+$, whose orbits are the edges. \begin{Define}[combinatorial curve] A \emph{combinatorial curve} is the data of a set of $s$ chord diagrams $C_1,\dots,C_s$ and of a fixed-point-free involution $\alpha$ on the set $R=R_1\cup \dots \cup R_s$ of their rays. \end{Define} \paragraph{The associated map.} To such a combinatorial curve one can associate a graph $G$ whose vertices are the chord diagrams and whose half-edges incident to a diagram are in bijection with its rays.
As for the edges, they are given by the involution $\alpha$ pairing the half-edges. The set $R$ of half-edges of $G$ carries a permutation $\sigma=\sigma_1 \dots \sigma_s$ corresponding to the simultaneous rotation of the diagrams, together with the involution $\alpha$ defined by the chosen pairing of the half-edges: we thus have a combinatorial map $(R,\sigma,\alpha)$. \begin{rem} The book \cite{LandoZvonkine:2004} studies the appearances of these objects in various guises and in varied contexts. Let us simply recall that the structure of a combinatorial map, that is, the set $R$ equipped with two permutations one of which is a fixed-point-free involution, is equivalent to the structure, considered up to homeomorphism, of a graph embedded in a surface whose complement is a disjoint union of disks. Let us insist: the genus of the surface can be recovered from the data of the permutations. \end{rem} \paragraph{Conversely...} Besides the map, the curve comes with an additional involution on the half-edges: the product $\tau=\tau_1 \dots \tau_s$ of the involutions coming from the chord diagrams. Conversely, the data $(R,\alpha,\sigma,\tau)$ of a combinatorial map together with a fixed-point-free involution preserving the incidence of half-edges (that is, the orbits under $\tau$ partition the orbits under $\sigma$) is equivalent to that of the combinatorial curve. \begin{Define}[Strands] The two involutions $\alpha$ and $\tau$ generate a subgroup of the symmetric group $\mathfrak{S}(R)$ on the set of half-edges, and we define a \emph{strand} as an orbit under its action. \end{Define} \begin{ex} For instance, any map of $\kappa$ circles into a surface that is a smooth embedding away from finitely many singularities induces a combinatorial curve with $\kappa$ strands. A real analytic curve likewise defines a combinatorial curve, which we call \emph{analytic}.
A strand corresponds to the natural idea of a journey along the curve whose crossing of a singularity stays on the same chord: arriving at a vertex along some ray, one leaves along the ray paired to it by the chord diagram. \end{ex} \begin{rem} In these cases one should beware that the surface recovered from the combinatorial-curve structure need not be homeomorphic to the initial surface; this is guaranteed only when the complement of the curve in the initial surface is a union of disks. This detail is harmless for the forthcoming realization Theorem \ref{cocan}, because the relevant structure there is that of the graph of chord diagrams embedded in a surface. The notion of combinatorial curve will be of interest here for the enumeration questions on the sphere and the projective plane. \end{rem} We can now reformulate the initial question: under what conditions is a combinatorial curve analytic? We say that a combinatorial curve satisfies the \emph{local hypothesis} if its chord diagrams are analytic. \begin{Thm}[Analytic combinatorial curves] \label{cocan} Every combinatorial curve in a real analytic surface satisfying the local hypothesis is associated with an analytic curve. \end{Thm} \begin{proof} \emph{0/ Two results.} We refer to \cite[no. 8]{Cartan:1958} for some elements of real analytic geometry. Grauert showed \cite{Grauert:1958} that every real analytic surface $S$ admits a proper analytic embedding into a numerical space, and Whitney's approximation theorem \cite{Whitney:1934} asserts that its dimension can be taken equal to $4$. To lighten notation, these analytic embeddings will be fixed once and for all, so as to identify the objects with their images in these numerical spaces. Moreover, Cartan showed \cite[no.
7.2]{Cartan:1957} that a curve in $S$ defined locally as the vanishing locus of analytic functions is the vanishing locus of a single analytic function defined globally on $S$. \emph{1/ Preparing the ground.} The local hypothesis allows us to choose, for each diagram $C_j$, a germ of a real analytic function $f_j$ defined on an open disk $U_j$ of $S$, containing a single critical point $x_j$, such that $(U_j,\{f_j=0\})$ realizes $C_j$. Take the disks $U_j\subset \mathbb{R}^2$ with pairwise disjoint closures, and choose pieces of smooth curves in the surface connecting, without intersecting one another, the branches of the germs $f_j=0$ according to the pairing $\alpha$. Note that we may even choose their homotopy class in the surface $S$ relative to the union of the $U_j$. The whole forms a curve $\xi$ which is smooth and without intersections away from the $x_j$, and which coincides with the loci $f_j=0$ on the $U_j$. \emph{2/ Blowing up at the singular points to reduce to the smooth world.} By Noether's theorem on resolution of singularities, we may blow up the surface above the $U_j$ so as to resolve the curve $f_j=0$ as in \cite[1.3]{GhySim:2020}. This yields a (non-orientable) analytic surface $M_j$ equipped with an analytic map $\pi_j\colon M_j\to U_j$ which is a diffeomorphism away from the exceptional divisor $\pi_j^{-1}(x_j)$. Blowing up $S$ simultaneously at all the $x_j$, we obtain an analytic surface $S'\subset \mathbb{R}^{N'}$ equipped with a proper birational map $\pi$ onto the surface $S$. The strict transform of $\xi$ under $\pi$ is a smooth curve $\xi'$ of $S'$ transverse to the exceptional divisor $\delta=\bigcup_j {\delta_j}$. \emph{3/ Analytic approximation of a smooth curve in the blown-up surface, and blowing back down.} Choose a parametrization $u\colon \mathbb{R}/\mathbb{Z}\to \mathbb{R}^{N'}$ of the curve $\xi'$.
The partial sums of the Fourier series of its components approximate it in the $\mathscr{C}^1$ topology: let $v \colon \mathbb{S}^1 \to \mathbb{R}^{N'}$ be such an approximation. Parametrizing $\mathbb{S}^1=\{x^2+y^2=1\}$ by the circular trigonometric functions and writing $\cos(m\theta)$ and $\sin(m\theta)$ as (Chebyshev) polynomials in these functions, we see that $v$ is analytic. The orthogonal projection onto the surface $S'$ is well defined and analytic in a small neighbourhood of $S'$ (see \cite{Kollar:2017}), so the image of $v$ defines a locally analytic curve $\gamma'$ of $S'$. This curve can be chosen arbitrarily close to $\xi'$ in the $\mathscr{C}^1$ topology, in particular with the same (all transverse) intersections with the exceptional divisor. The locally analytic curve is in fact globally analytic on $S'$, so $\gamma=\pi(\gamma')$ is an analytic curve of $S$ solving the problem. \end{proof} \begin{rem}[A small improvement] The proof in fact shows that one can preserve an arbitrary number of derivatives of the branches at the singularities: it suffices to blow up the singularities sufficiently many times. \end{rem} \subsection{Enumeration of analytic combinatorial curves on the sphere} \paragraph{Goal.} We can now enumerate the analytic combinatorial curves of the sphere as a function of the number of vertices and their degrees. Indeed, the description of the local topology of an analytic singularity allowed us to count the associated chord diagrams. Since the global topology of an analytic curve away from its singularities carries no obstructions, it only remains to count these global configurations, which Tutte already did for us in \cite{Tutte:1962}. The combinatorial objects of this section (curves, maps, slicings, dissections) are assumed connected. \paragraph{Tutte's slicings.} Let $P$ be a sphere with $s$ disks removed.
The orientation of the sphere induces one on the boundary. We draw in the plane, which should be imagined compactified by a point at infinity, with the counterclockwise orientation. The boundaries are therefore oriented clockwise. Index the boundary components $J_1,\dots,J_s$ and place on $J_l$ a nonzero even number $2k_l$ of points, $2c$ in total. Write $P_k$ for a surface decorated in this way, where $k$ denotes a multi-index $(k_1,\dots,k_s)\in \left( \mathbb{N}^*\right)^s$. We call a \emph{slicing} of $P_k$ any connected combinatorial map whose vertices are the points on its boundary, and whose edges are the boundary arcs between two points together with a collection of disjoint simple curves pairing all the points. We say the slicing is \emph{marked} if we have distinguished one vertex per boundary component. Two marked slicings are equal if they are images of one another under an orientation-preserving diffeomorphism of the surface preserving the distinguished vertices. This amounts to considering these maps up to isotopy relative to the boundary of the ambient space. \begin{figure}[H] \centering \includegraphics[width=0.55\textwidth]{decoupage.pdf} \caption{\label{fig:decoupage} A slicing of $P_{(6,4,4,4,6,4,4,4)}$.} \end{figure} Tutte computed in \cite{Tutte:1962} the number of marked slicings of $P_k$ for $s>0$, with the convention $n!=1$ if $n<0$: \[ \frac{(c-1)!}{(c-s-2)!} \: \prod_{v=1}^{s}{k_v \binom{2k_v}{k_v}} .\] \begin{Prop} The number of analytic combinatorial curves of the sphere having $c$ edges and $s$ indexed and marked vertices of respective sizes $k_1,\dots, k_s$ equals: \[ \frac{(c-1)!}{(c-s-2)!} \: \prod_{v=1}^{s}{k_v \binom{2k_v}{k_v}A_{k_v}} .\] \end{Prop} \paragraph{Dissections.} Indexing the boundary components and choosing a distinguished vertex on each of them amounts to a lot of decoration. Let us show how to recover all of it from the rooting of a single vertex.
We therefore call a \emph{dissection} the same data as a slicing, except that we have forgotten the numbering of the boundary components as well as all the rooted vertices but one: its root. Two dissections are equal if they are images of one another under an orientation-preserving homeomorphism preserving the root. \paragraph{From the dissection to the slicing: numbering the boundary components.} The dissection induces a rooted map of the sphere (connected, as always), obtained by contracting each boundary component to a point. The vertices of this map are thus the boundary components, and the edges are given by the segments between two vertices of the dissection other than the boundary arcs. There may be loops and multiple edges, as usual with this kind of object. The half-edges of the map correspond to the vertices of the dissection, and the cyclic orientations at the vertices are given by those of the boundary components. The rooted vertex of the dissection roots the map at a half-edge, again following standard usage. Let us first define a preferred numbering of the boundary components using this rooted map of the sphere. Number by $1$ the boundary component containing the root vertex. Then perform a breadth-first search (BFS) of the graph underlying the map, starting from the root, in the following order. We require the (not previously visited) neighbours of a vertex $x$ to be visited in the order induced by the orientation of the half-edges leaving $x$. When all neighbours of $x$ have been visited, the search resumes at the smallest already-numbered vertex not all of whose neighbours have been visited. This procedure terminates and, by connectedness, visits all the vertices of the map one by one, inducing an order on the boundary components. Figure \ref{fig:dissection} shows the numbering of the boundary components obtained in this way from the rooting at the red vertex.
We first visit the neighbours $2,3,4,5$ of boundary $1$, then move to $2$ and visit $6$, then move to $3$ and visit $7$, then move to $4$ and visit $8$. This paragraph shows that a dissection has a well-defined \emph{passport} $k=(k_1,\dots,k_s)$ where $2k_j$ is the number of points on boundary number $j$. \paragraph{From the dissection to the slicing: marking.} Let us now show how this numbering of the boundary components, with a rooted vertex on $J_1$, allows us to choose a distinguished vertex on every other boundary component $J_l$. To this end we introduce a total order on the set $\mathcal{C}_l$ of paths of the map that start at the root, end at a point of $J_l$, meet each boundary component at most once, and travel along the boundary components following the orientation induced by that of the surface. By connectedness this set is nonempty, and the endpoint of the smallest path will be the distinguished point sought on $J_l$. A path $p \in \mathcal{C}_l$ meets a sequence of boundary components, which we pad on the left with zeros so that its length is $s$; we denote it $u_p$. For instance, there are four paths in $\mathcal{C}_8$ with sequence $(0,0,0,0,0,1,5,8)$ in Figure \ref{fig:decoupage}, regardless of the position of the root on $J_1$. Next, an element $p\in \mathcal{C}_l$ crosses a certain number of edges in each boundary component; let $v_p$ denote the sequence of these integers. We then say that $p<p'$ if $u_p<u_{p'}$ for the lexicographic order, or if $u_p=u_{p'}$ and $v_p<v_{p'}$ for the lexicographic order. Clearly $u_p$ and $v_p$ determine $p$, so this is indeed an order relation. It is moreover total by construction, so $\mathcal{C}_l$ admits a smallest element, whose endpoint we choose as the distinguished point of $J_l$.
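The breadth-first numbering of the boundary components described above can be sketched on a rooted map given by its rotation system. The small example map below is hypothetical, chosen only to illustrate the traversal order; it is not the dissection of the figure.

```python
def bfs_numbering(rotation, root):
    """Number the vertices of a rooted map.

    `rotation[v]` lists the neighbours of v in the cyclic order of the
    half-edges around v (starting at the root half-edge for the root).
    Neighbours are visited in that order, and the search resumes at the
    smallest already-numbered vertex with unvisited neighbours, as in
    the text.  Since numbers are assigned in increasing order, a plain
    FIFO queue realizes exactly that rule.
    """
    number = {root: 1}
    queue = [root]
    while queue:
        x = queue.pop(0)              # smallest pending numbered vertex
        for y in rotation[x]:
            if y not in number:
                number[y] = len(number) + 1
                queue.append(y)
    return number

# A hypothetical rooted map ('a' is the root vertex).
rotation = {
    'a': ['b', 'c', 'd'],
    'b': ['a', 'e'],
    'c': ['a', 'e'],
    'd': ['a'],
    'e': ['b', 'c'],
}
print(bfs_numbering(rotation, 'a'))   # {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
```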
The circled vertices of Figure \ref{fig:dissection} represent the marked vertices for the choice of the red root on $J_1$. \begin{figure}[H] \centering \includegraphics[width=0.55\textwidth]{dissection.pdf} \caption{\label{fig:dissection} A dissection rooted at the red point and the induced marked slicing.} \end{figure} \paragraph{Enumeration of rooted analytic combinatorial curves on the sphere.} Let us summarize: forgetting the indexing and the marking of the boundary induces a map from marked slicings to rooted dissections, whose fibers have cardinality $(s-1)!\prod_{v>1}{2k_v}$. Moreover, a dissection carries a preferred order on its boundary components together with a distinguished vertex on each of them. Its passport is the sequence $(k_1,\dots, k_s)$ of the sizes of its boundaries. A combinatorial curve is rooted if the underlying combinatorial map is rooted by the choice of a half-edge. In our context this amounts to choosing a distinguished element of the set $R_1$. The preceding paragraphs then yield a total order on the sets of rays $R_l$, and a distinguished ray in each of them. Thus, given a dissection together with a sequence of rooted diagrams of the right sizes, there is a canonical way to glue them onto the boundary components so as to recover a rooted combinatorial curve with the right sizes. Conversely, a rooted analytic combinatorial curve of the sphere decomposes, on the one hand into a sequence $C_1,\dots,C_s$ of analytic linear diagrams with $k_1,\dots,k_s$ chords, and on the other hand into a slicing of $P_k$. The data of this decomposition characterizes the curve uniquely. This proves the following proposition.
Let us stress that we say nothing about the number of rooted curves whose vertex degrees belong to a given set, even when the size of the root is singled out: the passport is a tuple. This latter question seems much harder. \begin{Prop}\label{nombre_courbes_global} The number of rooted analytic combinatorial curves with passport $k$ on the sphere equals: \[ 2k_1\, \frac{(c-1)!}{(s-1)!(c-s-2)!} \: \prod_{v=1}^{s}{\binom{2k_v}{k_v}\frac{A_{k_v}}{2}} .\] \end{Prop} \begin{rem}[Other surfaces] All the work carried out so far generalizes to surfaces of higher genus. The description of the topology of analytic combinatorial curves and the rooting method (which does not use planarity) allow us to split the enumeration into a local part and a global part. The former corresponds to analytic linear diagrams, the latter to a rooted slicing of a surface with boundary, with a given passport $k$. If $m_k(g)$ counts the latter, then the number of rooted analytic combinatorial curves on the compact surface of genus $g$ equals $m_k(g) \prod_{v=1}^{s}{A_{k_v}}$. \end{rem} \subsection{Upper bound in terms of the topological parameters} We now bound the number of analytic combinatorial curves of the sphere by an exponential function of the number of edges, independently of the growth of the number of singularities and of their degrees. \begin{Prop}\label{maj_cc_sphere} The number $Calc_{\, \S^2 \,}(c)$ of rooted analytic combinatorial curves of the sphere with $c$ edges satisfies, for some constant $\rho$, the inequality $Calc_{\, \S^2 \,}(c) \leq c^3\, \rho^c$. One may choose $\rho\leq 96\,e^\frac{1}{3}<134$, hence $Calc_{\, \S^2 \,}(c)= o\left(134^c\right)$. \end{Prop} \begin{conj} By Theorem \ref{diaganalin}, if $k>0$ we have the inequality $A_{k}\leq a_0'\,k^{-\frac{3}{2}}\,\alpha^{-k}$ for some $a'_0$ independent of $k$.
We conjecture, in view of the first terms of the sequence $A_k$, that $a_0'=\alpha$ works. Revisiting the proof below, one would then show that one may choose $\rho < 83$. Moreover, knowledge of the distribution of $s$ and $k$ as $c$ tends to infinity would further improve this choice. Can one take $\rho=4\alpha^{-1}\,\exp\left(\sqrt{\frac{2\,a_0}{\sqrt{\pi}}}\right)$? \end{conj} \begin{proof} First fix the number of singularities $s<c$ and bound the number $Calc_{\,\S^2\,}(c,s)$ of rooted analytic combinatorial curves with $s$ singularities and $c$ edges. By the foregoing, we seek to bound the following sum, indexed by the tuples $k$ of $s$ positive integers summing to $c$ (the compositions of $c$ into $s$ parts): \begin{equation}\label{majo} Calc_{\,\S^2\,}(c,s)\leq \sum_{[k]=(s,c)}{ 2k_1\frac{(c-1)!}{(s-1)!(c-s-2)!} \prod_{v=1}^{s}{\binom{2k_v}{k_v} \frac{A_{k_v}}{2}}} .\end{equation} We know from \cite[§1.8]{GhySim:2020} that the number of analytic diagrams with $k$ chords is at most $6^{k-1}$ times the $(k-1)$st Catalan number. With $2k$ choices for the rooting we thus have $A_k\leq 2\times 6^{k-1} \binom{2k-2}{k-1}$. Moreover, by induction, for all $k\in \mathbb{N}$: $\binom{2k}{k}\leq \frac{4^{k}}{\sqrt{3\,k+1}}$. Consequently: \[ k_1\prod_{v=1}^{s}{\binom{2k_v}{k_v} \frac{A_{k_v}}{2}} \leq k_1 \prod_{v=1}^{s}{ \frac{4^{k_v}}{\sqrt{3k_v+1}} \cdot \frac{(6\times4)^{k_v}}{(6\times4)\sqrt{3k_v-2}} } = \frac{96^c}{36^s}\, k_1 \prod_{v=1}^{s}{\frac{1}{2\sqrt{(k_v+\frac{1}{3})(k_v-\frac{2}{3})}}}\leq \frac{96^c}{36^s} ,\] since the factor carrying $k_1$ is at most $\frac{3}{4}$, as are all the others. Plugging back into expression \ref{majo}, we recognize the number of compositions of $c$ into $s$ parts, which equals $\binom{c-1}{s-1}=\frac{s}{c}\binom{c}{s}$.
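The inductive inequality $\binom{2k}{k}\leq 4^k/\sqrt{3k+1}$ invoked above is easy to confirm numerically; squaring it gives the equivalent integer inequality $\binom{2k}{k}^2(3k+1)\leq 16^k$, which avoids floating point. The sketch below (plain Python, not part of the proof) checks it over a range of $k$.

```python
from math import comb

# binom(2k, k) <= 4**k / sqrt(3k + 1)  <=>  binom(2k, k)**2 * (3k + 1) <= 16**k.
# Equality holds at k = 0 and k = 1, which is why the bound is sharp there.
for k in range(0, 200):
    assert comb(2 * k, k) ** 2 * (3 * k + 1) <= 16 ** k
```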
Then, bounding $\frac{(c-1)!}{(s-1)!(c-s-2)!}\leq s\,c\, \binom{c}{s}$ and $\binom{n}{p}\leq \left(\frac{e\,n}{p} \right)^p$, we obtain: \[ Calc_{\,\S^2\,}(c,s)\leq 2\, \frac{(c-1)!}{(s-1)!(c-s-2)!} \sum_{[k]=(s,c)}{\frac{96^c}{36^s}} \leq 2\, \frac{96^c}{36^s}\, s\,c\,\binom{c}{s}\, \frac{s}{c}\binom{c}{s} \leq 2s^2\,96^c \left( \frac{e\,c}{6s} \right)^{2s} .\] For each $c$, the maximum over $1\leq s\leq c$ of $\left(\frac{e\,c}{6s} \right)^{2s}$ is attained at $s=\frac{c}{6}$ and equals $e^{\frac{c}{3}}$. Hence: \[ Calc_{\,\S^2\,}(c,s)\leq 2\,s^2\,\left(96 e^{\frac{1}{3}} \right)^c .\] The sum of the $2\,s^2$ for $1\leq s\leq c$ is smaller than $c^3$ for $c>3$, which gives the desired formula in that case; and the stated expression bounds the total number of combinatorial curves for $c\leq 3$, which concludes the proof. \end{proof} \begin{quest}[Non-optimality of the bound] The bound on the sum $\sum_{[k]=(s,c)}{ \prod_{v=1}^{s}{ k_v^{-1}}}$ was crude. Because of this, the proof also works for marked curves. One should be able to improve it by partitioning the compositions in the index according to the size of the maximal square of the partition they induce. Another approach would be to use the fact that it is the coefficient of $x^c$ in the series $f(x)^s$, where $f$ is the antiderivative vanishing at the origin of $\frac{1}{x}\,\log \frac{1}{1-x}$, and to reduce to the study of the singularities of $f$ in the complex plane. \end{quest} \begin{rem}[Plane curves] The difference between a combinatorial map (respectively curve) of the plane and of the sphere amounts to the choice, on the sphere, of an infinite face. There are $c-s+2$ faces, hence $Calc_{\,\mathbb{R}^2\,}(c,s)\leq (c-s+2)\, Calc_{\,\S^2\,}(c,s)\leq c^4 \rho^c$. \end{rem} \section{Real algebraic curves} In this section we first recall some elements of real algebraic geometry, the main source being \cite{BoCoRo:1992}.
We then describe which combinatorial curves on a surface equipped with a smooth real algebraic structure come from algebraic curves. Finally, we bound, as a function of the degree, the number of topological types of real projective algebraic curves. \subsection{Homology of a real algebraic surface} \subsubsection{Some clarifications on the notion of real algebraic curve} We are interested in the topology of the real locus of a singular algebraic curve on a smooth projective algebraic surface defined over the field of real numbers. These terms are used in the sense of classical algebraic geometry: the objects are defined by homogeneous equations in a certain number of variables; morphisms correspond to changes of variables, and rational maps to changes of variables well defined away from algebraic subvarieties of positive codimension. Thus an algebraic variety $X$ defined over the field of real numbers has a real locus, sometimes denoted $X(\mathbb{R})$, and a complex locus denoted $X(\mathbb{C})$; likewise, a morphism $f\colon Y\to X$ induces an everywhere-defined map $f(\mathbb{R}) \colon Y(\mathbb{R})\to X(\mathbb{R})$ between the real loci (as we will almost always work with the real locus, we will not need to specify the underlying field). In particular, in a surface, an algebraic curve is closed for the Zariski topology, and its real locus need not be connected nor purely of dimension $1$. For instance, intersecting the Whitney umbrella, the surface of $\mathbb{R}^3$ with equation $x^2z=y^2$, with a small sphere centred at its singularity at the origin, one obtains a singular algebraic curve of the sphere which has one connected component homeomorphic to a figure eight and another which is an isolated point. One can likewise build examples with several components of dimension $0$ and $1$.
The isolated points of an algebraic curve correspond to conjugate branches of the complex locus meeting the real locus of the surface at a point. It may also happen that two such conjugate complex branches meet the real locus of the surface at a point which also lies on a branch of the real locus; in that case the curve has a singularity at this point, even though it is not apparent in the real locus. In a real algebraic surface $S$, an algebraic curve whose real locus is connected defines a combinatorial curve (in the sense of a graph embedded in the surface $S$ whose vertices are decorated with chord diagrams, the edges at a vertex being put in bijection with the rays of the decorating diagram) as follows. There is one chord diagram for each singularity; this includes the case of a $0$-dimensional component, an isolated point, in which case the chord diagram is \emph{trivial}: it has no chords, it is the empty word; it also includes the case of non-apparent singularities, in which case the chord diagram contains a single chord. The singularities are connected, as in the analytic case, by the smooth arcs of the curve. A combinatorial curve will be called \emph{algebraic} if it is associated in this way with an algebraic curve. It will be called \emph{without isolated points} if it contains no trivial chord diagrams (equivalently, the graph has no isolated vertex from which no edge emanates). The main theorem of this section, Theorem \ref{cocalg} on the realizability of a combinatorial curve by an algebraic curve, will concern combinatorial curves without isolated points; in other words the graph may or may not be connected, and it may have isolated vertices, but those will then not be decorated with chordless diagrams.
\subsubsection{Algebraic homology of a real algebraic surface} Let $S$ be a \textit{smooth} real projective surface. To motivate the algebraic-geometric material introduced in this paragraph, let us recall the outline of the proof of Theorem \ref{cocan}, since we will adapt it to realise a combinatorial curve $\Gamma$ by an algebraic curve of $S$. For the analytic realisation problem, we first blew up so as to reduce to the smooth setting, where we could apply approximation results stemming from the work of Whitney, Grauert and Cartan. In particular, the fact that a locally analytic curve is globally the zero locus of an analytic function corresponds to the vanishing of a certain cohomology group (the first cohomology group of the sheaf of analytic functions; this is in fact how Cartan obtains it \cite[no. 7]{Cartan:1957}). To introduce the analogous approximation result that we will need in the algebraic case, let us first define the first algebraic homology group of $S$. A singular algebraic curve decomposes as a $1$-dimensional cell complex by taking each singular point as a $0$-cell and adding one $0$-cell per smooth component; the rest forms smooth arcs between these points. This cell complex is a cycle modulo $2$ and therefore defines an element of $H_1(S;\mathbb{Z}/2\mathbb{Z})$ called the \emph{fundamental class} of the algebraic curve (see \cite[11.3.2]{BoCoRo:1992}). Let $H_1^{alg}(S;\mathbb{Z}/2\mathbb{Z})$ denote the linear subspace spanned by the algebraic classes. Algebraic cohomology is defined by Poincaré duality.
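Explicitly, with $D$ the Poincaré duality isomorphism of the closed surface $S(\mathbb{R})$ (our notation, spelling out the last sentence):
\[
H^1_{\mathrm{alg}}(S;\mathbb{Z}/2\mathbb{Z}) \;:=\; D\bigl(H_1^{alg}(S;\mathbb{Z}/2\mathbb{Z})\bigr)\,,
\qquad
D\colon H_1(S;\mathbb{Z}/2\mathbb{Z}) \stackrel{\sim}{\longrightarrow} H^1(S;\mathbb{Z}/2\mathbb{Z})\,,
\]
where $\mathbb{Z}/2\mathbb{Z}$ coefficients make $D$ available even when $S(\mathbb{R})$ is non-orientable.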
Let us restate Definition \cite[12.4.10]{BoCoRo:1992}: a compact $\mathscr{C}^\infty$ submanifold $Y$ of a nonsingular affine real algebraic variety $X$ \emph{admits an algebraic approximation} in $X$ if for every neighbourhood $\Omega$ of the inclusion map $i\colon Y \to X$ in $\mathscr{C}^\infty(Y,X)$ there exists $h\in \Omega$ such that $h(Y)$ is a nonsingular Zariski-closed subset of $X$. We can now state the Nash--Tognoli theorem \cite[12.4.11]{BoCoRo:1992} in the case where the variety $X$ has dimension $2$: a compact smooth hypersurface $Y$ of $X$ admits an algebraic approximation if and only if the homology class $[Y]$ defined by $Y$ is algebraic, that is, belongs to $H_1^{alg}(X;\mathbb{Z}/2\mathbb{Z})$. To satisfy the nonsingularity hypothesis, we will apply this theorem in a blow-up $X$ of the surface $S$ to the strict transform $Y$ of a singular subvariety $C$ of $S$. We must therefore make sure that the subgroups of algebraic homology classes behave well under blow-ups. Let $p\colon S' \to S$ be the blow-up of $S$ at a point and $E\subset S'$ the exceptional divisor. We have $H_1(S';\mathbb{Z}/2\mathbb{Z})=H_1(S;\mathbb{Z}/2\mathbb{Z})+ \mathbb{Z}/2\mathbb{Z}\cdot [E]$. Since $[E]$ is clearly algebraic, and since every algebraic class of $S$ lifts to an algebraic class of $S'$ by taking its strict transform, we get $H_1^{alg}(S;\mathbb{Z}/2\mathbb{Z})+\mathbb{Z}/2\mathbb{Z}\cdot[E] = H_1^{alg}(S';\mathbb{Z}/2\mathbb{Z})$ (by abuse of notation, the homology group on the left-hand side here stands for the Poincaré dual of the pullback under $p^*$ of the first algebraic cohomology group of $S$; see \cite{BoCoRo:1992} for more details).
Thus, an analytic subvariety $C$ of $S$ defines an algebraic homology class if and only if its strict transform in $S'$ does, and a subvariety $Y$ of $S'$ defines an algebraic homology class if and only if its projection by $p$ to $S$ does. The discussion is the same when $p\colon S' \to S$ is an iterated blow-up at finitely many points. This also shows that every homology class of a rational surface (that is, a surface birational to the projective plane, so in particular the sphere) is algebraic. \subsection{Algebraic combinatorial curves} For clarity, let us restate the ingredients needed to formulate the result of this section. As remarked for Theorem \ref{cocan}, the notion of combinatorial curve in the theorem of this paragraph may be taken in the broader sense of a graph of chord diagrams embedded in the surface (this is the intuitive picture of a topological drawing of a singular curve on the surface); in other words, the surface $S$ in which the graph is embedded need not be the one associated with the combinatorial map underlying the combinatorial curve (which is the case if and only if the combinatorial curve fills the surface, that is, its complement is a union of discs; in particular the graph must then be connected). A combinatorial curve of $S$ is \emph{algebraic and without isolated points} if it is associated with a real algebraic curve on the surface having no isolated points. Recall that a chord diagram comes from a singularity of an algebraic curve if and only if it is analytic, and that a combinatorial curve is said to satisfy the local topological hypothesis if its vertices are decorated with analytic chord diagrams. We introduced the strands of a combinatorial curve in Section \ref{formulation_combinatoire}.
Each strand defines a homology class in $H_1(S;\mathbb{Z}/2\mathbb{Z})$, and the sum of these classes over all strands is called the \emph{homology class of the curve}. \begin{Define} We say that a combinatorial curve satisfies the \emph{global hypothesis} if its homology class is algebraic, that is, belongs to $H_1^{alg}(S;\mathbb{Z}/2\mathbb{Z})$. \end{Define} \begin{Thm} \label{cocalg} Let $S$ be an algebraic surface. Every combinatorial curve of $S$ without isolated points satisfying the local and global hypotheses is algebraic. In particular, if $S$ is rational, for example the sphere or the projective plane, every nontrivial combinatorial curve satisfying the local hypothesis is algebraic. \end{Thm} \begin{proof} Of course, an algebraic combinatorial curve satisfies the local and global hypotheses. Conversely, let $\Gamma$ be a combinatorial curve on an algebraic surface $S$ satisfying the local and global hypotheses. The outline of the proof is essentially the same as for Theorem \ref{cocan}; we use the notions discussed in the previous section. \emph{1/ Preparing the ground.} We first apply Theorem \ref{cocan} to realise $\Gamma$ by an analytic curve, which we may parametrise by $i\colon M \to S$ with $M$ a union of circles. As remarked after that theorem, for any finite $k$ we may prescribe the $k$-jets of $i$ at the singularities (provided the topology required by the chord diagrams is respected). \emph{2/ Blowing up and approximating the strict transform.} Let $p \colon S'\to S$ be an iterated blow-up of $S$ resolving the singularities of $i$, and let $i'\colon M\to S'$ be the strict transform of $i$. Since the class of $i$ in $H_1(S;\mathbb{Z}/2\mathbb{Z})$ is algebraic, the class of $i'$ in $H_1(S';\mathbb{Z}/2\mathbb{Z})$ is algebraic as well.
But $i'$ is moreover smooth, so by the Nash--Tognoli theorem \cite[Thm 12.4.11]{BoCoRo:1992} we may approximate $i'$ arbitrarily closely in the $\mathscr{C}^\infty$ topology by an algebraic curve $j'$. Let $J'$ denote the image of $j'$. \emph{3/ Blowing down.} The projection $J=p(J')$ is an algebraic curve of $S$ whose $1$-dimensional components approximate $i$ arbitrarily closely in the $\mathscr{C}^\infty$ topology, with the same $k$-jets at the singularities (the equality of $k$-jets follows from the $\mathscr{C}^0$ approximation of $i$, provided the blow-ups have been iterated appropriately). Since $p$ is a homeomorphism away from the singularities, any ``extra'' $0$-dimensional components of $J$ can only occur at its singular points. The curve $J$ therefore answers the problem. \end{proof} \begin{quest} Can one realise combinatorial curves without introducing additional multiplicity at the singularities coming from conjugate branches of the complex locus? \end{quest} \begin{quest} Given, in a topological surface, a nontrivial combinatorial curve satisfying the local hypothesis: does there exist an algebraic model of the surface such that the combinatorial curve is realised by an algebraic curve? The answer would be positive if, given a class of the first homology group of a topological surface, one knew that there exists an algebraic model of the surface for which this class is algebraic. Theorem \cite[11.3.8]{BoCoRo:1992} comes close but is not quite what we need. \end{quest} \subsection{Bounding the number of algebraic curves by the degree} We now seek to bound the number of topological types of singular real algebraic curves of degree $d$ in the projective plane whose real locus is connected.
A \emph{topological type} means an isotopy class in the category of piecewise smooth curves, enriched with the local information given by the real halos of the singularities. This is precisely the data of the combinatorial curve. Since every homology class of the projective plane is algebraic, we are therefore seeking to bound the number of connected combinatorial curves satisfying the local hypothesis. We first bound them in terms of the number of edges, and then convert this into bounds in terms of the degree via the generalised Plücker formulas. \begin{rem} In \cite{KarlaOrev:2003, KarlaOrev:2004}, Kharlamov and Orevkov bound asymptotically, from above and below, the number of isotopy classes of smooth algebraic curves (not necessarily connected in the real locus) of degree $d$ in the real projective plane, by expressions of the form $\exp{(Cd^2+o(d^2))}$. \end{rem} Rosenlicht's generalisation of the Plücker formulas (see for instance \cite{Dieu:1974}) implies that every algebraic curve of the real projective plane satisfies: \[\sum_{v,\; k_v=1}{1}+\sum_{v,\; k_v>1}{\frac{k_v(k_v-1)}{2}} \leq \frac{(d-1)(d-2)}{2} \mathrm{\quad and\; hence\quad } c=\sum_{v}{k_v}\leq \frac{(d-1)(d-2)}{2}.\] Together with the earlier bound in terms of the number of edges, we deduce the following theorem. \begin{Thm} The number $Cal_{\,\mathbb{R}\P^2\,}(d)$ of rooted algebraic combinatorial curves of degree $d$ in the projective plane satisfies, for some constant $\rho$, the bound: \[Cal_{\,\mathbb{R}\P^2\,}(d) \leq \frac{d^8}{16} \rho^\frac{d^2}{2} .\] Since one may take $\rho < 134$, we get $Cal_{\,\mathbb{R}\P^2\,}(d)=o\left(12^{d^2}\right)$. \end{Thm} \begin{rem} If, as conjectured, $\rho = 83$ were admissible, then we would have $Cal_{\,\mathbb{R}\P^2\,}(d)=o\left(10^{d^2}\right)$. \end{rem} \begin{proof} Consider a combinatorial curve $\Gamma$ coming from an algebraic curve $\gamma$ of degree $d$ of the projective plane. Root $\Gamma$ at a half-edge $r$.
Geometrically, one may think of $r$ as a segment issuing from a singular point $r_-$ of $\gamma$, whose other endpoint is a point $r_+$ halfway to another singularity. Let $s$, $c$ and $k$ denote respectively the number of vertices, the number of edges, and the passport (the degrees of the vertices) of $\Gamma$. Choose a line $D$ that intersects the curve transversally without passing through its singularities, and assume it passes through $r_+$. Now contract the line $D$; denoting images under the projection with a prime, we obtain an algebraic curve $\gamma'$ in the sphere having one more singularity at $r'_+$, whose associated diagram is of type $T$ with $k_{s+1}\leq d$ chords. Root its combinatorial curve $\Gamma'$ at the half-edge issuing from $r'_-$ along the edge $r'$. The combinatorial curve $\Gamma'$ has $c'\leq c+d$ edges; the extra edges come from the duplication of those edges of $\Gamma$ that met the line $D$. For every combinatorial curve $\Gamma$ coming from an algebraic curve of degree at most $d$, make the choice of such a line $D$. We thus obtain a map from the rooted algebraic combinatorial curves of degree at most $d$ of the projective plane to the rooted combinatorial curves with at most $\frac{(d-1)(d-2)}{2}+d\leq \frac{d^2}{2}$ edges in the sphere. Let us bound the cardinality of its fibres by counting the maximal number of distinct lifts of a combinatorial curve of the sphere lying in the image of this map. To lift a curve $\Gamma'$ of the sphere having (at most) $s+1$ vertices and $c+d$ edges, it suffices to choose a vertex whose chord diagram is of type $T$ and to blow it up. There are at most $s+1\leq c+d \leq \frac{d^2}{2}$ such diagrams.
Consequently, using Proposition \ref{maj_cc_sphere}: \[Cal_{\,\mathbb{R}\P^2\,}(d) \leq \frac{d^2}{2}\, Calc_{\,\S^2\,}\left(\frac{d^2}{2}\right)\leq \left(\frac{d^2}{2}\right)^4 \rho^\frac{d^2}{2}.\] \end{proof} \bibliographystyle{alpha}
\section{Introduction}\label{Int} A plethora of observations concur that the Universe at present enters a phase of accelerated expansion. In fact, most cosmologists accept that over 70\% of the Universe content at present corresponds to the elusive dark energy: a substance with pressure negative enough to cause the observed acceleration \cite{DE}. The simplest form of dark energy is a positive cosmological constant $\Lambda$, which, however, needs to be incredibly fine-tuned to explain the observations \cite{L}. This is why theorists have looked for alternatives that could explain the observations while setting \mbox{$\Lambda=0$}, as was originally assumed. A promising idea is to consider that the Universe at present is entering a late-time inflationary period \cite{early}. The credibility of this option is supported also by the fact that the generic predictions of inflation in the early Universe are in excellent agreement with the observations. The scalar field responsible for this late inflationary period is called quintessence, because it is the fifth element after baryons, photons, CDM and neutrinos \cite{Q}. Since they are based on the same idea, it is natural to attempt to unify early-Universe inflation with quintessence. Quintessential inflation was thus born \cite{quinf,QI,jose,eta}. This attempt has many advantages. Firstly, quintessential inflation models allow the treatment of both inflation and quintessence within a single theoretical framework. Also, quintessential inflation dispenses with the tuning problem of the initial conditions for quintessence. Finally, unified models for inflation and quintessence are more economical because they avoid introducing yet another unobserved scalar field. For quintessential inflation to work one needs a scalar field with a runaway potential, such that the minimum has not been reached until today and, therefore, there is residual potential density, which can cause the observed accelerated expansion.
String moduli fields are suitable because they are typically characterised by such runaway potentials. The problem with such fields, however, is how to stabilise them temporarily, in order to use them as inflatons in the early Universe. In this work (see also Ref.~\cite{ours}) we achieve this by considering that, during its early evolution, our modulus crosses an enhanced symmetry point (ESP) in field space. When this occurs the modulus is trapped temporarily at the ESP \cite{trap}, which leads to a period of inflation. After inflation the modulus picks up speed again in field space, resulting in a period of kinetic density domination (kination) \cite{kination}. Kination ends when the thermal bath of the hot big bang (HBB) takes over. During the HBB, due to cosmological friction \cite{cosmofric}, the modulus freezes at some large value and remains there until the present, when its potential density dominates and drives the late-time accelerated expansion \cite{eta}. It is evident that, in order for the modulus to become quintessence, it should not decay after the end of inflation. Reheating, therefore, should be achieved by other means. We assume that the thermal bath of the HBB is due to the decay of some curvaton field \cite{curv}, as suggested in Refs.~\cite{eta,curvreh}. By considering a curvaton we do not add an {\it ad~hoc\/} degree of freedom, because the curvaton can be a realistic field, already present in simple extensions of the standard model (e.g. a right-handed sneutrino \cite{sneu}, a flat direction of the (N)MSSM \cite{mssm} or a pseudo Nambu-Goldstone boson \cite{pngb,orth}, possibly associated with the Peccei-Quinn symmetry \cite{PQ}). Apart from reheating, the curvaton can provide the correct amplitude of curvature perturbations in the Universe. Consequently, the energy scale of inflation can be much lower than the grand unified scale \cite{liber}.
In fact, in certain curvaton models, the Hubble scale during inflation can be as low as the electroweak scale \cite{orth,low}. \section{The runaway scalar potential} String theories contain a number of flat directions which are parametrised by the so-called moduli fields, corresponding to the size and shape of the compactified extra dimensions. Many such flat directions are lifted by non-perturbative effects, such as gaugino condensation or D-brane instantons \cite{Derendinger:1985kk}. The superpotential, then, is of the form \begin{equation} W=W_0+W_{\rm np}\quad \textrm{with} \quad W_{\rm np}=Ae^{-cT}\,, \end{equation} where $W_0\approx$~const. is the tree level contribution from fluxes, $A$ and $c$ are constants and $T$ is a K\"ahler modulus in units of $m_P$. Hence, the non-perturbative superpotential $W_{\rm np}$ results in a runaway scalar potential characteristic of string compactifications. For example, in type IIB compactifications with a single K\"ahler modulus, $\sigma\equiv$~Re($T$) is the so-called volume modulus, which parametrises the volume of the compactified space. In this case, the runaway behaviour leads to decompactification of the internal manifold. The tree level K\"ahler potential for a modulus, in units of $m_P^2$, is \begin{equation} K=-3\,\ln\,(T+\bar{T})\equiv-3\ln(2\sigma)\,, \label{tree} \end{equation} and the corresponding supergravity potential is% \footnote{We considered $c\sigma>1$ to secure the validity of the supergravity approximation and we have assumed that the ESP lies at a minimum in the direction of Im($T$).} \begin{equation} \label{Vnp} V_{\rm np}(\sigma)\simeq \frac{cAe^{-c\sigma}}{2\sigma^2m_P^2} \left(\frac{c\sigma}{3}Ae^{-c\sigma}-W_0\right)\,. \end{equation} To study the cosmology, we turn to the canonically normalised modulus $\phi$ which, due to Eq.~(\ref{tree}), is associated with $\sigma$ as \begin{equation} \label{fs} \sigma(\phi)= \exp\left(\sqrt{\frac{2}{3}}\, \phi/m_P\right)\,. \end{equation} Suppose that the Universe is initially dominated by the above modulus. The non-perturbative scalar potential in Eq.~(\ref{Vnp}) is very steep (an exponential of an exponential), which means that the field soon becomes dominated by its kinetic density. Once this is so, the particular form of the potential ceases to be of importance. To achieve inflation we assume that, while rolling, the modulus crosses an ESP and becomes temporarily trapped at~it. \section{At the Enhanced Symmetry Point} In string compactifications there are distinguished points in moduli space at which there is enhancement of the gauge symmetries \cite{Hull:1995mz}. This results in some massive states of the theory becoming massless at these points. Even though from the classical point of view an ESP is not a special point, as the modulus approaches it certain states in the string spectrum become massless \cite{Watson:2004aq}. In turn, these massless modes create an interaction potential that may drive the field back to the symmetry point. In that way a modulus can become trapped at an ESP \cite{trap}. The strength of the symmetry point depends on the degree of enhancement of the symmetry. Such modulus trapping can lead to a period of so-called `trapped inflation' \cite{trap}, when the trapping is strong enough to make the kinetic density of the modulus fall below the potential density at the ESP. However, it turns out that the number of e-foldings of trapped inflation cannot be very large. Therefore, with respect to cosmology, the main virtue of the ESPs lies in their ability to trap the field and hold it there, at least temporarily. Because ESPs are fixed points of the symmetries, we have \begin{equation}\label{eq:firstderiv} V^{\prime}(\phi_0)=0\,, \end{equation} where the prime denotes derivative with respect to $\phi$ and $\phi_0$ is the value of the modulus at the ESP.
The above means that the ESP is located either at a local extremum (maximum or minimum) or at a flat inflection point of the scalar potential, where \mbox{$V'(\phi_0)=V''(\phi_0)=0$}. Hence, the presence of an ESP deforms the non-perturbative scalar potential (see Fig.~\ref{esp}). This deformation may be enough so that, after trapped inflation, the field undergoes slow-roll inflation over the flat region of the scalar potential in the vicinity of the ESP. The total duration of inflation may thus be enough to solve the flatness and horizon problems of the HBB. \begin{figure} \hspace{1cm}\psfig{file=esp.eps,width=3.5in} \caption{% Illustration of how the appearance of an ESP at $\phi=\phi_0$ deforms the non-perturbative scalar potential $V_{\rm np}$ to generate, for example, a local maximum at potential density $V_0$. The crossing modulus is temporarily trapped by the emergence of an interaction potential $V_{\rm int}$ due to its enhanced interaction with other fields. After being released from trapping, the modulus may drive slow-roll inflation while sliding over the potential hill.} \label{esp} \end{figure} \subsection{Trapped Inflation} Let us briefly study the trapping of the modulus at the ESP. We assume that around the ESP there is a contribution to the scalar potential due to the enhanced interaction between the modulus $\phi$ and another field $\chi$, which we also take to be a scalar field \cite{trap}. The interaction potential is \begin{equation} \label{Vint} V_{\rm int}(\phi,\chi)=\frac{1}{2}\,g^2\chi^2\bar\phi^2\,, \end{equation} where $\bar{\phi}\equiv\phi-\phi_0$, with $g$ being a dimensionless coupling constant. Thus, at the ESP the $\chi$ particles are massless. The time dependence of the effective (mass)$^2$ of the $\chi$ field results in the creation of $\chi$-particles.
This takes place when the field is within the production window $|\bar\phi|<\Delta\phi\sim(\dot{\phi}_0/g)^{1/2}$, where $\frac{1}{2}\dot\phi_0^2$ is the kinetic density of the modulus when crossing the ESP and the dot denotes derivative with respect to the cosmic time~$t$. The effective scalar potential near the ESP is $V_{\rm eff}(\phi)\approx V_0+\frac{1}{2}g^2\langle\chi^2\rangle\bar{\phi}^2$, where $V_0\equiv V(\phi_0)$ with $V(\phi)$ being the `background' scalar potential. Following Ref.~\cite{trap} we have $\langle\chi^2\rangle\simeq n_{\chi}/g|\bar\phi|$, where $n_{\chi}$ denotes the number density of $\chi$ particles produced after the crossing of the ESP. This means that $V_{\rm eff}(\phi)\sim V_0+gn_{\chi}|\bar\phi|$ and the field climbs a {\it linear} potential, since $n_{\chi}$ is constant outside the production window. After the first crossing, the field reaches the amplitude $\Phi_1$, determined by its initial kinetic density. To avoid overshooting the ESP we require $\Phi_1\mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$<$}}{\sim}$~}} m_P$, since for larger values the coupling softens \cite{Brustein:2002mp}. After reaching $\Phi_1$, the field reverses direction and crosses the production window again, generating more $\chi$ particles and, therefore, increasing $n_\chi$. Thus, it now has to climb a steeper potential, reaching an amplitude $\Phi_2<\Phi_1$. The process continues until the ever-decreasing amplitude becomes comparable to the production window (see Fig.~\ref{trap1}). At this moment particle production stops. \begin{figure} \hspace{2cm}\psfig{file=trap1.eps,width=3in} \caption{% Illustration of the trapping of a modulus crossing the ESP during particle production.
Outside the production window, the modulus oscillates in a linear interaction potential, which steepens progressively due to the production of more $\chi$-particles every time the modulus crosses the ESP.} \label{trap1} \end{figure} After the end of particle production, $\langle\chi^2\rangle$ remains roughly constant during an oscillation and the modulus continues oscillating in the quadratic interaction potential. Studying this oscillation, we found that, due to the Universe expansion, the amplitude and frequency decrease as $\Phi\sim\Delta\phi/a$ and $\langle\overline{\chi^2}\rangle\propto a^{-2}$ \cite{ours}, where the scale factor $a(t)$ is normalised to unity at the end of particle production. Hence, the quadratic potential becomes gradually ``diluted'' due to the Universe expansion (see Fig.~\ref{trap2}). These scalings mean that the kinetic density of the oscillating modulus scales as \mbox{$\rho_{\rm osc}\propto a^{-4}$}. When $\rho_{\rm osc}$ becomes redshifted below $V_0$, trapped inflation begins. \begin{figure} \hspace{2cm}\psfig{file=trap2.eps,width=3in} \caption{% Illustration of the trapping of a modulus crossing the ESP after particle production. Inside the production window, the modulus oscillates in a quadratic interaction potential, which becomes gradually diluted due to the Universe expansion.} \label{trap2} \end{figure} The above process accommodates a wide range of initial conditions (provided overshooting of the ESP is avoided) because any kinetic density in excess of $V_0$ is depleted before the onset of trapped inflation. Trapped inflation exponentially dilutes the density of the $\chi$-particles, which quickly redshifts $V_{\rm int}$. Therefore, after a rather limited number of e-foldings of trapped inflation, the modulus is released from the ESP.
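The quoted scalings combine into the stated redshift of the oscillation energy; the following one-line check is our own sketch:
\[
\rho_{\rm osc}\sim \frac{1}{2}\,g^2\langle\overline{\chi^2}\rangle\,\Phi^2
\;\propto\; a^{-2}\times a^{-2} \;=\; a^{-4}\,,
\]
i.e. the trapped oscillation redshifts like radiation; so, starting from a kinetic density of order $\frac{1}{2}\dot\phi_0^2$ at the end of particle production, trapped inflation begins once the Universe has expanded by a factor of order $(\dot\phi_0^2/2V_0)^{1/4}$.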
\subsection{Slow-Roll Inflation} Since the ESP is located at a locally flat region of the potential, there is a chance that, after $V_{\rm int}$ becomes negligible, the modulus drives a period of slow-roll inflation while sliding away from the ESP. To study this period we need to quantify the deformation of the scalar potential due to an ESP. The appearance of an ESP generates either a local extremum or a flat inflection point at $\phi_0$. In all cases, in the vicinity of the ESP, the scalar potential can be approximated by a cubic polynomial \cite{ours}. Hence, the characteristics of the potential depend only on $m_\phi^2\equiv V''(\phi_0)$ and $V^{\textrm{\tiny(3)}}_0\equiv V'''(\phi_0)$. In fact, we can parametrise the deformation of the scalar potential using \cite{ours} \begin{equation} \label{xi0} |V^{\textrm{\tiny{(3)}}}_0|\sim \xi^2\sigma_0^3H_*^2/m_P\,, \end{equation} where $\sigma_0\equiv\sigma(\phi_0)$ and $H_*$ is the Hubble parameter during inflation: $H_*^2\approx V_0/3m_P^2$. The $\xi$ parameter accounts for the strength of the symmetry point; the smaller the $\xi$, the stronger the deformation and the wider the inflationary plateau. The requirement that the deformation become negligible at distances larger than $m_P$ results in the lower bound $\xi>1$, which also guarantees that the modulus does not overshoot the ESP \cite{ours}. By studying inflation after the modulus escapes trapping, we have obtained the following results, depending on the ESP morphology \cite{ours}. In each case, one has to achieve enough inflationary e-folds to solve the horizon and flatness problems, while also taking care that the curvature perturbations due to the modulus do not exceed the observed level. Consider first the case of a flat inflection point. In this case, we can have enough e-foldings of slow-roll inflation if $|V^{\textrm{\tiny{(3)}}}_0|<g^2H_*\ll H_*$.
The case of a local minimum is indistinguishable from the above if $m_\phi^2<g^2H_*|V^{\textrm{\tiny{(3)}}}_0|$. If the opposite is true, then the modulus becomes trapped in the local minimum and must escape through tunnelling. Afterwards, the modulus can drive a period of slow-roll inflation with total number of e-foldings given by $N\sim(H_*/m_\phi)^2$. Hence, to solve the horizon and flatness problems we need $m_\phi\ll H_*$. Finally, in the case of a local maximum, after the end of trapping, one can have a phase of fast/slow-roll inflation provided $|m_\phi|\mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$<$}}{\sim}$~}} H_*$. Thus, we have found that, in all cases, {\em enough slow-roll inflation to solve the horizon and flatness problems of the HBB is attainable provided} $|m_\phi|, |V^{\textrm{\tiny{(3)}}}_0|<H_*$. Choosing for illustrative purposes an intermediate value for the Hubble scale, $H_*\sim 1$~TeV, we have found that one can achieve enough inflationary e-foldings (up to $N_{\rm max}\sim 10^4$) without producing an excessive curvature perturbation if $1<\xi^2<10^4$. Thus, {\em there is ample parameter space for slow-roll inflation to occur after the modulus escapes trapping at the ESP}. Note also that, while $H_*$ is determined by the location of the ESP in field space (by the vacuum density $V_0$), the values of $|m_\phi|$ and $|V^{\textrm{\tiny{(3)}}}_0|$ are due to the deformation of the scalar potential in the vicinity of the ESP, which is not directly related to $V_0$. Hence, the requirement that the latter are smaller than $H_*$ does not necessarily imply fine-tuning. \section{After the end of inflation} After inflation, the field rolls away from the ESP. Soon the influence of the ESP on the scalar potential diminishes and $V(\phi)\approx V_{\rm np}(\phi)$. The steepness of $V_{\rm np}$ results in the kinetic domination of the modulus density. As a result a period of kination occurs, during which the field equation is $\ddot\phi+3H\dot\phi\simeq 0$.
Hence, the density of the Universe scales as $\rho\simeq\frac{1}{2}\dot\phi^2\propto a^{-6}$ \cite{kination}. During kination, the scalar field is oblivious to the particular form of the potential. Kination is terminated when the density of the decay products of a curvaton field dominates the kinetic density of the modulus \cite{curvreh}. Thus, the end of kination corresponds to reheating, with reheating temperature $T_{\rm reh}\sim\sqrt{H_{\rm reh}m_P}$, where $H_{\rm reh}$ is the Hubble parameter at reheating. After the onset of the HBB, the rolling scalar field is subject to cosmological friction \cite{eta,cosmofric}, which asymptotically freezes the field at the value $\phi_F/m_P\simeq\frac{1}{\sqrt 6}\ln(V_0/T_{\rm reh}^4)$. Note that this value depends on $T_{\rm reh}$ which, in turn, is determined by curvaton physics. The modulus remains frozen until the present, when it plays the role of quintessence. This guarantees that there is no dangerous variation of fundamental constants during the HBB. The evolution of the modulus until today is depicted in Fig.~\ref{kin}. \begin{figure} \hspace{2cm} \psfig{file=kination.eps,width=3in} \caption{% Illustration of the evolution of the modulus density $\rho_\phi$ and the density of the curvaton and its decay products $\rho_{\rm curv}\,$ with respect to the scale factor of the Universe $a$. During inflation, $\rho_{\rm curv}\,$ is subdominant and remains constant until, after the end of inflation (denoted by `end'), the curvaton begins oscillating (at the time denoted by `osc'). During the oscillations, $\rho_{\rm curv}\,$ scales as pressureless matter. Sometime afterwards (denoted by `dec') the curvaton decays into the thermal bath of the HBB.
This thermal bath dominates the Universe at reheating (denoted by `reh'), soon after which the modulus freezes (at the time denoted by `frz'), assuming constant potential density comparable to the density today.} \label{kin} \end{figure} \section{Quintessence} Since $\sigma_F\equiv\sigma(\phi_F)\sim(V_0/T_{\rm reh}^4)^{1/3}>1$, the modulus rolls to large values before freezing. At such values we can assume that the scalar potential is \begin{equation} \label{exps} V(\sigma)\simeq\frac{C_n}{\sigma^n} \quad\Rightarrow\quad V(\phi)\simeq C_ne^{-b\phi/m_P}\,, \end{equation} where $C_n$ is a density scale and $b=\sqrt{\frac{2}{3}}\,\,n$. The above is a typical uplift potential introduced by flux compactifications, as discussed below. If the modulus is to account for the required dark energy, it must satisfy the coincidence requirement: $V(\sigma_F)\simeq\Omega_\Lambda\rho_0$, where $\Omega_\Lambda\simeq 0.73$ is the dark energy density parameter and $\rho_0$ is the critical density at present. Hence, \begin{equation} \label{Treh} T_{\rm reh}\sim V_0^{1/4} \left(\rho_0/C_n\right)^{\sqrt{3/8n^2}}. \end{equation} Thus, the density scale $C_n$ is determined by $T_{\rm reh}$ which, in turn, is determined by curvaton physics. An upper bound on $C_n$ is obtained by demanding that reheating occurs before big bang nucleosynthesis (BBN): \begin{equation} \label{Cbound} C_n\mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$<$}}{\sim}$~}}\rho_0\left(V_0^{1/4}/T_{\rm BBN}\right)^{2n\sqrt{2/3}}, \end{equation} where $T_{\rm BBN}\sim 1$~MeV is the temperature at BBN. The scalar potential in Eq.~(\ref{exps}) may have a multitude of origins. For example, using the volume modulus, we may consider a stack of $\overline{D3}$-branes located at the tip of a Klebanov-Strassler throat.
The uplift potential is \cite{Giddings:2001yu} \begin{equation} \delta V\sim\exp(-8\pi K/3Mg_s)\,m_P^4/\sigma^2 \equiv C_2/\sigma^2\,, \end{equation} where $M$ and $K$, in the warp factor, are the units of RR and NS three-form fluxes. To satisfy Eq.~(\ref{Cbound}) we must have $C_2^{1/4}\lsim 10^{-20}m_P$. This can be attained by choosing the ratio of fluxes as $K/Mg_s\gsim 22$. Taking $g_s=0.1$, only twice as many units of $K$ flux as those of $M$ flux are needed. It is also possible to consider fluxes of gauge fields on $D7$-branes \cite{Burgess:2003ic}. In this case, the scalar potential obtains a contribution \begin{equation} \delta V\sim 2\pi E^2/\sigma^3 \equiv C_3/\sigma^3\,, \end{equation} where $E$ depends on the strength of the gauge fields considered. The constraint in Eq.~(\ref{Cbound}) now requires $C_3^{1/4}\mbox{\raisebox{-.9ex}{~$\stackrel{\mbox{$<$}}{\sim}$~}} 10^{-15}m_P\sim 1$~TeV. The future of the modulus after unfreezing depends on the steepness of the scalar potential, or equivalently on the value of $b$ in Eq.~(\ref{exps}). \noindent{$\bullet$} For $b\leq\sqrt 2$, the modulus dominates the Universe forever, leading to eternal acceleration. This results in future horizons, which pose a problem for the formulation of the S-matrix in string theory \cite{S}. \noindent{$\bullet$} For $\sqrt 2< b\leq\sqrt 3$, the modulus dominates the Universe but results only in a brief accelerated expansion period. Such is the fate of the $n=2$ case. \noindent{$\bullet$} For $\sqrt 3<b\leq\sqrt 6$, the modulus does not dominate the Universe, albeit causing a brief period of accelerated expansion. Afterwards the modulus density remains at a constant ratio with the background matter density. This is the fate of the $n=3$ case. \noindent{$\bullet$} For $b>\sqrt 6$, the modulus does not cause any accelerated expansion and so cannot be used as quintessence. 
After unfreezing, the modulus rolls fast down the quintessential tail of the scalar potential with its density asymptotically approaching kinetic domination (and subsequently freezing at a value larger than $\sigma_F$ \cite{eta}). This case corresponds to $n>3$. The brief acceleration period caused by the unfreezing modulus is due to the fact that the modulus oscillates around an attractor solution \cite{oscil}, which in itself does not result in acceleration \cite{jose}. In Ref.~\cite{cline} it was claimed that brief acceleration occurs if $\sqrt 2<b\leq 2\sqrt 6$, which corresponds to the range $\sqrt 3<n\leq 6$. More recent studies, however, have reduced this range. Brief acceleration in the range $\sqrt 2<b\leq\sqrt 3$ has been confirmed by Ref.~\cite{rosen}. This includes the $n=2$ case, which corresponds to the most popular uplift potential. The range for $b$ was expanded further in Ref.~\cite{blais}, where it is shown that brief acceleration can explain the observations at least up to \mbox{$b\simeq\frac{3}{4}\sqrt 6\approx 1.837$}. Since the data are interpreted using a number of priors, we believe that the $n=3$ case is still marginally acceptable. \section{Conclusions} Quintessential inflation can be achieved in string theory with flux compactifications, using as inflaton a string modulus which rolls down its runaway potential. Inflation is due to the presence of an enhanced symmetry point (ESP), which traps the modulus and creates a locally flat region over which the modulus can slow-roll. There is ample parameter space for successful inflation provided $m_\phi,|V^{\textrm{\tiny{(3)}}}_0|\ll H_*$. Trapping accommodates a multitude of initial conditions, provided overshooting the ESP is avoided. After inflation, the modulus becomes (again) kinetically dominated, causing a period of kination. 
Reheating is due to the domination of the Universe by the decay products of a curvaton field, which also accounts for the correct amplitude of the curvature perturbations in the Universe. During the Hot Big Bang, the modulus freezes and remains constant until the present. At the frozen value, the potential is dominated by an uplift term of exponential form. This residual potential density begins to dominate today, when the modulus unfreezes, leading to a brief period of late inflation. Curvaton physics fixes the reheating temperature and determines the value of the frozen modulus $\sigma_F$ and the density scale $C_n$ in the uplift potential. Coincidence and BBN constraints on $C_n$ allow realistic values, much less fine-tuned than the cosmological constant $\Lambda$ in the $\Lambda$CDM model. \section*{Acknowledgements} This work was done with Juan C. Bueno S\'{a}nchez and was supported by the E.U. Marie Curie Research and Training Network {\sl ``UniverseNet"} (MRTN-CT-2006-035863) and by PPARC (PP/D000394/1). I am grateful to the Royal Society for supporting my participation in the IDM2006 conference.
\section{Introduction} This paper is concerned with the theory of Kazhdan-Lusztig cells in a Coxeter system $(W,S)$, following the general setting of Lusztig \cite{bible}. This involves a weight function $L$ which is an integer-valued function on $W$ such that $L(ww')=L(w)+L(w')$ whenever $\ell(ww')=\ell(w)+\ell(w')$ ($\ell$ is the usual length function on $W$). We shall only consider weight functions such that $L(w)>0$ for all $w\neq 1$. The case where $L=\ell$ is known as the equal parameter case. The partition of $W$ into cells is known to play an important role in the study of the representations of the corresponding Hecke algebra. Here, we are primarily concerned with affine Weyl groups. Let $W$ be an irreducible affine Weyl group, together with a weight function $L$. Let $W_{0}$ be the finite Weyl group associated to $W$. In the equal parameter case, Shi \cite{Shi1} described the lowest two-sided cell $c_{0}$ with respect to the preorder $\leq_{LR}$ using the Lusztig $a$-function (see \cite{bible} for further details on the $a$-function) as follows: $$c_{0}=\{w\in W|\ a(w)=\ell(w_{0})\}$$ where $w_{0}$ is the longest element of $W_{0}$. Then, he gave an upper bound for the number of left cells contained in $c_{0}$, namely $|W_{0}|$. In \cite{Shi2}, he showed that this bound is exact and he described the left cells in $c_{0}$. His proof involved some deep properties of the Kazhdan-Lusztig polynomials in the equal parameter case, such as the positivity of the coefficients. Using the positivity property, Lusztig \cite{bible} proved, in the equal parameter case, a number of results concerning the $a$-function, such as \begin{equation*} \text{if $z\leq_{LR}z'$ then $a(z)\geq a(z')$} \tag{P4} \end{equation*} and \begin{equation*} \text{if } z\leq_{LR} z' \text{ and } a(z)=a(z') \text{ then } z \sim_{LR} z'. \tag{P11} \end{equation*} Since we know that $a(z)\leq \ell(w_{0})$ for any $z\in W$, this shows that $c_{0}$ is the lowest two-sided cell with respect to $\leq_{LR}$. 
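For orientation, we recall the simplest affine case; this is a standard computation, not taken from the references above. Let $W$ be of type $\tilde{A}_{1}$, that is, the infinite dihedral group with generators $s_{1},s_{2}$, in the equal parameter case. Here $W_{0}=\{1,s_{1}\}$, so $w_{0}=s_{1}$ and $\ell(w_{0})=1$, and the description above becomes $$c_{0}=\{w\in W|\ a(w)=1\}=W-\{1\}.$$ One can check that $c_{0}$ splits into exactly $|W_{0}|=2$ left cells, consisting of the elements whose (unique) reduced expression ends in $s_{1}$, respectively in $s_{2}$.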
In the unequal parameter case, the positivity of the coefficients of the Kazhdan-Lusztig polynomials does not hold anymore. However, Bremke \cite{Bremke} and Xi \cite{Xi} proved that the lowest two-sided cell $c_{0}$ can be described in the same way as in \cite{Shi1}, using the general $a$-function. Let $J$ be a subset of $S$ and consider the corresponding parabolic subgroup $W_{J}=\langle J\rangle$. We denote by $w_{J}$ the longest element in $W_{J}$. Let $$\displaystyle \tilde{\nu}=\max_{J\subset S, W_{J}\simeq W_{0}}L(w_{J}).$$ Then, we have $$c_{0}=\{w\in W|\ a(w)=\tilde{\nu}\}.$$ Let $\textbf{S}$ be the set which consists of all the subsets $J$ of $S$ such that $W_{J}\simeq W_{0}$ and $L(w_{J})=\tilde{\nu}$. Then we have the following alternative description of $c_0$, and it is this description with which we work throughout the paper: $$c_{0}=\{w\in W|\ w=x.w_{J}.y,\ x,y\in W,\ J\in \textbf{S}\}$$ where for any $x,y,w\in W$ the notation $w=x.y$ means that $w=xy$ and $\ell(w)=\ell(x)+\ell(y)$ (and similarly $w=x.y.z$ for $w,x,y,z\in W$). In \cite{Bremke}, Bremke showed that $c_{0}$ contains at most $|W_{0}|$ left cells. She proved that this bound is exact when the parameters are coming from a graph automorphism and, again, the proof involved some deep properties of the Kazhdan-Lusztig polynomials in the equal parameter case. In this paper, we will prove that $|W_{0}|$ is the exact bound for the number of left cells in $c_{0}$ for any choice of parameters; see Theorem \ref{ltsc}. The method is based on a variation of the induction of left cells as in \cite{Geck}. The main new ingredient is a geometric argument to find a ``local'' bound on the degree of the structure constants of the Hecke algebra of $W$ with respect to the standard basis; see Theorem \ref{bound}. Our proof works uniformly for any choice of parameters. 
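To illustrate this description in the smallest case (a direct check, using only the definitions above): let $W$ be of type $\tilde{A}_{1}$ with generators $s_{1},s_{2}$, $W_{0}=\langle s_{1}\rangle$ and $L(s_{1})>L(s_{2})$. Then $\tilde{\nu}=L(s_{1})$ and $\textbf{S}=\{\{s_{1}\}\}$, so that $$c_{0}=\{w\in W|\ w=x.s_{1}.y,\ x,y\in W\}=W-\{1,s_{2}\},$$ the set of elements whose reduced expression contains $s_{1}$; by Theorem \ref{ltsc}, it contains exactly $|W_{0}|=2$ left cells.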
\section{Multiplication of the standard basis and geometric realization} In this section, we introduce the Hecke algebra of a Coxeter group with respect to a weight function. Then, we present a geometric realization of an affine Weyl group. Finally, using this geometric realization, we give a bound on the degree of the structure constants with respect to the standard basis. \begin{what}{\bf Weight functions, Hecke algebras.} In this section, $(W,S)$ denotes an arbitrary Coxeter system. The basic reference is \cite{bible}. Let $L$ be a weight function. In this paper, we will only consider the case where $L(w)>0$ for all $w\neq 1$. A weight function is completely determined by its value on $S$ and must only satisfy $L(s)=L(t)$ if $s$ and $t$ are conjugate. Let $\mathcal{A}=\mathbb{Z}[v,v^{-1}]$ and $\mathcal{H}$ be the generic Iwahori-Hecke algebra associated to $(W,S)$ with parameters $\{L(s)\mid s\in S\}$. $\mathcal{H}$ has an $\mathcal{A}$-basis $\{T_{w}\mid w\in W\}$, called the standard basis, with multiplication given by \begin{equation*} T_{s}T_{w}= \begin{cases} T_{sw}, & \mbox{if } sw>w,\\ T_{sw}+(v^{L(s)}-v^{-L(s)})T_{w}, &\mbox{if } sw<w, \end{cases} \end{equation*} (here, ``<'' denotes the Bruhat order) where $s\in S$ and $w\in W$. Let $x,y\in W$. We write $$\displaystyle T_{x}T_{y}=\sum_{z\in W}f_{x,y,z}T_{z}$$ where $f_{x,y,z}\in\mathcal{A}$ are the structure constants with respect to the standard basis. In this paper, we will be mainly interested in the case where $W$ is an irreducible affine Weyl group (with corresponding Weyl group $W_{0}$). In that case, it is known that there is a global bound for the degrees of the structure constants $f_{x,y,z}$. Namely, set $\nu=\ell(w_{0})$, $\tilde{\nu}=L(w_{0})$, where $w_{0}$ is the longest element of $W_{0}$, and $\xi_{s}=v^{L(s)}-v^{-L(s)}$ for $s\in S$. 
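As a small illustration of these notations (a direct computation from the multiplication rule above), take $x=y=s$ for some $s\in S$. Since $ss=1<s$, we get $$T_{s}T_{s}=T_{1}+\xi_{s}T_{s},$$ that is, $f_{s,s,1}=1$ and $f_{s,s,s}=\xi_{s}$. Thus, as a polynomial in $\xi_{s}$, the degree of $f_{s,s,s}$ is $1$, while its degree in $v$ is $L(s)$; this is consistent with the global bounds $\nu\geq 1$ and $\tilde{\nu}\geq L(s)$.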
In \cite{Bremke}, Bremke proved that \begin{enumerate} \item As a polynomial in $\xi_{s}$, $s\in S$, the degree of $f_{x,y,z}$ is at most $\nu$; \item The degree of $f_{x,y,z}$ in $v$ is at most $\tilde{\nu}$. \end{enumerate} Our aim will be to find a ``local'' bound for the degrees of these polynomials, which depends on $x,y \in W$. For this purpose, we will work with a geometric realization of $W$, as described in the next section. \end{what} \begin{what}{\bf Geometric realization.} \label{gem} In this section, we present a geometric realization of an affine Weyl group. The basic references are \cite{Bremke,Lus1,Xi}. \\ Let $V$ be a Euclidean space of finite dimension $r\geq 1$. Let $\Phi$ be an irreducible root system of rank $r$ and $\check{\Phi}\subset V^{*}$ the dual root system. We denote the coroot corresponding to $\alpha\in\Phi$ by $\check{\alpha}$ and we write $\langle x,y\rangle$ for the value of $y\in V^{*}$ at $x\in V$. Fix a set of positive roots $\Phi^{+}\subset \Phi$. Let $W_{0}$ be the Weyl group of $\Phi$. For $\alpha\in\Phi^{+}$ and $n\in \mathbb{Z}$, we define a hyperplane $$H_{\alpha,n}=\{x\in V\mid \langle x,\check{\alpha}\rangle=n\}.$$ Let $$\mathcal{F}=\{H_{\alpha,n}\mid \alpha\in \Phi^{+}, n\in\mathbb{Z}\}.$$ Any $H\in\mathcal{F}$ defines an orthogonal reflection $\sigma_{H}$ with fixed point set $H$. We denote by $\Omega$ the group generated by all these reflections, and we regard $\Omega$ as acting on the right on $V$. An alcove is a connected component of the set $$V-\underset{H\in\mathcal{F}}{\bigcup}H.$$ $\Omega$ acts simply transitively on the set of alcoves $X$. Let $S$ be the set of $\Omega$-orbits in the set of faces (codimension 1 facets) of alcoves. Then $S$ consists of $r+1$ elements which can be represented as the $r+1$ faces of an alcove. If a face $f$ is contained in the orbit $t\in S$, we say that $f$ is of type $t$. Let $s\in S$. We define an involution $A\rightarrow sA$ of $X$ as follows. 
Let $A\in X$; then $sA$ is the unique alcove distinct from $A$ which shares with $A$ a face of type $s$. The set of such maps generates a group of permutations of $X$ which is a Coxeter group $(W,S)$. In our case, it is the affine Weyl group usually denoted $\tilde{W}_{0}$. We regard $W$ as acting on the left on $X$. It acts simply transitively and commutes with the action of $\Omega$. Let $L$ be a weight function on $W$. In \cite{Bremke}, Bremke showed that if a hyperplane $H$ in $\mathcal{F}$ supports faces of type $s,t\in S$ then $s$ and $t$ are conjugate in $W$, which implies that $L(s)=L(t)$. Thus we can associate a weight $c_{H}\in\mathbb{Z}$ to $H\in \mathcal{F}$ such that $c_{H}=L(s)$ if $H$ supports a face of type $s$. Assume that $W$ is not of type $\tilde{A}_{1}$ or $\tilde{C}_{r}$ ($r\geq 2$), and let $H,H'$ be two parallel hyperplanes such that $H$ supports a face of type $s$ and $H'$ a face of type $s'$. Then, Bremke \cite{Bremke} proved that $s$ and $s'$ are conjugate. In other words, if $W$ is not of type $\tilde{A}_{1}$ or $\tilde{C}_{r}$ then any two parallel hyperplanes have the same weight. In this paper, we will often have to distinguish the case where $W$ is of type $\tilde{A}_{1}$ or $\tilde{C}_{r}$, because of this property. In the case where $W$ is of type $\tilde{C}_{r}$ with generators $s_{1},...,s_{r+1}$ and $W_{0}$ is generated by $s_{1},...,s_{r}$, by symmetry of the Dynkin diagram, we can assume that $c_{s_{1}}\geq c_{s_{r+1}}$. Similarly, if $W$ is of type $\tilde{A}_{1}$ with generators $s_{1},s_{2}$ and $W_{0}=\langle s_{1}\rangle$, we can assume that $c_{s_{1}}\geq c_{s_{2}}$. For a $0$-dimensional facet $\lambda$ of an alcove, define $$m(\lambda)=\underset{H\in\mathcal{F},\ \lambda\in H}{\sum}c_{H}.$$ We say that $\lambda$ is a special point if $m(\lambda)$ is maximal. Let $T$ be the set of all special points. 
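To get a feeling for these definitions, consider the following standard example, which is easily checked directly. Let $W$ be of type $\tilde{A}_{2}$ with the equal parameter weight function $L=\ell$, so that $c_{H}=1$ for all $H\in\mathcal{F}$. Through every $0$-dimensional facet $\lambda$ pass exactly three hyperplanes, one for each positive root, so that $$m(\lambda)=3=\ell(w_{0})$$ and every such $\lambda$ is special. In general, both the number and the weights of the hyperplanes through $\lambda$ matter.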
For $\lambda\in T$, denote by $W_{\lambda}$ the stabilizer of the set of alcoves containing $\lambda$ in their closure with respect to the action of $W$ on $X$. It is a maximal parabolic subgroup of $W$. Let $S_{\lambda}=S\cap W_{\lambda}$ and write $w_{\lambda}$ for the longest element of $W_{\lambda}$. Note that, following \cite{Bremke}, and with our convention for $\tilde{C}_{r}$ and $\tilde{A}_{1}$, $0\in V$ is a special point. Moreover, if $\lambda=0\in V$, the definition of $W_{\lambda}$ is consistent with the definition of $W_{0}$ given before. Let $\lambda$ be a special point. A quarter with vertex $\lambda$ is a connected component of $$V-\underset{H,\ \lambda\in H}{\bigcup}H.$$ It is an open simplicial cone. It has $r$ walls. Let $H=H_{\alpha,n}\in \mathcal{F}$. Then $H$ divides $V-H$ into two half-spaces \begin{align*} V_{H}^{+}&=\{x\in V\mid \langle x,\check{\alpha}\rangle>n\},\\ V_{H}^{-}&=\{x\in V\mid \langle x,\check{\alpha}\rangle<n\}. \end{align*} Finally, let $A_{0}$ be the fundamental alcove defined by $$A_{0}=\{x\in V\mid 0<\langle x,\check{\alpha}\rangle<1 \text{ for all $\alpha\in\Phi^{+}$}\}.$$ Let $A\in X$, $w\in W$. It is well known that the length of $w$ is the number of hyperplanes which separate $A$ and $wA$. \end{what} \begin{what}{\bf Multiplication of the standard basis.} Let $(W,S)$ be an irreducible affine Weyl group associated to the Weyl group $(W_{0},S_{0})$. Recall that, for $x,y\in W$, we have $$\displaystyle T_{x}T_{y}=\sum_{z\in W}f_{x,y,z}T_{z}.$$ After the preparations in Section \ref{gem}, we will now be able to find a ``local'' bound for the degree of the polynomials $f_{x,y,z}$ which depends on $x,y\in W$. For two alcoves $A,B\in X$, let $$H(A,B)=\{H\in\mathcal{F}\mid H \text{ separates } A \text{ and } B\}.$$ Let $\overline{\mathcal{F}}$ be the set of directions of hyperplanes in $\mathcal{F}$. 
For $i\in \overline{\mathcal{F}}$, we denote by $\mathcal{F}_{i}$ the set of all hyperplanes $H\in\mathcal{F}$ of direction $i$. The connected components of $$V-\underset{H\in\mathcal{F}_{i}}{\bigcup}H$$ are called ``strips of direction $i$''. We denote by $U_{i}(A)$ the unique strip of direction $i$ which contains $A$, for $A\in X$. There exists a unique $\alpha\in\Phi^{+}$ and a unique $n\in\mathbb{Z}$ such that $$U_{i}(A)=\{\mu\in V\mid n<\langle\mu,\check{\alpha}\rangle<n+1\},$$ in other words $$U_{i}(A)=V_{H_{\alpha,n}}^{+}\cap V_{H_{\alpha,n+1}}^{-}.$$ We say that $U_{i}(A)$ is defined by $H_{\alpha,n}$ and $H_{\alpha,n+1}$. Note that our definition of strips is slightly different from the one in \cite{Bremke}, where the strips were the connected components of $$V-\underset{ c_{H}=c_{i}}{\underset{H\in\mathcal{F}_{i}}{\bigcup}}H \quad\text{where}\quad c_{i}=\underset{H\in\mathcal{F},\ \overline{H}=i}{\max}c_{H}. $$ In fact, as noticed in Section \ref{gem}, if $W$ is not of type $\tilde{A}_{1}$ or $\tilde{C}_{r}$, then the two definitions are the same. Let $\sigma\in\Omega$. Let $i,j\in \overline{\mathcal{F}}$ be such that $\sigma(i)=j$. We have $$(U_{i}(A))\sigma=U_{j}(A\sigma)$$ and the strip $U_{j}(A\sigma)$ is defined by the two hyperplanes $(H_{\alpha,n})\sigma$ and $(H_{\alpha,n+1})\sigma$. \\ Let $x,y\in W$; then we define \begin{align*} H_{x,y}&=\{H\in \mathcal{F}\mid H\in H(A_{0},yA_{0})\cap H(yA_{0},xyA_{0})\},\\ I_{x,y}&=\{i\in\overline{\mathcal{F}}\mid\exists H, \overline{H}=i, H\in H_{x,y}\}. \end{align*} For $i\in I_{x,y}$, let $$c_{x,y}(i)=\underset{H\in\mathcal{F},\ \overline{H}=i,\ H\in H_{x,y}}{\text{max }} c_{H}$$ and $$c_{x,y}=\underset{i\in I_{x,y}}{\sum}c_{x,y}(i).$$ We are now ready to state the main result of this section. \begin{Th} \label{bound} Let $x,y\in W$ and $$T_{x}T_{y}=\underset{z\in W}{\sum}f_{x,y,z}T_{z}\ \ \text{ where $f_{x,y,z}\in\mathcal{A}$}.$$ Then, the degree of $f_{x,y,z}$ in $v$ is at most $c_{x,y}$. 
\end{Th} \end{what} \section{Proof of Theorem \ref{bound}} In order to prove Theorem \ref{bound}, we will need the following lemmas. \begin{Lem} \label{Lem1} Let $x,y\in W$ and $s\in S$ be such that $x<xs$ and $y<sy$. We have $$c_{xs,y}=c_{x,sy}.$$ \end{Lem} \begin{proof} Let $H_{s}$ be the unique hyperplane which separates $yA_{0}$ and $syA_{0}$. Since $x<xs$ and $y<sy$, one can see that \begin{align*} H(A_{0},yA_{0})\cup\{H_{s}\}&=H(A_{0},syA_{0}),\\ H(A_{0},yA_{0})\cap\{H_{s}\}&=\emptyset, \end{align*} and \begin{align*} H(syA_{0},xsyA_{0})\cup\{H_{s}\}&=H(yA_{0},xsyA_{0}),\\ H(syA_{0},xsyA_{0})\cap\{H_{s}\}&=\emptyset. \end{align*} Therefore we have \begin{align*} H_{x,sy}&=H(A_{0},syA_{0})\cap H(syA_{0},xsyA_{0})\\ &= (H(A_{0},yA_{0})\cup\{H_{s}\})\cap H(syA_{0},xsyA_{0})\\ &= (H(A_{0},yA_{0})\cap H(syA_{0},xsyA_{0}))\cup (\{H_{s}\}\cap H(syA_{0},xsyA_{0}))\\ &=H(A_{0},yA_{0})\cap H(syA_{0},xsyA_{0}) \end{align*} and \begin{align*} H_{xs,y}&=H(yA_{0},xsyA_{0})\cap H(A_{0},yA_{0})\\ &=(H(syA_{0},xsyA_{0})\cup\{H_{s}\})\cap H(A_{0},yA_{0})\\ &=(H(syA_{0},xsyA_{0})\cap H(A_{0},yA_{0}))\cup(\{H_{s}\}\cap H(A_{0},yA_{0}))\\ &=H(syA_{0},xsyA_{0})\cap H(A_{0},yA_{0})\\ &=H_{x,sy}. \end{align*} Thus $c_{x,sy}=c_{xs,y}$. \end{proof} \begin{Lem} \label{lem0} Let $x,y\in W$ and $s\in S$ be such that $xs<x$ and $sy<y$. We have $$c_{xs,sy}\leq c_{x,y}.$$ \end{Lem} \begin{proof} Let $H_{s}$ be the unique hyperplane which separates $yA_{0}$ and $syA_{0}$. One can see that $$H_{xs,sy}=H_{x,y}-\{H_{s}\}.$$ The result follows. \end{proof} \begin{Lem} \label{Lem2} Let $x,y\in W$ and $s\in S$ be such that $xs<x$ and $sy<y$. Let $H_{s}$ be the unique hyperplane which separates $yA_{0}$ and $syA_{0}$. Then we have $$\overline{H_{s}}\notin I_{xs,y} \quad \text{and}\quad \overline{H_{s}}\in I_{x,y}.$$ \end{Lem} \begin{proof} We have \begin{align*} sy<y&\Longrightarrow H_{s}\in H(A_{0},yA_{0}),\\ xs<x&\Longrightarrow H_{s}\in H(yA_{0},xyA_{0}). 
\end{align*} Thus $H_{s}\in H_{x,y}$ and $\overline{H}_{s}\in I_{x,y}$. Let $\alpha_{s}\in\Phi^{+}$ and $n_{s}\in\mathbb{Z}$ be such that $H_{s}=H_{\alpha_{s},n_{s}}$. Assume that $n_{s}\geq 1$ (the case where $n_{s}\leq 0$ is similar). Since $H_{s}\in H(A_{0},yA_{0})$ and $yA_{0}$ has a facet contained in $H_{s}$, we have $$n_{s}<\langle\mu,\check{\alpha}_{s}\rangle<n_{s}+1\text{ for all $\mu\in yA_{0}$}.$$ Therefore, for all $m>n_{s}$, we have $H_{\alpha_{s},m}\notin H(A_{0},yA_{0})$. Now, since $xs<x$, we have $$xsyA_{0}\subset \{\mu\in V\mid n_{s}<\langle\mu,\check{\alpha}_{s}\rangle\}.$$ Therefore, for all $m\leq n_{s}$, we have $H_{\alpha_{s},m}\notin H(yA_{0},xsyA_{0})$. Thus, there is no hyperplane parallel to $H_{s}$ in $H_{xs,y}$, as required. \end{proof} We keep the setting of the previous lemma. We denote by $\sigma_{s}$ the reflection with fixed point set $H_{s}$. Assume that $I_{xs,y}\neq \emptyset$ and let $i\in I_{xs,y}$. Recall that $U_{i}(yA_{0})$ is the unique strip of direction $i$ which contains $yA_{0}$. Since $i\in I_{xs,y}$ we have $$A_{0}\not\subset U_{i}(yA_{0})\text{ and } xsyA_{0}\not\subset U_{i}(yA_{0}).$$ One can see that one and only one of the hyperplanes which define $U_{i}(yA_{0})$ lies in $H_{xs,y}$. We denote this hyperplane by $H^{(i)}$. Let $H\in H_{xs,y}$. By the previous lemma we know that $H$ is not parallel to $H_{s}$. Consider the 4 connected components of $V-\{H,H_{s}\}$. We denote by $E_{A_{0}}$, $E_{yA_{0}}$, $E_{syA_{0}}$ and $E_{xsyA_{0}}$ the connected components which contain, respectively, $A_{0}$, $yA_{0}$, $syA_{0}$ and $xsyA_{0}$. Assume that $(H)\sigma_{s}\neq H$. 
Then, we have either $$(H)\sigma_{s}\cap E_{yA_{0}}\neq\emptyset \text{ and }(H)\sigma_{s}\cap E_{A_{0}}\neq\emptyset$$ or $$(H)\sigma_{s}\cap E_{xsyA_{0}}\neq\emptyset \text{ and } (H)\sigma_{s}\cap E_{syA_{0}}\neq\emptyset.$$ Furthermore, in the first case, $(H)\sigma_{s}$ separates $E_{xsyA_{0}}$ and $E_{syA_{0}}$, and, in the second case, $(H)\sigma_{s}$ separates $E_{yA_{0}}$ and $E_{A_{0}}$. In particular, we have \begin{align*} (H)\sigma_{s}\cap E_{yA_{0}}\neq\emptyset \quad&\Longrightarrow\quad (H)\sigma_{s}\in H(syA_{0},xsyA_{0})\\ (H)\sigma_{s}\cap E_{xsyA_{0}}\neq\emptyset\quad &\Longrightarrow\quad (H)\sigma_{s}\in H(A_{0},yA_{0}). \end{align*} Moreover, we see that \begin{align*} \label{one} (H)\sigma_{s}\cap E_{yA_{0}}\neq\emptyset\quad &\Longrightarrow\quad (H)\sigma_{s}\in H(syA_{0},xsyA_{0}) \notag\\ &\Longrightarrow \quad (H)\sigma_{s}\in H(yA_{0},xsyA_{0}), \end{align*} since $H_{s}$ is the only hyperplane in $H(yA_{0},syA_{0})$ and $(H)\sigma_{s}\neq H_{s}$. We will say that $H\in H_{xs,y}$ is of $s$-type $1$ if $(H)\sigma_{s}\cap E_{yA_{0}}\neq \emptyset$ and of $s$-type 2 if $(H)\sigma_{s}\cap E_{xsyA_{0}}\neq \emptyset$. To sum up, we have \begin{enumerate} \item[-] if $H$ is of $s$-type 1 then $(H)\sigma_{s}\in H(yA_{0},xsyA_{0})$; \item[-] if $H$ is of $s$-type 2 then $(H)\sigma_{s}\in H(A_{0},yA_{0})$. \end{enumerate} We illustrate this result in Figure 1. Note that if $H, H'\in H_{xs,y}$ are parallel, then they have the same type. 
\begin{center} \begin{pspicture}(-5,3.3)(5,-3.3) \psset{unit=0.75cm} \psline(-5.5,0)(-1,0) \psline(6,0)(1,0) \psline(-2,3)(-4,-3) \psline[linestyle=dashed](-2,-3)(-4,3) \psline(-1.5,1)(-2,1) \psline(-1.5,1)(-1.75,1.5) \psline(-2,1)(-1.75,1.5) \rput(-1.625,0.8){$xsyA_{0}$} \psline(-1.5,-1)(-2,-1) \psline(-1.5,-1)(-1.75,-1.5) \psline(-2,-1)(-1.75,-1.5) \rput(-1.625,-0.7){$xyA_{0}$} \psline(-1,-2.5)(-1,-3) \psline(-1,-2.5)(-0.5,-2.75) \psline(-1,-3)(-0.5,-2.75) \rput(-0.7,-3.15){$A_{0}$} \psline(-5,0.5)(-5,-0.5) \psline(-4.5,0)(-5,0.5) \psline(-5,-0.5)(-4.5,0) \rput(-4.3,0.48){$yA_{0}$} \rput(-4.3,-0.45){$syA_{0}$} \rput(-0.7,0){$H_{s}$} \rput(-1.7,3){$H$} \rput(-4.8,3){$(H)\sigma_{s}$} \rput(-0.7,2){$E_{xsyA_{0}}$} \rput(-0.7,-2){$E_{A_{0}}$} \rput(-5,-2){$E_{syA_{0}}$} \rput(-5,2){$E_{yA_{0}}$} \rput(-2.9,4){$s$-type 1} \rput(3.6,4){$s$-type 2} \psline(4.75,1)(4.25,1) \psline(4.75,1)(4.5,1.5) \psline(4.25,1)(4.5,1.5) \rput(4.625,0.8){$xsyA_{0}$} \psline(4.75,-1)(4.25,-1) \psline(4.75,-1)(4.5,-1.5) \psline(4.25,-1)(4.5,-1.5) \rput(4.625,-0.72){$xyA_{0}$} \psline(5.25,-2.5)(5.25,-3) \psline(5.25,-2.5)(5.75,-2.75) \psline(5.25,-3)(5.75,-2.75) \rput(5.55,-3.15){$A_{0}$} \psline(1.55,0.5)(1.55,-0.5) \psline(2.05,0)(1.55,0.5) \psline(1.55,-0.5)(2.05,0) \rput(2.2,0.5){$yA_{0}$} \rput(2.2,-0.45){$syA_{0}$} \rput(6.25,0){$H_{s}$} \rput(5.3,3){$(H)\sigma_{s}$} \rput(2.25,3){$H$} \psline[linestyle=dashed](4.55,3)(2.55,-3) \psline(4.55,-3)(2.55,3) \rput(6.25,2){$E_{xsyA_{0}}$} \rput(6.25,-2){$E_{A_{0}}$} \rput(1.55,-2){$E_{syA_{0}}$} \rput(1.55,2){$E_{yA_{0}}$} \rput(0.5,-4){FIGURE 1. $s$-type 1 and $s$-type 2 hyperplanes} \end{pspicture} \end{center} \begin{Lem} \label{Lem3} Let $x,y\in W$ and $s\in S$ be such that $xs<x$ and $sy<y$. Let $H_{s}$ be the unique hyperplane which separates $yA_{0}$ and $syA_{0}$ and let $\sigma_{s}$ be the corresponding reflection. The following holds. \begin{enumerate} \item[a)] Let $H\in\mathcal{F}$. 
We have \begin{eqnarray*} H\in H(yA_{0},xsyA_{0})\Rightarrow (H)\sigma_{s}\in H(yA_{0},xyA_{0}). \end{eqnarray*} \item[b)] Let $H\in H_{xs,y}$ be of $s$-type 1; then $H\in H_{x,y}$. \item[c)] Let $H\in H_{xs,y}$ be of $s$-type 2; then $(H)\sigma_{s}\in H_{x,y}$. \item[d)] Let $H\in H_{xs,y}$ be such that $(H)\sigma_{s}=H$; then $H\in H_{x,y}$. \end{enumerate} \end{Lem} \begin{proof} We prove (a). Let $H\in H(yA_{0},xsyA_{0})$. Then $(H)\sigma_{s}$ separates $yA_{0}\sigma_{s}$ and $xsyA_{0}\sigma_{s}$. But we have $$yA_{0}\sigma_{s}=syA_{0}\quad\text{and}\quad xsyA_{0}\sigma_{s}=xssyA_{0}=xyA_{0}.$$ Since $H\neq H_{s}$, we have $(H)\sigma_{s}\neq H_{s}$ and this implies that $(H)\sigma_{s}$ separates $yA_{0}$ and $xyA_{0}$. We prove (b). We have $H\in H_{xs,y}=H(A_{0},yA_{0})\cap H(yA_{0},xsyA_{0})$. The hyperplane $H$ is of $s$-type 1 thus $(H)\sigma_{s}\in H(yA_{0},xsyA_{0})$. Using (a) we see that $H\in H(yA_{0},xyA_{0})$. Therefore, $H\in H_{x,y}$. We prove (c). Since $H$ is of $s$-type 2 we have $(H)\sigma_{s}\in H(A_{0},yA_{0})$. Moreover, $H\in H(yA_{0},xsyA_{0})$ thus, using (a), we see that $(H)\sigma_{s}\in H(yA_{0},xyA_{0})$. Therefore, $(H)\sigma_{s}\in H_{x,y}$. We prove (d). Using (a), we see that $(H)\sigma_{s}=H\in H(yA_{0},xyA_{0})$ and since $H\in H_{xs,y}\subset H(A_{0},yA_{0})$, we get $H\in H_{x,y}$.\\ \end{proof} \begin{Lem} \label{Lem4} Let $x,y\in W$ and $s\in S$ be such that $xs<x$ and $sy<y$. Let $H_{s}$ be the unique hyperplane which separates $yA_{0}$ and $syA_{0}$. There is an injective map $\varphi$ from $I_{xs,y}$ to $I_{x,y}-\{ \overline{H_{s}} \}$. \end{Lem} \begin{proof} Let $\sigma_{s}$ be the reflection with fixed point set $H_{s}$. If $I_{xs,y}=\emptyset$ then the result is clear. We assume that $I_{xs,y}\neq\emptyset$. We define $\varphi$ as follows: for $i\in I_{xs,y}$, \begin{enumerate} \item if $(H^{(i)})\sigma_{s}\in H(A_{0},yA_{0})$ then set $\varphi(i)=\sigma_{s}(i)$; \item set $\varphi(i)=i$ otherwise. 
\end{enumerate} We need to show that $\varphi(i)\in I_{x,y}-\{\overline{H_{s}}\}$. The fact that $\varphi(i)\neq \overline{H_{s}}$ is a consequence of Lemma \ref{Lem2}, where we have seen that $\overline{H_{s}}\notin I_{xs,y}$. Indeed, $\varphi(i)$ is either $i$ or $\sigma_{s}(i)$; we have $i\neq \overline{H_{s}}$, and since $\sigma_{s}$ fixes the direction $\overline{H_{s}}$, we cannot have $\sigma_{s}(i)=\overline{H_{s}}$ either. \\ Let $i\in I_{xs,y}$ be such that $(H^{(i)})\sigma_{s}\in H(A_{0},yA_{0})$. By Lemma \ref{Lem3} (a), we have $(H^{(i)})\sigma_{s}\in H(yA_{0},xyA_{0})$. It follows that $(H^{(i)})\sigma_{s}\in H_{x,y}$ and $\sigma_{s}(i)\in I_{x,y}$ as required.\\ Let $i\in I_{xs,y}$ be such that $(H^{(i)})\sigma_{s}\notin H(A_{0},yA_{0})$. Then $H^{(i)}$ is of $s$-type 1. By the previous lemma we have $H^{(i)}\in H_{x,y}$ and $i\in I_{x,y}$. \\ We show that $\varphi$ is injective. Let $i\in I_{xs,y}$ be such that $\varphi(i)=\sigma_{s}(i)$ and assume that $\sigma_{s}(i)\in I_{xs,y}$. We have $$(U_{i}(yA_{0}))\sigma_{s}=U_{\sigma_{s}(i)}(syA_{0})=U_{\sigma_{s}(i)}(yA_{0})$$ and $(H^{(i)})\sigma_{s}$ is one of the two hyperplanes which define $U_{\sigma_{s}(i)}(yA_{0})$. Furthermore since $(H^{(i)})\sigma_{s}\in H(A_{0},yA_{0})$ we must have $(H^{(i)})\sigma_{s}=H^{(\sigma_{s}(i))}$. It follows that $(H^{(\sigma_{s}(i))})\sigma_{s}\in H(A_{0},yA_{0})$ and $\varphi(\sigma_{s}(i))=i$. The result follows. \end{proof} \begin{Lem} \label{Lem5} Let $x,y\in W$ and $s\in S$ be such that $xs<x$ and $sy<y$. Let $H_{s}$ be the unique hyperplane which separates $yA_{0}$ and $syA_{0}$. We have $$c_{xs,y}\leq c_{x,y}-c_{x,y}(\overline{H_{s}}).$$ \end{Lem} \begin{proof} Let $\varphi$ be as in the proof of the previous lemma. We keep the same notation. If $I_{xs,y}=\emptyset$ then the result is clear, thus we may assume that $I_{xs,y}\neq \emptyset$. First assume that $W$ is not of type $\tilde{C}_{r}$ ($r\geq 2$) or $\tilde{A}_{1}$. 
Then any two parallel hyperplanes have the same weight, therefore we obtain, for $i\in I_{xs,y}$ $$c_{xs,y}(i)=c_{H^{(i)}}.$$ Moreover, since $c_{H}=c_{(H)\sigma}$ for all $H\in\mathcal{F}$ and $\sigma\in\Omega$, one can see that $$c_{xs,y}(i)=c_{x,y}(\varphi(i)),$$ and the result follows using Lemma \ref{Lem4}. Now, assume that $W$ is of type $\tilde{C}_{r}$, with graph and weight function given by \begin{center} \begin{pspicture}(0,0)(7,1) \rput(1,0.5){\circle{0.2}} \psline(0.99,0.55)(1.81,0.55) \psline(0.99,0.45)(1.81,0.45) \rput(2,0.5){\circle{0.2}} \psline(2,0.5)(2.8,0.5) \rput(3,0.5){\circle{0.2}} \psline[linestyle=dashed](3,0.5)(3.8,0.5) \rput(4,0.5){\circle{0.2}} \psline(4,0.5)(4.8,0.5) \rput(5,0.5){\circle{0.2}} \psline(4.99,0.55)(5.81,0.55) \psline(4.99,0.45)(5.81,0.45) \rput(6,0.5){\circle{0.2}} \rput(0.9,0.2){$s_{1}$} \rput(0.9,0.8){$a$} \rput(1.9,0.2){$s_{2}$} \rput(1.9,0.8){$c$} \rput(2.9,0.2){$s_{3}$} \rput(2.9,0.8){$c$} \rput(3.9,0.2){$s_{r-1}$} \rput(3.9,0.8){$c$} \rput(4.9,0.2){$s_{r}$} \rput(4.9,0.8){$c$} \rput(5.9,0.2){$s_{r+1}$} \rput(5.9,0.88){$b$} \end{pspicture} \end{center} In \cite{Bremke}, Bremke proved that the only case where two parallel hyperplanes $H$, $H'$ do not have the same weight is when one of them, say $H$, supports a face of type $s_{1}$ and $H'$ supports a face of type $s_{r+1}$. If $a=b$, then parallel hyperplanes have the same weight and we can conclude as before. Now assume that $a>b$. Let $i\in\overline{\mathcal{F}}$ be such that not all the hyperplanes with direction $i$ have the same weight. Let $H=H_{\alpha,n}$ be a hyperplane with direction $i$ and weight $a$. Then $H_{\alpha,n-1}$ and $H_{\alpha,n+1}$ have weight $b$ because otherwise all the hyperplanes with direction $i$ would have weight $a$. Let $i\in I_{xs,y}$. 
If no hyperplane of direction $i$ supports a face of type $s_{1}$ or $s_{r+1}$ then, as before, we can conclude that $$c_{xs,y}(i)=c_{x,y}(\varphi(i)).$$ Now, in order to prove that $c_{xs,y}\leq c_{x,y}-c_{x,y}(\overline{H_{s}})$, the only problem which may appear is when there exists $i\in I_{xs,y}$ such that $$c_{xs,y}(i)=a\quad\text{and}\quad c_{x,y}(\varphi(i))=b.$$ Fix such an $i\in I_{xs,y}$. We claim that \begin{enumerate} \item $H^{(i)}$ is of $s$-type 1 and $(H^{(i)})\sigma_{s}\in H(A_{0},yA_{0})$; \item $\sigma_{s}(i)\in I_{xs,y}$, $\varphi(\sigma_{s}(i))=i$ and $$c_{xs,y}(\sigma_{s}(i))=b\quad\text{and}\quad c_{x,y}(i)=a.$$ \end{enumerate} We prove (1). Let $j\in I_{xs,y}$ be such that $H^{(j)}$ is of $s$-type 2. Then $(H^{(j)})\sigma_{s}\in H(A_{0},yA_{0})$ and $\varphi(j)=\sigma_{s}(j)$. Let $H\in H_{xs,y}$ be such that $\overline{H}=j$. Then $H$ is also of $s$-type 2 and $(H)\sigma_{s}\in H_{x,y}$ (see Lemma \ref{Lem3}(c)). It follows that $c_{x,y}(\varphi(j))\geq c_{xs,y}(j)$.\\ Let $j\in I_{xs,y}$ be such that $(H^{(j)})\sigma_{s}=H^{(j)}$. Then $(H^{(j)})\sigma_{s}\in H(A_{0},yA_{0})$ and $\varphi(j)=j$. Let $H\in H_{xs,y}$ be such that $\overline{H}=j$. Then $(H)\sigma_{s}=H$ and $H\in H_{x,y}$ (see Lemma \ref{Lem3}(d)). It follows that $c_{x,y}(\varphi(j))\geq c_{xs,y}(j)$.\\ Finally let $j\in I_{xs,y}$ be such that $H^{(j)}$ is of $s$-type 1 and $(H^{(j)})\sigma_{s}\notin H(A_{0},yA_{0})$. Then $\varphi(j)=j$. Let $H\in H_{xs,y}$ be such that $\overline{H}=j$. Then $H$ is also of $s$-type 1 and $H\in H_{x,y}$ (see Lemma \ref{Lem3}(b)). It follows that $c_{x,y}(\varphi(j))\geq c_{xs,y}(j)$.\\ Thus since $c_{x,y}(\varphi(i))< c_{xs,y}(i)$, we get that $H^{(i)}$ is of $s$-type 1 and $(H^{(i)})\sigma_{s}\in H(A_{0},yA_{0})$. \\ We prove (2). We know that $H^{(i)}$ is of $s$-type 1 and $(H^{(i)})\sigma_{s}\in H(A_{0},yA_{0})$. Thus $(H^{(i)})\sigma_{s}\in H_{x,y}$ and $\varphi(i)=\sigma_{s}(i)$. 
In particular, since $c_{x,y}(\varphi(i))=b$, we must have $c_{(H^{(i)})\sigma_{s}}=b$, which implies that $c_{H^{(i)}}=b$. \\ Since $H^{(i)}$ is of $s$-type 1 we have $(H^{(i)})\sigma_{s}\in H(yA_{0},xsyA_{0})$ which implies that $(H^{(i)})\sigma_{s}\in H_{xs,y}$. Thus $\sigma_{s}(i)\in I_{xs,y}$. Arguing as in the proof of Lemma \ref{Lem4}, we obtain $(H^{(i)})\sigma_{s}=H^{(\sigma_{s}(i))}$ and $\varphi(\sigma_{s}(i))=i$.\\ Let $\alpha\in\Phi^{+}$ and $n\in\mathbb{Z}$ be such that $H^{(i)}=H_{\alpha,n}$. Since $c_{xs,y}(i)=a$, one can see that one of the hyperplanes $H_{\alpha,n-1}$, $H_{\alpha,n+1}$ lies in $H_{xs,y}$. We denote this hyperplane by $H$. Note that $c_{H}=a$ thus, since $c_{x,y}(\sigma_{s}(i))=b$, we cannot have $(H)\sigma_{s}\in H_{x,y}$. Both hyperplanes $(H)\sigma_{s}$ and $(H^{(i)})\sigma_{s}$ separate $yA_{0}$ and $xyA_{0}$ but only $(H^{(i)})\sigma_{s}$ lies in $H_{x,y}$. This implies that $A_{0}$ lies in the strip defined by $(H)\sigma_{s}$ and $(H^{(i)})\sigma_{s}$. Since $(H^{(i)})\sigma_{s}=H^{(\sigma_{s}(i))}$ this shows that the only hyperplane of direction $\sigma_{s}(i)$ which lies in $H_{xs,y}$ is $H^{(\sigma_{s}(i))}$. Thus we have $c_{xs,y}(\sigma_{s}(i))=b$. Moreover $\varphi(\sigma_{s}(i))=i$ and $H$ is of $s$-type 1, thus $H\in H_{x,y}$ (see Lemma \ref{Lem3} (b)) and $c_{x,y}(i)=a$, as required.\\ Let $I_{>}$ be the subset of $I_{xs,y}$ which consists of the directions $i$ such that $c_{xs,y}(i)=a$ and $c_{x,y}(\varphi(i))=b$. Using (1) and (2), we see that the set $\sigma_{s}(I_{>})$ is a subset of $I_{xs,y}$ such that for all $i\in \sigma_{s}(I_{>})$ we have $c_{xs,y}(i)=b$ and $c_{x,y}(\varphi(i))=a$. Therefore we can conclude that $c_{xs,y}\leq c_{x,y}-c_{x,y}(\overline{H_{s}})$ in the case where $W$ is of type $\tilde{C}_{r}$ ($r\geq 2$).\\ In the case where $W$ is of type $\tilde{A}_{1}$, the result is clear, since we always have $I_{xs,y}=\emptyset$. The lemma is proved. 
\end{proof} \begin{proof}[Proof of Theorem \ref{bound}] Let $x,y\in W$ and $$T_{x}T_{y}=\underset{z\in W}{\sum}f_{x,y,z}T_{z}\ \ \text{ where $f_{x,y,z}\in\mathcal{A}$}.$$ We want to prove that the degree of $f_{x,y,z}$ in $v$ is less than or equal to $c_{x,y}$. We proceed by induction on $\ell(x)+\ell(y)$. \\ If $\ell(x)+\ell(y)=0$ the result is clear. \\ If $c_{x,y}=0$ then $H_{x,y}=\emptyset$ and $xy=x.y$. Thus $T_{x}T_{y}=T_{xy}$ and the result follows.\\ We may assume that $H_{x,y}\neq \emptyset$, which implies that $\ell(x)>0$ and $\ell(y)>0$. Let $x=s_{k}\ldots s_{1}$ be a reduced expression of $x$. There exists $1\leq i\leq k$ such that $$\ell(s_{i-1}\ldots s_{1}y)=\ell(y)+i-1 \text{ and } s_{i}s_{i-1}\ldots s_{1}y<s_{i-1}\ldots s_{1}y.$$ Let $x_{0}=s_{k}\ldots s_{i}$ and $y_{0}=s_{i-1}\ldots s_{1}y$. Let $H_{s_{i}}$ be the unique hyperplane which separates $y_{0}A_{0}$ and $s_{i}y_{0}A_{0}$. Note that $c_{H_{s_{i}}}=L(s_{i})$. We have \begin{align*} T_{x}T_{y}&=T_{x_{0}}T_{y_{0}}. \end{align*} Using Lemma \ref{Lem1}, we obtain $c_{x,y}=c_{x_{0},y_{0}}$. We have \begin{align*} T_{x_{0}}T_{y_{0}}&=T_{s_{k}\ldots s_{i+1}}T_{s_{i}}T_{y_{0}}\\ &=T_{s_{k}\ldots s_{i+1}}(T_{s_{i}y_{0}}+\xi_{s_{i}}T_{y_{0}})\\ &=T_{s_{k}\ldots s_{i+1}}T_{s_{i}y_{0}}+\xi_{s_{i}}T_{s_{k}\ldots s_{i+1}}T_{y_{0}}\\ &=T_{x_{0}s_{i}}T_{s_{i}y_{0}}+\xi_{s_{i}}T_{x_{0}s_{i}}T_{y_{0}}. \end{align*} By induction, $T_{x_{0}s_{i}}T_{s_{i}y_{0}}$ is an $\mathcal{A}$-linear combination of $T_{z}$ with coefficients of degree less than or equal to $c_{x_{0}s_{i},s_{i}y_{0}}$. Using Lemma \ref{lem0}, we have $c_{x_{0}s_{i},s_{i}y_{0}}\leq c_{x_{0},y_{0}}=c_{x,y}$.\\ By induction, $T_{x_{0}s_{i}}T_{y_{0}}$ is an $\mathcal{A}$-linear combination of $T_{z}$ with coefficients of degree less than or equal to $c_{x_{0}s_{i},y_{0}}$. Therefore the degree of the polynomials occurring in $\xi_{s_{i}}T_{x_{0}s_{i}}T_{y_{0}}$ is less than or equal to $L(s_{i})+c_{x_{0}s_{i},y_{0}}$.
Applying Lemma \ref{Lem5} to $x_{0}$ and $y_{0}$ we obtain $$c_{x_{0}s_{i},y_{0}}\leq c_{x_{0},y_{0}}-c_{x_{0},y_{0}}(\overline{H_{s_{i}}}).$$ Since $c_{x_{0},y_{0}}(\overline{H_{s_{i}}})\geq c_{H_{s_{i}}}=L(s_{i})$ we obtain $$L(s_{i})+c_{x_{0}s_{i},y_{0}}\leq c_{x_{0},y_{0}} =c_{x,y}.$$ The theorem is proved. \end{proof} \section{The lowest two-sided cell} \begin{what}{\bf Kazhdan-Lusztig cells.} Let $(W,S)$ be a Coxeter group and $L$ a weight function on $W$. Let $\mathcal{A}=\mathbb{Z}[v,v^{-1}]$ and $\mathcal{H}$ be the generic Iwahori-Hecke algebra corresponding to $(W,S)$ with parameters $\{L(s)|s\in S\}$. Let $a\mapsto \overline{a}$ be the involution of $\mathcal{A}$ which takes $v^{n}$ to $v^{-n}$ for all $n\in\mathbb{Z}$. We can extend it to a ring involution from $\mathcal{H}$ to itself by the formula $$\overline{\underset{w\in W}{\sum}a_{w}T_{w}}=\underset{w\in W}{\sum}\overline{a}_{w}T^{-1}_{w^{-1}}\ , \text{ where $a_{w}\in \mathcal{A}$}.$$ Let $\mathcal{A}_{\leq 0}=\mathbb{Z}[v^{-1}]$ and $\mathcal{A}_{<0}=v^{-1}\mathbb{Z}[v^{-1}]$. For $w\in W$ there exists a unique element $C_{w}\in\mathcal{H}$ such that $$\overline{C}_{w}=C_{w} \text{ and } C_{w}=T_{w}+\underset{y<w}{\underset{y\in W}{\sum}}P_{y,w}T_{y}, $$ where $P_{y,w}\in \mathcal{A}_{<0}$ for $y<w$. In fact, the set $\{C_{w},w\in W\}$ forms a basis of $\mathcal{H}$, known as the Kazhdan-Lusztig basis. The elements $P_{y,w}$ are called the Kazhdan-Lusztig polynomials. We set $P_{w,w}=1$ for any $w\in W$.
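For concreteness, and as a sanity check not spelled out in the text, the simplest Kazhdan-Lusztig element can be written down explicitly: for $s\in S$ one has $C_{s}=T_{s}+v^{-L(s)}T_{e}$, and bar-invariance can be verified directly.

```latex
% Assuming the quadratic relation T_s^2 = T_e + (v^{L(s)} - v^{-L(s)}) T_s,
% so that T_s^{-1} = T_s - (v^{L(s)} - v^{-L(s)}) T_e:
\begin{align*}
\overline{C_{s}} &= T_{s^{-1}}^{-1} + v^{L(s)}\,T_{e}\\
 &= T_{s} - \bigl(v^{L(s)} - v^{-L(s)}\bigr)T_{e} + v^{L(s)}\,T_{e}\\
 &= T_{s} + v^{-L(s)}\,T_{e} = C_{s}.
\end{align*}
```

Here $P_{e,s}=v^{-L(s)}\in\mathcal{A}_{<0}$, as the definition requires.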
The Kazhdan-Lusztig left preorder $\leq_{L}$ on $W$ is the relation generated by $$ \begin{cases} y\leq_{L} w \text{ if there exists some $s\in S$ such that $C_{y}$ appears with} \\ \text{ a non-zero coefficient in $T_{s}C_{w}$, expressed in the $C_{w}$-basis} \end{cases} $$ One can see that $$\mathcal{H} C_{w}\subseteq \underset{y\leq_{L} w}{\sum}\mathcal{A} C_{y}\text{ for any $w\in W$.}$$ The equivalence relation associated to $\leq_{L}$ will be denoted by $\sim_{L}$ and the corresponding equivalence classes are called the left cells of $W$. Similarly, we define $\leq_{R}$, $\sim_{R}$ and right cells. We say that $x\leq_{LR} y$ if there exists a sequence $$x=x_{0}, x_{1},..., x_{n}=y$$ such that for all $1\leq i\leq n$ we have $x_{i-1}\leq_{L} x_{i}$ or $x_{i-1}\leq_{R} x_{i}$. We write $\sim_{LR}$ for the associated equivalence relation and the equivalence classes are called two-sided cells. The preorder $\leq_{LR}$ induces a partial order on the two-sided cells of $W$. \end{what} \begin{what}{\bf The lowest two-sided cell.} Let $(W,S)$ be an irreducible affine Weyl group. In this section, we look at the set $$c_{0}=\{w\in W\mid w=z'.w_{\lambda}.z,\ z,z'\in W,\ \lambda\in T\}$$ where $T$ is the set of special points (see Section 2). We show that $c_{0}$ is the lowest two-sided cell and we determine the decomposition of $c_{0}$ into left cells. Recall that, for $\lambda$ a special point, $W_{\lambda}$ is the stabilizer in $W$ of the set of alcoves containing $\lambda$ in their closure, $w_{\lambda}$ is the longest element of $W_{\lambda}$ and $S_{\lambda}=S\cap W_{\lambda}$. 
In particular, we have $$sw_{\lambda}<w_{\lambda} \text{ for any $s\in S_{\lambda}$}.$$ For $\lambda\in T$ and $z\in W$ such that $w_{\lambda}z=w_{\lambda}.z$, we set $$N_{\lambda,z}=\{w\in W\mid w=z'.w_{\lambda}.z,\ z'\in W\}.$$ In \cite[Proposition 5.1]{Bremke}, it is shown that $N_{\lambda,z}$ is included in a left cell.\\ For $\lambda\in T$, we set $$M_{\lambda}=\{z\in W\mid w_{\lambda}z=w_{\lambda}.z,\ sw_{\lambda}z\notin c_{0}\text{ for all $s\in S_{\lambda}$}\}.$$ Following \cite{Shi2}, we choose a set of representatives for the $\Omega$-orbits on $T$ and denote it by $R$. Then $$c_{0}=\underset{\lambda\in R,\ z\in M_{\lambda}}{\bigcup}N_{\lambda,z}\quad \text{(disjoint union)}.$$ It is known (\cite{Shi2}) that this is a union of $|W_{0}|$ terms. We are now ready to state the main result of this paper. \begin{Th} \label{main} Let $\lambda\in T$ and $z\in M_{\lambda}$. The set $N_{\lambda,z}$ is a union of left cells. \\ Furthermore, for $y\in W$ and $w\in N_{\lambda,z}$, we have $$y\leq_{L} w\ \Longrightarrow \ y\in N_{\lambda,z}.$$ \end{Th} The proof of this theorem will be given in the next section. We now discuss a number of consequences of Theorem \ref{main}. \begin{Cor} \label{lc} Let $\lambda\in T$ and $z\in M_{\lambda}$. Then, the set $N_{\lambda,z}$ is a left cell. \end{Cor} \begin{proof} The set $N_{\lambda,z}$ is a union of left cells which is included in a left cell (see \cite[Proposition 5.1]{Bremke}). Hence it is a left cell. \end{proof} The next step is to prove the following. \begin{Prop} The set $c_{0}$ is included in a two-sided cell. \end{Prop} \begin{proof} Let $R=\{\lambda_{1},...,\lambda_{n}\}$ be a set of representatives for the $\Omega$-orbits on $T$. For example, if $W$ is of type $\tilde{G}_{2}$ we have $n=1$ and if $W$ is of type $\tilde{B}_{r}$ ($r\geq 3$) we have $n=2$.
Set $$c_{\lambda_{i}}=\{w\in W|\ w=z'.w_{\lambda_{i}}.z,\ z,z'\in W \}.$$ One can see that $$c_{0}=\bigcup_{i=1}^{i=n}c_{\lambda_{i}}$$ and for $1\leq i\leq j\leq n$ we have $c_{\lambda_{i}}\cap c_{\lambda_{j}}\neq\emptyset$. Therefore, to prove the proposition, it is enough to show that each of the sets $c_{\lambda_{i}}$ is included in a two-sided cell. Fix $1\leq i\leq n$. Let $w,w' \in c_{\lambda_{i}}$ and $z,z',y,y'\in W$ be such that $w=z'.w_{\lambda_{i}}.z$ and $w'=y'.w_{\lambda_{i}}.y$. Using \cite[Proposition 5.1]{Bremke}, together with its version for right cells, we obtain $$z'w_{\lambda_{i}}z\ \sim_{L}\ w_{\lambda_{i}}z\ \sim_{R} \ w_{\lambda_{i}}y\ \sim_{L}\ y'w_{\lambda_{i}}y.$$ The result follows. \end{proof} Finally, combining the previous results of Shi, Xi and Bremke with Theorem \ref{main}, we now obtain the following description of the lowest two-sided cell in complete generality. \begin{Th} \label{ltsc} Let $W$ be an irreducible affine Weyl group with associated Weyl group $W_{0}$. Let $$c_{0}=\{w\in W|\ w=z'.w_{\lambda}.z,\ z,z'\in W, \lambda\in T\}$$ where $T$ is the set of special points. We have \begin{enumerate} \item $c_{0}$ is a two-sided cell. \item $c_{0}$ is the lowest two-sided cell for the partial order on the two-sided cells induced by the preorder $\leq_{LR}$. \item $c_{0}$ contains exactly $|W_{0}|$ left cells. \item The decomposition of $c_{0}$ into left cells is as follows $$c_{0}=\underset{\lambda\in R,\ z\in M_{\lambda}}{\bigcup}N_{\lambda,z}.$$ \end{enumerate} \end{Th} \begin{proof} We have seen that $c_{0}$ is included in a two-sided cell. Let $w\in c_{0}$ and $y\in W$ such that $y\sim_{LR} w$. In particular we have $y\leq_{LR} w$. We may assume that $y\leq_{L} w$ or $y\leq_{R} w$. We know that $$c_{0}=\underset{\lambda\in R,\ z\in M_{\lambda}}{\bigcup}N_{\lambda,z}.$$ Thus $w\in N_{\lambda,z}$ for some $\lambda\in R$ and $z\in M_{\lambda}$.
If $y\leq_{L} w$ then, using Theorem \ref{main}, we see that $y\in N_{\lambda,z}$ and thus $y\in c_{0}$. If $y\leq_{R} w$ then, using \cite[\S 8.1]{bible}, we have $y^{-1}\leq_{L} w^{-1}$. But $c_{0}$ is stable under taking inverses; thus, as before, we see that $y^{-1}\in c_{0}$ and $y\in c_{0}$. This implies that $c_{0}$ is a two-sided cell and that it is the lowest one with respect to $\leq_{LR}$. By \cite{Shi2}, we know that $$c_{0}=\underset{\lambda\in R,\ z\in M_{\lambda}}{\bigcup}N_{\lambda,z}$$ is a disjoint union over $|W_{0}|$ terms. By Corollary \ref{lc}, the result follows. \end{proof} \end{what} \section{Proof of Theorem \ref{main}} We keep the setting of the previous section. For $\lambda\in T$ we denote by $X_{\lambda}$ the set of minimal left coset representatives of $W_{\lambda}$ in $W$, that is $$X_{\lambda}:=\{w\in W| \ell(ws)>\ell(w)\text{ for all $s\in S_{\lambda}$}\}.$$ One can easily check that $$X_{\lambda}=\{z\in W| zw_{\lambda}=z.w_{\lambda}\} \quad\text{and}\quad X_{\lambda}^{-1}=\{z\in W| w_{\lambda}z=w_{\lambda}.z\}.$$ Let $\lambda\in T$ and $z\in X_{\lambda}^{-1}$. For $z'\in W$, we have the following equivalence $$z'w_{\lambda}z=z'.w_{\lambda}.z \Longleftrightarrow z'\in X_{\lambda}.$$ Indeed, if $z'w_{\lambda}z=z'.w_{\lambda}.z$ then we must have $z'w_{\lambda}=z'.w_{\lambda}$ and $z'\in X_{\lambda}$. Conversely, if $z'\in X_{\lambda}$ then, since $z\in X_{\lambda}^{-1}$, we have $z'w_{\lambda}z=z'.w_{\lambda}.z$ (see \cite[Lemma 3.2]{Shi2}). Therefore we see that $$N_{\lambda,z}=\{w\in W| w=xw_{\lambda}z, x\in X_{\lambda}\}.$$ \begin{what}{\bf Preliminaries.} \begin{Lem} \label{endp} Let $\lambda\in T$, $z\in M_{\lambda}$, $y\in X_{\lambda}$ and $v_{1}<w_{\lambda}z$. Then, $P_{v_{1},w_{\lambda}z}T_{y}T_{v_{1}}$ is an $\mathcal{A}$-linear combination of $T_{z'}$ ($z'\in W$) with coefficients in $\mathcal{A}_{<0}$.
\end{Lem} \begin{proof} We can write uniquely $v_{1}=w.v'$, where $w\in W_{\lambda}$ and $v'\in X_{\lambda}^{-1}$ (see \cite[Proposition 2.1.1]{gp}). First, assume that $w=w_{\lambda}$. In that case, we have $yv_{1}=y.v_{1}$ and $T_{y}T_{v_{1}}=T_{yv_{1}}$. Since $P_{v_{1},w_{\lambda}z}\in\mathcal{A}_{<0}$ the result follows. Next, assume that $w< w_{\lambda}$. Let $w_{v_{1}}\in W$ be such that $w_{v_{1}}w=w_{\lambda}$. The Kazhdan-Lusztig polynomials satisfy the following relation (see \cite[Theorem 6.6.c]{bible}) $$P_{x,w}=v^{-L(s)}P_{sx,w}, \text{ where $x<sx$ and $sw<w$ }.$$ Therefore, one can see that \begin{enumerate} \item $P_{v_{1},w_{\lambda}z}\in v^{-L(w_{v_{1}})}\mathcal{A}_{<0}$ if $w_{\lambda}.v'<w_{\lambda}z,$ \item $P_{v_{1},w_{\lambda}z}= v^{-L(w_{v_{1}})}$ if $w_{\lambda}.v'=w_{\lambda}z.$ \end{enumerate} Thus, to prove the lemma, it is sufficient to show that the polynomials occurring in the expression of $T_{y}T_{v_{1}}$ in the standard basis are of degree less than or equal to $L(w_{v_{1}})$ in the first case and $L(w_{v_{1}})-1$ in the second case. Using Theorem \ref{bound}, we know that the degree of these polynomials is less than or equal to $c_{y,v_{1}}$ (for the definition of $c_{y,v_{1}}$, see Section 2.3). Let $w_{v_{1}}=s_{n}...s_{m+1}$ and $w=s_{m}...s_{1}$ be reduced expressions, and let $H_{i}$ be the unique hyperplane which separates $s_{i-1}...s_{1}v'A_{0}$ and $s_{i}...s_{1}v'A_{0}$. Note that $c_{H_{i}}=L(s_{i})$. Let $\lambda'$ be the unique special point contained in the closure of $v'A_{0}$ and $w_{\lambda}v'A_{0}$ (note that $W_{\lambda'}=W_{\lambda}$). By definition of $X_{\lambda}$, $yv_{1}A_{0}$ lies in the quarter $\mathcal{C}$ with vertex $\lambda'$ which contains $v_{1}A_{0}$. Let $1\leq i\leq m$. Let $\alpha_{i}\in\Phi^{+}$ and $k\in\mathbb{Z}$ be such that $H_{i}=H_{\alpha_{i},k}$. Assume that $k>0$ (the case $k\leq 0$ is similar). We have $v_{1}A_{0}\in V_{H_{i}}^{+}$.
Now, since $\lambda'$ lies in the closure of $v_{1}A_{0}$ and $\lambda'\in H_{i}$, one can see that $$k<\langle\mu,\check{\alpha_{i}}\rangle<k+1 \text{ for all $\mu\in v_{1}A_{0}$}.$$ Moreover, $yv_{1}A_{0}\in \mathcal{C}$ implies that $$k<\langle\mu,\check{\alpha_{i}}\rangle \text{ for all $\mu\in yv_{1}A_{0}$}.$$ From there, we conclude that all the hyperplanes $H_{\alpha_{i},l}$ with $l\leq k$ do not lie in $H(v_{1}A_{0},yv_{1}A_{0})$ and that all the hyperplanes $H_{\alpha_{i},l}$ with $l>k$ do not lie in $H(A_{0},v_{1}A_{0})$. Thus $\overline{H_{i}}\notin I_{y,v_{1}}$ and we have $$I_{y,v_{1}}\subset \{\overline{H_{m+1}},...,\overline{H_{n}}\},$$ which implies $$c_{y,v_{1}}\leq \overset{i=n}{\underset{i=m+1}{\sum}} c_{y,v_{1}}(\overline{H_{i}}).$$ Now, if $W$ is not of type $\tilde{C}_{r}$ or $\tilde{A}_{1}$ then any two parallel hyperplanes have the same weight and we have $$ c_{y,v_{1}}(\overline{H_{i}})= \begin{cases} 0 & \mbox{if } \overline{H_{i}}\notin I_{y,v_{1}},\\ L(s_{i}) &\mbox{otherwise}. \end{cases} $$ Thus $$c_{y,v_{1}}\leq \overset{i=n}{\underset{i=m+1}{\sum}} L(s_{i})=L(w_{v_{1}}),$$ as required in the first case. Assume that $W$ is of type $\tilde{C}_{r}$ or $\tilde{A}_{1}$. Then, one can see that, since $\lambda'$ is a special point, we have for all $1\leq i\leq n$, $c_{H_{i}}=c_{\overline{H_{i}}}=L(s_{i})$ and we can conclude as before. Assume that we are in case 2. Let $j\in\overline{\mathcal{F}}$. Recall that in \cite{Bremke}, the author defined the strips of direction $j$ as the connected components of $$V-\underset{c_{H}=c_{j}}{\underset{H\in\mathcal{F}, \overline{H}=j}{\bigcup}}H, \quad\text{where}\quad c_{j}=\underset{H,\ \overline{H}=j}{\max}c_{H}.$$ To avoid confusion, we will call them maximal strips of direction $j$. Let $$\mathcal{U}(A)=\underset{U \text{ maximal strip}, A\subset U}{\bigcup}U.$$ Now, $v_{1}=w.v'<w_{\lambda}v'=w_{\lambda}z$ with $z\in M_{\lambda}$.
Recall that $$M_{\lambda}=\{z\in W\mid w_{\lambda}z=w_{\lambda}.z,\ sw_{\lambda}z\notin c_{0}\text{ for all $s\in S_{\lambda}$}\},$$ thus $v_{1}=w.v'\notin c_{0}$. In \cite{Bed}, B\'edard showed (in the equal parameter case) that the lowest two-sided cell $c_{0}$ can be described as follows $$c_{0}=\{w\in W\mid wA_{0}\not\subset \mathcal{U}(A_{0})\}.$$ In \cite{Bremke}, Bremke proved that this description remains valid in the unequal parameter case. Therefore, since $v_{1}\notin c_{0}$, we have $v_{1}A_{0}\subset\mathcal{U}(A_{0})$ and there exists a maximal strip $U$ which contains $A_{0}$ and $v_{1}A_{0}$. Let $1\leq k\leq n$ be such that $U$ is of direction $\overline{H_{k}}$. For $1\leq i\leq m$, the hyperplane $H_{i}$ separates $A_{0}$ and $v_{1}A_{0}$ and $c_{H_{i}}=c_{\overline{H_{i}}}$, thus we must have $k>m$. If $W$ is not of type $\tilde{C}_{r}$ or $\tilde{A}_{1}$, then our strips and the strips as defined in \cite{Bremke} are the same. Therefore, since $A_{0}$ and $v_{1}A_{0}$ lie in $U$, we have $\overline{H_{k}}\notin I_{y,v_{1}}$ and $$c_{y,v_{1}}\leq \underset{i\neq k}{\overset{i=n}{\underset{i=m+1}{\sum}}} c_{y,v_{1}}(\overline{H_{i}})\leq \underset{i\neq k}{\overset{i=n}{\underset{i=m+1}{\sum}}} L(s_{i})<L(w_{v_{1}}),$$ as required. Assume that $W$ is of type $\tilde{C}_{r}$ or $\tilde{A}_{1}$. First, if all the hyperplanes with direction $\overline{H_{k}}$ have the same weight, then we have $\overline{H_{k}}\notin I_{y,v_{1}}$ and we can conclude as before.\\ Assume not; then we must have $c_{H_{k}}=c_{\overline{H_{k}}}$ (since $\lambda'\in H_{k}$) and there is no hyperplane of direction $\overline{H_{k}}$ and maximal weight which separates $A_{0}$ and $v_{1}A_{0}$. Therefore $$c_{y,v_{1}}\leq \overset{i=n}{\underset{i=m+1}{\sum}} c_{y,v_{1}}(\overline{H_{i}})<\underset{i\neq k}{\overset{i=n} {\underset{i=m+1}{\sum}}}c_{y,v_{1}}(\overline{H_{i}})+c_{\overline{H_{k}}}\leq \overset{i=n} {\underset{i=m+1}{\sum}}L(s_{i})=L(w_{v_{1}}),$$ as required.
\end{proof} \end{what} \begin{what}{\bf Proof of Theorem \ref{main}.} In this section we fix $\lambda\in T$ and $z\in M_{\lambda}$. We set $v=w_{\lambda}z$. The following argument is inspired by a paper of Geck \cite{Geck}. \begin{Lem} The submodule $\mathcal{M}:=\langle T_{x}C_{v}\mid x\in X_{\lambda}\rangle_{\mathcal{A}}\subset \mathcal{H}$ is a left ideal. \end{Lem} \begin{proof} Since $\mathcal{H}$ is generated by $T_{s}$ for $s\in S$, it is enough to check that $T_{s}T_{x}C_{v}\in\mathcal{M}$ for $x\in X_{\lambda}$. According to Deodhar's lemma (see \cite[Lemma 2.1.2]{gp}), there are three cases to consider: \begin{enumerate} \item $sx\in X_{\lambda}$ and $\ell(sx)>\ell(x)$. Then $T_{s}T_{x}C_{v}=T_{sx}C_{v}\in \mathcal{M}$ as required. \item $sx\in X_{\lambda}$ and $\ell(sx)<\ell(x)$. Then $T_{s}T_{x}C_{v}=T_{sx}C_{v}+(v_{s}-v_{s}^{-1})T_{x}C_{v}\in \mathcal{M}$ as required. \item $t:=x^{-1}sx\in S_{\lambda}$. Then $\ell(sx)=\ell(x)+1=\ell(xt)$. Now, since $tv<v$, we have (see \cite[\S 5.5, Theorem 6.6.b]{bible}) $$T_{t}C_{v}=v^{L(t)}C_{v}.$$ Thus, we see that $$T_{s}T_{x}C_{v}=T_{sx}C_{v}=T_{xt}C_{v}=T_{x}T_{t}C_{v}=v^{L(t)}T_{x}C_{v},$$ which is in $\mathcal{M}$ as required. \end{enumerate} \end{proof} Note that $\{T_{y}C_{v}\mid y\in X_{\lambda}\}$ is an $\mathcal{A}$-linearly independent subset of $\mathcal{H}$. Indeed, for $y\in X_{\lambda}$ we have \begin{align*} T_{y}C_{v}&=T_{y}T_{v}+\underset{u<v}{\sum}P_{u,v}T_{y}T_{u}\\ &=T_{yv}+\text{ an $\mathcal{A}$-linear combination of $T_{w}$ with $\ell(w)<\ell(yv)$.} \end{align*} So, by an easy induction on the length and since the $T_{z}$ form a basis of $\mathcal{H}$, we get the result. It follows that $\{T_{y}C_{v}\mid y\in X_{\lambda}\}$ is a basis of $\mathcal{M}$. Following \cite{Geck}, we shall now exhibit another basis of $\mathcal{M}$.
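As a toy illustration (our own sketch, not part of the paper), the reason a triangular family of this kind can serve as a basis is that a unitriangular matrix with entries in $\mathcal{A}=\mathbb{Z}[v,v^{-1}]$ is invertible over $\mathcal{A}$: writing it as $I-N$ with $N$ strictly triangular, the inverse is the finite sum $I+N+N^{2}+\cdots$. The following Python sketch (all names are ours) encodes a Laurent polynomial as an exponent-to-coefficient dictionary.

```python
# Toy sketch: invert a unitriangular matrix over Z[v, v^-1].
# A Laurent polynomial is a dict {exponent: nonzero integer coefficient}.

def padd(f, g):
    """Sum of two Laurent polynomials."""
    h = dict(f)
    for n, c in g.items():
        h[n] = h.get(n, 0) + c
    return {n: c for n, c in h.items() if c}

def pmul(f, g):
    """Product of two Laurent polynomials."""
    h = {}
    for m, a in f.items():
        for n, b in g.items():
            h[m + n] = h.get(m + n, 0) + a * b
    return {n: c for n, c in h.items() if c}

ONE, ZERO = {0: 1}, {}

def matmul(A, B):
    """Product of square matrices with Laurent polynomial entries."""
    size = len(A)
    C = [[ZERO for _ in range(size)] for _ in range(size)]
    for i in range(size):
        for j in range(size):
            acc = {}
            for t in range(size):
                acc = padd(acc, pmul(A[i][t], B[t][j]))
            C[i][j] = acc
    return C

def identity(size):
    return [[ONE if i == j else ZERO for j in range(size)] for i in range(size)]

def inverse_unitriangular(P):
    """Inverse of P = I - N (N strictly triangular): I + N + N^2 + ...,
    a finite sum because N is nilpotent."""
    size = len(P)
    I = identity(size)
    N = [[padd(I[i][j], {n: -c for n, c in P[i][j].items()})
          for j in range(size)] for i in range(size)]
    inv, power = I, I
    for _ in range(size - 1):
        power = matmul(power, N)
        inv = [[padd(inv[i][j], power[i][j]) for j in range(size)]
               for i in range(size)]
    return inv
```

For instance, for an upper unitriangular matrix whose off-diagonal entries lie in $v^{-1}\mathbb{Z}[v^{-1}]$, `matmul(P, inverse_unitriangular(P))` returns the identity matrix.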
\begin{Lem} Let $y\in X_{\lambda}$. We can write uniquely $$T_{y^{-1}}^{-1}C_{v}=\underset{x\in X_{\lambda}}{\sum}\overline{r}_{x,y}T_{x}C_{v},\ \ \overline{r}_{x,y}\in \mathcal{A},$$ where $r_{y,y}=1$ and $r_{x,y}=0$ unless $x\leq y$. \end{Lem} \begin{proof} We have $$T_{y^{-1}}^{-1}=T_{y}+\underset{z<y}{\sum}\overline{R}_{z,y}T_{z},$$ where $R_{z,y}\in \mathcal{A}$ (see \cite[\S 4]{bible}). Now let $z\in W$ be such that $T_{z}$ occurs in the above expression. We can write $z$ uniquely in the form $x.w$ with $w\in W_{\lambda}$ and $x\in X_{\lambda}$ (see \cite[Proposition 2.1.1]{gp}). Note that $x\leq z<y$. Since $xw=x.w$, we have $T_{z}=T_{x}T_{w}$ and $$T_{y^{-1}}^{-1}C_{v}=T_{y}C_{v}+ \text{ an $\mathcal{A}$-linear combination of }T_{x}T_{w}C_{v}, \text{ where $x<y$}.$$ Since $w\in W_{\lambda}$, we know that $\ell(ww_{\lambda})=\ell(w_{\lambda})-\ell(w)$ and $T_{w}C_{v}=v^{L(w)}C_{v}$. Therefore we see that $r_{y,y}=1$ and $r_{x,y}=0$ unless $x\leq y$. \end{proof} \begin{Lem} \label{delta} Let $x,y\in X_{\lambda}$. We have $$\underset{x\leq z\leq y}{\underset{z\in X_{\lambda}}{\sum}}\overline{r}_{x,z}r_{z,y}=\delta_{x,y}. $$ \end{Lem} \begin{proof} The proof is similar to the one in \cite{Geck}, once we know that $\{T_{y}C_{v}\mid y\in X_{\lambda}\}$ is an $\mathcal{A}$-linearly independent subset of $\mathcal{H}$. \end{proof} \begin{Prop} \label{nb} For any $y\in X_{\lambda}$, we have $$C_{yv}=T_{y}C_{v}+\underset{x\in X_{\lambda},\ x<y}{\sum}p^{*}_{x,y}T_{x}C_{v}\ \ \text{where} \ p^{*}_{x,y}\in \mathcal{A}_{<0}.$$ \end{Prop} \begin{proof} Fix $y\in X_{\lambda}$ and consider a linear combination $$\tilde{C}_{yv}:=\underset{x\in X_{\lambda}, x\leq y}{\sum}p^{*}_{x,y}T_{x}C_{v},$$ where $p^{*}_{y,y}=1$ and $p^{*}_{x,y}\in\mathcal{A}_{<0}$ if $x<y$. Our first task is to show that the $p^{*}$-polynomials can be chosen such that $\tilde{C}_{yv}=\overline{\tilde{C}_{yv}}$.
In order to do so, we proceed as in the proof of the existence of the Kazhdan-Lusztig basis in \cite{Lus2}. We set up a system of equations whose unknowns are the $p^{*}$-polynomials and then use an inductive argument to show that this system has a unique solution. As in \cite{Geck}, one sees that the condition $\overline{\tilde{C}}_{yv}=\tilde{C}_{yv}$ is equivalent to $$\underset{x\in X_{\lambda},z\leq x\leq y}{\sum}\overline{p}^{*}_{x,y}\overline{r}_{z,x}=p^{*}_{z,y}\ \ \text{for all $z\in X_{\lambda}$ such that $z\leq y$}.$$ In other words, the coefficients $p^{*}_{x,y}$ must satisfy \begin{align*} p^{*}_{y,y}&=1, &(1)\\ \overline{p}^{*}_{x,y}-p^{*}_{x,y}&=\underset{x<z\leq y}{\underset{z\in X_{\lambda}}{\sum}}r_{x,z}p^{*}_{z,y} \text{ for $x\in X_{\lambda}$, $x<y$. } &(2) \end{align*} We now consider (1) and (2) as a system of equations with unknowns $p^{*}_{x,y}$ ($x\in X_{\lambda}$). We solve it by induction. Let $x\in X_{\lambda}$ be such that $x\leq y$. If $x=y$, we set $p^{*}_{y,y}=1$, so that (1) holds.\\ Now assume that $x<y$. Then, by induction, the $p^{*}_{z,y}$ ($z\in X_{\lambda}$) are known for all $x<z\leq y$ and they satisfy $p^{*}_{z,y}\in \mathcal{A}_{<0}$ if $z<y$. In other words, the right-hand side of (2) is known; we denote it by $a\in\mathcal{A}$. Using Lemma \ref{delta} and the same argument as in \cite{Geck} yields $\overline{a}=-a$. Thus the identity $\overline{p}^{*}_{x,y}-p^{*}_{x,y}=a$ together with the condition $p^{*}_{x,y}\in\mathcal{A}_{<0}$ uniquely determines $p^{*}_{x,y}$. We have proved that the coefficients $p^{*}_{x,y}$ can be chosen such that $\tilde{C}_{yv}$ is fixed by the involution $h\mapsto \overline{h}$.
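The inductive step just described hinges on an elementary fact about $\mathcal{A}=\mathbb{Z}[v,v^{-1}]$: if $\overline{a}=-a$, then the negative-degree part of $a$, with its sign reversed, is the unique element $p\in\mathcal{A}_{<0}$ with $\overline{p}-p=a$. A minimal Python sketch of this step (our own illustration; the function names are not from the paper), with a Laurent polynomial encoded as an exponent-to-coefficient dictionary:

```python
# Solve bar(p) - p = a with p in v^{-1} Z[v^{-1}], assuming bar(a) = -a.
# A Laurent polynomial is a dict {exponent: nonzero integer coefficient};
# the bar involution sends v^n to v^{-n}.

def bar(f):
    return {-n: c for n, c in f.items()}

def neg(f):
    return {n: -c for n, c in f.items()}

def sub(f, g):
    """Difference f - g of two Laurent polynomials."""
    h = dict(f)
    for n, c in g.items():
        h[n] = h.get(n, 0) - c
    return {n: c for n, c in h.items() if c}

def solve_bar_equation(a):
    """Unique p in A_{<0} with bar(p) - p = a (requires bar(a) = -a,
    which forces the constant term of a to vanish)."""
    assert bar(a) == neg(a), "a must be anti-invariant under bar"
    return {n: -c for n, c in a.items() if n < 0}
```

For example, $a=2v^{3}-5v+5v^{-1}-2v^{-3}$ is anti-invariant, and the solver returns $p=-5v^{-1}+2v^{-3}$, which indeed satisfies $\overline{p}-p=a$.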
Furthermore, we have \begin{align*} \tilde{C}_{yv}&=T_{y}C_{v}+\underset{x\in X_{\lambda}}{\underset{x<y}{\sum}}p^{*}_{x,y}T_{x}C_{v}\\ &= T_{y}(T_{v}+\underset{v_{1}<v}{\sum}P_{v_{1},v}T_{v_{1}})+ \underset{x\in X_{\lambda}}{\underset{x<y}{\sum}}p^{*}_{x,y}T_{x}(\underset{v_{1}\leq v}{\sum}P_{v_{1},v}T_{v_{1}})\\ &=T_{y}T_{v}+\underset{v_{1}<v}{\sum}P_{v_{1},v}T_{y}T_{v_{1}}+ \underset{x\in X_{\lambda}}{\underset{x<y}{\sum}}\underset{v_{1}\leq v}{\sum}p^{*}_{x,y}P_{v_{1},v}T_{x}T_{v_{1}}. \end{align*} Now, in the above expression, the elements of the form $P_{v_{1},v}T_{x}T_{v_{1}}$, with $x\in X_{\lambda}$ and $v_{1}<v$, can give some coefficients in $\mathcal{A}_{\geq 0}$ (compare to the situation in \cite{Geck}). However, by Lemma \ref{endp}, we get $$\tilde{C}_{yv}=T_{yv}+\text{ an $\mathcal{A}_{<0}$-linear combination of $T_{z}$ with $\ell(z)<\ell(yv)$}$$ and then, by definition of the Kazhdan-Lusztig basis, we conclude that $\tilde{C}_{yv}=C_{yv}$ as required. \end{proof} \begin{Cor} \label{F} We have $$\mathcal{M}=\langle C_{yv}\mid y\in X_{\lambda}\rangle_{\mathcal{A}}.$$ \end{Cor} \begin{proof} Let $y\in X_{\lambda}$. By Proposition \ref{nb} we have $$C_{yv}=T_{y}C_{v}+\underset{x<y}{\underset{x\in X_{\lambda}}{\sum}}p^{*}_{x,y}T_{x}C_{v}.$$ Thus $C_{yv}\in\mathcal{M}$. Now, by a straightforward induction, we see that $$T_{y}C_{v}=C_{yv}+\text{ an $\mathcal{A}$-linear combination of $C_{xv}$},$$ which yields the required assertion. \end{proof} We can now prove Theorem \ref{main}.\\ Recall that $$N_{\lambda,z}=\{w\in W\mid w=xv,\ x\in X_{\lambda}\}.$$ Let $y\in X_{\lambda}$ and let $w\in W$ be such that $w\leq_{L} yv$. We want to show that $w\in N_{\lambda,z}$. To prove the theorem it is enough to consider the case where $C_{w}$ appears with a non-zero coefficient in $T_{s}C_{yv}$ for some $s\in S$. By Corollary \ref{F}, $C_{yv}\in\mathcal{M}$. Since $\mathcal{M}$ is a left ideal, we know that $T_{s}C_{yv}\in \mathcal{M}$. Using Corollary \ref{F} once more, we obtain $$T_{s}C_{yv}=\underset{y_{1}\in X_{\lambda}}{\sum}a_{y_{1},yv}C_{y_{1}v}$$ and this is the expression of $T_{s}C_{yv}$ in the Kazhdan-Lusztig basis of $\mathcal{H}$. We assumed that $C_{w}$ appears with a non-zero coefficient in that expression; therefore there exists $y_{1}\in X_{\lambda}$ such that $w=y_{1}v$, and $w\in N_{\lambda,z}$ as required. The theorem follows. \end{what}
\section{Introduction} \noindent A unique nonlinear feature of gravitational waves is that they leave an imprint on the spacetime they pass through \citep{Favata:2010}. This phenomenon is known as the gravitational wave memory effect, and it causes a permanent change in the separation between freely falling test particles \citep{Favata:2010,Bieri:2017arXiv}. It is one of the strong field effects of General Relativity (GR) that remains, to date, undetected in astrophysical observations \citep{Hubner:2021,Aggarwal:2019}. \noindent Gravitational memory effects were earlier classified into linear \citep{Zeldovich:1974,Braginsky:1985} and nonlinear effects \citep{Christodoulou:1991} based on the type of theory used to study them. For the linear memory effect, it was shown that, in the case of hyperbolic scattering, the metric perturbation before and after the event changes \citep{Zeldovich:1974}. This change was found to be related to the difference in the quadrupole moment of the radiation at late and early times \citep{Braginsky:1985}. On the nonlinear side, Christodoulou, using full GR, showed how gravitational waves travelling to null infinity give rise to an additional contribution to memory. Recent work \citep{Bieri_gaugeinv:2014} using gauge invariant observables has shown that this classification is a misnomer and that one can find both types of memory effects in linearized gravity. Subsequently, they have been identified as the ordinary and null memory effects, corresponding to gravitational radiation sourced by massive and massless particles respectively \citep{Tolish1:2014}. Similar works in this direction appear in \citep{Tolish:2014, Madler:2016,Madler:2017}. Moreover, an interesting theoretical connection has been conjectured between the memory effects, the Bondi-van der Burg-Metzner-Sachs (BMS) symmetries and soft theorems \citep{Strominger:2016,Strominger:2017}.
It is shown how the gravitational memory effect can be realized in asymptotically flat spacetimes as a transition between two degenerate Minkowski vacua having different numbers of soft hair \citep{Compere:2019}. Such relationships in infrared physics have been investigated, apart from GR, for Brans-Dicke gravity \citep{HouJHEP:2020,Hou1:2020,Tahura:2021} and Chern-Simons theory \citep{Hou1:2021}. \noindent In this work, we consider a much simplified scenario consisting of an exact plane wave spacetime \citep{Bondi:1959,Peres:1959}. Exact plane gravitational waves are nonlinear vacuum solutions of Einstein Field Equations. There already exists considerable amount of literature on memory effects in this spacetime \citep{Zhang:2017,Zhang:2017soft,Zhang:2018vel,Zhang:2018,Zhang:2018SL,Chak:2020,Cvetkovic:2021}. Our current work is motivated from an analysis done in \citep{Zhang:2017soft} where the authors demonstrate memory effects by studying geodesic evolution. In this geometry, there are {\em free functions} in the metric line element which may be used to appropriately define a pulse profile in the gravitational wave. The authors in \citep{Zhang:2017soft} carried out the analysis by solving the geodesic equations {\em numerically} for a Gaussian pulse profile. Parallel comoving geodesics were shown to exhibit monotonically increasing separation after encountering the pulse, thereby showing both displacement (change in the separation) and velocity (change in relative velocity) memory effects. The present work provides a complete {\em analytic} solution of the memory effect in the exact plane wave spacetime, {\em albeit} with a different pulse profile. To achieve this, we consider a {\em square pulse} for our calculation, which, not only leads to exact solvability but is also representative of the basic physics related to memory. 
\noindent Using the same pulse profile, earlier work on geodesic congruences by the current authors has shown that gravitational waves cause timelike geodesic congruences to focus, while the shear is found to diverge \citep{Chak:2020}. The focusing depends on the amplitude and width of the pulse. This type of memory effect is called ${\cal B}$-memory in the literature \citep{Loughlin:2019,Srijit:2019}. As mentioned earlier, in this paper we work out displacement and velocity memory effects for geodesics traversing this spacetime. Though in ${\cal B}$-memory we study the gradient of the velocity field, it is not the same as velocity memory. \noindent A square pulse profile in an exact plane gravitational wave spacetime gives rise to a {\em sandwich wave geometry}. Such a geometry has two flat Minkowski regions separated by a region containing gravitational waves. We solve the geodesic equations analytically in each of the three regions. The geodesic solutions along the transverse directions are assumed to be continuous and differentiable at the boundaries of the pulse. The evolution of the geodesic separation is noted in all three regions. We first examine the separate cases of plus and cross polarization and then study the case with both polarizations non-zero. In all the cases, we find monotonic displacement memory and constant-shift velocity memory effects. Furthermore, a separate analysis is carried out to understand the effects of gravitational wave memory on a ring of particles. The deformation of the configuration is shown to be related to the shape of the pulse. Relevant kinematical variables are computed and the results are shown to be consistent with those obtained in earlier work \citep{Harte:2012,Chak:2020}. Finally, a brief note on the memory effect obtained using the geodesic deviation equation is also provided. The two methods (geodesic and deviation) are shown to yield identical results.
\noindent The organization of the paper is as follows. In Section II, we discuss exact plane wave spacetimes. Section II A describes the various coordinate systems used for studying gravitational plane waves. In Section II B, we focus on the sandwich wave geometry. Section II C gives the geodesic equations in this spacetime. Memory effects inferred from studying the geodesic equations for pure plus and pure cross polarization are given in Sections III A and B respectively. The analysis for a ring of particles is also provided under the same respective sections. Memory effects when both polarizations are simultaneously present are discussed in Section III C. Section III D touches upon memory effects using the geodesic deviation equation. A brief summary of the results presented in this paper is given in Section IV. \vspace{-0.2in} \section{Exact plane wave spacetimes} \noindent The family of {\em pp}-wave (plane fronted waves with parallel rays) spacetimes is defined by the existence of a covariantly constant null vector field \citep{Stephani:2003,Griffiths:2009}. Exact plane wave spacetimes fall within the class of general {\em pp}-wave spacetimes where the Riemann curvature is constant over each wavefront \citep{Griffiths:2009}. There are two standard coordinate systems in which the line element may be written\textemdash (a) Baldwin--Jeffery--Rosen (BJR) coordinates \citep{Rosen:1937} and (b) Brinkmann coordinates \citep{Brinkmann:1925}. We first discuss each of the coordinate systems and state the reason for choosing the Brinkmann coordinates in the ensuing calculations on memory effects. Thereafter, we describe the sandwich wave spacetime geometry used in this work, followed by a discussion of the geodesic equations in such spacetimes. \subsection{Coordinate systems} \vspace{-0.1in} \subsubsection{BJR coordinates} \noindent The line element in BJR coordinates is given below.
\begin{align}\label{eq:BJR} ds^2=-2dudV+a_{ij}(u)dX^idX^j \end{align} \noindent Here, $u$ is the retarded null coordinate. Writing $u=t-z$ and $V=t+z$, we find that the curvature disturbances propagate along the $z$-direction with the speed of light. The spacetime gets distorted in the space orthogonal ({\em i.e.} along $X^1, X^2$) to the direction of propagation. Constant $u$ hypersurfaces correspond to planar wavefronts. Hence, the above line element represents a plane gravitational wave spacetime. The function $a_{ij}(u)$ encodes the gravitational wave field. If $a_{ij}=\mathbb{1}_{2\times2}$, the full metric is simply Minkowskian. Moreover, in the transverse traceless (TT) gauge, linearized plane waves can be written down in a perturbative form in this coordinate system, i.e. \begin{equation} a_{ij}=\eta_{ij}+h_{ij} \end{equation} Thus, one may say that the exact plane wave spacetimes (as given in Eq.(\ref{eq:BJR})) in full, nonlinear GR are, in some sense, generalizations of the linearized gravitational plane waves expressed in the TT gauge. \noindent In BJR coordinates, Ricci flatness of plane wave spacetimes results in a Sturm--Liouville-like equation \citep{Zhang:2017soft}, \begin{equation} \ddot{\chi}+\omega^2(u)\chi=0 \label{eq:det_BJR} \end{equation} \noindent Here, $\chi=\det(a_{ij})^{1/4}$ and $\omega^2(u)=\dfrac{1}{8}\mathrm{Tr}[(\bm{\gamma}^{-1} \dot{\bm{\gamma}})^2]$ where $\gamma_{ij}=\chi^{-2} a_{ij}$. Since $\omega^2(u)>0$, Eq.(\ref{eq:det_BJR}) denotes an attractive oscillator with a time($u$)-dependent frequency. Hence, $\ddot{\chi}<0$ wherever $\chi>0$, which implies that $\chi$ vanishes at some finite value of $u$. Since the determinant of $a_{ij}$ vanishes at some $u$-value for the plane wave metric, this results in a coordinate singularity. Thus, we do not use BJR coordinates any further for our calculations in this article.
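\noindent As a quick independent check of this focusing argument, one can integrate Eq.(\ref{eq:det_BJR}) numerically and watch $\chi$ reach zero at finite $u$. The sketch below is an illustration only: the positive frequency profile $\omega^2(u)$ is an arbitrary choice, not derived from a particular $a_{ij}(u)$.

```python
import math

# Integrate chi'' + omega^2(u) chi = 0 with a hand-rolled RK4 scheme and
# locate the first zero of chi.  The frequency profile below is an
# arbitrary positive function chosen purely for illustration.
def omega_sq(u):
    return 1.0 + 0.5 * math.sin(u) ** 2   # > 0 everywhere

def first_zero_of_chi(u0=0.0, chi0=1.0, dchi0=0.0, du=1e-3, u_max=50.0):
    u, chi, dchi = u0, chi0, dchi0

    def f(state, uu):
        c, dc = state
        return (dc, -omega_sq(uu) * c)

    while u < u_max:
        k1 = f((chi, dchi), u)
        k2 = f((chi + 0.5*du*k1[0], dchi + 0.5*du*k1[1]), u + 0.5*du)
        k3 = f((chi + 0.5*du*k2[0], dchi + 0.5*du*k2[1]), u + 0.5*du)
        k4 = f((chi + du*k3[0], dchi + du*k3[1]), u + du)
        chi_new  = chi  + du/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dchi_new = dchi + du/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        if chi_new <= 0.0:
            return u + du          # chi has reached zero at finite u
        u, chi, dchi = u + du, chi_new, dchi_new
    return None

u_star = first_zero_of_chi()
print(u_star)
```

Any other positive $\omega^2(u)$ leads to the same qualitative outcome, in line with the concavity argument above.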
\vspace{-0.2in} \subsubsection{Brinkmann coordinates} \noindent The line element in the Brinkmann coordinate system is, \begin{equation} ds^2= -H(u,x,y) du^2 -2 dudv + dx^2+dy^2 \label{eq:brinkmann} \end{equation} \noindent Coordinate transformations relating BJR to Brinkmann coordinates can be found in \citep{Zhang:2017soft}. Note that the $u$-coordinate is the same in both coordinate systems. Consistent with the definition of a {\em pp}-wave, the null vector field $\partial_v$ is covariantly constant ($\nabla_\mu\partial_v=0$). Here, $u$-constant hypersurfaces are flat due to the absence of matter. These wave surfaces are spanned by the vectors $\partial_x$ and $\partial_y$. For vacuum solutions, the metric function $H(u,x,y)$ satisfies, \begin{equation} H,_{xx}+H,_{yy}=0 \label{eq:laplacian} \end{equation} \noindent The general solution \footnote{For Maxwell fields we have a third term in $H(u,x,y)$ of the form $B(u)(x^2+y^2)$. } of Eq.(\ref{eq:laplacian}) is \begin{equation} H(u,x,y)=\dfrac{1}{2}A_+(u)[x^2-y^2]+A_{\times}(u)xy \end{equation} \noindent Such a solution for $H(u,x,y)$ is also consistent with the vacuum wave equation. The functions $A_+(u)$ (plus) and $A_{\times}(u)$ (cross) denote the pulse profiles of the two polarizations of the gravitational wave field. The nonzero Riemann tensor components then become \begin{equation} R_{xuxu}=\frac{1}{2}A_+(u) \hspace{1.5cm} R_{yuyu}=-\frac{1}{2}A_+(u) \hspace{1.5cm} R_{xuyu}=\frac{1}{2}A_\times(u) \label{eq:curvature} \end{equation} \noindent From Eq.(\ref{eq:curvature}) we find that the Riemann curvature is the same along $u$-constant hypersurfaces (hence the name {\em plane}). These denote planar wavefronts. Since the curvature of the spacetime depends only on the retarded time coordinate $u$ and not on its derivative, one can construct sandwich and shock waves for appropriately chosen pulse profiles $A_+(u)$ and $A_\times(u)$ \citep{Griffiths:2009}.
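\noindent As a small consistency check (a sketch independent of the main derivation; the constant amplitudes used below are arbitrary), one can verify by central finite differences that the quadratic profile $H(u,x,y)$ above indeed satisfies Eq.(\ref{eq:laplacian}):

```python
# Finite-difference check that H(u,x,y) = (1/2) A_plus (x^2 - y^2) + A_cross x y
# is harmonic in the transverse plane, i.e. H_xx + H_yy = 0 (vacuum condition).
A_plus, A_cross = 1.7, -0.4   # arbitrary constant amplitudes for the check

def H(x, y):
    return 0.5 * A_plus * (x**2 - y**2) + A_cross * x * y

def transverse_laplacian(x, y, h=1e-4):
    Hxx = (H(x + h, y) - 2*H(x, y) + H(x - h, y)) / h**2
    Hyy = (H(x, y + h) - 2*H(x, y) + H(x, y - h)) / h**2
    return Hxx + Hyy

for (x, y) in [(0.3, -1.2), (2.0, 0.7), (-0.5, -0.5)]:
    print(abs(transverse_laplacian(x, y)))   # ~ 0 up to round-off
```

Since $H$ is quadratic in the transverse coordinates, the central differences are exact up to floating-point round-off.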
In our work, we consider such a sandwich wave geometry to understand analytically the nonlinear feature of gravitational waves known as the {\em memory effect}. \noindent Note that for exact plane waves, all the curvature scalars vanish. Thus, any singularity present in the spacetime is detected by computing the tidal tensor, which remains regular for {\em nonsingular} pulse profiles \citep{Blau:2011}. Henceforth, we perform all the calculations using the Brinkmann coordinate system. \subsection{Sandwich wave spacetime geometry} \noindent The sandwich wave geometry is easily visualized in BJR coordinates. Consider a curved wave region sandwiched between two flat Minkowski spacetimes. The wave travels along the $V$-direction. The null hyperplanes $u=u_1$ and $u=u_2$ are the boundaries of the curved wave region in this geometry. \vspace{-0.2cm} \begin{figure}[H] \centering \includegraphics[scale=0.4]{Screenshot.eps} \caption{Sandwich wave spacetime construction in BJR coordinates. The wave region is shown in blue. $u_1$ and $u_2$ correspond to wavefronts of the exact plane wave.
The flat spacetimes correspond to $\ddot{a}_{ij}(u)=0$ while in the wave region, $\ddot{a}_{ij}(u)\neq 0$.} \label{fig:BJR_sandwich_wave} \end{figure} \noindent A similar construction in Brinkmann coordinates follows from choosing suitable pulse profiles $A_+(u)$ or $A_\times(u)$. We work with the following analytical form of the pulse profile \begin{equation} A_+(u)=2A_0^2[\Theta(u+a)-\Theta(u-a)]. \label{eq:pluspulse} \end{equation} \begin{figure}[H] \centering \includegraphics[scale=0.5]{pulse_profile.eps} \caption{\small{Square pulse with $a=0.5$, $A_0=1$.}} \label{fig:Squarepulse} \end{figure} \noindent Fig.(\ref{fig:Squarepulse}) represents a sandwich wave spacetime geometry in Brinkmann coordinates. Region-I ($u\leq-a$) and Region-III ($u\geq+a$) are flat while in Region-II ($-a\leq u\leq +a$), there is a constant amplitude pulse.
This is similar to Fig.(\ref{fig:BJR_sandwich_wave}) which was obtained in BJR coordinates. The wavefronts are denoted by blue dashed lines in Fig.(\ref{fig:Squarepulse}). The width and amplitude of the pulse profile are given by $2a$ and $A_0$ respectively. Comparing with Fig.(\ref{fig:BJR_sandwich_wave}) gives $u_1=-a$ and $u_2=+a$. \noindent In the following subsections, we illustrate how the choice of a simple profile like the square pulse leads to exactly integrable geodesic solutions, which aid our understanding of the memory effect. \subsection{The geodesic equations} \noindent The geodesic equations in Brinkmann coordinates with both polarizations non-zero are given below. \begin{align}\label{eq:combined_x} \ddot{x}=-\frac{1}{2}A_+(u)x-\frac{1}{2}A_\times(u)y \end{align} \begin{align}\label{eq:combined_y} \ddot{y}=\frac{1}{2}A_+(u)y-\frac{1}{2}A_\times(u)x \end{align} \begin{align}\label{eq:combined_v} \ddot{v}+\frac{1}{4}\frac{dA_+(u)}{du}(x^2-y^2)+A_+(u)(x\dot{x}-y\dot{y})+A_\times(u)(y\dot{x}+x\dot{y})+\frac{1}{2}\frac{dA_\times(u)}{du}xy=0 \end{align} \noindent Notice that we have used $u$ as an affine parameter. This is easily checked by writing down the geodesic equation for the $v$ coordinate. The overdot denotes differentiation w.r.t. $u$. The general form of $\dot{v}(u)$ can be obtained from the geodesic Lagrangian (derived from the metric) in Eq.(\ref{eq:brinkmann}). \begin{equation} \dot{v}=\dfrac{1}{2}(\dot{x}^2+\dot{y}^2)-\dfrac{1}{4}A_+(u)\big( x^2-y^2\big)-\dfrac{1}{2}A_\times(u)xy+\dfrac{k}{2} \label{eq:first_integral_v} \end{equation} \noindent Here, $k$ is $0$ or $1$ for null or timelike geodesics respectively. The first integral of the coordinate $v(u)$ can be further integrated using Eqs.(\ref{eq:combined_x}) and (\ref{eq:combined_y}) to yield \begin{equation} v(u)=v_0+\frac{1}{2}(x\dot{x}+y\dot{y}+ku) \label{eq:v_soln} \end{equation} \noindent Here, $v_0$ is the integration constant.
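\noindent Eq.(\ref{eq:v_soln}) can be verified numerically. The sketch below (an illustration only: the smooth Gaussian profiles are a test choice, not the square pulse used later) integrates Eqs.(\ref{eq:combined_x}), (\ref{eq:combined_y}) and (\ref{eq:first_integral_v}) with a fourth-order Runge--Kutta scheme and compares the integrated $v$ with the closed form $v_0+\frac{1}{2}(x\dot{x}+y\dot{y}+ku)$:

```python
import math

# RK4 check of Eq.(v_soln): v(u) = v0 + (x xdot + y ydot + k u)/2.
# We integrate the transverse geodesic equations together with the first
# integral for vdot, using smooth (Gaussian) pulse profiles chosen purely
# for this test; any profile works.
def A_plus(u):  return 1.3 * math.exp(-u**2)
def A_cross(u): return 0.6 * math.exp(-(u - 0.2)**2)

k, v0 = 1.0, 1.0

def rhs(u, s):
    x, xd, y, yd, v = s
    xdd = -0.5*A_plus(u)*x - 0.5*A_cross(u)*y
    ydd =  0.5*A_plus(u)*y - 0.5*A_cross(u)*x
    vd  = 0.5*(xd**2 + yd**2) - 0.25*A_plus(u)*(x**2 - y**2) \
          - 0.5*A_cross(u)*x*y + 0.5*k
    return (xd, xdd, yd, ydd, vd)

def rk4(s, u, du):
    k1 = rhs(u, s)
    k2 = rhs(u + du/2, tuple(si + du/2*ki for si, ki in zip(s, k1)))
    k3 = rhs(u + du/2, tuple(si + du/2*ki for si, ki in zip(s, k2)))
    k4 = rhs(u + du,   tuple(si + du*ki  for si, ki in zip(s, k3)))
    return tuple(si + du/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

u, du = -6.0, 1e-3
s = (1.0, 0.0, 2.0, 0.0, v0 + 0.5*k*u)   # comoving start; v from Eq.(v_soln)
while u < 6.0:
    s = rk4(s, u, du)
    u += du

x, xd, y, yd, v = s
v_closed = v0 + 0.5*(x*xd + y*yd + k*u)
print(abs(v - v_closed))   # agreement up to integrator error
```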
Thus, for any pulse of a given polarization, if Eqs.(\ref{eq:combined_x}) and (\ref{eq:combined_y}) for $x(u)$ and $y(u)$ are analytically solvable, then $v(u)$ can also be obtained analytically. \noindent Corresponding to the sandwich wave geometry, we find that the right-hand sides of Eqs.(\ref{eq:combined_x}) and (\ref{eq:combined_y}) are nonzero only in Region-II of Fig.(\ref{fig:Squarepulse}). \section{Gravitational memory effect} \noindent Gravitational wave memory effects can be analysed using either the geodesic equations or the geodesic deviation equations or both \citep{Zhang:2017,Chak1:2020, Siddhant:2020}. The general methodology used for finding the geodesic memory effect is listed below. \noindent \textbullet A pulse profile is chosen to qualitatively mimic a gravitational wave burst scenario. In this article, we use a square pulse as shown in Fig.(\ref{fig:Squarepulse}). \noindent \textbullet Setting the initial transverse velocity to zero and using the chosen pulse profile, one solves (analytically or numerically) the geodesic equations for two or more geodesics along the spatial directions. \noindent \textbullet The evolution of the {\em geodesic separation} is studied. The change in the value of the geodesic separation before and after the passage of the gravitational wave pulse gives the {\em displacement memory effect}. \noindent \textbullet Differentiating the geodesic solutions (the solutions along $x$ and $y$ should be at least $C^1$) and studying their evolution gives the {\em velocity memory effect}. In general, pulse profiles satisfying $\int_{-\infty}^{+\infty} A_+(u) du \neq 0$ always have nonzero velocity memory \citep{Flanagan:2020,Divarkala:2021}. \noindent \textbullet Memory effect, in general, is defined by the difference in geodesic separation at infinite future ($u\to+\infty$) and infinite past ($u\to-\infty$). In our article, however, we find focusing of geodesics at a finite value of $u$.
The point of focusing can be pushed to a higher $u$-value by appropriately choosing the initial data and the parameters defining the geometry of the pulse. Such focusing in plane wave spacetimes has been studied in works like \citep{Penrose:1965,Harte:2012,Chak:2020}. This benign focusing of the geodesics, starting from an initial parallel configuration, provides a smoking gun for the presence of {\em gravitational memory} in the current context. \noindent In the following, we mainly focus our study on the geodesic equations in order to understand memory effects arising in such spacetimes. We will consider two different scenarios. First, the geodesic separation between a pair of geodesics is analyzed and then, the shape evolution of a ring of particles (each particle moving along a geodesic) is studied. In both these cases, we will show how memory effects are encoded in the occurrence of benign focusing. Later, we discuss the gravitational memory effect obtained by solving the geodesic deviation equation and show that the results are identical to those of the geodesic analysis. \subsection{Memory effects for plus polarization} \noindent We begin our calculations assuming $A_\times(u)=0$. The geodesic equations for the transverse spatial coordinates $x$ and $y$ for such a profile (given in (\ref{eq:pluspulse})) become, \begin{gather} \ddot{x}+\frac{1}{2}A_+(u)x=0 \label{eq:x_sqp}\\ \ddot{y}-\frac{1}{2}A_+(u)y=0 \label{eq:y_sqp} \end{gather} \noindent The geodesic equations (\ref{eq:x_sqp}) and (\ref{eq:y_sqp}) resemble a non-relativistic oscillator (and an inverted oscillator). The pulse profiles act as squared frequencies which are {\em time} (or $u$)-dependent \citep{Chak:2020}.
Solving Eqs.(\ref{eq:x_sqp}) and (\ref{eq:y_sqp}) yields, \begin{align} x(u)= \begin{cases} \alpha & u\leq -a \\ \alpha \cos[A_0(u+a)] & -a\leq u \leq a\\ \alpha A_0(a-u)\sin[2aA_0]+\alpha \cos[2aA_0] & u\geq a \end{cases} \label{eq:xsolsqp} \end{align} \begin{align} y(u)= \begin{cases} \beta & u\leq -a \\ \beta \cosh[A_0(u+a)] & -a\leq u \leq a\\ \beta A_0(u-a)\sinh[2aA_0]+\beta \cosh[2aA_0] & u\geq a \end{cases} \label{eq:ysolsqp} \end{align} \noindent Eqs.(\ref{eq:xsolsqp}) and (\ref{eq:ysolsqp}) show the behaviour of a single geodesic along the $x$ and $y$ directions in all three regions. Considering two neighbouring geodesics with initial positions $(x_1=\alpha_1, y_1= \beta_1)$ and $(x_2=\alpha_2, y_2=\beta_2)$ respectively, we find that the geodesic separation remains constant until the gravitational wave pulse arrives. After the pulse leaves (Region-III, $u>a$), the final separation depends on the initial positions of the geodesics and also on the height and width of the pulse. At $u=a+A_0^{-1}\cot(2aA_0)$, we note that the separation along the $x$ direction [$x_2(u)-x_1(u)$] vanishes irrespective of the values of $\alpha_1$ and $\alpha_2$. No such behaviour is found along the $y$-direction. This represents focusing of geodesic trajectories at a finite value of $u$. Substituting the analytical solutions of the geodesics for the coordinates $x$ and $y$ from Eqs.(\ref{eq:xsolsqp}) and (\ref{eq:ysolsqp}) respectively in the R. H. S.
of Eq.(\ref{eq:v_soln}) gives, \begin{equation} v(u)= \begin{cases} v_0+\dfrac{k}{2}u & u\leq -a \vspace{0.15in}\\ v_0+\dfrac{k}{2}u+\dfrac{A_0}{4}\bigg(\beta ^2\sinh[2(u+a)A_0]-\alpha^2\sin[2(u+a)A_0]\bigg) & -a\leq u \leq a \vspace{0.15in} \\ v_0+ \dfrac{k}{2}u -\dfrac{\alpha^2A_0}{4}\bigg(\sin[4aA_0]+2A_0(a-u)\sin^2[2aA_0]\bigg) \vspace{0.05in}\\ + \dfrac{\beta^2A_0}{4}\bigg(\sinh[4aA_0]+2A_0(u-a)\sinh^2[2aA_0]\bigg) & u\geq a \end{cases} \label{eq:vsolsqp} \end{equation} \noindent where $k$ is equal to 0 or 1 for null or timelike geodesics respectively. Eq.(\ref{eq:vsolsqp}) shows that $v(u)$ is continuous. This is because both $x(u)$ and $y(u)$ are continuous and differentiable. \begin{figure}[H] \centering \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{x_plus_mem.eps} \caption{ \centering{\small $x$-direction}} \label{fig:x_plus} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{y_plus_mem.eps} \caption{\centering{\small $y$-direction}} \label{fig:y_plus} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{v_plus_mem.eps} \caption{\centering{\small $v$-direction}} \label{fig:v_plus} \end{subfigure} \caption{\small{Displacement memory effect along $x,y,v$ directions for the first (blue) and second (red) geodesics respectively. The plots are done using the following values of the parameters: $A_0=1, a=0.5, k=1, v_0=1, \alpha= 1$(blue), 2(red), $\beta$ = 3(blue), 4(red). The curved wave region (Region-II) is shown (in all the plots) to be the region inside the purple vertical lines.}} \label{fig:Disp_memory_plus} \end{figure} \noindent The plots \footnote{All the plots in this article have been generated using {\em Mathematica 12}.} in Fig.(\ref{fig:Disp_memory_plus}) show the evolution of the geodesic separation along the $x$, $y$ and $v$ coordinates.
Along the $x$ and $y$ directions we find that the separation is monotonically increasing and thus, there is non-zero {\em displacement memory}. We also find from Fig.(\ref{fig:x_plus}) that there is a point where the trajectories meet. {As discussed earlier, this does not signify that the spacetime is singular. It only refers to the formation of a {\em benign caustic} \citep{Chak:2020}.} Along the $v$-direction shown in Fig.(\ref{fig:v_plus}), we find that the geodesics are non-differentiable at $u=-a,+a$. This is due to the discontinuous nature of the pulse profile. \noindent Since the geodesic solutions along $x$ and $y$ show monotonically increasing displacement memory, they should also exhibit velocity memory. We write down the velocities of the geodesics by differentiating Eqs.(\ref{eq:xsolsqp}), (\ref{eq:ysolsqp}) and (\ref{eq:vsolsqp}). \begin{align} \dot{x}(u)=\begin{cases} 0 & u\leq -a\\ -\alpha A_0\sin[(u+a)A_0] & -a\leq u\leq a\\ -\alpha A_0\sin[2aA_0] & u\geq a \end{cases} \label{eq:xvelsqp} \end{align} \begin{align} \dot{y}(u)=\begin{cases} 0 & u\leq -a\\ \beta A_0\sinh[(u+a)A_0] & -a\leq u\leq a\\ \beta A_0 \sinh[2aA_0] & u\geq a \end{cases} \label{eq:yvelsqp} \end{align} \begin{align} \dot{v}(u)=\begin{cases} \dfrac{k}{2} & u<-a \vspace{0.15in}\\ \dfrac{k}{2}+\dfrac{A_0^2}{2}\bigg(\beta^2\cosh[2A_0(u+a)]-\alpha^2\cos[2A_0(u+a)]\bigg) & -a< u< a \vspace{0.15in}\\ \dfrac{k}{2}+\dfrac{A_0^2}{2}\bigg(\alpha^2\sin^2[2aA_0]+\beta^2\sinh^2[2aA_0]\bigg) & u> a \end{cases} \label{eq:vvelsqp} \end{align} \noindent Eqs.(\ref{eq:xvelsqp}) and (\ref{eq:yvelsqp}) show that both $\dot{x}(u)$ and $\dot{y}(u)$ are continuous. But they are not differentiable at $u=-a, +a$. Moreover, we find that $\dot{v}(u)$ is only piecewise continuous since $v(u)$ is not differentiable, as shown in Fig.(\ref{fig:v_plus}).
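\noindent The piecewise solution Eq.(\ref{eq:xsolsqp}) and the focusing value quoted above can be checked numerically. The sketch below (with the figure parameters $a=0.5$, $A_0=1$; $\alpha$ is an arbitrary choice) verifies the $C^1$ matching at $u=\pm a$, the oscillator equation inside the pulse, and the vanishing of $x(u)$ at $u^*=a+A_0^{-1}\cot(2aA_0)$:

```python
import math

# Checks on the piecewise solution Eq.(xsolsqp) for the plus-polarized
# square pulse (figure parameters a = 0.5, A0 = 1; alpha arbitrary):
# (i)  C^1 matching at u = -a and u = +a,
# (ii) the oscillator equation xddot + A0^2 x = 0 inside the pulse
#      (A_+ = 2 A0^2 there, so (1/2) A_+ = A0^2),
# (iii) focusing: x(u*) = 0 at u* = a + cot(2 a A0)/A0; since x is
#       linear in alpha, the zero is independent of alpha.
a, A0, alpha = 0.5, 1.0, 1.3

def x_sol(u):
    if u <= -a:
        return alpha
    if u <= a:
        return alpha * math.cos(A0 * (u + a))
    return alpha * (A0 * (a - u) * math.sin(2*a*A0) + math.cos(2*a*A0))

h = 1e-6
for ub in (-a, a):
    cont = abs(x_sol(ub + h) - x_sol(ub - h))                      # C^0
    kink = abs((x_sol(ub + h) - x_sol(ub)) / h
               - (x_sol(ub) - x_sol(ub - h)) / h)                  # C^1
    print(cont, kink)          # both O(h): the matching is C^1

h2, u_mid = 1e-4, 0.1          # interior point of Region-II
xdd = (x_sol(u_mid + h2) - 2*x_sol(u_mid) + x_sol(u_mid - h2)) / h2**2
print(abs(xdd + A0**2 * x_sol(u_mid)))      # ODE residual ~ 0

u_star = a + math.cos(2*a*A0) / (A0 * math.sin(2*a*A0))
print(abs(x_sol(u_star)))                   # focusing: x vanishes at u*
```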
\begin{figure}[H] \centering \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{x_plus_vel.eps} \caption{\centering{\small $x$-direction}} \label{fig:x_vel_plus} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{y_plus_vel.eps} \caption{\centering{\small $y$-direction}} \label{fig:y_vel_plus} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{v_plus_vel.eps} \caption{\centering{\small $v$-direction}} \label{fig:v_vel_plus} \end{subfigure} \caption{{\small Velocity memory effect along $x,y,v$ directions.}} \label{fig:velocity_memory_plus} \end{figure} \noindent Fig.(\ref{fig:velocity_memory_plus}) shows the nature of the velocity profiles of the two geodesics. Along the $x$ and $y$-directions, we find that initially the two geodesics are comoving. After the passage of the pulse, there is a finite velocity difference. The velocity of each geodesic builds up in the wave region and settles to a constant value (different for different geodesics). This is termed {\em constant shift velocity memory}. As explained earlier, we observe a discontinuity at $u=-a,+a$ along the $v$-direction (Fig.(\ref{fig:v_vel_plus})). \vspace{0.2in} { \noindent \textbf{Memory effect for a ring of particles?} \noindent In our previous calculation we examined how two comoving geodesics behave in the presence of a gravitational wave pulse. Here, we extend this investigation to a ring of particles. From elementary treatments of gravitational waves, one might expect a ring of particles to settle back into its initial configuration after the pulse has left. Generally, however, this scenario is {\em modified} by the gravitational wave memory effect: the shape of the ring is permanently distorted. In the following, we show analytically how the configuration of a ring of particles changes after the departure of the pulse, using our simplified setup.
\noindent In Region-I ($u\leq -a$), the solutions are $x(u)=\alpha =r\cos\phi$ and $y(u)=\beta=r\sin\phi$. Thus, the locus corresponding to the initial configuration is a circle: $x^2+y^2=r^2$. In Region-III, we have (Eqs.(\ref{eq:xsolsqp}) and (\ref{eq:ysolsqp})), \begin{gather} x(u)= r [\cos (2\xi)- (\nu-1) \xi \sin(2\xi)] \cos \phi =R_1 \cos\phi \label{eq:ring_x_III}\\ y(u) = r [\cosh (2\xi)+ (\nu-1) \xi \sinh(2\xi)] \sin \phi =R_2 \sin\phi \label{eq:ring_y_III} \end{gather} \noindent In Eqs.(\ref{eq:ring_x_III}) and (\ref{eq:ring_y_III}), $\xi= a A_0$ and $u=\nu a$ (so $\nu>1$ in Region-III). Moreover, one can check that $R_1\neq R_2$. Hence, after the pulse departs, the locus becomes an {\em ellipse}. \begin{equation} \dfrac{x^2}{R_1^2}+\dfrac{y^2}{R_2^2}=1 \label{eq:ellipse_plus} \end{equation} \begin{figure}[H] \centering \includegraphics[scale=0.9]{ring_plus.eps} \caption{Change in configuration of a ring of particles from a circle to an ellipse upon the passage of a gravitational wave pulse having only plus polarization.} \label{fig:circle_ellipse} \end{figure} \noindent Fig.(\ref{fig:circle_ellipse}) demonstrates the change in the shape of the configuration induced by the pulse. The nature of the ellipse is determined by $\xi$. Thus, the geometry of the pulse determines the change in shape of the configuration. Now, in order to observe focusing we have to check whether $R_1$ and/or $R_2$ vanishes. Setting $R_1=0$ we get, \begin{equation} \tan (2\xi)=\frac{1}{(\nu-1)\xi} \label{eq:R_1} \end{equation} \noindent Similarly, setting $R_2=0$ gives, \begin{equation} \tanh (2\xi) = -\frac{1}{(\nu-1)\xi} \label{eq:R_2} \end{equation} \noindent Eqs.(\ref{eq:R_1}) and (\ref{eq:R_2}) are transcendental and hence analytic solutions are not possible. We look for solutions graphically.
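\noindent Alongside the graphical treatment, the roots can also be located numerically. A minimal sketch (taking $r=1$ and the figure value $\xi=aA_0=0.5$): $R_1(\nu)$ vanishes at $\nu^{*}=1+\cot(2\xi)/\xi$, which is the same focusing point as $u^{*}=a+A_0^{-1}\cot(2aA_0)$ found earlier, while $R_2(\nu)$ stays positive for $\nu>1$, since $\tanh(2\xi)>0$ can never equal the negative right-hand side of Eq.(\ref{eq:R_2}):

```python
import math

# Numerical look at the degeneration of the ellipse (r set to 1):
# R1(nu) = cos(2 xi) - (nu - 1) xi sin(2 xi),
# R2(nu) = cosh(2 xi) + (nu - 1) xi sinh(2 xi),
# with xi = a*A0 and u = nu*a.  Figure values: a = 0.5, A0 = 1, so xi = 0.5.
a, A0 = 0.5, 1.0
xi = a * A0

def R1(nu): return math.cos(2*xi) - (nu - 1) * xi * math.sin(2*xi)
def R2(nu): return math.cosh(2*xi) + (nu - 1) * xi * math.sinh(2*xi)

# Eq.(R_1) solved explicitly: nu* = 1 + cot(2 xi)/xi (valid when cot(2 xi) > 0)
nu_star = 1 + math.cos(2*xi) / (xi * math.sin(2*xi))
u_star = nu_star * a        # same focusing point as u* = a + cot(2 a A0)/A0
print(nu_star, u_star, R1(nu_star))

# Eq.(R_2) would need tanh(2 xi) = -1/((nu - 1) xi) < 0: impossible for nu > 1.
print(min(R2(1 + 0.01*n) for n in range(2000)))   # stays positive
```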
\begin{figure}[H] \centering \includegraphics[scale=0.7]{ringplus.eps} \caption{Solutions for the transcendental equations (\ref{eq:R_1}) and (\ref{eq:R_2}) are shown in the left ($R_1=0$) and right plots ($R_2=0$) respectively.} \label{fig:ring_plus} \end{figure} \noindent Fig.(\ref{fig:ring_plus}) shows that there exist solutions for $\xi$ when $R_1=0$. But no solution exists for $R_2=0$. Thus, we find that the ellipse degenerates to a straight line along the $y$-direction. This is consistent with our previous exercise where we had shown the formation of a benign caustic as the geodesic separation along the $x$-direction vanishes ({\em i.e.} $x_2-x_1=0$ in $u>a$) at a finite $u$-value. One may calculate the expansion, shear and rotation (collectively called the {\em kinematic variables}) corresponding to this two-dimensional deformation. Following the formalism given in \citep{Shaikh:2014}, the gradient of the velocity vector field can be written as, \begin{equation} \partial_j U^i= \large{ \begin{pmatrix} \frac{\theta}{2}+\sigma_+ & \sigma_\times+\omega \\ \sigma_\times - \omega & \frac{\theta}{2} - \sigma_+ \label{eq:kv_ring} \end{pmatrix}} \end{equation} \noindent The velocity vector field is $U^i=(\dot{x},\dot{y})$. Given Eq.(\ref{eq:kv_ring}), we can find all the kinematic variables using the following equations. \begin{equation} \begin{split} & \theta = \partial_x \dot{x} +\partial_y \dot{y} \hspace{2.8cm} \sigma_+= \frac{1}{2}(\partial_x \dot{x} -\partial_y \dot{y}) \\ & \sigma_\times= \frac{1}{2}(\partial_y \dot{x} +\partial_x \dot{y}) \hspace{2cm} \omega=\frac{1}{2}(\partial_y \dot{x} -\partial_x \dot{y}) \end{split} \end{equation} \noindent From Eqs.(\ref{eq:ring_x_III}) and (\ref{eq:ring_y_III}) we find that \begin{equation} \theta= \bigg(\dfrac{\dot{R}_1}{R_1}+\dfrac{\dot{R}_2}{R_2}\bigg) \hspace{1.5cm} \sigma_+=\dfrac{1}{2}\bigg(\dfrac{\dot{R}_1}{R_1}-\dfrac{\dot{R}_2}{R_2}\bigg) \hspace{1.5cm} \sigma_\times=\omega=0.
\label{eq:kv_plus} \end{equation} \noindent Setting $R_1=0$, we find that $\theta\to-\infty$ and $\sigma_+\to-\infty$. This is shear-induced focusing, which was earlier shown in the context of ${\cal B}$-memory by the present authors in \citep{Chak:2020}. Since the expansion variable captures the area of the deformation \citep{Kar:2006}, its divergence to minus infinity (focusing) is in agreement with the vanishing of the area of the ellipse (a straight line of length $2R_2$ along the $y$-axis) when $R_1= 0$. } \subsection{Memory effects for cross polarization} \noindent We now turn to the other case, that of cross polarization, i.e. $A_+(u)=0, A_\times(u)\neq 0$. The analytical form of the pulse profile is the same as was taken for the plus polarization. In this new scenario, we define normal coordinates ($X,Y$) such that $(x+y)=X$ and $(x-y)=Y$. The geodesic equations satisfied by the $X,Y$ coordinates are similar to the geodesic equations (\ref{eq:x_sqp}) and (\ref{eq:y_sqp}) for the plus polarization. Hence, the solutions for the coordinates $X$ and $Y$ are similar to the ones obtained for the coordinates $x$ and $y$ in the earlier case (Eqs.(\ref{eq:xsolsqp}) and (\ref{eq:ysolsqp})). Thus, the solutions in the old coordinates ($x$ and $y$) are obtained after finding the solutions in the normal coordinates ($X$ and $Y$).
We have, \begin{gather} x(u)=% \begin{cases} \frac{1}{2}(\rho+\sigma) & u\leq -a \vspace{0.2in}\\ \frac{1}{2}\bigg(\rho \cos[(u+a)A_0]+\sigma \cosh[A_0(u+a)]\bigg) & -a\leq u \leq a \vspace{0.2in}\\ \frac{1}{2}\bigg(\rho \cos[2aA_0]+\rho A_0(a-u)\sin[2aA_0]\\+\sigma\cosh[2aA_0]+\sigma A_0(u-a)\sinh[2aA_0]\bigg) & u\geq a \end{cases} \label{eq:x_cross} \end{gather} \vspace{0.2in} \begin{gather} y(u)= \begin{cases} \frac{1}{2}(\rho-\sigma) & u\leq -a \vspace{0.2in}\\ \frac{1}{2}\bigg(\rho \cos[(u+a)A_0]-\sigma \cosh[A_0(u+a)]\bigg) & -a\leq u \leq a \vspace{0.2in}\\ \frac{1}{2}\bigg(\rho \cos[2aA_0]+\rho A_0(a-u)\sin[2aA_0]\\-\sigma\cosh[2aA_0]-\sigma A_0(u-a)\sinh[2aA_0]\bigg) & u\geq a \end{cases} \label{eq:y_cross} \end{gather} \noindent Eqs.(\ref{eq:x_cross}) and (\ref{eq:y_cross}) reveal that the focusing $u$-value, unlike the earlier case, will also depend on the initial positions of the geodesics. This is quantified in the following equations. \begin{gather} u_{x}=a+A_0^{-1}\bigg[\frac{(\rho_2-\rho_1)\cos(2aA_0)+(\sigma_2-\sigma_1)\cosh(2aA_0)}{(\rho_2-\rho_1)\sin(2aA_0)-(\sigma_2-\sigma_1)\sinh(2aA_0)}\bigg] \label{eq:u_focus_x_cross}\\ \vspace*{0.3in} u_{y}=a+A_0^{-1}\bigg[\frac{(\rho_2-\rho_1)\cos(2aA_0)-(\sigma_2-\sigma_1)\cosh(2aA_0)}{(\rho_2-\rho_1)\sin(2aA_0)+(\sigma_2-\sigma_1)\sinh(2aA_0)}\bigg] \label{eq:u_focus_y_cross} \end{gather} \noindent Here, $(\rho_1,\sigma_1)$ and $(\rho_2,\sigma_2)$ are the initial positions of a pair of geodesics in the normal coordinates along the $X$ and $Y$ directions respectively. {Eqs.(\ref{eq:u_focus_x_cross}) and (\ref{eq:u_focus_y_cross}) show that the focusing occurs in Region-III depending on the initial values of the geodesics and the shape of the pulse.} The solution for the $v$ coordinate is provided below.
\begin{gather} v(u)= \begin{cases} v_0+\dfrac{k}{2}u & u\leq -a \vspace{0.2in}\\ v_0+\dfrac{k}{2}u+\dfrac{A_0}{8}\bigg(-\rho^2\sin[2(u+a)A_0]+\sigma^2\sinh[2(u+a)A_0]\bigg) & -a\leq u \leq a \vspace{0.2in}\\ v_0+\dfrac{k}{2}u+\dfrac{A_0}{8}\bigg(-\rho^2 \big(\sin[4aA_0]+2A_0(a-u)\sin^2[2aA_0]\big)\\ +\sigma^2\big(\sinh[4aA_0]+2A_0(u-a)\sinh^2[2aA_0]\big)\bigg) & u\geq a \end{cases} \label{eq:v_cross} \end{gather} \begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{x_cross_mem.eps} \caption{\centering{ \small $x$-direction}} \label{fig:x_cross_geo} \end{subfigure}\hspace{0.3em} \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{y_cross_mem.eps} \caption{\centering{ \small $y$-direction}} \label{fig:y_cross_geo} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{v_cross_mem.eps} \caption{\centering{\small $v$-direction}} \label{fig:v_cross_geo} \end{subfigure} \caption{\small{Displacement memory effect along $x,y,v$ directions for the first (blue) and second (red) geodesics respectively. The plots are done using the following values of the parameters: $A_0=1, a=0.5, k=1, v_0=1, \rho= 2$(blue), 5(red), $\sigma$ = 1(blue), 2(red).}} \label{fig:cross_sqp} \end{figure} \noindent Figs.(\ref{fig:x_cross_geo}) and (\ref{fig:y_cross_geo}) show monotonic displacement memory along the $x$ and $y$ directions respectively. The two trajectories meet at different values of $u$ since these depend on the initial positions of each geodesic. In Fig.(\ref{fig:v_cross_geo}), we find that $v(u)$ is continuous but not differentiable at the edges of the pulse ($u=-a, +a$). The expressions for the velocities are given below.
\begin{gather} \dot{x}(u)= \begin{cases} 0 & u\leq -a \vspace{0.1in}\\ \dfrac{A_0}{2}\bigg(-\rho \sin[(u+a)A_0]+\sigma \sinh[(u+a)A_0]\bigg) & -a\leq u \leq a \vspace{0.1in}\\ \dfrac{A_0}{2}\bigg(-\rho \sin[2aA_0]+\sigma \sinh[2aA_0]\bigg) & u\geq a \end{cases} \label{eq:x_vel_cross} \end{gather} \begin{gather} \dot{y}(u)= \begin{cases} 0 & u\leq -a \vspace{0.1in}\\ -\dfrac{A_0}{2}\bigg(\rho \sin[(u+a)A_0]+\sigma \sinh[(u+a)A_0]\bigg) & -a\leq u \leq a \vspace{0.1in}\\ -\dfrac{A_0}{2}\bigg(\rho \sin[2aA_0]+\sigma \sinh[2aA_0]\bigg) & u\geq a \end{cases} \label{eq:y_vel_cross} \end{gather} \begin{gather} \dot{v}(u)= \begin{cases} \dfrac{k}{2} & u<-a \vspace{0.15in}\\ \dfrac{k}{2}+\dfrac{A_0^2}{4}\bigg(-\rho^2\cos[2(u+a)A_0]+\sigma^2\cosh[2(u+a)A_0]\bigg) & -a<u<a \vspace{0.15in}\\ \dfrac{k}{2}+\dfrac{A_0^2}{4}\Big(\rho^2\sin^2[2aA_0]+\sigma^2\sinh^2[2aA_0]\Big) & u>a \end{cases} \label{eq:v_vel_cross} \end{gather} \noindent Similar to the previous analysis for the plus polarization, we find here that the velocities $\dot{x}(u)$ and $\dot{y}(u)$ are continuous but not differentiable. The strict inequalities in Eq.(\ref{eq:v_vel_cross}) for $\dot{v}(u)$ follow from Eq.(\ref{eq:first_integral_v}).
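\noindent The focusing values in Eqs.(\ref{eq:u_focus_x_cross}) and (\ref{eq:u_focus_y_cross}) can be checked directly. The sketch below (using the figure's initial data) builds the Region-III separations from Eqs.(\ref{eq:x_cross}) and (\ref{eq:y_cross}) and confirms that they vanish at $u_x$ and $u_y$ respectively:

```python
import math

# Check of Eqs.(u_focus_x_cross)/(u_focus_y_cross): the Region-III
# separations x2 - x1 and y2 - y1 built from Eqs.(x_cross)/(y_cross)
# vanish at u_x and u_y.  Initial data below are the figure values.
a, A0 = 0.5, 1.0
rho1, sig1 = 2.0, 1.0
rho2, sig2 = 5.0, 2.0

def x_III(u, rho, sig):
    return 0.5 * (rho*math.cos(2*a*A0) + rho*A0*(a - u)*math.sin(2*a*A0)
                  + sig*math.cosh(2*a*A0) + sig*A0*(u - a)*math.sinh(2*a*A0))

def y_III(u, rho, sig):
    return 0.5 * (rho*math.cos(2*a*A0) + rho*A0*(a - u)*math.sin(2*a*A0)
                  - sig*math.cosh(2*a*A0) - sig*A0*(u - a)*math.sinh(2*a*A0))

dr, ds = rho2 - rho1, sig2 - sig1
u_x = a + (dr*math.cos(2*a*A0) + ds*math.cosh(2*a*A0)) \
        / (A0 * (dr*math.sin(2*a*A0) - ds*math.sinh(2*a*A0)))
u_y = a + (dr*math.cos(2*a*A0) - ds*math.cosh(2*a*A0)) \
        / (A0 * (dr*math.sin(2*a*A0) + ds*math.sinh(2*a*A0)))

res_x = abs(x_III(u_x, rho2, sig2) - x_III(u_x, rho1, sig1))
res_y = abs(y_III(u_y, rho2, sig2) - y_III(u_y, rho1, sig1))
print(u_x, u_y, res_x, res_y)   # residuals ~ 0
```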
\begin{figure}[H] \centering \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{x_cross_vel.eps} \caption{\centering{\small $x$-direction}} \label{fig:x_vel_cross} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{y_cross_vel.eps} \caption{\centering{\small $y$-direction}} \label{fig:y_vel_cross} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{v_cross_vel.eps} \caption{\centering{\small $v$-direction}} \label{fig:v_vel_cross} \end{subfigure} \caption{\centering{{\small Velocity memory effect along $x,y,v$ directions}}} \label{fig:vel_cross_sqp} \end{figure} \noindent Figs.(\ref{fig:x_vel_cross}) and (\ref{fig:y_vel_cross}) show behaviour similar to that noted earlier in Figs.(\ref{fig:x_vel_plus}) and (\ref{fig:y_vel_plus}) for the plus polarization. There is constant shift velocity memory along both these directions. For the $v$-direction shown in Fig.(\ref{fig:v_vel_cross}), we find that the velocities are not continuous at $u=-a,+a$ due to the analytical form of $A_\times(u)$. { \noindent For a ring of particles, we follow the same procedure as was done for the plus polarization. We first perform the calculation in the normal ($X,Y$) coordinates and then revert to the old $(x,y)$ coordinates. Before the pulse has arrived ($u\leq-a$), we have \begin{gather} X=\rho= r\cos\theta \hspace{2cm} Y= \sigma=r\sin\theta \end{gather} \noindent Thus, the locus is a circle: $X^2+Y^2=r^2$. Transforming to the old coordinates, we find that the locus is again a circle, with a different radius: $x^2+y^2=\bigg(\dfrac{r}{\sqrt{2}}\bigg)^2$. \noindent In Region-III ($u\geq a$), we get, in normal coordinates, the locus of an ellipse with axes lengths $2R_1$ and $2R_2$ (cf. Eq.(\ref{eq:ellipse_plus})).
Reverting to the old coordinates, we find \begin{equation} \dfrac{\bigg(\dfrac{x}{\sqrt{2}}+\dfrac{y}{\sqrt{2}}\bigg)^2}{\bigg(\dfrac{R_1}{\sqrt{2}}\bigg)^2}+\dfrac{\bigg(\dfrac{x}{\sqrt{2}}-\dfrac{y}{\sqrt{2}}\bigg)^2}{\bigg(\dfrac{R_2}{\sqrt{2}}\bigg)^2}=1 \label{eq:ellipse_cross} \end{equation} \vspace{0.1in} \noindent The locus given by Eq.(\ref{eq:ellipse_cross}) is again an ellipse, with its center at $\{x,y\}=\{0,0\}$ and axes of lengths $\sqrt{2}R_1$ and $\sqrt{2}R_2$. The axis of length $\sqrt{2}R_1$ is rotated by an angle of $\pi/4$ w.r.t. the $x$-axis (Fig.(\ref{fig:circle_ellipse_cross})). As in the earlier analysis, $R_1=0$ is a possible scenario; the ellipse then degenerates to a straight line along the $R_2$ axis, which is rotated by an angle of $3\pi/4$ w.r.t. the $x$-axis. Calculating the kinematic variables, one finds that the expansion is the same as that for the plus polarization. The shear corresponding to this deformation is now given by $\sigma_\times\neq0, \sigma_+=0$. The functional form of $\sigma_\times$ is similar to the expression given in Eq.(\ref{eq:kv_plus}). This can also be understood intuitively from Eq.(\ref{eq:ellipse_cross}) and Fig.(\ref{fig:circle_ellipse_cross}), as the ellipse has now been rotated w.r.t. the $x$-direction. } \begin{figure}[H] \centering \includegraphics[scale=0.9]{ring_cross.eps} \caption{Change in configuration of a ring of particles from a circle to an ellipse upon the passage of a gravitational wave pulse having only cross polarization.} \label{fig:circle_ellipse_cross} \end{figure} { \subsection{Memory effects for both plus and cross polarizations} \noindent Finally, let us consider the scenario in which both polarizations $A_+$ and $A_\times$ are present and have the same pulse profile. If there is a phase shift of $\pi/2$ between the two polarizations, the plane gravitational wave is said to be circularly polarized \citep{Zhang:2018vel}.
Here, however, we consider $A_+=A_\times\neq0$ (no phase difference), which corresponds to {\em linear polarization} \citep{Schutz:1987}. The resulting geodesic equations along the $x$ and $y$ directions in this scenario were already given in Eqs.(\ref{eq:combined_x}) and (\ref{eq:combined_y}). The geodesic equations in Region-II (where the pulse is present) are given below. \begin{equation} \ddot{x}= -A_0^2 (x+y)\hspace{2cm} \ddot{y} =A_0^2 (y-x) \label{eq:geo_both} \end{equation} \noindent Differentiating both sides of the equations in (\ref{eq:geo_both}) twice, we get \begin{equation} \ddddot{x}=2A_0^4 x \hspace{2cm} \ddddot{y} =2A_0^4 y \label{eq:geo_II_both} \end{equation} \noindent In Region-I ($u\leq-a$), the solution is $x(u)=\epsilon, y(u)=\delta$. In the presence of the pulse (Region-II, $-a\leq u\leq +a$), the geodesic solution is \begin{gather} x(u)= C_1 \cosh[m u A_0] +C_2 \sinh [m u A_0] +C_3 \cos [m u A_0] +C_4 \sin[m u A_0] \label{eq:x_both_pol}\\ y(u) = C_5 \cosh[m u A_0] +C_6 \sinh [m u A_0] +C_7 \cos [m u A_0] +C_8 \sin[m u A_0] \label{eq:y_both_pol} \end{gather} \noindent In Eqs.(\ref{eq:x_both_pol}) and (\ref{eq:y_both_pol}), we have $m=2^{1/4}$. The constants of integration ($C_1-C_8$) are given below.
\begin{equation} \begin{split} & C_1= \frac{[\epsilon ( m^2-1)-\delta ]\cosh [m a A_0 ]}{2 m^2} \hspace{1.5cm} C_2= \frac{[\epsilon ( m^2-1)-\delta ] \sinh [m a A_0 ]}{2 m^2}\\ & C_3=\frac{[\delta +(m^2+1) \epsilon ] \cos [m a A_0 ]}{2 m^2} \hspace{1.7cm} C_4= -\frac{[\delta +(m^2+1) \epsilon ] \sin [m a A_0 ]}{2 m^2} \\ & C_5= \frac{[\delta (m^2+1)-\epsilon] \cosh [m a A_0 ]}{2 m^2} \hspace{1.5cm} C_6= \frac{[\delta (m^2+1)-\epsilon] \sinh [m a A_0 ]}{2 m^2} \\ & C_7= \frac{[\delta (m^2-1)+\epsilon] \cos [m a A_0 ]}{2 m^2} \hspace{1.75cm} C_8= -\frac{[\delta (m^2-1)+\epsilon]\sin [m a A_0 ]}{2 m^2} \label{eq:constant_x,y} \end{split} \end{equation} \noindent The above expressions in Eq.(\ref{eq:constant_x,y}) are determined by comparing the coefficients in Eq.(\ref{eq:geo_both}) and also using the boundary conditions at $u=-a$ given as: $$x(-a)=\epsilon, \dot{x}(-a)=0, y(-a)=\delta,\dot{y}(-a)=0.$$ \noindent Finally, the solution in Region-III is, \begin{equation} x(u)= C_9 u+C_{10} \hspace{3cm} y(u)= C_{11}u +C_{12} \label{eq:geo_soln_both_III} \end{equation} \noindent The constants ($C_9-C_{12}$) are obtained by assuming continuity and differentiability of the geodesic solutions at the boundary $u=+a$ of the pulse. 
\begin{equation} \begin{split} & C_9= \frac{A_0}{2m}\big[(\epsilon ( m^2-1)-\delta ) \sinh(2 m a A_0)-( \epsilon(m^2+1)+ \delta)\sin(2 m a A_0)\big] \\ & C_{10}= \frac{(\epsilon(m^2-1)-\delta)}{2m^2}\big[\cosh (2maA_0)-maA_0\sinh(2maA_0)\big]\\ & \hspace{1cm} +\frac{(\epsilon(m^2+1)+\delta)}{2m^2}\big[\cos (2maA_0)+maA_0\sin(2maA_0)\big]\\ & C_{11}=\frac{A_0}{2m}\big[(\delta(m^2+1)-\epsilon) \sinh(2 m a A_0)-(\delta(m^2-1) +\epsilon )\sin(2 m a A_0)\big] \\ & C_{12}= \frac{(\delta(m^2+1)-\epsilon)}{2m^2}\big[\cosh (2maA_0)-maA_0\sinh(2maA_0)\big]\\ & \hspace{1cm} +\frac{(\delta(m^2-1)+\epsilon)}{2m^2}\big[\cos (2maA_0)+maA_0\sin(2maA_0)\big] \end{split} \end{equation} \noindent We, now, try to understand the nature of memory effect exhibited by the geodesic solutions in this scenario. \begin{figure}[H] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{x_both_geo.eps} \caption{\centering{\small $x$-direction}} \label{fig:x_both} \end{subfigure} \hspace{1cm} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{y_both_geo.eps} \caption{\centering{\small $y$-direction}} \label{fig:y_both} \end{subfigure} \caption{{\small Memory effect along $x,y$ directions having both $A_+$ and $A_\times$ polarizations. The plots are done using the following values of the parameters: $A_0=1, a=0.5, \epsilon= 2$(blue),3(red), $\delta$ = 1(blue), 2(red). The constants of integration are: $C_1= -0.0717032$(blue),$-0.316513$(red), $C_2= -0.038232$(blue), $-0.168764$(red), $C_3 =1.70699$(blue), $2.70692$(red), $C_4= -1.15434$(blue), $-1.83054$(red), $C_5=0.173107$ (blue), $0.76413$(red), $C_6=0.0923002$(blue),$0.407433$(red), $C_7= 0.707059$(blue), $1.12124$(red), $C_8= -0.478144$(blue), $-0.758234$(red), $C_9= -2.38178$(blue), $-4.08101$(red), $C_{10}= 1.84942$(blue), $2.77691$(red), $C_{11}= -0.682551$(blue), $-0.348423$(red), $C_{12}= 0.921929$(blue), $1.8383$(red). 
}} \label{fig:geo_both_sqp} \end{figure} \noindent The plots in Figs.(\ref{fig:x_both}) and (\ref{fig:y_both}) show a monotonically increasing displacement memory effect. One can also obtain the plot for the $v$-coordinate using Eq.(\ref{eq:v_soln}). The velocity plots for the solutions are shown in Figs.(\ref{fig:x_both_vel}) and (\ref{fig:y_both_vel}). \begin{figure}[H] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{vel_x_both_geo.eps} \caption{\centering{\small $x$-direction}} \label{fig:x_both_vel} \end{subfigure} \hspace{1cm} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{vel_y_both_geo.eps} \caption{\centering{\small $y$-direction}} \label{fig:y_both_vel} \end{subfigure} \caption{\centering{{\small Velocity memory effect along $x,y$ directions having both $A_+$ and $A_\times$ polarizations.}}} \label{fig:vel_both_sqp} \end{figure} \noindent Similar to the previous scenarios, we again find constant shift velocity memory effects. In the case of a ring of particles, one can check that both $\sigma_+$ and $\sigma_\times$ are nonzero. This shows that the polarization of the gravitational wave influences the final configuration.} \subsection{Memory effects using geodesic deviation equation} \noindent Let us now analyse memory effects using the geodesic deviation equation which, we recall, is given by \begin{equation} (U^\alpha\nabla_\alpha) (U^\beta\nabla_\beta) \xi^\mu=-R^\mu\,_{\alpha\beta\gamma}U^\alpha \xi^\beta U^\gamma \end{equation} \noindent Here, $\xi^\mu$ is the deviation vector and $U^\mu$ denotes the four-velocity of a set of timelike observers following a geodesic. We consider $U^\mu=(1,0,0,0)$ and $\xi^u=0$. The deviation equations along the $x$ and $y$ directions become ($A_+\neq0, A_\times=0$) \begin{gather} \ddot{\xi}^{x}=-\frac{1}{2}A_+(u)\xi^x \label{eq:xi_x}\\ \ddot{\xi}^{y}=\frac{1}{2}A_+(u)\xi^y.
\label{eq:xi_y} \end{gather} \noindent Eqs.(\ref{eq:xi_x}) and (\ref{eq:xi_y}) are similar to the geodesic equations (\ref{eq:x_sqp}) and (\ref{eq:y_sqp}). Physically, $\xi^x$ and $\xi^y$ correspond to the separation between a pair of geodesics along the $x(u)$ and $y(u)$ directions. Hence, considering two geodesic solutions \{$x_1,y_1$\} and \{$x_2,y_2$\}, the deviation vector is $$\xi^x=x_1-x_2, \hspace{1cm} \xi^y=y_1-y_2.$$ \noindent Taking $x_2=y_2=0$, we find the solution $\xi^x=x_1(=x), \xi^y= y_1 (=y)$. Thus, the deviation analysis reveals results identical to those obtained by solving the geodesic equations. However, the two methods differ for spacetimes with nonzero background curvature. In our past work \citep{Siddhant:2020,Chakraborty:2021}, we have shown that memory effects obtained using geodesics are equivalent to the total deviation (wave and background). Since, in the case of exact plane gravitational waves, the curvature disturbance propagates over a flat spacetime, the background deviation vanishes. Hence, the results are identical for both methods. \\ \noindent In this article, our entire analysis relies on the choice of a pulse profile in the Brinkmann gauge. Hence, one may arrive at the conclusion that the entire analysis is {\em coordinate dependent}, and that gravitational memory, as defined in the present context, is not a gauge-invariant observable, unlike in existing works in astrophysical settings. However, considering detectors (as in LIGO or LISA) that follow timelike geodesic paths, one can show that the final change in separation (memory) in Fermi normal coordinates takes exactly the same form as given in Eqs.(\ref{eq:xi_x}) and (\ref{eq:xi_y}). This is because gravitational memory is imprinted in the {\em timelike Penrose limit} of the gravitational wave spacetime \citep{Shore:2018}.
Since in the Penrose limit one obtains the exact plane wave metric, the equations obeyed by the test detectors are similar to those given in Eqs.(\ref{eq:xi_x}) and (\ref{eq:xi_y}). In the case of other spacetimes containing gravitational waves, such as Kundt wave geometries, one needs to analyse the equations in Fermi normal coordinates as seen by an idealized detector. Such a treatment was carried out in our earlier work \citep{Siddhant:2020, Chakraborty:2021}. \section{Conclusions} \noindent Our main emphasis here has been to present a simple analytical example of gravitational wave memory using a square pulse. To this end, we show how the nature of the pulse profile ({\em i.e.} the chosen functional forms of $A_+(u)$ and/or $A_\times(u)$) influences geodesic evolution in the exact plane gravitational wave spacetime, thereby leading to memory effects. \noindent We have discussed the two standard coordinate systems used while studying exact plane waves. The Brinkmann coordinates are employed in our calculations since they are free of {\em coordinate singularities}. These coordinates have an unconstrained metric function $H(u,x,y)$ denoting the profile and the polarization of the gravitational wave. We choose a {\em square pulse profile} (Fig.(\ref{fig:Squarepulse})), which represents a sandwich wave spacetime geometry. \noindent In this square pulse geometry, we have a region containing plane gravitational waves (Region-II) sandwiched between two flat Minkowski regions (Region-I and Region-III). Setting the initial transverse coordinate velocities of a pair of geodesics to zero in Region-I, we solve the geodesic equations analytically along the $x$ and $y$ directions, assuming that the solutions are continuous and differentiable (at least $C^1$) at the boundaries of the pulse ($u=-a,+a$). The geodesic solution for the coordinate $v(u)$ is obtained once $x(u)$ and $y(u)$ are known (Eq.(\ref{eq:v_soln})).
\noindent The analytical solutions obtained are then illustrated through plots showing the geodesic evolution in all three regions of the spacetime. We analyse three distinct cases of linear polarization in this article, viz.\ plus, cross, and both (plus and cross combined). Our results show {\em monotonically increasing displacement memory} and {\em constant shift velocity memory} along the $x$ and $y$ directions in all these cases. In the $v$-direction, we observe that the solutions are continuous but suffer from a derivative discontinuity at the boundaries of the pulse, owing to the {\em step-function} nature of the pulse profile (Eqs.(\ref{eq:pluspulse}) and (\ref{eq:first_integral_v})). \noindent {In the first case, where $A_+\neq0, A_\times=0$, we find that geodesics meet along the $x$-direction. The focusing $u$-value is independent of the initial position but depends on $a$ and $A_0$. For the second case ($A_+=0, A_\times\neq0$), we get focusing that depends both on the geometry of the pulse and on the initial positions along the $X$ and $Y$ directions (in {\em normal coordinates}). Finally, when $A_+=A_\times\neq0$, we find results similar to those in the case of cross polarization. Such meeting of trajectories signifies the presence of benign caustics. A study of memory effects using geodesic congruences (also known as ${\cal B}$-memory) by the current authors \citep{Chak:2020} in this spacetime has also shown the existence of such caustics. \noindent Further, we have demonstrated how the gravitational wave pulse distorts an initial configuration of a ring of particles. The locus of the configuration changes from a circle to an ellipse after the passage of the pulse. In the case of plus polarization, we find that the ellipse degenerates to a straight line along the $y$-axis. The focusing is dependent on the shape of the pulse ($\xi$). The expansion variable corresponding to this deformation diverges to negative infinity at focusing.
This is in agreement with the result from the geodesic analysis, where trajectories meet along the $x$-direction. In the other case, with only cross polarization, we find that the final configuration is an ellipse rotated by an angle of $\pi/4$ w.r.t. the $x$-axis. Finally, we observe that the nature of the shear is determined by the type of polarization of the gravitational wave ({\em i.e.} $\sigma_+\neq0,\sigma_\times=0$ for $A_+\neq0,A_\times=0$ and vice versa). } \noindent Gravitational memory obtained from the geodesic deviation equation yields results identical to those obtained from the geodesic analysis. This happens because the background over which the plane wave propagates is flat, and the entire contribution to the deviation comes from the radiation itself. \noindent In conclusion, this article shows the crucial role played by explicit solutions of the geodesic equations in developing analytical examples and an understanding of gravitational memory effects in exact plane gravitational wave spacetimes. Similar analytical exercises for other radiative geometries, if possible, will surely be useful and may also reveal new features of memory effects as well as characteristics of the background spacetime. \section*{Acknowledgments} \noindent I. C. is supported by the University Grants Commission, Government of India, through a Senior Research Fellowship with Reference ID: 523711.\\ \noindent {\bf Data Availability Statement:} This manuscript has no associated data or the data will not be deposited. \bibliographystyle{apsrev4-2}
\section{Conclusion} \label{sec:conclusion} In this paper, we have presented a MILP-based framework for integrating security guarantees with end-to-end timeliness requirements for control transactions in resource-constrained CPS. We have shown that the use of physics-based anomaly/intrusion detectors and intermittent message authentication results in strong QoC performance guarantees in the presence of network-based attacks without significant security-related resource overhead. We have also shown how the security-related overhead can be additionally reduced with the use of cumulative authentication policies, which can be implemented such that real-time guarantees for control-related tasks and messages are retained, while QoC in the presence of attacks is maintained within the permissible design-time limits. In addition, we have presented a method to integrate intermittent authentication policies in a near-optimal manner from the QoC standpoint, to opportunistically exploit available processor time and network bandwidth at runtime. As our approach fully supports cumulative authentication policies, it can be used for dynamical systems where solely authenticating a single sensor measurement periodically or intermittently is not sufficient to provide QoC guarantees under attack. Finally, for large-scale systems where a unified scheduling approach for all ECUs and network may be intractable, we have shown how the problem can be decomposed in a platform/implementation-specific manner. We have demonstrated scalability and effectiveness of our approach on both synthetic systems and a realistic automotive case study and shown that security guarantees can be incorporated without violating existing timeliness properties even with limited resource availability. 
\section{Evaluation} \label{sec:evaluation} In this section, we evaluate our approach both on synthetic transaction sets (Section~\ref{sec:generalEvaluation}) and a realistic automotive case-study (Section~\ref{sec:caseStudy}). \begin{figure}[!t] \centering \pgfplotsset{every axis/.append style={line width=.5pt}} \resizebox{.88\textwidth}{!}{\input{Figures/solverRuntime.tex}} \caption{Average Gurobi solver runtime and $95\%$ confidence intervals for synthetic systems with utilizations $0.1-0.9$, constructed in accordance with the guidelines from~\cite{automotiveBenchmarks2015}.} \label{fig:solverRuntime} \end{figure} \begin{table}[!t] \centering \caption{Distribution of tasks and messages among periods in synthetic workloads used for generic evaluation, as well as non-QoC-related workloads used for the case study; the tasks and messages were obtained using the guidelines for automotive benchmarks from~\cite{automotiveBenchmarks2015}.} \begin{tabular}{c|c|c} \hline \begin{tabular}{@{}c@{}}period\\$[ms]$\end{tabular} & \begin{tabular}{@{}c@{}}share of preemptive\\(ECU) workload\end{tabular} & \begin{tabular}{@{}c@{}}share of non-preemptive\\(CAN bus) workload\end{tabular} \\ \hline \rowcolor[gray]{0.8} 5 & 2.5 \% & 2.63 \%\\ 10 & 31.25 \%& 32.89 \%\\ \rowcolor[gray]{0.8} 20 & 31.25 \%& 32.89 \%\\ 50 & 3.75 \%& 3.95 \%\\ \rowcolor[gray]{0.8} 100 & 25 \%& 26.32 \%\\ 200 & 1.25 \%& 1.32 \%\\ \rowcolor[gray]{0.8} 1000 & 5 \%& ---\\ \hline \end{tabular} \label{tab:workloadDistribution} \end{table} \subsection{Evaluation on Synthetic Systems} \label{sec:generalEvaluation} To evaluate general performance of our approach, we generate over $5000$ synthetic systems, each featuring 10 to 50 control transactions, following the guidelines for design of automotive benchmarks from~\cite{automotiveBenchmarks2015}.
Since the guidelines focus on defining ECU-bound workloads, we redistribute the angle-synchronous workload\footnote{Angle-synchronous tasks have periods that depend on the engine speed -- i.e., the crankshaft angle determines job release.} and the workloads with periods of $1~ms$ and $2~ms$ evenly among the workloads with other periods. This is done for synthetic message sets because most practical network workloads do not include messages with such short periods. Similar benchmark modifications were used in~\cite{ZengCANFD_2017}, and the resulting distribution among periods is summarized in Table~\ref{tab:workloadDistribution}. As in~\cite{automotiveBenchmarks2015}, we scale execution times to assess performance under different utilization levels. Message transmission times are computed for a full-size CAN bus payload of $64~bits$, with the transmission rate varied to obtain different network utilization levels. Finally, we randomly assign extended frame distances and cumulative authentication block lengths in the ranges $l_i\in[1,5]$ and $f_i\in[1,3]$, respectively, while assuming that $25\%-50\%$ of tasks/messages are QoC-related (the remaining workload consists of standard real-time tasks/messages). We evaluate scalability of our framework by applying the decomposed MILP approach to all synthetic systems to complete the generated transaction sets. \figref{solverRuntime} summarizes Gurobi solver~\cite{gurobi} runtime as a function of the number of tasks/messages and task/message set utilization,\footnote{All computations are performed on a Sandy Bridge EP-based workstation with dual 3.3~GHz Intel Xeon CPUs and 64GB of~memory.} showing the applicability of our approach. Larger task sets typically cause longer solver runtime due to a generally larger parameter space. The relatively large variability can be attributed to the random extended frame distances, which determine the hyperperiod and harmonicity of extended frame executions.
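The workload-generation procedure described above can be sketched as follows (an illustrative reconstruction of ours, not the authors' exact generator; the helper name, the provisional-WCET draw, and the utilization-scaling step are assumptions):

```python
import random

# Period distribution for ECU workloads (shares of preemptive workload
# from the workload-distribution table).
PERIODS_MS = [5, 10, 20, 50, 100, 200, 1000]
SHARES     = [0.025, 0.3125, 0.3125, 0.0375, 0.25, 0.0125, 0.05]

def synthetic_task_set(n_tasks, target_util, seed=0):
    """Draw n_tasks periods from the benchmark distribution, assign
    provisional execution times, then scale them so that the total
    utilization hits target_util (mimicking the scaling step above)."""
    rng = random.Random(seed)
    periods = rng.choices(PERIODS_MS, weights=SHARES, k=n_tasks)
    wcets = [rng.uniform(0.01, 0.2) * p for p in periods]  # provisional WCETs
    util = sum(c / p for c, p in zip(wcets, periods))
    scale = target_util / util
    return [(c * scale, p) for c, p in zip(wcets, periods)]

tasks = synthetic_task_set(40, 0.7)
assert abs(sum(c / p for c, p in tasks) - 0.7) < 1e-9
```

A message-set generator would follow the same pattern with the non-preemptive shares and transmission times derived from the chosen CAN rate.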
Also, solver runtime is generally lower for unschedulable transaction sets regardless of the number of tasks since the solver is typically able to quickly prune large portions of the variable space which expedites conclusions about unschedulability -- average runtime in this case is $55~s$. \subsection{Case Study} \label{sec:caseStudy} \begin{figure*}[!t] \begin{center} \subfigure [Adaptive cruise control] { \pgfplotsset{every axis/.append style={line width=0.7pt}} \resizebox{0.3\textwidth}{!}{\input{Figures/accCurve.tex}} \label{fig:AdaptiveCruise} } \subfigure [Driveline management] { \pgfplotsset{every axis/.append style={line width=0.7pt}} \resizebox{0.3\textwidth}{!}{\input{Figures/dmCurve.tex}} \label{fig:DrivelineManag} } \subfigure [Lane keeping] { \pgfplotsset{every axis/.append style={line width=0.7pt}} \resizebox{0.29\textwidth}{!}{\input{Figures/lkCurve.tex}} \label{fig:LaneTrack} } \end{center} \captionsetup{aboveskip=-1pt}\caption{QoC degradation curves for three considered systems --- maximal attack-induced state estimation error is bounded given a specific integrity enforcement policy determined by inter-enforcement distance~$l_i$ and authentication block length~$f_i$. 
Note that the adaptive cruise control system requires at least two consecutive measurements to be authenticated ($f_{ACC}^{sens}\geq2$).} \label{fig:AllQocCurves} \end{figure*} \begin{figure*}[!t] \begin{center} \centering \pgfplotsset{every axis/.append style={line width=0.5pt}} \resizebox{\textwidth}{!}{\input{Figures/ACCsim.tex}} \caption{Adaptive cruise control QoC under stealthy attack (starts at $t=20~s$) without integrity enforcements (left), with periodic cumulative authentication with $l_{ACC}=5$ (center), and with intermittent cumulative authentication with $\hat{l}_{ACC}=2.5$ (right).} \label{fig:ACCsim} \end{center} \end{figure*} \begin{figure*}[!t] \begin{center} \centering \pgfplotsset{every axis/.append style={line width=0.5pt}} \resizebox{\textwidth}{!}{\input{Figures/LKsim.tex}} \captionsetup{aboveskip=-0.2pt}\caption{Lane keeping QoC under stealthy attack (starts at $t=20~s$) without integrity enforcements (left), with periodic cumulative authentication with $l_{LK}=10$ (center), and with intermittent cumulative authentication with $\hat{l}_{LK}=2.86$ (right).} \label{fig:LKsim} \end{center} \end{figure*} We consider a realistic automotive case study where controllers for adaptive cruise control, lateral control for lane tracking, and driveline management are mapped onto 3 out of 8 ECUs, with all ECUs also executing non-QoC-related workload as in~Table~\ref{tab:workloadDistribution}. To model the controlled physical plants, we adopt physical system models from~\cite{ACCmodel},~\cite{rajamani2011vehicle},~\cite{drivelineThesis}. The control tasks receive sensor measurements from the eight ECUs, which communicate via a shared CAN bus. The network load consists of $70$ full-sized CAN frames with the period distribution specified in~Table~\ref{tab:workloadDistribution}, and $8$ full-sized CAN frames carrying sensor measurements with period $p_{ACC}=p_{LK}=p_{DM}=20~ms$.
Since $64$-bit MACs are used to sign sensor measurements to ensure a low probability of forgery, and the standard CAN payload is only $64~bits$, an entire additional frame needs to be transmitted for each authenticated measurement. With the standard $1~Mbps$ CAN rate, the system is not schedulable when every sensor measurement is signed, regardless of ECU utilization. \figref{AllQocCurves} shows QoC degradation curves for these systems, based on which we can map admissible levels of attack-induced state estimation error into computation and bandwidth requirements. Specifically, for adaptive cruise control we allow an attack-induced state estimation error of no more than $0.4~m$ in the distance to the preceding vehicle, and no more than $0.1~\frac{m}{s}$ in speed. Similarly, maximum attack-induced state estimation errors for the lateral position error, its rate of change, the yaw angle error, and its rate of change are set to $0.4~m$, $0.1~\frac{m}{s}$, $0.01~rad$, and $0.01~\frac{rad}{s}$, respectively. Finally, attack-induced state estimation errors for the drive-shaft torsion and its rate of change are limited to $0.02~rad$ and $1~\frac{rad}{s}$, respectively. Thus, the inter-enforcement distances and authentication block lengths resulting from these requirements are $l_{ACC}=5, f_{ACC}=3$; $l_{LK}=10, f_{LK}=2$; $l_{DM}=10, f_{DM}=1$. Under these conditions, in the first step of our decomposed MILP approach, the Gurobi solver takes an average of $2716~s$ to return minimal deadlines for the considered message set and assign initial authentication start times such that timeliness can be guaranteed for network messages. In the second step, for a MILP that encompasses conditions for the three control ECUs, conditioned by the previously obtained message deadlines, Gurobi takes an average of $937~s$ to complete the secure transaction set with schedulable sensing task offsets and control task deadlines.
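To make the bandwidth arithmetic concrete, the following back-of-the-envelope sketch (our illustration, not the paper's analysis; the 135-bit worst-case standard frame length, including stuffing, is an assumed figure) estimates the per-stream bus load of the periodic cumulative policies above:

```python
# Back-of-the-envelope CAN bandwidth sketch (illustrative assumptions).
FRAME_BITS = 135                 # assumed worst-case standard frame, 8-byte payload
RATE_BPS = 1_000_000             # standard 1 Mbps CAN rate
C = FRAME_BITS / RATE_BPS        # transmission time of one frame [s]

def sensor_stream_util(period_s, l):
    """Bus utilization of one sensor stream: one data frame per period plus
    one extra 64-bit-MAC frame every l-th period (periodic cumulative policy)."""
    return C / period_s + C / (l * period_s)

# The three case-study streams all have a 20 ms period.
utils = {name: sensor_stream_util(0.020, l)
         for name, l in [("ACC", 5), ("LK", 10), ("DM", 10)]}

# Intermittent authentication costs only a fraction of signing every sample,
# which would double each stream's frame count:
full = sensor_stream_util(0.020, 1)
assert all(u < full for u in utils.values())
assert abs(full - 2 * C / 0.020) < 1e-12
```

With these assumed frame sizes, the MAC overhead per stream is $1/l$ of the base load, which is consistent with the qualitative point above that signing every measurement is far more expensive than the periodic cumulative policies.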
\figref{ACCsim} and \figref{LKsim} show the resulting trajectories for the adaptive cruise control and lane keeping systems, when stealthy attacks start at $t=20~s$. \figref{ACCsim}~and \figref{LKsim}(left) show the effects of the attack without authentication; both longitudinal and lateral control of the vehicle are entirely taken over by the stealthy attacker. \figref{ACCsim}~and~\figref{LKsim}(center) show~how the attack impact is contained within permissible limits when the integrity of sensor data is enforced with the aforementioned periodic cumulative policies, resulting in a network utilization of $U_{net}=0.68$. To demonstrate the benefits of using opportunistic scheduling to further improve the overall QoC under attack, we simulate additional sporadic network traffic, as well as opportunistically add MACs (as described in Section~\ref{sec:opportunistic}) to sensor measurements that are not authenticated by the periodic cumulative authentications. Sporadic messages are assumed to arrive with a minimum inter-arrival time of $10~ms$, utilizing up to $5\%$ of the network bandwidth. The resulting mean inter-authentication distances for the three systems under consideration are $\hat{l}_{ACC}=2.5$, $\hat{l}_{LK}=2.86$, and $\hat{l}_{DM}=2.31$, respectively. \figref{ACCsim}~and~\figref{LKsim}(right) show significantly improved QoC levels under attack, while the shared network utilization increases on average by $10\%$ due to the opportunistic authentications. The final network utilization is $U_{net}=0.84$. ECU utilization increases on average by only $1.5\%$ to support signing and verification of the additional~MACs, illustrating the applicability of the presented~framework. \section{Introduction} \label{sec:intro} In this work, we focus on securing resource-constrained cyber-physical systems (CPS) from network-based false-data injection attacks over low-level networks used for real-time communication of control-related messages.
With these \emph{Man-in-the-Middle} (MitM) attacks, attackers can inject maliciously crafted data into the communication between sensors and controllers, forcing a controlled physical plant into a potentially unsafe state; this is achieved either directly (by injecting false control commands) or through actions of the controller (if sensor measurements are falsified). Several such attacks have been reported recently (e.g.,~\cite{car_security2010, car_security2011, stuxnet2011,stuxnet2}); for example, the susceptibility of modern automotive systems to this type of attack was illustrated in, e.g.,~\cite{wired_jeep15, car_security2011}. These attacks are especially threatening as they enable a \emph{remote} attacker to compromise safety-critical control features of a system, by taking over some of the components with access to a low-level safety-critical network used for control, before using them to transmit malicious control-related messages. Protection against this type of attack is commonly based on data integrity enforcement using message authentication. Standard methods for ensuring authenticity of sensor data require signing of message authentication codes (MACs) on the sensor electronic control units (ECUs), transmitting sensor measurements along with the MACs, and verification of the MACs at the controller ECUs. However, due to the security-related overhead, this approach may not be applicable to resource-constrained embedded platforms, which are especially dominant in legacy systems. For example, our experiments on a $96~MHz$ ARM Cortex-M3-based ECU show that executing a single-input-single-output PID controller update takes approximately $5~\mu s$, while signing a $128$~bit MAC over a single measurement requires around~$100~\mu s$. Thus, resource constraints may make~it~infeasible to provide continuous protection of sensing data by authenticating every transmitted~sensor measurement.
Consequently, in this work we seek to answer the question of exactly how much security enforcement is sufficient, and how we can exploit available system resources to improve the overall security guarantees, in terms of Quality-of-Control (QoC) in the presence~of~attack. Due to the recently reported security incidents, the problem of securing CPS has drawn significant attention, with research efforts focused on the impact of false-data injection attacks on system performance (mainly QoC), as well as the design of attack-detectors and attack-resilient controllers using a physical model of the system (e.g.,~\cite{pasqualetti2013attack,ncs_attack_models,fawzi_tac14,pajic_tcns17,pajic_iccps14,miao_tcns17,shoukry2018smt}). One of the main results is that even when physics-based intrusion detectors are used, by changing messages received at the controller from a subset of system sensors, an attacker can launch stealthy (i.e., non-detectable) attacks that force the plant into any undesired state through the actions of the controller~\cite{mo2010false,kwon2014stealthy,smith_decoupled_attack11}. On the other hand, we have recently shown how physical properties of the controlled system under consideration can be exploited to relax integrity requirements for secure control~\cite{jovanov_cdc17,jovanov_arxiv17,jovanov_cdc18}. Furthermore, by computing reachable regions of the state estimation error under stealthy attacks, control performance under attack can be evaluated for intermittent integrity enforcement policies -- i.e., policies that only intermittently employ message authentication. In~\cite{lesi_tecs17}, we condense these reachable regions into \emph{QoC degradation curves} that quantify the interplay between the computational (and bandwidth) requirements imposed by security services and the QoC guarantees under attack.
However, the use of such policies introduces new challenges for ensuring timeliness of the deployed control functionalities, as the standard periodic task and message models under such relaxed integrity enforcement policies feature significant execution and transmission time variations. In~\cite{lesi_tecs17}, we only focus on the computational aspect of the problem and show how to guarantee timeliness for security-aware control tasks, while~\cite{lesi_rtss17} presents our initial attempt to ensure timeliness of communication messages. Yet, both works consider decoupled scenarios where either ECU processing time is the only concern (under the assumption that the network is not congested), or network bandwidth is the only limitation for incorporating security (while ECUs are not considered). Hence, the problem of providing integrated QoC and security guarantees while ensuring timeliness in scenarios where both ECU processing time and network bandwidth are limited remains open. Moreover,~\cite{jovanov_arxiv17} shows that block-authentication of sensor measurements has to be used for general types of dynamics of controlled physical processes, which results in workloads that cannot be modeled within the existing framework from~\cite{lesi_tecs17}. Consequently, in this work we introduce a design-time methodology, illustrated in \figref{methodology}, that ensures that existing control functionalities will not be negatively affected by adding message authentication to enforce data integrity. Specifically, the methodology provides sensing-to-actuation timeliness guarantees for security-aware control that employs intermittent message authentication in order to guarantee that a desired QoC level is maintained even under attack. To capture the cases where block-authentication is needed, while further reducing the bandwidth requirements for the QoC guarantees under attack, we propose the use of intermittent cumulative authentication policies.
We specifically address the modeling of, and capture schedulability conditions for, security-aware sensing tasks (which are preemptive) that perform cumulative authentication, and security-aware messages (which are non-preemptive) that support arbitrary offsets; we show in Section~\ref{sec:schedulabilityAnalysis} that existing conditions do not support general offsets, which limits the use of our preliminary approach from~\cite{lesi_rtss17}. To further utilize resources available at runtime, we show how, by opportunistically authenticating additional sensor measurements when computation time/bandwidth is available, we can further enhance QoC guarantees under attack. Finally, we show the applicability of our approach on both synthetic systems designed according to established guidelines for automotive benchmarks and an automotive case study. \begin{figure}[!t] \centering \includegraphics[width=0.64\linewidth]{methodology.pdf} \vspace{-4pt} \caption{Design-time methodology to integrate security in resource-constrained CPS; the use of cumulative intermittent message authentication policies enables tradeoffs between (i)~required system resources (i.e., to ensure that all control functionalities still perform within specifications even after security is `added' to the system), and (ii) Quality-of-Control (QoC) guarantees in the presence of network-based false-data injection attacks on sensor measurements delivered to controllers.} \vspace{-2pt} \label{fig:methodology} \end{figure} This paper is organized as follows. In Section~\ref{sec:motivation}, we present the system and attack models, before introducing intermittent authentication policies for secure control of CPS (Section~\ref{sec:intermittent}), and formalizing the end-to-end transaction modeling for secure control (Section~\ref{sec:modeling}).
Schedulability analysis pertaining to the models is presented in Section~\ref{sec:schedulabilityAnalysis}, while Section~\ref{sec:MILP} transforms the corresponding parameter synthesis problem into a mixed integer linear program (MILP). Opportunistic use of remaining resources to improve the overall QoC guarantees in the presence of attacks is presented in Section~\ref{sec:opportunistic}, before evaluating our approach in Section~\ref{sec:evaluation}. Finally, Section~\ref{sec:relatedWork} presents related work before concluding remarks are provided in Section~\ref{sec:conclusion}. \section{Synthesis of Schedulable Secure Control Transactions} \label{sec:MILP} The schedulability conditions from Section~\ref{sec:schedulabilityAnalysis}, along with the task-precedence constraints from Section~\ref{sec:CT}, can be used to formulate a parameter synthesis problem that produces a feasible set of task deadlines, offsets, and initial authentication offsets. However, the non-linearity of the functions counting the number of task invocations and message transmissions~\eqref{eq:etaNormal} and~\eqref{eq:etaExtended} precludes efficient search of the parameter space. Thus, in this section we map the demand-based schedulability conditions into a set of linear constraints, and formulate a mixed-integer linear program~(MILP) to synthesize task and message parameters that result in a schedulable set of secure control transactions. Since the schedulability conditions for preemptive and non-preemptive EDF differ only in the constant term $c_{max}$ on the right side of the demand constraints from Theorems~\ref{thm:feasibilityCond} and~\ref{thm:NPRfeasibilityCond}, in this section we may omit superscripts $sens$, $ctrl$, and $net$ for specific variables, where no confusion about the parameters~arises. Consider the workload of a sensing task $T_i^{sens}$ that also incorporates periodic cumulative authentication.
Let binary variables $a_{k,j,m}^i$ for $T_i^{sens}$ indicate that the absolute deadline of the $m^\text{th}$ extended frame of the $j^\text{th}$ block of cumulative authentications is at or earlier than a time-testing instant $t_k$. This can be specified~as \begin{equation}\label{eq:aVariables} a_{k,j,m}^i = 1 \Leftrightarrow t_k\geq (s_i+m)p_i+\phi_i+d_i+(j-1)l_ip_i, \end{equation} \begin{equation}\label{eq:aVariablesIndices} \begin{aligned} 1\leq i\leq N,\qquad &1\leq k \leq |TS_{arr}|+|TS_{dead}|,\qquad 1\leq j\leq \floor*{\frac{t^{max}}{l_ip_i}},\qquad &0\leq m\leq f_i-1, \end{aligned} \end{equation} where $TS_{arr}$ and $TS_{dead}$ are defined in~\eqref{eq:timeTestingSets}. Note that control tasks $T_i^{ctrl}$ and messages $M_i^{net}$ are supported by simply removing the authentication iterator $m$ (since $f_i^{ctrl}=f_i^{net}=1$). A similar relation can be established for regular frames, where binary variables $b_{k,h}^i$ indicate that the $h^\text{th}$ regular frame of the $i^\text{th}$ sensing task is due by the $k^\text{th}$ time testing instant $t_k$. This can be captured~by \begin{equation}\label{eq:bVariables} b_{k,h}^i = 1 \Leftrightarrow t_k\geq \phi_i+d_i+(h-1)p_i, \quad 1\leq i\leq N, 1\leq k \leq |TS_{arr}|+|TS_{dead}|,1\leq h\leq \floor*{\frac{t^{max}}{p_i}}. \end{equation} Identical constraints can be written for control tasks $T_i^{ctrl}$ and messages $M_i^{net}$. These variables enable us to concisely specify the number of respective jobs from \eqref{eq:etaNormal}~and~\eqref{eq:etaExtended}~as \begin{align} \eta_i^{r\&e}(t_{k_1},t_{k_2}) &= \sum_{h=1}^{\floor*{\frac{t^{max}}{p_i}}}\left(b_{k_2,h}^i-b_{k_1,h}^i\right), \label{eq:n1}\\ \eta_i^{ext}(t_{k_1},t_{k_2}) &= \sum_{m=0}^{f_i-1}\sum_{j=1}^{\floor*{\frac{t^{max}}{l_ip_i}}}\left(a_{k_2,j,m}^i-a_{k_1,j,m}^i\right).
\label{eq:n2} \end{align} Hence, a task's processor demand can be cast as a linear function of the variables $a_{k,j,m}^i$ and~$b_{k,h}^i$ when~\eqref{eq:n1},~\eqref{eq:n2} are instantiated in~\eqref{eq:dfK1K2}. Note that since the network and the ECUs may not have the same hyperperiod, $t^{max}$ should be computed independently for each ECU. Moreover, since task offsets and deadlines are variables, the time testing instants are also variables, as defined in~\eqref{eq:timeTestingSets}. Therefore, we need to ensure that we only consider the schedulability constraints from Theorems~\ref{thm:feasibilityCond} and~\ref{thm:NPRfeasibilityCond} for $k_1$ and $k_2$ such that $t_{k_1} < t_{k_2}$. This is achieved with a set of constraint-enabling variables $e_{k_1,k_2}$ such that \begin{equation}\label{eq:demandEnableRelation} e_{k_1,k_2}=1 \Rightarrow \sum\limits_{i=1}^{N} df_i(t_{k_1},t_{k_2}) \leq t_{k_2}-t_{k_1}, \end{equation} for preemptive EDF, where $e_{k_1,k_2}$ relates to the time testing instants as \begin{equation}\label{eq:demandEnableVariablesRelation} e_{k_1,k_2}=1 \Leftrightarrow t_{k_2} > t_{k_1}. \end{equation} In addition, the right side of~\eqref{eq:demandEnableRelation} should be decremented by $c_{max}^{net}$ when considering message scheduling, due to the non-preemptivity of message scheduling (\thmref{NPRfeasibilityCond}). Finally, to impose a bounded end-to-end delay, constraints that relate deadlines of tasks in a transaction can be specified as \begin{equation}\label{eq:boundedDeadlines} d_i^{sens}+d_i^{net}+d_i^{ctrl}=p_i, \qquad 1 \leq i\leq N. \end{equation} \begin{remark}[Handling of Indicator Constraints] While the processor demand conditions can be directly implemented within an MILP, constraints~\eqref{eq:aVariables},~\eqref{eq:bVariables}, \eqref{eq:demandEnableRelation}, and \eqref{eq:demandEnableVariablesRelation} cannot be directly specified as such in some MILP solvers. Those constraints can be linearized by using the ``Big M'' method for handling indicator constraints~\cite{bigM}. In the case of~\eqref{eq:aVariables} and~\eqref{eq:bVariables}, we can write % \begin{equation}\label{eq:aConstraintsBigM1} -t_k+\phi_i+d_i+Ma_{k,j,m}^i \leq M-[ s_i+m+(j-1)l_i ]p_i, \end{equation} \begin{equation}\label{eq:aConstraintsBigM2} t_k-\phi_i-d_i-Ma_{k,j,m}^i < [ s_i+m+(j-1)l_i ]p_i, \end{equation} \begin{equation}\label{eq:bConstraintsBigM1} -t_k+\phi_i+d_i+Mb_{k,h}^i \leq M-(h-1)p_i, \end{equation} \begin{equation}\label{eq:bConstraintsBigM2} t_k-\phi_i-d_i-Mb_{k,h}^i < (h-1)p_i, \end{equation} % where $M$ is a large constant. Similarly,~\eqref{eq:demandEnableRelation} and~\eqref{eq:demandEnableVariablesRelation} can be cast as linear constraints by enforcing % \begin{equation}\label{eq:dfConstraintsBigM} M(e_{k_1,k_2}-1)+\sum\limits_{i=1}^{N} df_i(t_{k_1},t_{k_2}) \leq t_{k_2}-t_{k_1}, \end{equation} \begin{equation}\label{eq:eConstraintsBigM1} t_{k_2}-t_{k_1} > M(e_{k_1,k_2}-1), \end{equation} \begin{equation}\label{eq:eConstraintsBigM2} t_{k_2}-t_{k_1} < Me_{k_1,k_2}. \end{equation} \end{remark} \begin{remark}[Handling of Strict Inequalities] Most MILP solvers do not allow specification of strict inequalities. Constraints~\eqref{eq:aConstraintsBigM2} and~\eqref{eq:bConstraintsBigM2} can be converted into non-strict inequalities by adding a small $\epsilon>0$ to every $t_k$. Furthermore, \eqref{eq:eConstraintsBigM1}~can be directly converted into a non-strict inequality, while~\eqref{eq:eConstraintsBigM2} requires addition of a small $\epsilon>0$ on the left-hand side.
Note that this may allow the time testing instants to meet during the solving process, i.e., $t_{k_1}=t_{k_2}$ is possible for some pair $(k_1,k_2)$. This does not affect correctness of the formulation, but can introduce redundant trivial demand constraints (i.e., over the interval of zero length). However, it does create an undesirable corner case. Despite the lack of an objective (recall that we are only interested in finding a feasible solution if one exists), solvers tend to minimize variables, and may thus choose to zero all deadlines. This corner case is formally allowed if a time testing instant corresponding to a deadline of a task can coincide with its arrival. Since the demand constraint is then satisfied (the processor demand over the interval of length zero is equal to the supply over the same interval), this modeling anomaly requires lower-bounding the deadline of each task; simply requiring $d_i \geq 1$ for all $i$ suffices. Additionally, introducing $\epsilon$ to handle strict inequalities may affect the choice of value for $M$. Specifically, the values for $M$ and $\epsilon$ must be selected such that the ``big-M'' encoding is not compromised by the finite-precision implementation of the employed MILP solver --- i.e., such that no constraint becomes active merely due to the finite value of $M$. Thus, we set these values such that it holds~that \begin{equation*}\label{mEpsilonCond} M\delta_{int}+\delta_{constr} < \epsilon < 1-M\delta_{int}-\delta_{constr}, \end{equation*} where $\delta_{int}$ is the integer feasibility tolerance and $\delta_{constr}$ is the constraint satisfiability tolerance of the employed MILP solver. Moreover, $M$ must be sufficiently large to ensure that constraint satisfiability is not compromised for large instants $t_k$ from the set $TS$.
\end{remark} The aforementioned constraints form a MILP formulation whose variables are the deadlines ($d_i^{sens}, d_i^{net}, d_i^{ctrl}$), offsets ($\phi_i^{sens}, \phi_i^{net}, \phi_i^{ctrl}$) and initial authentication offsets ($s_i^{sens}, s_i^{net}, s_i^{ctrl}$), as well as the introduced binary variables, but without an objective specification. If the feasible set of the problem is non-empty, our transaction set becomes complete and is guaranteed schedulable. This approach, however, may be impractical for realistic scenarios. For example, a unified MILP for the case study presented in Section~\ref{sec:caseStudy} features over 10 million variables and 100 million constraints. Therefore, in the rest of the section, we discuss methods for complexity reduction that we apply towards tackling realistic~problems. \subsection{Complexity Reduction} \label{sec:complexity} To reduce the number of used variables and constraints, we first consider the time testing sets in~\eqref{eq:timeTestingSets} for preemptive EDF. For a large number of arrival-deadline pairs $(t_{k_1},t_{k_2})$, defining a variable indicating their ordering as in~\eqref{eq:demandEnableVariablesRelation} is not necessary, and thus the corresponding demand constraints can be omitted. For example, the arrival time of any job can never exceed the deadline of that or any subsequent invocation of the same task. Also, the deadline of a specific task invocation always occurs after the arrival of that or any earlier task invocation. Formally, $e_{k_1,k_2}=0$ for all $i$ and all $k_2 \geq k_1$ such that $\phi_i+k_1p_i \geq d_i+k_2p_i$, and $e_{k_1,k_2}=1$ for all $i$ and all $k_2 \geq k_1$ such that $\phi_i+k_1p_i < d_i+k_2p_i$; for such pairs, constraints~\eqref{eq:dfConstraintsBigM}--\eqref{eq:eConstraintsBigM2} can be omitted. Similar relations can be drawn pairwise for every two tasks, given specific temporal parameters.
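As an illustration of the ``big-M'' indicator encoding used above, the following sketch (plain Python with hypothetical toy parameter values, rather than actual solver code) checks that the linearized inequality pair for a variable $a_{k,j,m}^i$ admits exactly the binary assignment required by the indicator definition~\eqref{eq:aVariables}:

```python
# Sketch: checking the "big-M" linearization of the indicator variable
# a_{k,j,m} = 1 <=> t_k >= absolute deadline of the m-th extended frame
# of the j-th authentication block. Hypothetical toy values; not an
# actual MILP model.

M = 10**6  # big-M constant; must dominate all t_k in the testing sets


def frame_deadline(s, m, p, phi, d, j, l):
    # Absolute deadline used in the indicator definition of a_{k,j,m}.
    return (s + m) * p + phi + d + (j - 1) * l * p


def big_m_pair_holds(a, t_k, dl):
    # The linearized pair: M(a-1) <= t_k - dl  and  t_k - dl < M*a,
    # which together force a = 1 exactly when t_k >= dl.
    return M * (a - 1) <= t_k - dl and t_k - dl < M * a


# Hypothetical parameters: s=1, m=0, p=20, phi=3, d=8, j=2, l=5 -> dl = 131
dl = frame_deadline(s=1, m=0, p=20, phi=3, d=8, j=2, l=5)
```

For any $t_k$ strictly above or below the deadline, only one of the two assignments $a\in\{0,1\}$ satisfies both inequalities, which is exactly the behavior a MILP solver enforces for the binary variable.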
This pruning greatly reduces the number of used variables and constraints, especially for large~hyperperiods. A similar reasoning can be applied to the variables $a_{k,j,m}^i$ that control the extended-frame timing. Given specific temporal parameters of tasks, it is not necessary to encode the appearance of the $j^\text{th}$ authentication block (i.e., the $j^\text{th}$ sequence of $f_i$ consecutive extended frames) for all instants in the time testing set, as suggested by the general definitions given in~\eqref{eq:aVariables}--\eqref{eq:aVariablesIndices}. This is true since we only seek to find a schedulable solution, which implies that the $j^\text{th}$ authentication block must occur during the interval $\left[(j-1)l_ip_i,jl_ip_i\right]$, outside of which the value of $a_{k,j,m}^i$ is fixed and fully determined by the tasks' temporal parameters. Formally, $$(\forall i,j,k,m) \left(t_k > j l_i p_i \Rightarrow a_{k,j,m}^{i}=1 \text{~and~} t_k < (j-1) l_i p_i \Rightarrow a_{k,j,m}^{i}=0\right).$$ The same holds for regular frames that must be scheduled within their respective periods: $$(\forall i,k,h) \left(t_k > h p_i \Rightarrow b_{k,h}^{i}=1 \text{~and~} t_k < (h-1) p_i \Rightarrow b_{k,h}^{i}=0\right),$$ and thus the majority of constraints~\eqref{eq:bVariables} and the corresponding variables $b_{k,h}^i$ that control regular-frame timing can be eliminated. By enforcing these rules during problem encoding, the number of variables and constraints required to encode a realistic problem is vastly reduced. \subsection{MILP Decomposition} \label{sec:decomposition} Even with the discussed reductions in the number of variables and constraints, the presented MILPs may remain relatively complex for very large transaction sets. For these scenarios, we propose a decomposition approach that formulates the synthesis of schedulable secure control transactions as a sequence of MILPs, rather than a single program, since the schedulability tests from Section~\ref{sec:schedulabilityAnalysis} can be decoupled between the ECUs and the network.
However, as we consider a parameter synthesis problem, rather than just a schedulability test, this decomposition is nontrivial -- schedulable task parameters obtained for one part of the system do not guarantee feasibility of the remaining parts. In fact, the decomposition approach directly depends on the system architecture and its~implementation. \subsubsection{Synchronous Sensing Platform Model} \label{subsec:syncSens} The most commonly adopted platform model in offset-based scheduling of control transactions (as in~\cite{relaxingPeriodicityCAN}) assumes that all sensing tasks are initially released at the same time (i.e.,~$\forall i, \phi_i^{sens}=0$ and $t_0=n\cdot p_i$ for some $n$ in \figref{endToEndTiming}). In this case, in the first stage, we can run the ECUs' parameter synthesis MILP. Here, the objective should ensure that sensing tasks are scheduled as early as possible (minimized deadlines), while the opposite is desired for receiving tasks, i.e., they should execute as late as possible within their respective periods (maximized offsets), in order to impose the least conservative timing constraints on network messages (\figref{endToEndTiming}). Trying to minimize all $d_i^{sens}$ and maximize all $\phi_i^{ctrl}$ effectively results in a multivariate optimization that we solve by associating weights with each of the objectives (i.e., using a blended objective). In the second stage, the network parameter synthesis MILP is formulated as a feasibility problem (without an objective), searching for message offsets and deadlines that yield a feasible transaction set. Alternatively, in the first stage, we can run the network parameter synthesis MILP with the objective to maximize message offsets $\phi_i^{net}$ (which `leaves' time for transmitting tasks) and minimize deadlines $d_i^{net}$ (which `leaves' time for receiving tasks).
However, these objectives are conflicting, and since they have to be specified as a single blended objective function, heuristics can be used to adjust the weights of individual message offsets and deadlines according to the execution times of the sensing and control tasks (i.e.,~if the sensing task's WCET is longer than the control task's WCET, the message should be delayed more towards the end of the period). In the second stage, the ECUs' parameter synthesis MILP is formulated as a feasibility problem. However, there exist scenarios where this model is not the most accurate one; for instance, an ECU attached to multiple sensors may not necessarily have the capability to sample them instantaneously. Consequently, our approach is to execute in the first stage the lower-complexity MILP formulation that is better suited for the given architecture, since this reduces the time cost of reconfiguring task sets in case the MILP solver initially returns no solution. \subsubsection{Synchronous Network Access Platform Model} Another option is to assume that network access is synchronized -- i.e.,~$\forall i, \phi_i^{net}=0$ and $t_1=n\cdot p_i$ for some $n$ in \figref{endToEndTiming}. In this case, the network MILP for parameter synthesis is executed first, with only message deadlines being subject to minimization to `leave' most of the time for sensing and control -- resulting in the most efficient problem decomposition. On the other hand, if the ECUs' MILP is run first, both sensing deadlines should be minimized and control offsets should be maximized, as described in Section~\ref{subsec:syncSens}. Then, in the second stage, the ECUs' synthesis MILP is a feasibility problem, with additional simplifications since the constraints~\eqref{eq:c1}-\eqref{eq:c3} become active (i.e., equalities hold), and for all $i$, $d_i^{net}$ are pre-specified and $\phi_i^{net}=0$.
In terms of complexity, this approach is appropriate for large problems since it decouples the ECU and network analysis. Consequently, this reduces the number of variables and constraints per program since only a part of the time testing instants now remain variables. \section{Modeling Secure Control Transactions} \label{sec:modeling} Let us consider the workload imposed by a secure control transaction, such as the one shown in \figref{motivationFig} (center, right). Schedulability analysis for such workloads using the standard task model $(WCET,period,deadline)$ is highly pessimistic -- clearly, the task sets from the figures would be rejected; the reason is that the standard task and message models accepting a single WCET parameter coarsely overapproximate the load on the ECUs and the shared network imposed by sparsely added security overhead. Thus, we need a model that captures the variable execution (or transmission) times of such security-aware real-time~tasks. The multi-frame task model \cite{Multiframe} supports tasks that have execution times varying among consecutive invocations (called \emph{frames}) in an arbitrary pattern. However, this model is overly general in that it allows any pattern of frames to be specified, and schedulability analyses for multi-frame tasks often assume that the worst-case alignment of frames is legal --- exactly the scenario we want to avoid.
In our case, it suffices to facilitate two frame sizes, regular and extended, with extended frames corresponding to executions that include security-related overhead, as well as additional parameters specifying the extended frame period and offset; this allows capturing periodic cumulative data authentication policies, such as the ones applied to tasks in Fig.~\ref{fig:motivationFig} (center, right). Our goal is to develop a methodology for completing a set of transactions on the available shared network and set of ECUs, while taking into account the required level of periodic data integrity guarantees, which is obtained from the predefined requirements on QoC under attack. Thus, we assume that the (non-zero) task offsets and constrained deadlines are not known a priori. Instead, the respective task sets are considered incomplete in the sense that their periods and execution/transmission times are known, but the offsets and deadlines for each of the tasks that produce a schedulable set of transactions are to be determined. Consequently, we model the \emph{security-aware tasks} as $T_i(C_i,p_i,\phi_i,d_i,l_i,f_i,s_i)$, where \begin{itemize} \item $C_i=[c_i^{reg},c_i^{ext}]$ is a WCET array for the two frame types, regular and extended, respectively -- $c_i^{reg}$ is equal to $c^{sens}, c^{net}$ or $c^{ctrl}$ for $T_i^{sens}$, $M_i^{net}$ and $T_i^{ctrl}$, respectively; \item $p_i$ is the period at which jobs are released, $\phi_i$ is the release offset, and $d_i$ is the task's deadline relative to its activation; \item $l_i$ is the distance (i.e., number of control periods) between consecutive authentication blocks; \item $f_i$ captures the length of the authentication block -- i.e., the number of authenticated frames within one authentication period (i.e., within every interval of length~$l_ip_i$); \item $s_i$ is the initial authentication offset -- i.e., the integer multiple of periods by which the initial authentication is deferred.
\end{itemize} Note that the task offset consists of two components: $\phi_i$ and $s_ip_i$; $\phi_i$ is required to encode precedence constraints and applies to all jobs of the considered $i^\text{th}$ task. On the other hand, $s_ip_i$ determines the additional offset of only the extended frames, which provides a degree of freedom during scheduling to avoid the extended frame alignment scenarios emphasized in the motivating~example (Fig.~\ref{fig:motivationFig} (left)). For tasks in any secure control transaction $\mathcal{T}_i$, some of the above parameters (i.e., $s_i,f_i,l_i$) directly follow from the employed authentication policy $\mu_i(s_i,f_i,l_i)$, as illustrated in~\figref{modelExample} for one example transaction. First, $l_i^{sens}=l_i^{net}=l_i^{ctrl}=l_i$, since the authentication period is the same for both tasks and the communication message. In addition, $f_i^{sens}=f_i$, as the $T_i^{sens}$ task computes a cumulative MAC over a block of $f_i$ consecutive measurements, before attaching the MAC to the last message from the block. Also, $f_i^{ctrl}=1$ since the $T_i^{ctrl}$ task verifies (i.e., authenticates) a block of consecutive measurements only once, when it receives the cumulative MAC prepared by $T_i^{sens}$ and delivered by $M_i^{net}$. Thus, it also holds that $f_i^{net}=1$. Similarly, the initial authentication offsets depend on the authentication policy used. First, $0\leq s_i^{sens}\leq l_i-f_i$, since the first computation of the cumulative MAC within a block must start early enough to allow for execution of $f_i$ consecutive extended frames within $l_i$ periods of $T_i^{sens}$. Additionally, the initial extended frames of the message $M_i^{net}$ and control task $T_i^{ctrl}$ have constrained start times, with $s_i^{ctrl}=s_i^{net}=s_i^{sens}+f_i^{sens}-1$, as the $T_i^{sens}$ task computes the cumulative MAC over $f_i^{sens}$ periods, followed by an authenticated transmission and an authenticating control task, as shown in~\figref{modelExample}.
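For concreteness, the parameter relations above can be captured in a small derivation routine. This is only an illustrative Python sketch (the function and field names are not from an existing implementation):

```python
# Sketch: deriving per-task authentication parameters from a periodic
# cumulative authentication policy mu_i(s_i, f_i, l_i), following the
# relations stated above. Names are illustrative assumptions.

def derive_task_auth_params(s, f, l):
    """Given policy mu(s, f, l), return (l, f, s) parameter dictionaries
    for the sensing task, the network message, and the control task."""
    assert 0 <= s <= l - f, "f extended frames must fit within l periods"
    sens = {"l": l, "f": f, "s": s}
    # The message and control task authenticate once per block, right
    # after the sensing task finishes the cumulative MAC (f frames later).
    net = {"l": l, "f": 1, "s": s + f - 1}
    ctrl = {"l": l, "f": 1, "s": s + f - 1}
    return sens, net, ctrl
```

For instance, for the policy $\mu_i(1,2,4)$ used in the example transaction later in this section, the routine yields $s_i^{net}=s_i^{ctrl}=2$.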
Problem~1 can now be reformulated around synthesis of feasible deadlines ($d_i^{sens}, d_i^{net}, d_i^{ctrl}$), offsets ($\phi_i^{sens}, \phi_i^{net}, \phi_i^{ctrl}$) and initial authentication offsets ($s_i^{sens}, s_i^{net}, s_i^{ctrl}$) for all secure control transactions $\mathcal{T}_i$, $i=1,...,N$, such that the precedence constraints from~\eqref{eq:c1}-\eqref{eq:c3} are satisfied, and for which the obtained complete transaction set $\mathcal{T}$ is schedulable under preemptive EDF for ECUs and non-preemptive EDF for the network. Thus, the following section starts by deriving schedulability conditions for the presented task model under preemptive and non-preemptive EDF~scheduling. \section{System and Attack Model} \label{sec:motivation} In this section, we present the system architecture and model, including the attack model, and introduce cumulative authentication policies that ensure the desired QoC levels in the presence of attacks. We then formalize the problem of adding security guarantees against MitM attacks and outline our design-time methodology (shown in \figref{methodology}) to integrate security in resource-constrained CPS. \begin{figure}[!t] \centering \includegraphics[width=.84\linewidth]{systemArch.pdf} \caption{System architecture with $N$ physical plants ($\mathcal{P}_1,...,\mathcal{P}_N$) that are sampled and controlled in real-time by $M$ ECUs ($\varepsilon_1,...,\varepsilon_M$); the ECUs communicate with the corresponding plants' sensors and actuators over a real-time communication network.
We assume that the mapping of controllers for each plant $\mathcal{P}_i$ to a specific ECU $\varepsilon_j$ is already performed.} \label{fig:stdArch} \end{figure} \subsection{System Architecture and Model without Attacks} We consider a common CPS architecture shown in \figref{stdArch}, where sensors for $N$ physical plants $\mathcal{P}_i$ ($i=1,..,N$), as illustrated in the \emph{plant layer} in the figure, communicate with plant controllers over a shared real-time network. We assume that each plant $\mathcal{P}_i$ can be modeled in the standard linear systems form as \begin{equation} \begin{split} \mathbf{x}_i[{k+1}] &= \mathbf{A}_i\mathbf{x}_i[k] + \mathbf{B}_i\mathbf{u}_i[k] + \mathbf{w}_i[k] \\ \mathbf{y}_i[k] &= \mathbf{C}_i\mathbf{x}_i[k] + \mathbf{v}_i[k], \end{split} \label{eq:system} \end{equation} where $\mathbf{x}_i[k], \mathbf{y}_i[k]$ and $\mathbf{u}_i[k]$ denote the plant's state, output and input vectors at time $k$, while $ \mathbf{w}_i[k]$ and $\mathbf{v}_i[k]$ are process and measurement noise. In addition, each plant $\mathcal{P}_i$ is controlled by a feedback controller that in the most general form can be captured as \begin{equation*} \begin{split} \hat{\mathbf{x}}_i[k+1] &= \mathbf{f}_i\left(\hat{\mathbf{x}}_i[k],\hat{\mathbf{y}}_i[k]\right) \\ \mathbf{u}_i[k] &= \mathbf{g}_i\left(\hat{\mathbf{x}}_i[k],\hat{\mathbf{y}}_i[k]\right). \end{split} \end{equation*} Here, $\mathbf{f}_i(\cdot)$ and $\mathbf{g}_i(\cdot)$ denote arbitrary linear mappings, which may for example describe an observer-based state feedback controller illustrated in \figref{controllerArch}. In addition, $\hat{\mathbf{x}}_i[k]$ and $\hat{\mathbf{y}}_i[k]$ denote the estimate of the plant's state and sensor measurements received by the controller at time $k$. 
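As a minimal illustration of the closed loop in~\eqref{eq:system}, the following sketch simulates a scalar (one-dimensional) plant with an observer-based state-feedback controller standing in for $\mathbf{f}_i$ and $\mathbf{g}_i$. All numeric gains are assumed, illustrative values, not taken from the paper:

```python
# One-dimensional instance of x[k+1] = A x[k] + B u[k] + w[k],
# y[k] = C x[k] + v[k], with a Luenberger observer and state feedback.
# Gains K (feedback) and Lg (observer) are assumed stabilizing choices.

A, B, C = 0.9, 1.0, 1.0
K, Lg = 0.5, 0.6

def closed_loop_step(x, xhat, w=0.0, v=0.0):
    y = C * x + v                                   # sensor measurement
    u = -K * xhat                                   # u = g(xhat, yhat)
    xhat_next = A * xhat + B * u + Lg * (y - C * xhat)  # observer update
    x_next = A * x + B * u + w                      # plant update
    return x_next, xhat_next

x, xhat = 1.0, 0.0
for _ in range(50):        # noise-free run: state and estimate converge to 0
    x, xhat = closed_loop_step(x, xhat)
```

Here the estimation error evolves as $e[k+1]=(A-L_gC)\,e[k]$, so with the chosen gains both the state and the estimate settle quickly.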
Also, as shown in \figref{controllerArch}, we assume that each controller is equipped with a physics-based intrusion/anomaly detector that employs the plant model and a window of previous control inputs ($\mathbf{u}_i[k]$), state estimates ($\hat{\mathbf{x}}_i[k]$) and received sensor measurements ($\hat{\mathbf{y}}_i[k]$) to trigger alarms (e.g.,~as in~\cite{pajic_tcns17,mo2010false, kwon2014stealthy,jovanov_cdc17,miao_tcns17}). \subsubsection{Task and Message Models} For each plant $\mathcal{P}_i$, measurement acquisition, packing and transmission are performed by a periodic \emph{sensing} (or \emph{transmitting}) task denoted by $T^{sens}_i$. In addition, a periodic \emph{control} (or \emph{receiving}) task $T^{ctrl}_i$, which may be executed on a different ECU, unpacks received measurements before using them for control updates in each sampling (i.e., actuation) period. Hence, the periods of these tasks are equal to the sampling period of the controlled plant -- i.e.,~$p_i^{sens}=p_i^{ctrl}=p_i$. We also assume that the mapping of tasks onto ECUs has already occurred, as shown in \figref{stdArch} -- i.e., the set $\mathcal{T}_{\mathcal{E}_j}, j=1,...,M$, of tasks executing on each of the $M$ ECUs $\mathcal{E}_1,...,\mathcal{E}_M$ is known; for example, in the \emph{platform layer} in~\figref{stdArch}, the task set $\mathcal{T}_{\mathcal{E}_2}$ that contains $T^{sens}_1$ and $T^{ctrl}_N$ is mapped onto ECU $\mathcal{E}_2$. We further assume that the worst-case execution times (WCETs) for all these tasks are known, and let $c_i^{ctrl}$ and $c_i^{sens}$ denote the WCETs on the assigned ECUs for tasks $T_i^{ctrl}$ and $T^{sens}_i$ ($i=1,...,N$). Each sensing task $T^{sens}_i$ communicates sensor measurements to control task $T^{ctrl}_i$ through a real-time message $M_i^{net}$ with the same period $p_i$ and the worst-case transmission time $c_i^{net}$, as illustrated in the task/message layer in \figref{stdArch}.
Note that when no confusion arises, we refer to all $T_i^{sens}$, $M_i^{net}$, and $T_i^{ctrl}$ as tasks. Finally, without loss of generality, we assume that actuation is done directly by control tasks, i.e., actuation commands are not transmitted as messages over the network, although the presented model can be easily generalized to cover this case. \begin{figure}[!t] \centering \begin{minipage}[t]{.44\textwidth} \centering \includegraphics[width=\textwidth]{controllerArch.pdf} \captionof{figure}{General controller design. In addition to a standard estimator (i.e., observer) and a feedback controller, the controller employs a physics-based intrusion/anomaly detector.} \label{fig:controllerArch} \end{minipage} \hspace{8pt} \begin{minipage}[t]{.5\textwidth} \centering \includegraphics[width=\linewidth]{endToEnd.pdf} \captionof{figure}{Timing diagram of a control transaction --- the precedence requirements for sensing (transmitting) task $T_i^{sens}$, message $M_i^{net}$ and control (receiving) task $T^{ctrl}_i$ are captured by constraints~\eqref{eq:c1}-\eqref{eq:c3}.} \label{fig:endToEndTiming} \end{minipage}% \end{figure} \paragraph{Control Transactions}\label{sec:CT} For any plant~$\mathcal{P}_i$, we define a \emph{control transaction} $\mathcal{T}_i$ as the chain of invocations of $T_i^{sens}$, $M_i^{net}$ and $T_i^{ctrl}$ with all the tasks being precedence-constrained. Specifically, the earliest time a job of task $T_i^{ctrl}$ may start execution is upon receiving the required sensor message. Similarly, network access for message $M_i^{net}$ cannot be requested before task $T_i^{sens}$ has prepared data for transmission. 
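The precedence chain just described can be checked in a few lines; the inequalities mirror the constraints formalized below, and the task parameters in the usage example are hypothetical:

```python
# Sketch: consistency check for one control transaction's precedence chain
# (sensing -> message -> control, all within one period). Each task is
# given as a pair (offset phi, relative deadline d); p is the period.

def transaction_precedence_ok(p, sens, net, ctrl):
    phi_s, d_s = sens
    phi_n, d_n = net
    phi_c, d_c = ctrl
    return (phi_n >= phi_s + d_s and   # message released after sensing deadline
            phi_c >= phi_n + d_n and   # control released after message deadline
            phi_c + d_c <= p)          # actuation completes within the period

# Example: p = 10, sensing (0, 3), message (3, 2), control (5, 4) -> valid chain
```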
We capture these precedence constraints with non-zero offsets and constrained deadlines imposed on the tasks (\figref{endToEndTiming}); we model the tasks in the standard $(WCET, period, offset, deadline)$ format as $T_i^{ctrl}(c_i^{ctrl}, p_i,\phi_i^{ctrl},d_i^{ctrl})$, $M_i^{net}(c_i^{net}, p_i,\phi_i^{net},d_i^{net})$, and $T_i^{sens}(c_i^{sens}, p_i,\phi_i^{sens},d_i^{sens})$, with the precedence constraints specified as \begin{align} \label{eq:c1} \phi_i^{net}&\geq \phi_i^{sens} + d_i^{sens},\\ \label{eq:c2} \phi_i^{ctrl}&\geq \phi_i^{net}+d_i^{net},\\ \label{eq:c3} \phi_i^{ctrl}+d_i^{ctrl}&\leq p_i, \end{align} and illustrated in \figref{endToEndTiming}. To simplify our notation, constraint~\eqref{eq:c3} employs a standard assumption (e.g.,~as in~\cite{relaxingPeriodicityCAN}) that the delay between sampling and actuation for each plant $\mathcal{P}_i$ is bounded by the control period~$p_i$; however, these constraints can be easily adjusted for any fixed sampling-to-actuation delay bounds that may be considered. Finally, it is important to highlight that the period $p_i$ and WCETs $c_i^{sens}$, $c_i^{net}$, and $c_i^{ctrl}$ are~known~and considered inputs to our design-time procedure, as we do not want to significantly affect the initial (i.e., non-secured) control deployment. On the other hand, to enforce the tasks' precedence, each control transaction imposes the aforementioned constraints between the offsets and deadlines used to model the transaction tasks. Yet, the actual values are \textbf{not} assigned a priori, i.e.,~the transaction set is considered incomplete, and our goal is to determine offsets and deadlines for all tasks that produce a schedulable set of control transactions even when security mechanisms are incorporated. \subsection{Attack Model} The considered system architecture is susceptible to network-based attacks, such as MitM attacks, on communication between sensors and controllers.
The attacker can use actions of the controller to force the plant away from the desired state by injecting false data that differ from actual sensor measurements, consequently affecting the controller's estimation and thus the applied control inputs. To formally capture this, we use the standard attack model from~\cite{ncs_attack_models,pajic_csm17,pajic_tcns17, mo2010false,fawzi_tac14}, where an additional term $\mathbf{a}_i[k]$ captures the vector of values injected by the attacker at time $k$ on compromised measurements -- i.e., with MitM attacks, measurements received by the controller $\hat{\mathbf{y}}_i[k]$ may differ from the actual sensor measurements $ {\mathbf{y}}_i[k]$. Specifically, \begin{equation} \label{eq:attModel} \hat{\mathbf{y}}_i[k] = \begin{cases} \mathbf{y}_i[k], & \text{without MitM attack}\\ \mathbf{y}_i[k] + \mathbf{a}_i[k], & \text{with MitM attack.} \end{cases} \end{equation} Due to attacks, the system evolution would not occur according to the model from \eqref{eq:system}. Therefore, we differentiate system evolutions with and without attacks by adding superscript $a$ to all variables affected by the attacker's influence. For example, we denote the plant's state and outputs when the system is under attack as $\mathbf{x}_i^a[k]$ and $\mathbf{y}_i^a[k]$, respectively. Hence, in the case of attacks, sensor measurements delivered to the controller can be modeled as \begin{equation} \hat{\mathbf{y}}_i^a[k]=\mathbf{y}_i^a[k] + \mathbf{a}_i[k]= \mathbf{C}_i\mathbf{x}^a_i[{k}]+\mathbf{v}^a_i[{k}]+ \mathbf{a}_i[k]. \label{eq:systemA} \end{equation} The attack vector $\mathbf{a}_i[k]$ is unknown and can have any value assigned by the attacker. The only constraint is its sparsity, which depends on the set of compromised information flows from sensors to the controller; specifically, if communication from a sensor to the controller for plant $\mathcal{P}_i$ is not corrupted, then the corresponding value in $\mathbf{a}_i[k]$ has to be equal to zero.
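A minimal sketch of this measurement model, for a hypothetical two-sensor plant in which only the second sensor flow is compromised:

```python
# Sketch of the received-measurement model: the controller sees y + a under
# a MitM attack, with the attack vector constrained to be zero on every
# uncompromised sensor flow. All values are illustrative.

def received_measurements(y, a, compromised):
    """y: true measurements; a: attacker-chosen injections; compromised:
    per-sensor flags. Entries of a on uncompromised flows must be zero."""
    assert all(ai == 0.0 for ai, flag in zip(a, compromised) if not flag)
    return [yi + ai for yi, ai in zip(y, a)]

# Only the second flow is compromised, so a = [0.0, 0.5] is a legal attack.
yhat = received_measurements([1.0, 2.0], [0.0, 0.5], [False, True])
```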
Any assumptions about the set of compromised sensor flows (e.g.,~the number of the flows) can thus be captured by introducing constraints on the sparsity of the vector. However, unless stated otherwise, to simplify our presentation we focus on the worst-case scenario, where the attacker is able to compromise all sensor flows for the plant, once he/she decides to launch an~attack. With the use of standard cryptographic mechanisms, such as MACs, integrity of the received sensor data can be guaranteed, as we assume that the attacker does not have access to the shared secret keys used to generate the MACs. In addition, we assume that one of the attacker's goals is \emph{to remain stealthy}, and thus in time steps when message authentication is used, the attacker cannot inject false data (i.e.,~$\mathbf{a}_i[k]=\mathbf{0}$) or the attack will be detected.\footnote{Note that the attacker, with access to the network, could launch Denial-of-Service attacks that prevent messages, including authenticated ones, from being successfully delivered to the controller. In this work, we do not consider such attacks since they are in general easier to detect in CPS with reliable communication networks.} Furthermore, we assume that the attacker has unlimited computation power and full knowledge of the system, system architecture and plant models, as well as the time-points when authentication will be utilized. This allows him to plan ahead, and smartly craft false measurements to be injected over the network, such that they do not trigger the deployed detector, while deceiving the controller into pushing the plant away from the desired operating~point.\footnote{Examples of such attacks can be found in~\cite{mo2010false, kwon2014stealthy,jovanov_cdc17,jovanov_arxiv17}.} Consequently, the attacker's goal is to maximally reduce control performance (i.e.,~QoC) while remaining stealthy -- i.e.,~undetected by the system. 
Therefore, in addition to not inserting false data packets in time-frames when data authentication is enforced, the injected falsified sensor measurements should not trigger the anomaly/intrusion detection system employed at the~controller. \section{Defending against Attacks with Intermittent Data Authentication} \label{sec:intermittent} Enforcing data integrity for every communicated measurement packet may be infeasible due to additional computation costs associated with signing and verifying MACs, as well as additional bandwidth required to transmit them. For example, consider three sensing tasks that are being executed on the same ECU, $\{T^{sens}_1(2,10),T^{sens}_2(2,10),T^{sens}_3(5,20)\}$,\footnote{To simplify our notation, when a task $T$ is represented as $T(c, p)$ it is assumed that its offset is equal to zero and relative deadline is equal to the period $p$.} and let us assume that the security-induced computation overhead to sign measurements with a MAC is $2$ time units. As shown in \figref{motivationFig}(left), the new task set $\{T^{sens}_1(4,10),T^{sens}_2(4,10),T^{sens}_3(7,20)\}$ is infeasible; thus, even if the network can deal with the additional communication overhead, the transmitting ECU cannot authenticate (i.e., sign) every~message. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{motivExample.pdf} \caption{Task set $T^{sens}_1(2,10)$, $T^{sens}_2(2,10)$, $T^{sens}_3(5,20)$ is infeasible if overhead of signing sensor measurements is $2$ time units in every sampling period (left). However, if $T^{sens}_1$ and $T^{sens}_2$ are allowed to authenticate every other period, and the initial authentication of $T^{sens}_2$ is deferred until the second period, the task set is schedulable (center). 
On the other hand, if the goal is to maximize QoC guarantees for the first plant by always authenticating $T^{sens}_1$ measurements, authentication rates for $T^{sens}_2$ and $T^{sens}_3$ can be reduced by authenticating every fourth and every other period, respectively, while still providing suitable QoC guarantees under attack, using the QoC degradation curves to guide formal tradeoff analysis (right).} \label{fig:motivationFig} \end{figure*} On the one hand, a stealthy attack may significantly reduce QoC if the attacker has compromised a certain number of sensor flows (e.g.,~\cite{pajic_tcns17,kwon2014stealthy}). For any specific class of controllers from~\figref{controllerArch}, by injecting false sensor data that result in a skewed state estimation, the attacker deceives the controller into applying inappropriate control inputs that steer the plant away from the operating point. On the other hand, in~\cite{jovanov_cdc17,jovanov_arxiv17,jovanov_cdc18}, we show how physical properties of a system can be exploited to relax integrity requirements for secure control of CPS. The idea is that the state estimation errors due to attacks have to increase slowly to avoid attack detection by the deployed physics-based detector from \figref{controllerArch}. In addition, since each plant has its own dominant time-constant, which can be obtained from the plant model $\mathcal{P}_i$, in the presence of a stealthy attack QoC can be significantly degraded only after some time has elapsed since the attack was launched. QoC degradation under attack occurs due to errors in state estimation caused by the false data injected at time-points when authentication is not used. Hence, for any data authentication policy, which can be captured as the time-points where MACs are used (i.e.,~times $k$ where $\mathbf{a}_i[k]=\mathbf{0}$), system performance under stealthy attacks can be evaluated by computing reachable regions of the state estimation error caused by the false data.
Specifically, due to stealthy false-data injection attacks, the reachable regions $\mathcal{R}[k]$ and $\mathcal{R}$ of the state estimation error can be defined as~\cite{jovanov_cdc17,jovanov_arxiv17,jovanov_cdc18} \begin{equation*} \label{eqn:Rk} \mathcal{R}[k]=\left\{ \begin{array}{c|c} \mathbf{e} \in\mathbb{R}^n & \left.\begin{array}{c} \mathbf{e}\mathbf{e}^\intercal\preccurlyeq E[\mathbf{e}^a[k]]E[\mathbf{e}^a[k]]^\intercal + \gamma Cov(\mathbf{e}_k^a), \\~\mathbf{e}^a[k]=\mathbf{e}_k^a(\mathbf{a}_{1..k}),~\mathbf{a}_{1..k}\in\mathcal{A}_k \end{array}\right.\end{array}\right\} \quad\text{and}\quad \mathcal{R}= \bigcup_{k=0}^\infty \mathcal{R}[k]. \end{equation*} Here, $\mathcal{R}$ is the global reachable region of the state estimation error, while $\mathcal{A}_k$ denotes the set of all stealthy attacks $\mathbf{a}_{1..k} = \left[\mathbf{a}[1]^\intercal ... \mathbf{a}[k]^\intercal\right]^\intercal$, and $\mathbf{e}_k^a(\mathbf{a}_{1..k})$ is the estimation error evolution due to the attack $\mathbf{a}_{1..k}$. Note that this general definition allows for the inclusion of additional information, such as the number and location of compromised sensors. Unless otherwise stated, we assume that measurements from all sensors are compromised when authentication is not used. For instance, \figref{sampleRegions} shows the reachable regions of state estimation error due to stealthy attacks over the adaptive cruise control system described in~Sec.~\ref{sec:caseStudy} for the case with and without intermittent authentication. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{sampleRegions.png} \caption{State estimation error evolution due to stealthy attack on distance sensing in an adaptive cruise control system -- projections of the reachable regions in the 3-dimensional state space (distance-speed-acceleration) are shown. 
Note that the attainable state estimation error is significantly reduced (but not zero) if integrity is enforced over every $4^{th}$ measurement, while the regions grow infinitely without any integrity enforcement.} \label{fig:sampleRegions} \end{figure*} In \cite{lesi_tecs17}, we introduced a \emph{QoC degradation curve} $\mathcal{J}_i(l)$ that, for any linear plant $\mathcal{P}_i$, directly quantifies the dependency between the security-induced computation and bandwidth overhead and the control performance (QoC) under attack, which is reduced due to the estimation errors. Specifically, $\mathcal{J}_i(l)$ can be used to bound QoC degradation as a function of $l$ -- the maximal time between consecutive uses of MACs in data authentication~policies. This can be formally captured as \begin{equation*} \label{eqn:R} \mathcal{J}_i(l)= \sup\{\|\mathbf{e}^a\|_2 ~|~ \mathbf{e}^a\in\mathcal{R}_i^{l}\},~~~~\text{where }~~~~ \mathcal{R}_i^{l} = \cup_{k=0}^\infty \mathcal{R}_i^{l}[k], \end{equation*} where $\mathcal{R}_i^{l}[k]$ denotes the reachable region $\mathcal{R}_i[k]$ computed for all data authentication policies with inter-authentication distance of $l$. Such QoC-degradation curves enable the designer to accurately adjust the system's working point by balancing the computational and network resources allocated for security against the resulting QoC guarantees under attack, as the predefined QoC requirement can be directly mapped into security-induced overhead and vice versa. To illustrate this, let us revisit the example from \figref{motivationFig}, and let us assume that for the first two plants, authenticating sensor measurements in every other sampling period ensures the desired QoC level in the presence of attack. \figref{motivationFig}(center) shows that under such conditions, by deferring the initial authentication of $T^{sens}_2$ until the second period, the task set becomes feasible.
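The resource arithmetic behind this example can be sanity-checked with a coarse utilization count. This is only a sketch (the actual analysis relies on exact EDF demand conditions), and for illustration we assume all three tasks use the same authentication distance:

```python
# Utilization sketch for the motivating example: tasks (c, p) with a
# 2-unit MAC-signing overhead. Signing every period (l = 1) overloads the
# ECU, while signing every other period (l = 2) keeps utilization <= 1.

def utilization(tasks, overhead, l):
    """l = number of periods between authentications (1 = every period);
    the average per-period cost is c + overhead / l."""
    return sum((c + overhead / l) / p for c, p in tasks)

tasks = [(2, 10), (2, 10), (5, 20)]
u_every = utilization(tasks, 2, 1)   # 0.4 + 0.4 + 0.35 = 1.15 > 1
u_alt = utilization(tasks, 2, 2)     # 0.3 + 0.3 + 0.3  = 0.9 <= 1
```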
Note that, however, if every fourth measurement for the second plant and every other measurement of the third plant are authenticated, then the measurements for the first plant can continuously be authenticated, as in \figref{motivationFig}(right). QoC degradation curves explicitly capture the dependency between the required security-related overhead and control performance, and can be used to decide which scenario is more desirable with respect to the overall (for all plants) QoC guarantees. \subsection{Cumulative Data Authentication Policies} In general, depending on the considered plant's dynamics (i.e., matrices $\mathbf{A}_i,\mathbf{B}_i,\mathbf{C}_i$ in~\eqref{eq:system}) it may not be sufficient to intermittently authenticate sensor measurements at one time point. Rather, the integrity of $f_i$ consecutive measurements should be ensured, with these time-windows appearing intermittently during system execution~\cite{jovanov_arxiv17}.\footnote{As shown in~\cite{jovanov_arxiv17}, $f=\min(\psi,q^{un}_i)$, with $\psi$ being the observability index of the $(\mathbf{A}_i,\mathbf{C}_i)$ pair and $q^{un}_i$ the number of unstable eigenvalues of $\mathbf{A}_i$. } Implementing such data authentication policies with the use of standard MACs, where every authenticated message is signed with its own MAC added to the message, would require that $f_i$ consecutive communication packets are extended to accommodate MACs. As the network is commonly a bottleneck in resource-constrained CPS, in this work we propose the use of \emph{{cumulative} message authentication}, where a MAC is computed over several consecutive plant measurements before being attached to the final message from the block; this significantly reduces the network load by transmitting a MAC for multiple consecutive data points as part of a single message~\cite{streamAuthentication1,nilsson2008efficient}.
Therefore, we introduce the following definitions for cumulative data authentication policies that intermittently or periodically authenticate blocks of messages with sensor measurements. \begin{definition}\label{def:intpolicy} An intermittent cumulative data authentication policy $\mu_i=(\left\{t_{j}\right\}_{j=0}^\infty, f_i, l_i)$, with $t_{{j-1}}<t_{j}$ and $l_i=\sup_{j>0} \left(t_{j} - t_{{j-1}}\right)$, ensures that $\mathbf{a}_i[t_j] = \mathbf{a}_i[t_j+1] =...=\mathbf{a}_i[t_j+f_i-1] =\mathbf{0}$, for all $j\geq 0$. \end{definition} \begin{definition}\label{def:perpolicy} A periodic cumulative data authentication policy $\mu_i(s_i,f_i,l_i)$, where $0\leq s_i\leq l_i-1$, ensures that for all $j \geq 0$, $$\mathbf{a}_i[s_i+l_i\cdot j] = \mathbf{a}_i[s_i+1+l_i\cdot j] = ... =\mathbf{a}_i[s_i+f_i-1 + l_i\cdot j] =\mathbf{0}.$$ \end{definition} \noindent Definition~\ref{def:intpolicy} imposes a maximum time of $l_ip_i$ (i.e., $l_i$ control periods) between the initial authenticated measurements of consecutive blocks of $f_i$ authenticated measurements. On the other hand,~with the periodic cumulative authentication policies from Definition~\ref{def:perpolicy}, the time between the initial authentications of consecutive blocks is always exactly~$l_ip_i$, and authentication blocks start with an initial offset equal to~$s_ip_i$. A control transaction with an intermittent or periodic cumulative authentication policy applied to its tasks (resulting in security-related overheads) is referred to as a {\emph{secure control transaction}}. For example, consider a secure transaction $\mathcal{T}_i$ from \figref{modelExample}, where the periodic cumulative data authentication policy $\mu_i(1,2,4)$ is implemented using cumulative MACs.
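The time steps protected by a periodic policy from Definition~\ref{def:perpolicy} can be enumerated directly; a short sketch, instantiated for the example policy $\mu_i(1,2,4)$:

```python
# Sketch: enumerate the time steps k with a_i[k] = 0 under a periodic
# cumulative authentication policy mu(s, f, l), i.e., the steps
# {s + l*j, ..., s + f - 1 + l*j} for j >= 0, truncated to a horizon.

def authenticated_steps(s, f, l, horizon):
    return sorted(s + l * j + i
                  for j in range((horizon - s) // l + 1)
                  for i in range(f)
                  if s + l * j + i < horizon)

# mu_i(1, 2, 4) protects steps 1, 2, 5, 6, 9, 10, ... -- i.e., blocks of
# two consecutive authenticated measurements every four control periods.
steps = authenticated_steps(1, 2, 4, 12)
```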
During every four periods, overhead due to MAC signing for sensing task $T_i^{sens}$ is spread over $f_i=2$ jobs, while only one message $M_i^{net}$ and one job of $T_i^{ctrl}$ include overhead due to authentication, and only after the last message from the authenticated block is prepared for transmission by~$T_i^{sens}$. Finally, the use of cumulative authentication introduces a delay in verifying data integrity that has to be taken into account when QoC degradation curves are derived. Therefore, in this case QoC degradation curves can be captured as $\mathcal{J}_i(l_i,f_i)$, which are computed from the plant model $\mathcal{P}_i$ using the reachability analysis we introduced in~\cite{jovanov_cdc18}, as illustrated in the upper-right part of \figref{methodology}. Since the reachability analysis considers intermittent cumulative authentication policies from Definition~\ref{def:intpolicy}, when used for periodic policies $\mu_i(s_i,f_i,l_i)$, as defined in Definition~\ref{def:perpolicy}, it provides QoC guarantees \textbf{for any value} of $s_i$. For example, the QoC-degradation curves for adaptive cruise control, driveline management and lane keeping controllers, as functions of inter-authentication distance ($l_i$) and authentication block length ($f_i$), are shown in \figref{AllQocCurves}. Note that the adaptive cruise control system requires that at least two consecutive measurements are authenticated (i.e., $f_{ACC}\geq 2$) due to the properties of the plant's dynamics. These QoC-degradation functions $\mathcal{J}_i(l_i,f_i)$ provide the basis for our analysis of tradeoffs between QoC guarantees under attack and the required computational and network resources used for data authentication (i.e., security-related overhead). For each plant $\mathcal{P}_i$, the function $\mathcal{J}_i(l_i,f_i)$ is a non-decreasing function in variable $l_i$.
In addition, the minimal required value for $f_i$ can be directly computed from the model of $\mathcal{P}_i$, without significant QoC improvements being obtained by increasing $f_i$ further. Therefore, the desired QoC requirements (e.g.,~a bound on $\mathcal{J}_i(l_i,f_i)$) can be directly mapped into constraints on the value of $l_i$, the number of non-authenticated communication packets between consecutive block authentications. \subsubsection{Overview of our Approach} Our goal is to ensure the desired level of QoC for all controlled plants in resource-constrained CPS, even in the presence of network-based attacks. As resource constraints prevent continuous authentication of transmitted sensor measurements, we focus on \emph{periodic} cumulative authentication policies, as such policies spread the block integrity enforcements maximally apart. To achieve this, we propose the use of the design-time framework from \figref{methodology}, which directly facilitates tradeoff analysis between the QoC guarantees under attack and the security (i.e.,~authentication) overhead for ensuring intermittent integrity of sensor measurements. For each plant $\mathcal{P}_i$, $i=1,...,N$, the plant model and corresponding QoC curve $\mathcal{J}_i(l_i,f_i)$ are used to obtain constraints on the employed periodic cumulative authentication policies; specifically, the values for $l_i$ and $f_i$ (but not $s_i$) that result in the desired QoC. In addition, from the platform model and the initial controller specification, regular (i.e., without overheads) and extended (i.e., including authentication) WCETs can be obtained, along with the control transaction period $p_i$. On the other hand, for the task models to be complete and the intermittent authentication policies to be fully defined, it is necessary to derive feasible (i.e.,~schedulable) tasks' offsets and deadlines, as well as initial authentication offsets ($s_i$) for the cumulative authentication policies.
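Because $\mathcal{J}_i$ is non-decreasing in $l_i$, the mapping from a QoC requirement to a constraint on $l_i$ reduces to a monotone lookup, sketched below over a made-up, tabulated degradation curve:

```python
# Sketch: map a QoC bound to the largest admissible inter-authentication
# distance l, exploiting the monotonicity of J_i(l, f) in l. The curve
# values are invented for illustration (a fixed f is assumed).

def max_distance(qoc_curve, bound):
    """qoc_curve: dict l -> J_i(l, f); returns the largest l with
    J_i(l, f) <= bound, or None if even the smallest l violates it."""
    feasible = [l for l, j in qoc_curve.items() if j <= bound]
    return max(feasible) if feasible else None

curve = {1: 0.1, 2: 0.25, 4: 0.6, 8: 1.5}   # hypothetical J_i(l, f)
```

For instance, with the hypothetical curve above, a QoC bound of $0.7$ admits any $l_i \leq 4$.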
Consequently, to allow for the execution of secure control transactions with the desired levels of QoC in the presence of attacks, in the rest of the paper we focus on the following scheduling~problems. \begin{problem} \label{schedulingProblem} For a set of secure control transactions $\mathcal{T}=\left\{\mathcal{T}_1,...,\mathcal{T}_N\right\}$, complete the respective task/message sets and deployed periodic cumulative authentication policies, such that the obtained secure transaction set $\mathcal{T}$, mapped to available ECUs $\mathcal{E}_1, ...\mathcal{E}_M$, is schedulable under preemptive EDF for ECUs and non-preemptive EDF for the network. \end{problem} \begin{problem} \label{optimalProblem} Starting from a schedulable set of secure control transactions $\mathcal{T}$, obtained from Problem~1, improve the overall QoC guarantees by utilizing remaining resources (ECU time, network bandwidth) with the use of intermittent cumulative data authentication~policies. \end{problem} We consider the use of the EDF scheduler uniformly across ECUs and the network, since EDF is an optimal non-idling scheduler for preemptive task scheduling (i.e., on ECUs), while it outperforms rate-monotonic schedulers for realistic loads on non-preemptive networks such as CAN~\cite{relaxingPeriodicityCAN,zuberiShin}. The main challenge in determining the unknown parameters (task offsets, deadlines and extended frame start times) is capturing schedulability conditions for preemptive EDF on each of the ECUs, as well as non-preemptive EDF for the shared network. Therefore, in the next section, we start by examining the mapping of the control- and security-related platform requirements into a security-aware control transaction model, which will provide a basis for our schedulability analysis and parameter synthesis procedure. \begin{remark}[Reduction of Control Rate vs.
Reduction of Authentication Rate] The main idea behind this work is that with the simultaneous use of physics-based attack detection and cyber-based security mechanisms, such as message authentication, we will be able to provide strong QoC performance guarantees even in resource-constrained CPS, in which it is not possible to protect integrity of every transmitted sensor measurement. An alternative approach to the use of intermittent authentication would be to reduce the control rate to levels that ensure that every transmitted sensor message can be authenticated. For instance, for our running example from~\figref{motivationFig}, if control task periods are set to $20$, $20$, and $40$ time units respectively (instead of $10$, $10$, and $20$), MACs can protect integrity of every sensor measurement transmitted over the network. However, reducing the control rates (i.e., by increasing control task/sampling periods) results in a reduced control performance in the case without attacks, compared to the initial system that employs the nominal control periods. On the other hand, our goal is to add protection against network-based attacks with strong QoC guarantees in the presence of attacks, without negative effects on control performance (i.e., QoC) when the system is not under attack. With the use of intermittent authentication policies this can be achieved by ensuring schedulability of the main control functionalities (tasks) at the nominal (i.e., initial) periods/rates even when the authentication mechanisms are only intermittently utilized. \end{remark} \section{Opportunistic Authentications} \label{sec:opportunistic} The design-time framework from Section~\ref{sec:MILP} addresses Problem~1, resulting in schedulable secure control transactions with the desired levels of QoC even in the presence of attacks.
However, the overall QoC guarantees may be improved if the overall authentication rates, captured by $l_i$'s, are increased, which can be achieved if additional system resources (ECU time, network bandwidth) are available. While the QoC degradation curves capture the dependency between QoC and authentication rates (i.e., $l_i$), treating the distances between authentications $l_i$ as variables in the presented MILP, instead of as predefined values obtained from the QoC requirements, does not scale. Consequently, the methods we introduced in~\cite{lesi_tecs17,lesi_rtss17}, which optimally allocate resources such that the overall QoC under attack is maximized in systems where only network or only ECU scheduling is considered, cannot be employed for systems featuring many tasks/messages when both network and task scheduling are considered. On the other hand, for secure transactions with periodic cumulative authentication policies $\mu_i(s_i,l_i,f_i)$ obtained by the MILP-based framework from Section~\ref{sec:MILP}, ECUs and the network will commonly not be entirely utilized at runtime. Thus, in this section we consider the problem of how intermittent authentication can be added at runtime, on top of a system for which we already obtained strong timeliness and QoC-under-attack guarantees (i.e., Problem~2). As our goal is to develop a runtime scheme that allocates available resources (CPU/network time) to authenticate additional sensor messages, we assume that the following holds. First, each ECU needs to have knowledge of the network's busy intervals, or equivalently, of the temporal parameters of the network's workload, to ensure that additional transmitted MACs do not affect timeliness of existing periodic traffic. This is a valid assumption in low-level control networks (e.g.,~CAN bus that is considered in the case study in Section~\ref{sec:caseStudy}), where traffic patterns are fully defined at design-time.
Secondly, each ECU needs to have knowledge of its own available processing time, to ensure that additional MAC signing or verification can be performed without violating timing constraints of existing transactions, and other periodic and worst-case sporadic workload. This is typically satisfied for constrained embedded platforms targeted by this general framework, as they commonly execute reservation-based RTOSs that enforce runtime timeliness guarantees. In such systems, our goal is to develop a runtime policy to determine the optimal, or near-optimal \emph{opportunities} for additional sensor measurements to be authenticated. In essence, this policy defines ECU-side computation of the priority level with which the specific MAC transmission will compete with other ECUs attempting to opportunistically authenticate additional sensor measurements. Intermittent authentications should only be allowed outside the times captured by the deployed periodic cumulative authentication policies $\mu_i(s_i,l_i,f_i)$. To improve the overall QoC guarantees, we consider QoC degradation curves $\mathcal{J}_i$ for every plant, and assign priority to a MAC transmission based on the level of improvement in the overall QoC that the specific authenticated measurement would contribute. Specifically, we assign a reward $r_i(t)$ at time $t$ to an opportunistic authentication~as $$r_i(t)=\omega_i\mathcal{J}_i(\Delta l_i(t),f_i)\text{, where } \Delta l_i(t) = \floor*{\frac{\min \left({t-t_{i_{k-1}}}, t_{i_{k}}-t\right) }{p_i}},$$ with $t_{i_{k-1}}$ and $t_{i_{k}}$ being the nearest preceding and succeeding periodic authentication release times. This ensures that additional authentications are favored in the middle of periods of regularly scheduled authentications, as that results in tighter bounds on the attacker. Moreover, the weights~$\omega_i$ facilitate boosting priority of more important plants (e.g.,~steering over climate control).
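As an illustration, the reward computation can be sketched in a few lines of Python; the toy degradation curve, the function names, and all parameter values below are placeholders for illustration, not taken from any concrete deployment.

```python
import math

def delta_l(t, t_prev, t_next, p):
    """Distance (in sampling periods p) from the nearest scheduled
    periodic authentication, as in the expression for Delta l_i(t)."""
    return math.floor(min(t - t_prev, t_next - t) / p)

def reward(t, t_prev, t_next, p, f, weight, qoc_curve):
    """Priority r_i(t) of opportunistically authenticating the sample at
    time t; qoc_curve(dl, f) stands in for the QoC degradation curve J_i."""
    return weight * qoc_curve(delta_l(t, t_prev, t_next, p), f)

# Toy degradation curve: the QoC improvement grows with the distance from
# the scheduled authentications (purely illustrative shape).
toy_curve = lambda dl, f: dl / (dl + f)

# A sample midway between two periodic authentications (t_prev=0,
# t_next=100, p=10) earns a higher reward than one close to either.
mid = reward(50, 0, 100, 10, 1, 1.0, toy_curve)
edge = reward(12, 0, 100, 10, 1, 1.0, toy_curve)
assert mid > edge
```

In a CAN deployment, the computed reward would be mapped onto the message identifier field, letting the bus arbitration resolve competing opportunistic authentications.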
This approach is practical as the light-weight priority computation can be performed on the ECU itself in the case of the CAN bus, as the standard CAN protocol incorporates message priorities into the message identification field, while transmission conflicts are intrinsically resolved. Alternatively, the centralized scheduler assumed in TTCAN networks can enforce this policy, while each ECU in FlexRay networks features a \emph{bus guardian} that enforces design-time network access patterns at runtime, and can be augmented with the aforementioned functionality. In Section~\ref{sec:caseStudy}, we demonstrate how this approach can be used to significantly improve QoC under attack at runtime, at the expense of small amounts of utilized processing times and network bandwidth. \section{Related Work} \label{sec:relatedWork} Integrating security guarantees into legacy and resource-constrained systems has attracted significant research attention. For instance, in~\cite{OpportunisticExecutionLegacyRT} the authors explore opportunistic execution of security services in legacy real-time systems, while leveraging hierarchical scheduling to ensure that schedulability of existing tasks is not impaired. The security performance metric proposed therein is the frequency of periodic execution of security services. In~\cite{ImprovingEmbeddedSecurity}, a novel scheduling policy is proposed for embedded systems to ensure schedulability of real-time control tasks subject to both timing and security constraints. This is achieved by optimal distribution of slack times which are computed after schedulability of existing control tasks is guaranteed. Among a variety of security services, an optimal schedule is constructed based on abstract relative security levels. In~\cite{StaticRTsecurity}, the authors devise a security-aware EDF schedulability test.
Therein, security services are grouped by security level and execution of security services from different groups is combined to increase Quality-of-Security (e.g., message encryption can be combined with authentication to protect both confidentiality and integrity of transmitted data). Consequently, the group-based security model is integrated with EDF scheduling and a security-aware optimization problem is formulated around scheduling of suitable security services given a set of real-time tasks. However, it is important to highlight that no existing work provides a direct relationship between resource utilization and the actual system's performance pertaining to its main functionality (i.e., control performance, Quality-of-Control) -- in fact, only abstract \emph{security levels} are considered. Transaction scheduling is typically considered separately for time- and event-triggered communication models. Event-triggered transaction scheduling requires additional overheads for event signalling, i.e.,~synchronization between the transmitting and receiving nodes is explicitly obtained by transmission of additional messages. Examples of works addressing analysis of such transaction implementation schemes are~\cite{distrSync1,distrSync2}. For systems where network traffic patterns are determined by design, and resources (both processor computation power and network bandwidth) are severely constrained, satisfaction of timing constraints for transactions can be achieved by careful offset/deadline enforcement --- the approach considered in this paper. Traditional offset-based schedulability analysis for distributed systems under rate-monotonic scheduling was presented in the original \emph{holistic analysis} framework from~\cite{holisticDistributedRM}, and further improved in~\cite{staticDynamicOffsets,bcWCdistributedRT}. Furthermore, this analysis has been extended to EDF in~\cite{spuriEDFdistr1}.
However, only standard task models are considered, and these works mostly focus on computing response times, while no optimization framework is devised to generate feasible offsets (or deadlines). In~\cite{feasibleDeadlines}, the authors develop a technique to compute a (sufficient) region of admissible deadlines given a set of tasks under EDF, which enables the designer to optimize the desired performance metric. However, this approach is non-trivial to integrate into an end-to-end schedulability analysis framework, due to its recursive algorithmic nature. \section{Schedulability Analysis for Secure Control Transactions} \label{sec:schedulabilityAnalysis} \subsection{Schedulability of Security-Aware Tasks} We consider a schedulability condition for the sensing and control tasks based on the \emph{processor demand criterion}~\cite{Baruah1990}. Note that the condition from~\cite{lesi_tecs17} cannot be used as it does not support the use of periodic cumulative authentication on sensing tasks, as well as general offset and deadline values for tasks and messages in secure control transactions. On the other hand, necessary and sufficient schedulability conditions for the general task model (i.e., with non-zero offsets and deadlines differing from periods) under the preemptive EDF scheduler are formulated in~\cite{Baruah1990,ButtazzoBook}, starting from the~following. \begin{definition}[\cite{Baruah1990}] \label{def:dbfDef} The demand function $df_i$ of a standard task $T_i(c_i,p_i,\phi_i, d_i)$ on interval $[t_1,t_2]$ is $ ~~df_i(t_1,t_2) = \sum\limits_{\substack{\alpha_{i,j} \geq t_1,~\delta_{i,j} \leq t_2}}c_i$, where $c_i$ is the WCET of the $i^{\text{th}}$ task, while $\alpha_{i,j}$ represents the time of the $j^{\text{th}}$ job arrival, and $\delta_{i,j}$ its respective deadline.
\end{definition} \begin{theorem}[\cite{Baruah1990}] \label{thm:feasibilityCond} A task set $\{ T_1(c_1,p_1,\phi_1,d_1), T_2(c_2,p_2,\phi_2,d_2)$,..., $T_N(c_N,p_N,\phi_N,d_N) \}$ is schedulable by preemptive EDF if and only~if $ \sum_{i=1}^Ndf_i(t_1, t_2)\leq t_2-t_1,$ for all $t_1, t_2$ such that $t_1< t_2$. \end{theorem} Since, by definition, the demand function is piecewise constant with magnitude increasing in steps at time instants of job deadlines, the condition in \thmref{feasibilityCond} can be evaluated over a discrete and bounded time testing set. Formally, it is necessary to test the processor demand condition for all $t_{k_1}< t_{k_2} \leq t^{max}$ such that \begin{equation}\label{eq:timeTestingSets} \begin{split} t_{k_1} \in TS_{arr} = &\bigcup_{i=1}^{N}\{ t | t=\phi_i+k_1 p_i, k_1\in\mathbb{N}_0, t\leq t^{max}\},\\ t_{k_2} \in TS_{dead} = &\bigcup_{i=1}^{N}\{ t | t=\phi_i+d_i+k_2 p_i, k_2\in\mathbb{N}_0, t\leq t^{max}\}, \end{split} \end{equation} where $t^{max}=\max_{i}\phi_i+\max_{i}d_i+ 2\cdot lcm\{ p_1,...,p_N \}$ is the maximal time up to which the CPU demand has to be tested to ensure correctness of the analysis~\cite{LeungMerrill}, and $lcm$ is the least common~multiple. We use this schedulability condition for schedulability analysis of security-aware $T_i^{sens}$ and $T_i^{ctrl}$ tasks -- to simplify notation, we omit superscripts and denote the tasks as $T_i$ where possible. To evaluate the demand function on interval $[t_{k_1},t_{k_2}]$, we compute the number of regular and extended frames released at or after $t_{k_1}$, that have deadlines at or before $t_{k_2}$ as \begin{equation}\label{eq:etaNormal} \eta_i^{r\&e}(t_{k_1},t_{k_2}) = max\left\{ 0,\floor*{\frac{t_{k_2}-\phi_i-d_i}{p_i}} - max \left\{ 0,\ceil*{\frac{t_{k_1}-\phi_i}{p_i}} \right\} +1 \right\}.
\end{equation} Similarly, extended frames in this interval can be counted as \begin{equation}\label{eq:etaExtended} \begin{split} \eta_i^{ext}(t_{k_1},t_{k_2})= \sum_{m=0}^{f_i-1}\:\: &max \left\{ 0,\floor*{\frac{t_{k_2}-(s_i+m)p_i-\phi_i-d_i}{l_ip_i}}-\right.\\ &max \left. \left\{0,\ceil*{\frac{t_{k_1}-(s_i+m)p_i-\phi_i}{l_ip_i}} \right\} +1 \right\}. \end{split} \end{equation} Here, the appropriate values for $f_i$ should be used -- i.e.,~$f_i^{ctrl}=1$ for $T_i^{ctrl}$ and $f_i^{sens}=f_i$ for $T_i^{sens}$. The demand function for a single task can now be posed as the total processor demand of regular and extended frames~as \begin{equation}\label{eq:dfK1K2} df_i(t_{k_1},t_{k_2}) = c_i^{reg}\eta_i^{r\&e}(t_{k_1},t_{k_2})+\Delta c_i\eta_i^{ext}(t_{k_1},t_{k_2}), \end{equation} where $\Delta c_i = c_i^{ext}-c_i^{reg}$. We can thus formulate the necessary and sufficient schedulability condition as: $\forall t_{k_1}\in TS_{arr},\:\forall t_{k_2}\in TS_{dead}$ \begin{equation}\label{eq:sumDemandCondition} \sum\limits_{i=1}^{N} df_i(t_{k_1},t_{k_2}) \leq t_{k_2}-t_{k_1},\\~~ \hbox{if }~ t_{k_1}<t_{k_2}. \end{equation} \subsection{Schedulability of Security-Aware Messages} To analyze schedulability of security-aware network messages (i.e., with periodic cumulative authentication), we start from the following theorem that provides a necessary and sufficient schedulability condition for \emph{sporadic} real-time messages under non-preemptive~EDF. \begin{theorem}[\cite{nprEDF-CAN}] \label{thm:nonpreemptiveSporadicTheorem} Consider a set of real-time messages $M_i(c_i,p_i,d_i)$, $1\leq i \leq N$, where $p_i$ is the minimum message inter-arrival time.
The message set is schedulable under non-preemptive EDF over a network shared with non real-time messages with maximum transmission time $c_{max}^{NRT}$~if and only if $\sum_{i=1}^{N}\frac{c_i}{p_i}\leq 1$ and \begin{equation} \label{eq:NP_schcond} \sum_{i=1}^{N}max\left\{0,\floor*{\frac{t_k-d_i}{p_i}}+1\right\}c_i + c_m \leq t_k, \forall t_k \in TS, \end{equation} where $TS=\bigcup\limits_{i=1}^{N}\left\{d_i+jp_i|j=0,...,\floor*{\frac{t_{max}-d_i}{p_i}}\right\}$,\\ $t_{max}=\max\left\{ d_1,...,d_N,\left(c_m+\sum_{i=1}^{N}\left(1-\frac{d_i}{p_i} \right)c_i\right) / (1-U_\mathcal{M}) \right\}$, and $c_m=max\{c_{max}^{NRT}, \max_{i=1}^{N}c_i\}$. \end{theorem} \begin{figure}[!t] \centering \includegraphics[width=0.53\linewidth]{falseExample.pdf} \caption{Example message set $M_1(\phi_1=2,c_1=2,p_1=5,d_1=3), M_2(\phi_2=1,c_2=2.1,p_2=10,d_2=10)$ --- although the schedulability test for nonpreemptive messages with offsets from~\cite{zuberiShin} is satisfied, $M_1$ misses its deadline at $t=5$ due to an earlier release of message~$M_2$.} \label{fig:falseExample} \end{figure} To the best of our knowledge, there does not exist an efficient method to test schedulability for strictly periodic asynchronous messages under non-preemptive EDF. The conditions from~\cite{zuberiShin} extend \thmref{nonpreemptiveSporadicTheorem} for messages with offsets in order to support transaction scheduling. The resulting theorem from~\cite{zuberiShin} replaces every appearance of relative deadline $d_i$ in \thmref{nonpreemptiveSporadicTheorem} with absolute deadline $d_i+\phi_i$ to account for offsets. In our case, using this theorem would be pessimistic since the conditions derived for sporadic messages cannot be adjusted for multi-frame messages. Also, examples as in \figref{falseExample} show that the schedulability condition from~\cite{zuberiShin} does not always~hold. On the other hand, a utilization-based test for non-preemptive EDF is derived in~\cite{sanjoyNPRedf}.
As our goal is to determine a set of offsets and deadlines that yields a schedulable set of secure transactions, this test cannot be used as it condenses all task properties into a single measure. Still, by following the reasoning presented therein, we formulate the following sufficient schedulability condition. \begin{theorem} \label{thm:NPRfeasibilityCond} A message set $\{ M_1(c_1,p_1,\phi_1,d_1), M_2(c_2,p_2,\phi_2,d_2)$, ..., $M_N(c_N,p_N,\phi_N,d_N) \}$ is nonpreemptively schedulable by EDF if $\sum_{i}df_i(t_1, t_2)\leq t_2-t_1-c_{max},$ for all $t_1, t_2$ such that $t_1< t_2$ and $\sum_{i}df_i(t_1, t_2)>0$, where $c_{max}=\max_{i} c_i$ is the longest of the transmission times of all $N$ messages. \end{theorem} \begin{proof} Suppose that the theorem's demand-based condition is satisfied for all such $t_1, t_2$, and that there is a deadline miss at some instant $t_2^*=t_{dm}$. Let $t_1^* \leq t_{dm}$ be the instant closest to $t_{dm}$ such that from $t_1^*$ onwards the network is busy transmitting only those messages with deadlines $\leq t_{dm}$. Then, right before $t_1^*$, the network is either idle or a message with deadline $\geq t_{dm}$ is being transmitted. In the case when the network is idle right before $t_1^*$, the total network demand imposed by all messages eligible to be transmitted during $[t_1^*,t_2^*]$ is $\sum_{i}df_i(t_1^*, t_2^*)$, by the definition of the demand function, and since there is a deadline miss at $t_2^*$, the demand must be greater than the network time available, i.e., $\sum_{i}df_i(t_1^*, t_2^*) > t_2^*-t_1^*$, and hence also greater than $t_2^*-t_1^*-c_{max}$. This contradicts the theorem statement. In the case when the network is transmitting a message with deadline $\geq t_{dm}$, the worst-case network demand of all messages eligible to be transmitted during $[t_1^*,t_2^*]$ is $\sum_{i}df_i(t_1^*, t_2^*)+c_{max}$.
Since there is a deadline miss at $t_2^*$, the demand must be greater than the available network time, i.e., $\sum_{i}df_i(t_1^*, t_2^*) > t_2^*-t_1^*-c_{max}$, which contradicts the theorem, and thus concludes the~proof. \end{proof} The intuition behind this theorem can be supported by the claim that non-preemptive EDF schedules by time $t^*+c_{max}$ at least as much work imposed by a set of tasks as preemptive EDF schedules by $t^*$~\cite{sanjoyNPRedf}. In this case, the total network demand by a security-aware message can be expressed as in~\eqref{eq:dfK1K2}, with $f_i^{net}=1$ used for extended transmissions in~\eqref{eq:etaExtended}. In addition, the time testing sets remain the same as in~\eqref{eq:timeTestingSets}. As we demonstrate on examples in Section~\ref{sec:generalEvaluation}, this condition is less conservative in cases when message transmission times are significantly shorter than their respective periods. We then show in Section~\ref{sec:caseStudy} that this is commonly true in practical systems. \begin{remark}[Accounting for Jitter] To understand how realistic implementation phenomena such as jitter affect the presented analysis, we consider their effects on task and message scheduling. In the case of task-level jitter, existing approaches to jitter accounting can be applied~\cite{StankovicEDF}. In essence, if a task experiences jitter $j_i$, the inter-arrival spacing may be shorter than $p_i$. From the worst-case schedulability standpoint, this scenario pertains to the arrival pattern where all tasks arrive such that they must complete execution by the relative deadline $d_i-j_i$, rather than by $d_i$ time units. Shortening the permissible deadline by the worst-case jitter can be easily included in the demand-based condition~\eqref{eq:sumDemandCondition}. This does not affect the complexity of the MILP implementation of the parameter synthesis problem, as worst-case jitter figures as a set of known constant parameters. 
For message scheduling, in most cases we do not need to use this approach, as the $c_{max}$ term introduced in the non-preemptive schedulability conditions to account for the worst-case blocking any message may experience upon arrival, is rarely needed in its entirety; this holds since worst-case blocking will rarely occur. This conservativeness effectively captures jitter, as jitter levels are highly unlikely to exceed message transmission times in any practical network realization. \end{remark}
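To make the analysis of this section concrete, the frame counts \eqref{eq:etaNormal}--\eqref{eq:etaExtended}, the per-task demand \eqref{eq:dfK1K2}, and the demand tests \eqref{eq:sumDemandCondition} and Theorem~\ref{thm:NPRfeasibilityCond} can be sketched in Python as follows. The tuple encoding of a task/message is our illustrative choice, and the blocking term is only charged on windows with positive demand, mirroring the proof of Theorem~\ref{thm:NPRfeasibilityCond} (a window with no demand cannot contain a deadline miss); this sketch is not code from the actual synthesis framework.

```python
import math
from functools import reduce

# Illustrative encoding of a security-aware task/message:
# (c, dc, p, phi, d, s, l, f) = (regular WCET, c_ext - c_reg, period,
# offset, relative deadline, policy start, authentication spacing,
# number of policy frames).

def eta_reg(t1, t2, p, phi, d):
    """Count of frames released at/after t1 with deadlines at/before t2."""
    return max(0, math.floor((t2 - phi - d) / p)
                  - max(0, math.ceil((t1 - phi) / p)) + 1)

def eta_ext(t1, t2, p, phi, d, s, l, f):
    """Count of extended frames in the same window."""
    return sum(max(0, math.floor((t2 - (s + m) * p - phi - d) / (l * p))
                      - max(0, math.ceil((t1 - (s + m) * p - phi) / (l * p))) + 1)
               for m in range(f))

def demand(tk, t1, t2):
    """df_i(t1, t2): regular cost of every frame plus the extra cost of
    extended frames."""
    c, dc, p, phi, d, s, l, f = tk
    return (c * eta_reg(t1, t2, p, phi, d)
            + dc * eta_ext(t1, t2, p, phi, d, s, l, f))

def schedulable(tasks, blocking=0):
    """Demand test over discrete testing sets of arrivals and deadlines up
    to t_max; blocking=0 gives the preemptive EDF condition, while
    blocking=max_i c_i gives the sufficient non-preemptive condition.
    Blocking is only charged on windows with positive demand, since an
    empty window cannot contain a deadline miss."""
    hyper = reduce(lambda a, b: a * b // math.gcd(a, b), (t[2] for t in tasks))
    t_max = max(t[3] for t in tasks) + max(t[4] for t in tasks) + 2 * hyper
    arr = sorted({t[3] + k * t[2] for t in tasks
                  for k in range(t_max // t[2] + 1) if t[3] + k * t[2] <= t_max})
    dead = sorted({t[3] + t[4] + k * t[2] for t in tasks
                   for k in range(t_max // t[2] + 1)
                   if t[3] + t[4] + k * t[2] <= t_max})
    for t1 in arr:
        for t2 in dead:
            if t1 >= t2:
                continue
            dem = sum(demand(tk, t1, t2) for tk in tasks)
            if dem > 0 and dem > t2 - t1 - blocking:
                return False
    return True

# Toy set: every 4th frame of T1 is extended (one extra unit of work).
T1 = (1, 1, 5, 0, 5, 0, 4, 1)
T2 = (2, 0, 10, 1, 10, 0, 1, 1)
assert schedulable([T1, T2])              # preemptive condition
assert schedulable([T1, T2], blocking=2)  # non-preemptive, c_max = 2
```

The same routine serves both the ECU and the network side: the MILP of Section~\ref{sec:MILP} searches for offsets/deadlines under exactly these demand inequalities, evaluated over the discrete testing sets.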
\section{Introduction} Active Galactic Nuclei (AGN) are powered by mass accreting onto a supermassive black hole (SMBH). The well known \cite{ss73} disc model makes very simple predictions for this emission if it is emitted locally and thermalises to a blackbody. The disc temperature increases inwards (modulo a stress-free inner boundary condition at the innermost stable circular orbit, $R_{\rm ISCO}$), so the total spectrum is the sum over all radii of these different temperature components (multi-colour disc blackbody; e.g., \citealt{mitsuda1984}). However, the observed spectral energy distributions (SEDs) of AGN are much more complex than this predicts. There is a ubiquitous tail at X-ray energies, as well as an unexpected upturn below 1~keV, termed the 'soft X-ray excess'. The hard X-ray tail indicates that some part of the accretion energy is not dissipated in the optically thick disc (where it would thermalise) but is instead released in an optically thin region (e.g., \citealt{elvis1994}). The resulting Comptonised spectrum from 1--100~keV indicates that this region has electron temperature $kT_e\sim 40$--100~keV and optical depth $\tau\sim 1$--2 (\citealt{lubinski2016,fabian2015}). The origin of the 'soft X-ray excess' is not well understood. It can be fit by a second Comptonisation region with very different parameters from the coronal emission, one where the electrons are warm, $kT_e\sim 0.1$--1~keV and optically thick $\tau\sim 10$--25 (e.g., \citealt{magdziarz1998,czerny2003,marek2004b,porquet2004,petrucci2013,middei2018}). Alternatively, it could be produced by reprocessing/reflection of the coronal emission on the very inner disc, where extremely strong relativistic effects smear out the expected strong line emission from ionised material \citep{crummy2006}.
The fastest soft X-ray variability is correlated with, and lags behind, the hard X-ray variability, so some fraction of the soft X-ray excess must be produced from reprocessing/reflection of the coronal flux (e.g., \citealt{fabian2013,demarco2013}). However, recent results have shown that the majority of the soft excess does not arise from reflection \citep{509,5548,noda2013,matt2014,boissay2016,porquet2018}, favouring the warm Comptonisation model. The warm Comptonisation scenario also helps to explain another puzzling component of the broadband AGN SED, namely a ubiquitous downturn seen in the UV, at energies far below those expected for the peak disc temperature (e.g., \citealt{zheng1997,davis2007}). A warm Comptonisation spectrum can extend across the absorption gap, connecting the UV downturn and the soft X-ray upturn with a single component \citep{elvis1994,laor1997,richards2006}. This carries a dominant fraction of the luminosity in the SED of AGN at lower Eddington ratio, $L_{\rm bol}/L_{\rm Edd}$ \citep{jin2012a,jin2012b}, again arguing against a purely reprocessing/reflection origin for the soft X-ray excess, though some contribution could be present (e.g., \citealt{lawrence2012}). The SEDs of high $L_{\rm bol}/L_{\rm Edd}$ AGN are instead dominated by disc emission, which can extend into the soft X-ray bandpass for the lowest mass, Narrow Line Seyfert-1 (NLS1) galaxies, but these still have a small fraction of their bolometric power emitted in a soft X-ray excess component \citep{jin2012a, jin2012b,done2012,jin2013,matzeu2017}. Neither of the Comptonisation components is well understood. However the warm Comptonisation region is especially problematic as, unlike the hot corona, it does not have a clear counterpart in the much lower mass black hole binary systems (BHB).
These often show spectra at $L_{\rm bol}/L_{\rm Edd}\sim 0.1$--0.2 which are dominated by the thermal accretion disc emission, with only a small tail to higher energies from a hot Comptonising corona (e.g., \citealt{kubota2001,marek2004a,steiner2009}). One obvious break in scaling between BHB and AGN is that the SMBHs have discs which peak in the UV rather than the X-ray temperature range. The UV is a region in which atomic physics is extremely important whereas plasma physics dominates in BHB. Nonetheless, the best models of the accretion disc structure including UV opacities \citep{hubeny2001} find that the spectra are fairly well described by a sum of modified blackbody components (with atomic features superimposed), similar to BHB spectra \citep{davis2006}. The addition of UV opacity within the disc alone then may not be enough to explain the soft X-ray excess (though it does also depend on the heating profile within the disc): instead it may be connected to the ability of UV line opacity to launch winds from AGN discs (e.g., \citealt{proga2000,laor2011}) and/or the huge change in opacity connected to Hydrogen ionisation which may be able to change the entire disc structure away from steady state models \citep{hameury2009}. Constraining the shape of the warm Comptonisation component is not easy as it spans the 0.01--1~keV range where interstellar absorption from our own Galaxy obscures our view. Spectral fitting becomes especially degenerate when trying to simultaneously constrain this component along with the hotter coronal component and any residual emission from an outer standard disc (e.g., \citealt{jin2009}).
Instead, \cite{done2012} (hereafter D12) assumed that the emission is ultimately powered by energy release from gravity, with the same form as for the thin disc, but that the dissipation mechanism is only blackbody for radii $R>R_{\rm corona}$. Inwards of this, they assumed that the flow instead emits the accretion energy as a warm or hot Comptonisation component. These energy conserving models ({\sc optxagnf}: D12) give an additional physical constraint on the components, and more importantly, highlight the fundamental parameters of mass and mass accretion rate (for any assumed spin) in setting the overall SED (\citealt{jin2012a,jin2012b,ezhikode2017}). These models reveal a systematic change in the SED which can be modeled by a decrease in $R_{\rm corona}/R_{\rm g}$ (where $R_{\rm g}=GM/c^2$) correlated with an increase in the hot Comptonisation power law spectral index as $L_{\rm bol}/L_{\rm Edd}$ increases (\citealt{jin2012a,jin2012b,ezhikode2017}; see also \citealt{shemmer2006,shemmer2008} and \citealt{vasudevan2007,vasudevan2009} for the hard X-ray spectral index). In this paper, we develop a new model which addresses the underlying physics of these changes, where we assume that the flow is completely radially stratified, emitting as a standard disc blackbody from $R_{\rm out}$ to $R_{\rm warm}$, as warm Comptonisation from $R_{\rm warm}$ to $R_{\rm hot}$, and then making a transition to the hard X-ray emitting hot Comptonisation component from $R_{\rm hot}$ to $R_{\rm ISCO}$. The warm Comptonisation component is optically thick, so we associate this with material in the disc. Nonetheless, the energy does not thermalise to even a modified blackbody, perhaps indicating that significant dissipation takes place within the vertical structure of the disc, rather than being predominantly released in the midplane (e.g., \citealt{davis2005}). At a radius below $R_{\rm hot}$, the energy is emitted in the hot Comptonisation component.
This has much lower optical depth, so it is not the disc itself. It could either be a corona above the inner disc, or the disc could truncate, so that the hot material fills the inner region close to the black hole. We show that the observed steepening of the 2--10~keV spectral index with increasing $L_{\rm bol}/L_{\rm Edd}$ can be most easily explained with a true truncation. We describe the model structure in section 2, and apply it to observed broadband spectra of individual AGN in section 3. We use these data to set some of the model parameters, so that we can predict the entire AGN SED as a function of only mass and mass accretion rate (for a given black hole spin) in section 4. In section 5, we show that these models reproduce the observed tight relationship between the UV and X-ray emission in Quasars \citep{lusso2017} as well as predict a decrease in the fraction of reprocessed optical variability with increasing $L_{\rm bol}/L_{\rm Edd}$, as observed. Thus, this AGN SED model succeeds in describing multiple disparate observational trends, which gives confidence that the assumed geometry captures most major aspects of the source behaviour. \section{Overall disc model} \label{sec:overall} We follow D12 and assume a radial emissivity like Novikov-Thorne (hereafter NT), defining the flux per unit area at a radius $R$ on the disc as $F_{\rm NT}(R)=\sigma T_{\rm NT}^4(R)$, where $T_{\rm NT}(R)$ is the effective temperature at this radius. Converting to dimensionless units, with $r=R/R_g$, $\dot{m}=\dot{M}/\dot{M}_{\rm Edd}$ and $L_{\rm Edd}=\eta\dot{M}_{\rm Edd}c^2$ gives $F_{\rm NT}\propto (\dot{m}/M)r^{-3}$ for $r\gg 6$. Here $\eta$ is a spin-dependent efficiency factor, assumed fixed at 0.057 for a non-spinning black hole throughout this paper.
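For concreteness, these scalings can be evaluated numerically with a simple Newtonian stand-in for the NT emissivity, where a stress-free inner boundary factor $(1-\sqrt{r_{\rm ISCO}/r})$ replaces the full relativistic corrections; the resulting temperatures are therefore only indicative.

```python
import math

# Physical constants (SI) and the solar mass.
G, c, sigma_SB = 6.674e-11, 2.998e8, 5.670e-8
m_p, sigma_T, M_sun = 1.673e-27, 6.652e-29, 1.989e30

def T_NT(r, M, mdot, eta=0.057, r_isco=6.0):
    """Effective temperature (K) at dimensionless radius r = R/R_g for
    black hole mass M (kg) and Eddington-scaled accretion rate mdot,
    using sigma T^4 = 3 G M Mdot / (8 pi R^3) * (1 - sqrt(r_isco/r))."""
    Rg = G * M / c**2
    L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
    Mdot = mdot * L_edd / (eta * c**2)          # absolute accretion rate
    flux = (3 * G * M * Mdot / (8 * math.pi * (r * Rg)**3)
            * (1 - math.sqrt(r_isco / r)))
    return (flux / sigma_SB) ** 0.25

# For M = 1e8 M_sun and mdot = 0.05 (the parameters of the figure), the
# profile peaks at a few 1e4 K, i.e. in the far-UV rather than the X-rays.
Tpeak = max(T_NT(6.5 + 0.5 * i, 1e8 * M_sun, 0.05) for i in range(200))
```

This makes explicit why the disc of a $10^8~M_\odot$ black hole peaks in the UV temperature range discussed above, in contrast to stellar-mass BHB.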
\subsection{Standard Disc and warm Comptonisation region} In the standard disc region we assume that the NT emission thermalises locally either to give a blackbody $B_\nu(T_{\rm NT})$ at the local blackbody temperature, defined from ${F}_{\rm NT}(R)=\sigma T_{\rm NT}^4(R)$, or that electron scattering within the disc distorts this into a modified disc blackbody spectrum. This can be approximated as a colour temperature corrected blackbody, $B_\nu(f_{\rm col}T_{\rm NT})/f_{\rm col}^4$, where $f_{\rm col}$ depends on the importance of electron scattering compared to true absorption processes, which itself depends on disc temperature, especially close to Hydrogen ionisation at $\sim 10^4$~K. There are few free electrons below this temperature, so electron scattering is not important, and $f_{\rm col}\sim 1$, whereas above this temperature there are multiple free electrons so $f_{\rm col}> 1$. This effectively shifts the peak of the blackbody over by a factor $f_{\rm col}$ and reduces its normalisation by a factor $f_{\rm col}^4$, so the spectrum of each annulus is shifted to higher energies and decreased in normalisation, with this effect onsetting at around the hydrogen ionisation energy. Thus the standard disc with this colour temperature correction always has less UV emission (shortwards of $\sim 10^{15}$~Hz $\approx$ 2000\AA) than predicted from simple models with $f_{\rm col}=1$ (D12). Figure~\ref{fig:comparison} shows a comparison of the standard disc (geometry I) with $f_{\rm col}=1$ (red solid) to that where $f_{\rm col}(T_{\rm NT})$ is derived from an analytic treatment of the vertical structure of the disc (dashed red line, see also \citealt{davis2006}, D12). This clearly shows how the outer disc emission is identical, while the inner disc emission is shifted to higher temperatures/lower luminosities.
\begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth]{fig1} \end{center} \caption{Comparison of spectra for a black hole of $M= 10^8~M_\odot$ with $\dot{m}=0.05$. Geometry I shows the Novikov-Thorne disc extending down to $R_{\rm ISCO}$ without (red solid) and with (red dashed) color-temperature correction. Geometry II shows an outer Novikov-Thorne disc plus a warm Comptonisation region from $r=40$ down to $r_{\rm ISCO}$. The green solid line shows the {\sc agnsed} model used in this paper (see section 2.3) where seed photons are from the underlying cool material in the midplane, compared to the {\sc optxagnf} assumption of seed photons from the inner edge of the standard disc (green dashed). Geometry III is complete coverage of the warm Compton region over the entire NT disc (blue), which has lower normalisation due to the Compton scattering. We assume a photon index and electron temperature of the warm Comptonising corona of 2.5 and 0.2~keV, respectively.} \label{fig:comparison} \end{figure} Concerning the warm Comptonising region, the UV data do indeed show a downturn, but this is stronger than predicted by the effect of a changing $f_{\rm col}$ in general. \cite{davis2007} show that the observed AGN spectra have redder UV slopes than predicted from disc models even including electron scattering. Instead, what is required to fit this UV downturn is that the emission is much more strongly distorted from a blackbody than predicted in the standard disc. While this could be modeled by a larger colour temperature correction, a shifted blackbody becomes a progressively poorer approximation for the spectrum as $f_{\rm col}$ increases. Hence we replace $f_{\rm col}$ with a fully Comptonised shape and do not include this factor in our new code, as the disc vertical structure is clearly very different to that of \cite{ss73}.
Comptonisation also gives the possibility to connect the observed downturn in the UV to an upturn seen in the soft X-ray spectra, forming a single component spanning the unobservable EUV range (D12, \citealt{509,5548}). This warm Comptonising emission could be produced if some fraction of the dissipation takes place higher up in the disc, rather than being concentrated towards the equatorial plane \citep{czerny2003,rozanska2015}. Residual emission in the denser disc material on the midplane can then act as a source of seed photons, together with the reprocessed emission from illumination from the upper layers of the disc \citep{petrucci2017}. Figure 14 in \cite{davis2005} shows the predicted (colour temperature corrected) blackbody spectrum of a disc annulus where the vertical dissipation goes with density as in standard disc models, compared to one where the dissipation is arbitrarily changed so that 40\% of the power is released in the photosphere (Fig. 16 of \citealt{davis2005}). The spectrum is strongly Comptonised into a steep tail to higher energies, but clearly contains the imprint of the seed photons as a downturn at low energies. This seed photon temperature is determined both by the intrinsic dissipation in the lower layers of the disc (the remaining 60\% of the accretion power in this specific example), and the thermalised flux resulting from irradiation by the Comptonising upper layers. Both these physical processes give seed photons which are close to the surface temperature predicted by the standard disc dissipation, so the seed photon temperature imprinted onto the steep Comptonised emission is itself close to this temperature (Fig. 14 of \citealt{davis2005}). Thus the expectation is that the seed photon energy should change with radius in the same way as the expected standard disc temperature. 
D12 discuss this in their Appendix, but make the simplifying assumption in {\sc optxagnf} that this can be approximated as a single Comptonisation spectrum with seed photon temperature set by the maximum temperature of the standard disc emission, i.e. $kT_{\rm seed}=kT_{\rm NT}(R_{\rm corona})$. This is adequate if the low energy part of this component is mostly unobservable due to interstellar absorption. However, there are now data where this region of the spectrum can be seen, motivating a more careful approach. Also, the {\sc optxagnf} approximation always requires that there is an outer standard disc in order to provide the code with a temperature for the seed photons. This need not be the case in the physical situation envisaged. The warm Comptonisation region could instead cover the entire outer disc, as its seed photons are from deeper layers of the underlying disc rather than from an external source. \cite{petrucci2017} tested a model where the entire optical/UV/soft X-ray flux is from a warm Comptonisation region with a slab geometry over the disc. They show that reprocessing in this geometry hardwires the Compton amplification factor $A$ to \[ L_{\rm tot}=AL_{\rm seed} =L_{\rm seed} +L_{\rm diss,warm} \] where $L_{\rm tot}$, $L_{\rm seed}$ and $L_{\rm diss, warm}$ are the total luminosity, the seed photon luminosity underneath the Comptonising skin and the power dissipated in the warm corona, respectively. Their eq.~(19), with the slab corona entirely covering the disc and large optical depth with complete thermalisation, gives \[ \frac{L_{\rm diss,warm}}{L_{\rm seed}}=A-1=2\left(1-\frac{L_{\rm diss, disc}}{L_{\rm seed,tot}}\right)-1 \] where $L_{\rm diss,disc}$ is the intrinsic dissipation which thermalises in the disc. If there is no intrinsic dissipation (a `passive disc' on the midplane: \citealt{petrucci2017,petrucci2013}) then $L_{\rm diss, disc}=0$, and all the seed photons are set by thermalisation of the warm Compton emission.
Thus \[ \frac{L_{\rm diss,warm}}{L_{\rm seed}}=1 \] hence $L_{\rm seed}=L_{\rm tot}/2$. This is emitted from the same surface area as the standard disc, so this hardwires the seed photon temperature $T_{\rm seed}\simeq T_{\rm NT}$. \cite{petrucci2017} showed that $A=2$ is equivalent to a photon index of the warm Compton component, $\Gamma_{\rm warm}=2.5$, which generally gave a good fit to the observed soft X-ray excess component when combined with a Comptonising electron temperature $kT_{e,{\rm warm}}\sim 0.2$~keV to give the observed rollover in soft X-rays. Based on this passive disc picture, our new model calculates the Compton emission at each annulus of radius $R$ and width $\Delta R$ in the warm Compton region, using the {\sc nthcomp} model \citep{zdziarski1996,zycki1999} in {\sc xspec}. We set the seed photon temperature to the local disc temperature $T_{\rm NT}(R)$, set the local luminosity to $\sigma T^4 _{\rm NT}(R)\cdot 2\pi R\Delta R\cdot 2$, and sum over all annuli which produce the warm Comptonisation. Both the photon index and electron temperature are free parameters, assumed to be the same for all the disc annuli which produce the warm Comptonisation. The green solid line in Fig.\ref{fig:comparison} shows our new version of the warm Comptonisation, summed over all radii from $r_{\rm warm}=40 <r_{\rm out}$ to $r_{\rm ISCO}=6$ as in geometry II of Fig.\ref{fig:comparison}. We compare this to the {\sc optxagnf} model for the same parameters (green dashed line), showing the difference in behaviour around the seed photon energy (see also Fig. A1b in D12). The blue line in Fig.\ref{fig:comparison} corresponds instead to the geometry of \cite{petrucci2017} sketched as geometry III in Fig.\ref{fig:comparison}, i.e. where there is no outer disc so $r_{\rm warm}=r_{\rm out}$. The most noticeable effect is that the normalisation of the SED in the optical/UV is reduced.
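The summation over annuli described above can be sketched in a few lines. This is a toy check, not the {\sc agnsed} code itself: the NT emissivity is replaced by its Newtonian approximation, the {\sc nthcomp} spectral shape is omitted, and we simply verify that the per-annulus luminosities $\sigma T^4_{\rm NT}(R)\cdot 2\pi R\Delta R\cdot 2$ converge to the corresponding integral.

```python
import numpy as np

# Radii in units of R_g; the constant 3 G M Mdot / (8 pi) is set to 1, and
# sigma*T_NT^4 is replaced by the Newtonian stand-in R^-3 (1 - sqrt(r_isco/R))
# (both simplifying assumptions of this sketch, not of the agnsed model).
r_isco, r_warm_r = 6.0, 40.0

def sigma_T4(r):
    return r**-3 * (1.0 - np.sqrt(r_isco / r))

# Annulus-by-annulus sum of sigma*T^4(R) * 2*pi*R*dR * 2; the trailing factor
# 2 is the passive-disc amplification L_tot = 2 L_seed discussed in the text.
edges = np.geomspace(r_isco, r_warm_r, 2001)
r_mid = 0.5 * (edges[1:] + edges[:-1])
dr = np.diff(edges)
L_warm = np.sum(sigma_T4(r_mid) * 2.0 * np.pi * r_mid * dr * 2.0)

# Closed form of 2 * int sigma*T^4(R) * 2*pi*R dR for this toy profile
L_exact = 4.0 * np.pi * (1.0 / (3.0 * r_isco) - 1.0 / r_warm_r
                         + (2.0 / 3.0) * np.sqrt(r_isco) / r_warm_r**1.5)
print(L_warm, L_exact)
```

In the full model each annulus instead seeds an {\sc nthcomp} component with $kT_{\rm seed}=kT_{\rm NT}(R)$ and this luminosity, with a single $(\Gamma_{\rm warm}, kT_{e,{\rm warm}})$ pair shared by all annuli.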
This reduced normalisation is important, as it changes the otherwise quite robust relation between the luminosity in some band on the low energy disc tail and the mass accretion rate. The NT emissivity sets the seed photon temperature and emissivity, but the Comptonisation acts like a colour temperature correction and shifts the entire spectrum to higher energies. We note here that unlike \cite{petrucci2017}, our new model ties the seed photons for the warm Comptonisation to the parameters of the underlying disc, rather than allowing the seed photon temperature to be a free parameter. Both the warm Comptonisation and standard disc are optically thick, so we assume that the emission is proportional to $\cos i$, where $i$ is the inclination of the disc. \subsection{Hot Comptonisation region} \label{sec:pl} There is also an additional X-ray component which dominates over the soft X-ray excess beyond 1--2~keV, extending up to $kT_e\sim40$--$100$~keV, with $\tau\sim 1$--2 \citep{fabian2015,lubinski2016,petrucci2017}. The low optical depth clearly distinguishes this component from the disc material and the warm Compton region, so it needs to arise in a different structure. This could either be a corona above a disc, with some fraction of the accretion energy dissipated in this optically thin material, or the optically thick disc could truncate, leaving a true hole in the inner disc. For $\dot{m}\lesssim 0.2$, the hard X-ray photon spectral index is usually $\lesssim 1.9$, i.e. it is flatter than expected even in the limit where all the accretion energy is emitted in the corona. Reprocessing and thermalisation of the (assumed isotropic) illuminating flux, even by a completely cold, passive disc, sets a lower limit of $\Gamma_{\rm hot}\sim 1.9$ \citep{haardt1991,stern1995,malzac2005}.
Hence we assume that the disc truly truncates at $r_{\rm hot}$ for low $\dot{m}$, as is supported by the lack of strong reflection and lack of strong relativistic smearing in these AGN \citep{matt2014,yaqoob2016,porquet2018}. In a truncated disc geometry, the seed photons seen by the hot flow are predominantly from the inner edge of the warm Comptonisation region, so these have typical seed photon energy of $T_{\rm NT}(R_{\rm hot})\cdot \exp(y_{\rm warm})$, where $y_{\rm warm}=4\tau^2 kT_{e,\rm warm}/m_ec^2$ is the Compton $y$-parameter for the warm Comptonising corona. We use the {\sc xspec} model {\sc nthcomp} to describe this, with total power \begin{eqnarray} \label{eq:Lhot} L_{\rm hot}&=&L_{\rm diss,hot}+L_{\rm seed} \end{eqnarray} Here, the inner flow luminosity, $L_{\rm hot}$, is the sum of the dissipated energy from the flow, $L_{\rm diss,hot}$, and the seed photon luminosity which is intercepted by the flow, $L_{\rm seed}$. This gives $L_{\rm diss,hot}$ as \begin{eqnarray} \label{eq:Lhotdiss} L_{\rm diss, hot}&=&2\int_{R_{\rm ISCO}}^{R_{\rm hot}} F_{\rm NT}(R)\cdot 2\pi R dR \nonumber \\ &=&2\int_{R_{\rm ISCO}}^{R_{\rm hot}} \sigma T_{\rm NT} (R)^4\cdot 2\pi R dR \end{eqnarray} where $R_{\rm hot}$ is the truncation radius. This is shown geometrically in Fig.\ref{fig:geometry}a. Since the X-ray emission is not very optically thick we assume it is isotropic, unlike the disc/warm Comptonisation region where we assumed a disc geometry. $L_{\rm seed}$ is the intercepted soft luminosity from both the warm Comptonising region and the outer disc. This can be calculated assuming a truncated disc/spherical hot flow geometry (Fig.\ref{fig:geometry}a, i.e.
a flow scale height $H\sim R_{\rm hot}$) as \begin{eqnarray} \label{eq:Lseed} L_{\rm seed}&=&2\int_{R_{\rm hot}} ^{R_{\rm out}} (F_{\rm NT}(R)+F_{\rm rep}(R))\frac{\Theta (R)}{\pi} 2\pi R dR\\ \Theta(R)&=&\theta_0-\frac{1}{2}\sin 2\theta_0 \label{eq:theta} \end{eqnarray} Here, $\Theta(R)/\pi$ is the covering fraction of the hot flow as seen from the disc at radius $R>R_{\rm hot}$, with $\sin \theta_0=H/R$, and $F_{\rm NT}(R)+F_{\rm rep}(R)$ is the flux from the warm Comptonised and/or outer disc including reprocessing (discussed in the following section). We caution that there can be many other factors which influence $L_{\rm seed}$, e.g. overlap of the disc and hot flow \citep{zdziarski1999} and/or any radial/vertical gradient in the structure of the hot flow. Nonetheless, we start from the simplest possible assumption, which is a spherical, homogeneous source with $H=R_{\rm hot}$, but we leave $H$ as a parameter in the following equations so that it can be used as a tuning parameter for alternative geometries by changing the seed photons intercepted by the source. Smaller $H$ gives a smaller solid angle, so fewer seed photons and harder spectral indices, but has no effect on $R_{\rm hot}$ as this is set by the energetics. \begin{figure} \begin{center} \includegraphics[width=0.95\columnwidth]{fig2} \end{center} \caption{Geometry of the model. (a) The geometry for the hot inner flow (blue), warm Compton emission (green) and outer standard disc (magenta). (b) The lamppost geometry used to simplify the calculation of the reprocessed emission.} \label{fig:geometry} \end{figure} \subsection{Modelling reprocessing} \label{sec:reprocess} The assumed geometry shown in Fig.~\ref{fig:geometry}a has some fraction of the hot Comptonisation illuminating the warm Comptonisation and cool outer disc regions. We include this self consistently, so that the irradiating flux increases the local flux above that given by the intrinsic $\dot{M}$.
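The seed photon geometry of eqs.~(\ref{eq:Lseed})--(\ref{eq:theta}) can be illustrated with a short numerical sketch. Here $H=r_{\rm hot}=20$ is purely illustrative, $F_{\rm NT}$ is replaced by its Newtonian approximation and the $F_{\rm rep}$ term is omitted, all assumptions of this sketch rather than of the model:

```python
import numpy as np

# Illustrative values (not fits from the paper); radii in units of R_g.
r_isco, r_out = 6.0, 1.0e3
H = r_hot = 20.0

def covering_fraction(r, h=H):
    """Theta(R)/pi with sin(theta_0) = H/R, the hot-flow covering fraction."""
    theta0 = np.arcsin(np.clip(h / r, 0.0, 1.0))
    return (theta0 - 0.5 * np.sin(2.0 * theta0)) / np.pi

# Limits: half the sky is covered where the disc touches the sphere (R = H),
# and the fraction falls off roughly as (2/3)(H/R)^3 / pi far away.
cf_edge = covering_fraction(H)
cf_far = covering_fraction(50.0 * H)

# L_seed = 2 int F(R) * (Theta/pi) * 2 pi R dR, with the Newtonian stand-in
# flux F ~ R^-3 (1 - sqrt(r_isco/R)) and no reprocessing term.
edges = np.geomspace(r_hot, r_out, 4001)
r_mid = 0.5 * (edges[1:] + edges[:-1])
F = r_mid**-3 * (1.0 - np.sqrt(r_isco / r_mid))
L_seed = np.sum(2.0 * F * covering_fraction(r_mid)
                * 2.0 * np.pi * r_mid * np.diff(edges))
print(cf_edge, cf_far, L_seed)
```

Because the covering fraction drops so steeply with radius, $L_{\rm seed}$ is dominated by the annuli just outside $r_{\rm hot}$, which is why a smaller $H$ hardens the spectrum without affecting $r_{\rm hot}$.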
Though our geometry assumes that the hot corona is an extended source, with $H\sim R_{\rm hot}$ as above (see Fig.~\ref{fig:geometry}a), \cite{gardner2017} show that illumination from an extended source can be well approximated by a point source at height $H$ on the spin axis (lamppost: Fig.~\ref{fig:geometry}b). We therefore use the lamppost geometry for the hot inner flow to calculate the reprocessed flux, as it is simpler than integrating over an extended source. The reprocessed flux for a flat disc at a radius $R$ is then written as \begin{eqnarray} \label{eq:frep} F_{\rm rep}(R)&=&\frac{\frac{1}{2}L_{\rm hot}}{4\pi(R^2+H^2)} \frac{H}{\sqrt{R^2+H^2}}(1-a)\nonumber\\ &=&\frac{3GM\dot{M}}{8\pi R^3} \frac{2L_{\rm hot}}{\dot{M}c^2}\frac{H}{6R_{\rm g}}(1-a)\left[1+\left(\frac{H}{R}\right)^2\right]^{-3/2} \end{eqnarray} where $a$ is the reflection albedo. Hence the local flux at radius $R(>R_{\rm hot})$ is $\sigma T_{\rm eff}(R) ^4=F_{\rm NT}(R)+F_{\rm rep}(R)$. Both $F_{\rm NT}(R)$ and $F_{\rm rep}(R)$ depend on radius as $R^{-3}$ for $R\gg R_{\rm g}$, so the effect of the reprocessing is essentially to increase the flux across both the standard and warm Comptonised disc by the same factor, of order $1+(2L_{\rm hot}/\dot{M}c^2)(1-a)H/(6R_{\rm g})$. Hence a larger X-ray source increases the fraction of X-ray power which illuminates the disc, as well as increasing the fraction of bolometric power which is dissipated in the X-ray region via the change in $R_{\rm hot}=H$. Figure~\ref{fig:reprocess} shows a comparison of example spectra with (solid) and without (dashed) reprocessing for a black hole of $10^8M_\odot$ with $\dot{m}=0.05$ (blue) and $0.5$ (red). We set $L_{\rm diss, hot}=0.02~L_{\rm Edd}$ and $kT_{e,\rm hot}=100$~keV for both, which implies $r_{\rm hot}=23$ and $9$ for $\dot{m}=0.05$ and $0.5$, respectively. $\Gamma_{\rm hot}$ is set to be 1.8 and 2.2 for $\dot{m}=0.05$ and $0.5$, respectively.
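The two lines of eq.~(\ref{eq:frep}) can be checked against each other numerically. The sketch below works in units with $G=M=\dot{M}=c=1$ (so $R_{\rm g}=1$) and uses purely illustrative values for $L_{\rm hot}$, $H$ and the albedo:

```python
import numpy as np

# Illustrative parameters in units G = M = Mdot = c = 1, so R_g = 1.
Rg = 1.0
L_hot, H, albedo = 0.03, 10.0, 0.3

def frep_lamppost(R):
    # First line of the F_rep equation: half of L_hot illuminating a flat
    # disc from a point source at height H, reduced by the albedo.
    return (0.5 * L_hot / (4.0 * np.pi * (R**2 + H**2))
            * H / np.sqrt(R**2 + H**2) * (1.0 - albedo))

def frep_scaled(R):
    # Second line, rewritten relative to the disc flux scale 3GM*Mdot/(8 pi R^3).
    return (3.0 / (8.0 * np.pi * R**3) * 2.0 * L_hot * H / (6.0 * Rg)
            * (1.0 - albedo) * (1.0 + (H / R)**2)**-1.5)

R = np.geomspace(H, 1.0e4, 64)
max_rel_diff = np.max(np.abs(frep_lamppost(R) / frep_scaled(R) - 1.0))
print(max_rel_diff)   # the two forms are algebraically identical
```

At $R\gg H$ both forms fall as $R^{-3}$, the same scaling as $F_{\rm NT}$, which is what makes the reprocessing act as a roughly constant multiplicative boost across the disc.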
We also include a warm Comptonisation region with $kT_{e,{\rm warm}}=0.2$~keV and $\Gamma_{\rm warm}=2.5$. Figure~\ref{fig:reprocess}a shows models where the warm Comptonisation region extends from $r_{\rm hot}$ to $r_{\rm warm}=2r_{\rm hot}$, so that there is a standard outer disc region from $r_{\rm warm}$ to $r_{\rm out}$, while Fig.~\ref{fig:reprocess}b shows the alternative geometry where the warm Comptonisation extends over the entire outer disc, i.e. $r_{\rm warm}=r_{\rm out}$. The lower panels of Fig.~\ref{fig:reprocess}a and b highlight the effect of reprocessing by showing the ratio of the spectra including reprocessing to the intrinsic emission. Reprocessing makes a larger fraction of the optical emission for the lower $\dot{m}$, as here the ratio of the X-ray flux to optical disc emission is much larger, and the larger size scale of the X-ray source means that a larger fraction of the X-ray emission is intercepted by the disc. The difference is most marked around the maximum in the SED as the flux increase from reprocessing is enhanced by the associated temperature increase in disc or seed photon energy. We call this new model {\sc agnsed}, and define all its parameters in the appendix. The model is publicly available for use in the {\sc xspec} spectral fitting package. We also release a simplified model {\sc qsosed} where many of the parameters are fixed. This is more suitable for fitting fainter objects such as distant quasars, where the signal to noise is limited. \begin{figure} \begin{center} \includegraphics[angle=0,width=0.95\columnwidth]{fig3a.eps}\\ \includegraphics[angle=0,width=0.95\columnwidth]{fig3b.eps} \end{center} \caption{Comparison of spectra with (solid) and without (dashed) reprocessing for a black hole of $M=10^8~M_\odot$ with $\dot{m}=0.05$ (blue) and 0.5 (red) at a distance of 100~Mpc.
The values of $kT_{e, {\rm warm}}$, $\Gamma_{\rm warm}$, $kT_{e, {\rm hot}}$ and $L_{\rm diss, hot}$ are assumed to be 0.2~keV, 2.5, 100~keV and $0.02L_{\rm Edd}$, respectively. The values of $\Gamma_{\rm hot}$ are assumed to be 1.8 and 2.2 for $\dot{m}=0.05$ and 0.5, respectively. Panels (a) and (b) correspond to models with and without an outer standard disc, respectively. In panel (a), $r_{\rm warm}$ is set to $2r_{\rm hot}$. Ratios of reprocessed to intrinsic emission are also shown at each energy. } \label{fig:reprocess} \end{figure} \section{Application to observed spectra} \label{sec:application} We apply {\sc agnsed} to a small sample of AGN, spanning a wide range of $\dot{m}$, chosen to have good multi-wavelength data. We select NGC~5548 ($\dot{m}\sim0.03$; \citealt{5548}) and Mrk~509 ($\dot{m}\sim0.1$; \citealt{509}), for which large multiwavelength observation campaigns have been performed. Based on the long-term observations with the Reflection Grating Spectrometers (RGS) on board XMM-Newton, their intrinsic absorption was extremely well determined \citep{5548,detmers2011,509} and removed from the SEDs. In this paper, we fit the best estimate of the continuum spectra from these AGN, deconvolved from the instrument response, and corrected for reddening and absorption. We read the resulting flux files into {\sc xspec} using the {\sc flx2xsp} command. These deconvolved data were kindly provided by M. Mehdipour. In order to apply the model to higher $\dot{m}$ AGN, we select PG~$1115+407$ from the 51 AGN sample analyzed by \cite{jin2012a}. This object has little intrinsic absorption with $\dot{m}\sim 0.4$ \citep{jin2012a}, and emission from the host galaxy is negligible in the band of the Optical Monitor (OM) onboard XMM-Newton \citep{ezhikode2017}. The system parameters for each AGN are given in table \ref{tab:system}. We fit all three SEDs with {\sc agnsed} with $i=45^\circ$ and limit the mass to the uncertainty range given in table \ref{tab:system}.
The outer disc radius, $r_{\rm out}$, is initially set equal to the self-gravity radius $r_{\rm sg}$ \citep{laor1989}. \begin{table} \centering \caption{System parameters used to calculate each spectrum. The comoving radial distance $D$ and luminosity distance $D_{\rm L}$ are calculated based on $H_0=69.6~{\rm km~s^{-1}Mpc^{-1}}$ and $\Omega_{\rm M}=0.286$ for a flat universe \citep{wright2006}. } \label{tab:system} \begin{tabular}{llccc} \hline & &NGC 5548&Mrk 509&PG~$1115+407$\\ \hline \hline $z$ &&0.017175&0.034397&0.154338\\ $D$&Mpc&73.7&147.1&642.1\\ $D_{\rm L}$&Mpc&75.0&152.1&741.2 \\ \hline $M$&$10^7M_\odot$&2--6& 10--30 &4.6--14\\ \hline \multicolumn{2}{l}{Eddington ratio}&$\sim0.03$&$\sim0.1$&$\sim0.4$\\ \hline \multicolumn{2}{l}{observation} &2013&2009&2002\\ \multicolumn{2}{l}{reference} &(1), (2)&(3), (4)&(5), (6)\\ \hline \multicolumn{5}{l}{$^{(1)}$\citealt{kaastra2014} $^{(2)}$\citealt{5548} $^{(3)}$\citealt{kaastra2011-1} }\\ \multicolumn{5}{l}{ $^{(4)}$\citealt{509} $^{(5)}$\citealt{jin2012a} $^{(6)}$\citealt{jin2012b}} \end{tabular} \end{table} \subsection{NGC~5548: $\dot{m}\sim0.03$} \label{sec:ngc5548} The Seyfert-1 galaxy NGC~5548 is one of the most widely studied nearby AGN, with a well constrained mass of (2--6)$\times 10^7M_\odot$. There was a multi-wavelength campaign on this object in 2012--2014 (e.g., \citealt{kaastra2014}), and the broadband spectra were analyzed by \cite{5548}. We use the data from summer 2013 shown in Fig.10 of \cite{5548}, which includes data points from NuSTAR, INTEGRAL, RGS and the European Photon Imaging Camera (EPIC-pn) on XMM-Newton, the Cosmic Origins Spectrograph (COS) on the Hubble Space Telescope (HST), the UltraViolet and Optical Telescope (UVOT) on Swift, and from two ground-based optical observatories: the Wise Observatory (WO) and the Observatorio Cerro Armazones (OCA).
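The distances in table~\ref{tab:system} follow directly from the quoted cosmology. As a check, the sketch below integrates the flat-$\Lambda$CDM comoving distance with a plain trapezoid rule (no cosmology library assumed):

```python
# Flat LCDM with H0 = 69.6 km/s/Mpc and Omega_M = 0.286, as in the table.
C_KM_S, H0, OM = 299792.458, 69.6, 0.286
D_HUBBLE = C_KM_S / H0   # Hubble distance in Mpc

def distances(z, n=20000):
    """Comoving distance D = D_H * int_0^z dz'/E(z') and D_L = (1+z)*D."""
    dz = z / n
    inv_e = [(OM * (1.0 + i * dz)**3 + (1.0 - OM))**-0.5 for i in range(n + 1)]
    integral = dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))  # trapezoid
    d_c = D_HUBBLE * integral
    return d_c, (1.0 + z) * d_c

for name, z in [("NGC 5548", 0.017175), ("Mrk 509", 0.034397),
                ("PG 1115+407", 0.154338)]:
    d_c, d_l = distances(z)
    print(f"{name}: D = {d_c:.1f} Mpc, D_L = {d_l:.1f} Mpc")
```

This reproduces the tabulated $D$ and $D_{\rm L}$ to the quoted precision.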
During this campaign, there are multiple absorption systems seen in the X-ray \citep{5548,kaastra2014,cappi2016}, which have been removed from our data using the best fit modelling of the RGS spectra \citep{5548}. Nonetheless, there is some uncertainty associated with this, which affects the determination of the intrinsic soft X-ray spectrum. There is also strong host galaxy contamination in the optical, so this was removed (\citealt[Fig. 10]{5548}) to isolate the AGN emission. The resulting optical/UV continuum is rather blue, and cannot easily be fit with any disc blackbody based model, either a standard disc or warm Comptonised one. \cite{5548} fit this by a single, warm Comptonisation region, using blackbody (not disc blackbody) seed photons in order to get such a steeply rising optical/UV continuum. It seems more likely that this is a consequence of a slight oversubtraction of the host galaxy, so we ignore the V and I band continuum points in the fit. The X-ray emission is extremely bright compared to the optical/UV, and very hard. We then fit the data with {\sc agnsed}, including three emission regions. There is a clear iron line in the X-ray data, but the accompanying Compton hump is rather weak \citep{ursini2015,cappi2016} so this line probably originates in optically thin material in the BLR \citep{yaqoob2001,brenneman2012,ursini2015}. We model this simply by including a gaussian in the fit. The model overpredicts the optical data for $r_{\rm out}=r_{\rm sg}=880$ for $\dot{m}\sim 0.03$. This could again indicate a slight oversubtraction of the host galaxy from our data, but we are able to fit by allowing the outer disc radius to be a free parameter, giving $r_{\rm out}=280$. The overall continuum shape and luminosity is then fairly well reproduced by {\sc agnsed} with $M=5.5\times 10^7~M_\odot$ and $\dot{m}\simeq0.03$. 
The best fit parameters are shown in table~\ref{tab:fit}, and the best fit model is overlaid on the deconvolved data points in Fig.~\ref{fig:fit}a. The estimated $\Gamma_{\rm warm}$ of 2.28 is harder than the passive disc prediction of $2.5$ and may indicate a patchy corona as suggested by \cite{petrucci2017}. We discuss this in more detail in Section 4.3. We also try to reproduce the data without any standard outer disc component, as in \cite{5548} and \cite{petrucci2017}. However, in our fit the seed photon energy is not a free parameter, but is set at the underlying disc temperature from reprocessing. The UV data are clearly in tension with this, as they show a stronger downturn than predicted by the warm Comptonisation models with seed photon energy set by the underlying disc area from reprocessing. A warm Comptonisation region covering the outer disc also changes some other aspects of the fit. The lower normalisation in the optical/UV (see Fig.~\ref{fig:comparison}, geometry III) means that the same parameters of mass and mass accretion rate underpredict the optical data. Including the warm layer across all of the disc means that there should be an increase in $(M\dot{M})^{2/3}$, which sets the optical flux, to compensate for the decrease in normalisation from Comptonisation. However, the bolometric luminosity $L_{\rm bol} =\eta \dot{M} c^2$ is fairly well constrained by the data, which fixes $\dot{M}$. Hence the only possibility to increase the optical emission is to increase the mass. This now pegs at the upper limit, giving a slight decrease in $\dot{m}\propto \dot{M}/M$. The outer radius is now not constrained by the data, with little change in goodness of fit for $r_{\rm out}=10^3$--$10^5$, so we set this back to the self gravity radius. We show the best fit with these assumptions in Fig.~\ref{fig:fit}b, and tabulate the parameters in table~\ref{tab:fit}.
In order to explain the optical data points, which peak at 8--10~eV, with warm Comptonisation over the entire disc, the black hole mass needs to be as large as $1\times 10^8~M_\odot$ with $\dot{m}\sim 0.01$ and $L_{\rm diss,hot}\sim 0.01L_{\rm Edd}$. This clearly exceeds the reasonable range of black hole mass for NGC~5548. Hence we conclude there is most likely a standard outer disc in NGC~5548. \subsection{Mrk509: $\dot{m}\sim 0.1$} \label{sec:mrk509} Mrk 509 is a nearby Seyfert-1/quasar, and is one of the first objects in which the soft X-ray excess was discovered \citep{singh1985}. There was a large multi-wavelength campaign on this object \citep{kaastra2011-1,kaastra2011-2}, with simultaneous optical-UV and X-ray monitoring from XMM-Newton's OM and EPIC-pn together with the HST/COS and archival observations by the Far Ultraviolet Spectroscopic Explorer (FUSE) \citep{509}. As in \cite{5548}, they fit the continuum SED without any outer disc emission, just using a warm (0.2~keV) optically thick ($\tau\sim17$) Comptonisation component with free seed photon temperature, together with a hard ($\Gamma\sim1.9$) power-law X-ray continuum. There is a clear iron line in the X-ray spectrum, together with a Compton hump, so we include this in the model using {\sc gsmooth * pexmon} \citep{nandra2007}. The best fit model is shown in Fig.~\ref{fig:fit}c and detailed in table~\ref{tab:fit}. The data are then well fit with our three component {\sc agnsed} continuum (i.e. including an outer disc) for a black hole mass of $1\times10^8~M_\odot$ with $\dot{m}=0.1$. The transition radius from the outer disc to the warm Comptonised disc is around $r_{\rm warm}\simeq 40$, while the observed X-ray emission requires $r_{\rm hot}=21$. We also try a fit where the entire outer disc is covered by the warm Comptonisation (Fig.~\ref{fig:fit}d). This is statistically worse than the model with an outer standard disc, but unlike NGC~5548, the data match fairly well to the model around the UV peak.
This model also gives $\Gamma_{\rm warm}\simeq 2.6$, consistent with a passive disc. \subsection{PG~1115+407: $\dot{m}\sim 0.4$} \label{sec:pg1115} PG~$1115+407$ is an NLS1 galaxy with a mass from single epoch spectra of $4.6\times 10^7~M_\odot$ using historical data with FWHM H$\beta$ of 1720 km/s \citep{porquet2004}, or $9.1\times 10^7~M_\odot$ with the (narrow line subtracted) Sloan Digital Sky Survey (SDSS) FWHM H$\beta$ of 2310 km/s \citep{jin2012a}. Hence we assumed a black hole mass range of (0.46--1.4)$\times 10^8~M_\odot$, including 0.2~dex uncertainty on the SDSS limit (table~\ref{tab:system}). We re-analyzed the same data set shown in \cite{jin2012a,jin2012b}, concentrating on the XMM/OM and XMM/EPIC-pn data alone since the SDSS data points were not simultaneous with the XMM observation. We fit the data with {\sc tbabs*redden*agnsed}, where {\sc tbabs} is used with the \cite{anders1982} abundances and $E(B-V)$ is tied to $N_{\rm H}\cdot (1.7 \times10^{-22})$ as was done in \cite{jin2012a}. The observation is only 9.4~ks, and the spectrum is steep. This makes it difficult to constrain any reflection component, so this is not included in the model. As shown in table~\ref{tab:fit}, the model with an outer disc fits the data well, and the $N_{\rm H}$ of $2.2\times 10^{20}~{\rm cm^{-2}}$ is consistent with the galactic absorption of $(1.5$--$1.9)\times 10^{20}~{\rm cm^{-2}}$ \citep{kalberla2005,dickey1990}. The unabsorbed SED derived from this best fit model is shown in Fig.~\ref{fig:fit}e. The black hole mass is estimated as $M=1.0\times 10^8~M_\odot$ with $\dot{m}=0.4$. The size of the hot corona is $r_{\rm hot}=9.8$, which is much smaller than for the lower $\dot{m}$ AGN. For the warm Comptonising region, while $r_{\rm warm}$ is similar to that of Mrk~509, $\Gamma_{\rm warm}$ is much steeper at $\sim3.1$. This most likely indicates that there is some intrinsic disc power dissipated underneath the warm corona rather than a completely passive disc.
We compare this to models where the entire disc is dominated by the warm Compton component. The fit results are shown in table~\ref{tab:fit} and Fig.~\ref{fig:fit}f. The fit is slightly worse in terms of $\chi_\nu^2$, and has slightly larger absorption at $N_{\rm H}=3.1\times 10^{20}~{\rm cm^{-2}}$. \begin{table} \centering \caption{The best fit parameters. $^\dagger$The electron temperature of the hot flow is fixed at 100~keV. $^\ast$Values in parentheses are internally calculated based on the other parameters. $^\ddagger$The absolute values of $\chi^2 _\nu$ are not meaningful for deconvolved spectral fits; they are shown only for reference, to compare the goodness of fit between models with and without an outer disc. Reflection components are modelled by a single gaussian and {\sc gsmooth*pexmon} for NGC~5548 and Mrk~509, respectively. $^{**}$These values are pegged at the upper limit of the parameter range.} \label{tab:fit} \begin{tabular}{lccccc} \hline && NGC~5548&Mrk~509&PG$1115+407$\\ \hline\hline \multicolumn{5}{c}{with outer disc}\\ \hline $N_{\rm H}$ & ${\rm cm^{-2}}$ & --- & --- & $2.2\times 10^{20}$ \\ mass &$M_\odot$&$5.5\times 10^7$&$1.0\times10^8$&$1.0\times10^8$\\ $\dot{m}$ &&$0.027$&$0.10$&$0.40$\\ $\Gamma_{\rm warm}$ &&2.28&2.36&3.06\\ $kT_{e,\rm warm}$ &keV&0.17&0.20&0.50\\ $\Gamma_{\rm hot}$& &1.60&1.96&2.14\\ $kT_{e,\rm hot}$ &keV&39&100$^\dagger$&100$^\dagger$\\ $L_{\rm diss,hot}$&$L_{\rm Edd}$&0.017&0.038&0.026\\ $L_{\rm hot}$&$L_{\rm Edd}$ &(0.018)&(0.042)&(0.040)\\ \multicolumn{5}{c}{..................................... size scales .....................................}\\ \multicolumn{2}{l}{$r_{\rm hot}$} &(43)$^\ast$&(21)$^\ast$&(9.8)$^\ast$\\ \multicolumn{2}{l}{$r_{\rm warm}$} &151&40&35\\ \multicolumn{2}{l}{$r_{\rm out}$} &282&(780)$^\ast$&(1416)$^\ast$\\ \multicolumn{5}{c}{..........................
characteristic temperatures ..........................}\\ $T(R_{\rm hot})$ &K&($2.9\times 10^4$&$5.4\times 10^4$&$1.0\times 10^5$)$^\ast$\\ $T(R_{\rm warm})$ &K&($1.3\times 10^4$&$3.7\times 10^4$&$5.6\times 10^4$)$^\ast$\\ $T(R_{\rm out})$ &K&($8.0\times 10^3$&$4.6\times 10^3$&$4.1\times 10^3$)$^\ast$\\ \hline $\chi^2 _\nu$(dof)&&1.65(1097)$^\ddagger$&1.44(370)$^\ddagger$&1.64(184)\\ \hline \hline \multicolumn{5}{c}{without outer disc}\\ \hline $N_{\rm H}$ & ${\rm cm^{-2}}$ & --- & --- & $3.1\times 10^{20}$ \\ mass &$M_\odot$&$6.0\times10^7$ $^{**}$&$3.0\times 10^8$ $^{**}$&$1.3\times 10^8$ \\ $\dot{m}$ &&$0.024$&$0.041$&$0.46$\\ $\Gamma_{\rm warm}$ &&2.39&2.59&3.35\\ $kT_{e,\rm warm}$ &keV&0.18&0.28&0.52\\ $\Gamma_{\rm hot}$& &1.61&1.87&2.21\\ $kT_{e,\rm hot}$ &keV&33&100$^\dagger$&100$^\dagger$\\ $L_{\rm diss,hot}$&$L_{\rm Edd}$ &0.016&0.013&0.021\\ $L_{\rm hot}$&$L_{\rm Edd}$ &(0.017)&(0.015)&(0.038)\\ \multicolumn{5}{c}{..................................... size scales .....................................}\\ \multicolumn{2}{l}{$r_{\rm hot}$} &(49)$^\ast$&(19)$^\ast$&(9.1)$^\ast$\\ \multicolumn{2}{l}{$r_{\rm warm,out}$} &$1000^{**}$&(407)$^\ast$&(1432)$^\ast$\\ \multicolumn{5}{c}{.......................... characteristic temperatures ..........................}\\ $T(R_{\rm hot})$ &K&($2.5\times 10^4$&$3.4\times 10^4$&$9.9\times 10^4$)$^\ast$\\ $T(R_{\rm warm,out})$ &K&($3.0\times10^3$&$4.4\times 10^3$&$4.0\times 10^3$)$^\ast$\\ \hline $\chi^2 _\nu$(dof) &&2.11(1098)$^\ddagger$&2.01(371)$^\ddagger$&1.77(185)\\ \hline \end{tabular} \end{table} \begin{figure*} \includegraphics[angle=0,width=2\columnwidth]{fig4.eps} \caption{The best fit models overlaid on the data sets of NGC 5548 (\citealt[Fig. 10]{5548}), Mrk~509 (\citealt[Fig. 12]{509}), and PG~$1115+407$ \citep{jin2012a,jin2012b}. The outer disc emission, the warm Compton component, and the hard Compton component are shown in magenta, green, and blue, respectively.
Panels (a), (b), (c), (d), (e) and (f) correspond to NGC~5548 with and without an outer disc, Mrk~509 with and without an outer disc and PG~$1115+407$ with and without an outer disc, respectively.} \label{fig:fit} \end{figure*} \section{Full AGN broadband spectral model} \label{sec:summary} In this section, we evaluate the results of fitting {\sc agnsed} to the observed SEDs of NGC~$5548$ ($\dot{m}\sim 0.03$), Mrk~509 ($\dot{m}\sim 0.1$), and PG~$1115+407$ ($\dot{m}\sim 0.4$), and use these, together with other results in the literature, to build a full SED picture where the only free parameters are $M$ and $\dot{m}$. \subsection{Existence of an outer standard disc component} \label{sec:outerdisk} All the AGN in section~\ref{sec:application} are consistent with the three-component {\sc agnsed} model, where there is an outer disc together with a warm Comptonising region and a hot corona, powered by the NT emissivity for a low spin black hole. This is always a better fit than assuming $r_{\rm warm}=r_{\rm out}$, i.e. a model where the warm Comptonisation region extends over the entire outer disc, although there are several uncertainties, e.g., on the inclination and absorption corrections. Our model is different to that fit by \cite{5548} and \cite{petrucci2017}, where the optical/UV data are from the warm Comptonisation component alone. This difference is due to our assumption that the warm Comptonisation is intrinsically linked to a NT disc. The data do fit just as well to an unconstrained warm Comptonisation component as this has the same optical/UV shape as a standard disc (see Fig.\ref{fig:comparison} geometry III). However, we have additional requirements on the luminosity and seed photon temperature from our assumed NT emissivity.
The optically thick, warm Comptonisation thus suppresses the flux below that predicted by the outer disc, so these models require a higher $M$ and/or higher absolute mass accretion rate, $\dot{M}$, but the latter is fairly well constrained by the observed bolometric luminosity from the broadband spectra. A larger $\dot{M}$ through the outer disc could fit the data if this is counteracted by strong energy losses, e.g., if the system powers a UV line driven wind \citep{laor2014}. However, it seems somewhat fine tuned that these wind losses (which vary with $M$ and $\dot{M}$) would always be able to almost exactly compensate for the extra power predicted by the NT emissivity, which assumes a constant $\dot{M}$ with radius. Similarly, high black hole spin would give a higher luminosity for a given $\dot{M}$ through the outer disc, which can overpredict the total luminosity unless this is mainly dissipated in the unobservable EUV bandpass. Again, this seems fine tuned. The simplest solution is that we are seeing evidence for an outer disc whose properties are like the standard disc, and that the wind losses are small and the spin is low. \subsection{Hot coronal emission} \label{sec:hot} As shown in table~\ref{tab:fit}, the observed power dissipated in the hot inner flow is $L_{\rm diss,hot}=0.02$--$0.04L_{\rm Edd}$ in our three AGN with different $\dot{m}$. This is also seen in the sample of 51 AGN in \cite{jin2012a}. These have different masses and $\dot{m}$, but the X-ray luminosities are all the same within a factor 2--3 when the SEDs are stacked into three groups of low, medium and high $\dot{m}$ and referenced to the same black hole mass (Fig. 8b in D12). \subsubsection{truncated disc and inner hot flow geometry} \label{sec:truncated-disc} In our model, the approximately fixed value for $L_{\rm diss,hot}$ measured from the data then determines $r_{\rm hot}$ from the NT emissivity.
For a total $\dot{m}\sim 0.03$, most of the total accretion power is needed to power the hard X-ray region, thus $r_{\rm hot}$ is large. Conversely, for $\dot{m}\sim 0.4$, only a very small fraction of the available power is needed to make the hard X-ray flux, so $r_{\rm hot}$ is small. Figure~\ref{fig:gamma}a shows how the value of $r_{\rm hot}$ decreases as a function of increasing $\dot{m}$ in models where $L_{\rm diss,hot}$ is fixed at $0.01L_{\rm Edd}$ (dashed red), $0.02L_{\rm Edd}$ (solid green) and $0.05L_{\rm Edd}$ (dotted blue). The open stars show the values measured from our fits; these scatter around the green line, consistent with a constant $L_{\rm diss,hot}=0.02L_{\rm Edd}$, especially considering the uncertainties on the mass determination. The predicted decrease in size scale of the X-ray source with increasing $L_{\rm bol}/L_{\rm Edd}$ is compatible with the observations that the X-ray variability timescale decreases with increasing $L_{\rm bol}/L_{\rm Edd}$ \citep{mchardy2006}. We also calculate the self consistent spectral index, $\Gamma_{\rm hot}$, for the flow from eq.(14) in \cite{beloborodov1999} as \begin{equation} \Gamma_{\rm hot}=\frac{7}{3}\left(\frac{L_{\rm diss, hot}}{L_{\rm seed}}\right)^{-0.1} \label{eq:gamma} \end{equation} With the geometry of the truncated inner flow shown in Fig.~\ref{fig:geometry}a, $L_{\rm diss,hot}$ and $L_{\rm seed}$ are calculated via eq.(\ref{eq:Lhotdiss}) and (\ref{eq:Lseed}), respectively. Figure~\ref{fig:gamma}b shows the resulting $\Gamma_{\rm hot}$ assuming $L_{\rm diss, hot}$ of 0.01$L_{\rm Edd}$ (dashed red), $0.02L_{\rm Edd}$ (solid green) and $0.05L_{\rm Edd}$ (dotted blue). The data (open stars) are in good agreement with the truncated disc geometry for $L_{\rm diss, hot}=0.02L_{\rm Edd}$. The observed spectral indices are then consistent with a truncated disc geometry across the entire range of $\dot{m}$ considered here.
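Both trends can be sketched with a few lines of code. The sketch below replaces the NT emissivity with its Newtonian approximation (so the $r_{\rm hot}$ values are indicative only, and differ somewhat from the NT-based numbers quoted in the text), takes $L_{\rm bol}\simeq\dot{m}L_{\rm Edd}$, and evaluates eq.~(\ref{eq:gamma}) directly:

```python
# Newtonian fraction of the accretion power dissipated inside R_hot:
# frac(x) = 1 - 3x + 2x^{3/2} with x = r_isco/r_hot (a stand-in for the
# NT emissivity used by the model; a sketch assumption).
r_isco, L_diss_hot = 6.0, 0.02   # L_diss,hot in Eddington units

def frac_inside(x):
    return 1.0 - 3.0 * x + 2.0 * x**1.5

def r_hot(mdot):
    """Bisect for the r_hot at which the enclosed power equals L_diss,hot."""
    target = min(L_diss_hot / mdot, 1.0 - 1e-9)
    lo, hi = 1e-9, 1.0   # frac_inside decreases from ~1 to 0 over (0, 1]
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if frac_inside(mid) > target else (lo, mid)
    return r_isco / (0.5 * (lo + hi))

def gamma_hot(l_diss_over_l_seed):
    """Beloborodov (1999) spectral index relation."""
    return (7.0 / 3.0) * l_diss_over_l_seed**-0.1

print(r_hot(0.05), r_hot(0.5))          # r_hot shrinks as mdot rises
print(gamma_hot(10.0), gamma_hot(1.0))  # photon-starved flows are harder
```

Feeding the ratio of eq.~(\ref{eq:Lhotdiss}) to eq.~(\ref{eq:Lseed}) into {\tt gamma\_hot} then gives the curves of Fig.~\ref{fig:gamma}b.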
This is surprising, especially at high $\dot{m}$, where the corona is more generally drawn as either X-ray hot plasma over the inner disc, or as a lamppost (a compact source on the spin axis of the black hole). We explore untruncated disc geometries in sections 4.2.2 and 4.2.3. \subsubsection{passive disc and inner hot corona geometry} Instead of the truncated disc and inner hot flow, there can be an inner disc corona extending down to $r_{\rm ISCO}$. The inner disc corona can be characterised by the fraction, $f$, of power dissipated in the corona, with the remainder dissipated in the cooler disc material in the midplane \citep{svensson1994}. We first assume a maximal corona, with $f=1$, extending over the inner disc from $r_{\rm hot}$ to $r_{\rm ISCO}$, i.e., a total power dissipated in the corona of $L_{\rm diss, hot}=0.02L_{\rm Edd}$ with a passive disc on the midplane. The difference between this and the truncated disc geometry (section \ref{sec:truncated-disc}) is that the inner disc on the midplane will intercept half of the X-ray emission from the corona for an isotropic source. The albedo, $a$, determines how much of the illuminating flux can be reflected. The reflected fraction at low energies depends strongly on the ionisation of the disc, but photons above $\sim 50$--$100$~keV cannot be reflected elastically due to Compton downscattering. This gives a maximum albedo for completely ionised (most reflective) material, and this value depends on the spectral shape. We evaluate this for a Compton spectrum ({\sc nthcomp}) with $kT_{\rm e}=100$~keV and different values of $\Gamma$ and calculate the reflection albedo using {\sc ireflect} \citep{ireflect} with $\xi=10^6~{\rm ergs~s^{-1}~cm}$. We choose this rather than the newer reflection models such as {\sc relxill} as {\sc ireflect} calculates only the reflected emission: the emission lines and recombination continua will add to the thermalised flux in making soft seed photons.
This gives $a_{\rm max}=0.55$--$0.81$ for $\Gamma=1.5$--$2.3$. On the other hand, the seed photon power from reprocessing in the corona region results in \[ \frac{L_{\rm diss,hot}}{L_{\rm seed}}=\frac{f}{1-\frac{1}{2}f-\frac{1}{2}fa}=\frac{2}{1-a} \] for a maximal corona, $f=1$ (see eq.(3a) in \citealt{haardt1993}). This indicates \[ \Gamma_{\rm hot}=\frac{7}{3}\left(\frac{2}{1-a}\right)^{-0.1} \] via eq.(\ref{eq:gamma}). Thus, $a_{\rm max}$ and $\Gamma_{\rm hot, min}$ are self-consistently determined as $a_{\rm max}\sim0.7$ and $\Gamma_{\rm hot, min}\sim 1.9$ for an inner disc corona geometry. In Fig.~\ref{fig:gamma_slab}, which shows the truncated disc results as a baseline model (green solid line) together with the data (open stars) from Fig.~\ref{fig:gamma}, we plot this minimum photon index as a horizontal dotted black line. The observed $\Gamma_{\rm hot}$ for Mrk~509 sits on this lower limit, so it can also be explained by this geometry. PG~$1115+407$ is somewhat steeper at $\Gamma_{\rm hot}=2.2$, which can also be explained by this geometry for $a=0.3~(<a_{\rm max})$. However, NGC~5548 and other AGN with $\Gamma_{\rm hot}<1.9$ require a more photon starved geometry. \subsubsection{entire slab hot corona} The value of $\Gamma_{\rm hot}$ becomes larger if there is intrinsic disc power (i.e., $f<1$), which adds to the seed photons in the local slab geometry. This requires a larger $r_{\rm hot}$ to keep the same observed $L_{\rm diss, hot}/L_{\rm Edd}$. The most extreme case is where the corona extends over the entire optically thick disc so that $f$ is constant with radius. For such an entire slab geometry, the seed photons are given by eq.(3a) in \cite{haardt1993} as \[ \frac{L_{\rm seed}}{L_{\rm Edd}}= \left(1-\frac{1}{2}f-\frac{1}{2}fa\right)\dot{m} . \] In this geometry the hard X-ray dissipation of $L_{\rm diss,hot}/L_{\rm Edd}=f\dot{m}=0.02$ implies $f=0.02/\dot{m}$.
We use this in the equation above to calculate $L_{\rm diss, hot}/L_{\rm seed}$ as \[ \frac{L_{\rm diss,hot}}{L_{\rm seed}}=\frac{f\dot{m}}{ (1-\frac{1}{2}f-\frac{1}{2}fa)\dot{m}}=\frac{2}{100\dot{m}-1-a} \] and hence derive the spectral index via eq.(\ref{eq:gamma}). We plot these in Fig.~\ref{fig:gamma_slab} for three values of the albedo, namely $a=0$ (dotted), $0.3$ (solid) and $0.7$ (dashed). These always give much steeper $\Gamma_{\rm hot}$ than observed. Figure~\ref{fig:gamma_slab} shows that steep spectra of $\Gamma_{\rm hot}\simeq 3.1$--3.4 are predicted for $\log \dot{m}=-0.6$--$0$, while the observed photon indices are usually harder, at $1.7$--$2.4$ (e.g., from the 55 AGN in \citealt{jin2012a}). Even for the lowest $\dot{m}$, $\Gamma_{\rm hot}$ is still as large as 1.9. Thus the entire slab geometry for the hot Comptonisation component is quite unlikely for both high and low $\dot{m}$. This problem is also discussed in detail by e.g., \cite{stern1995}, \cite{malzac2005} and \cite{poutanen2017} for the case of BHBs in the low/hard state. The problem is even more marked for AGN as the disc seed photons are at lower energies, making Compton cooling more efficient and leading to steeper predicted photon indices \citep{haardt1993}. \subsubsection{summary of constraints on the geometry of the hot corona} To summarize, the data at $\dot{m}\le 0.1$ have $\Gamma_{\rm hot} <1.9$, which is incompatible with a disc-corona geometry even in the limit where all the power is dissipated in the corona. Spectra at higher $\dot{m}\ge 0.1$ can be produced in an inner disc-corona geometry, but this requires some fine tuning in order to produce the spectral indices observed. By contrast, the truncated disc/hot inner flow geometry can predict the behaviour of the spectral index over the entire range of $\dot{m}$, making it likely that this geometry continues across all $L_{\rm bol}/L_{\rm Edd}$.
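The arithmetic behind the two slab limits can be checked directly. The sketch below (Python; illustrative only, with function names of our choosing) combines the Haardt \& Maraschi seed photon balance for $f\dot{m}=0.02$ with the Beloborodov index, alongside the $f=1$ passive-disc limit.

```python
# Sketch of the photon indices predicted by the slab-corona seed photon
# balance of Haardt & Maraschi (1993) for L_diss,hot = 0.02 L_Edd,
# combined with the Beloborodov (1999) index approximation.
# Function names are ours; this is illustrative, not the fitted model.

def gamma_hot(amp):
    # Gamma_hot = (7/3) * (Ldiss/Lseed)^(-0.1)
    return (7.0 / 3.0) * amp ** -0.1

def amp_entire_slab(mdot, albedo):
    # Corona over the whole disc with f * mdot = 0.02:
    # L_diss / L_seed = 2 / (100 mdot - 1 - a)
    return 2.0 / (100.0 * mdot - 1.0 - albedo)

def amp_passive_disc(albedo):
    # Maximal corona (f = 1) over a passive disc: L_diss / L_seed = 2 / (1 - a)
    return 2.0 / (1.0 - albedo)

# Entire slab at mdot = 1 with a = 0.3 gives a very steep Gamma ~ 3.4,
# while the f = 1 passive-disc limit with maximal albedo a = 0.7 gives
# the hardest index this geometry allows, Gamma ~ 1.9.
print(round(gamma_hot(amp_entire_slab(1.0, 0.3)), 2))
print(round(gamma_hot(amp_passive_disc(0.7)), 2))
```

These two numbers bracket the problem: observed indices of $\Gamma_{\rm hot}<1.9$ cannot be reached by any disc-corona variant, whereas the truncated disc geometry has no such floor.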
\begin{figure} \begin{center} \includegraphics[angle=0,width=1\columnwidth]{fig5.eps} \end{center} \caption{Radius of the hot inner flow, $r_{\rm hot}$ (a), and photon index of the hot Compton component, $\Gamma_{\rm hot}$ (b), plotted against $\log \dot{m}$. The values are calculated assuming a fixed $L_{\rm diss, hot}$ of $0.01L_{\rm Edd}$ (dashed red line), $0.02L_{\rm Edd}$ (solid green line), and $0.05L_{\rm Edd}$ (dotted blue line). The spectral index is calculated including reprocessing. The observed values of $r_{\rm hot}$ and $\Gamma_{\rm hot}$ for Mrk 509, NGC 5548 and PG~$1115+407$ are shown with open stars. } \label{fig:gamma} \end{figure} \begin{figure} \begin{center} \includegraphics[angle=0,width=1\columnwidth]{fig6.eps} \end{center} \caption{Same as Fig.~\ref{fig:gamma}b, but the expected values of $\Gamma_{\rm hot}$ for the slab geometry with $L_{\rm diss,hot}=0.02L_{\rm Edd}$ (i.e., $f\dot{m}=0.02$, where $f$ is the fraction of power dissipated in the corona) are shown as black lines, together with the observed values of $\Gamma_{\rm hot}$ for Mrk 509, NGC 5548 and PG~$1115+407$ (open stars) and the $\Gamma_{\rm hot}$ for the truncated disc geometry with $L_{\rm diss,hot}=0.02L_{\rm Edd}$ (solid green). The albedo is assumed to be $a=0$ (dash-dot), 0.3 (solid) and 0.7 (dashed). The horizontal dotted straight line represents the lower limit of $\Gamma$ for a localized slab corona with maximal albedo, $a=0.7$, and a 'passive disc' underneath the corona, i.e., $f=1$. } \label{fig:gamma_slab} \end{figure} \subsection{Warm Comptonisation region} The warm Comptonisation region extends from $r_{\rm hot}$ to an outer radius $r_{\rm warm}\lesssim r_{\rm out}$. Ideas which associate this with the changing vertical structure of the disc due to the importance of atomic opacities predict that the warm Comptonisation region should set in at an approximately fixed temperature.
One attractive idea is that each annulus of the disc which is at the same temperature as an O star would be modified by a UV line driven wind \citep{laor2014}. The maximum disc temperature considering these wind losses is (2--8)$\times 10^4$~K in their models, which is similar to the range of (1--6)$\times 10^4$~K seen here for the onset of the warm Comptonisation. Thus it is possible that the onset of warm Comptonisation does link to the changing disc structure induced by UV opacities, though strong wind losses such as those predicted by \cite{laor2014} are ruled out by the observed efficiencies. This overprediction of wind losses is probably linked to the assumption in \cite{laor2014} that there is only a disc, with no hard X-ray emission, whereas such emission will strongly suppress UV line driving by overionisation \citep{proga2000}. It seems plausible that the UV bright disc launches a UV driven disc wind, but that this becomes ionised as it rises up and is exposed to the X-ray source. The failed wind falls back down, impacting the disc, leading to shock heating of the photosphere and making the warm Comptonisation region. There are no calculations of this at the current time, but we expect that the extent of the failed wind will depend on the level of X-ray ionisation, which is clearly larger for lower $\dot{m}$. Guided by the fits to the individual objects above, we tie the size scale to $r_{\rm warm} =2r_{\rm hot}$. While the data do not favour all the outer disc being covered by the warm corona, they are (mostly) consistent with the idea that the disc underneath the warm Comptonising material is passive, i.e., the optically thick material underneath the corona only reprocesses the luminosity dissipated further up in the disc \citep{petrucci2017}. There is a trend, also seen in \cite{petrucci2017}: $\Gamma_{\rm warm}$ is somewhat steeper for high $\dot{m}$ (e.g., PG1115+407 indicates some intrinsic power in the disc), and somewhat flatter at low $\dot{m}$ (e.g., NGC~5548).
Steeper spectra can easily be produced with some intrinsic emission on the disc midplane. However, the lower spectral indices at low $\dot{m}$ are more difficult to explain. \cite{petrucci2017} suggest that $\Gamma_{\rm warm}\le 2.5$ arises from partial covering of the corona over the passive midplane, so that some of the reprocessed photons do not re-intercept the warm Comptonisation region to cool it. While this does indeed allow reprocessed photons to escape, it also means that these seed photons from the disc are seen directly, which is not consistent with their assumption that the optical/UV is dominated by warm Comptonisation alone, with no thermal emission from a disc (see also the discussion of this in \citealt{petrucci2017}). Partial covering also seems physically unlikely, as the optically thick, warm material has thermal pressure so should expand outwards; confining it would additionally require magnetic pressure. Instead, we suggest that the harder spectral indices could be produced by irradiation heating being more important at low $\dot{m}$ (see also \citealt{lawrence2012}). By definition, irradiation heats the photosphere at $\tau=1$ rather than the deeper regions at $\tau=10$--$20$ where the majority of the warm Comptonisation must be produced. We suggest that a more accurate treatment of an irradiated warm Comptonisation region above a passive disc could produce the harder indices observed at low $\dot{m}$. \begin{table} \centering \caption{Parameters used in sections \ref{sec:uv-x} and \ref{sec:reprocess}. $^\dagger$Geometry of hot inner flow + warm Comptonising skin + outer disc. $^\ddagger$Geometry of hot inner flow + warm Comptonising skin (i.e., without the outer disc).
} \label{tab:parameter} \begin{tabular}{llc} \hline\hline system parameters&$r_{\rm out}$& $r_{\rm sg}$\\ &inclination angle &$45^\circ$\\ hot inner flow&$T_{e, \rm hot}$&100~keV\\ &$\Gamma_{\rm hot}$&calculated via eq.(\ref{eq:gamma})\\ &$L_{\rm diss, hot}$&$0.02L_{\rm Edd}$\\ warm Compton&$T_{e, \rm warm}$&0.2~keV\\ &$\Gamma_{\rm warm}$&2.5\\ &$r_{\rm warm}$&$2r_{\rm hot}~^\dagger$, $r_{\rm out}~^\ddagger$\\ reprocessing&&included\\ \hline \end{tabular} \end{table} \section{Predictions of the full SED model for the observational results} \subsection{UV/X relation} \label{sec:uv-x} We use the individual AGN fits to define the full SED model in order to compare to a large sample of objects spanning a wide range in $\dot{m}-M$. \cite{lusso2017} show that there is a well defined relationship between the UV and X-ray luminosities of AGN, and claim this has low enough scatter to be used as a tracer of the cosmological expansion. Finding the underlying physics is then extremely important, as the relation can only be used with confidence when it is robustly understood. Guided by the individual AGN fits above, we fix $L_{\rm diss, hot}=0.02L_{\rm Edd}$, which defines $r_{\rm hot}$. We assume that the hot inner flow is quasi-spherical, and that the optically thick flow truncates at $r_{\rm hot}$, so that the spectral index, $\Gamma_{\rm hot}$, can be calculated from the ratio of dissipation in the hot region to intercepted seed photons. The best fit models above strongly support a geometry with three components, but again we also compare the data to a model where the entire outer disc is covered by the warm Comptonising material. The inclination angle is expected to be between $0^\circ$ and $60^\circ$ for type 1 AGN, so we fix this at $45^\circ$. Full model parameters are shown in table~\ref{tab:parameter}. The model with $r_{\rm warm}=2r_{\rm hot}$ defines our simplified model {\sc qsosed}. We calculate a grid of models spanning $\dot{m}=0.03$--1 and $M=10^6$--$10^{10}M_\odot$.
This is consistent with the range of $\dot{m}$ and $M$ in \cite{lusso2017} (the majority of their sample have $\dot{m}=0.03$--1 and $M=$(1--10)$\times 10^8M_\odot$; \citealt{lusso2012}). Figure~\ref{fig:uv-x}a shows the resulting relation between the monochromatic rest frame luminosities at 2500\AA~(i.e., 5 eV) and 2~keV, with lines connecting varying $\dot{m}$ for constant $M$ from our model with an outer standard disc. Figure~\ref{fig:uv-x}b shows the slightly different predictions from a model where the warm Comptonisation region covers the entire outer disc. Both of these give a fairly well defined correlation between the UV and X-rays, though with some scatter. We compare these to the best fit UV/X relation derived from 545 SDSS quasars by \cite{lusso2017} of \begin{equation} \log L_{\rm 2keV}-25=0.633 \cdot (\log L_{\rm 2500 \AA}-25)-1.959 \end{equation} (solid black line) in Fig.~\ref{fig:uv-x}a and b. It is clear that the observed correlation has a slightly different slope from that predicted by our models over this entire range. Our models with an outer disc match quite well to the observed relation at high masses (Fig.~\ref{fig:uv-x}a), while models with just a warm Comptonisation region are offset by their smaller predicted UV luminosity and match better at low masses (Fig.~\ref{fig:uv-x}b). Our predicted correlation is easy to understand, as it arises from our assumptions that the X-ray flux is fixed at $L_{\rm X}=0.02L_{\rm Edd}\propto M$ while the UV is from a disc/Comptonised disc, so $L_{\rm UV}\propto (M\dot{M})^{2/3}\propto (M^2\dot{m})^{2/3}$. Thus this predicts $\log L_{\rm X}=\frac{3}{4}\log L_{\rm UV} - \frac{1}{2}\log\dot{m}+b$, where $b$ is a constant. There is clearly scatter in the models from $\dot{m}$, but the range should be constrained to between $0.02$ and $1$. The lower limit is where the entire accretion flow is expected to make a transition to an ADAF.
In an ADAF there is no bright UV emitting disc left \citep{narayan1995}, and hence no source of ionising flux to excite a BLR. The upper limit comes from the expectation that super-Eddington objects are rare. Thus there is only a limited range of 1.5~dex in $\dot{m}$, and this reduces to 0.75~dex of scatter with the square root factor. However, the data from \cite{lusso2017} have a scatter of only 0.2~dex. We study the predicted behaviour in more detail in Fig.~\ref{fig:uv-x_lusso}, using the individual data points from \cite{lusso2017} (E. Lusso, private communication). These are selected from the SDSS quasar sample, so have absolute $i$-band magnitude brighter than $-22$. This already limits the black hole mass to $>10^{7.5}M_\odot$ for objects with a standard disc below Eddington, and the masses reported in \citet[their Fig.6]{lusso2012} are clustered around $(1-10)\times 10^8~M_\odot$. Thus their data only include high black hole masses, so our models with an outer standard disc match fairly well in normalisation and slope to that observed (Fig.~\ref{fig:uv-x_lusso}a), whereas models with complete coverage of the outer disc by a warm Comptonisation region underpredict the UV luminosity (Fig.~\ref{fig:uv-x_lusso}b). Nonetheless, there is still a mismatch between the range predicted by our models and the observed data, even including an outer standard disc. The data extend to slightly higher UV luminosity than expected for Eddington limited systems. Some of this can be due to inclination, as more face on systems will have stronger UV flux, but this makes only a difference of $0.15$ dex between our assumed mean inclination of $45^\circ$ and $0^\circ$. While this may explain the AGN not covered by the grid in the model with an outer standard disc, it is not enough to explain the larger number missed by the grid if the warm corona covers the entire outer disc. Instead, these require a substantial population of super-Eddington AGN.
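The scaling argument above is simple enough to write out numerically. The sketch below (Python; illustrative only, with an arbitrary zero point $b$ that absorbs the untracked constants) contrasts the model slope of $3/4$ with the observed $0.633$ relation and reproduces the 1.5 dex / 0.75 dex scatter arithmetic.

```python
import math

# Illustrative sketch of the UV/X scaling discussed in the text.
# The zero point b is arbitrary here (it absorbs the untracked constants).

def log_Lx_model(log_Luv, log_mdot, b=0.0):
    # L_X = 0.02 L_Edd ∝ M and L_UV ∝ (M^2 mdot)^(2/3) combine to
    # log L_X = (3/4) log L_UV - (1/2) log mdot + b
    return 0.75 * log_Luv - 0.5 * log_mdot + b

def log_Lx_observed(log_Luv):
    # Lusso & Risaliti (2017): log L_2keV - 25 = 0.633 (log L_2500 - 25) - 1.959
    return 0.633 * (log_Luv - 25.0) - 1.959 + 25.0

# The allowed range mdot = 0.03 - 1 spans ~1.5 dex, which the square
# root factor reduces to ~0.75 dex of predicted scatter in L_X.
range_dex = math.log10(1.0 / 0.03)
print(round(range_dex, 2), round(0.5 * range_dex, 2))
```

The remaining gap between the predicted $\sim$0.75 dex and the observed 0.2 dex of scatter is what the selection effects discussed above must absorb.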
Super-Eddington AGN are seen in the local Universe \citep{jin2015,done2016,jin2017}; though they are rare, their high UV luminosity enhances their probability of selection. We will extend the model to super-Eddington flows in a later work (Kubota \& Done, in preparation), but note here that there are multiple uncertainties in the structure of these flows which make robust predictions difficult. There is the opposite problem for the highest mass black holes at the lowest $\dot{m}$, where the grid extends into a region where there are no data points. We suggest that this could be due to selection effects. High mass black holes are rarer, so require sampling a larger space volume in order to have a realistic probability of finding some. This means that they are generally seen at larger distances, so are only selected in flux limited samples if they have high luminosity, weighting the selection of high mass black holes towards higher $\dot{m}$ (including super-Eddington rates). Thus the observed UV/X-ray relation is predicted by our model, where the X-ray luminosity is fixed at $0.02L_{\rm Edd}$ and the UV is from an outer standard disc, with selection effects (mostly the limited range of black hole mass) suppressing some of the predicted scatter. This is a very different explanation to that of \cite{lusso2017}. Their 'toy' model uses the same standard disc equations to estimate the UV flux, but they set the X-ray flux using the gravitational power emitted from the outer disc down to the radius at which the disc becomes radiation pressure dominated. As they note in their paper, producing the X-rays at large radii in the disc rather than close to the black hole is in conflict with microlensing size scales, as well as with the rapid X-ray variability. We suggest that their model works because it effectively hardwires $L_{\rm X}$ to a constant value.
The radius at which radiation pressure dominates in the disc, $R_{\rm rad}$, increases as $\dot{m}$ increases, leaving a smaller and smaller fraction of power dissipated in the outer disc, and hence reproducing the observed decrease of $L_{\rm X}/L_{\rm bol}$ with $\dot{m}$ \citep{vasudevan2007}. Formally, this gives $R_{\rm rad} \propto (\alpha M)^{2/21} \dot{m}^{16/21}(1-f)^{6/7} R_g$, where $f$ is the fraction of the accretion power which is dissipated in the hard X-ray corona \citep{svensson1994}. Then the X-ray luminosity from the corona between $R_{\rm out}$ and $R_{\rm rad}$ is \[L_{\rm X}\propto fGM\dot{M}/R_{\rm rad}\propto \alpha^{-2/21}M^{19/21}\dot{m}^{5/21}f(1-f)^{-6/7} \] (see their eq.(14) for details). Hence $L_{\rm X}$ is roughly proportional to $M$ and has only a weak dependence on $\dot{m}$, so their toy model is almost identical to our assumption of constant $L_{\rm diss,hot}=0.02 L_{\rm Edd}$. Our model hardwires the same absolute value of $L_{\rm X}$, but in a much more plausible geometry where the X-ray source is produced close to the black hole, and with more physical motivation. \begin{figure*} \includegraphics[angle=0,height=0.7\columnwidth]{fig7.eps} \caption{Monochromatic luminosities $\log L_{\rm X}$ against $\log L_{\rm UV}$ for black holes of $M=(0.1$--$1)\times10^7M_\odot$ (cyan), $(0.16$--$1)\times 10^8~M_\odot$ (blue), $(0.16$--$1)\times 10^9~M_\odot$ (green) and $(0.16$--$1)\times 10^{10}~M_\odot$ (red). From left to right $\dot{m}$ changes from 0.03 to 1. The observed UV/X relation in the range $\log L_{\rm 2500}-25=3.8$--$7.4$ \citep[Fig. 3]{lusso2017} is shown with a solid line; the dashed line is an extrapolation of the solid line. (a) Our three component flow, with an outer disc, warm Comptonisation region and hot inner flow. (b) A model where there is no standard outer disc.
} \label{fig:uv-x} \end{figure*} \begin{figure*} \includegraphics[angle=0,height=0.7\columnwidth]{fig8.eps} \caption{Enlargements of Fig.~\ref{fig:uv-x}, overlaid on the observed data points from Fig. 3 of \citet{lusso2017}, shown with open grey circles.} \label{fig:uv-x_lusso} \end{figure*} \subsection{Optical variability} \label{sec:reprocess} The X-rays vary rapidly in a stochastic manner about a mean, so their reprocessed emission should also carry the imprint of this rapid variability. The assumption in our SED is that the hard X-rays carry a fixed luminosity but arise from a smaller region as $\dot{m}$ increases. Hence the reprocessed luminosity depends on $L_{\rm diss, hot} \times r_{\rm hot}$ (see Section 2.3), which decreases as $\dot{m}$ increases. This reprocessed flux is seen against the remaining, constant component from the disc and/or warm Comptonisation region, which increases with $\dot{m}$. Thus the variable reprocessed emission forms a smaller fraction of the optical/UV emission at higher $\dot{m}$, which qualitatively matches what is observed \citep{macleod2010,ai2013,simm2016,kozlowski2016}. Our model explicitly includes the reprocessed emission from X-ray illumination of the outer disc and warm Comptonisation region, so here we calculate the contribution that the varying X-ray emission can make to the optical variability. We illustrate this with our three component SED model, {\sc agnsed} (with all parameters as above, tabulated in table~\ref{tab:parameter}, with $r_{\rm warm}=2r_{\rm hot}$), for AGN of $10^8M_\odot$ with $\dot{m}=0.05$ and 0.5 in Fig.~\ref{fig:reprocess_time}a and b, respectively. The black lines show spectra based on the simple NT emissivity including reprocessing, while the red lines show the result of stochastic variability (with no impact on $\dot{m}$, $r_{\rm hot}$ etc.) increasing $L_{\rm hot}$ by a factor of 2.
The emission from both the outer disc and the warm Comptonising region increases with the increase in $L_{\rm hot}$, and it is clear that reprocessing makes a much larger fraction of the optical emission at low accretion rates than at high ones. We quantify the fractional change in optical flux at 4000\AA~(3.1~eV), $\Delta f_{4000}/f_{4000}$, for a factor 2 increase in $L_{\rm diss,hot}$ across the entire range of AGN masses ($M=10^6$--$10^{10}M_\odot$) and mass accretion rates ($\dot{m}=0.03$--1). Figure~\ref{fig:variability}a shows this fractional variability as the colour coding across the grid of $\log M/M_\odot$ and $\log \dot{m}$. It is obvious that lower $\dot{m}$ gives larger variability, though there is also a much smaller effect where the variability increases with larger mass. This occurs when reprocessing starts to affect the variability at the peak of the warm Comptonisation region, where the temperature shift increases the amplitude of variability (see Fig.~\ref{fig:reprocess}). We match this to the observed amplitude of variability seen in a large sample of quasars in SDSS Stripe 82. \cite{macleod2010} characterised the variability at rest frame 4000\AA~with a damped random walk. The mean asymptotic amplitude of variability in optical magnitude is characterised by the structure function extrapolated to infinite time, $SF(\infty)$. This is plotted as a function of $i$-band magnitude, $M_i$, and black hole mass in their Fig.~14. In order to compare our models directly to their results, we convert our mass and $\dot{m}$ to $M_i$, and convert our fractional variability at 4000\AA~to a magnitude difference ($\Delta m = 2.5 \Delta \log f_{4000}$). Figure~\ref{fig:variability}b shows this magnitude difference as a function of black hole mass and $M_i$ across the whole range of models calculated in Fig.~\ref{fig:variability}a.
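The flux-to-magnitude conversion used here is the standard $\Delta m = 2.5\,\Delta \log f$; a minimal sketch (Python; illustrative helper only):

```python
import math

# Convert a fractional flux change Delta f / f into a magnitude change,
# Delta m = 2.5 * Delta log10 f = 2.5 * log10(1 + Delta f / f).
# Illustrative helper; not part of the agnsed code itself.

def delta_mag(frac_change):
    return 2.5 * math.log10(1.0 + frac_change)

# A 10 per cent flux change is ~0.1 mag; doubling the flux is ~0.75 mag.
print(round(delta_mag(0.10), 3), round(delta_mag(1.0), 3))
```

Note the conversion is non-linear in $\Delta f/f$, which matters when comparing large-amplitude model responses to the logarithmic $SF(\infty)$ statistic.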
Figure~\ref{fig:macleod}a shows a zoom of Fig.~\ref{fig:variability}b, limiting it to the same range in mass and $M_i$ as used by \cite{macleod2010}. This can then be compared directly against the data in Fig.~\ref{fig:macleod}b (C. MacLeod, private communication). The range in $M_i$ for each black hole mass spanned by our models at a given $\dot{m}$ is shown by the cyan lines for $\dot{m}$ of 0.03, 0.1 and 1. This makes it plain that the most variable AGN are those with implied $\dot{m}<0.03$. These plausibly connect to the 'changing look' quasars, if these are triggered by a state change from an outer disc to an ADAF flow (e.g., \citealt{noda2018}). Figure~\ref{fig:macleod}a also stresses the need to use the non-linear conversion between optical flux and bolometric luminosity which is inherent in the standard disc equations. Hence our models with $\dot{m}=0.03$--1 do not span the entire range of $M_i$ which is associated with this range in $\dot{m}$ in Fig.~15 of \cite{macleod2010}. There are again some AGN with $\dot{m}>1$. These are rare (Fig.~12 in \citealt{macleod2010}), but they considerably extend the range in $M_i$ for the lowest mass AGN included here. The colour grid is the same between Fig.~\ref{fig:macleod}a and b, and the models with a factor 2 variability in X-rays give the observed amount of optical variability at $\dot{m}=0.03$. Our models predict a weak trend for higher variability at higher black hole masses at fixed $\dot{m}$, which is opposite to the observed weak trend for higher variability at lower $M$, but this may not be a serious discrepancy as the timescales for the higher mass black holes are longer, which leads to some variability being missed. A much bigger discrepancy is that the models predict almost no variability at $\dot{m}=1$, unlike the data which still show variability at the 10\% level. This clearly shows that some other component is required to make at least part of the optical variability, though the most rapid variability (e.g.
from Kepler light curves: \citealt{aranzana2018}) must arise from X-ray reprocessing. An additional source of longer term optical variability also matches results from more intensive monitoring campaigns, which stress the lack of correlation between the X-ray and optical lightcurves (e.g., \citealt{arevalo2009}). \cite{gardner2017} suggest that variability of the soft X-ray excess may play a role, but in our models here this still makes little impact on the optical spectrum at $\dot{m}=1$. Instead, there should be intrinsic variability in the disc spectrum, assuming our disc dominated SEDs are the correct description of AGN close to the Eddington limit. Standard disc models do indeed predict that AGN discs should be highly unstable due to their dominant radiation pressure, but the non-linear outcome of what should be limit cycle variability is not yet known (\citealt{hameury2009} use a heating prescription which scales with gas pressure only, to avoid complete disc disruption). \begin{figure} \begin{center} \includegraphics[angle=0,width=0.95\columnwidth]{fig9.eps} \end{center} \caption{Effects of hard X-ray reprocessing for a black hole of $M=10^8~M_\odot$ with $\dot{m}=0.05$ (a) and 0.5 (b). SEDs in which the hard X-ray luminosity is increased by a factor of 2 (red) are compared with those with the Novikov-Thorne emissivity including hard X-ray reprocessing (black). } \label{fig:reprocess_time} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.1\columnwidth ]{fig10a.eps} \includegraphics[width=1.1\columnwidth ]{fig10b.eps} \end{center} \caption{(a) The fractional variability $\log \Delta f_{4000}/f_{4000}$ is shown as a colour grid over black hole mass $\log M/M_\odot$ and $\log \dot{m}$, with the hot X-ray emission increased by a factor of 2. (b) Same as the top panel, but $\log \Delta f_{4000}/f_{4000}$ and $\log \dot{m}$ are converted into $\log \Delta$mag at 4000\AA~and $i$-band absolute magnitude $M_i$.
SEDs are based on the parameters of model (1) in table~\ref{tab:parameter}.} \label{fig:variability} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.1\columnwidth ]{fig11a.eps} \includegraphics[width=1.1\columnwidth ]{fig11b.eps} \end{center} \caption{ (a) Enlargement of Fig.~\ref{fig:variability}b. Black dots mark the grid points at which we calculate the models. (b) $\log SF(\infty)$ at 4000\AA~is plotted in the space of $M_i$ and $\log M/M_\odot$ (Fig.~14 in \citealt{macleod2010}). These data points were provided by C. MacLeod (private communication). The colour grid is the same between the two panels. Contours of constant $\Delta$mag in the top panel are overlaid on the bottom panel. Solid cyan lines show constant $\dot{m}$ of 0.03, 0.1 and 1.} \label{fig:macleod} \end{figure} \section{Summary and Conclusions} We construct a new spectral model, {\sc agnsed}, which includes an outer standard disc, a middle region where the disc is covered by optically thick, warm Comptonising electrons, and an inner region of hot plasma which emits the power law X-ray component. We assume that these regions are separated in radius, and that their emission is determined by the overall NT emissivity. This sets the size scale of the hot X-ray plasma, and we include reprocessing of the X-rays from this source which illuminate the outer and warm Comptonising disc. We fit this model to multiwavelength SEDs of three well observed AGN with very different Eddington ratios: NGC~5548 ($\dot{m}\sim 0.03$), Mrk~509 ($\dot{m}\sim 0.1$), and PG~$1115+407$ ($\dot{m}\sim 0.4$). The observed spectra are well reproduced by the model, and require an outer standard disc as well as a warm Comptonisation component. This differs from the conclusions of previous spectral fits because we constrain our warm Comptonisation component to have seed photons and luminosity from an underlying disc rather than allowing these to be free parameters. The midplane disc is generally passive, i.e.
the seed photons are produced by reprocessing rather than by intrinsic dissipation, which sets the spectral index to $\Gamma_{\rm warm}=2.5$ \citep{petrucci2017}. The transition between the standard disc and warm Comptonisation is always at temperatures consistent with the peak in UV opacity, which might point to its origin in the changing disc structure due to failed UV line driven winds \citep{laor2014}. The hot plasma has almost constant dissipation, consistent with 0.02--$0.04L_{\rm Edd}$ for all $\dot{m}$. This implies a smaller size scale with increasing $\dot{m}$, as inferred from X-ray variability \citep{mchardy2006}. The hard X-ray spectral index is consistent with this dissipation always taking place in a region with no underlying disc, i.e., a truncated disc/hot inner flow geometry. Fixing this derived geometry gives a full SED model which depends only on mass and mass accretion rate. This model successfully explains the observed tight UV/X relation shown by \cite{lusso2017} as a combination of the constant hard X-ray dissipation together with selection effects, which mean that the rarer, higher mass quasars are seen preferentially at larger distances and so require higher Eddington fractions to be detected. This selection effect introduces scatter and bias, but our model gives a physically based understanding of these factors, so that the relation can be used to probe cosmology. The model includes the contribution to the optical/UV flux from X-ray illumination of the outer disc and warm Comptonisation region. We calculate the optical variability resulting from a stochastic change in X-ray flux. This predicts that the fast optical variability should be a strongly decreasing function of Eddington fraction, as the fixed (average) hard X-ray dissipation is a smaller fraction of the bolometric luminosity at higher Eddington ratios.
This matches some of the trends seen in systematic surveys of AGN variability, e.g., SDSS Stripe 82 \citep{macleod2010}, but there is more variability seen in high Eddington fraction AGN than predicted. This probably indicates that such highly radiation-pressure-dominated discs are somewhat unstable. This should motivate theoretical studies to give a better understanding of such discs. \section*{Acknowledgements} We thank M. Mehdipour for providing us with the SEDs of \cite{509,5548} and for helpful comments on the data. We are also grateful to C. Macleod and E. Lusso for providing us with the data points in \cite{macleod2010} and \cite{lusso2017}, and for useful discussions and comments. Special thanks to H. Noda and C. Jin for helpful discussions. AK is supported by the overseas research program of the Shibaura Institute of Technology. CD acknowledges support from STFC (ST/P000541/1), and useful conversations with O. Blaes and J.M. Hameury. We also thank our referee, P.~O.~Petrucci, for valuable comments. \bibliographystyle{mnras}
\section{ABOUT MIND-MAPS}\label{intro} Transactional streams are to be understood as an endless flow of data that is lost once it has been read. The data can be classified into categories, for example sentences or paragraphs inside a text document. These categories form the transactions, which contain items, for example words or paraphrases. An example of a transactional stream is the reading of a book, where the text is read exactly once but lost once it has been pronounced. The management, and moreover the analysis, of transactional streams is often problematic for several reasons. One of them is that data streams are potentially infinite, or at least their end is not known until it is actually reached. Storing the whole data stream is therefore not an option, and the analysis cannot rely on traditional mining techniques that require the whole dataset to be available or that need random access or multiple passes over the data. Currently, a lot of research focuses on the processing of such streams of different kinds. Typical techniques used with data streams are sliding windows, incremental approaches, and synopses of the data. Surveys of current methods and issues can be found in \cite{BAB02}, \cite{DOM01}, \cite{GOL03}, \cite{MUT03}. With the discussion of \texttt{mind-maps}, we argue for their eligibility by demonstrating their applicability to a couple of algorithmic ideas. In our understanding, \texttt{mind-maps} are adaptive and incremental knowledge structures that evolve depending on the occurrence of an input stream. A first approach to stream data analysis with mind-maps was the processing of transactional streams through the creation of \texttt{mini-networks}. Based on transactional data \cite{SCH04}, a mini-network consists of simple symbolic cells that carry a weight value and that each represent an individual item in a transaction.
The symbolic cells are interconnected with the other cells that occur in the same transaction. In a subsequent step, the mini-network is integrated into the mind-map itself, where its cells are merged with those in the mind-map in case they are identical (\texttt{merge}). Following the fundamental principles of adaptation and Hebbian learning, the mind-map can be seen as a living engine: it is initially empty but grows over time. Since the states of connections and cells change over time, cells may die or revive as well. Focusing on the skeleton of the \texttt{mind-map}, a retrieval delivers the strongest cell connections. Although \texttt{mind-maps} often refer to such a structure for processing fluid signals like text streams, their associative nature is usually claimed to be the fundamental principle. This is true; however, we believe that a temporal character of such systems should be accepted as well, in order to manage the temporal stimulation that comes in. Secondly, mind-maps can be seen from both a verificative and an explorative perspective: talk of mind-map software mostly refers to a top-down directed production of logically connected entities, for example work-flows or coherent cogitations (\cite{CHGM}, \cite{EPPL}, \cite{HAAS}), whereas the term \texttt{explorative} relates more to a learning or discovery process for such logical structures. In this respect, the following applications refer to the second class of mind-maps and present some algorithmic ideas for the temporal processing of text streams in order to map contextual information. A simple implementation of mind-maps is the system \texttt{ANIMA}. It refers to a mind-map model of an incremental and adaptive nature and makes it possible to manage associations between symbolic cells while taking a transaction stream as input. The aim of \texttt{ANIMA} is the efficient processing and management of transactions over time, in order to reveal related patterns inside a stream \cite{SCH04}.
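The merge-and-decay principle described above can be made concrete in a short Python sketch. This is an illustration of the mechanism only, not the actual \texttt{ANIMA} implementation; the reinforcement value and the multiplicative decay factor are assumed parameters.

```python
from collections import defaultdict

class MindMap:
    """Toy mind-map over a transaction stream (illustrative sketch)."""

    def __init__(self, reinforce=1.0, decay=0.95):
        self.weights = defaultdict(float)  # symbolic cells and their weights
        self.edges = defaultdict(float)    # weighted connections between cells
        self.reinforce = reinforce
        self.decay = decay

    def merge(self, transaction):
        # All existing cells and connections slowly decay ...
        for cell in self.weights:
            self.weights[cell] *= self.decay
        for edge in self.edges:
            self.edges[edge] *= self.decay
        # ... while the mini-network of the new transaction is merged in:
        # identical cells are reinforced (Hebbian-style strengthening).
        items = list(dict.fromkeys(transaction))
        for item in items:
            self.weights[item] += self.reinforce
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                self.edges[frozenset((a, b))] += self.reinforce

    def strongest(self, n=3):
        # Retrieval: the skeleton of the mind-map, i.e. the strongest connections.
        return sorted(self.edges, key=self.edges.get, reverse=True)[:n]
```

Cells that are never stimulated again decay towards zero and can be pruned below a threshold, which is how cells "die" in this sketch.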
\section{APPLICATIONS} \subsection{TARGETING NETWORK PROTECTION}\label{network} Network-based anomaly detection \cite{HIL06} \cite{SUL96} refers to a system-based understanding of the structure and behaviour of network traffic, and in this respect to the identification of abnormal situations. One promising application is the mind-map model \texttt{ANIMA-AR}, which is implemented to represent network traffic events, treating each network packet, including header and content, as one transaction. It fragments such packet transactions into meaningful symbolic cells within the mind-map and connects the packet cells. Additionally, connection values are established according to the corresponding frequencies. Due to this architecture, we may rate the usual network traffic with a lower connection weight - although its frequency is high - but rate abnormal/unusual network behaviour with a higher value - although it appears more seldom. We identify abnormal traffic because it is rated significantly, which makes the approach tolerant against temporal connection updates between the symbolic cells. The following sequence of pictures gives an example of how the mind-map is used; it refers to the management of bad signatures as described in \cite{HIL06}, taking several insertion rules into account. First, the incoming signature ABC is considered and assigned to symbolic cells with equal weights. \begin{figure}[h] \centering \includegraphics[width=2.25cm]{sc06a.jpg} \end{figure} Then a new signature CDEF is added to the mini-network, and the weights are adapted. At each step, substrings are considered and evaluated as follows: given a sub-string, if the sum of all activation states \dots \begin{itemize} \item \dots is exactly 1, then a virus alert takes place. \item \dots is exactly 0, then no virus alert takes place. \item \dots is between 0 and 1, an alert takes place with a probability value. \item \dots exceeds 1, then it is not considered.
\end{itemize} For example, ABC forces a virus alert as it is infected for 100\%, whereas only H is unlikely to be infected. To continue the example: as \texttt{C} is already known, its value stays constant. \begin{figure}[h] \centering \includegraphics[width=4.5cm]{sc06b.jpg} \end{figure} The following pictures of the mind-map refer to the situations when the signatures \texttt{CDE} \begin{figure}[h] \begin{center} \includegraphics[width=4.5cm]{sc06c.jpg} \end{center} \end{figure} and \texttt{CDGH} \begin{figure}[h] \begin{center} \includegraphics[width=4.5cm]{sc06d.jpg} \end{center} \end{figure} are being inserted. With this, the probability values for each signature are clearly available throughout the whole life-time of the mind-map. More information can be found in \cite{HIL06}. The mind-map model ANIMA-AR is implemented to detect well-known viruses. Virus signatures are managed and stored in a graph-like structure, and incoming packets are evaluated against them to identify intrusions. The scanning speed and the required storage space outperform current approaches, which emerges from the compression of the signature database. \texttt{ANIMA-AR} has been theoretically analysed, showing that viruses and similarities are detected. Simulations substantiate the theoretical analysis and show a low false-positive rate while tolerating normal system behaviour. In addition, ANIMA-AR is able to automatically detect similar viruses, such as small mutations or new variants. \subsection{MANAGING IMPLICIT FEEDBACK}\label{retrieval} The consideration of implicit feedback in the field of information retrieval and the automatic collection of information about the user's behaviour \cite{LPT99} \cite{WEI07} is an application of interest. Without an explicit request for information, we intend to learn what is really interesting. The aim is to use the mind-map as an adaptive storage for this kind of information and, consequently, for the enhancement of user-based search requests.
We therefore monitor the user's behaviour in interaction with a search engine, keeping an eye on queries and their results, on links that the user follows, and on diverse time-related information, for example how long he or she stays on web sites. Search sessions like this are treated as single transactions for the mind-map. Building up such a network provides information about typical queries, results, and measures, especially regarding the relevance of search results, with the aim of enhancing further queries. An example might be giving additional hints or altering the results themselves. One concrete step is a re-ranking of search results, which may help to move those results towards the top of the list that are probably of greater importance for the given query. Following this, the strengths of the mind-map are mainly due to its ability to cope with transactional streaming data. We deal with information about the search sessions of users, which can easily be broken down into transactions. Moreover, the mind-map stores only the most important aspects of the information without the need to store all feedback data. This helps to keep the network at a reasonable size. Furthermore, the dynamic nature of this mind-map fits quite well with the purpose of this approach: if the user's feedback changes over time, then this trend will be reflected in the mind-map as well. Figure \ref{fig:sc10} shows an architectural snapshot of the mind-map, where we use three different types of cells: \begin{itemize} \item Query terms: single terms that are observed in user queries. \item Queries: these form the transactional input to the mind-map. \item Documents: the resulting list of documents that have been provided by the underlying search engine for one or more queries.
\end{itemize} \begin{figure}[htbp] \centering \includegraphics[width=7cm]{sc10.jpg} \caption{Architecture of the mind-map \cite{WEI07}} \label{fig:sc10} \end{figure} Different connections may exist between these nodes, which might be weighted to indicate the strength of the relationship. These are, for example, connections that indicate relationships between different query terms, connections between queries and query terms, and connections between queries and documents. More information can be found in \cite{WEI07} \cite{WEI08}. \subsection{TARGETING DBLP} Bibliographical databases such as \texttt{Citeseer}, \texttt{Google Scholar}, and \texttt{DBLP} serve as bibliographic sources with a lot of information concerning a publication. This comprises the names of the authors, the publication title, the conference, and many other attributes. A bibliographic database is accessible online, where all entries share an electronic index to articles, journals, magazines, etc. containing citations and abstracts. Understanding the bibliographic database as a digital collection that is intelligently managed and that supports searching for and retrieving bibliographic information for given queries is a convenient procedure for obtaining information just in time. Typically, the retrieval is based on a collection of queries consisting of keywords from the publication title or the keyword list. For example, with \texttt{DBLP}, a query with the keyword \texttt{plagiarism} leads to an answer set of almost 70 articles, and a search refinement with \texttt{detection} and \texttt{pattern} to 34 and 2 bibliographic entries, respectively. Accordingly, the two remaining publications are closely related, and the names of the authors overlap: {\small \begin{verbatim} 1 NamOh Kang, Sang-Yong Han: Document Copy Detection System Based on Plagiarism Patterns. CICLing 2006: 571-574. 2 NamOh Kang, Alexander F.
Gelbukh, Sang-Yong Han: PPChecker: Plagiarism Pattern Checker in Document Copy Detection. TSD 2006: 661-667. \end{verbatim} } With this, we refer to a graph structure representing the association of the three authors \texttt{Han}, \texttt{Kang} and \texttt{Gelbukh}. This is similar to \cite{KEM01}, who introduces mini-networks. The double edge signalises a double connection, whereas the single (and parallel) edges between \texttt{Han} and \texttt{Gelbukh} and between \texttt{Kang} and \texttt{Gelbukh} refer to one-way associations. Whereas the meaning of the double connection is unique, standing for two joint publications, the single edges to \texttt{Gelbukh} seem to be ambiguous: on the one hand, they could refer to one common publication with both other authors, on the other hand to two separate publications with one of the other authors each. However, the graph is node-oriented in the sense that it simply represents the situation as it is: \texttt{Gelbukh} has one publication in common with both of them, and here it plays no role whether this is a joint one or not. For static databases, the discovery of associative patterns has been an area of extensive research, and multiple approaches and solutions to the static problem have been presented in the past. A major problem in these approaches is the combinatorial explosion of the search space, and research has therefore mainly focused on reducing this space. Since mind-maps are targeted at data streams, they cannot make use of the methods developed for static databases, since these algorithms require multiple passes over the data to calculate the frequencies of associated items. This is not acceptable when dealing with data streams, because data streams are potentially infinite, or at the very least their end cannot be foreseen.
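In contrast, a stream-oriented approach needs only a single pass: each publication record is one transaction, and association frequencies are updated incrementally as the stream is read. A minimal sketch (illustrative only), using the Kang/Han/Gelbukh example from above:

```python
from collections import Counter
from itertools import combinations

pair_counts = Counter()  # co-authorship edge weights, updated incrementally

def observe(authors):
    # One publication record = one transaction; no second pass over the data.
    for pair in combinations(sorted(set(authors)), 2):
        pair_counts[pair] += 1

observe(["Kang", "Han"])
observe(["Kang", "Gelbukh", "Han"])

print(pair_counts[("Han", "Kang")])     # -> 2 (the double edge)
print(pair_counts[("Gelbukh", "Han")])  # -> 1 (a single, one-way association)
```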
In this respect, the idea of searching for temporal patterns in a bibliographic database like \texttt{DBLP}, taking time as the core medium, leads to a couple of interesting questions, for example: \begin{itemize} \item In general, may we discover scientific communities? While observing the visualisation of associative relationships between authors, we might ask whether such dependencies generally form a community and, secondly, how strong these communities may be. Furthermore, if dependencies of communities exist, are these temporal or visiting, recurring, or constant, as mentioned in Section \ref{intro}? \item Do diverse trends in publishing exist? For example, the occurrence of a common publication may be the initiator of a fruitful collaboration (as evidenced by subsequent publications on the same or a different research topic). \end{itemize} \begin{figure}[htbp] \centering \includegraphics[width=7cm]{dblp.jpg} \caption{Temporary mind-map (DBLP, year of 1993), consisting of associated author nodes.} \label{fig:dblp} \end{figure} With this, we may perform mind-mapping over a period of time, for example by moving a corresponding window over time. \subsection{SEMANTIC NET-LEARNING}\label{wiwy} The mind-map model \texttt{WYWI} stands for a simple communication paradigm that focuses on natural language communication. The model uses a mind-map to manage words and their relationships to others associatively. A sentence is read incrementally and treated as a transaction with \texttt{concepts} and \texttt{roles}. Currently, only adjectives, nouns, and verbs are considered worthwhile; they are extracted and put into the mind-map as a semantic structure. Adjectives are considered as sub-concepts of nouns.
For example, {\small \begin{verbatim} #S(CONCEPT :NAME MAN :CAT N :FATHER (ROOT) :CHILDREN (YOUNG) :ROLES (READ) :ACT 0.9577) #S(ROLE :NAME READ :CAT V :CONNECTION ((MAN BOOK 0.9577)) :ACT 0.9577) #S(ROLE :NAME SEE :CAT V :CONNECTION ((LION PETER 0.8253)) :ACT 0.9014) \end{verbatim} } Here, the word \texttt{READ} acts as a relationship (\texttt{role}) connecting \texttt{MAN} and \texttt{BOOK}, sharing a connection and activation value of 0.9577, respectively. Other roles exist, but are unrelated. At each time-step, the activation values are decreased if the role or the associated concepts remain inactive. As in the \texttt{ANIMA} mind-map model, the underlying net structure is refreshed after each transaction; it adapts to the given stimulation. However, \texttt{WYWI} does not use any external source (for example a thesaurus) but actively interacts with the user in case a word is unknown. This is done in order to learn the morphologic basis or the lexical category. \texttt{WYWI} also establishes a semantic representation through micro-features: here, the user must rate a word semantically against given (predefined) entities. This produces a $k$-dimensional space, with $k$ being the number of independent micro-features, for example {\small \begin{verbatim} ... MAN is ((1.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 ...)) LION is ((0.0 0.0 0.0 0.0 0.7 0.0 0.4 ...)) ... \end{verbatim} } The communication in \texttt{WYWI} is performed as a conversation. For a sentence like \texttt{The yellow lion sees Peter}, the word \texttt{YELLOW} becomes a sub-concept of the word \texttt{LION}, whereas \texttt{LION} and \texttt{PETER} are connected by \texttt{SEE}. Additionally, \texttt{SEE} becomes more highly activated. Moreover, all concepts are represented in the $k$-space. As a consequence, a transactional (dialogue) stream can be processed and queries can be sent at any time. {\small \begin{verbatim} ...
WYWI: (ENTER THE NEXT WORD OR CLOSE BY '-') USER: Peter WYWI: COULD YOU PLEASE GIVE SOME EXPLANATION, LIKE THE MORPHOLOGIC BASIS? USER: Peter WYWI: AND WHAT IS THE LEXICAL BASIS OF peter? USER: person WYWI: I NEED SOME SEMANTIC INFORMATION ABOUT peter. HAVE A LOOK AND RATE peter FROM 0 to 1: TALL? USER: 0.9 WYWI: And with HEAVY? ... \end{verbatim} } An extended version of semantic net-learning from text streams is to define a temporal mind-map for a certain actor of a text, where the text stream is consolidated over time into a semantic graph structure. In this work, an important issue for building such an actor-related mind-map is the resolution of linguistic anaphors. So far, the major idea is to process each sentence incrementally so that a general, pre-defined scheme structure of \texttt{subject - verb - object} is instantiated every time (\cite{LMG05}). At the moment, we only deal with nouns, verbs, and adjectives; other linguistic categories are disregarded. In this respect, verbs act as a role between the concepts (subject, object). Each concept may have sub-concepts that correspond to attributes; for example, the sentence \texttt{the old man likes the green juicy grassland} is translated into \begin{figure}[!htbp] \centering \includegraphics[width=0.42\textwidth]{graph1.jpg} \caption{A semantic graph having \texttt{man} as the centric concept. Attributes are adjunct by dashed lines, whereas roles are connected as a circle.} \label{fig:Graph1} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.42\textwidth]{graph3.jpg} \caption{The anaphor \texttt{he} is recognized as being related to \texttt{man}, whereas the two sentence structures are merged by the concept \texttt{man}.} \label{fig:Graph3} \end{figure} where \texttt{man} and \texttt{grassland} represent main concepts and \texttt{green}, \texttt{juicy}, and \texttt{old} sub-concepts.
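The instantiation of the \texttt{subject - verb - object} scheme for this example sentence can be sketched as follows. The tiny hand-made lexicon and the first-noun/second-noun convention are assumptions for illustration only, not the \texttt{WYWI} lexicon:

```python
# Toy lexicon (assumed for this example only)
ADJECTIVES = {"old", "green", "juicy"}
NOUNS = {"man", "grassland"}
VERBS = {"likes"}

def instantiate(sentence):
    """Instantiate the subject-verb-object scheme for one sentence,
    attaching adjectives as sub-concepts of the following noun."""
    concepts, roles, pending = {}, [], []
    for word in sentence.lower().split():
        if word in ADJECTIVES:
            pending.append(word)
        elif word in NOUNS:
            concepts[word] = pending  # adjectives become sub-concepts
            pending = []
        elif word in VERBS:
            roles.append(word)
    subject, obj = list(concepts)  # scheme: first noun = subject, second = object
    return {"subject": subject, "role": roles[0], "object": obj,
            "sub-concepts": concepts}

print(instantiate("the old man likes the green juicy grassland"))
```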
The process of relating identical concepts together is accomplished by a matching of identical words and a resolution of linguistic anaphors (\cite{LL94}, \cite{MIT98}). Figure \ref{fig:Graph3} shows the semantic structure after the sentence \texttt{He walks into the forest} has been read. So far, the incremental processing of texts can be stopped at any moment; the semantic structure is then a mind-map with concepts and relationships in consistent states. Besides focusing on the elaboration of methods to resolve pronominal anaphora and co-reference in text streams, we will assign weight values to concepts and roles in order to reflect their importance for an actor. The idea of content zoning refers to a segmentation of a text document into semantic zones. As indicated in \cite{BW07}, and moreover as first discussed in \cite{TE99} with \texttt{Argumentative Zoning}, the basic idea here is to structure texts on the basis of pre-defined categories. An example might be the following text, having the actors \texttt{Harry}, \texttt{Hedwig}, and \texttt{owl}: {\small\it ``Harry got up off the floor, stretched, and moved across to his desk. Hedwig made no movement as she began to flick through newspapers, throwing them into the rubbish pile one by one. The owl was asleep or else faking; she was angry with Harry about the limited amount of time she was allowed out of her cage at the moment. As her neared the bottom of the pile of newspapers. Harry slowed down, searching for one particular issue that he knew had arrived shortly after he had returned to Privet Drive for the summer, he remembered that there had been a small mention on the front about the resignation of Charity Burbage, the Muggle Studies teacher at Hogwarts."} which can then be zoned to {\small\it \begin{itemize} \item Harry \ms{ACTOR=HARRY} got up off the floor, stretched, and moved across to his \manapher{his $\Rightarrow$ HARRY} desk.
\me{ACTOR=HARRY} \item Hedwig \ms{ACTOR=HEDWIG} made no movement as she \manapher{she = HEDWIG} began to flick through newspapers, throwing them \manapher{them = newspapers} into the rubbish pile one by one. \item The owl \ms{ACTOR=OWL} was asleep or else faking; she \manapher{she = OWL} was angry with Harry \manapher{Harry = HARRY} about the limited amount of time she \manapher{she = OWL} was allowed out of her \manapher{her $\Rightarrow$ OWL} cage at the moment. As her \manapher{her $\Rightarrow$ OWL} neared the bottom of the pile of newspapers, \me{ACTOR=OWL}. \item Harry\ms{ACTOR=HARRY} slowed down, searching for one particular issue that he \manapher{he = HARRY} knew had arrived shortly after he \manapher{he = HARRY} had returned to Privet Drive for the summer, he \manapher{he = HARRY} remembered that there had been a small mention on the front about the resignation of Charity Burbage, the Muggle Studies teacher at Hogwarts. At last he \manapher{he = HARRY} found it.\me{ACTOR=HARRY} \end{itemize} } where linguistic anaphors are resolved depending on the current actor, the gender, and/or between different candidates that have already occurred in the text. Knowing that the owl has the name Hedwig and, moreover, having a hierarchy available, the zoning can be refined even further. An additional analysis can be performed to gain further information about the separate zones. Information that is rather simple to extract includes statistics about the size and layout of zones, but a more sophisticated analysis of their text content is possible as well. The latter can lead to an extraction of the semantic content and purpose of a zone. Such information can be used for various purposes, such as comparing documents with each other (regarding their analysed zone structure and content) by simply referring to information about the zones as zone variables (\cite{BW07}).
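A minimal sketch of such zone variables, here for a single zone of the example text (the stopword list and the choice of variables are illustrative assumptions, not those of \cite{BW07}):

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "to", "was", "she", "her", "he", "his"}  # toy list

def zone_variables(zone_text, position):
    # Document-dependent variables: position in the stream, length in words,
    # and the most frequent non-stopword of the zone.
    words = [w.strip(".,;").lower() for w in zone_text.split()]
    content = [w for w in words if w and w not in STOPWORDS]
    top_word, _ = Counter(content).most_common(1)[0]
    return {"position": position, "length": len(words), "top_word": top_word}

owl_zone = ("The owl was asleep or else faking; she was angry with Harry "
            "about the limited amount of time she was allowed out of her "
            "cage at the moment.")
print(zone_variables(owl_zone, position=2))
```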
We apply content zoning to text streams in order to establish a semantic mind-map, using a sliding window of user-specified length; text is buffered in, immediately zoned, and analysed. Intermediate statistical results are managed to produce a user-specific summary on a given subject. The definition of zones alone is insufficient for effective zoning, as zones only indicate the position of a piece of information in the text but give little information about its content. A solution is the introduction of zone variables, which describe the content of the input stream and are the core parameters for the summary generation. In this respect, we deal with two categories of zone variables, namely \texttt{document independent} and \texttt{document dependent} ones, for example the position of the zone in the text stream, its length, or the most frequently occurring word (without stopwords). During the zoning process, nearly all sentences are attributed to zones. Useless sentences and sentences that cannot be attributed to a zone are skipped or can be regrouped into a user-defined zone. After different steps, such as anaphor resolution, the values of the variables are used for statistical evaluations to generate a summary on a user-specific subject. During the process of buffering and processing the text streams, there is an option of real-time evaluation, so that changing values are immediately visible to the user. At any instant, the user has the possibility to call up a summary on a previously defined subject, for example an actor of a fairy tale. Less sophisticated results, such as the most frequently occurring words or collocations, can also be retrieved at any moment. \section{CONCLUSIONS} A mind-map is an adaptive engine that basically works incrementally on the basis of transactional streams.
Following our model, mind-maps consist of symbolic neural cells that are connected with each other and that become either stronger or weaker depending on the transactional stream: based on the underlying biological principle, these symbolic cells, as well as their connections, may adaptively survive or die, forming different cell agglomerates of arbitrary size. With that, mind-maps may be applicable to the management of trust as well: every human has his or her own attitude towards others; for example, \texttt{Person R} has an attitude towards \texttt{Person S} and vice versa. Both are probably different from each other and different from \texttt{Person T}'s view of \texttt{Person R} or \texttt{Person S}. Furthermore, one might draw the conclusion that \texttt{Person R} trusts \texttt{Person S} but not vice versa. With this approach, we may follow a proposition by \cite{MI05}, who suggests a model of human conversations. The attitude of someone's mind is modelled as a self-organising mind-map. Every person has a model of his/her view of the world and models for other people with whom he/she has interacted. All views, including the self view and the views of other persons (others), are modified through conversations between people. We therefore intend to introduce an engine to find regularities between a human's objects and to model trust based on this. This requires the creation of an artificial mind-map, where a textual data stream is read and represented in an associative dynamic network. Each incoming stream is decomposed into its items; for example, a text stream may be decomposed into words. \ack This work has been done at the MINE research group at the Laboratory for Intelligent and Adaptive Systems across several research projects funded by the University of Luxembourg and the Ministry of Higher Education. {\small
\section{Introduction} \label{sec:intro} Light-matter interactions and nonequilibrium dynamics are unifying themes connecting solid state physics, quantum optics, and atomic and molecular physics. The prospect of controlling properties on demand in quantum materials by coupling them to laser fields has spurred enormous activity recently. Experimental progress in time-resolved spectroscopy has allowed researchers to obtain detailed information about the ultrafast dynamics of laser-excited quantum materials \cite{giannetti_ultrafast_2016,basov_towards_2017,cavalleri_photo-induced_2018}. This is particularly relevant for the investigation of light-induced changes in material properties on ultrafast time scales. Notable examples are Floquet topological states of matter \cite{oka_photovoltaic_2009,lindner_floquet_2011,kitagawa_transport_2011,wang_observation_2013,mahmood_selective_2016,usaj_irradiated_2014,dehghani_dissipative_2014,sentef_theory_2015,claassen_all-optical_2016,hubener_creating_2017,mciver_light-induced_2020}, also demonstrated for cold atoms in optical lattices \cite{jotzu_experimental_2014,aidelsburger_measuring_2015}, ultrafast modifications of effective interactions and their consequences for the emergent material properties \cite{singla_thz-frequency_2015,dutreix_dynamical_2017,kennes_transient_2017,sentef_light-enhanced_2017,tancogne-dejean_ultrafast_2018,topp_all-optical_2018,golez_dynamics_2019,buzzi_photo-molecular_2020}, and cavity material engineering with quantum fluctuations of light \cite{laussy_exciton-polariton_2010,cotlet_superconductivity_2016,kavokin_excitonpolariton_2016,hagenmuller_cavity-enhanced_2017,sentef_cavity_2018,rosner_plasmonic_2018,schlawin_cavity-mediated_2019,mazza_superradiant_2019,curtis_cavity_2019,kiffner_manipulating_2019,hagenmuller_enhancement_2019,allocca_cavity_2019,rokaj_quantum_2019,latini_cavity_2019,forg_cavity-control_2019,thomas_exploring_2019}. 
The theoretical description of these examples of light-induced states of matter usually involves electronic single-particle excitations, which are fermionic in nature. By contrast, polaritonic systems can often be described by a purely bosonic theory, for example in polaritonic condensates \cite{byrnes_excitonpolariton_2014}. In particular, topological edge states \cite{hasan_colloquium_2010,qi_topological_2011} in those systems are often well described already by an effective semiclassical approach, which simplifies theoretical calculations. This has led to a variety of proposals for realizing topological polaritons, for example in photonic \cite{haldane_possible_2008,wang_observation_2009,rechtsman_photonic_2013,hafezi_imaging_2013}, acoustic \cite{yang_topological_2015,peano_topological_2015,fleury_floquet_2016}, and mechanical systems \cite{nash_topological_2015,susstrunk_observation_2015}. Other work providing evidence for the growing range of polaritonic platforms potentially hosting useful edge states includes studies of molecule-by-molecule assemblies and optically trapped ultracold atoms \cite{polini_artificial_2013} as well as semiconductor microcavities \cite{jacqmin_direct_2014,milicevic_edge_2015,sala_spin-orbit_2015}. Recently, polaritonic systems hosting chiral topological edge states were proposed theoretically \cite{karzig_topological_2015,nalitov_polariton_2015,yuen-zhou_plexciton_2016} and measured experimentally \cite{klembt_exciton-polariton_2018}. For any practical application of chiral topological edge modes it is of key importance to be able to selectively populate these modes, and to track whether such selective population has been successful. However, for example in the work by Karzig et al.~\cite{karzig_topological_2015}, the degree of control over the population of such an edge mode by tuning the relevant parameters was not discussed in much detail.
In particular it was only noted in passing that the controlled population of the chiral edge mode by laser driving is enabled either by tuning the laser frequency into resonance with the topological polariton band gap or by focussing the laser spot to the edge in real space, or a combination thereof. In the experimental work by Klembt et al.~\cite{klembt_exciton-polariton_2018} an exciton-polariton chiral topological system was realized in a honeycomb lattice with a magnetic field breaking time-reversal symmetry. It was shown by photoluminescence measurements that chiral edge modes could be populated by laser pumping. Due to the fact that this selective edge-mode population was only observed above a certain threshold power, it was argued that the formation of a polariton condensate was a prerequisite of such edge mode population. This condensate formation, in turn, requires a nonlinearity in the underlying Gross-Pitaevskii equation, stemming from a polariton-polariton interaction in the microscopic model. Here we show by calculations for a topological polariton lattice model how real-space imaging and time-resolved spectroscopy go hand-in-hand in demonstrating selective edge mode excitations by laser driving. We demonstrate that selective excitation of chiral edge modes is possible without the necessity of polariton condensation due to nonlinearities, that is, without polariton-polariton interactions. However, in the absence of nonlinearities it is necessary that the pump laser frequency is tuned in resonance with the edge mode, that is, with the bulk topological band gap. Complementary real-space imaging then reveals the edge localization and chiral character of the populated modes. This paper is organized as follows: In Section~\ref{sec:model}, the model is presented and its physics introduced. In Section~\ref{sec:dynamics}, we discuss how the dynamics of the laser-driven system is simulated and tracked through time-resolved spectroscopy. 
Section~\ref{sec:results} contains the results of these simulations, and Section~\ref{sec:discussion} contains a discussion and conclusions. \begin{figure}[tbp] \centering \includegraphics[width=0.46\textwidth]{figure0_new.pdf} \caption{Illustration of a semiconductor cavity setup. An isolated quantum well (QW) is placed between two optical distributed Bragg reflectors (DBRs). The cavity photon modes are populated by optical pumping and couple to excitonic states within the QW plane via dipole interaction. The specific coupling under consideration here yields topological edge states which can be selectively populated. } \label{fig:1} \end{figure} \section{Model} \label{sec:model} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{fig2.pdf} \caption{ (a) Complex phase of the chiral exciton-polariton coupling [Eq.~\eqref{eq:coupling}] with a branch cut along the $\mathrm\Gamma = (0, 0)$ to $\mathrm X = (-\frac{\pi}{\ell_x}, 0)$ line. This is the phase-winding structure that leads to a non-trivial topological band structure. (b) Real-space lattice structure of a ribbon with periodic boundary conditions in $x$ and open boundary conditions in $y$ direction. (c) Tight-binding topological polariton band structure of the Hamiltonian $\hat H_0$ [Eq.~\eqref{eq:H0}] on the ribbon geometry. The bands are colored according to their localization at the top or bottom boundary as indicated in panel (b). The model parameters are $t_\mathrm 1 = 0.002 |t_\mathrm 0|,$ $g_0 = 0.2 |t_\mathrm 0|,$ $\varepsilon_\mathrm 0 = -4 |t_\mathrm 0|$, and $\varepsilon_\mathrm 1 = -1.792 |t_\mathrm 0|$, with an effective photon hopping $t_\mathrm 0 < 0.$ } \label{fig:2} \end{figure} Our setup is shown in Fig.~\ref{fig:1}. A quantum well (QW) harbors exciton modes that couple via dipole-dipole interactions to photon modes in a cavity, leading to the formation of exciton polaritons.
Alternatively, the QW could just as well be replaced by monolayer transition metal dichalcogenides \cite{karzig_topological_2015,xiao_coupled_2012}. In the following we will assume that there is a chiral coupling between excitons and photons. This can be achieved by employing an external magnetic field to energetically select chiral electronic states over their time-reversed counterparts with opposite chirality, which has been suggested by Karzig et al. \cite{karzig_topological_2015} and experimentally realized by Klembt et al. \cite{klembt_exciton-polariton_2018}. From these chiral electrons one obtains excitons with nonzero total angular momentum. These excitons then couple in a chiral fashion to the photon branch with opposite angular momentum compared to the exciton, whereas they couple in a nonchiral fashion to the photon branch with equal angular momentum. Thus, as noted by Karzig et al., the chiral coupling can be viewed as a simple consequence of angular-momentum conservation. In this work, we will focus our attention on a tight-binding model on a two-dimensional square lattice, which is designed to feature non-trivial topology arising from the chiral coupling of two bosonic modes in analogy to this mechanism. A simple coupling satisfying this requirement while at the same time being consistent with the lattice periodicity is given by \begin{align} \label{eq:coupling} g(\vec k) & = g_0\left[ \sin(\ell_x k_{x}) + \mathfrak i \sin(\ell_y k_{y}) \right] \end{align} with coupling strength $g_0 > 0$ and where $\ell_{x/y}$ denotes the respective lattice constant. For one revolution of $\vec k$ around the origin, the coupling accumulates a phase of $2\pi$ [Fig.~\ref{fig:2}(a)]. This winding is the key ingredient leading to a non-trivial topology of the system.
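This winding can be checked numerically (an illustrative sketch of our own, not part of the original analysis; the values of $g_0$, the lattice constants and the loop radius are arbitrary choices):

```python
import numpy as np

# Chiral coupling g(k) = g0 * (sin(lx*kx) + i*sin(ly*ky)); we track the
# accumulated complex phase along a small loop around the Gamma point.
def coupling(kx, ky, g0=0.2, lx=1.0, ly=1.0):
    return g0 * (np.sin(lx * kx) + 1j * np.sin(ly * ky))

def phase_winding(radius=0.5, n_points=1000):
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    g = coupling(radius * np.cos(theta), radius * np.sin(theta))
    # unwrap the phase so branch-cut jumps do not spoil the accumulation
    dphi = np.diff(np.unwrap(np.angle(np.append(g, g[0]))))
    return dphi.sum() / (2.0 * np.pi)

print(round(phase_winding()))  # → 1 (one full 2*pi revolution)
```

Only the zero of $g(\vec k)$ at the origin is enclosed by the loop, so the winding number is $+1$, in line with the $2\pi$ phase accumulation stated above.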
In the presence of periodic boundary conditions in both directions, the full static Hamiltonian in momentum space representation has the form \begin{align} \label{eq:H0-k} \hat H_0 = \sum_{\vec k \in \mathfrak L'} \begin{pmatrix} \hat a^\mathrm 0_{\vec k} \\ \hat a^\mathrm 1_{\vec k} \end{pmatrix}^\dagger \begin{pmatrix} \tau_\mathrm 0(\vec k) & g(\vec k) \\ g^*(\vec k) & \tau_\mathrm 1(\vec k) \end{pmatrix} \begin{pmatrix} \hat a^\mathrm 0_{\vec k} \\ \hat a^\mathrm 1_{\vec k} \end{pmatrix} \end{align} where the bosonic operators $\hat a^\mathrm 0_{\vec k}$ ($\hat a^\mathrm 1_{\vec k}$) annihilate a photon (exciton) with momentum $\vec k.$ Further, $\mathfrak L'$ denotes the discretized first Brillouin zone of the lattice and \( \tau_\alpha(\vec k) = 2 t_\alpha \left[ \cos(\ell_x k_{x}) + \cos(\ell_y k_{y}) \right] - \varepsilon_\alpha \) (for $\alpha \in \{0,1\}$) is the tight-binding dispersion with exciton/photon hopping $t_\alpha$ and $\varepsilon_\alpha$ an energy offset. The static Hamiltonian has a band structure that corresponds to a tight-binding version of the continuum model presented by Karzig et al.~\cite{karzig_topological_2015}. The momentum-dependence of the coupling $g(\vec k)$ leads to a non-zero Chern number $C_{\pm} = \mp 1$ of the upper (${+}$) and lower (${-}$) polariton band. A more detailed explanation is given in the Appendix. In order to study the real-space behavior of edge mode excitations, we introduce boundaries to the system by imposing open boundary conditions in $y$ direction, while keeping periodic boundary conditions in $x$ direction (so that the momentum $k_x$ is still a good quantum number). This leads to a ribbon geometry, as shown in Fig.~\ref{fig:2}(b). Let $\mathfrak L$ denote this real-space lattice of size $N = N_x \times N_y$ and $\hat a^{\C}_i$ ($\hat a^{\X}_i$) the photonic (excitonic) field operator at site $i \in \mathfrak L$. 
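These statements can be cross-checked with a short numerical sketch (our own illustration; we set $\hbar = \ell_x = \ell_y = 1$, use the parameters quoted in Fig.~2 with $t_0 = -1$ as the unit of energy, and evaluate the Chern numbers with the discretized Berry-curvature method of Fukui et al. referenced in the Appendix):

```python
import numpy as np

t0, t1, g0 = -1.0, 0.002, 0.2      # Fig. 2 parameters, t0 = energy unit
eps0, eps1 = -4.0, -1.792

def bloch_hamiltonian(kx, ky):
    # 2x2 Bloch matrix of Eq. (H0-k)
    tau0 = 2 * t0 * (np.cos(kx) + np.cos(ky)) - eps0
    tau1 = 2 * t1 * (np.cos(kx) + np.cos(ky)) - eps1
    g = g0 * (np.sin(kx) + 1j * np.sin(ky))
    return np.array([[tau0, g], [np.conj(g), tau1]])

def band_gap(n=101):
    # indirect (global) gap between lower and upper polariton band
    ks = np.linspace(-np.pi, np.pi, n)
    e = np.array([[np.linalg.eigvalsh(bloch_hamiltonian(kx, ky))
                   for ky in ks] for kx in ks])
    return e[..., 1].min() - e[..., 0].max()

def chern_numbers(n=80):
    # lattice Chern numbers from plaquette Berry fluxes (Fukui et al.)
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = np.array([[np.linalg.eigh(bloch_hamiltonian(kx, ky))[1]
                   for ky in ks] for kx in ks])   # eigenvector columns
    cherns = []
    for b in (0, 1):                              # lower, upper band
        flux = 0.0
        for i in range(n):
            for j in range(n):
                u00 = u[i, j][:, b]
                u10 = u[(i + 1) % n, j][:, b]
                u11 = u[(i + 1) % n, (j + 1) % n][:, b]
                u01 = u[i, (j + 1) % n][:, b]
                loop = (np.vdot(u00, u10) * np.vdot(u10, u11)
                        * np.vdot(u11, u01) * np.vdot(u01, u00))
                flux += np.angle(loop)
        cherns.append(int(round(flux / (2 * np.pi))))
    return cherns

print(band_gap() > 0)      # the two bands are separated by a finite gap
print(chern_numbers())     # opposite unit Chern numbers for the two bands
```

The sum of the plaquette fluxes is quantized by construction, and the two bands carry opposite unit Chern numbers, as stated in the text (the overall sign depends on orientation conventions).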
The real-space model corresponding to Eq.~\eqref{eq:H0-k} is given by \begin{align} \label{eq:H0} \hat H_0 = \sum_{i \in \mathfrak L} \sum_{d\in\{0, d_x, d_y\}} \sum_{\alpha, \beta \in \{\mathrm 0,\mathrm 1\}} \!\!\! t_{\alpha\beta}(d) \hat a^{\alpha\dagger}_i \hat a^{\beta}_{i+d} \end{align} where $i + d_\nu$ is the index of the nearest neighbor of site $i$ in positive $\nu \in \{x,y\}$ direction. The hopping is determined by the matrices $ \mathbf t(0) = \mathrm{diag}(-\varepsilon_\mathrm 0, -\varepsilon_\mathrm 1) $ and \begin{align} \mathbf t(d_x) & = \begin{pmatrix} t_\mathrm 0 & \mathfrak i g_0 / 2 \\ \mathfrak i g_0 / 2 & t_\mathrm 1 \end{pmatrix}\!, & \mathbf t(d_y) & = \begin{pmatrix} t_\mathrm 0 & g_0 / 2 \\ -g_0 / 2 & t_\mathrm 1 \end{pmatrix}\!. \end{align} Note that $\mathbf t(-d_\nu) = \mathbf t^\dagger(d_\nu)$ so that $\hat H_0$ is Hermitian. As above, the parameters $t_\alpha$ are the photon and exciton hopping amplitudes, $\varepsilon_\alpha$ constant energy offsets, and $g_0$ the exciton-photon coupling strength. For a specific choice of parameters, this model is equivalent to the Qi-Wu-Zhang \cite{qi_topological_2006} or half Bernevig-Hughes-Zhang model \cite{bernevig_quantum_2006}, which is well known as a simplified tight-binding model for the description of topological insulators \cite{asboth_short_2016}. Bosonic transport in a variation of this model including a non-linear interaction term has been studied by Wei\ss{} \cite{weis_nonlinear_2017}. Figure~\ref{fig:2}(c) shows the band structure of the Hamiltonian \eqref{eq:H0} obtained by a partial Fourier transform in $x$ direction. As expected, the model possesses a topological gap which is only crossed by a pair of bands, the eigenstates of which are highly localized at the opposite boundaries of the ribbon. 
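The real-space construction can be sketched as follows (illustrative code of our own, again with the Fig.~2 parameters): it assembles the ribbon Bloch Hamiltonian $H(k_x)$ from the hopping matrices $\mathbf t(0)$, $\mathbf t(d_x)$, $\mathbf t(d_y)$ and checks that, when periodic boundary conditions are restored in $y$, its spectrum reproduces the two-band bulk model:

```python
import numpy as np

t0, t1, g0 = -1.0, 0.002, 0.2      # Fig. 2 parameters (t0 = energy unit)
eps0, eps1 = -4.0, -1.792
Ny = 8                             # number of layers across the ribbon

t_dx = np.array([[t0, 1j * g0 / 2], [1j * g0 / 2, t1]])
t_dy = np.array([[t0, g0 / 2], [-g0 / 2, t1]])
t_on = np.diag([-eps0, -eps1]).astype(complex)

def ribbon_hamiltonian(kx, periodic_y=False):
    # intra-layer block: on-site term plus +/-x hoppings at momentum kx
    blk = t_on + t_dx * np.exp(1j * kx) + t_dx.conj().T * np.exp(-1j * kx)
    up = np.diag(np.ones(Ny - 1), 1)        # inter-layer bond y -> y+1
    if periodic_y:
        up[-1, 0] = 1.0                     # wrap around for a torus check
    return (np.kron(np.eye(Ny), blk)
            + np.kron(up, t_dy) + np.kron(up.T, t_dy.conj().T))

def bulk_spectrum(kx):
    # two-band bulk eigenvalues at the Ny allowed transverse momenta
    es = []
    for ky in 2 * np.pi * np.arange(Ny) / Ny:
        tau0 = 2 * t0 * (np.cos(kx) + np.cos(ky)) - eps0
        tau1 = 2 * t1 * (np.cos(kx) + np.cos(ky)) - eps1
        g = g0 * (np.sin(kx) + 1j * np.sin(ky))
        es.extend(np.linalg.eigvalsh([[tau0, g], [np.conj(g), tau1]]))
    return np.sort(es)

kx = 0.7
w_pbc = np.linalg.eigvalsh(ribbon_hamiltonian(kx, periodic_y=True))
print(np.allclose(w_pbc, bulk_spectrum(kx)))   # → True

# with open boundaries, report how strongly the most edge-localized
# eigenstate is confined to the two boundary layers (photon + exciton)
w, v = np.linalg.eigh(ribbon_hamiltonian(kx))
edge_weight = (np.abs(v[:2])**2 + np.abs(v[-2:])**2).sum(axis=0)
print(float(edge_weight.max()))
```

The overall sign convention of the Fourier transform differs between implementations, but the eigenvalues depend only on $|g(\vec k)|$ and the diagonal dispersions, so the spectra agree.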
We note that in this model, exciton and photon hopping need to be of opposite sign in order for the edge modes to be present, which can be achieved through a negative exciton effective mass in a real material. \section{Dynamics} \label{sec:dynamics} The time-dependent driving is implemented by the operator \begin{align} \label{eq:F} \hat F(t) & = f_0 \, \mathrm{e}^{-\mathfrak i \omega_\d t} \, \hat a^{\C \dagger}_{i_0} \; + \; \mathrm{H.c.}, \end{align} which, for simplicity, is taken to act directly only on the photonic mode $\alpha = 0$ at a single site $i_0$ located on the open side of the lattice boundary. The parameter $f_0$ is the product of a dipole matrix element of the material under consideration and the electric field strength of the drive laser. The form of the driving is chosen to mimic the laser-driven dynamics induced by a laser that is focused to the edge of the system and switched on at the initial time. The full time-dependent Hamiltonian then has the form \begin{align} \label{eq:hamiltonian} \hat H(t) = \hat H_0 + \hat F(t). \end{align} Our simulations are performed in the semi-classical limit, where the bosonic operators are replaced by scalar complex fields instead of using a full quantum-mechanical treatment. This corresponds to restricting the possible quantum states to the coherent states $\ket{\psi}$ parametrized by the scalar complex field $\psi \in \mathbb C^{2N}$ satisfying $\hat a^{\alpha}_i \ket{\psi} = \psi_i^\alpha \ket{\psi}.$ The time-dependent Hamiltonian~\eqref{eq:hamiltonian}, including the driving term $\hat F(t)$, is of second order in the creation and annihilation operators due to the absence of a polariton-polariton interaction term. In this case, if the simulation is started in a coherent initial state, the semi-classical approach captures the exact quantum dynamics of the system.
The equations of motion for the fields $\psi_i^\alpha$ are ($\hbar = 1$) \begin{align} \begin{split} \mathfrak i \, \frac{\d\psi_i^\alpha}{\d t} \; &= \!\!\!\!\! \sum_{\substack{\beta\in\{\mathrm 0,\mathrm 1\} \\ d\in\{0, \pm d_x,\pm d_y\}}} \!\!\!\!\! t_{\alpha\beta}(d) \psi_{i+d}^\beta \, + \, \delta_{i,i_0}\delta_{\alpha,0} \, f_0 \mathrm{e}^{-\mathfrak i \omega_\d t}. \end{split} \end{align} The initial state is chosen to be the coherent polaritonic vacuum state with $\psi_i^\alpha(0) = 0$ for all $i$ and $\alpha$. \begin{figure*}[tbp] \centering \includegraphics[width=0.9\textwidth]{trspec.pdf} \caption{(a)--(h) Time- and momentum-resolved spectral density [Eq.~\eqref{eq:spec}] of the laser-irradiated system on a ribbon-shaped $N_x \times N_y = 256 \times 8$ lattice. The grey lines show the equilibrium band structure of the model \eqref{eq:H0}. The driving frequency $\omega_\d$ varies between subplots and is indicated by the blue line. The spectral density shows the excitation of the states resonant with the driving laser frequency. Here, the driving amplitude is $f_0 = 0.2|t_\mathrm 0|$ and the spectral density has been computed at a probe time of $t_\mathrm{p} = 400 |t_\mathrm 0|^{-1}$ with $\sigma_\mathrm{p} = 125 |t_\mathrm 0|^{-1}.$ The model parameters are the same as in Fig.~\ref{fig:2}. } \label{fig:3} \end{figure*} In the following we use double-time Green's functions in order to compute time-, momentum- and spatially resolved spectra. This approach is generally applicable to nonequilibrium situations and capable of fully describing both transient and steady-state dynamics in driven systems. For the specific case of a time-periodic driving field, we briefly note that one can, in principle, also obtain spectral information from a Floquet representation \cite{eckardt_high-frequency_2015}.
However, the Floquet framework has several limitations compared to our more general approach: (i)~it only applies to a strictly time-periodic steady state, whereas we are specifically interested in the transient \enquote{switch-on} behavior and chiral propagation of topological edge modes; (ii)~it is not straightforward to compute the population of Floquet states, in particular when there are no heat or particle baths attached to the system; (iii)~it only captures the stroboscopic part of the time evolution with the period of the drive, but not the more complicated subcycle dynamics or micromotion. In contrast, these limitations do not apply to double-time Green's function methods. In the semi-classical limit, the double-time lesser Green's functions can be obtained directly from the bosonic fields as \begin{align} \label{eq:Gless} \begin{split} G^<_{i\alpha;j\beta}(t_1, t_2) &:= -\mathfrak i\langle \hat a^{\beta\dagger}_{j}(t_2) \hat a^{\alpha}_i(t_1) \rangle \\ &= -\mathfrak i\psi_i^\alpha(t_1) [\psi_j^\beta(t_2)]^*. \end{split} \end{align} This Green's function contains information about the propagation of a boson added to the system at time $t_2$ and removed from the system at time $t_1$. This can be used to extract both the spectrum of excitations as well as the occupation of such single-particle excited states. 
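The semiclassical propagation itself can be illustrated with a minimal integration sketch (our own; the small lattice size, drive frequency, amplitude and times are assumed values, not those of the production runs). Because the equations of motion are linear in the absence of polariton-polariton interactions, the response scales strictly linearly with the drive amplitude $f_0$:

```python
import numpy as np

t0, t1, g0 = -1.0, 0.002, 0.2      # Fig. 2 parameters (t0 = energy unit)
eps0, eps1 = -4.0, -1.792
Nx, Ny = 8, 4                      # tiny patch: periodic in x, open in y

def build_H():
    t_dx = np.array([[t0, 1j * g0 / 2], [1j * g0 / 2, t1]])
    t_dy = np.array([[t0, g0 / 2], [-g0 / 2, t1]])
    H = np.zeros((2 * Nx * Ny, 2 * Nx * Ny), complex)
    idx = lambda x, y: 2 * (x * Ny + y)
    for x in range(Nx):
        for y in range(Ny):
            s = idx(x, y)
            H[s:s + 2, s:s + 2] += np.diag([-eps0, -eps1])
            sx = idx((x + 1) % Nx, y)          # periodic in x
            H[s:s + 2, sx:sx + 2] += t_dx
            H[sx:sx + 2, s:s + 2] += t_dx.conj().T
            if y + 1 < Ny:                     # open in y
                sy = idx(x, y + 1)
                H[s:s + 2, sy:sy + 2] += t_dy
                H[sy:sy + 2, s:s + 2] += t_dy.conj().T
    return H

def evolve(f0, omega_d=1.8, T=20.0, dt=0.02):
    # RK4 integration of i dpsi/dt = H psi + drive, starting from vacuum
    H = build_H()
    drive = np.zeros(2 * Nx * Ny, complex)
    drive[0] = 1.0                             # photon component at edge site i0
    rhs = lambda t, p: -1j * (H @ p + f0 * np.exp(-1j * omega_d * t) * drive)
    psi, t = np.zeros(2 * Nx * Ny, complex), 0.0
    while t < T - 1e-12:
        k1 = rhs(t, psi)
        k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
        k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
        k4 = rhs(t + dt, psi + dt * k3)
        psi += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return psi

psi = evolve(f0=0.1)
dens = np.abs(psi)**2
# fraction of the induced population residing in the driven boundary row
print(dens.reshape(Nx, Ny, 2)[:, 0, :].sum() / dens.sum())
```

Since the scheme is built from linear operations only, doubling $f_0$ doubles the field exactly, which is a convenient consistency check of the interaction-free dynamics.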
From the Green's functions, we compute the time-resolved spectral density \cite{freericks_theoretical_2009} \begin{widetext} \begin{align} \label{eq:spec} I(k, \omega, t_\mathrm{p}) & = \operatorname{Im} \iint \d t_1 \d t_2 \, S_{t_\mathrm{p},\sigma_\mathrm{p}}(t_1) S_{t_\mathrm{p},\sigma_\mathrm{p}}(t_2) {\mathrm e}^{\mathfrak i\omega(t_1-t_2)} \tilde G^<(k; t_1, t_2) \end{align} \end{widetext} with Gaussian probe shape \begin{align} S_{t_\mathrm{p},\sigma_\mathrm{p}}(t) & = \frac{1}{\sigma_\mathrm{p}\sqrt{2\pi}} \exp\left[-\frac{(t - t_\mathrm{p})^2}{2\sigma_\mathrm{p}^2}\right] \end{align} where $t_\mathrm{p}$ is the probe time and $\sigma_\mathrm{p}$ the Gaussian width given by the temporal duration of the probe laser pulse \cite{freericks_theoretical_2009}. The time resolution is thus determined by the shape function. As was shown in the context of electronic Green's functions \cite{PhysRevX.3.041033}, this time resolution comes at the expense of spectral resolution, and vice versa, due to Heisenberg's uncertainty principle. For our purposes, we are mainly interested in spectral information. Therefore, the temporal duration of the probe pulse is chosen to be sufficiently large to be able to resolve the relevant spectral features. The reduced lesser Green's function used in Eq.~\eqref{eq:spec} is \begin{align} \tilde G^<(k_x; t_1, t_2) & = \mathrm{Tr}_{\alpha,y} \mathcal F_{x \to k_x} [G_{i_{x,y},\alpha;i_{x,y},\alpha}^{<}(t_1,t_2)], \end{align} which is Fourier-transformed along the periodic ribbon direction $x$ (denoted by $\mathcal F_{x \to k_x}$), in order to reveal momentum-resolved information along this direction, and traced over $y$ direction. We also trace here over the photon and exciton index, i.e., we compute the exciton-polariton spectral density. In principle, it is easily possible to resolve the spectral contributions of excitons and photons separately, but this will not be crucial for our analysis of edge localization of pumped modes below.
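As a minimal illustration of Eq.~\eqref{eq:spec} (a toy example of our own with a single assumed mode frequency, not the full lattice): for a semiclassical field oscillating at frequency $\varepsilon$, the factorized lesser Green's function collapses the double integral to a single windowed transform, and the spectral weight peaks at $\omega = \varepsilon$ with a width set by $1/\sigma_\mathrm p$:

```python
import numpy as np

eps = 1.5                        # assumed single-mode frequency (toy model)
tp, sigma_p = 50.0, 10.0         # probe time and probe duration
t = np.arange(0.0, 100.0, 0.05)
dt = t[1] - t[0]
psi = np.exp(-1j * eps * t)      # semiclassical field of one mode

# normalized Gaussian probe shape S_{tp,sigma_p}(t)
S = np.exp(-(t - tp)**2 / (2 * sigma_p**2)) / (sigma_p * np.sqrt(2 * np.pi))

omegas = np.linspace(0.0, 3.0, 301)
# G<(t1,t2) = -i psi(t1) psi*(t2) factorizes, so the double integral in
# Eq. (spec) reduces to -i |A(omega)|^2 with A = int S e^{i w t} psi dt
A = np.array([np.sum(S * np.exp(1j * w * t) * psi) * dt for w in omegas])
I_spec = np.imag(-1j * np.abs(A)**2)

print(omegas[np.argmax(np.abs(I_spec))])   # peak at the mode frequency (~1.5)
```

In the full calculation the same construction is applied to every $k_x$ sector of the ribbon, which is how the resonantly populated branches in Fig.~\ref{fig:3} are resolved.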
\section{Results} \label{sec:results} In Fig.~\ref{fig:3} we present the time- and momentum-resolved spectral functions, mimicking time-domain photoluminescence spectra, of a continuously irradiated ribbon of size $N_x \times N_y = 256 \times 8$ with a laser focused on a lattice site $i_0$ at the lower edge ($y=1$) of the ribbon. The driving frequency of the external laser is varied to be below the topological band gap [Fig.~\ref{fig:3}(a-d)], within the gap region [Fig.~\ref{fig:3}(e-f)], and above the gap [Fig.~\ref{fig:3}(g-h)]. As can be seen from the time-resolved spectral density, the polariton branches are selectively occupied by the resonant laser excitation, which shows that the external driving frequency is the main tuning knob for populating polariton branches in the absence of polariton-polariton interactions and associated mechanisms for polariton condensation. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{edgeloc.pdf} \includegraphics[width=\columnwidth]{realspace_c.pdf} \caption{ (a)--(b) Fraction of the total population located at the lowest layer $y=1$ (top panel) and within the lower half of the system $1 \leqslant y \leqslant 4$ (bottom panel). The horizontal grey lines indicate the fraction of the population located in the same region for a uniform distribution over the full $256 \times 8$ lattice. (c)--(d) Real-space density of the photon and exciton field at time $t = 1.2 \times 10^3 |t_\mathrm 0|^{-1}$ for resonant (panel c) and off-resonant (panel d) driving frequency. The driving field is localized at the bottom right corner of the displayed region. The direction of chiral propagation of the polariton modes is indicated by the purple arrow. } \label{fig:4} \end{figure} Finally, we analyze in Fig.~\ref{fig:4} the edge localization of the light-induced states through real-space analysis of the light-induced populations.
Fig.~\ref{fig:4}(a) shows the localization at the lower edge, i.e., the ratio of the intensity at $y=1$ integrated along the $x$ direction and the total intensity in the system at a given point in time. A clearly resonant behavior is observed when the driving frequency matches the edge state energy of $\approx 1.8 |t_\mathrm 0|$. The degree of localization is even more pronounced for the excitonic component of the polariton wavefunction. In Fig.~\ref{fig:4}(b) we analyze the localization integrated over the lower half $1 \leq y \leq 4$ of the ribbon, which consistently shows the same resonant behavior. In Fig.~\ref{fig:4}(c) we show representative real-space images for the photon and exciton densities in the on-resonance case. Here, besides the already discussed localization at the lower edge, one can also observe the chiral propagation of the edge mode from left to right as indicated by the arrow. By clear contrast, in Fig.~\ref{fig:4}(d) both the edge selectivity and chirality of the laser-pumped polariton populations are absent. These real-space images thus provide complementary information to the spectroscopic pictures presented in Fig.~\ref{fig:3}. \section{Discussion} \label{sec:discussion} In summary we have shown lattice model simulations for the temporally and spatially resolved dynamics of chiral topological exciton polaritons. We have demonstrated the selective excitation of chiral edge modes provided that the external driving frequency is sufficiently closely tuned to the topological band gap. Importantly, the focusing of the laser to the edge of the system is by itself not sufficient to selectively populate the edge mode. This is likely due to the fact that also bulk states spatially extend into the edge region, and these bulk states can be populated even by a laser that is focused to the edge.
An interesting open question pertains to the role of polariton-polariton interactions and the question of the importance of condensation versus resonant excitation for the experimentally observed chiral edge modes \cite{klembt_exciton-polariton_2018}. As an upshot from our calculations, it is straightforward to extend the formalism employed here to compute time-resolved spectroscopy in periodically driven many-body systems, both with continuous and ultrashort laser pulses. Specifically for continuous driving this is the realm of Floquet engineering \cite{bukov_universal_2015,holthaus_floquet_2015,eckardt_high-frequency_2015,oka_floquet_2019} of effective band structures, which is by itself a rapidly growing research field. Moreover, the recent discussion of breakdown of conventional bulk-boundary correspondence in non-Hermitian systems due to gain, loss, and violation of reciprocity, also points to the importance of complementary spectroscopic and imaging techniques to reveal topological edge modes and their selective population \cite{yao_edge_2018,xiong_why_2018,helbig_generalized_2020}. It is also interesting to contrast the present discussion of complementary techniques for detecting topological edge states with electronic edge state spectroscopy and imaging in quantum materials, for example in topological insulators by means of time- and angle-resolved photoemission spectroscopy \cite{soifer_band-resolved_2019}. The interface between the different fields of quantum simulators, nonequilibrium quantum materials science, and polaritonic condensates promises many interesting applications for future quantum technologies. \begin{figure*}[htbp] \includegraphics[width=\textwidth]{fig5.pdf} \caption{ (a) Band structure of the Bloch Hamiltonian \eqref{eq:bloch} along a path from $\Gamma = (0,0)$ to $\mathrm X = (\pi/\ell_x, 0)$ in reciprocal space. Here, $E_\pm$ denotes the polariton band energies, while $\tau_{0}$ ($\tau_1$) is the uncoupled photon (exciton) dispersion. 
\\ {(b)}~Discretized Berry curvature of the lattice model for the lower ($F^-$) and upper ($F^+$) band (left and right panel, respectively), computed using the method of Fukui et al. \cite{fukui_chern_2005} over the first Brillouin zone of the model \eqref{eq:bloch} discretized on a $512 \times 512$ grid. The model parameters are the same as in Fig.~\ref{fig:2}(c), except for the boundary conditions. The Chern numbers of the bands are $C_\pm = \sum_k F^\pm(k) \d k_x \d k_y = \mp 1$ for this case. } \label{fig:berry} \end{figure*} \begin{acknowledgements} We acknowledge helpful discussions with G.~Refael. We are particularly indebted to C.~Bardyn for sharing his Gross-Pitaevskii simulation code. Financial support by the DFG through the Emmy Noether program (SE 2558/2-1) is gratefully acknowledged. \end{acknowledgements}
\section{Introduction} Contemporary wireless communications systems, such as IEEE 802.11 and 4G LTE, deploy multicarrier modulation with the aim of transmitting data over frequency-selective channels. In this sense, OFDM is the most popular choice and a suitable number of subcarriers is used to make subchannels frequency flat. Moreover, dispersion and other phenomena introduce undesirable effects that may limit the overall performance of a wireless system. From this perspective, authors in \cite{Saeed2003} discuss how the number of subcarriers affects the transmission of an OFDM signal equipped with a single antenna at both the transmitter and receiver sides (SISO). In the search for more efficient systems, multiple-input multiple-output (MIMO) systems were proposed, being able to improve the spectral efficiency \cite{Hampton:2014}. However, such benefits also require more sophisticated electrical circuitry and signal processing, which are needed to decouple signals from the different antennas \cite{Goldsmith2005}. The system may increase the throughput using the multiplexing mode, where each antenna transmits a different signal. Conversely, increasing the performance/reliability requires the transmission of the same information, exploiting diversity. Those characteristics are limited by the diversity-multiplexing tradeoff \cite{Tse2004}. Herein, the multiplexing mode is considered, where the signals of the other $N_t - 1$ transmit antennas interfere with each other. Thus, detection algorithms are required to reduce the effects of such interference \cite{Choi:2012},\cite{Kobayashi2015} and are studied throughout this work. In order to attain high levels of efficiency, the MIMO system considers the assumption of a rich scattering (isotropic) scenario modeled as independent Rayleigh fading \cite{marzetta2016}, which is not always entirely valid in real applications.
A rule of thumb is a half-wavelength separation between antennas \cite{Goldsmith2005} to achieve independent fading channels, but this distance may not always be respected, for example, due to space limitations of the receiver hardware, resulting in spatial correlation of the channel coefficients. In realistic scenarios, correlated models are good representations of field measurements \cite{Chizhik2003}, and thus are considered in our numerical simulations. Authors in \cite{Guerra2016} discuss how the performance of SISO-OFDM systems scales with the number of subcarriers. In the MIMO-OFDM context, the performance of ZF and MMSE linear detectors is analyzed under spatial correlation scenarios. This work extends the results reported in \cite{Guerra2016}. In particular, and differently from \cite{Guerra2016}, herein we propose a hybrid detection approach, where particle swarm optimization (PSO) and differential evolution (DE) evolutionary heuristics are combined with linear detectors (two detection steps), aiming to improve performance with a reduced increment in complexity. In the detection problem, the maximum likelihood (ML) detector is known to provide optimal performance; however, its high computational complexity is prohibitive in real applications, especially when the problem dimension increases, e.g., the number of antennas, constellation size and number of subcarriers. Heuristic algorithms provide alternative good solutions with relatively low computational complexity. In \cite{Khan2006}, PSO-aided detection is considered in MIMO systems and in \cite{trimeche2013} in MIMO-OFDM systems, providing lower computational complexity compared to the ML detector. In \cite{Seyman_2014}, the heuristic approaches differential evolution (DE), genetic algorithm (GA) and PSO are applied to detection in MIMO-OFDM and the performance in terms of bit error rate (BER) is evaluated.
In \cite{Khan2007}, binary PSO (BPSO) is applied to MIMO-OFDM and an algorithm considering the output of ZF-VBLAST is proposed and its performance evaluated numerically. The contributions of this paper are as follows. We analyse the influence of different initial solutions given as input to the heuristic algorithms on the BER performance and on the computational complexity in terms of \textit{floating point operations} (\textsc{flop}s), {\it i.e.}, we have analyzed distinct initializations, including a random guess and linear detector outputs, such as the MF and MMSE solutions, while performing a comparison between those heuristic detectors in a realistic scenario, {\it i.e.}, under spatial correlation between antennas. Moreover, aiming to attain a fair performance-complexity comparison, the input parameters of both heuristic strategies have been systematically chosen, since they directly impact the algorithm performance and complexity, as studied in \cite{Marinello_2012}. The remainder of this work is organized as follows. Section \ref{sec:ofdm} briefly revisits the OFDM scheme. Descriptions of the MIMO-OFDM system with spatial channel correlation are offered in section \ref{sec:mimo}. Section \ref{sec:detectors} describes the classical MIMO detectors and formulates heuristic-aided detectors based on PSO and DE, including the hybrid linear-heuristic approaches. Extensive numerical results are discussed in section \ref{sec:simulation_results}, where a BER performance comparison considering spatial correlation is systematically carried out. Besides, subsection \ref{ref:complexity} carefully analyzes the resulting complexity of the MIMO-OFDM detectors. Final remarks and conclusions are offered in section \ref{sec:conclusions}. \noindent{\it Notation}: Throughout the paper, lowercase and uppercase bold-faced letters represent vectors and matrices, respectively.
$\mathbb{C}$ and $\mathbb{R}$ denote the sets of complex and real numbers; $\mathfrak{Re\{.\}}$ and $\mathfrak{Im\{.\} }$ represent the real and imaginary parts of a complex number. Operators $[.]^H$, $\|.\|$, $\circ$ and $\otimes$ represent Hermitian, Frobenius norm, Hadamard product and Kronecker product, respectively. $\mathbb{E}\{.\}$ denotes the expectation operator and $\sim \mathcal{U}[a, b]$ that a random variable follows a uniform distribution over the interval $[a, b]$. \section{OFDM Transmission and Channel}\label{sec:ofdm} A block diagram representing the MIMO-OFDM communication in multiplexing operation mode is exposed in Fig. \ref{fig:mimo_ofdm_block_diagram}. At the transmitter side, the stream of bits is distributed across $N_t$ transmit substreams. Here, classical OFDM modulation is considered and described as follows. The signal passes through the $OFDM_{\rm tx}$ block that represents the OFDM modulator, which includes the serial-to-parallel conversion, digital $M$-ary modulation, inverse discrete Fourier transform (IDFT), cyclic prefix (CP) addition, parallel-to-serial conversion and the transmission of the signal through the wireless channel. At the receiver, the signals of the $N_r$ receive antennas are shifted to baseband and passed through the OFDM demodulator ($OFDM_{\rm rx}$), which includes a serial-to-parallel conversion, CP removal and a discrete Fourier transform (DFT). The signal is then serialized, demodulated and finally feeds the detection block, which is the focus of this work. Note that linear, heuristic and hybrid detectors are discussed in more detail in section \ref{sec:detectors}. \begin{figure}[htbp!] \centering \includegraphics[width=.5\textwidth]{mimo-ofdm.eps} \vspace{-2mm} \caption{MIMO-OFDM block diagram.} \label{fig:mimo_ofdm_block_diagram} \end{figure} Among the different channel effects, the coherence time $(\Delta t)_c$ and the coherence band $(\Delta B)_\textsc{c}$ may influence parameters of an OFDM system.
The coherence time scales inversely with the maximum Doppler frequency, and the mobility of a wireless terminal may cause problems such as the {\it carrier frequency offset} \cite{Cho2010}, which is important for the performance of the system but not the focus of this paper. The coherence bandwidth is dictated by the power delay profile (PDP) of the channel, which is measured empirically \cite{Goldsmith2005}. More specifically, the coherence bandwidth is evaluated based on the estimation of the delay spread of the PDP of a channel. This parameter directly influences the number of subcarriers of the system, because, to achieve flat fading on every subchannel, the condition $B_{\rm sc} \ll (\Delta B)_\textsc{c}$ requires $N$ to be sufficiently large \cite{Goldsmith2005}. In particular, this work deploys the IEEE 802.11b PDP model, which follows an exponential profile \cite{Cho2010}. \section{MIMO-OFDM Multiplexing Mode and Spatial Correlation}\label{sec:mimo} Considering $N_t$ and $N_r$ transmit and receive antennas, respectively, the signal received in a MIMO-OFDM channel on each subcarrier can be expressed as \cite{Paulraj2003}: \begin{equation}\label{eq:mimo} {\bf y}[n]={\bf H}[n] {\bf x}[n] + {\bf z}[n], \end{equation} where ${\bf y}[n] \in \mathbb{C}^{N_r\times 1} $ is the vector of the received signal, ${\bf H}[n] \in \mathbb{C}^{N_r \times N_t}$ is the channel matrix, ${\bf x}[n] \in \mathbb{C}^{N_t\times 1}$ the transmitted information, ${\bf z}[n] \in \mathbb{C}^{N_r\times 1}$ the Gaussian noise with zero mean and variance $\sigma_z^2$, for $n=0, \cdots, N-1$ subcarriers. In order to describe and evaluate spatial correlation between antennas, the Kronecker product is used as follows: \begin{eqnarray} {\bf H}[n] = \sqrt{{\bf R}_r}{\bf G}[n] \sqrt{{\bf R}_t^{H}}, \end{eqnarray} where $\bf G$ is an uncorrelated channel matrix composed of independent and identically distributed (i.i.d.)
entries, ${{\bf R}_r}$ and ${{\bf R}_t}$ are the spatial correlation matrices seen by the receiver and transmitter, respectively. The coefficients needed to construct the correlation matrix and the arrangement of the antennas (linear, rectangular) influence the entries of the correlation matrices of the transmitter and receiver. In \cite{Zelst2002}, an antenna correlation model is proposed for {\it uniform linear array} (ULA) configurations. This model considers that the antennas are arranged equidistantly, where $d_t$ and $d_r$ represent the spacing between the transmitting and receiving antennas, linearly arranged, respectively. To simplify the analysis, we consider $N_t = N_r$, leading to the symmetric Toeplitz correlation matrix with entries $[{\bf R}]_{i,j} = \rho^{(i-j)^2}$: \begin{eqnarray} {\bf R}_t = {\bf R}_r = \begin{bmatrix} 1 & \rho & \rho^4 & \dots & \rho^{(N_t-1)^2} \\ \rho & 1 & \rho & \ddots & \vdots \\ \rho^4 & \rho & 1 & \ddots & \rho^4 \\ \vdots & \ddots & \ddots & \ddots & \rho \\ \rho^{(N_t-1)^2} & \dots & \rho^4 & \rho & 1 \\ \end{bmatrix} , \end{eqnarray} where $\rho\in [0,\,1]$ denotes the correlation index between adjacent element antennas of a ULA. \section{MIMO-OFDM Detectors}\label{sec:detectors} In this section, linear and heuristic-based detectors are discussed in detail. The heuristic procedure involves the definition of a fitness function, deployed to evaluate the quality of the population/swarm and to decide which candidates are more suitable to solve a given problem (in this paper, MIMO-OFDM detection). Furthermore, the model is rewritten in an equivalent real-valued representation and the PSO and DE heuristic procedures are detailed, while the utilization of different initial solutions (hybrid approach) is briefly described. \subsection{Maximum likelihood (ML) Detector} Aiming to perform optimal symbol estimation, ML detection requires an exhaustive search over all symbol vector combinations. However, optimal performance comes at a high computational complexity, which is not feasible for real-world systems.
The search selects the vector that yields the minimum Euclidean distance between the actual received signal ${\bf y}[n]$ and the reconstructed received signal ${\bf H}[n]{\bf x}[n]$ obtained by assuming the transmission of a given candidate-signal vector ${\bf x}[n]$. Hence, ML symbol estimation for MIMO-OFDM systems can be formulated as the following problem: \begin{equation} \tilde{\bf x}[n] = \arg\min_{\bf x} \| {\bf y}[n] - {\bf H}[n] {\bf x}[n]\|^2. \end{equation} \subsection{Linear Detectors} Since MIMO channels introduce a linear superposition of the transmitted signals, detection algorithms must be deployed at the receiver side to mitigate inter-antenna interference while allowing symbol reconstruction \cite{Cho2010}. In this sense, the ZF is one of the simplest MIMO-OFDM equalizers, which uses the Moore-Penrose pseudo-inverse matrix to decouple the transmitted symbol vector, i.e.: \begin{equation}\label{eq: ZFD} {\bf H}^\dagger_{\rm zf}[n] = ({\bf H}[n]^H {\bf H}[n])^{-1} {\bf H}[n]^H. \end{equation} Alternatively, the MMSE linear detector takes the statistical distribution of the noise into account. This detector aims to minimize the distance between the actual transmitted signal and the signal estimated through a linear equalization matrix \cite{Hampton:2014}. Such an optimization procedure can be defined by \begin{equation} {\bf H}^\dagger_{\rm mmse}[n] = \arg\min_{\bf W}\,\, \mathbb{E}\left\{\|\boldsymbol{\mathrm{x}}[n] - {\bf Wy}[n]\|^2\right\}.\label{eq:mmsea} \end{equation} Thus, solving eq. \eqref{eq:mmsea} leads to the MMSE closed-form solution \begin{equation}\label{eq:MMSED} {\bf H}^\dagger_{\rm mmse}[n] = \left({\bf H}^H[n] {\bf H}[n] + \dfrac{N_0}{E_S}{\bf I} \right)^{-1}{\bf H}^H[n], \end{equation} where $\frac{N_0}{E_S}$ is the inverse of the signal-to-noise ratio (SNR).
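As an illustration, the ZF and MMSE equalization matrices in eqs. \eqref{eq: ZFD} and \eqref{eq:MMSED} can be sketched in a few lines of numpy (a minimal sketch for one subcarrier; variable names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
Nt = Nr = 4  # illustrative 4x4 MIMO setup, as in the simulations below

# Random complex Rayleigh channel for one subcarrier
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

def zf_equalizer(H):
    # Moore-Penrose pseudo-inverse: (H^H H)^{-1} H^H
    return np.linalg.inv(H.conj().T @ H) @ H.conj().T

def mmse_equalizer(H, n0_over_es):
    # MMSE closed form: (H^H H + (N0/Es) I)^{-1} H^H
    return np.linalg.inv(H.conj().T @ H + n0_over_es * np.eye(H.shape[1])) @ H.conj().T

# 4-QAM symbols with unit average energy
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=Nt) / np.sqrt(2)
x_zf = zf_equalizer(H) @ (H @ x)  # noiseless case: ZF inverts the channel exactly
```

In the noiseless case the ZF output recovers the transmitted vector exactly; with noise, the MMSE matrix trades interference suppression against noise enhancement through the $N_0/E_S$ regularization term.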
As another option, the matched filter (MF) is a classical method that provides optimum performance in the AWGN scenario and consists of multiplying the received signal by the conjugate transpose of the channel. Finally, linear estimation can be generically described by \begin{equation}\label{eq:linearDetectors} \tilde{\bf x}[n] = {\bf W}_{\rm lin}[n] \,{\bf y}[n], \end{equation} where ${\bf W}_{\rm lin}[n] = {\bf H}^\dagger_{\rm zf}[n]$ for ZF detection, ${\bf W}_{\rm lin}[n] = {\bf H}^\dagger_{\rm mmse}[n]$ for MMSE detection and ${\bf W}_{\rm lin}[n] = {\bf H}^{H}[n]$ for the matched filter. \subsection{Fitness Function} To facilitate the application of the heuristic methods, eq. \eqref{eq:mimo} can be rewritten in an equivalent real-valued representation as follows: \begin{equation} {\underline{\bf y}[n]} = \begin{bmatrix} \mathfrak{Re}\{ {\bf y}[n] \} \\ \mathfrak{Im}\{ {\bf y}[n] \} \end{bmatrix} ,\quad \underline{\bf H}[n] = \begin{bmatrix} \mathfrak{Re}\{ {\bf H}[n] \} & -\mathfrak{Im}\{ {\bf H}[n] \} \\ \mathfrak{Im}\{ {\bf H}[n] \} & \mathfrak{Re}\{ {\bf H}[n] \} \end{bmatrix} , \end{equation} \begin{equation} \qquad {\underline{\bf x}[n]} = \begin{bmatrix} \mathfrak{Re}\{ {\bf x}[n] \} \\ \mathfrak{Im}\{ {\bf x}[n] \} \end{bmatrix} , \qquad {\underline{\bf z}[n]} = \begin{bmatrix} \mathfrak{Re}\{ {\bf z}[n] \} \\ \mathfrak{Im}\{ {\bf z}[n] \} \end{bmatrix} , \end{equation} where the matrix ${\underline{\bf H}} \in \mathbb{R}^{2N_r \times 2N_t}$ and the vectors $\underline{\bf y}[n], \underline{\bf z}[n] \in \mathbb{R}^{2N_r \times 1}$ and $\underline{\bf x}[n] \in \mathbb{R}^{2N_t \times 1}$ are the real-valued representations of the channel, received signal, thermal noise and transmitted information, respectively.
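The real-valued mapping above can be sketched in numpy as follows (a minimal illustration with names of our own choosing), which verifies that the stacked real model reproduces the complex product:

```python
import numpy as np

def real_valued(H, y):
    """Real-valued equivalent of the complex model y = Hx + z."""
    # Block matrix [[Re(H), -Im(H)], [Im(H), Re(H)]]
    H_r = np.block([[H.real, -H.imag],
                    [H.imag,  H.real]])
    y_r = np.concatenate([y.real, y.imag])
    return H_r, y_r

def to_real(x):
    # Stack real and imaginary parts of a complex vector
    return np.concatenate([x.real, x.imag])

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
H_r, y_r = real_valued(H, H @ x)  # noiseless received signal
```

Any real-valued candidate $\boldsymbol{\zeta}$ can then be scored directly by the Euclidean distance $\|\underline{\bf y}[n]-\underline{\bf H}[n]\boldsymbol{\zeta}\|^2$ used as the fitness function.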
For the detection problem, the fitness function is generally defined based on the Euclidean distance between the received signal and the estimated-reconstructed (candidate) symbol, and formulated as \cite{trimeche2013,Seyman_2014,Khan2007}: \begin{equation}\label{eq:fitness} f( \boldsymbol{\zeta} ) = \|\underline{\bf y}[n]-\underline{\bf H}[n]\boldsymbol{\zeta} \|^2, \end{equation} where $\boldsymbol{\zeta}$ denotes the candidate under evaluation: a specific particle position in PSO or an individual in DE. \subsection{Heuristic PSO-based Detector} PSO is an evolutionary heuristic algorithm with adjustable parameters, such as the cognitive and social factors ($c_1$ and $c_2$, respectively), inspired by the behavior of bird flocking and fish schooling. Each particle has an associated velocity ${\bf v}\in \mathbb{R}^{N_{\rm dim}\times 1}$, current position ${\bf p}\in \mathbb{R}^{N_{\rm dim}\times 1}$ and personal best position ${\bf p}_{\textsc{pb}}\in \mathbb{R}^{N_{\rm dim}\times 1}$, which are updated at each iteration of the algorithm as follows, in matrix representation \cite{Cheng2011}: \begin{equation}\label{eq:psoVelocity} {\bf V} = w{\bf V} + c_1 {\bf U}_1 \circ ({\bf M}_{\textsc{pb}}{\bf - P}) + c_2 {\bf U}_2 \circ ( {\bf M}_{\textsc{gb}}{\bf - P} ), \end{equation} \begin{equation}\label{eq:psoPostion} \bf P = P + V, \end{equation} where $N_{\rm dim}$ denotes the dimensionality of the problem and $w$ the inertia factor; ${\bf U}_1$ and ${\bf U}_2$ are matrices whose elements are drawn from $\mathcal{U}[0,1]$; the matrices ${\bf P}\in \mathbb{R}^{N_{\rm dim}\times N_{\rm pop}}$ and ${\bf V} \in \mathbb{R}^{N_{\rm dim}\times N_{\rm pop}}$ store the position and velocity of the $N_{\rm pop}$ particles of the swarm in their columns, i.e., ${\bf P} = [{\bf p}_{1} \dots {\bf p}_{N_{\rm pop}}]$ and ${\bf V} = [{\bf v}_{1} \dots {\bf v}_{N_{\rm pop}}]$.
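One PSO update following eqs. \eqref{eq:psoVelocity} and \eqref{eq:psoPostion} can be sketched as below (a minimal illustration; the velocity clipping anticipates the $V_{\textsc{max}}$ limit discussed next, and all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N_dim, N_pop = 8, 40          # e.g. 2*Nt real dimensions, 40 particles
w, c1, c2 = 1.0, 2.0, 2.0     # inertia, cognitive and social factors

P = rng.uniform(-1.0, 1.0, (N_dim, N_pop))   # particle positions
V = np.zeros((N_dim, N_pop))                 # particle velocities
M_pb = P.copy()                              # personal best positions
M_gb = np.tile(M_pb[:, [0]], (1, N_pop))     # global best column replicated

def pso_step(P, V, M_pb, M_gb, v_max=1.0):
    U1 = rng.uniform(0.0, 1.0, P.shape)
    U2 = rng.uniform(0.0, 1.0, P.shape)
    # Velocity update with element-wise (Hadamard) products,
    # then clipping to [-v_max, v_max]
    V = w * V + c1 * U1 * (M_pb - P) + c2 * U2 * (M_gb - P)
    V = np.clip(V, -v_max, v_max)
    return P + V, V  # position update
```

The Hadamard products $\circ$ map directly to element-wise array multiplication, so the whole swarm is updated without explicit loops over particles.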
${\bf M}_{\textsc{pb}}$ is a matrix constructed from the personal best position of each particle, and the global best position matrix is given by ${\bf M}_{\textsc{gb}} = [{\bf p}_{\textsc{gb}} \dots {\bf p}_{\textsc{gb}}] \in \mathbb{R}^{N_{\mathrm{dim}} \times N_{\mathrm{pop}} }$, where the vector ${\bf p}_{\textsc{gb}}\in \mathbb{R}^{N_{\rm dim}\times 1}$ denotes the best position in the swarm, the global best (in a minimization problem, the position that provides the lowest value of the fitness function). The coefficient $w$, introduced in \cite{Shi1998}, can be a constant or a linear or nonlinear function, and it balances global and local search depending on its value \cite{Shi1998b}. Here, a nonlinearly decreasing strategy, $w \leftarrow 0.99\,w$ at each iteration, is considered. Regarding the velocity, to prevent unbounded growth it is limited to the interval $[-V_{\textsc{max}}, V_{\textsc{max}}]$ \cite{Shi1998b}, with $V_{\textsc{max}}$ representing the maximum possible velocity value. After $N_{\rm iter}$ iterations of the PSO algorithm, the output vector ${\bf p}_{\textsc{gb}}$ corresponds to the symbol $\tilde{\bf x}_{\textsc{pso}}[n]$ detected by the PSO-aided detector in the MIMO-OFDM problem. \begin{algorithm}[h] \caption{ PSO -- Particle Swarm Optimization.}\label{algo:PSO} \begin{algorithmic}[1] \small \State{ Input parameters: \, $c_1, c_2, w, N_{\mathrm{pop}}, N_{\mathrm{iter}}, \bf P$} \State{Initialization of ${\bf M}_{\textsc{pb}}$ and ${\bf M}_{\textsc{gb}}$} \For {$1 \to N_{\mathrm{iter}}$} \State{ Calculate velocity, eq. \eqref{eq:psoVelocity}} \State{ Calculate position, eq. \eqref{eq:psoPostion}} \State{ Evaluate fitness function, eq.
\eqref{eq:fitness}, for all particles } \State{ Update personal best matrix ${\bf M}_{\textsc{pb}}$} \State{ Update global best matrix ${\bf M}_{\textsc{gb}}$} \EndFor \State{Output: \, ${\bf p}_{\textsc{gb}}$} \end{algorithmic} \end{algorithm} \subsection{Heuristic DE-based Detector} DE is a population-based heuristic proposed in \cite{Storn1997} that relies on mutation, crossover and selection operations in order to avoid being trapped in local minima across the $N_{\rm gen}$ generations of the algorithm. Let $\boldsymbol{\iota}, \boldsymbol{\nu}, \boldsymbol{\psi}$ denote the individual, mutation and crossover vectors, respectively, each of dimension $N_{\rm dim}\times 1$, and let $N_{\rm ind}$ be the number of individuals. The operations of the DE algorithm under the \texttt{rand/1/bin} strategy presented in \cite{Storn1997} are summarized in the following. \subsubsection{Mutation} At each iteration, the $k$-th mutation vector is constructed as: \begin{equation}\label{eq:mutation} \boldsymbol{\nu}_{k} = \boldsymbol{\iota}_{r_1} + F_{\rm mut}(\boldsymbol{\iota}_{r_2} - \boldsymbol{\iota}_{r_3}), \end{equation} where $r_1 \neq r_2 \neq r_3 \neq k$, $k=1,\dots, N_{\rm ind}$; $r_1,r_2,r_3$ are integer random variables distributed as $\mathcal{U}[1, N_{\rm ind}]$, and $F_{\rm mut} \in [0, 2]$ is the mutation scale factor. \subsubsection{Crossover} The crossover vector is created from the individual and mutation vectors following the rule: \begin{equation}\label{eq:crossover} \psi_{ik} = \begin{cases} \nu_{ik} & \quad \text{if } rand \in [0, 1] \leq F_{\rm cr} \text{ or } i = {r_k}\\ \iota_{ik} & \quad \text{if } rand \in [0, 1] > F_{\rm cr} \text{ and } i \neq {r_k} \end{cases} \end{equation} where $rand \sim \mathcal{U}[0, 1]$; $r_k$ is an integer distributed as $\mathcal{U}[1, N_{\rm dim}]$ and $F_{\rm cr}\in [0, 1]$ is the crossover factor, one of the input parameters of the algorithm.
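Before turning to selection, the mutation and crossover rules in eqs. \eqref{eq:mutation} and \eqref{eq:crossover} can be sketched as follows (a minimal illustration with hypothetical names and parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
N_dim, N_ind = 8, 40
F_mut, F_cr = 1.0, 0.5     # mutation scale and crossover factors

pop = rng.uniform(-1.0, 1.0, (N_ind, N_dim))  # individuals iota_k as rows

def mutate(pop, k):
    # nu_k = iota_r1 + F_mut * (iota_r2 - iota_r3), with r1, r2, r3, k distinct
    candidates = [i for i in range(len(pop)) if i != k]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F_mut * (pop[r2] - pop[r3])

def crossover(iota, nu):
    # Take the mutated coordinate where rand <= F_cr or i == r_k, else keep iota
    r_k = rng.integers(len(iota))
    mask = rng.uniform(size=len(iota)) <= F_cr
    mask[r_k] = True  # forced coordinate guarantees at least one mutated entry
    return np.where(mask, nu, iota)
```

The forced index $r_k$ ensures that the trial vector always differs from its parent in at least one coordinate, which is the \texttt{bin} part of the \texttt{rand/1/bin} strategy.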
\subsubsection{Selection} The population of individuals of the next generation is selected by the following rule: \begin{equation}\label{eq:selection} \boldsymbol{\iota}_{k}^{\textsc{g}} = \begin{cases} \boldsymbol{\psi}_{k} & \quad \text{if } f(\boldsymbol{\psi}_{k}) < f(\boldsymbol{\iota}_{k})\\ \boldsymbol{\iota}_{k} & \quad \text{otherwise } \end{cases} \end{equation} Notice that, in order to select the next generation, the fitness function must be evaluated for both the individuals and the crossover vectors, which is reflected in the computational complexity of the algorithm. After $N_{\rm gen}$ generations of the DE procedure, the best individual $\boldsymbol{\iota}$ corresponds to the symbol $\tilde{\bf x}_{\textsc{de}}[n]$ detected (estimated) by the DE-aided detector in the MIMO-OFDM problem. \begin{algorithm} \caption{ DE -- Differential Evolution.}\label{algo:DE} \begin{algorithmic}[1] \small \State{ Input parameters: \, $F_{\rm cr}, F_{\rm mut}, N_{\mathrm{ind}}, N_{\mathrm{gen}},[\boldsymbol{\iota}_{1} \dots \boldsymbol{\iota}_{N_{\rm ind}}]$} \For {$1 \to N_{\mathrm{gen}}$} \State{Mutation, eq. \eqref{eq:mutation}, $k = 1, \dots, N_{\rm ind}$} \State{Crossover, eq. \eqref{eq:crossover}, $i=1,\dots,N_{\rm dim}; k = 1, \dots, N_{\rm ind}$ } \State{ Select new individuals, eq. \eqref{eq:selection}, $k = 1, \dots, N_{\rm ind}$} \EndFor \State{Output: \, best individual $\boldsymbol{\iota}$} \end{algorithmic} \end{algorithm} \subsection{Hybrid Detectors} \label{subsec:initial} To improve performance with only a marginal increase in the computational complexity of the sub-optimal MIMO-OFDM detectors, two efficient hybrid linear-heuristic algorithms are proposed and evaluated in the sequel. Starting from an initial solution provided by the MMSE linear detector, a heuristic approach is applied in the subsequent stage aiming to improve the BER performance.
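This hybrid initialization can be sketched as follows; the $\mathcal{N}(0,1)$ perturbation follows the initialization suggested in \cite{Storn1997}, while the function and variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_init(x_linear, n_pop, sigma=1.0):
    """Initial population/swarm built around a linear-detector estimate.

    Each column is the MF/MMSE solution perturbed by N(0, sigma^2) noise,
    so the heuristic search starts near an already reasonable point.
    """
    n_dim = x_linear.shape[0]
    pop = x_linear[:, None] + sigma * rng.standard_normal((n_dim, n_pop))
    pop[:, 0] = x_linear   # keep one unperturbed copy of the linear solution
    return pop
```

Because the search starts in the neighborhood of the MF/MMSE estimate instead of a uniformly random point, fewer iterations are typically needed for convergence, which is the source of the complexity savings discussed in the sequel.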
In such a hybrid configuration, the initial population/swarm in DE/PSO is generated by adding random numbers with Gaussian distribution $\mathcal{N}(0,1)$ to the initial solution \cite{Storn1997}. In this work, different initial guess solutions are considered, and the numerical simulation results are discussed from the perspective of the {\it performance-complexity tradeoff}, relating performance improvements to complexity reduction. Three different initializations have been considered herein: \begin{enumerate} \item {\it Random initialization}: initial positions (in the PSO) and population (DE) are generated using random variables uniformly distributed inside the search space. \item {\it Hybrid approach}: two different initial points are used, provided by the linear detectors MF and MMSE, with the respective symbol estimate passed as an input to the heuristic algorithms. \item {\it Perturbation of the MF/MMSE solutions}: the initial position of the particles and the initial population of individuals are obtained by adding random Gaussian variables $\mathcal{N}(0,1)$ \cite{Storn1997} to the initial solution provided by the MF/MMSE detector. \end{enumerate} The influence of these choices on the BER performance and on the complexity of the algorithms is explored in Section \ref{sec:simulation_results}. \section{Numerical Results} \label{sec:simulation_results} Throughout this section, MIMO-OFDM systems are simulated considering realistic scenarios and different symbol detectors. Specifically, the performance of linear, evolutionary-heuristic and hybrid linear-heuristic detectors subject to the spatial antenna correlation effect has been compared using BER and, for the heuristic and hybrid detector approaches, rates of convergence. Moreover, for the heuristic-based MIMO-OFDM detectors, the calibration of the input parameters is conducted for each heuristic algorithm and its respective hybrid approaches, and the resulting reduction in convergence time is pointed out.
After finding the best input parameters for each heuristic-based detector, the performance of the PSO and DE detectors is compared with the hybrid approaches, namely PSO-MF, PSO-MMSE, DE-MF and DE-MMSE, considering correlation between antennas; the performance of the hybrid approaches is evaluated considering different numbers of iterations. Finally, the computational complexity of the algorithms is compared in terms of the number of operations. Table \ref{tab:mimo_OFDM} summarizes the simulation setup adopted in this work. Moreover, for a fair comparison, equal power allocation (EPA) was deployed across the transmitting antennas. \begin{table}[!htb] \caption{MIMO-OFDM simulation parameters. \label{tab:mimo_OFDM}} \centering \begin{tabular}{l l} \hline \textbf{Parameter} &\textbf{Value} \\\hline\hline \multicolumn{2}{c}{OFDM} \\ \hline System Bandwidth, BW & 20\,MHz \\ Constellation & 4-QAM \\ Delay spread, $\tau_\textsc{rms}$ & 64\,ns \\ \# Subcarriers, $N$ & 64\\ \hline \multicolumn{2}{c}{MIMO} \\ \hline \# Antennas, $N_t\times N_r$ & $4\times4$\\ Spatial correlation index & $\rho \in \{0,\;\, 0.5, \;\, 0.9\}$\\ MIMO-OFDM detectors & MF, ZF, MMSE, PSO, DE, PSO-MF, \\ & PSO-MMSE, DE-MF, DE-MMSE\\ Power allocation strategy & EPA \\ \hline \multicolumn{2}{c}{Channel} \\ \hline Type & NLOS Rayleigh channel\\ CSI knowledge & perfect\\ \hline \multicolumn{2}{c}{ Heuristic Detectors Setup} \\ \hline Population size $N_{\rm pop} = N_{\rm ind}$ & {40} \\ Search Space & [-1; 1] \\ \bottomrule \end{tabular} \end{table} \subsection{Input Parameter Calibration for Heuristic-aided MIMO-OFDM Detectors} As different parameters may influence the convergence properties of the heuristic algorithms, they were obtained numerically using the following procedure \cite{Marinello_2012}. Starting from a set of initial parameters, each parameter is varied in turn, and the value that provides the lowest BER is kept when varying the next parameter.
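This one-parameter-at-a-time sweep can be sketched as below; the \texttt{run\_ber} routine is a hypothetical stand-in for the full link-level simulation, and the toy BER surface is only there to make the sketch self-contained:

```python
def calibrate(start, grids, run_ber):
    """One-at-a-time calibration: sweep each parameter over its grid,
    fix the value giving the lowest BER, then move to the next parameter."""
    best = dict(start)
    for name, grid in grids.items():
        scored = [(run_ber({**best, name: value}), value) for value in grid]
        best[name] = min(scored)[1]  # keep the value with the lowest BER
    return best

# Toy example: a stand-in BER surface minimized at c1 = 4, w = 1.5
toy_ber = lambda p: (p["c1"] - 4) ** 2 + (p["w"] - 1.5) ** 2
tuned = calibrate({"c1": 2, "w": 1},
                  {"c1": [1, 2, 3, 4], "w": [0.5, 1.0, 1.5, 2.0]},
                  toy_ber)
```

The procedure is a coordinate-descent-style search, so it is cheap compared with a full grid search, at the cost of possibly missing joint parameter interactions.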
The procedure executed for the PSO algorithm is illustrated in Fig. \ref{fig:pso_calib_4x4} and for the DE algorithm in Fig. \ref{fig:deVarParRound1-2}, considering different values of spatial correlation and the different initial points discussed in detail in Subsection \ref{subsec:initial}. Observe that different initializations result in different calibrated parameters, which is most evident in the parameter $F_{\rm mut}$ for the random and MF/MMSE initializations. Looking at the convergence in Fig. \ref{fig:pso_convergence_4x4}, one can notice that with MF and MMSE initialization the number of iterations until convergence is reduced in comparison with the random initialization case, and consequently so is the complexity of the algorithm; as the $E_b/N_0$ value increases, more iterations are required. The start and final values after the calibration procedure for both PSO and DE heuristic-based detectors are summarized in Tables \ref{tab:inputParametersPso} and \ref{tab:inputParametersDe}. \begin{table}[htbp] \centering \caption{Input parameters of PSO after calibration, considering $E_b/N_0=24$\,dB, different initial points and spatial correlation.} \label{tab:inputParametersPso}% \begin{tabular}{rl} \toprule {\bf Parameter} & {\bf Value}\\ \midrule {$N_{\rm iter}^{\rm start}$} & [100; 20]\\ $c_1^{\rm start}$ & 2 \\ $c_2^{\rm start}$ & 2 \\ $w^{\rm start}$ & 1 \\ \hline {$N_{\rm iter}^{\rm rand}$} & {100} \\ $c_1^{\rm rand}$ & 4 \\ $c_2^{\rm rand}{(\rho)}$ & {1 ($0$) \,\,0.5 ($0.5$) \,\, 1 ($0.9$)} \\ $w^{\rm rand}{(\rho)}$ &{1.5 ($0$) \,\,1.5 ($0.5$) \,\, 3.5 ($0.9$)} \\ \hline {$N_{\rm iter}^{\textsc{mf}}$} & $\in [5; 25]$ \\ $c_1^{\textsc{mf}}$ & 4 \\ $c_2^{\textsc{mf}}{(\rho)}$ & {0.5 ($0$) \,\,0.5 ($0.5$) \,\,1 ($0.9$)} \\ $w^{\textsc{mf}}{(\rho)}$ &1.5 ($0$) \,\,{2 ($0.5$) \,\, 2.5 ($0.9$)} \\ \hline {$N_{\rm iter}^{\textsc{mmse}}$} & $\in [5; 25]$ \\ $c_1^{\textsc{mmse}}(\rho)$ & 3.5 ($0$) \,\, 4 ($0.5$)\,\, 4 ($0.9$)\\ $c_2^{\textsc{mmse}}{(\rho)}$ & {0.5 ($0$)
\,\,0.5 ($0.5$) \,\,0.5 ($0.9$)} \\ $w^{\textsc{mmse}}{(\rho)}$ &2 ($0$) \,\,{3 ($0.5$) \,\, 3 ($0.9$)} \\ \bottomrule \end{tabular}% \end{table}% \begin{table}[htbp] \centering \caption{ Input parameters of the DE algorithm after calibration, considering $E_b/N_0=24$\,dB, different initial points and spatial correlation.} \label{tab:inputParametersDe}% \begin{tabular}{rl} \toprule {\bf Parameter} & {\bf Value}\\ \midrule $N_{\rm gen}^{\rm start}$ & [100; 20] \\ $F_{\rm mut}^{\rm start}$ & 1 \\ $F_{\rm cr}^{\rm start}$ & 0.5 \\ \hline $N_{\rm gen}^{\rm rand}$ & {100} \\ $F_{\rm cr}^{\rm rand}{(\rho)}$ & {0.6 ($0$) \,\,0.6 ($0.5$) \,\, 0.8 ($0.9$)} \\ $F_{\rm mut}^{\rm rand}(\rho)$ & {0.6 ($0$) \,\,0.8 ($0.5$) \,\, 1.8 ($0.9$)} \\ \hline $N_{\rm gen}^{\textsc{mf}}$ & $\in [5; 25]$ \\ $F_{\rm mut}^{\textsc{mf}}(\rho)$ & {2 ($0$) \,\,2 ($0.5$) \,\, 2 ($0.9$)} \\ $F_{\rm cr}^{\textsc{mf}}{(\rho)}$ & {0.8 ($0$) \,\,0.7 ($0.5$) \,\, 0.9 ($0.9$)} \\ \hline $N_{\rm gen}^{\textsc{mmse}}$ & $\in [5; 25]$ \\ $F_{\rm mut}^{\textsc{mmse}}(\rho)$ & {1.7 ($0$) \,\,2 ($0.5$) \,\, 2 ($0.9$)} \\ $F_{\rm cr}^{\textsc{mmse}}{(\rho)}$ & {0.6 ($0$) \,\,0.7 ($0.5$) \,\, 0.8 ($0.9$)} \\ \bottomrule \end{tabular}% \end{table}% \begin{figure*}[!htbp] \centering \subfloat[Calibration: varying parameters and evaluating performance. ]{% \includegraphics[width=.49\textwidth]{psoVarParRound1_4x4_4QAM.eps} } \hfill \subfloat[Calibration of input parameters of the PSO-MF algorithm. ]{% \includegraphics[width=.49\textwidth]{psoVarParMf_4x4_4QAM.eps} } \\ \subfloat[Calibration of input parameters considering the PSO-MMSE algorithm. ]{% \includegraphics[width=.48\textwidth]{psoVarParMmse_4x4_4QAM.eps} } \hfill \subfloat[{Convergence analysis for 4-QAM, $4 \times 4$ MIMO-OFDM with the PSO detector considering different values of $E_b/N_0$.
} \label{fig:pso_convergence_4x4} ]{% \includegraphics[width=0.49\textwidth]{pso_convergence4x4_4QAM.eps} } \\ \caption{ Calibration of input parameter values for the 4-QAM $4 \times 4$ MIMO-OFDM PSO detection problem operating under medium-high SNR and different spatial correlation indexes. } \label{fig:pso_calib_4x4} \end{figure*} \begin{figure*}[!htbp] \centering \subfloat[Calibration of input parameters of DE with uniformly random initialization. ]{% \includegraphics[width=.49\textwidth]{deVarParRound1_4x4_4QAM.eps} } \hfill \subfloat[Calibration of input parameters of DE with MF initialization. ]{% \includegraphics[width=.49\textwidth]{deVarParMf_4x4_4QAM.eps} } \\ \subfloat[Calibration of input parameters of DE with MMSE initialization. ]{% \includegraphics[width=.49\textwidth]{deVarParMmse_4x4_4QAM.eps} } \hfill \subfloat[Convergence of the DE-aided detector for MIMO-OFDM systems for different spatial correlation values. \label{fig:de_convergence4x4_4QAM} ]{% \includegraphics[width=.49\textwidth]{de_convergence4x4_4QAM.eps} } \\ \caption{ Calibration of input parameters of the DE heuristic applied to MIMO-OFDM detection for different values of correlation. } \label{fig:deVarParRound1-2} \end{figure*} \subsection{Performance Analysis} After input parameter calibration, the BER performance of the heuristic and hybrid MIMO-OFDM detectors was numerically obtained. In Figs. \ref{fig:hybridPso} and \ref{fig:hybridDe}, the initial solution provided by the MMSE detector is considered. We observe that, as the number of iterations increases, the MMSE solution is refined, and after 15 iterations the improvement in BER performance becomes marginal for both the DE-MMSE and PSO-MMSE algorithms. In Figs. \ref{fig:hybridPsoMf} and \ref{fig:hybridDeMf}, a similar behavior is observed.
We note that the initial point influences the performance of PSO-based detectors: indeed, the PSO-MMSE provides better results in terms of BER than PSO-MF, but this effect is marginal for DE-MF and DE-MMSE, where similar performance is achieved after 15 iterations. In Fig. \ref{fig:mimo_ofdm_correlation}, the performances of linear, heuristic and hybrid MIMO-OFDM detection approaches are compared. We observe that PSO-MMSE provides the performance closest to ML, and that the hybrid approaches provide similar or better performance than the conventional heuristics. For highly correlated scenarios, the overall performance is worsened. For PSO-MMSE, the gain in performance is evident in contrast to the other linear and heuristic detectors. In general, spatial correlation considerably degrades the performance of all the studied detectors. However, hybrid heuristic-linear MIMO-OFDM detectors are suitable choices for MIMO systems operating under low or even moderate antenna correlation. \begin{figure}[ht] \centering \subfloat[Performance of hybrid algorithm PSO-MMSE.\label{fig:hybridPso} ]{% \includegraphics[width=.49\textwidth]{performance_pso_hybrid_4x4_4QAM.eps}} \hfill \subfloat[{Performance of hybrid algorithm DE-MMSE. \label{fig:hybridDe}} ]{% \includegraphics[width=.48\textwidth]{performance_de_hybrid_4x4_4QAM.eps}} \\ \caption{Performance of the MMSE-hybrid algorithm considering ULA with different values of $E_b/N_0$, spatial correlation and increasing number of iterations. } \label{fig:performance_hybrid_4x4_4QAM} \end{figure} \begin{figure}[ht] \centering \subfloat[Performance of hybrid algorithm PSO-MF. \label{fig:hybridPsoMf}]{% \includegraphics[width=.49\textwidth]{performance_pso_hybrid_MF_4x4_4QAM.eps}} \hfill \subfloat[{Performance of hybrid algorithm DE-MF.
\label{fig:hybridDeMf}} ]{% \includegraphics[width=.48\textwidth]{performance_de_hybrid_MF_4x4_4QAM.eps}} \\ \caption{Performance of the MF-hybrid algorithm considering ULA with different values of $E_b/N_0$, spatial correlation and increasing number of iterations.} \label{fig:performance_MF_4x4_4QAM} \end{figure} \begin{figure}[!htb] \centering \hspace{-3mm}\includegraphics[width=.5\textwidth]{performance_mimo_ofdm_detectorAll_v6.eps} \vspace{-5mm} \caption{BER performance for 4-QAM, $4 \times 4$ linear array (ULA) antennas MIMO-OFDM for different detectors under different values of spatial correlation and SNR. } \label{fig:mimo_ofdm_correlation} \end{figure} \subsection{Complexity Analysis}\label{ref:complexity} To analyze the complexity of the detection algorithms, the number of \textsc{flop}s over real numbers is considered. The \textsc{flop}s are defined as floating-point addition, subtraction, multiplication or division operations \cite{golub2012}. In this evaluation, the Hermitian operator and the \texttt{if} conditional step were disregarded. In practice, some platforms use hardware random number generators, in which an electronic circuit generates the random numbers, so the \textsc{flop} cost of generating random numbers was also ignored. Table \ref{tab:referenceFlops} describes the number of \textsc{flop}s needed for the main operations considered herein, while in Table \ref{tab:flopLinearHeuristicDetectors} the full complexity expressions ($\Upsilon$) for the analyzed MIMO-OFDM detectors are shown. In Fig. \ref{fig:flops}, the complexity is described considering typical values, {\it i.e.}, $N_{\dim} = 2N_t; N_t = N_r; N_{\rm ind} = N_{\rm pop} = 5\cdot N_{\dim}$, and admitting the number of iterations up to the convergence obtained previously through simulations, as shown in Figs. \ref{fig:pso_convergence_4x4} and \ref{fig:de_convergence4x4_4QAM} for the heuristic algorithms, and for the hybrid algorithms in Figs.
\ref{fig:performance_hybrid_4x4_4QAM} and \ref{fig:performance_MF_4x4_4QAM}. From Table \ref{tab:flopLinearHeuristicDetectors}, it can be observed that the DE algorithm requires more \textsc{flop}s than PSO, since it evaluates the fitness function $2N_{\rm pop}$ times per iteration in eq. \eqref{eq:selection}, for the individuals and the crossover vectors. The complexity of the linear detectors is almost the same, differing from each other by a scalar-matrix multiplication and a matrix-matrix sum in eqs. \eqref{eq: ZFD} and \eqref{eq:MMSED}. Moreover, observing the hybrid heuristic-linear MIMO-OFDM detectors in Figs. \ref{fig:performance_hybrid_4x4_4QAM} and \ref{fig:performance_MF_4x4_4QAM}, the improvement in performance starts to stagnate around 15 iterations, and so $\mathcal{I}_{\rm hyb}=15$ has been considered as the number of iterations of the hybrid algorithm to attain the best performance-complexity tradeoff. \begin{table}[htbp] \centering \caption{Number of \textsc{flop}s, considering a vector and matrices ${\bf w}\in \mathbb{R}^{q \times 1}, {\bf A}\in\mathbb{R}^{m\times q}, {\bf B}\in\mathbb{R}^{q\times p}, {\bf C}\in\mathbb{R}^{m\times p}, {\bf D}\in\mathbb{R}^{q\times q}$. } \label{tab:referenceFlops}% \begin{tabular}{p{0.31\textwidth}p{0.08\textwidth}} \toprule \bf Operation & \# \textsc{flop}s\\ \midrule Square root $\sqrt{.}$ & {8} \\ Norm-2, $\sqrt{{\bf w}^T{\bf w}}$ & $2q-1 + 8$\\ Matrix-vector multiply {\bf Aw} & {$m(2q-1)$} \\ {Matrix-matrix multiply ${\bf AB}$} & {$mp(2q-1)$} \\ Matrix multiply-add ${\bf AB+C}$ & $2mpq$ \\ Matrix inversion with LU factorization of {\bf D} \cite{BoydNumerical} & $2/3q^3 + 2q^2$ \\ \bottomrule \end{tabular}% \end{table}% Heuristic detection algorithms produce better BER performance at the cost of an increase in computational complexity compared with the linear detectors ZF and MMSE, mainly due to the population/swarm size (around $5$ to $10\cdot N_{\rm dim}$) and the number of iterations necessary to attain convergence.
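As a quick numerical check, the complexity expressions of Table \ref{tab:flopLinearHeuristicDetectors} can be evaluated for the simulated setup (a sketch; the function names are ours):

```python
# FLOP counts per subcarrier, mirroring Table tab:flopLinearHeuristicDetectors
def flops_mf(nt, nr):
    return 2 * nt * (4 * nr - 1)

def flops_mmse(nt, nr):
    return 16 / 3 * nt**3 + 8 * nt**2 + 32 * nt**2 * nr + 4 * nt * nr

def flops_pso(nt, nr, n_pop, iters):
    return n_pop * iters * (8 * nt * nr + 20 * nt + 4 * nr + 7)

def flops_de(nt, nr, n_ind, iters):
    return n_ind * iters * (16 * nt * nr + 12 * nt + 8 * nr + 14)

def flops_pso_mmse(nt, nr, n_pop, iters_hyb):
    # hybrid: heuristic refinement started from the MMSE solution
    return flops_pso(nt, nr, n_pop, iters_hyb) + flops_mmse(nt, nr)

# Typical values used in the complexity figure: Nt = Nr = 4, N_pop = 5 * 2Nt = 40
cost_pso = flops_pso(4, 4, 40, 50)       # conventional PSO, I = 50
cost_hyb = flops_pso_mmse(4, 4, 40, 15)  # PSO-MMSE, I_hyb = 15
```

For these values the hybrid PSO-MMSE needs well under half the \textsc{flop}s of conventional PSO, since the MMSE initialization cost is negligible next to the savings from the reduced iteration count.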
In order to reduce the complexity, both hybrid linear-heuristic algorithms, combining MF/MMSE and evolutionary-heuristic techniques, were analyzed. The PSO-MF provides computational complexity close to that of the linear approaches even for $N_t=256$ antennas, while PSO-MMSE has a computational complexity similar to that of DE-MF. Although the linear MMSE and the heuristic algorithms have slightly higher computational complexity than the other linear approaches, they also improve the BER performance. Moreover, evolutionary heuristics may be more flexible for hardware implementation: parallelization, the ability to handle non-differentiable and nonlinear functions \cite{Storn1997}, and the possibility of truncating the number of iterations to achieve different performance-complexity trade-offs in scenarios that do not require very low BER levels (for example, with the MF hybrid) make them good choices for real applications. \begin{table}[!htbp] \centering \caption{Number of \textsc{flop}s per subcarrier for the MIMO-OFDM detectors, with $\underline{\bf H} \in \mathbb{R}^{2N_r \times 2N_t}$, $\underline{\bf y} \in \mathbb{R}^{2N_r \times 1}$, $N_{\rm dim} = 2N_t$.} \label{tab:flopLinearHeuristicDetectors}% \tiny \begin{tabular}{p{0.18\textwidth}p{0.27\textwidth}} \toprule \bf Detector & \bf Number of Operations \\ \midrule $\Upsilon_\textsc{MF}(N_t, N_r)$ & $2N_t(4N_r - 1)$ \\ {$\Upsilon_\textsc{ZF}(N_t, N_r)$} & {$\dfrac{16}{3}N_t^3 + 4N_t^2 + 32N_t^2 N_r + 4N_tN_r - 2N_t$ } \\ {$\Upsilon_\textsc{MMSE}(N_t, N_r)$} & {$\dfrac{16}{3}N_t^3 + 8N_t^2 + 32N_t^2N_r + 4N_tN_r$ } \\ {$\Upsilon_\textsc{PSO}(N_t, N_r, N_{\rm pop}, \mathcal{I})$} & {$N_{\rm pop} \mathcal{I} ( 8N_tN_r + 20N_t + 4N_r + 7)$ } \\ {$\Upsilon_\textsc{DE}(N_t, N_r, N_{\rm ind}, \mathcal{I})$} & { $N_{\rm ind} \mathcal{I} (16N_tN_r + 12N_t + 8N_r + 14) $}\\ {$\Upsilon_\textsc{PSO-MMSE}(N_t, N_r, N_{\rm pop}, \mathcal{I}_{\rm hyb})$} & {$\Upsilon_\textsc{PSO}(N_t, N_r, N_{\rm pop}, \mathcal{I}_{\rm hyb}) + \Upsilon_\textsc{MMSE}(N_t, N_r)$ }
\\ {$\Upsilon_\textsc{DE-MMSE}(N_t, N_r, N_{\rm ind}, \mathcal{I}_{\rm hyb})$} & {$\Upsilon_\textsc{DE}(N_t, N_r, N_{\rm ind}, \mathcal{I}_{\rm hyb}) + \Upsilon_\textsc{MMSE}(N_t, N_r)$ }\\ $\Upsilon_\textsc{PSO-MF}(N_t, N_r, N_{\rm pop}, \mathcal{I}_{\rm hyb})$ & $\Upsilon_\textsc{PSO}(N_t, N_r, N_{\rm pop}, \mathcal{I}_{\rm hyb}) + \Upsilon_\textsc{MF}(N_t, N_r)$ \\ $\Upsilon_\textsc{DE-MF}(N_t, N_r, N_{\rm ind}, \mathcal{I}_{\rm hyb})$ & $\Upsilon_\textsc{DE}(N_t, N_r, N_{\rm ind}, \mathcal{I}_{\rm hyb}) + \Upsilon_\textsc{MF}(N_t, N_r)$\\ \hline $\Upsilon_\textsc{ML}(N_t, N_r, \mathcal{M})$ & $\mathcal{M}^{2N_t} ( 8N_tN_r + 4N_r + 7) $\\ \bottomrule \multicolumn{2}{l}{$\mathcal{I}:$ \# iterations for conventional algorithms }\\ \multicolumn{2}{l}{$\mathcal{I}_{\rm hyb}:$ \# iterations for the hybrid algorithm}\\ \end{tabular} \end{table} \begin{figure}[!htbp] \centering \includegraphics[width=0.49\textwidth]{flopsLog.eps} \vspace{-5mm} \caption{MIMO-OFDM complexity considering an increasing number of antennas for linear, heuristic and hybrid detectors in a point-to-point scenario; $N_t = N_r, N_{\rm dim} = 2N_t, N_{\rm pop} = N_{\rm ind} = 5\cdot N_{\rm dim}, \mathcal{I} = 50, \mathcal{I}_{\rm hyb} = 15$. } \label{fig:flops} \end{figure} \section{Conclusions}\label{sec:conclusions} Extensive simulations were carried out, and suitable input parameters for the evolutionary heuristics PSO and DE were calibrated numerically, aiming to find practical solutions for the MIMO-OFDM detection problem. Hybrid approaches considering MF and MMSE as initial solutions have also been considered, in which the linear initial solution is improved while the number of iterations of the heuristic algorithms is reduced. Among the analyzed MIMO-OFDM detectors, the hybrid PSO-MMSE provided near-ML performance for the considered scenarios, {\it i.e.}, $\rho = 0$ (uncorrelated), $\rho = 0.5$ and $\rho = 0.9$.
However, the BER performance has been shown to be sensitive to the initialization. For PSO-MF, the performance was similar to conventional PSO, with the advantage of a reduced number of iterations until convergence. For DE, almost the same BER performance was achieved using MF and MMSE. In terms of complexity, ZF and MMSE require almost the same number of \textsc{flop}s, although MMSE requires some statistical knowledge of the channel condition. Among the heuristic detectors, DE requires more \textsc{flop}s than PSO, mainly because the number of fitness function evaluations is higher: in DE the fitness is calculated for both $\boldsymbol{\iota}_k$ and $\boldsymbol{\psi}_k$, $k = 1,\dots, N_{\rm ind}$, per iteration, compared with $N_{\rm pop}$ evaluations per iteration in PSO (in the simulations, $N_{\rm pop}=N_{\rm ind}$). To improve the complexity-performance tradeoff, this work proposed and evaluated two linear-heuristic hybrid algorithms suitable for the MIMO-OFDM detection problem. Starting from a solution obtained with the MMSE and MF linear detectors, the DE and PSO heuristics were executed to further improve the BER performance, substantially improving the performance-complexity tradeoff even under low and medium spatial correlation scenarios. Numerical simulations have demonstrated that, with both hybrid algorithms, the number of iterations required for convergence is reduced, while the DE- and PSO-hybrid detectors achieve similar or slightly better performance than conventional DE and PSO. \section*{Acknowledgment} This work was supported in part by the National Council for Scientific and Technological Development (CNPq) of Brazil under Grants 130464/2015-5 (Scholarship) and 304066/2015-0 (Researcher grant), in part by Araucaria Foundation, PR, under Grant 302/2012 (Research) and by Londrina State University - Paraná State Government, Brazil.
\section{Introduction} \langed{If the topology of the Universe were multiply connected, as opposed to simply-connected, and if the comoving size of the fundamental domain (FD) were smaller than the comoving distance to the surface-of-last-scattering (SLS), then it should be possible to detect repeating patterns in the CMB fluctuations using full-sky data of sufficient signal-to-noise ratio.} These fluctuations would be those lying along pairs of circles defined by points of intersection between different copies of the SLS in the covering space \citep{Corn98b}. These patterns, although found in different directions of the sky, would constitute so-called ``matched circles'', as they would represent the same physical points, but observed from different directions due to topological lensing.\\ While this principle is true for any 3-manifold model of space, the number of pairs of ``matched circles'' or their sizes and relative spatial orientations, as well as their handedness, or phase shift, depend significantly on the assumed 3-manifold and its topological properties, thus providing a way to observationally distinguish between models. \par While the positive correlation signal from matched pairs is expected directly from the metric perturbations, via the Sachs-Wolfe Effect \citep{SW67}, there are many other cosmological effects (e.g. the Doppler effect, the Integrated Sachs-Wolfe Effect (ISW)) \citep{2008PhRvD..77b3525K}, astrophysical foregrounds \citep{WMAPforegrounds} and instrumental effects that constitute noise, from the point-of-view of a matched circles search, and the magnitude of these effects depends on the angular scale. \par Although the CMB data have been analyzed to detect topological lensing signals since the availability of the COBE data \citep{Roukema00-3}, the release of the WMAP observations has provided full-sky data of unprecedented accuracy and resolution, \langed{opening up more promise for direct tests of the topology of the Universe. 
} Although the ``matched circles'' test is straightforward, it is limited due to noise and FD size constraints. Additional theoretical predictions can be used as independent tests that involve predictions of CMB temperature and polarization fluctuations \langed{for the case that the Universe is multiply connected}, both in real and spherical harmonic spaces, or topological effects on the CMB power spectrum \citep{2004CQGra..21.4901A,2003MPLA...18.2099W,2003PhLA..311..319G,2005MNRAS.358.1285D,2004PhRvD..69j3514R, LLU98,Inoue99, 2007PhRvL..99h1302N,Corn98a,deOliv95,2006PhRvD..73b3511K,2003Natur.425..593L, 2006AIPC..848..774N,2006ApJ...645..820P, 2007AA...476..691C}. Although a successful ``matched circles'' test would provide strong support for \langed{the Universe being multiply connected}, no statistically-significant evidence has been found \citep{2004PhRvL..92t1302C,2006astro.ph..4616S}. \par In \cite{RLCMB04} we performed a ``matched circles'' search using the first year WMAP ILC map \citep{WMAPforegrounds} and found an excess correlation, which one would expect under the PDS hypothesis for circles of angular radii $\alpha\sim 11^\circ$ with centers towards \lbdod{252,65}{51,51}{144,38}{207,10}{271,3}{332,25} and their opposites (Fig.~\ref{fig:thedodec}). \par In the present work we have two key objectives. First, we revisit those results, verify the existence of the excess correlations and quantify their statistical significance. Second, we update the search with the WMAP three year data release, extend it to probe three different resolutions (smoothing lengths) and define the detection confidence thresholds. We also discuss the effects of underlying 2-point correlations, smoothing length and incomplete sky coverage on the value of the correlation coefficient. 
\begin{figure}[!hbt] \renewcommand{\figurename}{Fig} \includegraphics[width=0.5\textwidth]{DODfig1.eps} \caption{Visualization of the matched circles solution reported in \protect\cite{RLCMB04} and reproduced in this paper to constrain its statistical significance, over-plotted on the first year ILC map masked with Kp2 sky mask.} \label{fig:thedodec} \end{figure} In section~\ref{sec:data_and_sims} we introduce the datasets used in the analysis, provide details of their preprocessing, and describe simulations that we use to complete a statistical significance analysis. In section~\ref{sec:statistics} we introduce the details of the statistics being performed and our confidence-level analysis. Results are presented in section~\ref{sec:results}. We conclude in section~\ref{sec:conclusions}. \section{Data and simulations} \label{sec:data_and_sims} We perform a ``matched circles'' search using two sets of data. Firstly, for the sake of compatibility, we choose the same data as in \cite{RLCMB04} -- i.e. the first year WMAP ILC map. \langed{The topologically interesting signal generally dominates over the Doppler (and other) components on large scales \citep{2004PhRvD..69j3514R}: this is a motivation for using a large smoothing length. } However, extended flat fluctuations that happen to have a similar large scale trend can lead to false positives on large scales \citep{2006astro.ph..4616S}. This implies a trade off in the choice of the smoothing length of the data, between large smoothing lengths preferred by the topologically interesting content, and small smoothing scales which avoid false positives induced by chance correlations of extended flat fluctuations. We choose to test three different smoothing scales: $FWHM\equiv\lambda\in\{1^\circ,2^\circ,4^\circ\}$. 
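In harmonic space, Gaussian smoothing to a target FWHM amounts to multiplying each $a_{\ell m}$ by a beam window $b_\ell = \exp[-\ell(\ell+1)\sigma_b^2/2]$ with $\sigma_b = \mathrm{FWHM}/\sqrt{8\ln 2}$. A minimal standalone sketch of this windowing (NumPy; an illustration, not our pipeline code):

```python
import numpy as np

def gaussian_beam_window(fwhm_deg, lmax):
    """Gaussian beam transfer function b_ell for harmonic-space smoothing."""
    sigma = np.radians(fwhm_deg) / np.sqrt(8.0 * np.log(2.0))  # FWHM -> sigma (radians)
    ell = np.arange(lmax + 1)
    return np.exp(-0.5 * ell * (ell + 1) * sigma**2)

# Taking a map with an effective 1 degree beam to 2 degrees: multiply its
# a_lm by the ratio of the two windows (Gaussian widths add in quadrature).
b1 = gaussian_beam_window(1.0, 512)
b2 = gaussian_beam_window(2.0, 512)
extra_smoothing = b2 / b1
```

The ratio trick is what allows further smoothing of an already-smoothed map without deconvolving it first.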
The ILC map was obtained from \langed{a linear combination of one degree smoothed maps in the five frequency bands, by inverse noise co-adding them,} hence its resolution is consistent with a one degree smoothing scale. \langed{ We further Gaussian smooth this map in spherical harmonic space by convolving it with Gaussian beam response kernels of FWHM corresponding to $2^\circ$ and $4^\circ$ respectively, to obtain the first set of data for matched circles tests.} \par \langed{Secondly}, we choose the three year foreground reduced WMAP data from individual frequency bands Q[1/2],V[1/2] and W[1/2/3/4] and co-add them into one map, according to the inverse noise weighting scheme \langed{used in} \cite{2003ApJS..148..135H}. We call the resultant \langed{map the ``INC map''.} \langed{We smooth the INC map using a Gaussian convolution kernel to four different FWHM smoothing lengths: $\lambda\in\{0.5^\circ,1^\circ,2^\circ,4^\circ\}$.} \par We downgrade all data from the initial resolution defined by the Healpix pixelization scheme \citep{Healpix2005} with resolution parameter $n_s=512$ (res. 9) to a resolution parameter of $n_s=256$ (res. 8). We remove the residual monopole and dipole components ($\ell=0,1$) in spherical harmonic space\footnote{Since the residual foregrounds in the WMAP maps are strong, we perform this step using the Kp2 sky mask to maintain compatibility between the data and simulations.}, because these components are of no cosmological interest. At the final stage of preprocessing, we remove the residual monopole by offsetting the maps in real \langed{(2-sphere)} space so that $\langle T\rangle = 0$ outside the Kp2 sky mask. \par Throughout the analysis, \langed{i.e. for both the ILC and INC maps}, we use the Kp2 sky mask, which masks $\sim 15$\% of the sky including the brightest \langed{resolved} point sources. \langed{The Kp2 sky mask is} different from the sky mask used in ~\cite{RLCMB04}. \langed{While the ILC map is best suited e.g. 
for the full sky low multipoles alignment analysis, for the purpose of the matched circles test, the residual galactic contamination should be masked out, although we realize that the use of the Kp2 mask \langed{may} be too conservative.} \langed{In App.~\ref{sec:S-gal-cut} we compare the impact of different sky masks and demonstrate that our results are not very sensitive to the precise characteristics of the sky mask.} \langed{ For each of the two data sets, we produce $\Nsim=100$ realistic Gaussian random field (GRF) signal and noise simulations of the WMAP data to quantify the statistical significance of plausible detections, to discard false positives, and to resolve the $2\sigma$-confidence levels.} \langed{ Therefore, for the first dataset we simulate the first year ILC map, inside ``region 0'' defined outside the Kp2 sky mask of \cite{WMAPforegrounds}, and for the second dataset we simulate the three year WMAP INC map. } \par As will be shown in Sect.~\ref{sec:statistics}, the matched circles correlation coefficient depends on the monopole value in the map. Also, in principle it is sensitive to the shape of the 2-point correlation function, since the correlator is a 2-point statistic, by construction, and so it becomes a measure of the underlying intrinsic 2-point correlations in the CMB (albeit via a specially selected subset of pairs of points on the matched circles). Therefore it is necessary to take into account possible variations in the underlying 2-point correlation function with varying angular separation, which if not properly accounted for in simulations may lead to under(over)-estimation of the confidence level thresholds. 
\langed{Given that the concordance best fit LCDM cosmological model \citep{2007ApJS..170..377S} yields a very poor fit to the CMB data at large angular scales, due to lack of correlations in the 2-point correlation function of the data with respect to the LCDM model at scales $>60^\circ$, and that the correlation statistic is sensitive to the details of the intrinsic 2-point CMB correlations (and in particular to any large scale anomalies), we do not assume the LCDM model to help create our simulations. Instead, we take a model independent approach.} \langed{As the CMB reference power spectrum in our GRF simulations of the expected signal, we use the reconstructed power spectrum from the three-year WMAP data \citep{2007ApJS..170..288H}} \footnote{In the high $\ell$ end (noise dominated range) of the reconstructed power spectrum, the unphysical negative values are zeroed to have a zero contribution to the total variance of the map. This approximation has a negligible effect due to small statistical weight of the large $\ell$ multipoles, and large exponential Gaussian smoothing that we apply to the data. \langed{In practice, this approximation has a negligible effect on the variance of the resulting simulation. Moreover, it can at most} only make our analysis more conservative. }. \langed{Furthermore, we neglect the effects of cosmic variance, and only randomize the phases (and noise realizations) in our simulations. We remove the $C_{\ell=0,1}$ (i.e. the monopole and dipole) components from our simulations.\\} We use the same set \langed{of $a_{lm}$s representing the CMB signal} for a single simulation of the two datasets, followed by convolution with instrumental beam profiles.\\ \langed{For each differential assembly (DA), we simulate the noise according to its properties and scanning strategy (number of observations per pixel in map) using uncorrelated, Gaussian noise.} \par The simulations are preprocessed in exactly the same way as the observational data. 
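The simulation recipe above (fixed reference power spectrum, randomized phases only, monopole and dipole removed) can be illustrated with a toy one-dimensional Fourier analogue. Here `cl_ref` is a hypothetical placeholder spectrum, not the reconstructed WMAP one, with unphysical negative values zeroed as described in the footnote:

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_randomized_field(power, rng):
    """Toy 1-D analogue of the GRF simulations: keep the reference power
    spectrum fixed (no cosmic variance) and draw only random Fourier phases."""
    power = np.asarray(power, float)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=power.size)
    coeffs = np.sqrt(power) * np.exp(1j * phases)
    coeffs[0] = 0.0                   # remove the monopole analogue
    coeffs[1] = 0.0                   # remove the dipole analogue
    coeffs[-1] = np.abs(coeffs[-1])   # Nyquist coefficient must be real
    return np.fft.irfft(coeffs)

# hypothetical stand-in for a reconstructed spectrum; negative values zeroed
cl_ref = np.maximum(1.0 / (1.0 + np.arange(64.0)) - 0.02, 0.0)
sim = phase_randomized_field(cl_ref, rng)
```

By construction every realization has exactly the reference power in each mode; only the phases differ between realizations, mirroring the choice to neglect cosmic variance.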
\langed{We neglect the impact of the (resolved or unresolved) point sources which is negligible, since we apply relatively large smoothing and use the Kp2 sky mask for the analysis.} In Appendix A, we discuss the sensitivity of our results to the degree of smoothing, the sky mask applied, and the assumed statistical approach in greater \langed{detail}. \section{Statistics} \label{sec:statistics} We describe our correlator statistics, parameter space, search optimization and approach for assessing the statistical significance. \subsection{Matched circles test} As in \cite{2004PhRvL..92t1302C} and \cite{RLCMB04} we use a correlation \langed{statistic} of the form \begin{equation} S = 2 \frac{\langle T_i m_i T_j m_j\rangle}{\langle T^2_i m_i m_j\rangle+ \langle T^2_j m_i m_j\rangle} \label{eq:Sstat} \end{equation} where the index $i$ defines a set of all points in the ``first'' set of six circles related to the orientation of a fundamental dodecahedron; index $j$ is the set of corresponding points along the matched six circles; and $m_{i}, m_j$ \langed{are cut} sky weights of the Kp2 sky mask, which can have a value of either $0$ for a masked pixel or $1$ for an unmasked pixel. \langed{Clearly, perfectly matched circles would yield $S=1$, which, due to non-zero noise contributions, is not possible in reality.} \langed{ The dispersion of the correlation coefficient as defined in Eq.~\ref{eq:Sstat} is statistically enhanced in the small circles regime, due to the joint effect of the reduced number of points probing the matched circles as compared to larger circles, the accidental correlations of large (w.r.t. the smoothing scale) flat fluctuations that happen to have similar (or opposite) large scale trends, as well as due to the fact that the r.m.s. values necessarily shrink (down to zero in case of zero mean fluctuations) for circles of size comparable or smaller than the smoothing length. 
} As shown in Sect.~\ref{sec:results}, this reduces the ability to robustly determine the \langed{degree} of consistency or inconsistency of data with simulations, due to the finite accuracy of the $S$ values and significant steepening of confidence-level contours in this regime. We note that the $S$ statistic value would tend to unity, regardless of the shape of the underlying CMB fluctuations, as the monopole increases in the CMB maps. One could expect \langed{a} similar effect for \langed{the} dipole component, for small circles. This effect would affect the simulations and the data to the same extent. The sensitivity of the test would, however, be significantly weakened, and as such we remove the monopole and dipole components from the datasets for the analysis, and defer study of the impact of other small $\ell$ multipoles to appendix~\ref{app:Sdependences}. \subsection{Parameter space} \label{sec:parspace} We perform a resolution-limited, full parameter-space search over the orientation of the fundamental dodecahedron, and over a limited range of the identified circle sizes of up to $20^\circ$. The parameters are defined as follows: $l,b$ -- galactic longitude and latitude of the first circle, $g$ -- the angle of rotation of the dodecahedron about the axis determined by $(l,b)$, $a$ -- the angular radius of the matched circle, and $s$ -- the twist parameter defining the relative phase offset of the matched circles. \\ We use the following parameter space: \begin{equation} \begin{array}{ccl} l &\in& [ 0^\circ,72^\circ)\\ b &\in& [ ~26.57^\circ, 90^\circ)\\ g &\in& [ 0^\circ,72^\circ)\\ a &\in& [ 1^\circ,20^\circ]\\ s &\in& \{-36^\circ, 0^\circ, 36^\circ\}\\ \end{array} \label{eq:param_grid} \end{equation} The boundaries in $(l,b)$ \langed{conservatively} cover a larger region than the one twelfth of the sphere from which a ``first'' circle centre can be chosen non-redundantly. 
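For concreteness, the correlator of Eq.~(\ref{eq:Sstat}) reduces to a few lines of array arithmetic. The sketch below (standalone NumPy, not our production code) takes the temperatures sampled along a candidate pair of circles together with their 0/1 cut-sky weights:

```python
import numpy as np

def s_statistic(Ti, Tj, mi, mj):
    """Matched-circles correlator:
    S = 2 <Ti mi Tj mj> / (<Ti^2 mi mj> + <Tj^2 mi mj>)."""
    Ti, Tj = np.asarray(Ti, float), np.asarray(Tj, float)
    w = np.asarray(mi, float) * np.asarray(mj, float)  # joint cut-sky weight
    num = 2.0 * np.sum(Ti * Tj * w)
    den = np.sum(Ti**2 * w) + np.sum(Tj**2 * w)
    return num / den if den > 0 else 0.0

# a perfectly matched pair (identical, fully unmasked) gives S = 1
T = np.array([0.3, -1.2, 0.7, 2.0])
ones = np.ones_like(T)
s_perfect = s_statistic(T, T, ones, ones)
```

Since both circles are sampled at the same number of points, the ratio of sums above equals the ratio of the averages written in Eq.~(\ref{eq:Sstat}).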
The range of angle $g$ is 72$^\circ$, to cover all possible orientations of the fundamental dodecahedron for a chosen ``first'' circle centre. Values larger than $72^\circ$ would yield the same set of 12 circle centres as a rotation by that angle modulo $72^\circ$. The interval in circle size $a$ is chosen to be roughly symmetric and centered about the $11^\circ$ value suggested by \citet{RLCMB04}. The three twists $s$ are chosen as in \citet{RLCMB04}. For all datasets we use the same resolution of $1^\circ$ in probing the parameter space, except for the data with smoothing length $\lambda=0.5^\circ$, in which case we use a resolution of $0.5^\circ$. \subsection{Accuracy and search optimization} \begin{figure*}[!hbt] \centering \renewcommand{\figurename}{Fig} \includegraphics[angle=-90,width=0.49\textwidth]{DODfig2.eps} \includegraphics[angle=-90,width=0.49\textwidth]{DODfig3.eps} \caption{Left panel: Convergence of $S$ values to the ``ideal'' fiducial value $\Sfid =S(r=1000)$ as a function of resolution parameter $r$ (a sampling density resolution parameter, defining the number of pixels to be used, to probe the CMB fluctuations along circles through Eq.~(\protect\ref{eq:Npix})) and as a function of circle size $a$. The $\Npix(a)$ function shape for a given smoothing length is fitted linearly, so that the accuracy in $S$ values was approximately constant for all considered circles radii. The assumed working precision level of $\Delta S=0.01$ is marked with thick horizontal line. For clarity, only $\Delta S$ relation derived for data smoothed with Gaussian $\lambda=0.5^\circ$ is shown. Similar relations are obtained for the remaining three smoothing lengths. The average value (black thick line) from all tested circles radii is used to define the required value of $r$ parameter for the circle search with data smoothed to $0.5^\circ$, in order to achieve the targeted accuracy on $S$ value. 
Right panel: Average $S$ convergence relations derived for data with different smoothing lengths $\lambda\in\{0.5^\circ,1^\circ,2^\circ,4^\circ\}$, along with $1\sigma$ error bars from 20 simulations. The intersections of these with the horizontal (black thick) line give the required values of $r$ for each smoothing length in order to obtain the assumed working precision of $\Delta S=0.01$. } \label{fig:convergence} \end{figure*} The resolution of the data that we analyze \langed{is spatially constant and} is limited by the finite pixel size, so circles of different sizes are probed by different \langed{numbers} of pixels. \\ \langed{As the parameter space of the search is large, it is important to consider the trade-off between the accuracy of the estimates of $S$ (directly related to the number of pixels probing the underlying fluctuations) and the numerical computational time needed to obtain better accuracy. However, the speed of the search can be substantially increased, since the effective resolution of the data in our case is not limited by the pixel size, but rather by the smoothing length.} \par In this section, we focus on the density of points (probing the fluctuations along circles in the sky) required to obtain a given accuracy in estimating $S$, and its dependence on \langed{the angular radius of the circles $a$,} a circle sampling density parameter $r$, the map resolution parameter $\nside$, and the smoothing length properties of the data. \langed{Maps smoothed with larger smoothing lengths have fewer significant, high-spatial frequency Fourier modes, and there is no need for fine sampling in order to fully encode the information content along the circles. } Assessing the same level of precision for smaller circles also requires a smaller number of pixels than for larger circles. We \langed{perform} a series of tests to determine the sampling density required to achieve our desired accuracy level. 
The tests rely on measuring the speed of convergence to the ``ideal'' fiducial $\Sfid$ value, derived using far more points in the circle than the number of available pixels along the circle in our datasets\footnote{ For all directions pointing inside a single pixel, the same temperature value of that pixel is used.}, as a function of the increasing sampling density. \langed{We empirically model the circle sampling-density function} in such a way that \langed{for a given} $r$ value parameter, and for a given smoothing length of the data, the accuracy of the resulting $S$ values (i.e. the statistical size of the departure from the fiducial value) is approximately the same for all circle sizes (Fig.~\ref{fig:convergence} left panel). We use the following fitted function: \begin{equation} \Npix = \Bigl(3.40 a [deg] + 76.85\Bigr) \Bigl(\frac{r}{32}\Bigr)\Bigl(\frac{n_s}{256} \Bigr) \label{eq:Npix} \end{equation} where $\Npix$ is the number of pixels used for calculation of $S$, for a circle of angular size $a$, and for a map of resolution $\nside$. The resolution parameter $r$ controls the sampling density. In practice, we choose the closest, even integer as an $\Npix$ value for the calculations. This empirically-devised formula yields approximately the same accuracy of derived values of S for all circle sizes (Fig.~\ref{fig:convergence} left panel), and holds for all smoothing lengths. The aim is to find a value of $r$, for each smoothing length, which will provide sufficient accuracy. \par We therefore calculate the deviation $\Delta S(r)$ \begin{equation} \Delta S(r) = \langle|S_i(r)-\Sfid_i(r=1000)|\rangle \label{eq:convergence} \end{equation} where the $\langle\rangle$ averaging is performed over all curves derived from Eq.~\ref{eq:Npix} for circle radii $a [deg] \in\{3,5,10,20,30,40\}$. \par We assume the working accuracy for $S$ values to be $\Delta S=0.01$ throughout the analysis. 
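The sampling rule of Eq.~(\ref{eq:Npix}), including the rounding to the closest even integer, can be written down directly (a standalone Python illustration using the fitted constants quoted above):

```python
def n_pix(a_deg, r, n_side=256):
    """Number of sampling points along a circle of angular radius a_deg
    (Eq. Npix), rounded to the closest even integer as used in the search."""
    raw = (3.40 * a_deg + 76.85) * (r / 32.0) * (n_side / 256.0)
    return int(2 * round(raw / 2.0))  # closest even integer

# e.g. sampling an 11 degree circle at the resolution parameter r = 18
# used for the 1 degree smoothed data
n = n_pix(11.0, 18)
```

Plugging in the $r$ values selected from the convergence test then fixes the number of points used for every circle size at a given smoothing length.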
This defines the required values of \langed{the} sampling density parameter $r$ (Fig.~\ref{fig:convergence} right panel) and the corresponding number of pixels to be used (Eq.~\ref{eq:Npix}) to achieve the targeted accuracy. For the smoothing lengths $\lambda [deg] \in\{0.5,1,2,4\}$, the required resolution parameter values are $r\in\{26,18,12,8\}$. We use these values throughout the analysis with both the data and the simulations. \subsection{Statistical significance} In this section we discuss our statistical approach for quantifying the confidence intervals. \par Since our simulations model CMB fluctuations in an isotropic, simply-connected Universe, we test the consistency of the WMAP data with the null hypothesis that the CMB is \langed{an arbitrary} realization of the GRF in a simply-connected space. We quantify the degree of consistency via $S$ correlator values obtained from the data, compared with the simulated distributions from $\Nsim = 100$ GRF simulations (Sect.~\ref{sec:data_and_sims}). As the alternative hypothesis we choose the PDS topological model. The inconsistency of the data with the simulations, at high significance level, would then be considered as evidence in favor of the alternative hypothesis (PDS model). \par Since we are interested only in the highest positive $S$ correlations, we build probability distribution functions (PDFs) of $\Smax(a)$, the maximal value of the correlation $S(a)$ found in the matched circle search in the parameter space $(l,b,g,s)$ (Eq.~\ref{eq:param_grid}), using $\Nsim = 100$ simulations\footnote{ Statistically there are some small differences in the $S$ values resulting from probing slightly different angular separations (arising due to different separations of pairs of points, for the same pair of matched circles, when calculated with two different phase twists: $s=0$ and $=\pm 36$), due to the dependence of the $S$ value on the underlying CMB 2-point{} correlation function. 
For such small twists, this is found to be of the same magnitude as the statistical error on the $\Delta S(\approx 0.01)$ for all considered circle radii.}. We probe the underlying PDFs of $\Smax(a)$ at $8$ different values of $a$, i.e. for $a \in \{1,2,5,8,11,14,17,20\}$ \langed{in degrees.} We reconstruct the confidence intervals $[c(a),d(a)]$, for the 68\% and 95\% confidence levels defined by the (cumulative) probability $P$ of finding a GRF, simulated, CMB realization that yields $\Smax_{\mbox{\rm \tiny sim}} > \Smax_{\mbox{\rm \tiny data}}$: \begin{equation} \begin{array}{ccc} P(\Smax_{\mbox{\rm \tiny sim}} > \Smax_{\mbox{\rm \tiny data}})(a) &=& 1 - \int\limits_{c(a)}^{d(a)} f(\Smax,a) d\Smax\\ &=& 1 - \sum\limits_{i=1, \Smax_{\mbox{\rm \tiny sim,i}} \leq \Smax_{\mbox{\rm \tiny data}}}^{i=\Nsim} 1/\Nsim \end{array} \end{equation} where $c(a) = \mbox{min}(\Smax)(a)$ and $f(\Smax,a)$ is the MC probed PDF of the $\Smax$ values. \par We interpolate confidence interval contours for the remaining $a$ values of the parameter space using 4th order polynomial fit. \par In the next section we apply this procedure to the considered WMAP datasets and simulations and present our results. \section{Results} \label{sec:results} \begin{figure*}[!hbt] \renewcommand{\figurename}{Fig} \includegraphics[angle=-90,width=0.48\textwidth]{DODfig4.eps} \includegraphics[angle=-90,width=0.48\textwidth]{DODfig5.eps}\\ \includegraphics[angle=-90,width=0.48\textwidth]{DODfig6.eps} \includegraphics[angle=-90,width=0.48\textwidth]{DODfig7.eps}\\ \includegraphics[angle=-90,width=0.48\textwidth]{DODfig8.eps} \includegraphics[angle=-90,width=0.48\textwidth]{DODfig9.eps} \caption{Results of the search in the parameter space (see Sect.~\protect\ref{sec:parspace}) for the highest $S$ correlations in the first year WMAP ILC map (left column) and three year INC map (right column) smoothed to $\lambda =1^\circ$ (top), $\lambda=2^\circ$ (middle), $\lambda=4^\circ$ (bottom). 
For clarity only the highest 72 $S(a)$ statistic values are plotted for each of the three considered phase shifts: $-36^\circ, 0^\circ, 36^\circ$ marked with red ($+$), green ($\times$) and blue ($*$) respectively, and separated by $0.2^\circ$ offset for better visualization and comparison. The red crosses correspond to the PDS model. The thick solid line in the left column shows the $\Smax$ values for a search in the three-year ILC data with a phase shift of $-36^\circ$. The $68\%$ and $95\%$ confidence level contours from $\Nsim =100$ simulations are over-plotted. Clearly we reproduce the results of \protect\cite{RLCMB04}. Most of the points with the highest $S$ values in the range of $10^\circ\leq a\leq 12^\circ$ closely correspond to the solution depicted in Fig.~\ref{fig:thedodec}. It is easily seen \langed{that much} higher correlation coefficients would have been required in order to significantly reject the null hypothesis that the Universe is simply connected in favor of the PDS model alternative hypothesis.} \label{fig:ILC1INC3} \end{figure*} \langed{In Fig.~\ref{fig:ILC1INC3} we present results of the all-parameter-space search for the WMAP first year ILC map (left panel), and the three year WMAP INC map (right panel).} \par The signal at $\sim 11^\circ$ in Fig.~4 of \cite{RLCMB04} is reproduced and plotted with red crosses in Fig.~\ref{fig:ILC1INC3} (middle-left). \langed{Clearly, it is not necessary to process a large number of simulations to resolve high confidence level contours, since all the datasets are consistent with the simply connected space GRF simulations at a confidence level as low as about 68\% at all smoothing scales.} \par It is easily seen that as the circle size shrinks to zero ($a\leq 2^\circ$), it is difficult to estimate precisely the significance of the detections since the CL contours steepen, while the accuracy of the S value determination is fixed at $\Delta S \sim 0.01$. 
This effect is most severe for large smoothing scales, as expected. \par We note that the correlations $S$ tend to increase with the smoothing length applied to the data. \\ In particular, the signal reported in \cite{RLCMB04} is sensitive to increases in the smoothing length. While at the smoothing length of $\lambda=1^\circ$ there is practically no excess maximum in $\Smax (a \approx 12^\circ)$ for $s=-36^\circ$ relative to \langed{the $\Smax$ values for $s=0^\circ$ and $s=+36^\circ$,} the excess is clearly seen at the smoothing length of $\lambda=4^\circ$, where its significance increases almost up to the 95\% CL. \langed{The results for the three year INC data} are consistent with the first year data, in \langed{the} sense that no statistically important excess correlations are found. \langed{In addition to analysing our two primary datasets, we also carried out the following complementary searches.} \langed{We completed an all-parameter-space search using} the three year WMAP ILC data. We find that the excess correlation corresponding to the hypothesized PDS model is weakened for all smoothing lengths (for $s\in\{0^\circ,+36^\circ\}$), and basically \langed{indistinguishable from the noise of what would be false positive detections if we were to define the 68\% confidence level as a detection threshold}. \langed{For $s=-36^\circ$, we} plot the $\Smax(a)$ values for the three year ILC data with \langed{a} black line in the left column of Fig.~\ref{fig:ILC1INC3}. \par \langed{Our other complementary test was that} we performed a $0.5^\circ$ resolution all parameter space search, using first and three year INC data and did not find extra strong localized correlations. However since the computation time increases with the power of the increased resolution (i.e. 
increasing the resolution by a factor of two increases the calculation time by a factor of $2^{n}$ where $n=4$ is the number of parameters in parameter space) we have not performed the significance analysis with simulations, and therefore we do not present these results. \section{Discussion} \label{sec:discussion} The analysis of the correlations derived from the data and presented in the previous section finds no statistically-significant detections. The cross-correlations of the $S$ values, obtained for different angular radii of the matched circles, were however neglected. It is of course faster to compute confidence intervals for a sparse parameter space and interpolate in between. However, the significance of any detections found this way \langed{(i.e. conditional on the \langed{ {\em a priori} } assumed circle radii)} might be overestimated, compared to the case when all possible correlations were accounted for in the full covariance matrix analysis. In the present work, since we do not find any significant deviations from the null hypothesis (i.e. we do not find any strong outliers in the $S$ correlations) in any individually probed value of the ``$a$'' parameter, we find no need for any further extensions to the significance analysis already pursued. \langed{We note that these cross-correlations are present not just in the data, but also in the Monte-Carlo simulations, so they affect the analyses of simulations and data to the same extent.} Finally, we note that our statistical approach of considering only the maximal $S$ correlation values could be altered to consider the full distributions of $S$ correlations. \langed{However, we} are especially interested in \langed{viable} candidates for non-trivial topology (especially in the proposed correlation signal around angular \langed{circle} radii of $11^\circ$) and as such, the models with the largest $S$ values are the best candidates. 
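The maximum-statistic construction described in Sect.~\ref{sec:statistics} can be summarized in a few lines; in the sketch below (standalone NumPy) `smax_sims` is a hypothetical array standing for the $\Nsim$ per-simulation maxima at one circle radius:

```python
import numpy as np

def empirical_p_value(smax_data, smax_sims):
    """Monte-Carlo estimate of P(Smax_sim > Smax_data) from Nsim simulations."""
    return float(np.mean(np.asarray(smax_sims) > smax_data))

def confidence_threshold(smax_sims, level=0.95):
    """S value that only a fraction (1 - level) of simulations exceed."""
    return float(np.quantile(smax_sims, level))
```

A data value exceeding `confidence_threshold(smax_sims, 0.95)` would thus constitute a detection at the 95\% confidence level; as reported above, the measured maxima never cross even the 68\% contour except in one case.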
Given that there can be only one correct orientation of the fundamental dodecahedron, and hence only one $S$ correlation value corresponding to it (most likely the largest locally found $S$ value), a test involving the full distributions of $S$ correlations would be heavily dominated by numerous values that are not associated with the true topological correlation signal. As a result, the test would mostly measure a degree of consistency between the simulations and data with respect to underlying two-point correlations via the circles on the sky, rather than probe candidates for non-trivial topology. We therefore rely on the statistics of specifically selected (in the full parameter space search) $S$ values, one for a given simulation at a given circle radius, to build our statistics and reconstruct the confidence thresholds. Although we are aware that relying on the distributions of maximal values of random variates may lead to asymmetrical distributions with enhanced tails, we note that in the case of the $S$ statistics, the possible values are by definition restricted to within the range $[-1, 1]$. We also note that it is unnecessary to resolve high confidence level contours, since the data are consistent with simulations mostly to within the $\sim 1\sigma$ confidence contour. \section{Conclusions} \label{sec:conclusions} \par In \cite{RLCMB04} it has been suggested that the shape of the space might be consistent with \langed{the} Poincar\'e dodecahedral space (PDS) model (Fig.~\ref{fig:thedodec}). This suggestion was due to an \langed{excess positive correlation} in the matched circles test \citep{Corn98b} of \langed{the} first year WMAP ILC map; however, the statistical significance of this excess was not specified. We have revisited those results and found a consistent correlation excess corresponding to the same orientation of the fundamental dodecahedron using independent software. 
We extended and updated the matched circles search with the WMAP three year ILC data and the three year foreground reduced, inverse-noise co-added map, and tested these at three different smoothing scales $(1^\circ,2^\circ,4^\circ)$. We performed an analysis of the statistical significance of the reported excess, based on realistic and very conservative MC GRF CMB simulations of the datasets. We find that under ``matched circles'' tests, both the first and three year WMAP data are consistent with the simply-connected topology hypothesis, for all smoothing scales, at a confidence level as low as 68\%, apart from the first year ILC data smoothed to $4^\circ$, which are consistent at 95\% CL. \begin{acknowledgements} The authors would like to thank the anonymous referee for useful comments and suggestions. BL would like to thank Naoshi Sugiyama for his support. BL acknowledges the use of the computing facilities of Nagoya University (Japan), and support from the Monbukagakusho scholarship. \par We acknowledge the use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA). Support for LAMBDA is provided by the NASA Office of Space Science. \par All simulations, map operations and statistics were performed using software written by BL. \end{acknowledgements}
\section{Introduction} Epistemic neural networks (ENNs) were recently introduced as a new framework for modeling uncertainty in deep learning \citep{osband2022epistemic}. ENNs offer an interface for expressing uncertainty due to knowledge, the kind that can be resolved with additional data, as opposed to uncertainty due to chance. The ability to express uncertainty due to knowledge is crucial to intelligence. For example, effective exploration, adaptation, and decision making should rely on an agent knowing what it does not know. Under the ENN framework, the paper introduces a new architecture, called the {\it epinet}, for uncertainty modeling \citep{osband2022epistemic}. An epinet is a relatively small neural network added to a big ``base network'' to produce uncertainty estimates. The base network can have any modern deep learning architecture, and it can even be a pre-trained network. The epinet is designed to be economical in the amount of computation it requires (typically far less than the computation required by the base network), while delivering performance on par with or better than popular Bayesian deep learning approaches. That paper shows that epinets perform well on a range of image classification and reinforcement learning tasks, in either statistical quality or computational requirements, or both, compared to alternative approaches. In particular, the paper shows that neural networks with an epinet can outperform very large ensembles at orders of magnitude lower computational costs. Following these promising results, one natural question to ask about such a network, which is designed to know what it doesn't know, is: \begin{center} {\it Does an epinet offer any statistical or computational benefits in tasks with distributional shifts?} \end{center} While it is possible to design versions of epinets specifically to address distributional shifts, we defer those investigations to future research.
As a first step, we take the epinet trained on ImageNet by \cite{osband2022epistemic} and study its robustness on a set of ImageNet distributional-shift benchmarks, including ImageNet-A, ImageNet-O, and ImageNet-C \citep{hendrycks2021nae, hendrycks2019robustness}. The epinet is trained on top of a base network that is a pre-trained ResNet. We compare the epinet against its base ResNet, as well as an ensemble of ResNets, a popular approach to uncertainty estimation. More specifically, we are interested in knowing whether the epinet improves the robustness of its associated base network on these benchmarks, and whether the epinet is statistically or computationally more efficient than the ensemble approach for handling distributional shifts. In addition to traditional measures of robustness, we also measure the quality of {\it joint predictions} across multiple inputs. A key result in \cite{osband2022epistemic} is that the epinet is able to provide joint predictions of much higher quality than alternative approaches. The paper points out that the quality of joint predictions is a measure of how well the neural network knows what it does not know. For example, in the context of out-of-distribution inputs, consider an agent that is uncertain about the labels associated with these inputs. By looking at the agent's predictive distribution at a single input, one cannot tell whether the agent's uncertainty would resolve if trained on this data point. However, as pointed out in \cite{osband2022epistemic}, by looking at joint predictions across multiple inputs, one can distinguish whether the agent's uncertainty would resolve if trained on these out-of-distribution examples. This knowledge can be particularly useful if the agent plans to gather more data to improve its predictions. Further, as elaborated in \cite{wen2022predictions}, the quality of joint predictions has important relevance to decision making.
Thus, a particularly interesting question to look at is whether the epinet makes better joint predictions than alternative approaches, not only on in-distribution test data as in \cite{osband2022epistemic}, but also on test data with distributional shifts. Here is a summary of our key observations on ImageNet-A/O/C. \begin{enumerate} \item The epinet improves on or performs similarly to the associated ResNet according to traditional robustness metrics. However, it does not completely address these distributional-robustness challenges. This is not surprising, as the current epinet architecture and training are not designed to address such challenges. \item The ensemble approach is not competitive with the epinet in either the statistical qualities or computational costs according to traditional robustness metrics. \item The epinet dramatically outperforms the ResNet and ensemble baselines in joint predictions, at a computational cost not much more than that of the base ResNet. This is similar to the results for in-distribution test data in \cite{osband2022epistemic}. \end{enumerate} Our results point to an important future research direction. Even though the epinet is more robust against distributional shifts compared to the ResNet and ensemble baselines, as it is currently trained, it does not completely address these challenges. We believe that a stronger prior is needed to inform the agent of the possibility of out-of-distribution inputs. This can, for example, take the form of various types of data augmentation, which is a common heuristic in the literature. It is also worthwhile to design and investigate other forms of regularization. Fortunately, the ENN framework can easily accommodate these techniques, while offering the additional benefit of allowing neural networks to express knowledge of what they do and do not know. \section{Experimental setup} We describe the experimental setup for evaluation on ImageNet-A/O/C.
We will discuss the models, datasets, and evaluation metrics. \subsection{Models} We evaluate the ResNet, epinet, and ensemble models from \cite{osband2022epistemic}. The models are trained on the ImageNet dataset \citep{deng2009imagenet}. The ResNet and epinet architectures, together with their checkpoints, are available in the open-source library at \url{https://github.com/deepmind/enn}. For the ResNet model, we consider several standard architectures, including ResNet-$L$ for $L \in \{50, 101, 152, 200\}$. We consider ensembles made from ResNet-$50$ models as ensemble members. We independently train $30$ ResNet-$50$ models and use them to form ensembles of size $1$, $3$, $10$, and $30$. Recall from \cite{osband2022epistemic} that an ENN can be described by a tuple $\left(f_\theta(x, z), P_Z\right)$, where $f$ is a network with parameters $\theta$ that takes input $x$ and an \textit{epistemic index} $z$, and $P_Z$ is a reference distribution from which the epistemic index $z$ is drawn. For a single input $x$, an ENN would assign a probability $\int_z P_Z(dz) \softmax \left(f_{\theta}(x, z)\right)_{y}$ to each class $y$. To make a joint prediction across multiple inputs $(x_1, \dots, x_\tau)$, an ENN would assign a probability \[ \int_z P_Z(dz) \prod_{t=1}^\tau \softmax \left(f_{\theta}(x_t,z)\right)_{y_t} \] to each class combination $(y_1, \dots, y_\tau)$. Note that by introducing dependencies on the epistemic index $z$, ENNs allow for more expressive joint predictions beyond simply the product of marginal predictions. 
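As a toy illustration of these definitions, the marginal and joint predictions of an ENN can be approximated by Monte-Carlo averaging over epistemic indices drawn from the reference distribution. The network \texttt{f} below is a hypothetical stand-in for $f_\theta$ (three classes, a scalar index), not the actual epinet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def f(x, z):
    # Toy stand-in for an ENN f_theta(x, z): logits over 3 classes
    # that depend on both the input x and the epistemic index z.
    return np.array([x.sum(), x.sum() * z, z])

def marginal_prediction(x, num_samples=1000):
    # P(y | x) ~ (1/m) sum_i softmax(f(x, z_i))_y, with z_i ~ P_Z = N(0, 1)
    zs = rng.standard_normal(num_samples)
    return np.mean([softmax(f(x, z)) for z in zs], axis=0)

def joint_prediction(xs, ys, num_samples=1000):
    # P(y_1, ..., y_tau | x_1, ..., x_tau)
    #   ~ (1/m) sum_i prod_t softmax(f(x_t, z_i))_{y_t}
    zs = rng.standard_normal(num_samples)
    return np.mean([
        np.prod([softmax(f(x, z))[y] for x, y in zip(xs, ys)])
        for z in zs
    ])
```

Because the same index $z$ is shared across all inputs inside the product, the joint prediction is in general not the product of the marginals, which is exactly the extra expressiveness noted above.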
Under the ENN framework, the epinet approach can be described as \begin{equation} \label{eq:epinet} f_{\zeta, \eta}(x, z) = \underbrace{\mu_\zeta(x)}_{\textrm{\footnotesize base net}} + \underbrace{\sigma_\eta(\phi_\zeta(x), z)}_{\textrm{\footnotesize epinet}} = \underbrace{\mu_\zeta(x)}_{\textrm{\footnotesize base net}} + \underbrace{\sigma_\eta^L(\phi_\zeta(x), z)}_{\textrm{\footnotesize learnable epinet}} + \underbrace{\sigma^P(\phi_\zeta(x), z)}_{\textrm{\footnotesize prior epinet}}, \end{equation} where $z$ is the epistemic index with a standard Gaussian reference distribution, the base net $\mu_\zeta$ is a pre-trained ResNet with parameters $\zeta$, and $\phi_\zeta(x)$ denotes information from the base net that is passed as input to the epinet. Specifically, $\phi_\zeta(x)$ includes the input image $x$ as well as the last-layer features from the base ResNet. The epinet is composed of a learnable network $\sigma_\eta^L$ with weights $\eta$ and a fixed prior network $\sigma^P$. The learnable network is a variant of an MLP, and the prior network is a combination of an MLP variant and an ensemble of small convolutional networks. The sizes of the learnable epinet and prior epinet are several orders of magnitude smaller than the size of the base ResNet. An epinet is trained for each ResNet-$L$ base network for $L \in \{50, 101, 152, 200\}$. The network $f_{\zeta, \eta}$ outputs logits and is trained using a cross-entropy loss with ridge regularization. The weights $\zeta$ of the base network are frozen during training. More details about the architecture and training can be found in Section 3.2 and Appendix F.1 of \cite{osband2022epistemic}, as well as in the open-source library. To obtain a predictive distribution from the network $f_{\zeta, \eta}$, we sample $1000$ epistemic indices from the reference distribution and average over the corresponding predictions.
\subsection{Datasets} We consider several standard ImageNet robustness benchmarks, including ImageNet-A and ImageNet-O from \cite{hendrycks2021nae}, as well as ImageNet-C from \cite{hendrycks2019robustness}. ImageNet-A is a collection of natural adversarial images selected to trick a ResNet-50 ImageNet classifier into making wrong predictions. ImageNet-O is a collection of images that do not belong to the $1000$ classes that appear in the ImageNet training set. ImageNet-C consists of the ImageNet test images with synthetic corruptions applied. It includes $16$ types of corruption noise and $5$ levels of corruption severity. For ImageNet-A and ImageNet-O, subsets of size $200$ of the $1000$ ImageNet classes are selected so that a misclassification would be considered egregious \citep{hendrycks2021nae}. Following \cite{hendrycks2021nae}, we compare models that make predictions on these subsets of classes. To restrict our benchmark models to predict a subset of classes, we restrict the logits to these classes before taking the softmax. \subsection{Metrics} For ImageNet-A, we evaluate the prediction accuracy, expected calibration error, and two types of log-loss: the marginal log-loss and the joint log-loss. The marginal log-loss is the expected negative log-likelihood of a \textit{single} test example under the model's predictive distribution for that single test input. The joint log-loss is the expected negative log-likelihood of a \textit{batch} of test examples under the model's \textit{joint} predictive distribution, which is over combinations of labels, for the whole batch of inputs. We take the batch size to be $10$, and we apply the dyadic sampling heuristic from \cite{osband2022evaluating} to measure the joint log-loss. For ImageNet-O, the goal is for the model to distinguish out-of-distribution test images from the in-distribution test images taken from the ImageNet test set (restricted to the $200$ classes mentioned above).
We follow \cite{hendrycks2021nae} and measure the area under the precision-recall curve (AUPR). For each image, the anomaly score is defined as the negative of the maximum class probability. The anomaly scores for these in-distribution and out-of-distribution test images are used to compute the AUPR. For ImageNet-C, the labels of the corrupted images are taken to be the same as the original images. We measure the prediction error, expected calibration error, marginal log-loss, and joint log-loss for each combination of corruption type and severity. Following \cite{hendrycks2019robustness}, for the prediction error, we first sum the prediction errors over corruption severities for each corruption type, and then take a weighted average of the sums over all the corruption types, where the weights are the inverse of the summed prediction errors obtained by an AlexNet. The weighted average is referred to as the mean corruption error, or mCE, in \cite{hendrycks2019robustness}. For the calibration error, marginal, and joint log-losses, we take a simple average over all the corrupted datasets. Since we evaluate the model checkpoints and do not re-train any models, the only source of uncertainty in evaluation comes from the random sampling of epistemic indices. We have found that the randomness has a negligible effect on our results, and so, to simplify the figures, we will omit the error bars. On the other hand, how the randomness in model training affects these metrics is left for future research. \section{Results} We present our main results for ImageNet-A, ImageNet-O, and ImageNet-C in Sections~\ref{se:imagenet-a}, \ref{se:imagenet-o}, and \ref{se:imagenet-c}, respectively. These results demonstrate that, on ImageNet-A/O/C, the epinet improves on or attains around the same level of robustness as the associated ResNet baseline according to traditional evaluation metrics involving only marginal predictive distributions.
The ensemble baseline, interestingly, is not competitive with the epinet according to these metrics as we increase the model size for both methods. For joint predictions, the epinet significantly outperforms the ResNet and ensemble baselines. That said, we will see that even though using an epinet helps with robustness in general, it does not fully address the challenges presented by these datasets, which deserve future research. We follow up our investigation of model behaviors in Section~\ref{se:prediction-uncertainty}, where we look at the prediction confidence of these benchmark models on ImageNet-A/O/C. We will see that these models, in fact, demonstrate reasonable levels of prediction confidence on ImageNet-A and ImageNet-C, but all of them are over-confident on ImageNet-O. A widely used heuristic to improve model performance on the ImageNet test set is to re-scale the logits using a tunable temperature parameter post training. \citet{osband2022epistemic} also finds that this heuristic improves model performance on ImageNet. In Section~\ref{se:temperature}, we will look at how applying these temperatures tuned for ImageNet evaluation affects model robustness on ImageNet-A/O/C. We will see that re-scaling model predictions by these temperatures in general does not improve robustness, and, in a few cases, even hurts model robustness by a significant amount. \subsection{ImageNet-A} \label{se:imagenet-a} \begin{figure}[b] \centering \includegraphics[width=\textwidth]{figures/imagenet_a_performance.pdf} \caption{Model performance on ImageNet-A.} \label{fig:imagenet_a} \end{figure} Figure~\ref{fig:imagenet_a} shows the performance of the ResNet, epinet, and ensemble models on ImageNet-A. We show the accuracy, calibration error, marginal log-loss, and joint log-loss of these models as we increase the model size. In each plot, the four ResNet data points correspond to ResNet-$L$ for $L \in \{50, 101, 152, 200\}$. 
The four epinet data points correspond to the four epinets trained on these ResNet base networks, respectively. The four ensemble data points correspond to ensembles of sizes $1$, $3$, $10$, and $30$. We observe that bigger networks in general improve model performance, with the exception that the accuracy of ensembles does not increase with the ensemble size. This could be due to the fact that ImageNet-A is designed to trick a ResNet-$50$ model, and an ensemble of ResNet-$50$ models might still fail to improve accuracy. We see that the epinet achieves similar accuracy as its associated base ResNet. Both models significantly outperform the ensemble in accuracy as the model size increases. The epinet improves the calibration error over the base ResNet. Even though ensembling also improves the calibration error, the improvement is not as much for a fixed model size compared to the epinet. The epinet slightly improves the marginal log-loss over the base ResNet, but a similar improvement can be achieved by increasing the ResNet size. For a fixed model size, the ensemble does not seem to offer any benefit in the marginal log-loss over the other approaches. For the joint log-loss, the epinet outperforms alternatives by a significant margin. This huge gap in performance echoes what \citet{osband2022epistemic} observe with in-distribution evaluation. Overall, the epinet improves the robustness over the base ResNet while requiring only a little additional computation, and it offers a huge advantage in the quality of joint predictions compared to the baseline approaches. However, we see that the accuracy of the epinet model is below $10\%$ -- it is still far from solving the challenge of correctly classifying these adversarial images. That said, this particular epinet model is not designed to address such a challenge. 
We hypothesize that the epinet (or other ENNs), together with its base net, need to be equipped with a stronger prior, either in training or architecture, in order to address this issue, which we defer to future research. \subsection{ImageNet-O} \label{se:imagenet-o} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/imagenet_o_aupr.pdf} \caption{Model performance on ImageNet-O. The dashed line corresponds to a uniformly random classifier.} \label{fig:imagenet_o} \end{figure} Figure~\ref{fig:imagenet_o} presents the results on ImageNet-O. We show the area under the precision-recall curve of the benchmark models as their model sizes increase. We see that all models perform better with a larger model size. The epinet outperforms the ResNet, both of which perform better than the ensemble. Again, we see that even though the epinet improves the robustness of the base ResNet, it is not much better than a uniformly random classifier, shown as the dashed line in Figure~\ref{fig:imagenet_o}. Clearly, additional techniques are needed to fully address the challenge imposed by this dataset. \subsection{ImageNet-C} \label{se:imagenet-c} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/imagenet_c_average_performance.pdf} \caption{Model performance on ImageNet-C averaged over tasks. The mean weighted corruption error (mCE) is the average test error weighted by the errors of an AlexNet. For the other metrics, the values are a simple average across all tasks.} \label{fig:imagenet_c} \end{figure} Figure~\ref{fig:imagenet_c} presents the main results on the ImageNet-C dataset. The figure shows the mean weighted corruption error (mCE), expected calibration error, marginal log-loss, and joint log-loss as a function of the model size. Each data point is averaged over the corruption types and corruption severities. The epinet performs similarly to the ResNet in mCE. 
Both methods get better mCE with an increasing model size, and both methods improve the metric more so than the ensemble as the model size increases. The epinet is slightly better than the ResNet at the expected calibration error. Interestingly, the calibration error of the ensemble model degrades as the ensemble size increases. This echoes \citet{rahaman2021ensemble}, who observe a similar phenomenon and point out that the degradation goes away when the logits are scaled by a cold temperature. We make an analogous observation in Section~\ref{se:temperature} where we investigate temperature re-scaling. For the marginal log-loss, the epinet is no better than its base ResNet. Increasing the ensemble size improves the marginal log-loss, but we can obtain a larger improvement by increasing the model size of the ResNet and epinet models. Similar to ImageNet-A, the epinet dramatically outperforms the ResNet and ensemble baselines in the joint log-loss. We see here that the epinet attains around the same level of robustness as its base ResNet on ImageNet-C according to mCE, calibration error, and marginal log-loss. Ensembling, interestingly, is not a competitive approach according to these metrics. For joint predictions, the epinet demonstrates a significant advantage over the other agents, similar to what we see in ImageNet-A (Section~\ref{se:imagenet-a}) and in-distribution evaluation \citep{osband2022epistemic}. \subsection{Prediction uncertainty} \label{se:prediction-uncertainty} We dive deeper into model behaviors by looking at the uncertainty of model predictions on these datasets. For a specific model, we define the confidence score as the average probability assigned to the predicted labels. Note that the confidence score relies only on a model's marginal predictions. Figure~\ref{fig:imagenet_a_confidence} shows the benchmark models' confidence scores on ImageNet-A.
Interestingly, the confidence scores are actually not very high, ranging from $34\%$ to $42\%$. Unlike the examples shown in \cite{hendrycks2021nae}, for which a ResNet-50 model makes the wrong predictions with $99\%$ certainty, here we see that all of our models are actually on average ambiguous about the correct classes. In Figure~\ref{fig:imagenet_a_failure_rate}, we define the failure rate as the percentage of ImageNet-A examples for which the model makes the wrong prediction with over $95\%$ certainty. We see that the failure rate is below $5\%$ for all of the models. The epinet has a lower failure rate than the ResNet. Even though the failure rate of the ensemble is the lowest, recall from Figure~\ref{fig:imagenet_a} that the accuracy of the ensemble is a lot lower than the other two approaches. \begin{figure} \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_a_confidence.pdf} \caption{Model prediction confidence, which is the average probability assigned to predicted labels, on ImageNet-A.} \label{fig:imagenet_a_confidence} \end{subfigure} ~ \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_a_failure_rate.pdf} \caption{Percentage of ImageNet-A test examples for which the model predicts incorrectly but with over $95\%$ certainty.} \label{fig:imagenet_a_failure_rate} \end{subfigure} \caption{ImageNet-A prediction uncertainty and failure rate.} \label{fig:imagenet_a_uncertainty} \end{figure} Figure~\ref{fig:imagenet_o_uncertainty} shows the confidence scores of the benchmark models on the out-of-distribution ImageNet-O dataset and the in-distribution ImageNet dataset (restricted to the $200$ selected classes). Ideally, we would want the model to assign low confidence to out-of-distribution inputs and high confidence to in-distribution inputs. We see from Figure~\ref{fig:imagenet_o_uncertainty} that, unfortunately, all of the models fail to do this. 
For the ResNet and ensemble models, the out-of-distribution confidence scores are in fact higher than the in-distribution scores. The epinet seems to behave slightly better than the baselines, but still it can hardly distinguish out-of-distribution from in-distribution inputs according to these confidence scores. It seems disappointing that these models give such over-confident predictions on ImageNet-O, even though they seem to demonstrate reasonable levels of confidence on ImageNet-A. One hypothesis is that since the images in ImageNet-A actually belong to the classes present in the training set, these models might to some degree recognize features relevant to the correct classes, even though there might be other distracting features, leading to moderate confidence overall. In contrast, the images in ImageNet-O do not belong to any of the training classes, and, as such, it might be more likely that certain features dominate the predictions. This might be an interesting hypothesis to investigate in the future. Another interesting observation is that, for the ResNet and epinet models, the in-distribution confidence scores increase with the model size, while the out-of-distribution confidence scores decrease with the model size. It appears that larger models improve generalization and robustness in this case. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{figures/imagenet_o_confidence.pdf} \caption{Prediction confidence of benchmark models on ImageNet test examples (in-distribution) and ImageNet-O test examples (out-of-distribution).} \label{fig:imagenet_o_uncertainty} \end{figure} Figure~\ref{fig:imagenet_c_uncertainty} shows the average confidence scores of benchmark models on ImageNet-C as the level of corruptions applied to the test images becomes more severe. For each value of corruption severity, we average the confidence scores across all types of corruption noise.
We see that all models appear reasonable in that, as the corruption noise becomes larger, the confidence scores become lower. It is interesting that none of these models are explicitly exposed to any of these corruption noises in their training, and yet they demonstrate a monotonic decrease in their prediction confidence as the corruption noise grows larger. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/imagenet_c_confidence.pdf} \caption{Confidence scores of benchmark models as a function of corruption severity on ImageNet-C.} \label{fig:imagenet_c_uncertainty} \end{figure} Overall, the benchmark models seem to demonstrate reasonable levels of prediction confidence on ImageNet-A and ImageNet-C, but all of them are extremely over-confident on ImageNet-O. More research is needed to investigate what kind of prior is needed to curb the over-confidence. \subsection{Temperature re-scaling} \label{se:temperature} A common heuristic to improve model performance during evaluation is to re-scale the logits post training using a tunable ``temperature'' parameter. The heuristic is also applied to the ResNet, epinet, and ensemble models in \cite{osband2022epistemic}. In this section, we take these temperature re-scaled models, whose temperatures were tuned on the ImageNet dataset, and evaluate them on ImageNet-A/O/C. Note that we do not consider having access to validation sets containing ImageNet-A/O/C samples, which poses a slightly different problem that deserves future investigation. Here we are interested in whether or not models equipped with temperature re-scaling optimized for the ImageNet dataset are robust on ImageNet-A/O/C. In Figure~\ref{fig:temperature}, we show the ratio of model performance with and without temperature re-scaling on ImageNet-A/O/C.
For the expected calibration error, marginal log-loss, and joint log-loss, a ratio above $1$ means that re-scaling by the temperature tuned for ImageNet evaluation hurts performance on the corresponding test set. For AUPR, a ratio above $1$ indicates that the performance improves with temperature re-scaling. Temperature re-scaling should not affect accuracy and mCE, for which we should observe a ratio close to $1$. We see in Figure~\ref{fig:imagenet_a_temperature} that re-scaling by temperatures optimized for ImageNet hurts the performance of all models on ImageNet-A. Figure~\ref{fig:imagenet_o_temperature} shows that AUPR improves slightly with temperature re-scaling, but the increase is marginal (less than $2\%$ difference). Figure~\ref{fig:imagenet_c_temperature} shows that temperature re-scaling worsens the performance of the ResNet and epinet on ImageNet-C. However, for the ensemble models, temperature re-scaling helps ensembles of size greater than $3$. According to \cite{osband2022epistemic}, the tuned temperature for the ensemble models is less than $1$. This seems consistent with the observations made in \cite{rahaman2021ensemble} that the calibration error of ensembles is sensitive to temperature re-scaling. The marginal and joint log-losses are also slightly better with temperature re-scaling for ensembles of size more than $3$, but the differences are small. Overall, it seems that re-scaling the trained models by temperatures optimized for the ImageNet dataset in general does not offer much benefit on these robustness datasets, and, in several cases, it even hurts model robustness by a significant amount. 
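Temperature re-scaling itself is a one-line operation: divide the logits by a scalar $T$ before the softmax, so that $T<1$ sharpens the predictive distribution and $T>1$ softens it. A minimal sketch (the numeric values are illustrative, not the tuned temperatures):

```python
import numpy as np

def temperature_softmax(logits, temperature=1.0):
    # Divide logits by T before the softmax; T is tuned post training
    # (in this paper's setting, on the ImageNet validation set).
    scaled = np.asarray(logits, dtype=float) / temperature
    e = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([2.0, 1.0, 0.0])
p_cold = temperature_softmax(logits, temperature=0.5)  # sharper
p_warm = temperature_softmax(logits, temperature=2.0)  # softer
```

Since the re-scaling is monotone, it leaves the argmax, and hence accuracy and mCE, unchanged, while the calibration error and log-losses can change substantially; this is why only the accuracy and mCE ratios are expected to sit at $1$.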
\begin{figure}[t] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_a_temperature_rescaling.pdf} \caption{ImageNet-A} \label{fig:imagenet_a_temperature} \end{subfigure} \vspace{0.5cm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.4\textwidth]{figures/imagenet_o_temperature_rescaling.pdf} \caption{ImageNet-O} \label{fig:imagenet_o_temperature} \end{subfigure} \vspace{0.5cm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_c_temperature_rescaling.pdf} \caption{ImageNet-C} \label{fig:imagenet_c_temperature} \end{subfigure} \caption{Comparison of model performance on ImageNet-A/O/C with and without logit re-scaling using the temperatures optimized for the ImageNet evaluation set.} \label{fig:temperature} \end{figure} \section{Conclusions} We investigated the robustness of epinets against distributional shifts by taking the epinet model trained on ImageNet \citep{osband2022epistemic} and evaluating it on ImageNet-A, ImageNet-O, and ImageNet-C. We compared its performance with that of ResNets, which the epinet uses as its base network, and ensembles of ResNets. We found that the epinet in general improves on or attains a similar level of robustness as its base ResNet according to traditional robustness metrics. The ensemble approach is not competitive with the epinet according to these metrics. For joint predictions, the epinet outperforms the alternatives by a huge margin. Despite these improvements attained by the epinet, it is still far from addressing the robustness challenges imposed by these datasets. One possible future research direction is to investigate what kinds of priors can effectively inform the epinet, or other ENNs, of the possibility of inputs from a shifted distribution. The benchmark models in this paper all started from a relatively uninformed prior, which we have seen does not fare well in these robustness tasks.
One example of producing a stronger prior effect is through data augmentation, which is a common heuristic in the robustness literature. The ENN framework can easily incorporate this technique through a change in the training loss function. Compared to traditional models that apply data augmentation, the ENN approach can offer the additional benefit of improving joint predictions across multiple inputs. Another possibility for epinets is to regularize through augmentation in the feature space rather than in the input image space. More research is needed to design and understand different kinds of priors in order to improve model robustness against distributional shifts. \bibliographystyle{apalike}
That paper shows that epinets perform well on a range of image classification and reinforcement learning tasks, in statistical quality, computational requirements, or both, compared to alternative approaches. In particular, the paper shows that neural networks with an epinet can outperform very large ensembles at orders of magnitude lower computational costs. Following these promising results, one natural question to ask of such a network, which is designed to know what it doesn't know, is: \begin{center} {\it Does an epinet offer any statistical or computational benefits in tasks with distributional shifts?} \end{center} While it is possible to design versions of epinets specifically to address distributional shifts, we defer those investigations to future research. As a first step, we take the epinet trained on ImageNet by \cite{osband2022epistemic} and study its robustness on a set of ImageNet distributional-shift benchmarks, including ImageNet-A, ImageNet-O, and ImageNet-C \citep{hendrycks2021nae, hendrycks2019robustness}. The epinet is trained on top of a base network that is a pre-trained ResNet. We compare the epinet against its base ResNet, as well as an ensemble of ResNets, a popular approach to uncertainty estimation. More specifically, we are interested in knowing whether the epinet improves the robustness of its associated base network on these benchmarks, and whether the epinet is statistically or computationally more efficient than the ensemble approach for handling distributional shifts. In addition to traditional measures of robustness, we also measure the quality of {\it joint predictions} across multiple inputs. A key result in \cite{osband2022epistemic} is that the epinet is able to provide joint predictions of much higher quality than alternative approaches. The paper points out that the quality of joint predictions is a measure of how well the neural network knows that it does not know.
For example, in the context of out-of-distribution inputs, consider an agent that is uncertain about the labels associated with these inputs. By looking at the agent's predictive distribution at a single input, one cannot tell whether the agent's uncertainty would resolve if trained on this data point. However, as pointed out in \cite{osband2022epistemic}, by looking at joint predictions across multiple inputs, one can distinguish whether the agent's uncertainty would resolve if trained on these out-of-distribution examples. This knowledge can be particularly useful if the agent plans to gather more data to improve its predictions. Further, as elaborated in \cite{wen2022predictions}, the quality of joint predictions has important relevance to decision making. Thus, a particularly interesting question to look at is whether the epinet makes better joint predictions than alternative approaches, not only on in-distribution test data, as in \cite{osband2022epistemic}, but also on test data with distributional shifts. Here is a summary of our key observations on ImageNet-A/O/C. \begin{enumerate} \item The epinet improves on or performs similarly to the associated ResNet according to traditional robustness metrics. However, it does not completely address these distributional-robustness challenges. This is not surprising, as the current epinet architecture and training are not designed to address such challenges. \item The ensemble approach is not competitive with the epinet in either statistical quality or computational cost according to traditional robustness metrics. \item The epinet dramatically outperforms the ResNet and ensemble baselines in joint predictions, at a computational cost not much more than that of the base ResNet. This is similar to the results for in-distribution test data in \cite{osband2022epistemic}. \end{enumerate} Our results point to an important future research direction.
Even though the epinet, as it is currently trained, is more robust against distributional shifts than the ResNet and ensemble baselines, it does not completely address these challenges. We believe that a stronger prior is needed to inform the agent of the possibility of out-of-distribution inputs. This can, for example, take the form of various types of data augmentation, which is a common heuristic in the literature. It is also worthwhile to design and investigate other forms of regularization. Fortunately, the ENN framework can easily accommodate these techniques, while offering the additional benefit of allowing neural networks to express knowledge of what they do and do not know. \section{Experimental setup} We describe the experimental setup for evaluation on ImageNet-A/O/C. We will discuss the models, datasets, and evaluation metrics. \subsection{Models} We evaluate the ResNet, epinet, and ensemble models from \cite{osband2022epistemic}. The models are trained on the ImageNet dataset \citep{deng2009imagenet}. The ResNet and epinet architectures, together with their checkpoints, are available in the open-source library at \url{https://github.com/deepmind/enn}. For the ResNet model, we consider several standard architectures, including ResNet-$L$ for $L \in \{50, 101, 152, 200\}$. We consider ensembles whose members are ResNet-$50$ models. We independently train $30$ ResNet-$50$ models and use them to form ensembles of size $1$, $3$, $10$, and $30$. Recall from \cite{osband2022epistemic} that an ENN can be described by a tuple $\left(f_\theta(x, z), P_Z\right)$, where $f$ is a network with parameters $\theta$ that takes input $x$ and an \textit{epistemic index} $z$, and $P_Z$ is a reference distribution from which the epistemic index $z$ is drawn. For a single input $x$, an ENN would assign a probability $\int_z P_Z(dz) \softmax \left(f_{\theta}(x, z)\right)_{y}$ to each class $y$.
To make a joint prediction across multiple inputs $(x_1, \dots, x_\tau)$, an ENN would assign a probability \[ \int_z P_Z(dz) \prod_{t=1}^\tau \softmax \left(f_{\theta}(x_t,z)\right)_{y_t} \] to each class combination $(y_1, \dots, y_\tau)$. Note that by introducing dependencies on the epistemic index $z$, ENNs allow for more expressive joint predictions beyond simply the product of marginal predictions. Under the ENN framework, the epinet approach can be described as \begin{equation} \label{eq:epinet} f_{\zeta, \eta}(x, z) = \underbrace{\mu_\zeta(x)}_{\textrm{\footnotesize base net}} + \underbrace{\sigma_\eta(\phi_\zeta(x), z)}_{\textrm{\footnotesize epinet}} = \underbrace{\mu_\zeta(x)}_{\textrm{\footnotesize base net}} + \underbrace{\sigma_\eta^L(\phi_\zeta(x), z)}_{\textrm{\footnotesize learnable epinet}} + \underbrace{\sigma^P(\phi_\zeta(x), z)}_{\textrm{\footnotesize prior epinet}}, \end{equation} where $z$ is the epistemic index with a standard Gaussian reference distribution, the base net $\mu_\zeta$ is a pre-trained ResNet with parameters $\zeta$, and $\phi_\zeta(x)$ denotes information from the base net that is passed as input to the epinet. Specifically, $\phi_\zeta(x)$ includes the input image $x$ as well as the last-layer features from the base ResNet. The epinet is composed of a learnable network $\sigma_\eta^L$ with weights $\eta$ and a fixed prior network $\sigma^P$. The learnable network is a variant of an MLP, and the prior network is a combination of an MLP variant and an ensemble of small convolutional networks. The sizes of the learnable epinet and prior epinet are several orders of magnitude smaller than the size of the base ResNet. An epinet is trained for each ResNet-$L$ base network for $L \in \{50, 101, 152, 200\}$. The network $f_{\zeta, \eta}$ outputs logits and is trained using a cross-entropy loss with ridge regularization. The weights $\zeta$ of the base network are frozen during training.
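These two predictive distributions can be approximated by Monte Carlo averages over sampled epistemic indices. The following sketch is our own illustration (the function and variable names are not from the open-source library); it computes marginal and joint predictive probabilities from an array of per-index logits:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def enn_marginal(logits_per_index):
    # Marginal prediction for one input: average the class
    # probabilities over sampled epistemic indices.
    # logits_per_index: (num_indices, num_classes)
    return softmax(logits_per_index).mean(axis=0)

def enn_joint_prob(logits_per_index, labels):
    # Joint probability of a label combination (y_1, ..., y_tau):
    # average over indices of the product of per-input probabilities.
    # logits_per_index: (num_indices, tau, num_classes)
    probs = softmax(logits_per_index)                  # (Z, tau, C)
    picked = probs[:, np.arange(len(labels)), labels]  # (Z, tau)
    return picked.prod(axis=1).mean()
```

Because the product over inputs is taken inside the average over indices, the joint prediction is generally not the product of the marginals, which is what lets an ENN express dependencies across inputs.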
More details about the architecture and training can be found in Section 3.2 and Appendix F.1 of \cite{osband2022epistemic}, as well as in the open-source library. To obtain a predictive distribution from the network $f_{\zeta, \eta}$, we sample $1000$ epistemic indices from the reference distribution and average over the corresponding predictions. \subsection{Datasets} We consider several standard ImageNet robustness benchmarks, including ImageNet-A and ImageNet-O from \cite{hendrycks2021nae}, as well as ImageNet-C from \cite{hendrycks2019robustness}. ImageNet-A is a collection of adversarial images selected to trick a ResNet-50 ImageNet classifier into making wrong predictions. ImageNet-O is a collection of images that do not belong to the $1000$ classes that appear in the ImageNet training set. ImageNet-C consists of synthetic corruptions applied to the ImageNet test images. It includes $16$ types of corruption noise and $5$ levels of corruption severity. For ImageNet-A and ImageNet-O, subsets of size $200$ from the $1000$ ImageNet classes are selected so that a misclassification would be considered egregious \citep{hendrycks2021nae}. Following \cite{hendrycks2021nae}, we compare models that make predictions on these subsets of classes. To restrict our benchmark models to predict a subset of classes, we restrict the logits to these classes before taking the softmax. \subsection{Metrics} For ImageNet-A, we evaluate the prediction accuracy, expected calibration error, and two types of log-loss: the marginal log-loss and the joint log-loss. The marginal log-loss is the expected negative log-likelihood of a \textit{single} test example under the model's predictive distribution for the single test input. The joint log-loss is the expected negative log-likelihood of a \textit{batch} of test examples under the model's \textit{joint} predictive distribution, which is over combinations of labels, for the whole batch of inputs.
We take the batch size to be $10$, and we apply the dyadic sampling heuristic from \cite{osband2022evaluating} to measure the joint log-loss. For ImageNet-O, the goal is for the model to distinguish out-of-distribution test images from the in-distribution test images taken from the ImageNet test set (restricted to the $200$ classes mentioned above). We follow \cite{hendrycks2021nae} and measure the area under the precision-recall curve (AUPR). For each image, the anomaly score is defined as the negative of the maximum class probability. The anomaly scores for these in-distribution and out-of-distribution test images are used to compute the AUPR. For ImageNet-C, the labels of the corrupted images are taken to be the same as those of the original images. We measure the prediction error, expected calibration error, marginal log-loss, and joint log-loss for each combination of corruption type and severity. Following \cite{hendrycks2019robustness}, for the prediction error, we first sum the prediction errors over corruption severities for each corruption type, and then take a weighted average of the sums over all the corruption types, where the weights are the inverse of the summed prediction errors obtained by an AlexNet. The weighted average is referred to as the mean corruption error, or mCE, in \cite{hendrycks2019robustness}. For the calibration error, marginal, and joint log-losses, we take a simple average over all the corrupted datasets. Since we evaluate the model checkpoints and do not re-train any models, the only source of uncertainty in evaluation comes from the random sampling of epistemic indices. We have found that the randomness has a negligible effect on our results, and so, to simplify the figures, we will omit the error bars. On the other hand, how the randomness in model training affects these metrics is left for future research.
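Approximating the expectation over epistemic indices by an average over sampled indices, the marginal and joint log-losses can be sketched as below. This is a simplified illustration in our own notation; it omits the dyadic sampling heuristic used in the actual evaluation:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def marginal_log_loss(logits, labels):
    # Average negative log-likelihood of single examples under the
    # index-averaged (marginal) predictive distribution.
    # logits: (num_indices, num_examples, num_classes)
    probs = softmax(logits).mean(axis=0)  # (N, C)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

def joint_log_loss(logits, labels):
    # Negative log-likelihood of the whole batch under the joint
    # predictive distribution over label combinations.
    probs = softmax(logits)                            # (Z, N, C)
    picked = probs[:, np.arange(len(labels)), labels]  # (Z, N)
    log_joint_per_index = np.log(picked).sum(axis=1)   # (Z,)
    # log of the average over indices, computed in log space
    return -(np.logaddexp.reduce(log_joint_per_index) - np.log(len(logits)))
```

When the logits do not depend on the index, the joint log-loss reduces to the sum of the per-example marginal log-losses; index-dependent predictions are what allow a model to improve on this product-of-marginals baseline.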
\section{Results} We present our main results for ImageNet-A, ImageNet-O, and ImageNet-C in Sections~\ref{se:imagenet-a}, \ref{se:imagenet-o}, and \ref{se:imagenet-c}, respectively. These results demonstrate that, on ImageNet-A/O/C, the epinet improves on, or attains around the same level of robustness as, the associated ResNet baseline according to traditional evaluation metrics involving only marginal predictive distributions. The ensemble baseline, interestingly, is not competitive with the epinet according to these metrics as we increase the model size for both methods. For joint predictions, the epinet significantly outperforms the ResNet and ensemble baselines. That said, we will see that even though using an epinet helps with robustness in general, it does not fully address the challenges presented by these datasets, which deserve future research. We follow up with an investigation of model behavior in Section~\ref{se:prediction-uncertainty}, where we look at the prediction confidence of these benchmark models on ImageNet-A/O/C. We will see that these models, in fact, demonstrate reasonable levels of prediction confidence on ImageNet-A and ImageNet-C, but all of them are over-confident on ImageNet-O. A widely used heuristic to improve model performance on the ImageNet test set is to re-scale the logits using a tunable temperature parameter post training. \citet{osband2022epistemic} also find that this heuristic improves model performance on ImageNet. In Section~\ref{se:temperature}, we will look at how applying these temperatures tuned for ImageNet evaluation affects model robustness on ImageNet-A/O/C. We will see that re-scaling model predictions by these temperatures in general does not improve robustness, and, in a few cases, even hurts model robustness by a significant amount.
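The re-scaling heuristic itself amounts to dividing the logits by a tuned scalar temperature before taking the softmax. A minimal sketch (generic arrays, our own names), which also shows why accuracy and mCE are invariant to the temperature while calibration and log-losses are not:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_rescale(logits, temperature):
    # Divide logits by a scalar temperature before the softmax.
    # T < 1 sharpens the predictive distribution, T > 1 flattens it;
    # the argmax (and hence accuracy and mCE) is unchanged.
    return softmax(logits / temperature)
```

Since dividing by a positive scalar is monotone, the predicted class is the same at any temperature; only the probability mass assigned to it changes.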
\subsection{ImageNet-A} \label{se:imagenet-a} \begin{figure}[b] \centering \includegraphics[width=\textwidth]{figures/imagenet_a_performance.pdf} \caption{Model performance on ImageNet-A.} \label{fig:imagenet_a} \end{figure} Figure~\ref{fig:imagenet_a} shows the performance of the ResNet, epinet, and ensemble models on ImageNet-A. We show the accuracy, calibration error, marginal log-loss, and joint log-loss of these models as we increase the model size. In each plot, the four ResNet data points correspond to ResNet-$L$ for $L \in \{50, 101, 152, 200\}$. The four epinet data points correspond to the four epinets trained on these ResNet base networks, respectively. The four ensemble data points correspond to ensembles of sizes $1$, $3$, $10$, and $30$. We observe that bigger networks in general improve model performance, with the exception that the accuracy of ensembles does not increase with the ensemble size. This could be due to the fact that ImageNet-A is designed to trick a ResNet-$50$ model, and an ensemble of ResNet-$50$ models might still fail to improve accuracy. We see that the epinet achieves similar accuracy to its associated base ResNet. Both models significantly outperform the ensemble in accuracy as the model size increases. The epinet improves the calibration error over the base ResNet. Even though ensembling also improves the calibration error, the improvement at a fixed model size is not as large as the epinet's. The epinet slightly improves the marginal log-loss over the base ResNet, but a similar improvement can be achieved by increasing the ResNet size. For a fixed model size, the ensemble does not seem to offer any benefit in the marginal log-loss over the other approaches. For the joint log-loss, the epinet outperforms alternatives by a significant margin. This huge gap in performance echoes what \citet{osband2022epistemic} observe with in-distribution evaluation.
Overall, the epinet improves robustness over the base ResNet while requiring only a little additional computation, and it offers a huge advantage in the quality of joint predictions compared to the baseline approaches. However, we see that the accuracy of the epinet model is below $10\%$ -- it is still far from solving the challenge of correctly classifying these adversarial images. That said, this particular epinet model is not designed to address such a challenge. We hypothesize that the epinet (or other ENNs), together with its base net, needs to be equipped with a stronger prior, either in training or architecture, in order to address this issue, which we defer to future research. \subsection{ImageNet-O} \label{se:imagenet-o} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figures/imagenet_o_aupr.pdf} \caption{Model performance on ImageNet-O. The dashed line corresponds to a uniformly random classifier.} \label{fig:imagenet_o} \end{figure} Figure~\ref{fig:imagenet_o} presents the results on ImageNet-O. We show the area under the precision-recall curve of the benchmark models as their model sizes increase. We see that all models perform better with a larger model size. The epinet outperforms the ResNet, both of which perform better than the ensemble. Again, we see that even though the epinet improves the robustness of the base ResNet, it is not much better than a uniformly random classifier, shown as the dashed line in Figure~\ref{fig:imagenet_o}. Clearly, additional techniques are needed to fully address the challenge imposed by this dataset. \subsection{ImageNet-C} \label{se:imagenet-c} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/imagenet_c_average_performance.pdf} \caption{Model performance on ImageNet-C averaged over tasks. The mean weighted corruption error (mCE) is the average test error weighted by the errors of an AlexNet.
For the other metrics, the values are a simple average across all tasks.} \label{fig:imagenet_c} \end{figure} Figure~\ref{fig:imagenet_c} presents the main results on the ImageNet-C dataset. The figure shows the mean weighted corruption error (mCE), expected calibration error, marginal log-loss, and joint log-loss as a function of the model size. Each data point is averaged over the corruption types and corruption severities. The epinet performs similarly to the ResNet in mCE. Both methods get better mCE with an increasing model size, and both improve the metric more than the ensemble does as the model size increases. The epinet is slightly better than the ResNet at the expected calibration error. Interestingly, the calibration error of the ensemble model degrades as the ensemble size increases. This echoes \citet{rahaman2021ensemble}, who observe a similar phenomenon and point out that the degradation goes away when the logits are scaled by a cold temperature. We make an analogous observation in Section~\ref{se:temperature} where we investigate temperature re-scaling. For the marginal log-loss, the epinet is no better than its base ResNet. Increasing the ensemble size improves the marginal log-loss, but we can obtain a larger improvement by increasing the model size of the ResNet and epinet models. Similar to ImageNet-A, the epinet dramatically outperforms the ResNet and ensemble baselines in the joint log-loss. We see here that the epinet attains around the same level of robustness as its base ResNet on ImageNet-C according to mCE, calibration error, and marginal log-loss. Ensembling, interestingly, is not a competitive approach according to these metrics. For joint predictions, the epinet demonstrates a significant advantage over the other agents, similar to what we see in ImageNet-A (Section~\ref{se:imagenet-a}) and in-distribution evaluation \citep{osband2022epistemic}.
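The mCE aggregation behind Figure~\ref{fig:imagenet_c} can be sketched as follows; this is our own illustration, and the error arrays are hypothetical placeholders rather than actual AlexNet numbers:

```python
import numpy as np

def mean_corruption_error(model_err, alexnet_err):
    # mCE as in Hendrycks & Dietterich (2019): for each corruption
    # type, sum the error over severities, divide by AlexNet's summed
    # error for that type, then average the ratios over types.
    # model_err, alexnet_err: (num_types, num_severities)
    ce_per_type = model_err.sum(axis=1) / alexnet_err.sum(axis=1)
    return ce_per_type.mean()
```

By construction, AlexNet itself has an mCE of 1, and lower values are better.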
\subsection{Prediction uncertainty} \label{se:prediction-uncertainty} We dive deeper into model behaviors by looking at the uncertainty of model predictions on these datasets. For a specific model, we define the confidence score as the average probability assigned to the predicted labels. Note that the confidence score relies only on a model's marginal predictions. Figure~\ref{fig:imagenet_a_confidence} shows the benchmark models' confidence scores on ImageNet-A. Interestingly, the confidence scores are actually not very high, ranging from $34\%$ to $42\%$. Unlike the examples shown in \cite{hendrycks2021nae}, for which a ResNet-50 model makes the wrong predictions with $99\%$ certainty, here we see that all of our models are actually on average ambiguous about the correct classes. In Figure~\ref{fig:imagenet_a_failure_rate}, we define the failure rate as the percentage of ImageNet-A examples for which the model makes the wrong prediction with over $95\%$ certainty. We see that the failure rate is below $5\%$ for all of the models. The epinet has a lower failure rate than the ResNet. Even though the failure rate of the ensemble is the lowest, recall from Figure~\ref{fig:imagenet_a} that the accuracy of the ensemble is a lot lower than the other two approaches. 
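The confidence score and failure rate used here can be computed directly from the marginal predictive probabilities; a minimal sketch in our own notation:

```python
import numpy as np

def confidence_score(probs):
    # Average probability assigned to the predicted (argmax) label.
    # probs: (num_examples, num_classes)
    return probs.max(axis=1).mean()

def failure_rate(probs, labels, threshold=0.95):
    # Fraction of examples predicted incorrectly with confidence
    # above the threshold.
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    return np.mean((preds != labels) & (conf > threshold))
```

Both quantities depend only on marginal predictions, so they apply equally to the ResNet, ensemble, and epinet models.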
\begin{figure} \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_a_confidence.pdf} \caption{Model prediction confidence, which is the average probability assigned to predicted labels, on ImageNet-A.} \label{fig:imagenet_a_confidence} \end{subfigure} ~ \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_a_failure_rate.pdf} \caption{Percentage of ImageNet-A test examples for which the model predicts incorrectly but with over $95\%$ certainty.} \label{fig:imagenet_a_failure_rate} \end{subfigure} \caption{ImageNet-A prediction uncertainty and failure rate.} \label{fig:imagenet_a_uncertainty} \end{figure} Figure~\ref{fig:imagenet_o_uncertainty} shows the confidence scores of the benchmark models on the out-of-distribution ImageNet-O dataset and the in-distribution ImageNet dataset (restricted to the $200$ selected classes). Ideally, we would want the model to assign low confidence to out-of-distribution inputs and high confidence to in-distribution inputs. We see from Figure~\ref{fig:imagenet_o_uncertainty} that, unfortunately, all of the models fail to do this. For the ResNet and ensemble models, the out-of-distribution confidence scores are in fact higher than the in-distribution scores. The epinet seems to behave slightly better than the baselines, but it still can hardly distinguish out-of-distribution from in-distribution inputs according to these confidence scores. It seems disappointing that these models give such over-confident predictions on ImageNet-O, even though they seem to demonstrate reasonable levels of confidence on ImageNet-A. One hypothesis is that since the images in ImageNet-A actually belong to the classes present in the training set, these models might to some degree recognize features relevant to the correct classes, even though there might be other distracting features, leading to moderate confidence overall.
In contrast, the images in ImageNet-O do not belong to any of the training classes, and, as such, it might be more likely that certain features dominate the predictions. This might be an interesting hypothesis to investigate in the future. Another interesting observation is that, for the ResNet and epinet models, the in-distribution confidence scores increase with the model size, while the out-of-distribution confidence scores decrease with the model size. It appears that larger models improve generalization and robustness in this case. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{figures/imagenet_o_confidence.pdf} \caption{Prediction confidence of benchmark models on ImageNet test examples (in-distribution) and ImageNet-O test examples (out-of-distribution).} \label{fig:imagenet_o_uncertainty} \end{figure} Figure~\ref{fig:imagenet_c_uncertainty} shows the average confidence scores of benchmark models on ImageNet-C as the level of corruptions applied to the test images becomes more severe. For each value of corruption severity, we average the confidence scores across all types of corruption noise. We see that all models appear reasonable in that, as the corruption noise becomes larger, the confidence scores become lower. It is interesting that none of these models are explicitly exposed to any of these corruption noises in their training, and yet they demonstrate a monotonic decrease in their prediction confidence as the corruption noise grows larger. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/imagenet_c_confidence.pdf} \caption{Confidence scores of benchmark models as a function of corruption severity on ImageNet-C.} \label{fig:imagenet_c_uncertainty} \end{figure} Overall, the benchmark models seem to demonstrate reasonable levels of prediction confidence on ImageNet-A and ImageNet-C, but all of them are extremely over-confident on ImageNet-O. 
More research is needed to investigate what kind of prior can curb this over-confidence. \subsection{Temperature re-scaling} \label{se:temperature} A common heuristic to improve model performance during evaluation is to re-scale the logits post training using a tunable ``temperature'' parameter. The heuristic is also applied to the ResNet, epinet, and ensemble models in \cite{osband2022epistemic}. In this section, we take these temperature re-scaled models, with temperatures tuned on the ImageNet dataset, and evaluate them on ImageNet-A/O/C. Note that we do not consider having access to validation sets containing ImageNet-A/O/C samples, which poses a slightly different problem that deserves future investigation. Here we are interested in whether or not models equipped with temperature re-scaling optimized for the ImageNet dataset are robust on ImageNet-A/O/C. In Figure~\ref{fig:temperature}, we show the ratio of model performance with and without temperature re-scaling on ImageNet-A/O/C. For the expected calibration error, marginal log-loss, and joint log-loss, a ratio above $1$ means that re-scaling by the temperature tuned for ImageNet evaluation hurts performance on the corresponding test set. For AUPR, a ratio above $1$ indicates that the performance improves with temperature re-scaling. Temperature re-scaling should not affect accuracy or mCE, for which we should observe a ratio close to $1$. We see in Figure~\ref{fig:imagenet_a_temperature} that re-scaling by temperatures optimized for ImageNet hurts the performance of all models on ImageNet-A. Figure~\ref{fig:imagenet_o_temperature} shows that AUPR improves slightly with temperature re-scaling, but the increase is marginal (less than $2\%$ difference). Figure~\ref{fig:imagenet_c_temperature} shows that temperature re-scaling worsens the performance of the ResNet and epinet on ImageNet-C. However, for the ensemble models, temperature re-scaling helps ensembles of size greater than $3$.
According to \cite{osband2022epistemic}, the tuned temperature for the ensemble models is less than $1$. This seems consistent with the observations made in \cite{rahaman2021ensemble} that the calibration error of ensembles is sensitive to temperature re-scaling. The marginal and joint log-losses are also slightly better with temperature re-scaling for ensembles of size more than $3$, but the differences are small. Overall, it seems that re-scaling the trained models by temperatures optimized for the ImageNet dataset in general does not offer much benefit on these robustness datasets, and, in several cases, it even hurts model robustness by a significant amount. \begin{figure}[t] \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_a_temperature_rescaling.pdf} \caption{ImageNet-A} \label{fig:imagenet_a_temperature} \end{subfigure} \vspace{0.5cm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.4\textwidth]{figures/imagenet_o_temperature_rescaling.pdf} \caption{ImageNet-O} \label{fig:imagenet_o_temperature} \end{subfigure} \vspace{0.5cm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{figures/imagenet_c_temperature_rescaling.pdf} \caption{ImageNet-C} \label{fig:imagenet_c_temperature} \end{subfigure} \caption{Compare model performance on ImageNet-A/O/C with and without logits re-scaling using the temperatures optimized for the ImageNet evaluation set.} \label{fig:temperature} \end{figure} \section{Conclusions} We investigated the robustness of epinets against distributional shifts by taking the epinet model trained on ImageNet \citep{osband2022epistemic} and evaluating it on ImageNet-A, ImageNet-O, and ImageNet-C. We compared its performance with the performances of ResNets, which the epinet uses as its base network, and ensembles of ResNets. 
We found that the epinet in general improves on or attains a similar level of robustness to its base ResNet according to traditional robustness metrics. The ensemble approach is not competitive with the epinet according to these metrics. For joint predictions, the epinet outperforms the alternatives by a huge margin. Despite these improvements attained by the epinet, it is still far from addressing the robustness challenges imposed by these datasets. One possible future research direction is to investigate what kinds of priors can effectively inform the epinet, or other ENNs, of the possibility of inputs from a shifted distribution. The benchmark models in this paper all started from a relatively uninformed prior, which we have seen does not fare well in these robustness tasks. One example of producing a stronger prior effect is through data augmentation, which is a common heuristic in the robustness literature. The ENN framework can easily incorporate this technique through a change in the training loss function. Compared to traditional models that apply data augmentation, the ENN approach can offer the additional benefit of improving joint predictions across multiple inputs. Another possibility for epinets is to regularize through augmentation in the feature space rather than in the input image space. More research is needed to design and understand different kinds of priors in order to improve model robustness against distributional shifts. \bibliographystyle{apalike}
\section{Introduction} \label{s:intro} Radiation from astrophysical or laboratory plasmas depends upon the strength and topology of the magnetic field within the plasma, as well as on the acceleration of plasma particles. For relativistic plasmas, radiative output of an individual particle depends significantly on the particle's direction of motion, as relativistic beaming modifies the angular distribution of the radiation emitted to lie within a cone of opening angle 1/$\gamma$ along the particle's velocity. Particles in uniform magnetic fields produce synchrotron radiation, which is characterized by the sweep of the radiation cone as relativistic particles orbit or spiral along the magnetic field lines. In turbulent magnetic fields, particles still radiate in a synchrotron regime as long as the scale of the magnetic field variation is longer than the particles' average Larmor radii. However, if the magnetic field varies on a scale smaller than a Larmor radius, the particles ``jitter'' through a series of small transverse accelerations without a substantial change of direction. An observer will thus be in a particle's radiative cone over some length of its accelerated path, until the particle direction is sufficiently deviated that the line of sight is outside the radiative cone. For an isotropic particle distribution, the rates of particles entering and exiting paths within 1/$\gamma$ of the line of sight are equal. The resulting radiation will be that of particles ``jittering'' with small randomized accelerations, and will be reflective of the magnetic field spectra along a line of sight path through the turbulent magnetic field region \citep{M00}. 
Whereas in synchrotron radiation spectra the frequencies depend upon relativistic beaming and the sweep of the radiation cone, in jitter radiation spectra the frequencies instead depend directly upon the turbulent variations of the magnetic field and relativistic beaming serves only to limit the observed radiation to that emitted by trajectories within 1/$\gamma$ of the line of sight \citep{M00}. The resulting radiative spectrum differs from that of synchrotron radiation and, in particular, its low-frequency spectral index is not limited by $s=1/3$ (i.e., $dW/d\omega\propto\omega^{1/3}$ below the synchrotron peak), referred to in the literature as the synchrotron ``line of death'' \citep{Preece+98}. Gamma-ray bursts (GRBs) are believed to be a natural astrophysical site for the emergence of a strong small-scale turbulent magnetic field. Relativistic shocks generate strong magnetic fields that are random on very small (sub-Larmor) scales via the Weibel-like (particle streaming) instability \citep{ML99}. In such an instability, a small magnetic field perturbation in the plane transverse to the motions of counter-streaming particles results in the development and growth of filamentary current structures up to a saturation point at which they may persist for some time longer than the dynamical time-scale of the system, creating an extended region of small-scale turbulent magnetic field. This has been studied extensively in numerical PIC simulations (e.g., \citet{Silva+03,Nish+03,fred04,M+05,Spit05,CSA07,Spit07}) of both baryonic and pair plasmas. (See \citet{MS09} for a discussion of the applicability and implications of such simulations for GRB physics.) Recent simulations of magnetic reconnection in pair plasmas have also shown the generation of these strong small-scale magnetic fields via the Weibel instability acting on the streams of accelerated particles in the reconnection exhaust funnels \citep{Swis+08,ZH08}.
We speculate that magnetic reconnection events in a GRB may produce electron-positron plasmas in situ, even in initially lepton-poor plasmas. For the resulting regions of strong small-scale magnetic field, synchrotron theory is inapplicable and jitter radiation theory must be considered. Thus, regardless of the particular model of a GRB (e.g., baryonic, leptonic, magnetic, etc.), the jitter radiation mechanism coupled with the relativistic kinematics of the ejected material represents a viable phenomenological model. It has recently been shown that such a model reproduces a number of spectral features of GRB light-curves remarkably well \citep{MPR09}. Following the approach of \citet{M06}, we define a geometry where the local filamentation axis (the local axis along which counter-streaming particle motion occurs) lies in the $z$-direction so that the field perturbation from the shock or reconnection event is amplified by the Weibel instability in the $xy$-plane (this might be at or upstream from a shock front lying in the $xy$-plane and propagating in $z$, or upstream from reconnection exhaust funnels with filaments pointing in the $z$ direction and the Weibel fields being perpendicular to it, as suggested by PIC simulations). The resulting amplified magnetic field is randomly oriented in the $xy$-plane and independently generated at each position in $z$ (shown theoretically in \citet{ML99} and confirmed in PIC simulations such as \citet{Nish+03}, \citet{Silva+03}, and \citet{fred04}). The decoupled behavior of the magnetic fields along the filamentation axis ($z$) and in the plane ($xy$) transverse to it means that the resulting Fourier spectra are independent of one another and the overall field distribution is highly anisotropic. Qualitatively, the spectrum of the Weibel-generated magnetic field in the direction transverse to the filamentation axis has been shown to rise and then drop at a scale of order the plasma skin depth \citep{fred04}.
The magnetic field thus has a general spectral form that may be parameterized as \begin{equation}\label{eq:perpfspec} f_{xy}(k_{\perp})=\frac{k_{\perp}^{2\alpha_{\perp}}}{\left( \kappa_{\perp}^2+k_{\perp}^2\right)^{\alpha_{\perp}+\beta_{\perp}}}, \end{equation} where $k_\perp = (k_x^2 + k_y^2)^{1/2}$, and $\alpha_{\perp} > 0$ and $\beta_{\perp} > 0$. In this form $k_{\perp}$ refers to the magnetic field wavenumber in the plane transverse to the filamentation axis and $\kappa_{\perp}$, $\alpha_{\perp}$, and $\beta_{\perp}$ are free parameters controlling the spectral break and the soft and hard spectral indices, respectively. The field along the filamentation axis is in general unknown but we expect it to be of a similar form, with independent free parameters $\kappa_{\parallel}$, $\alpha_{\parallel}$, and $\beta_{\parallel}$: \begin{equation}\label{eq:pllfspec} f_{z}(k_{\parallel})=\frac{k_{\parallel}^{2\alpha_{\parallel}}}{\left( \kappa_{\parallel}^2+k_{\parallel}^2\right)^{\alpha_{\parallel}+\beta_{\parallel}}}, \end{equation} where again $\alpha_{\parallel} > 0$ and $\beta_{\parallel} > 0$. A plot of $f_{z}(k_{\parallel})$ is shown for a particular choice of parameters in Figure \ref{fig:basicspectra} and demonstrates the basic behavior of this function. The variables $k$ and $\kappa$ are presumed to be unitless, their units $k_0$ having been separated into a normalizable coefficient $k_0^{-2\beta}$. We have modified this form from the \citet{M06} paper, in which the power in the denominator was simply $\beta$ and it was required that $\beta > \alpha$. In the spectral forms used here, the asymptotes of the functions given in equations (\ref{eq:perpfspec})--(\ref{eq:pllfspec}) are: \begin{equation}\label{eq:fldasympt} f(k)=\left\{ \begin{array} {l@{\qquad}l} k^{2\alpha} ,& \mbox{if } k \ll \kappa,\\ k^{-2\beta} ,& \mbox{if } k \gg \kappa. \end{array} \right.
\end{equation} The theory of jitter radiation with the above magnetic field spectra has been utilized by \citet{M06} to derive the basic equations describing the jitter spectrum and demonstrate its dependence on the angle $\theta$ between the line of sight and the local filamentation axis. Taking into account our modification of the form of the magnetic field spectra, the analytical work \citep{M06} indicates that the jitter radiation $F_{\nu}$ spectrum should have the following general properties: \begin{enumerate} \item two breaks, with locations depending on $\kappa_\perp$, $\kappa_\parallel$, and $\theta$, \item a high-energy spectral index $\beta^{\prime}$, where $\beta^{\prime}$ approaches $\beta_{\parallel}$ as $\theta$ goes to 0 and $\beta^{\prime}$ approaches $\beta_{\perp}$ as $\theta$ goes to $\pi$/2, \item a low-energy spectral index $\alpha^{\prime}$, where $\alpha^{\prime}$ approaches 1 as $\theta$ goes to 0 and $\alpha^{\prime}$ approaches 0 as $\theta$ goes to $\pi$/2. \end{enumerate} The resulting radiation spectra from numerical calculations for a limited selection of parameter choices and viewing angles were presented in \citet{M06}. Here we present the results of more extensive numerical calculations illustrating the full range of jitter spectral variation due to viewing angle for a chosen set of field parameters, followed by a more thorough exploration of the magnetic field parameter space and its implications for the jitter radiation spectrum. Section II of our paper presents the calculated acceleration spectra for a range of viewing angles $\theta$. In Section III we develop the connection between the acceleration and radiation spectra through an analytical treatment of a simple linearized model of the acceleration spectrum. In Section IV we present the radiation spectra calculated for various viewing angles $\theta$ and analyze the spectral variation with viewing angle by means of a five-parameter spectral fit.
In Section V we show the effect of variations in the magnetic field parameters upon the radiation spectra. Section VI presents discussion and conclusions. \section{Acceleration Spectra} \label{s:accspec} The equations for calculating the spectrum of a particle's acceleration due to magnetic field turbulence generated by a relativistic Weibel-type instability were developed in \citet{M06}, but only calculated in full for a single representative oblique viewing angle of $\theta = 10^\circ$ in between the head-on ($\theta = 0$) and edge-on ($\theta = 90^\circ$) cases. Here we calculate the acceleration spectra for a more complete range of intermediate viewing angles in order to explore the spectral progression with $\theta$. As derived in \citet{M06}, the volume-averaged temporal Fourier component of a particle's acceleration due to the Lorentz force for the static magnetic field generated by the relativistic Weibel instability is \begin{eqnarray}\label{eq:accform} \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2 \right\rangle & = & \left(2\pi V\right)^{-1}\int\left|\mathbf{w_k}\right|^2\delta\left(\omega^{\prime}+\mathbf{k\cdot v}\right)d\mathbf{k}\\ & = & \frac{C}{2\pi}\left(1+\cos ^2\theta\right) \int f_z(k_\parallel)f_{xy}(k_\perp)\delta\left(\omega^{\prime}+\mathbf{k\cdot v}\right)d^2k_{\perp}dk_{\parallel} \end{eqnarray} where $\mathbf{k}$ is the magnetic field wavevector, $\bf v$ is the particle's velocity vector, and $\mathbf{w}_{\omega^{\prime}}=\int\mathbf{w}e^{i\omega^{\prime}t}dt$. For the case of a shock viewed at an oblique angle $\theta$ from the normal of the shock plane (defined as the $z$-axis), ${\bf k\cdot v}=k_{x} v\sin\theta + k_{z} v\cos\theta$, where we have defined the $x$-axis so that the velocity vector $\bf v$ lies in the $xz$-plane. We can then use the delta function to substitute for either $k_{x}$ or $k_{z}$.
This becomes: \begin{equation}\label{eq:accxyform} \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2\right\rangle = \frac{C}{2\pi |v\cos\theta|}(1+\cos^2\theta)\int f_z\left(\frac{\omega^{\prime}}{v\cos\theta}+k_{x}\tan\theta\right)f_{xy}((k_x^2+k_y^2)^{1/2})dk_xdk_y \end{equation} or \begin{equation}\label{eq:accz} \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2\right\rangle = \frac{C}{2\pi |v\sin\theta|}(1+\cos^2\theta)\int f_z(k_z)f_{xy}\bigg(\bigg(\bigg(\frac{\omega^{\prime}}{v\sin\theta}+k_{z}\cot\theta\bigg)^2+k_y^2\bigg)^{1/2}\bigg)dk_ydk_z \end{equation} where $C$ is an arbitrary normalization constant, proportional to $\left\langle B^2\right\rangle$. The two forms are equivalent; however, as we approach $\theta$ = 0 or $\theta$ = $\pi/2$, the calculation is more convenient if one avoids denominators approaching zero by choosing the appropriate form. It should be noted that neither form is valid for the endpoints $\theta$ = 0 or $\theta$ = $\pi$/2, which must be treated separately as in \citet{M06}. 
For a single radiating particle, we plug equations (\ref{eq:perpfspec}) and (\ref{eq:pllfspec}) for $f_{xy}$ and $f_z$ into equations (\ref{eq:accxyform}) and (\ref{eq:accz}) to obtain: \begin{equation}\label{eq:accxyfin} \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2\right\rangle = \eta_1(\theta)\frac{C}{2\pi v} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\frac{(\frac{\omega^{\prime}}{v\sin\theta}+k_x)^{2\alpha_{\parallel}}}{(\kappa_{\parallel}^2\cot^2\theta+(\frac{\omega^{\prime}}{v\sin\theta}+k_x)^2)^{\alpha_{\parallel}+\beta_{\parallel}}}\frac{(k_x^2+k_y^2)^{\alpha_\perp}}{(\kappa_{\perp}^2+k_x^2+k_y^2)^{\alpha_{\perp}+\beta_{\perp}}}dk_xdk_y \end{equation} where \begin{equation} \eta_1(\theta) = \frac{(\tan\theta)^{-2\beta_{\parallel}}(1+\cos^2\theta)}{|\cos\theta|} \end{equation} or alternatively \begin{equation}\label{eq:acczfin} \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2\right\rangle = \eta_2(\theta)\frac{C}{2\pi v}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{k_{z}^{2\alpha_{\parallel}}}{( \kappa_{\parallel}^2+k_z^2)^{\alpha_{\parallel}+\beta_{\parallel}}}\frac{((\frac{\omega^{\prime}}{v\cos\theta}+k_z)^2+k_y^2\tan^2\theta)^{\alpha_{\perp}}}{\left( \kappa_{\perp}^2\tan^2\theta+(\frac{\omega^{\prime}}{v\cos\theta}+k_{z})^2+k_y^2\tan^2\theta\right)^{\alpha_{\perp}+\beta_{\perp}}}dk_ydk_z \end{equation} where \begin{equation} \eta_2(\theta) = \frac{(\cot\theta)^{-2\beta_{\perp}}(1+\cos^2\theta)}{|\sin\theta|} \end{equation} This may then be numerically integrated to produce the acceleration spectrum as a function of $\omega^{\prime}$. We normalize the wave-vector to a dimensional constant $k_0$, hence $k$ is dimensionless, as discussed above. The spectral form remains the same with $\kappa = \kappa^{\prime}k_0$ and a normalizable factor $k_0^{-2\beta}$ in front. The frequencies $\omega$ are normalized to $\omega_0$ such that $v = \omega_0/k_0 = 1$.
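As an illustration of this normalization and of the quadrature involved, the following sketch (our own, not code from \citet{M06}; the parameter values and the finite integration cutoff \texttt{KMAX} are assumptions of the sketch) evaluates equation (\ref{eq:accxyform}) numerically for the field spectra of equations (\ref{eq:perpfspec}) and (\ref{eq:pllfspec}):

```python
import numpy as np
from scipy.integrate import dblquad

# Illustrative sketch: evaluate the acceleration spectrum <|w_omega'|^2>
# (up to the overall constant C/(2 pi)) by direct 2-D quadrature.
# Parameter values and the finite cutoff KMAX are assumptions.
KAPPA_PERP = KAPPA_PAR = 10.0   # spectral breaks kappa_perp, kappa_parallel
ALPHA_PERP = ALPHA_PAR = 2.0    # soft indices alpha_perp, alpha_parallel
BETA_PERP = BETA_PAR = 1.5      # hard indices beta_perp, beta_parallel
V = 1.0                          # normalization v = omega_0 / k_0 = 1
KMAX = 100.0                     # finite cutoff standing in for +/- infinity

def f_z(k):
    """Field spectrum along the filamentation axis."""
    return k**(2 * ALPHA_PAR) / (KAPPA_PAR**2 + k**2)**(ALPHA_PAR + BETA_PAR)

def f_xy(k):
    """Transverse field spectrum."""
    return k**(2 * ALPHA_PERP) / (KAPPA_PERP**2 + k**2)**(ALPHA_PERP + BETA_PERP)

def acc_spectrum(omega_p, theta):
    """Acceleration spectrum at frequency omega_p and viewing angle theta:
    the delta function substitutes k_z = omega'/(v cos theta) + k_x tan theta,
    leaving a 2-D integral over (k_x, k_y)."""
    pref = (1.0 + np.cos(theta)**2) / abs(V * np.cos(theta))
    def integrand(ky, kx):  # dblquad passes the inner variable first
        kz = omega_p / (V * np.cos(theta)) + kx * np.tan(theta)
        return f_z(abs(kz)) * f_xy(np.hypot(kx, ky))
    val, _ = dblquad(integrand, -KMAX, KMAX, -KMAX, KMAX)
    return pref * val
```

A full spectrum is obtained by sweeping \texttt{omega\_p} over a logarithmic grid; near $\theta = 0$ or $\pi/2$ the alternative substitution of equation (\ref{eq:accz}) avoids the vanishing denominators.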
The parameters $\alpha_i$, $\beta_i$, and $\kappa_i$ in our magnetic field spectra are in general unknown free parameters that depend upon the spatial distribution of the magnetic field in the turbulent region; the parameters $\kappa_i$, corresponding to the (inverse) correlation lengths of that field, will vary even within an astrophysical system and affect the resulting peak energy of the radiation spectrum. For our initial calculations of spectral variations with $\theta$ we choose reasonable values of the parameters as follows: \begin{eqnarray} \kappa_\perp = \kappa_\parallel = 10, \\ \alpha_\perp = \alpha_{\parallel} = 2, \\ \beta_\perp = \beta_\parallel = 1.5\,. \end{eqnarray} These parameters are varied (jointly and individually) in Section \ref{s:sfvarparam} to explore the resulting effect on the calculated radiation spectrum. We present the results for varying $\theta$ in Figure \ref{fig:AccSpecAll}. Figure \ref{fig:AccSpecPeakDetail} shows further detail of the critical region around the peak of the acceleration spectra. The graphs show linearly-connected $\log_e-\log_e$ data, with data point intervals of 0.1. We have arbitrarily normalized the spectra so that the low-$\omega^{\prime}$ part of the $\theta = 2^\circ$ spectrum asymptotes at unity. The acceleration spectra are flat for low $\omega^{\prime}$; then at a certain $\omega^{\prime}$ they turn rapidly into a sloped region, and then undergo a second transition to a steep decline for high $\omega^{\prime}$. For $\theta$ near 0 the spectra have a clear peak, but as $\theta$ increases the peak recedes and eventually disappears altogether. As $\theta$ approaches $\pi$/2 a peak again becomes evident, but the location of the peak has shifted by about 0.4 in $\log_e\omega^{\prime}$ from its position for low $\theta$. The transition point for flattening at low $\omega^{\prime}$ appears to move off rapidly to lower $\omega^{\prime}$ as $\theta$ approaches 0.
Figure \ref{fig:AccSpecAmps} plots the amplitude of the acceleration spectra at our lower calculation boundary, at the approximate location of the peak for small $\theta$, and at the approximate location of the peak for $\theta$ close to $\pi$/2. The crossings of the lines on this graph correspond to spectral transitions as the peak disappears and then eventually reappears at a new location for higher $\theta$. The graph of the slope of the $\log-\log$ plot in Figure \ref{fig:AccSpecSlope} also illustrates the disappearance and reemergence of the peak, but further shows that even for the unpeaked spectra there is a flattening of the slope of the mid-range $\theta$ spectra in the region in between the positions of the peaks that appear at higher and lower $\theta$. For unpeaked spectra, there remains a transition region of some extent between the low-frequency and high-frequency power laws; consequently, the unpeaked spectra may still be better fit by division into three power law regions as opposed to two. An analysis of equations (\ref{eq:accxyfin}) and (\ref{eq:acczfin}) indicates that $\omega^{\prime}$ functions as a shift of the center of the magnetic spectral form in which we have made the delta-function substitution. Since this $\omega^{\prime}$ factor is always positive, the function's offset always occurs in the direction of negative wavenumber components $k_x$ or $k_z$. An example of this behavior for the product of two functions such as equations (\ref{eq:perpfspec}) and (\ref{eq:pllfspec}) over a range of offsets is shown in Figure \ref{fig:offsetvar}. The resulting integral is highly sensitive to the shape of the two functions and the offset between them, which control the resonance-like behavior that produces the resulting peak and transition points in the acceleration spectrum.
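This offset dependence is straightforward to reproduce numerically. The sketch below is our own illustration, not the calculation behind Figure \ref{fig:offsetvar}; the parameter values ($\kappa = 10$, $\alpha = 2$, $\beta = 1.5$) and the integration grid are assumptions:

```python
import numpy as np

# Illustrative sketch: overlap integral of two spectral forms of the type
# in the field-spectrum equations, as one is offset (mimicking omega').
# The parameter values and the finite grid are assumptions.
def f(k, kappa=10.0, alpha=2.0, beta=1.5):
    return k**(2 * alpha) / (kappa**2 + k**2)**(alpha + beta)

def overlap(offset):
    """Riemann-sum overlap of f(k + offset) with f(k)."""
    k = np.linspace(-400.0, 400.0, 8001)
    dk = k[1] - k[0]
    return float(np.sum(f(np.abs(k + offset)) * f(np.abs(k))) * dk)
```

The overlap is largest for small offsets, where the two spectral peaks coincide, and falls off as the offset pushes one peak into the power-law tail of the other -- the resonance-like behavior responsible for the peak and transition points of the acceleration spectrum.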
The angle $\theta$ plays a role in determining both the width and the offset of one function relative to the other: the $\kappa_{\parallel}^2\cot^2\theta$ or $\kappa_{\perp}^2\tan^2\theta$ terms in the denominator influence the width of the function under consideration, while the offset is given by $\frac{\omega^\prime}{v\cos\theta}$ or ${\frac{\omega^\prime}{v\sin\theta}}$. These are linked such that as $\theta$ increases, both the width and the offset of the function containing $\theta$ increase. The spectral indices $\alpha_\perp$, $\alpha_{\parallel}$, $\beta_\perp$, and $\beta_\parallel$ can also influence the width of the functions and hence affect the location of the transition points in the acceleration spectra. The overall effect of such variations on the resulting jitter radiation spectrum will be explored further in Section \ref{s:sfvarparam}. Summarizing the results of this section, we find that the acceleration spectra are generally characterizable to a good approximation by division into three regions: a flat low-$\omega^{\prime}$ region and two power-law regions, as shown in Figure \ref{fig:wcartoon}. Altogether these may be described via one amplitude, two non-zero spectral indices that are functions of the $\alpha$ and $\beta$ parameters of the field spectra, and two breaks, which depend primarily on the magnetic spectral peaks $\kappa_\perp$, $\kappa_{\parallel}$, and the viewing angle $\theta$. In the next section we develop a simple linearized model of the acceleration spectrum using these five parameters and show that they can be used to analytically predict the behavior of the final jitter radiation spectrum.
\section{From Acceleration to Radiation Spectra} \label{s:acctorad} The angle-averaged emissivity of a relativistic particle undergoing a series of small transverse accelerations not substantially affecting its overall velocity is as follows (\citealp{LL,M00}, and others): \begin{equation}\label{eq:powereq} \frac{dW}{d\omega}=\frac{e^2\omega}{2\pi c^2}\int_{\omega/2\gamma^2}^{\infty}\frac{\left|\mathbf{w}_{\omega^{\prime}}\right|^2}{\omega^{\prime2}}\left(1-\frac{\omega}{\omega^{\prime}\gamma^2}+\frac{\omega^2}{2\omega^{\prime2}\gamma^4}\right)d\omega^{\prime} \end{equation} Analysis of the high-$\omega$ and low-$\omega$ asymptotic behavior of the radiation spectra for shocks viewed head-on and edge-on, as carried out in \citet{M06}, yields: \begin{eqnarray}\label{eq:radasympt} \mbox{for } \theta=0,\qquad \frac{dW}{d\omega}\propto \left\{ \begin{array} {l@{\qquad}l} \omega^1 & \mbox{if } \omega\ll\kappa_\| v \gamma^2\\ \omega^{-2\beta_{\parallel}} & \mbox{if } \omega\gg\kappa_\| v \gamma^2\\ \end{array} \right. \nonumber \\ \nonumber \\ \mbox{for } \theta=\pi/2,\qquad \frac{dW}{d\omega}\propto \left\{ \begin{array} {l@{\qquad}l} \omega^0 & \mbox{if } \omega\ll\kappa_\perp v \gamma^2\\ \omega^{-2\beta_{\perp}+1} & \mbox{if } \omega\gg\kappa_\perp v \gamma^2\\ \end{array} \right. \end{eqnarray} where $\theta$ is the angle between the line of sight and the normal to the shock front \citep{M06} and $\alpha_\perp,\ \alpha_\|>1/2$. We thus expect the radiation spectra at oblique viewing angles to vary between these two forms, dominated by the parallel or the transverse spectra as we vary between the two extremes. The $\omega$ dependence contributed by the integral primarily originates in the lower limit $\omega/(2\gamma^2)$ and where it falls on the acceleration spectrum. 
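Given any model of the acceleration spectrum, equation (\ref{eq:powereq}) reduces $dW/d\omega$ to a single quadrature. The sketch below is our own illustration of that calculation, with an assumed toy flat-plus-power-law acceleration spectrum and an assumed Lorentz factor $\gamma = 10$:

```python
import numpy as np
from scipy.integrate import quad

GAMMA = 10.0  # assumed Lorentz factor for this illustration

def acc_toy(wp, w2=10.0, s2=3.0):
    """Toy acceleration spectrum: flat below the break w2, then a
    power-law decline with index -s2 (an assumption for illustration)."""
    return 1.0 if wp < w2 else (wp / w2)**(-s2)

def dW_domega(w, acc=acc_toy, gamma=GAMMA):
    """Angle-averaged emissivity, up to the constant e^2/(2 pi c^2)."""
    def kernel(wp):
        x = w / (wp * gamma**2)
        return acc(wp) / wp**2 * (1.0 - x + 0.5 * x**2)
    lo = w / (2.0 * gamma**2)   # lower integration limit omega / 2 gamma^2
    val, _ = quad(kernel, lo, np.inf, limit=200)
    return w * val
```

Because the kernel weight $1 - x + x^2/2$ (with $x = \omega/\omega^{\prime}\gamma^2$) stays between $1/2$ and $1$ over the integration range, the $\omega$ dependence is indeed dominated by where the lower limit $\omega/2\gamma^2$ falls on the acceleration spectrum.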
In section \ref{s:accspec}, we found that the acceleration spectrum could be simply characterized as three regions of approximately power-law behavior: a flat initial amplitude at low $\log\omega^{\prime}$ (Region I), a region of positive or negative slope (Region II), and a region with a more steeply negative slope (Region III). Using this to make a simple approximation for our acceleration spectrum in three regions, we can calculate an approximate analytical solution to equation \ref{eq:powereq}. We define our simple approximation to the acceleration spectrum using five free parameters, all of which may be fit to the spectrum on a $\log_e-\log_e$ plot: $\mathpzc{A}=\log a_0$ is the amplitude of the low-$\omega^{\prime}$ limit; $\mathpzc{T}_1=\log \omega_1^{\prime}$ is the transition point between the first and second regions; $\mathpzc{S}_1$ is the spectral index in the second region; $\mathpzc{T}_2=\log \omega_2^{\prime}$ is the transition point between the second and third regions; and $-\mathpzc{S}_2$ is the spectral index in the third region ($\mathpzc{S}_2 > 0$). 
The acceleration spectrum then has the form (shown in Figure \ref{fig:wcartoon}): \begin{eqnarray}\label{eq:accap1} \label{eq:accap1.1} \lefteqn{\mbox{for } \omega^{\prime} < e^{\mathpzc{T}_1}:} \nonumber \\ & & \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2\right\rangle = e^\mathpzc{A} = a_0, \\ \label{eq:accap1.2} \lefteqn{\mbox{for } e^{\mathpzc{T}_1} < \omega^{\prime} < e^{\mathpzc{T}_2}:} \nonumber \\ & & \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2\right\rangle = e^{\mathpzc{A}-\mathpzc{S}_1\mathpzc{T}_1}\omega^{\prime \mathpzc{S}_1} = a_0\left(\frac{\omega^{\prime}}{\omega^{\prime}_1}\right)^{\mathpzc{S}_1},\\ \label{eq:accap1.3} \lefteqn{\mbox{for } \omega^{\prime} > e^{\mathpzc{T}_2}: } \nonumber \\ & & \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right| ^2\right\rangle = e^{\mathpzc{A}+\mathpzc{S}_1\mathpzc{T}_2-\mathpzc{S}_1\mathpzc{T}_1+\mathpzc{S}_2\mathpzc{T}_2}\omega^{\prime -\mathpzc{S}_2} = a_0 \left(\frac{\omega^{\prime}_2}{\omega^{\prime}_1}\right)^{\mathpzc{S}_1}\left(\frac{\omega^{\prime}}{\omega^{\prime}_2}\right)^{-\mathpzc{S}_2} \end{eqnarray} In Figure \ref{fig:modaccspec} we show example models for the acceleration spectra at $\theta$ = 10 degrees and $\theta$ = 60 degrees and their comparison to the original acceleration spectra (as in Figure \ref{fig:AccSpecAll}) for a particular choice of fitting rules. To first order (neglecting the second and third terms in Eq. 
(\ref{eq:powereq})), the resulting radiation spectrum is as follows: \begin{eqnarray} \label{eq:radap1.1} \mbox{for } \omega/2\gamma^2 < e^{\mathpzc{T}_1}: & & \nonumber \\ \left(\frac{dW}{d\omega}\right)_I &=& \frac{e^2}{2 \pi c^2} e^{\mathpzc{A}}\left[(2\gamma^2)-\omega\frac{\mathpzc{S}_1 e^{-\mathpzc{T}_1}}{\mathpzc{S}_1-1}+\omega \frac{(\mathpzc{S}_1+\mathpzc{S}_2)e^{-\mathpzc{T}_1\mathpzc{S}_1+\mathpzc{T}_2(\mathpzc{S}_1-1)}}{(\mathpzc{S}_1-1)(\mathpzc{S}_2+1)}\right] \nonumber \\ &=& \frac{a_0 e^2}{2\pi c^2}\left[2\gamma^2-\frac{\mathpzc{S}_1}{\mathpzc{S}_1-1}\left(\frac{\omega}{\omega_1^{\prime}}\right)+\frac{\mathpzc{S}_1+\mathpzc{S}_2}{(\mathpzc{S}_1-1)(\mathpzc{S}_2+1)}\left(\frac{\omega_2^{\prime}}{\omega_1^{\prime}}\right)^{\mathpzc{S}_1}\left(\frac{\omega}{\omega_2^{\prime}}\right)\right], \\ \label{eq:radap1.2} \mbox{for } e^{\mathpzc{T}_1} < \omega/2\gamma^2 < e^{\mathpzc{T}_2}: & & \nonumber \\ \left(\frac{dW}{d\omega}\right)_{II} &=& \frac{e^2}{2 \pi c^2}e^{\mathpzc{A}-\mathpzc{T}_1\mathpzc{S}_1}\left[\frac{-\omega^{\mathpzc{S}_1}}{(\mathpzc{S}_1-1)(2\gamma^2)^{\mathpzc{S}_1-1}}+\omega\left(\frac{1}{\mathpzc{S}_1-1}+\frac{1}{\mathpzc{S}_2+1}\right)e^{\mathpzc{T}_2(\mathpzc{S}_1-1)}\right] \nonumber \\ &=& \frac{a_0 e^2}{2 \pi c^2} \left[-\frac{(2\gamma^2)^{1-\mathpzc{S}_1}}{(\mathpzc{S}_1-1)}\left(\frac{\omega}{\omega_1^{\prime}}\right)^{\mathpzc{S}_1}+ \frac{\mathpzc{S}_1+\mathpzc{S}_2}{(\mathpzc{S}_1-1)(\mathpzc{S}_2+1)} \left(\frac{\omega_2^{\prime}}{\omega_1^{\prime}}\right)^{\mathpzc{S}_1}\frac{\omega}{\omega_2^{\prime}}\right], \\ \label{eq:radap1.3} \mbox{for } \omega/2\gamma^2 > e^{\mathpzc{T}_2}: & & \nonumber \\ \left(\frac{dW}{d\omega}\right)_{III} &=& \frac{e^2}{2\pi c^2}e^{\mathpzc{A}-\mathpzc{T}_1\mathpzc{S}_1+\mathpzc{T}_2(\mathpzc{S}_1+\mathpzc{S}_2)}\frac{(2\gamma^2)^{1+\mathpzc{S}_2}}{\left|\mathpzc{S}_2+1\right|}\omega^{-\mathpzc{S}_2} \nonumber \\ &=& \frac{a_0 e^2}{2 \pi c^2} 
\frac{(2\gamma^2)^{1+\mathpzc{S}_2}}{\left|\mathpzc{S}_2+1\right|}\left(\frac{\omega_2^{\prime}}{\omega^{\prime}_1}\right)^{\mathpzc{S}_1}\left(\frac{\omega}{\omega_2^{\prime}}\right)^{-\mathpzc{S}_2} \end{eqnarray} The radiative power spectrum (equations (\ref{eq:radap1.1})--(\ref{eq:radap1.3})) obtained analytically by our fit-based approximation for the acceleration spectrum agrees to within about 10\% with that obtained via full numerical integration in the following section. We have chosen our fitting method for the acceleration spectrum to most closely capture the spectral indices; a different choice of fit may allow for a better determination of peak positions. Calculations of such spectra for parameters fitted to our acceleration spectra at $\theta$ = 10 degrees and $\theta$ = 60 degrees are shown in Figures \ref{fig:modradspec10} and \ref{fig:modradspec60}. The region boundaries in these figures indicate that the radiation spectrum cannot be described by a simple linear approximation in the three regions that were defined by the breaks in our acceleration spectrum. The key features (position of spectral peak and spectral breaks) of the radiation spectrum originate in the additional terms in Equations (\ref{eq:radap1.1}) and (\ref{eq:radap1.2}). Consequently, the transition points in our radiation spectrum do not directly correspond to the transition points in the acceleration spectrum. The asymptotic behavior of the spectrum at high and low energies can be easily obtained from Eqs.
(\ref{eq:radap1.1}) and (\ref{eq:radap1.3}): \begin{eqnarray} \label{eq:radap1..1} \mbox{for } \omega/2\gamma^2\ < \omega_1^{\prime}: & & \nonumber \\ \left(\frac{dW}{d\omega}\right)_I &\propto& \omega^0, \\ \label{eq:radap1..3} \mbox{for } \omega/2\gamma^2 > \omega_2^{\prime}: & & \nonumber \\ \left(\frac{dW}{d\omega}\right)_{III} &\propto& \omega^{-\mathpzc{S}_2} \end{eqnarray} Thus the high and low-energy asymptotic behavior of the radiation spectrum will be identical to the high and low-energy behavior of the acceleration spectrum. The behavior of the spectrum in the intermediary Region II can be solved for as well. We take the derivative of Equation (\ref{eq:radap1.2}) to solve for the position $P = \log(\omega_p/2\gamma^2)$ of the spectral peak, in cases where it exists: \begin{equation} \label{eq:radpeak} P = \frac{1}{1-\mathpzc{S}_1}\log\left[\frac{\mathpzc{S}_1(\mathpzc{S}_2+1)}{\mathpzc{S}_1+\mathpzc{S}_2}\right]+\mathpzc{T}_2 \\ \end{equation} We note that the peak position becomes undefined in the case $\mathpzc{S}_1<0, \left|\mathpzc{S}_1\right|>\mathpzc{S}_2$, for which Equation (\ref{eq:radap1.2}) is everywhere decreasing. The case $\mathpzc{S}_1<0,\left|\mathpzc{S}_1\right|<\mathpzc{S}_2$, in which the acceleration spectrum would decline more steeply in Region II than in Region III, is impermissible, so we find that the radiation spectrum will be unpeaked whenever the mid-range spectral index $\mathpzc{S}_1$ of the acceleration spectrum is negative. An exploration of the behavior of peak point $P$ relative to the region boundaries shows that for certain values of $\theta$ the peak of the Region II function exists, but has crossed the boundary into Region I and consequently does not appear in the resulting radiation spectrum.
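The peak condition of equation (\ref{eq:radpeak}), together with the boundary check against Region I, is easy to encode. The helper below is our own illustration; the parameter values in the test cases are arbitrary:

```python
import math

def radiation_peak(S1, S2, T1, T2):
    """Peak position P = log(omega_p / 2 gamma^2) of the Region II
    expression, or None when no interior peak exists: either S1 <= 0
    (the Region II expression is everywhere decreasing) or the formal
    peak falls below the Region I/II boundary T1 and is not observed.
    Note the first term is negative for all valid S1, so P < T2 always."""
    if S1 <= 0.0 or S1 == 1.0:
        return None
    P = math.log(S1 * (S2 + 1.0) / (S1 + S2)) / (1.0 - S1) + T2
    return P if P > T1 else None
```

For instance, with $\mathpzc{S}_1 = 2$, $\mathpzc{S}_2 = 3$, $\mathpzc{T}_1 = 0$, $\mathpzc{T}_2 = 2$ the peak sits at $P = 2 - \log 1.6 \approx 1.53$, inside Region II; shrinking $\mathpzc{T}_2 - \mathpzc{T}_1$ or lowering $\mathpzc{S}_1$ pushes the formal peak below $\mathpzc{T}_1$, reproducing the disappearance described above.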
Thus, while a calculation of $P$ from the results of fitting the acceleration spectrum appears to indicate the re-emergence of a peak in the radiation spectrum as $\theta$ approaches $\pi/2$, this peak falls beyond the Region II lower boundary and is not observed. We note that the first term in Equation (\ref{eq:radpeak}) is negative for both $0 < \mathpzc{S}_1 < 1$ and $\mathpzc{S}_1 > 1$, so the peak is always located below the transition point between Regions II and III. An analysis of the behavior of Equation (\ref{eq:radap1.2}) below the Region II peak indicates the following behavior: \begin{equation} \label{eq:radap1..2} \mbox{for } \omega_1^{\prime} < \omega/2\gamma^2\ < \omega_2^{\prime}: \\ \left(\frac{dW}{d\omega}\right)_{II} \propto \cases{ \omega^1 & if $\mathpzc{S}_1>1$, \cr \omega^{\mathpzc{S}_1} & if $\mathpzc{S}_1<1$, } \\ \end{equation} Thus, from Equations (\ref{eq:radap1..1})-(\ref{eq:radap1..2}), it is evident that the spectral indices $\mathpzc{S}_1$ and $\mathpzc{S}_2$ of the acceleration spectra will generally correspond to spectral indices $s_1$ and $s_2$ in two power law regions of the radiation spectra. In the case of the high-frequency spectral index $s_2$ this correspondence is exact; however, the relation between mid-range (i.e. intermediate-frequency) spectral indices $s_1$ and $\mathpzc{S}_1$ is modified by an upper limit of unity on $s_1$ and also breaks down when the first term in Equation (\ref{eq:radpeak}) is undefined or larger in magnitude than the distance between the acceleration spectrum's transition points $\mathpzc{T}_2 - \mathpzc{T}_1$. The asymptotic form in Equation (\ref{eq:radap1..1}) suggests that we may neglect the second and third terms in Equation (\ref{eq:radap1.1}) and solve for the transition point $T = \log(\omega_t/2\gamma^2)$ at which the dominating term in Region II becomes significant.
We find that: \begin{eqnarray} \label{eq:radtranspt} \mbox{for } \mathpzc{S}_1 < 1: & & \nonumber \\ & \left(\frac{\omega_t}{2\gamma^2}\right)^{\mathpzc{S}_1} & = (1-\mathpzc{S}_1) \omega_1^{\prime \mathpzc{S}_1} \nonumber \\ & T & = \frac{1}{\mathpzc{S}_1}\log(1-\mathpzc{S}_1)+\mathpzc{T}_1, \\ \mbox{for } \mathpzc{S}_1 > 1: & & \nonumber \\ & \left(\frac{\omega_t}{2\gamma^2}\right)^{\mathpzc{S}_1} & = \frac{(\mathpzc{S}_1-1)(\mathpzc{S}_2+1)}{\mathpzc{S}_1+\mathpzc{S}_2}\left(\frac{\omega_1^{\prime \mathpzc{S}_1}}{\omega_2^{\prime (\mathpzc{S}_1-1)}}\right) \nonumber \\ & T & = \log\left[\frac{(\mathpzc{S}_1-1)(\mathpzc{S}_2+1)}{\mathpzc{S}_1+\mathpzc{S}_2}\right] + \mathpzc{S}_1\mathpzc{T}_1+(\mathpzc{S}_1-1)\mathpzc{T}_2 \end{eqnarray} We have demonstrated that a simple approximation for our acceleration spectrum allows us to analytically obtain some of the key features of the resulting jitter radiation spectrum, notably that it will have a similar three-region form with $dW/d{\omega} \propto \omega^0$ for small $\omega$ and $dW/d{\omega} \propto \omega^{-s_2}$ for large $\omega$ and a possibly-peaked transition region. Unlike the acceleration spectrum, the intermediary region in the jitter radiation spectrum will have a maximum slope of 1 and may be unpeaked at angles $\theta$ at which the acceleration spectrum was peaked. We emphasize that our spectral calculations are all for a single emitting electron, {\em not} a power-law distribution of electrons. Yet, the power-law photon spectrum emerges at high energies (above the second jitter spectral break $\omega_2$) in this jitter mechanism, in contrast to the synchrotron exponential spectral decay above the synchrotron frequency. 
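The transition-point formulas of equation (\ref{eq:radtranspt}) can likewise be encoded directly. The helper below is our own illustration; the parameter values used in the checks are arbitrary:

```python
import math

def radiation_transition(S1, S2, T1, T2):
    """Low-frequency transition point T = log(omega_t / 2 gamma^2) of the
    radiation spectrum, with separate branches for S1 < 1 and S1 > 1.
    The marginal case S1 = 1 is excluded (the formula is undefined there)."""
    if S1 == 1.0:
        raise ValueError("S1 = 1 is a marginal case not covered here")
    if S1 < 1.0:
        return math.log(1.0 - S1) / S1 + T1
    return (math.log((S1 - 1.0) * (S2 + 1.0) / (S1 + S2))
            + S1 * T1 + (S1 - 1.0) * T2)
```

For $\mathpzc{S}_1 < 1$ the transition depends only on $\mathpzc{T}_1$, while for $\mathpzc{S}_1 > 1$ both acceleration breaks enter, consistent with the two branches above.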
\section{Analysis of Radiation Spectra} \label{s:radspec} Now we turn to full numerical calculations of the jitter radiation spectrum, generated by successive numerical integrations of equations (\ref{eq:powereq}) and (\ref{eq:accxyfin}) or (\ref{eq:acczfin}). The results for varying $\theta$ are presented in Figure \ref{fig:RadSpecAll}, with data point intervals of 0.2 on the $\log \omega$ scale. Once again we have normalized the spectra such that the low-energy asymptotic value of the $\theta = 2^\circ$ spectrum is unity. The detailed view of the peak region in Figure \ref{fig:RadSpecPeakDetail} shows results in this region for intervals of 0.05 in $\log(\omega)$. The spectral shapes and trends are, as expected, much like the acceleration spectra but broadened and flattened overall. No peak reemerges in the spectra as $\theta$ approaches $\pi$/2. Figure \ref{fig:vFvRadSpecAll} shows the $\nu F_{\nu}$ spectrum such as is commonly presented for GRBs and used in GRB spectral analysis. These jitter radiation results show a significant evolution in the spectrum emitted at different viewing angles relative to the main filamentation axis of the magnetic field spectrum. Note that this viewing angle effect in our calculations is entirely due to particles with velocities directed along the line of sight providing the dominant contribution to the radiation emitted to any particular viewing angle. We are neglecting the angular distribution of the radiation emitted by each particle and using the angle-averaged emissivity for the spectrum emitted in the forward direction by particles with a particular orientation angle relative to the magnetic field filamentation. Like the acceleration spectrum, the radiation spectrum can be generally described in terms of three regions (two spectral breaks), and an amplitude or slope in each.
To conveniently summarize the spectral features and their evolution, we have developed a five-parameter fit which describes the spectral behavior in these three regions, which we designate as Regions R-I, R-II, and R-III to avoid confusion with the Regions I, II, and III as defined earlier for the acceleration spectrum. (Recall that the transition points for the acceleration spectra do not correspond to the apparent transition points in the radiation spectra.) As before, we have chosen our technique to optimize our results for the spectral indices, rather than the spectral transition points. Since for jitter radiation the soft spectral index varies continuously and approaches 0 for low $\omega$, the results of a simple two-region fit to these spectra would depend significantly upon where the lower bound of the data window falls relative to the peak. Within a fixed data window, a two-region spectral fit would tend to produce an artificial reduction in the soft spectral index for spectra with higher-frequency spectral peaks or breaks. Unfortunately there is no simple way to characterize the behavior of the middle range of the spectrum because of the transition from peaked to unpeaked spectra as $\theta$ varies. Even in unpeaked spectra, the extent and curve of the transition region between the flat low $\omega^{\prime}$ part of the spectrum and the strongly negatively-sloped high $\omega^{\prime}$ part of the spectrum varies substantially. Consequently, we have chosen to still model each unpeaked spectrum as a three-region spectrum rather than solely by its upper and lower asymptotes. We characterize the spectra by defining three lines (each requiring a slope and a reference point) and finding the transition points at which they intersect. \emph{Region R-I (flat, amplitude $A$)}: The low-frequency region R-I is flat, with a slope close to zero. To describe this region, we take our initial calculated amplitude to be $A$, the low-frequency amplitude.
For very small $\theta$ our lower calculation boundary may be insufficient to capture the initial flat part of the spectrum, since the first spectral transition point approaches $-\infty$ as $\theta$ goes to 0. \emph{Region R-II (positive or negative slope $s_1$)}: The intermediate-frequency region R-II may have a positive slope resulting in a peak, or a slight negative slope (of notably smaller magnitude than the slope in region R-III). Since not all the spectra are peaked, we have chosen to avoid using the peak value for our fit. Instead we define the ``drop point'' as the point where the second derivative reaches its minimum value. As the place of largest negative change in slope, this coincides well with the ``knee'' or second break of the function, and is always at slightly higher frequencies than the peak itself. We then find the slope and a reference point in this region by either: \begin{itemize} \item Method a: for peaked spectra, we take the maximum value of the numerical derivative and its associated data point. \item Method b: for unpeaked spectra, we take the average value of the numerical derivative in the region between the drop point and the ``deviation point'' where the spectrum first drops below $A-0.01$ (this corresponds to a deviation of about 1\% from its original value). Our reference point is the data point halfway between, or next highest to halfway between, the deviation and drop points. \end{itemize} \emph{Region R-III (negative slope, defined as $-s_2$)}: The high-frequency region R-III has a large negative slope compared to the rest of the function.
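The drop-point and deviation-point definitions above are straightforward to implement numerically; the following is a minimal sketch, assuming the spectrum is tabulated as log power on a grid of log frequencies (the function and array names are ours, not from the paper):

```python
import numpy as np

def drop_and_deviation_points(logw, logF, A, tol=0.01):
    """Locate the drop point (minimum of the numerical second derivative
    of log power vs. log frequency, i.e. the largest negative change in
    slope) and the deviation point (first bin where the spectrum falls
    below A - tol)."""
    d1 = np.gradient(logF, logw)          # numerical first derivative (slope)
    d2 = np.gradient(d1, logw)            # numerical second derivative
    i_drop = int(np.argmin(d2))           # the "knee" of the spectrum
    below = np.nonzero(logF < A - tol)[0]
    i_dev = int(below[0]) if below.size else 0
    return i_dev, i_drop
```

The drop point is then used as a common reference for both peaked and unpeaked spectra, and the deviation point bounds the averaging region of Method b.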
This slope is still changing over the region close to the second spectral break that we are considering, so we determine a representative slope by calculating the slope of a line between the drop point, which is the minimum of the numerical second derivative, and the higher-frequency point at which the absolute value of the numerical second derivative is smallest (the data point closest to where the second derivative crosses zero in this region). These points are well-defined for all our radiation spectra as long as the calculation boundary extends a couple of orders of magnitude in $e$ above the drop point. Either point may be used as a reference point in this region. The first spectral transition point $\tau_1$ is obtained by solving for the intersection of the lines defined in Regions R-I and R-II, and the second spectral transition point $\tau_2$ is obtained by solving for the intersection of the lines in Regions R-II and R-III. We have chosen to work with the $F_{\nu}$ spectrum because of the convenience of its distinctive flat (spectral index of 0) initial amplitude at very low frequencies, but it is easy enough to translate $F_{\nu}$ spectral features into spectral features of the $\nu F_{\nu}$ spectrum or the photon spectrum $N(E)$, as the spectral indices will simply be increased or decreased by 1 and the transition points between the power-law regions will roughly coincide, with a slight shift based on normalization. In terms of the Band function fit commonly used for GRB spectra \citep{band}, the relation between the high-energy spectral indices is $\beta_{Band} = s_2-1$. The relation between the low-energy spectral indices is complicated by the fact that the Band function is a two-region fit and not sensitive to multiple spectral indices below the spectral peak; consequently $\alpha_{Band}$ will range between $-1$ and $s_1-1$ depending on where the data fitting window falls relative to our first spectral break $\tau_1$.
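Assembling the five-parameter fit from the three fitted lines then reduces to elementary line intersections in log-log space; a minimal sketch (the function names are ours), with each region's line specified by a slope and a reference point $(\log\omega_0, \log F_0)$:

```python
def intersect(slope1, p1, slope2, p2):
    """Abscissa of the intersection of two lines in log-log space, each
    given as a slope and a reference point (x0, y0)."""
    x1, y1 = p1
    x2, y2 = p2
    # Solve y1 + slope1*(x - x1) = y2 + slope2*(x - x2) for x.
    return ((y2 - slope2 * x2) - (y1 - slope1 * x1)) / (slope1 - slope2)

def transition_points(A, s1, ref1, s2, ref2):
    """tau1: the flat Region R-I line at amplitude A meets the Region
    R-II line of slope s1; tau2: the Region R-II line meets the Region
    R-III line of slope -s2."""
    tau1 = intersect(0.0, (0.0, A), s1, ref1)
    tau2 = intersect(s1, ref1, -s2, ref2)
    return tau1, tau2
```

For Region R-I the slope is zero, so only the amplitude $A$ enters; the five fit parameters are then $A$, $s_1$, $s_2$, and the derived transition points $\tau_1$ and $\tau_2$.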
The $\nu F_{\nu}$ peak energy, which is $E_p$ in the Band function, will correspond to a slope of $-1$ in the $F_{\nu}$ spectrum, and will lie roughly in the vicinity of the second spectral break $\tau_2$ in our fit. Figures \ref{fig:fullfitamp} - \ref{fig:fullfits2} show spectral fit results obtained using our above technique on the radiation spectra. We have also applied the same technique to the acceleration spectra presented in section \ref{s:accspec} and plotted them for comparison. In addition to data resolution effects, some discontinuity in fitting the peaked vs. unpeaked form of the spectrum is unavoidable and is reflected in our results. Figures \ref{fig:fullfitamp} and \ref{fig:fullfits2} indicate that the amplitude $A$ and the high-frequency spectral index $s_2$ are close in both their values and their evolution with $\theta$ for the two types of spectra. The mid-range spectral index $s_1$ varies similarly in Fig. \ref{fig:fullfits1} for both spectra, but appears to approach different asymptotic values as it approaches $\theta = 0$ and $\theta = \pi/2$, clearly showing the expected $s_1<1$ limiting behavior. The second spectral break $\tau_2$ for the radiation spectra tends to be about half a power of $e$ lower than the second spectral break for the acceleration case, but shows similar evolution with $\theta$ in both cases. Figure \ref{fig:fullfitdroppk} shows the angular dependence of the spectral peak in both our acceleration and radiation $F_{\nu}$ spectra, the $\nu F_{\nu}$ spectral peak (peak data point in Figure \ref{fig:vFvRadSpecAll}), and the drop point, which we have defined as the minimum in the numerical second derivative. We clearly see the usefulness of the drop point in tracking the spectral behavior across the full range of $\theta$, and that it closely tracks the behavior of the $\nu F_{\nu}$ spectral peak ($E_p$).
Both Figures \ref{fig:fullfitdroppk} and \ref{fig:fullfitpkheight} clearly show the re-emergence of peaked acceleration spectra as $\theta$ approaches $\pi/2$ and the lack of peaked radiation spectra for similar values of $\theta$. \section{Spectral Features and Exploration of the Spectral Parameter Space} \label{s:sfvarparam} We have explored the influence of changes in the magnetic field spectral parameters on the acceleration experienced by the particle and hence its resulting radiative profile. Sections \ref{s:accspec} and \ref{s:radspec} presented the acceleration and radiation spectra calculated from magnetic field spectra of the form given in Equations (\ref{eq:perpfspec}) and (\ref{eq:pllfspec}) with our original choice of parameters $\alpha$ = $\alpha_{\perp}$ = $\alpha_{\parallel}$ = 2.0, $\beta$ = $\beta_{\perp}$ = $\beta_{\parallel}$ = 1.5, $\kappa$ = $\kappa_{\perp}$ = $\kappa_{\parallel}$ = 10. In this section, we present the results of varying these parameters. We vary the joint parameters $\alpha$, $\beta$, and $\kappa$ in the parallel and perpendicular magnetic field spectra. We also vary the parameters $\alpha_{\parallel}$, $\alpha_{\perp}$, $\beta_{\parallel}$, and $\beta_{\perp}$ individually. Finally, we vary the ratio $K = \kappa_{\perp}/\kappa_{\parallel}$. For each variation of the initial parameters, we have calculated the radiation spectrum for three representative angles at $\theta$ = $10^o$, $45^o$, and $80^o$. The results are presented according to their impact on the characteristics of the radiation spectrum as developed in section \ref{s:radspec}, namely the initial amplitude $A$, the spectral breaks $\tau_1$ and $\tau_2$, and the spectral indices $s_1$ and $s_2$. We also present the results for the peak ``strength'', the height of the spectral peak above the initial amplitude.
The spectra are divided by $\langle B^2\rangle \propto \int f(\kappa_{\parallel})f(\kappa_{\perp})\,d^3k \propto k_0^{-2(\beta_{\parallel}+\beta_{\perp})}$ to appropriately normalize the amplitudes relative to one another, but in all cases we have arbitrarily normalized the final spectra such that the low-energy asymptotic value of the $\theta = 10$ spectrum with our original choice of parameters is unity (zero on the logarithmic scale). As in Section \ref{s:radspec}, the initial, low-frequency amplitude $A$ is the first calculated value of the angle-averaged radiative power emitted per frequency $dW/d\omega$. This value is generally a good approximation for the asymptotic value of the function as it approaches lower $\omega$, though it may deviate somewhat from this value for $\theta$ approaching 0, as the spectrum becomes sloped rather than flat at our lower calculation boundary in $\omega$. Among the resulting figures \ref{fig:alphaamp}-\ref{fig:ampcomp}, variations in the magnetic field parameters $\kappa_i$ produce the largest effect on the low-frequency amplitude, causing changes of about 4 orders of magnitude in $e$ when varied individually via the ratio $K = \kappa_{\perp}/\kappa_{\parallel}$, and up to 7 orders of magnitude in $e$ when varied together as $\kappa = \kappa_{\parallel} = \kappa_{\perp}$. Variations with changes in the magnetic field spectral indices $\alpha_i$ and $\beta_i$ are small in comparison, on the scale of about 1-2 orders of magnitude. The amplitude increases with increasing $K$ for $\theta = 10^o$ and generally decreases with increasing $K$ for $\theta=80^o$; thus, it increases when $\kappa_{\perp}$ dominates at small $\theta$ and when $\kappa_{\parallel}$ dominates at large $\theta$. The mid-range spectral index in peaked spectra is the maximum slope below the peak, as determined by taking the numerical first derivative of our calculated values.
For unpeaked spectra, we find the spectral index as the average slope between the point at which the graph deviates by more than 0.01 from the initial amplitude $A$ and the drop point at which the numerical second derivative reaches a minimum. Figures \ref{fig:alphas1}-\ref{fig:kappas1} present the effect of magnetic field parameter variations on $s_1$. As can be seen in Figure \ref{fig:s1comp}, the mid-range spectral index is strongly affected by variations in both the parameters $\beta_i$, especially for $\theta = 10^o$, and in the relative strength of the $\kappa_i$ values. The peaked $\theta = 10^o$ spectrum is notably more sensitive to magnetic field variations than the unpeaked $45^o$ and $80^o$ spectra. The ratio $K = \kappa_{\perp}/\kappa_{\parallel}$ has the largest influence at all three representative viewing angles, with $s_1$ increasing as $\kappa_{\parallel}>\kappa_{\perp}$. We note that even at $\theta$ approaching $\pi/2$, we obtain a positive slope (and hence a peaked spectrum) for $K = 1/10$. For both peaked and unpeaked spectra, the high-frequency spectral index $-s_2$ is determined by taking the slope between the drop point and the point above the peak at which the absolute value of the second derivative is smallest (i.e., the closest data point to where the second derivative crosses zero). Figures \ref{fig:alphas2} - \ref{fig:kappas2} show the effects of variations in the magnetic field parameters on $-s_2$. We find that, as expected analytically, this spectral index is primarily influenced by the magnetic field parameters $\beta_i$. In particular, $s_2$ is most strongly influenced by $\beta_{\perp}$, the high-wavenumber spectral index of the magnetic field spectrum transverse to the current filamentation. As seen in figure \ref{fig:betas2}, $\beta_{\parallel}$ affects $s_2$ only at small angles $\theta$, and its influence even then is less than that of varying $\beta_{\perp}$.
The apparently strong influence of $\kappa$ is largely an artificial effect: the $\kappa$ parameter's strong shifting of the function towards higher frequencies (as indicated in our analysis of the spectral breaks below) interferes with the calculation of $s_2$ by shifting the absolute minimum of the second derivative outside our calculation boundaries. This causes an artificial reduction in the steepness of the slope for $\kappa = 100$, as evident also in Figure \ref{fig:kappas2}. We calculate the first spectral break (i.e., transition point) as the intersection between the line $\log \left\langle \left| \mathbf{w}_{\omega^{\prime}} \right|^2 \right\rangle = A$ and the line of slope $s_1$ through the point of maximum positive slope for peaked spectra, or through the data point in the middle of the range over which we averaged to obtain slope $s_1$ for unpeaked spectra. (If the middle of the range does not fall on a data point, we take the next larger data point.) We find, as shown in figures \ref{fig:alphatp1}-\ref{fig:tp1comp}, that the first spectral break is strongly influenced by changes in $\alpha$ and $\kappa$ or the $\kappa$-ratio $K$. The break position shifts to higher frequency by about an order of magnitude in $e$ as we increase $\alpha$ from 1 to 10 jointly in the parallel and perpendicular magnetic field spectra. In varying the $\alpha_i$ separately we see that $\alpha_{\parallel}$ has a larger influence at $\theta = 10^o$ and $\alpha_{\perp}$ has a larger influence at $\theta = 80^o$. Increasing the $\kappa_i$ jointly by powers of 10 shifts the first spectral break to higher frequencies by roughly 4 orders of magnitude in $e$. Varying the $\kappa$ parameters relative to one another results in a similarly strong shift, towards higher frequencies for $\kappa_{\perp} > \kappa_{\parallel}$.
The second spectral break $\tau_2$ is calculated as the intersection between a line of slope $s_1$ through the point of maximum positive slope (for peaked spectra) or through the mid-point of the averaging region (for unpeaked spectra), and the line of slope $-s_2$ through the ``drop point'' at which the second derivative reaches a minimum (i.e., the largest negative change in the slope). Our results (in figures \ref{fig:alphatp2}-\ref{fig:tp2comp}) indicate that the second transition point is most strongly influenced by the $\kappa_i$, varied jointly or via the ratio $K$. The low-wavenumber magnetic field spectral index also demonstrates a fairly strong influence, with larger $\alpha$ shifting $\tau_2$ to higher frequencies. A comparison of the influence of $\alpha$ on the two spectral break points (as seen in Figures \ref{fig:alphatp1} and \ref{fig:alphatp2}) indicates a very similar shift in both break points; thus increasing $\alpha$ shifts the entire spectrum towards higher frequencies. Figures \ref{fig:alphapkheight}-\ref{fig:kappapkheight} show the variation in the peak strength (which we have defined as the height of the spectral peak above the low-frequency amplitude $A$) with changes in the magnetic field parameters. For $\theta=10^o$, the only one of our three representative angles that has a peaked spectrum for $K=1$, the peak strength increases with increasing $\alpha$ and $\beta$. Individually, increasing $\beta_{\perp}$ has the largest effect in increasing the peak strength, while increasing $\alpha_{\perp}$ lowers it. Similarly, increasing $\alpha_{\parallel}$ increases the peak strength while increasing $\beta_{\parallel}$ lowers it. The largest effect overall is produced by variation of the ratio $K$ between the perpendicular and parallel field parameters $\kappa_i$. For $\kappa_{\parallel} > \kappa_{\perp}$ (i.e.
$K<1$), the peak strength appears to persist to higher angles $\theta$, while for $\kappa_{\perp}>\kappa_{\parallel}$ the peak can be small or non-existent even at $\theta = 10^o$. Thus the ratio between $\kappa_{\perp}$ and $\kappa_{\parallel}$, the respective peaks of the magnetic field perpendicular and parallel spectra, strongly influences the progression of the spectral evolution between its $\theta$ = 0 and $\theta$ = $\pi/2$ limiting values, as expected from our earlier analysis in Section \ref{s:accspec}. We have seen that relatively minor changes in the magnetic field spectra can produce very significant effects upon the jitter radiation spectra, particularly in the appearance of the spectral peak or break region. Furthermore, while we have included in this section only spectra from a few representative viewing angles $\theta$, the angular dependence demonstrated suggests that the connection of such features to the transverse or parallel magnetic field spectra can be tested by observing their variation with viewing angle. \section{Conclusions} We have calculated the angle-averaged power spectra of jitter radiation emitted by {\em a single relativistic electron} undergoing small Lorentz-force accelerations transverse to its overall velocity. Note that the obtained spectra are equivalent to the ensemble-averaged spectra per one electron from a collection of monoenergetic relativistic electrons. The radiation spectra are calculated using a smoothly connected broken power-law model of a magnetic field mimicking the structure of magnetic fields generated by the Weibel instability. The shapes of the resulting jitter radiation spectra are shown to depend on the magnetic field spatial spectrum and to vary with the angle $\theta$ of the electron velocity (being also the line of sight) with respect to the direction of the field anisotropy ($z$-axis). 
The effect of varying parameters in the magnetic field spectra has been explored and indicates that the jitter radiation spectral features, such as the strength of the spectral peak or the extent of a sloped transition region, are quite sensitive to the parameters controlling the magnetic field spectra. Despite the high sensitivity of the jitter radiation spectra to the magnetic field spatial spectrum or, in general, the field correlation tensor, $K_{ij}({\bf k})=B_{\bf k}^i B_{\bf k}^{*j}$ \citep{M06}, one can draw some fairly robust conclusions. When the parallel and perpendicular magnetic field spectra are similar, one has just four essential parameters: their low-$k$ and high-$k$ spectral slopes, $\alpha>1/2$ and $\beta>0$, the peak $\kappa$ representing a typical correlation length, and the viewing angle $\theta$ of the line of sight with respect to the magnetic filament direction. The power (i.e., $F_\nu$) spectrum produced by monoenergetic electrons moving towards the observer with Lorentz factor $\gamma$, in general, has three power-law segments: a flat low-energy part, an intermediate-energy region which rises or slightly falls with a slope of magnitude less than unity (the exact value depending on $\theta$), and a more steeply falling high-energy part with slope between $-2\beta$ and $-2\beta+1$, again depending on $\theta$. The shape of the spectrum changes significantly with the angle $\theta$ between the radiating particle's velocity and the axis of the current filamentation generated by the counterstreaming Weibel instability. As $\theta \rightarrow 0$, the low-frequency spectral break $\tau_1$ approaches $-\infty$ and the maximum spectral slope (the mid-range spectral index $s_1$) approaches the value of 1 (the trend of our results agreeing well with the $\theta = 0$ case in \citet{M06}). As $\theta$ increases, the spectral peak weakens as $s_1$ decreases and $\tau_1$ shifts towards the peak region.
The disappearance of the spectral peak at some particular $\theta$ appears to be a result of both these spectral changes, and there is an extended transition region between the low-energy and high-energy power law trends. Consequently, we find that both the peaked and unpeaked spectra are well described by a three-region fit. Two-region fits are likely to miss out on the variation in the spectral slope below the peak at small $\theta$; consequently the resulting low-energy spectral index will be extremely sensitive to where the peak falls relative to the lower bound of a measured spectral window. This will be true even if the low-energy spectral index is taken at a common energy, as in the ``effective'' low-energy spectral index $\alpha_{eff}$ commonly taken as the tangential slope of the logarithmic spectrum at 25 keV. In comparing the radiation spectra for the full range of $\theta$, we have found that the ``drop point'', which we determined as the minimum of the numerical second derivative of the logarithmic data (i.e. the largest negative change in the spectral slope) serves as a good common reference point for both peaked and unpeaked spectra; in addition, the ``drop point'' in the radiation power spectrum evolves with the angle $\theta$ much like the $\nu F_{\nu}$ spectral peak energy $E_p$. In section \ref{s:acctorad}, we developed in detail the relation between the radiation spectrum and the underlying Fourier spectrum of the particle's acceleration. In particular, we find that the radiation spectrum has much the same shape as the acceleration spectrum but that the apparent transition points in the two spectra do not simply coincide for most angles of $\theta$. Furthermore, although the acceleration spectrum sees the re-emergence of a spectral peak for $\theta \rightarrow \pi/2$, the radiation spectrum does not. 
We have also demonstrated that a simple fit to the acceleration spectrum allows for the generation of a model radiation spectrum which approximates the realistic one to about 10\% accuracy. We have found that variations in the magnetic field spectral parameters influence the final radiation spectrum by controlling the width and peak positions of the functions within the integrand and the extent to which this directly modifies the effect of the offset, which is proportional to $\omega^{\prime}$. If we consider the general progression of the radiation spectra from being strongly peaked at small $\theta$ to unpeaked at $\theta$ near $\pi/2$, the trends shown here indicate that the speed of the progression of the spectral shape between the two extremes depends on the relative strengths of the parameters in the magnetic field spectra transverse and parallel to the shock front. We have confirmed that the jitter radiation high-energy spectral index is determined primarily by the high-$k$ magnetic field spectral index $\beta$, which otherwise has little influence on the spectrum. The low-$k$ magnetic field spectral index $\alpha$ is shown to have a significant influence on the low-energy and mid-range portions of the radiation spectrum when varied jointly in both magnetic field spectra, although this influence is substantially reduced when only one $\alpha_i$ is varied. The parameters $\kappa_\perp$ and $\kappa_\parallel$ represent the dimensionless correlation lengths of the magnetic field distribution in the direction along the Weibel current filaments (and the direction of shock propagation in the case of a GRB) and in the perpendicular plane (parallel to the shock plane for a GRB). We find that increasing $\kappa_{\perp}$ and $\kappa_{\parallel}$ jointly shifts the entire spectrum to higher energies with relatively little effect on the spectral shape.
Thus, as expected, the location of the spectral peak and break energies (and the corresponding peak energy $E_p$ of the $\nu F_{\nu}$ spectrum) are determined primarily by the correlation length of the magnetic field turbulence. The progression of the spectral shape between the head-on and edge-on cases is sensitive to the variation of the $\kappa$ parameter in one function relative to the other, such that for a particular viewing angle $\theta$ either peaked or unpeaked spectra can be attained via modification of the $\kappa$ ratio $K$. In the extreme case where $\kappa_\parallel$ is 2 orders of magnitude larger than $\kappa_\perp$, we recover peaked spectra for angles as high as $\theta\sim80$ degrees. It is also notable that the spectral peak and transition points undergo relatively little horizontal shift as $K$ varies when $\theta$ = 10 degrees, but shift quite dramatically (3-4 orders of magnitude) during this variation for $\theta$ = 80 degrees. We shall summarize the most notable properties of our jitter radiation results with particular significance for interpreting radiation spectra from astrophysical sources. First, the jitter radiation spectra are significantly harder than synchrotron spectra in the region just below the spectral peak. This may be a significant mechanism in astrophysical sources such as gamma-ray bursts, where a substantial population is seen to violate the synchrotron limit. Second, both the maximum slope in the region below the peak and the extent of this sloped region (between the low-frequency spectral break and the peak) are strongly influenced by $\theta$, with the largest slope and largest extent of the sloped region at small $\theta$. The angle $\theta_{np}$ at which the peak disappears is determined primarily by the ratio between the magnetic field correlation lengths perpendicular to and along the filamentation axis.
Third, the position of the spectral peak represents the characteristic correlation length of the magnetic field, which in the case of Weibel turbulence depends on the density as $n^{1/2}$. This is at odds with synchrotron radiation, in which the spectral peak measures the magnetic field strength. This result is important for accurate interpretation of the observed spectra as well. Fourth, the high-energy part of the spectrum is represented by a power law, even though the electrons are monoenergetic. Thus, no power-law distribution of Fermi-accelerated electrons is required to produce the observed power-law spectra in prompt GRBs. This has important implications for the interpretation of the observed data, provided the electrons are radiating in the jitter regime. Fifth, the angular dependence exhibited, combined with relativistic kinematics of a curved shock front, can explain certain puzzling features of the GRB prompt spectral variability \citep{MPR09}. The sensitivity of the jitter spectra to the magnetic field anisotropy makes them a possible tool for diagnostics in sites of small-scale magnetic field turbulence and also a basis for analysis of astrophysical sources where the magnetic field orientations relative to the direction of observation are changing over time. Although we often appeal to GRBs as sites where jitter radiation can likely be produced, we cannot exclude other astrophysical objects, e.g., jets in active galactic nuclei, early supernova shocks and other violent sources, provided that small-scale magnetic fields may be produced and maintained in them. Last but not least, one can use jitter radiation as a diagnostic tool in laser-plasma interaction experiments, e.g., Hercules \citep{GRB+Hercules08,HerculesKU07}, aimed at studies of Weibel turbulence and conditions in GRBs within the Laboratory Astrophysics and High-Energy-Density Physics programs.
\acknowledgements This work has been supported by NASA, NSF, and DOE via grants NNX07AJ50G, NNX08AL39G, AST-0708213, DE-FG02-04ER54790, and DE-FG02-07ER54940.
\section{Introduction} \label{sec:introduction} Anisotropic diffusion, in which the rate of diffusion of some quantity is faster in certain directions than others, occurs in many different physical systems and applications. Examples include diffusion in geological formations~\cite{Saadatfar2002}, thermal properties of structural materials and crystals~\cite{Dian-lin1991}, image processing~\cite{Perona1990,Caselles1998,Mrazek2001}, biological systems, and plasma physics. Diffusion Tensor Magnetic Resonance Imaging makes use of anisotropic diffusion to distinguish different types of tissue as a medical diagnostic~\cite{Basser2002}. In plasma physics, the collision operator gives rise to anisotropic diffusion in velocity space, as does the quasilinear operator describing the interaction of particles with waves~\cite{Stix1992}. In magnetized plasmas, thermal conduction can be much more rapid along the magnetic field line than across it; this will be the main application in mind for this paper. Centered finite differencing is commonly used to implement anisotropic thermal conduction in fusion and astrophysical plasmas~\cite{Gunter2005,Parrish2005,Sharma2006}. Methods based on finite differencing~\cite{Gunter2005} and higher order finite elements~\cite{Sovinec2004} are able to simulate highly anisotropic thermal conduction~($\chi_\parallel/\chi_\perp \sim 10^9$, where $\chi_\parallel$ and $\chi_\perp$ are the parallel and perpendicular conduction coefficients, respectively) in laboratory plasmas. ``Symmetric'' differencing introduced in~\cite{Gunter2005} is particularly simple and has some desirable properties: perpendicular numerical diffusion is independent of the parallel conduction coefficient $\chi_\parallel$, perpendicular numerical diffusion is small, and the numerical heat flux operator is self-adjoint. While in the symmetric method the components of the heat flux are located at cell corners, they are located at the cell faces in the ``asymmetric'' method.
The asymmetric method has been used to study convection in anisotropically conducting plasmas~\cite{Parrish2005} and in simulations of collisionless accretion disks~\cite{Sharma2006}. An important fact that has been overlooked is that the methods based on centered differencing can give heat fluxes inconsistent with the second law of thermodynamics, i.e., heat can flow from lower to higher temperatures. This accentuates temperature extrema and may result in negative temperatures at some grid points, causing numerical instabilities as the sound speed becomes imaginary. Also, in image processing applications it is required that no new spurious extrema are generated by anisotropic diffusion~\cite{Perona1990}, making centered differencing unviable. We show that both the symmetric and asymmetric methods can be modified so that temperature extrema are not accentuated. The components of the anisotropic heat flux consist of two contributions: the normal term and the transverse term (see \S 2). The normal term for the asymmetric method (like isotropic conduction) always gives heat flux from higher to lower temperatures, but the transverse term can be of either sign. The transverse term can be ``limited'' to ensure that temperature extrema are not accentuated. We use slope limiters, analogous to those used in second order methods for hyperbolic problems \cite{VanLeer1979,Leveque2002}, to limit the transverse heat fluxes. For the symmetric method, where the primary heat fluxes are located at cell corners, both the normal and transverse terms need to be limited. Limiting based on the entropy-like function ($\dot{s}^* \equiv -\vec{q} \cdot \vec{\nabla} T \geq 0$) is also discussed. Limiting introduces numerical diffusion in the perpendicular direction, and the desirable property of the symmetric method that perpendicular pollution is independent of $\chi_\parallel$ no longer holds.
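For concreteness, a standard form of the monotonized central (MC) slope limiter from the hyperbolic-schemes literature \cite{VanLeer1979,Leveque2002}, applied here to two one-sided temperature differences; this is a generic sketch of the technique rather than the paper's exact implementation:

```python
def mc_limiter(a, b):
    """Monotonized central limiter of two one-sided differences a and b:
    minmod of 2a, 2b, and (a+b)/2. Returns 0 when a and b differ in sign
    (a local extremum), so a limited transverse heat flux built from it
    cannot push heat from lower to higher temperatures there."""
    if a * b <= 0.0:          # local extremum: no transverse contribution
        return 0.0
    s = 1.0 if a > 0.0 else -1.0
    return s * min(2.0 * abs(a), 2.0 * abs(b), 0.5 * abs(a + b))
```

The key property for positivity is the first branch: at a temperature extremum the limited slope vanishes, so the transverse term cannot accentuate the extremum.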
The ratio of the perpendicular numerical diffusion to the physical parallel conductivity with a Monotonized Central (MC; see \cite{Leveque2002} for a discussion of slope limiters) limiter is $\chi_{\perp, {\rm num}} / \chi_\parallel \sim 10^{-3}$ for a modest number of grid points ($\sim 100$ in each direction). This clearly is not adequate for simulating laboratory plasmas, which require $\chi_\parallel/\chi_\perp \sim 10^9$, because the perpendicular numerical diffusion will swamp the true perpendicular diffusion. For laboratory plasmas the temperature profile is relatively smooth and the negative temperature problem does not arise, so symmetric differencing \cite{Gunter2005} or higher order finite elements \cite{Sovinec2004} may be adequate. However, astrophysical plasmas can have sharp temperature gradients, e.g., the transition region of the sun separating the hot corona and the much cooler chromosphere, or the disk-corona interface in accretion flows. In these applications centered differencing may lead to negative temperatures giving rise to numerical instabilities. Limiting introduces somewhat larger perpendicular numerical diffusion but will ensure that heat flows in the correct direction at temperature extrema; hence negative temperatures are avoided. Even a modest anisotropy in conduction~($\chi_\parallel/\chi_\perp \lesssim 10^3$) should be enough to study the qualitatively new effects of anisotropic conduction on dilute astrophysical plasmas~\cite{Parrish2005}, but the positivity of temperature is absolutely essential for numerical robustness. The paper is organized as follows. In \S 2 we describe the heat equation with anisotropic conduction and its numerical implementation using asymmetric and symmetric centered differencing. In \S 3 we present simple test problems for which centered differencing results in negative temperatures. Limiting as a method to avoid unphysical behavior at temperature extrema is introduced in \S 4 \& \S 5.
Slope limiters are discussed in \S 4 and limiting based on the entropy-like condition in \S 5. Some mathematical properties of limited methods are discussed in \S 6. In \S 7 we compare different methods and their convergence properties with some test problems. We conclude in \S 8. \section{Anisotropic thermal conduction} Thermal conduction in plasmas with the mean free path much larger than the gyroradius is anisotropic with respect to the magnetic field lines; heat flows primarily along the field lines with little conduction in the perpendicular direction~\cite{Braginskii1965}. In such cases, a divergence of anisotropic heat flux is added to the energy equation. Thermal conduction can modify the characteristic structure of the magnetohydrodynamic (MHD) equations making it difficult to incorporate into upwind methods. However, thermal conduction can be evolved independently of the MHD equations using operator splitting, as done in~\cite{Parrish2005}. The equation for the evolution of internal energy density due to anisotropic thermal conduction is \begin{eqnarray} \label{eq:anisotropic_conduction} \frac{\partial e}{\partial t} &=& - \vec{\nabla} \cdot \vec{q}, \\ \vec{q} &=& - \vec{b} n (\chi_\parallel-\chi_\perp) \nabla_\parallel T - n \chi_\perp \vec{\nabla} T, \end{eqnarray} where $e$ is the internal energy per unit volume, $\vec{q}$ is the heat flux, $\chi_\parallel$ and $\chi_\perp$ are the coefficients of parallel and perpendicular conduction with respect to the local field direction~(with dimensions $L^2T^{-1}$), $n$ is the number density, $T \equiv (\gamma-1)e/n$ is the temperature with $\gamma=5/3$ as the ratio of specific heats for an ideal gas, $\vec{b}$ is the unit vector along the field line, and $\nabla_\parallel=\vec{b} \cdot \vec{\nabla}$ represents the derivative along the magnetic field direction. Throughout the paper we use $\gamma=2$ to avoid factors of $2/3$ and $5/3$; results of the paper are not affected by this choice. 
\begin{figure} \centering \includegraphics[width=3in,height=3in]{gridnew.eps} \caption{A staggered grid with scalars $S_{i,j}$ (e.g., $n$, $e$, and $T$) located at cell centers. The components of vectors, e.g., $\vec{b}$ and $\vec{q}$, are located at cell faces. Note, however, that for the symmetric method the primary heat fluxes are located at the cell corners~\cite{Gunter2005}, and the face centered flux is obtained by interpolation (see \S 2.2).\label{fig:fig1}} \end{figure} We consider a staggered grid with the scalars like $n$, $e$, and $T$ located at the cell centers and the components of vectors, e.g., $\vec{b}$ and $\vec{q}$, located at the cell faces~\cite{Stone1992}, as shown in Figure \ref{fig:fig1}. The face centered components of vectors naturally represent the flux of scalars out of a cell. All the methods presented here are conservative and fully explicit. It should be possible to take longer time steps with an implicit generalization of the schemes discussed in the paper, but the construction of fast implicit schemes for anisotropic conduction is non-trivial. In two dimensions the internal energy density is updated as follows, \begin{equation} \label{eq:e_evolve} e^{n+1}_{i,j} = e^{n}_{i,j} - \Delta t \left[ \frac{q^n_{x,i+1/2,j}-q^n_{x,i-1/2,j}}{\Delta x} + \frac{q^n_{y,i,j+1/2}-q^n_{y,i,j-1/2}}{\Delta y} \right], \end{equation} where the time step $\Delta t$ satisfies the stability condition \cite{Richtmyer1967} (ignoring density variations) \begin{equation} \label{eq:TimeStep} \Delta t \leq \frac{\mbox{min}[\Delta x^2, \Delta y^2]}{2(\chi_\parallel + \chi_\perp)}, \end{equation} where $\Delta x$ and $\Delta y$ are the grid sizes in the two directions. The generalization to three dimensions is straightforward. The methods we discuss differ in the way heat fluxes are calculated at the faces. In the rest of this section we describe the methods based on asymmetric and symmetric centered differencing, following \cite{Gunter2005}.
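As an illustration, the conservative update of Eq. (\ref{eq:e_evolve}) with the stable time step of Eq. (\ref{eq:TimeStep}) can be sketched as follows (a minimal Python sketch; the array shapes and function name are our own, and the face-centered fluxes are assumed to come from any of the discretizations of this section):

```python
import numpy as np

def advance(e, flux_x, flux_y, dx, dy, chi_par, chi_perp):
    """One explicit conservative step of Eq. (e_evolve).

    e has shape (nx, ny); flux_x has shape (nx+1, ny) and flux_y has
    shape (nx, ny+1), holding the face-centered heat fluxes q_x and q_y.
    """
    # Stability condition of Eq. (TimeStep), ignoring density variations.
    dt = min(dx**2, dy**2) / (2.0 * (chi_par + chi_perp))
    # Discrete divergence of the heat flux, cell by cell.
    div_q = ((flux_x[1:, :] - flux_x[:-1, :]) / dx
             + (flux_y[:, 1:] - flux_y[:, :-1]) / dy)
    return e - dt * div_q, dt
```

Because the update is in conservation form, the total internal energy changes only through the boundary fluxes, whichever interpolation is used for the face-centered $q$'s.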
From here on $\chi$ will represent parallel conduction coefficient in cases where an explicit perpendicular diffusion is not considered~(i.e., the only perpendicular diffusion is due to numerical effects). \subsection{Centered asymmetric scheme} \begin{figure} \centering \psfrag{A}{\large{$(n\chi)_{-1/2}$}} \psfrag{B}{\large{$(n\chi)_{1/2}$}} \psfrag{C}{\large{$T_0$}} \psfrag{D}{\large{$T_{1}$}} \psfrag{E}{\large{$T_{-1}$}} \includegraphics[width=3in,height=3in]{harmonic_avg.eps} \caption{This figure provides a motivation for using a harmonic average for $\overline{n}\overline{\chi}$. Consider a 1-D case with the temperatures and $n\chi$'s as shown in the figure. Given $T_{-1}$ and $T_1$, and the $n\chi$'s at the faces, we want to calculate an average $\overline{n}\overline{\chi}$ between cells $-1$ and $1$. Assumption of a constant heat flux gives, $q_{-1/2}=q_{1/2}=\overline{q}$, i.e., $-(n\chi)_{-1/2} (T_0-T_{-1})/\Delta x = -(n\chi)_{1/2} (T_1-T_0)/\Delta x = -\overline{n}\overline{\chi} (T_1-T_{-1})/ 2 \Delta x$. This immediately gives a harmonic mean, which is weighted towards the smaller of the two arguments, for the interpolation $\overline{n}\overline{\chi}$. \label{fig:fig2}} \end{figure} The heat flux in the $x$- direction (in 2-D), using the asymmetric method is given by \begin{equation} \label{eq:q1_asymmetric} q_{x,i+1/2,j}=- \overline{n}\overline{\chi} b_x \left[ b_x \frac{\partial T}{\partial x} + \overline{b_y} \overline{\frac{\partial T}{\partial y}} \right], \end{equation} where overline represents the variables interpolated to the face at $(i+1/2,j)$. The variables without an overline are naturally located at the face. 
The interpolated quantities at the face are given by simple arithmetic averaging, \begin{eqnarray} \label{eq:asymmetric_bavg} \overline{b_y} &=& (b_{y,i,j-1/2}+b_{y,i+1,j-1/2} + b_{y,i,j+1/2}+b_{y,i+1,j+1/2})/4, \\ \label{eq:asymmetric_Tavg} \overline{\partial T/\partial y} &=& (T_{i,j+1}+T_{i+1,j+1}-T_{i,j-1}-T_{i+1,j-1})/4 \Delta y. \end{eqnarray} We use a harmonic mean to interpolate the product of number density and conductivity \cite{Hyman2002}, \begin{equation} \label{eq:asymmetric_navg} \frac{2}{\overline{n}\overline{\chi}}=\frac{1}{(n\chi)_{i,j}}+\frac{1}{(n\chi)_{i+1,j}}; \end{equation} this is second order accurate for smooth regions, but $\overline{n}\overline{\chi}$ becomes proportional to the minimum of the two $n\chi$'s on either side of the face when the two differ significantly. Figure \ref{fig:fig2} gives the motivation for using a harmonic average. Physically, using a harmonic average preserves the robust result that the heat flux into a region should go to zero as the density in that region goes to zero, as in a thermos bottle using a vacuum for insulation. Harmonic averaging is also necessary for the method to be stable with the time step in Eq. (\ref{eq:TimeStep}). Instead, if we use a simple mean, the stable time step condition becomes severe by a factor $ \sim \mbox{max}[n_{i+1,j},n_{i,j}]/2\mbox{min} [n_{i+1,j}, n_{i,j}]$, which can result in an unacceptably small time step for initial conditions with a large density contrast. Physically, this is because the heat capacity is very small in low density regions, so even a tiny heat flux into that region causes rapid changes in temperature. Analogous expressions can be written for heat flux in other directions. 
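A minimal sketch of the harmonic-mean interpolation of Eq. (\ref{eq:asymmetric_navg}) (the function name is ours):

```python
def harmonic_face_avg(nchi_L, nchi_R):
    """Harmonic mean of n*chi across a face, Eq. (asymmetric_navg).

    Second order accurate for smooth data; weighted towards the smaller
    argument for large contrasts, and zero if either side is evacuated,
    so no heat flows into a vacuum (the thermos-bottle property).
    """
    if nchi_L <= 0.0 or nchi_R <= 0.0:
        return 0.0
    return 2.0 / (1.0 / nchi_L + 1.0 / nchi_R)
```

For a density contrast of $10^6$ the harmonic mean is $\approx 2\,{\rm min}(n\chi)$, while an arithmetic mean would be $\approx \frac{1}{2}{\rm max}(n\chi)$; the latter is what degrades the stable time step.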
\subsection{Centered symmetric scheme} The notion of symmetric differencing was introduced in \cite{Gunter2005}, where primary heat fluxes are located at the cell corners, with \begin{equation} \label{eq:q1_symmetric} q_{x,i+1/2,j+1/2} = -\overline{n}\overline{\chi} \overline{b_x} \left [ \overline{b_x} \overline{\frac{\partial T}{\partial x}} + \overline{b_y} \overline{\frac{\partial T}{\partial y}} \right ], \end{equation} where overline represents the interpolation of variables at the corner given by a simple arithmetic average, \begin{eqnarray} \label{eq:symmetric_bxavg} \overline{b_x} &=& (b_{x,i+1/2,j}+b_{x,i+1/2,j+1})/2, \\ \label{eq:symmetric_byavg} \overline{b_y} &=& (b_{y,i,j+1/2}+b_{y,i+1,j+1/2})/2, \\ \label{eq:symmetric_Tavg} \overline{\partial T/\partial x} &=& (T_{i+1,j}+T_{i+1,j+1}-T_{i,j}-T_{i,j+1})/2 \Delta x, \\ \overline{\partial T/\partial y} &=& (T_{i,j+1}+T_{i+1,j+1}-T_{i,j}-T_{i+1,j})/2 \Delta y. \end{eqnarray} As before (and for the same reasons), a harmonic average is used for the interpolation of $n\chi$, \begin{equation} \label{eq:symmetric_navg} \frac{4}{\overline{n}\overline{\chi}}= \frac{1}{(n\chi)_{i,j}} + \frac{1}{(n\chi)_{i+1,j}} + \frac{1}{(n\chi)_{i,j+1}}+\frac{1}{(n\chi)_{i+1,j+1}}. \end{equation} Analogous expressions can be written for $q_{y,i+1/2,j+1/2}$. The harmonic average here is different from~\cite{Gunter2005}, who use an arithmetic average. Ref.~\cite{Gunter2005} is primarily interested in magnetic fusion applications, where density variations are usually well resolved (shocks are usually not important in magnetic fusion) so arithmetic averaging will work well. But there might be some magnetic fusion cases, such as instabilities in the edge region of a fusion device, where there might be large density variations per grid cell and a harmonic average could be useful. All of the test cases in~\cite{Gunter2005} used a uniform density and so will not be affected by the choice of arithmetic or harmonic average. 
The heat fluxes located at the cell faces, $q_{x,i+1/2,j}$ and $q_{y,i,j+1/2}$, to be used in Eq.~(\ref{eq:e_evolve}) are given by an arithmetic average, \begin{eqnarray} q_{x,i+1/2,j} &=& (q_{x,i+1/2,j+1/2}+q_{x,i+1/2,j-1/2})/2, \\ q_{y,i,j+1/2} &=& (q_{y,i+1/2,j+1/2}+q_{y,i-1/2,j+1/2})/2.\end{eqnarray} As demonstrated in \cite{Gunter2005}, the symmetric heat flux satisfies the self-adjointness property (equivalent to $\dot{s}^* \equiv - \vec{q} \cdot \vec{\nabla} T \geq 0$) at cell corners and has the desirable property that the perpendicular numerical diffusion ($\chi_{\perp,{\rm num}}$) is independent of $\chi_\parallel/\chi_\perp$ (see Figure 6 in~\cite{Gunter2005}). But, as we show later, neither the symmetric nor the asymmetric scheme satisfies the crucial local property that heat must flow from higher to lower temperatures, the violation of which may result in negative temperatures in the presence of large temperature gradients. The heat flux in the $x$- direction $q_x$ consists of two terms: the normal term $q_{xx}=-n \chi b_x^2 \partial T/\partial x$ and the transverse term $q_{xy}=-n \chi b_x b_y \partial T/\partial y$. The asymmetric scheme uses a 2 point stencil to calculate the normal gradient and a 6 point stencil to calculate the transverse gradient, as compared to the symmetric method that uses a 6 point stencil for both (hence the name symmetric). This makes the symmetric method less sensitive to the orientation of the coordinate system with respect to the field lines. \begin{figure} \centering \includegraphics[width=2.95in,height=2.5in]{chessboard.eps} \caption{The symmetric method is unable to diffuse a temperature distribution in a chess-board pattern. The plus ($+$) and minus ($-$) symbols denote two unequal temperatures.
The average of $\partial T/\partial x|_{i+1/2,j}=(T_+ - T_-)/\Delta x$ and $\partial T/\partial x|_{i+1/2,j+1}=(T_- - T_+)/\Delta x$ used to calculate $\partial T/\partial x|_{i+1/2,j+1/2}= (\partial T/\partial x|_{i+1/2,j} + \partial T/\partial x|_{i+1/2,j+1})/2$ vanishes, and similarly $\partial T/\partial y|_{i+1/2,j+1/2}=0$. \label{fig:fig3}} \end{figure} A problem with the symmetric method which is immediately apparent is its inability to diffuse away a chess-board temperature pattern, as $\overline{\partial T/\partial x}$ and $\overline{\partial T/\partial y}$, located at the cell corners, vanish for this initial condition (see Figure \ref{fig:fig3}). \section{Negative temperature with centered differencing} \label{sec:Negative} In this section we present two simple test problems that demonstrate that negative temperatures can arise with both asymmetric and symmetric centered differencing. \subsection{Asymmetric method} \begin{figure} \centering \includegraphics[width=3in,height=3in]{asymmfail_grid.eps} \includegraphics[width=5in,height=4in]{asymmfail_graph.eps} \caption{Test problem to show that the asymmetric method can result in negative temperature. Magnetic field lines are along the diagonal with $b_x=-b_y=1/\sqrt{2}$. With the asymmetric method heat flows out of the third quadrant, which is already a temperature minimum, resulting in a negative temperature $T_{i,j}$. However, due to numerical perpendicular diffusion, at late times the temperature becomes positive again. The temperature at $(i,j)$ is shown for different methods: asymmetric (solid line), symmetric (dotted line), asymmetric and symmetric with slope limiters (dashed line; both give the same result), and symmetric with entropy limiting (dot dashed line).\label{fig:fig4}} \end{figure} Consider a $2 \times 2$ grid with a hot zone ($T=10$) in the first quadrant and a cold temperature ($T=0.1$) in the rest, as shown in Figure \ref{fig:fig4}. The magnetic field is uniform over the box with $b_x=-b_y=1/\sqrt{2}$.
The number density is constant and equal to unity. Reflecting boundary conditions are used for temperature. Using the asymmetric scheme for heat fluxes out of the grid point $(i,j)$ (the third quadrant) gives $q_{x,i-1/2,j}=q_{y,i,j-1/2}=0$, and $q_{x,i+1/2,j} = q_{y,i,j+1/2} = (9.9/8) n \chi/\Delta x$ (where $\Delta x=\Delta y$ is assumed). Thus, heat flows out of the grid point $(i,j)$, which is already a temperature minimum. This results in the temperature becoming negative. Figure \ref{fig:fig4} shows the temperature in the third quadrant vs.\ time for different methods. The asymmetric method gives a negative temperature ($T_{i,j}<0$) for the first few time steps; the temperature eventually becomes positive. All other methods (except the one based on entropy limiting) give positive temperatures at all times for this problem. Methods based on limited temperature gradients will be discussed later. This test demonstrates that the asymmetric method may not be suitable for problems with large temperature gradients because negative temperatures result in numerical instabilities. \subsection{Symmetric method} \label{subsec:symmfail} \begin{figure} \centering \includegraphics[width=3in,height=3in]{symmfail_grid.eps} \includegraphics[width=5in,height=4in]{symmfail_graph.eps} \caption{Test problem for which the symmetric method gives negative temperature at $(i,j)$. The magnetic field is along the $x$- direction, $b_x=1$ and $b_y=0$. With this initial condition, all heat fluxes into $(i,j)$ should vanish and the temperature $T_{i,j}$ should not evolve. All methods except the symmetric method (asymmetric, and slope and entropy limited methods) give a constant temperature $T_{i,j}=0.1$ at all times. But with the symmetric method, the temperature at $(i,j)$ becomes negative due to the heat flux out of the corner $(i-1/2,j+1/2)$. The temperature $T_{i,j}$ eventually becomes equal to the initial value of $0.1$.
\label{fig:fig5}} \end{figure} The symmetric method does not give negative temperature with the test problem of the previous section. In fact, the symmetric method gives the correct result for temperature with no numerical diffusion in the perpendicular direction (zero heat flux out of the grid point $(i,j)$, see Figure \ref{fig:fig4}). Other methods resulted in a temperature increase at $(i,j)$ because of perpendicular numerical diffusion. Here we consider a case where the symmetric method gives negative temperature. As before, consider a $2 \times 2$ grid with a hot zone ($T=10$) in the first quadrant and a cold temperature ($T=0.1$) in the rest; the only difference from the previous test problem is that the magnetic field lines are along the $x$- axis, $b_x=1$ and $b_y=0$ (see Figure \ref{fig:fig5}). Reflecting boundary conditions are used for temperature, as before. Since there is no temperature gradient along the field lines for the grid point $(i,j)$, we do not expect the temperature there to change. While all other methods give a stationary temperature in time, the symmetric method results in a heat flux out of the grid point $(i,j)$ through the corner at $(i-1/2,j+1/2)$. With the initial condition as shown in Figure \ref{fig:fig5}, the only non-vanishing symmetric heat flux out of $(i,j)$ is $q_{x,i-1/2,j+1/2}=- (9.9/2) n\chi /\Delta x$. The only non-vanishing face-centered heat flux for the cell $(i,j)$ is $q_{x,i-1/2,j}=- (9.9/4) n\chi /\Delta x<0$; i.e., heat flows out of $(i,j)$, which is already a temperature minimum. This results in the temperature becoming negative at $(i,j)$, although at late times it becomes equal to the initial temperature at $(i,j)$. This simple test shows that the symmetric method can also give negative temperatures (and associated numerical problems) in the presence of large temperature gradients.
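The flux bookkeeping of the asymmetric test in \S 3.1 is easy to check numerically. The sketch below (our own naming; reflecting boundaries are implemented by clamping indices) reproduces the quoted value $q_{x,i+1/2,j}=(9.9/8)\,n\chi/\Delta x > 0$, i.e., heat leaving the cold corner:

```python
import math

# 2x2 grid of cell-centered temperatures, hot first quadrant (Sec. 3.1 setup).
T = {(0, 0): 0.1, (1, 0): 0.1, (0, 1): 0.1, (1, 1): 10.0}
n_chi, dx, dy = 1.0, 1.0, 1.0
bx, by = 1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)

def T_ref(i, j):
    # Reflecting boundaries: ghost cells mirror the adjacent interior cell.
    return T[(min(max(i, 0), 1), min(max(j, 0), 1))]

# Asymmetric heat flux q_x at the face (i+1/2, j) with (i, j) = (0, 0):
i, j = 0, 0
dTdx = (T_ref(i + 1, j) - T_ref(i, j)) / dx  # normal gradient (zero here)
dTdy_bar = (T_ref(i, j + 1) + T_ref(i + 1, j + 1)
            - T_ref(i, j - 1) - T_ref(i + 1, j - 1)) / (4.0 * dy)
qx = -n_chi * bx * (bx * dTdx + by * dTdy_bar)
print(qx)  # ~9.9/8 > 0: heat flows out of the temperature minimum
```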
\section{Slope limited fluxes} As discussed earlier, the heat flux $q_x$ is composed of two terms: the normal $q_{xx} = -n \chi b_x^2 \partial T/\partial x$ term, and the transverse $q_{xy}=-n\chi b_xb_y \partial T/\partial y$ term. For the asymmetric method the discrete form of the term $q_{xx} = -n \chi b_x^2 \partial T/\partial x$ has the same sign as $-\partial T/\partial x$, and hence guarantees that heat flows from higher to lower temperatures. However, $q_{xy}=-n\chi b_xb_y \partial T/\partial y$ can have an arbitrary sign, and can give rise to heat flowing in the ``wrong" direction. We use slope limiters, analogous to those used for linear reconstruction of variables in numerical simulation of hyperbolic systems \cite{VanLeer1979,Leveque2002}, to ``limit" the transverse terms. Both asymmetric and symmetric methods can be modified with slope limiters. The slope limited heat fluxes ensure that temperature extrema are not accentuated. Thus, unlike the symmetric and asymmetric methods, slope limited methods can never give negative temperatures. \subsection{Limiting the asymmetric method} Since the normal heat flux term $q_{xx}$ is naturally located at the face, no interpolation for $\partial T/\partial x$ is required for its evaluation. However, an interpolation at the $x$- face is required to evaluate $\overline{\partial T/\partial y}$ used in $q_{xy}$ (the term with overlines in Eq. \ref{eq:q1_asymmetric}). The arithmetic average used in Eq. (\ref{eq:asymmetric_Tavg}) for $\overline{\partial T/\partial y}$ to calculate $q_{xy}$ was found to result in heat flowing from lower to higher temperatures (see Figure \ref{fig:fig4}). To remedy this problem we use slope limiters to interpolate temperature gradients in the transverse heat flux term. Slope limiters are widely used in numerical simulations of hyperbolic equations (e.g., computational gas dynamics; see \cite{VanLeer1979,Leveque2002}). 
Given the initial values for variables at grid centers, slope limiters (e.g., minmod, van Leer, and Monotonized Central (MC)) are used to calculate the slopes of conservative piecewise linear reconstructions in each grid cell. Limiters use the variable values in the nearest grid cells to come up with slopes that ensure that no new extrema are created for the conserved variables along the characteristics, a property of hyperbolic equations. Similarly, we use slope limiters to interpolate temperature gradients in the transverse heat flux term so that unphysical oscillations do not arise at temperature extrema. The slope limited asymmetric heat flux in the $x$- direction is still given by Eq. (\ref{eq:q1_asymmetric}), with the same $\partial T/\partial x$ as in the asymmetric method, but a slope limited interpolation for the transverse temperature gradient $\overline{\partial T/\partial y}$, given by \begin{eqnarray} \nonumber \label{eq:slope_asymm} \left . \overline{\frac{\partial T}{\partial y}} \right |_{i+1/2,j} &=& L \left \{ L \left [\left . \frac{\partial T}{\partial y} \right |_{i,j-1/2} , \left . \frac{\partial T}{\partial y} \right |_{i,j+1/2} \right ], \right . \\ && \hspace{1in} \left . L \left [ \left . \frac{\partial T}{\partial y} \right |_{i+1,j-1/2}, \left . \frac{\partial T}{\partial y} \right |_{i+1,j+1/2} \right ] \right \}, \end{eqnarray} where $L$ is a slope limiter like minmod, van Leer, or Monotonized Central (MC) limiter \cite{Leveque2002}; e.g., the MC limiter is given by \begin{equation} \label{eq:MC} {\rm MC}(a,b) = {\rm minmod} \left [ 2~{\rm minmod}(a,b), \frac{a+b}{2} \right ], \end{equation} where \begin{eqnarray} \nonumber {\rm minmod}(a,b) &=& {\rm min}(a,b) \hspace{0.25in} {\rm if}~a, b > 0, \\ \nonumber &=& {\rm max}(a,b) \hspace{0.25in} {\rm if}~a, b < 0, \\ \nonumber &=& 0 \hspace{0.25in} {\rm if}~ab \leq 0. 
\end{eqnarray} A slope limiter weights the interpolation towards the argument of smaller magnitude when the two arguments differ significantly, and returns zero when the two arguments are of opposite signs. An analogous expression for the transverse temperature gradient at the $y$- face, $\overline{\partial T/\partial x}$, is used to evaluate the heat flux $q_y$. Interpolation similar to the asymmetric method is used for all other variables (Eqs. \ref{eq:asymmetric_bavg} \& \ref{eq:asymmetric_navg}). \subsection{Limiting the symmetric method} In the symmetric method, primary heat fluxes in both directions are located at the cell corners (see Eq. \ref{eq:q1_symmetric}). Temperature gradients in both directions have to be interpolated at the corners. Thus, to ensure that temperature extrema are not amplified with the symmetric method, both $\overline{\partial T/\partial x}$ and $\overline{\partial T/\partial y}$ need to be limited. The face-centered $q_{xx,i+1/2,j}$ is calculated by averaging $q_{xx}$ from the adjacent corners, which are given by the following slope-limited expressions: \begin{eqnarray} \label{eq:qxx_symm_lim_up} q^N_{xx,i+1/2,j+1/2} &=& -\overline{n}\overline{\chi} \overline{b_x^2} L2 \left [ \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j}, \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j+1} \right ], \\ \label{eq:qxx_symm_lim_down} q^S_{xx,i+1/2,j-1/2} &=& -\overline{n}\overline{\chi} \overline{b_x^2} L2 \left [ \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j}, \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j-1} \right ], \end{eqnarray} where the $N$ and $S$ superscripts indicate the north-biased and south-biased heat fluxes. The face centered heat flux used in Eq. (\ref{eq:e_evolve}) is $q_{xx,i+1/2,j} = (q^N_{xx,i+1/2,j+1/2} + q^S_{xx,i+1/2,j-1/2})/2$; the other interpolated quantities (indicated with an overline) are the same as in Eq. (\ref{eq:q1_symmetric}).
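The limiters used for Eq. (\ref{eq:slope_asymm}) and in the expressions above can be sketched in Python (a minimal sketch; function names are ours, and $L2$ follows the definition given next, with $\alpha=3/4$):

```python
def minmod(a, b):
    # Zero if the signs differ; else the argument of smaller magnitude.
    if a * b <= 0.0:
        return 0.0
    return min(a, b) if a > 0.0 else max(a, b)

def mc(a, b):
    # Monotonized Central limiter, Eq. (MC).
    return minmod(2.0 * minmod(a, b), 0.5 * (a + b))

def limited_transverse(dT_l_jm, dT_l_jp, dT_r_jm, dT_r_jp, L=mc):
    # Nested limiting of Eq. (slope_asymm): L{L[.,.], L[.,.]} over the four
    # face-centered transverse gradients; vanishes unless all share a sign.
    return L(L(dT_l_jm, dT_l_jp), L(dT_r_jm, dT_r_jp))

def L2(a, b, alpha=0.75):
    # The (non-symmetric) L2 limiter of Sec. 4.2: a plain average in smooth
    # regions, clipped so the result stays within a factor alpha of a and
    # keeps the sign of a (the gradient at the face being limited).
    lo = min(alpha * a, a / alpha)
    hi = max(alpha * a, a / alpha)
    return min(max(0.5 * (a + b), lo), hi)
```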
The limiter $L2$, which differs from standard slope limiters, is defined as \begin{eqnarray} \nonumber L2(a,b) &=& (a+b)/2, \mbox{ if } \min(\alpha a, a / \alpha) < (a+b)/2 < \max(\alpha a, a /\alpha), \\ \nonumber &=& \min(\alpha a, a / \alpha), \mbox{ if } (a+b)/2 \leq \min(\alpha a, a / \alpha), \\ &=& \max(\alpha a, a / \alpha), \mbox{ if } (a+b)/2 \geq \max(\alpha a, a / \alpha), \end{eqnarray} where $0<\alpha<1$ is a parameter; this reduces to a simple average if the temperature is smooth, while restricting the interpolated temperature gradient ($\overline{\partial T/\partial x}$) to not differ too much from $\partial T / \partial x |_{i+1/2,j}$ (and to be of the same sign). We choose $\alpha=3/4$ for all of the results in this paper. Note that the $L2$ limiter is not symmetric with respect to its arguments. It ensures that $q_{xx,i+1/2,j \pm 1/2}$ is of the same sign as $-\partial T/\partial x |_{i+1/2,j}$; i.e., the interpolated normal heat flux is from higher to lower temperatures. This interpolation will be able to diffuse the chess-board pattern in Figure \ref{fig:fig3}. The transverse temperature gradient is limited in a way similar to the asymmetric method; the temperature gradient $\overline{\partial T/\partial y}|_{i+1/2,j}$ is still given by Eq. (\ref{eq:slope_asymm}). Thus if $\alpha=1$, the limited symmetric method becomes somewhat similar to the limited asymmetric method (though with differences in the interpolation of the magnetic field direction and of $n \chi$). \section{Limiting with the entropy-like source function} \label{sec:ent_limiting} If the entropy-like source function, which we define as $\dot{s}^*=-\vec{q} \cdot \vec{\nabla} T$ (see Appendix \ref{app:app5} for how this differs from the entropy function), is positive everywhere, heat is guaranteed to flow from higher to lower temperatures.
For the symmetric method, $\dot{s}^*$ evaluated at the cell corners is positive definite, but this need not be true for interpolations at the cell faces; thus heat may flow from lower to higher temperatures. An entropy-like condition can be applied at all face-pairs to limit the transverse heat flux terms ($q_{xy}$ and $q_{yx}$), such that \begin{equation} \label{eq:ent_lim} \dot{s}^* = - q_{x,i+1/2,j} \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j} - q_{y,i,j+1/2} \left . \frac{\partial T}{\partial y} \right |_{i,j+1/2} \geq 0. \end{equation} The limiter $L2$ is used to calculate the normal flux terms $q_{xx}$ and $q_{yy}$ at the faces, as in the slope limited symmetric method (see \S 4.2). The use of $L2$ ensures that $-q_{xx,i+1/2,j}\partial T/\partial x |_{i+1/2,j} \geq 0$, and only the transverse terms $q_{xy}$ and $q_{yx}$ need to be reduced to satisfy Eq. (\ref{eq:ent_lim}). That is, if on evaluating $\dot{s}^*$ at all four face pairs the entropy-like condition (Eq. \ref{eq:ent_lim}) is violated, the transverse terms are reduced to make $\dot{s}^*$ vanish. The attractive feature of the entropy limited symmetric method is that it reduces to the symmetric method (which has the smallest numerical diffusion of all the methods; see Figure \ref{fig:fig9}) when Eq. (\ref{eq:ent_lim}) is satisfied. The hope is that limiting of transverse terms may prevent oscillations with large temperature gradients. The problem with entropy limiting, unlike the slope limited methods, is that it does not guarantee the absence of numerical oscillations at large temperature gradients (e.g., see Figures \ref{fig:fig4} and \ref{fig:fig7}). For example, when $\partial T/\partial x|_{i+1/2,j} = \partial T/\partial y|_{i,j+1/2}=0$, Eq. (\ref{eq:ent_lim}) is satisfied for arbitrary heat fluxes $q_{x,i+1/2,j}$ and $q_{y,i,j+1/2}$.
In such a case, transverse heat fluxes $q_{xy}$ and $q_{yx}$ can cause heat to flow in the ``wrong" direction, causing unphysical oscillations at temperature extrema. However, this unphysical behavior occurs only for a few time steps, after which the oscillations are damped. The result is that the overshoots are not as pronounced and quickly decay with time, unlike in the asymmetric and symmetric methods (see Figures \ref{fig:fig6} \& \ref{fig:fig7}). Although temperature extrema can be accentuated by the entropy limited method, early on one can choose sufficiently small time steps to ensure that temperature does not become negative; this is equivalent to saying that the entropy limited method will not give overshoots at late times (see Figure \ref{fig:fig7} and Tables \ref{tab:tab1}-\ref{tab:tab4}). This trick will not work for the centered symmetric and asymmetric methods where temperatures can be negative even at late times (see Figure \ref{fig:fig7}). To guarantee that temperature extrema are not amplified, in addition to entropy limiting at all points, one can also use slope limiting of transverse temperature gradients at extrema. This results in a method that does not amplify the extrema, but is more diffusive compared to just entropy limiting (see Figure \ref{fig:fig9}). Because of the simplicity of slope limited methods and their desirable mathematical properties (discussed in the next section), they are preferred over the cumbersome entropy limited methods. \section{Mathematical properties} In this section we prove that the slope limited fluxes satisfy the physical requirement that temperature extrema are not amplified. Also discussed are global and local properties related to the entropy-like condition $\dot{s}^* = - \vec{q} \cdot \vec{\nabla} T \geq 0$. 
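The transverse-term reduction of \S 5 can be sketched as follows (a minimal Python sketch with our own naming; `qxx`, `qyy` are the $L2$-limited normal flux terms at a face pair and `qxy`, `qyx` the transverse terms):

```python
def entropy_limit(qxx, qxy, qyy, qyx, dTdx, dTdy):
    """Scale the transverse terms so the face-pair entropy-like function
    of Eq. (ent_lim) is non-negative (sketch of the Sec. 5 procedure)."""
    s_normal = -qxx * dTdx - qyy * dTdy  # >= 0 by construction (L2 limiting)
    s_trans = -qxy * dTdx - qyx * dTdy
    if s_normal + s_trans >= 0.0:
        return qxy, qyx                  # condition already satisfied
    f = -s_normal / s_trans              # 0 <= f < 1 makes sdot* vanish
    return f * qxy, f * qyx
```

When the condition is violated, $s_{\rm trans} < -s_{\rm normal} \le 0$, so the scaling factor is well defined and the limited fluxes give $\dot{s}^* = 0$ exactly.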
\subsection{Behavior at temperature extrema} Slope limiting of both asymmetric and symmetric methods guarantees that temperature extrema are not amplified further, i.e., the maximum temperature does not increase and the minimum temperature does not decrease, as required physically. This ensures that the temperature is always positive and numerical problems due to an imaginary sound speed do not arise. The normal heat flux in the asymmetric method ($-\overline{n}\overline{\chi} b_x^2 \partial T/\partial x$) and the $L2$ limited normal heat flux term in the symmetric method (Eqs. \ref{eq:qxx_symm_lim_up} and \ref{eq:qxx_symm_lim_down}) allow heat to flow only from higher to lower temperatures. Thus the terms responsible for unphysical behavior at temperature extrema are the transverse heat fluxes $q_{xy}$ and $q_{yx}$. Slope limiters ensure that the transverse heat terms vanish at extrema and heat flows down the temperature gradient at those grid points. The operator $L(L(a,b),L(c,d))$, where $L$ is a slope limiter like minmod, van Leer, or MC, is symmetric with respect to all its arguments, and hence can be written as $L(a,b,c,d)$. For the slope limiters considered here (minmod, van Leer, and MC), $L(a,b,c,d)$ vanishes unless all four arguments $a,b,c,d$ have the same sign. At a local temperature extremum (say at $(i,j)$), the $x$- (and $y$-) face-centered slopes $\partial T/\partial y|_{i,j+1/2}$ and $\partial T/\partial y|_{i,j-1/2}$ (and $\partial T/\partial x|_{i+1/2,j}$ and $\partial T/\partial x|_{i-1/2,j}$) are of opposite signs, or at least one of them is zero. This ensures that the slope limited transverse temperature gradients ($\overline{\partial T/\partial y}$ and $\overline{\partial T/\partial x}$) vanish (from Eq. \ref{eq:slope_asymm}).
Thus, the heat fluxes are $q_{x,i \pm 1/2, j} = -\overline{n}\overline{\chi} \overline{b_x}^2 \partial T/\partial x|_{i \pm 1/2, j}$ and $q_{y,i, j\pm 1/2} = -\overline{n}\overline{\chi} \overline{b_y}^2 \partial T/\partial y|_{i, j \pm 1/2}$ at the temperature extrema, which are always down the temperature gradient. This ensures that temperature never decreases (increases) at a temperature minimum (maximum), and negative temperatures are avoided. \subsection{The entropy-like condition, $\dot{s}^* = -\vec{q} \cdot \vec{\nabla} T \geq 0$} If the number density $n$ remains constant in time, then multiplying Eq. (\ref{eq:anisotropic_conduction}) by $T$ and integrating over all space gives \begin{eqnarray} \label{eq:selfadjointness} \nonumber \frac{1}{2(\gamma-1)} \frac{\partial}{\partial t} \int n T^2 dV = - \int T \vec{\nabla} \cdot \vec{q} dV &=& \int \vec{q} \cdot \vec{\nabla} T dV \\ &=& - \int n \chi |\nabla_\parallel T|^2 dV \le 0, \end{eqnarray} assuming that the surface contributions vanish. This analytic constraint implies that volume averaged temperature fluctuations cannot increase in time. Locally it gives the entropy-like condition $\dot{s}^*=-\vec{q} \cdot \vec{\nabla} T \geq 0$, implying that heat always flows from higher to lower temperatures. Ref. \cite{Gunter2005} has shown that the symmetric method satisfies $\dot{s}^*=-\vec{q} \cdot \vec{\nabla} T \geq 0$ at cell corners. The entropy-like function $\dot{s}^*$ evaluated at $(i+1/2,j+1/2)$ with the symmetric method is \begin{equation} \dot{s}^*_{i+1/2,j+1/2} = -q_{x,i+1/2,j+1/2} \left. \overline{\frac{\partial T}{\partial x}} \right |_{i+1/2,j+1/2} - q_{y,i+1/2,j+1/2} \left. \overline{\frac{\partial T}{\partial y}} \right |_{i+1/2,j+1/2}. \end{equation} Using the symmetric heat fluxes (Eq.
\ref{eq:q1_symmetric}) the entropy-like function becomes, \begin{eqnarray} \nonumber \dot{s}^* &=& \overline{n}\overline{\chi} \overline{b_x}^2 \left | \overline{\frac{\partial T}{\partial x}} \right |^2 + \overline{n}\overline{\chi} \overline{b_y}^2 \left | \overline{\frac{\partial T}{\partial y}} \right |^2 + 2 \overline{n}\overline{\chi} \overline{b_x}~\overline{b_y} \overline{\frac{\partial T}{\partial x}}~\overline {\frac{\partial T}{\partial y}}, \\ &=& \overline{n}\overline{\chi} \left [ \overline{b_x} \overline{\frac{\partial T}{\partial x}} + \overline{b_y} \overline{\frac{\partial T}{\partial y}} \right ]^2 \geq 0, \end{eqnarray} and integration over the whole space implies Eq. (\ref{eq:selfadjointness}). Although the entropy-like condition is satisfied by the symmetric method at grid corners (both locally and globally), this condition is not sufficient to guarantee positivity of temperature at cell centers, as we demonstrate in \S \ref{subsec:symmfail}. Also notice that the modification of the symmetric method to satisfy the entropy-like condition at face pairs (see \S \ref{sec:ent_limiting}) does not cure the problem of negative temperatures (see Figure \ref{fig:fig4}). Thus, a method which satisfies the entropy-like condition ($\dot{s}^* = -\vec{q} \cdot \vec{\nabla} T \geq 0$) interpolated at some point does not necessarily satisfy it everywhere, implying that unphysical oscillations in the presence of large temperature gradients may arise even if the interpolated entropy-like condition holds. With an appropriate interpolation, the asymmetric method and the slope limited asymmetric methods can be modified to satisfy the global entropy-like condition $\dot{S}^* = -\int \vec{q} \cdot \vec{\nabla} T dV/V \geq 0$. Consider \begin{equation} \dot{S}^* = \frac{-1}{N_x N_y} \sum_{i,j} \left [ q_{x,i+1/2,j} \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j} + q_{y,i,j+1/2} \left . 
\frac{\partial T}{\partial y} \right |_{i,j+1/2} \right ], \end{equation} where $N_x$ and $N_y$ are the number of grid points in each direction. Substituting the form of asymmetric heat fluxes, \begin{eqnarray} \nonumber \label{eq:Sdot} \dot{S}^* &=& \frac{1}{N_x N_y} \sum_{i,j} \left [ \left ( \overline{n} \overline{\chi} b_x^2 \left | \frac{\partial T}{\partial x} \right |^2 \right )_{i+1/2,j} + \left ( \overline{n} \overline{\chi} b_y^2 \left | \frac{\partial T}{\partial y} \right |^2 \right)_{i,j+1/2} \right . \\ &+& \left . \left ( \overline{n \chi b_x b_y \frac{\partial T}{\partial y}} \right )_{i+1/2,j} \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j} + \left ( \overline{n \chi b_x b_y \frac{\partial T}{\partial x}} \right )_{i,j+1/2} \left . \frac{\partial T}{\partial y} \right |_{i,j+1/2} \right ], \end{eqnarray} where overlines represent appropriate interpolations. We define \begin{eqnarray} G_{x,i+1/2,j} &=& \sqrt{ \left( \overline{n}\overline{\chi} \right)}_{i+1/2,j} b_{x,i+1/2,j} \left . \frac{\partial T}{\partial x} \right |_{i+1/2,j}, \\ G_{y,i,j+1/2} &=& \sqrt{ \left( \overline{n}\overline{\chi} \right)}_{i,j+1/2} b_{y,i,j+1/2} \left . \frac{\partial T}{\partial y} \right |_{i,j+1/2}, \\ \overline{G}_{y,i+1/2,j} &=& \overline{ \sqrt{n \chi} b_y \left . \frac{\partial T}{\partial y} \right | }_{i+1/2,j}, \\ \overline{G}_{x,i,j+1/2} &=& \overline{ \sqrt{n \chi} b_x \left . \frac{\partial T}{\partial x} \right | }_{i,j+1/2}. \end{eqnarray} In terms of $G$'s, Eq. (\ref{eq:Sdot}) can be written as \begin{eqnarray} \nonumber \dot{S}^* = \frac{1}{N_x N_y} \sum_{i,j} && \left [ G_{x,i+1/2,j}^2 + G_{y,i,j+1/2}^2 \right . \\ &+& \left . G_{x,i+1/2,j} \overline{G}_{y,i+1/2,j} + \overline{G}_{x,i,j+1/2} G_{y,i,j+1/2} \right ]. \end{eqnarray} A lower bound on $\dot{S}^*$ is obtained by assuming the cross terms to be negative, i.e., \begin{eqnarray} \nonumber \dot{S}^* \geq \frac{1}{N_x N_y} \sum_{i,j} && \left [ G_{x,i+1/2,j}^2 + G_{y,i,j+1/2}^2 \right . 
\\ &-& \left . \left | G_{x,i+1/2,j} \overline{G}_{y,i+1/2,j} \right | - \left | \overline{G}_{x,i,j+1/2} G_{y,i,j+1/2} \right | \right ]. \end{eqnarray} Now define $\overline{G}_{y,i+1/2,j}$ and $\overline{G}_{x,i,j+1/2}$ as follows (the following interpolation is necessary for the proof to hold): \begin{eqnarray} \overline{G}_{x,i,j+1/2} &=& L ( G_{x,i+1/2,j}, G_{x,i-1/2,j}, G_{x,i+1/2,j+1}, G_{x,i-1/2,j+1} ), \\ \overline{G}_{y,i+1/2,j} &=& L ( G_{y,i,j+1/2}, G_{y,i,j-1/2}, G_{y,i+1,j+1/2}, G_{y,i+1,j-1/2} ), \end{eqnarray} where $L$ is an arithmetic average (as in centered asymmetric method) or a slope limiter (e.g., minmod, van Leer, or MC) which satisfies the property that $ |L(a,b,c,d)| \leq (|a|+|b|+|c|+|d|)/4$. Thus, \begin{eqnarray} \nonumber \dot{S}^* &\ge& \frac{1}{N_xN_y} \sum_{i,j} G_{x,i+1/2,j}^2 + G_{y,i,j+1/2}^2 - \frac{1}{4} \left [ \left |G_{x,i+1/2,j} G_{y,i,j+1/2} \right | \right . \\ \nonumber &+& \left |G_{x,i+1/2,j} G_{y,i,j-1/2} \right | + \left |G_{x,i+1/2,j} G_{y,i+1,j+1/2} \right | + \left |G_{x,i+1/2,j} G_{y,i+1,j-1/2} \right | \\ \nonumber &+& \left |G_{y,i,j+1/2} G_{x,i+1/2,j}\right | + \left | G_{y,i,j+1/2} G_{x,i-1/2,j} \right | + \left |G_{y,i,j+1/2} G_{x,i+1/2,j+1} \right | \\ &+& \left . \left |G_{y,i,j+1/2} G_{x,i-1/2,j+1} \right | \right ]. \end{eqnarray} Shifting the dummy indices and combining various terms give, \begin{eqnarray} \nonumber \dot{S}^* &\ge& \frac{1}{N_xN_y} \sum_{i,j} G_{x,i+1/2,j}^2 + G_{y,i,j+1/2}^2 -\frac{1}{2} \left [ \left |G_{x,i+1/2,j} G_{y,i,j+1/2} \right | \right . \\ &+& \nonumber \left . \left |G_{x,i+1/2,j} G_{y,i,j-1/2} \right | + \left | G_{x,i+1/2,j} G_{y,i+1,j+1/2} \right | + \left |G_{x,i+1/2,j} G_{y,i+1,j-1/2} \right | \right ] \\ \nonumber &=& \frac{1}{4N_xN_y} \sum_{i,j} \left [ \left ( |G_{x,i+1/2,j}| - |G_{y,i,j+1/2}| \right )^2 + \left ( |G_{x,i+1/2,j}| - |G_{y,i,j-1/2}| \right )^2 \right . \\ \nonumber &+& \left . 
\left ( |G_{x,i+1/2,j}| - |G_{y,i+1,j+1/2}| \right )^2 + \left ( |G_{x,i+1/2,j}| - |G_{y,i+1,j-1/2}| \right )^2 \right ] \\ &\geq & 0. \end{eqnarray} Thus, an appropriate interpolation for the asymmetric and the slope limited asymmetric methods results in a scheme that satisfies the global entropy-like condition. A variation of this proof can be used to prove the global entropy condition $\dot{S} \geq 0$ by multiplying Eq. (\ref{eq:anisotropic_conduction}) with $1/T$ instead of $T$ (see Appendix \ref{app:app5}), although the form of interpolation would need to be modified slightly. It is comforting that introducing a limiter to the asymmetric method does not break the global entropy-like condition. However, it is important to remember that the entropy-like (or entropy) condition satisfied at some point does not guarantee a local heat flow in the correct direction; thus it is necessary to use slope limiters at temperature extrema to avoid temperature oscillations. \section{Further tests} \begin{sidewaystable} \centering \begin{tabular}{ccccccc} \hline Method & L1 error & L2 error & L$\infty$ error & T$_{\rm max}$ & T$_{\rm min}$ & $\chi_{\perp,{\rm num}}/\chi_\parallel$ \\ \hline asymmetric & 0.0324 & 0.0459 & 0.0995 & 10.0926 & 9.9744 & 0.0077 \\ asymmetric minmod & 0.0471 & 0.0627 & 0.1195 & 10.0410 & 10 & 0.0486 \\ asymmetric MC & 0.0358 & 0.0509 & 0.1051 & 10.0708 & 10 & 0.0127 \\ asymmetric van Leer & 0.0426 & 0.0574 & 0.1194 & 10.0519 & 10 & 0.0238 \\ symmetric & 0.0114 & 0.0252 & 0.1425 & 10.2190 & 9.9544 & 0.00028 \\ symmetric entropy & 0.0333 & 0.0477 & 0.0997 & 10.0754 & 10 & 0.0088 \\ symmetric entropy extrema & 0.0341 & 0.0487 & 0.1010 & 10.0751 & 10 & 0.0101 \\ symmetric minmod & 0.0475 & 0.0629 & 0.1322 & 10.0406 & 10 & 0.0490 \\ symmetric MC & 0.0289 & 0.0453 & 0.0872 & 10.0888 & 10 & 0.0072 \\ symmetric van Leer & 0.0438 & 0.0585 & 0.1228 & 10.0519 & 10 & 0.0238 \\ \hline \end{tabular} \caption{Diffusion in circular field lines: $50 \times 50$ grid 
\label{tab:tab1} } The errors are based on the assumption that the initial hot patch has diffused to a uniform temperature ($T=10.1667$) in the ring 0.5$<r<$0.7, and $T=10$ outside it. \end{sidewaystable} \begin{sidewaystable} \centering \begin{tabular}{ccccccc} \hline Method & L1 error & L2 error & L$\infty$ error & T$_{\rm max}$ & T$_{\rm min}$ & $\chi_{\perp, {\rm num}}/\chi_\parallel$ \\ \hline asymmetric & 0.0256 & 0.0372 & 0.0962 & 10.1240 & 9.9859 & 0.0030 \\ asymmetric minmod & 0.0468 & 0.0616 & 0.1267 & 10.0439 & 10 & 0.0306 \\ asymmetric MC & 0.0261 & 0.0405 & 0.0907 & 10.1029 & 10 & 0.0040 \\ asymmetric van Leer & 0.0358 & 0.0502 & 0.1002 & 10.0741 & 10 & 0.0971 \\ symmetric & 0.0079 & 0.0173 & 0.1206 & 10.2276 & 9.9499 & $4.1 \times 10^{-5}$ \\ symmetric entropy & 0.0285 & 0.0420 & 0.0881 & 10.0961 & 10 & 0.0042 \\ symmetric entropy extrema & 0.0291 & 0.0425 & 0.0933 & 10.0941 & 10 & 0.0041 \\ symmetric minmod & 0.0471 & 0.0618 & 0.1275 & 10.0433 & 10 & 0.0305 \\ symmetric MC & 0.0123 & 0.0252 & 0.1133 & 10.1406 & 10 & 0.00084 \\ symmetric van Leer & 0.0374 & 0.0514 & 0.1038 & 10.0697 & 10 & 0.0104 \\ \hline \end{tabular} \caption{Diffusion in circular field lines: $100 \times 100$ grid \label{tab:tab2} } \end{sidewaystable} \begin{sidewaystable} \centering \begin{tabular}{ccccccc} \hline Method & L1 error & L2 error & L$\infty$ error & T$_{\rm max}$ & T$_{\rm min}$ & $\chi_{\perp, {\rm num}}/\chi_\parallel$ \\ \hline asymmetric & 0.0165 & 0.0281 & 0.0949 & 10.1565 & 9.9878 & 0.0012 \\ asymmetric minmod & 0.0441 & 0.0585 & 0.1214 & 10.0511 & 10 & 0.0191 \\ asymmetric MC & 0.0161 & 0.0289 & 0.0930 & 10.1397 & 10 & 0.0015 \\ asymmetric van Leer & 0.0264 & 0.0407 & 0.0928 & 10.1006 & 10 & 0.0035 \\ symmetric & 0.0052 & 0.0132 & 0.1125 & 10.2216 & 9.9509 & $1.9 \times 10^{-5}$ \\ symmetric entropy & 0.0256 & 0.0385 & 0.0959 & 10.1103 & 10 & 0.0032 \\ symmetric entropy extrema & 0.0260 & 0.0391 & 0.0954 & 10.1074 & 10 & 0.0032 \\ symmetric minmod & 0.0444 & 
0.0588 & 0.1219 & 10.0503 & 10 & 0.0192 \\ symmetric MC & 0.0053 & 0.0160 & 0.0895 & 10.1676 & 10 & 0.0002 \\ symmetric van Leer & 0.0281 & 0.0426 & 0.0901 & 10.0952 & 10 & 0.0038 \\ \hline \end{tabular} \caption{Diffusion in circular field lines: $200 \times 200$ grid \label{tab:tab3} } \end{sidewaystable} \begin{sidewaystable} \centering \begin{tabular}{ccccccc} \hline Method & L1 error & L2 error & L$\infty$ error & T$_{\rm max}$ & T$_{\rm min}$ & $\chi_{\perp, {\rm num}}/\chi_\parallel$ \\ \hline asymmetric & 0.0118 & 0.0234 & 0.0866 & 10.1810 & 9.9898 & $5.9 \times 10^{-4}$ \\ asymmetric minmod & 0.0399 & 0.0539 & 0.1120 & 10.0629 & 10 & 0.0115 \\ asymmetric MC & 0.0102 & 0.0230 & 0.0894 & 10.1708 & 10 & $6.8 \times 10^{-4}$ \\ asymmetric van Leer & 0.0167 & 0.0290 & 0.1000 & 10.1321 & 10 & 0.0013 \\ symmetric & 0.0033 & 0.0104 & 0.1112 & 10.2196 & 9.9504 & $8.4 \times 10^{-6}$ \\ symmetric entropy & 0.0252 & 0.0384 & 0.0969 & 10.1144 & 10 & 0.0027 \\ symmetric entropy extrema & 0.0253 & 0.0383 & 0.0958 & 10.1135 & 10 & 0.0026 \\ symmetric minmod & 0.0401 & 0.0541 & 0.1124 & 10.0622 & 10 & 0.0116 \\ symmetric MC & 0.0032 & 0.0122 & 0.0896 & 10.1698 & 10 & $6.5 \times 10^{-5} $ \\ symmetric van Leer & 0.0182 & 0.0307 & 0.1026 & 10.1260 & 10 & 0.0013 \\ \hline \end{tabular} \caption{Diffusion in circular field lines: $400 \times 400$ grid \label{tab:tab4} } \end{sidewaystable} We use test problems discussed in \cite{Parrish2005} and \cite{Sovinec2004} to compare different methods. The first test problem (taken from \cite{Parrish2005}) initializes a hot patch in circular field lines; ideally the hot patch should diffuse only along the field lines, but perpendicular numerical diffusion causes some cross-field thermal conduction. Unlike the limited methods, both asymmetric and symmetric methods show temperature oscillations at the temperature discontinuity. 
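The limited methods compared below all build on standard two-point slope limiters, and the four-point combinations $L$ used in the entropy-like proof of the previous section must satisfy $|L(a,b,c,d)| \leq (|a|+|b|+|c|+|d|)/4$. The following Python sketch is purely illustrative (the nested four-point construction is one hypothetical way to obtain that bound, not necessarily the implementation used for the runs):

```python
def minmod(a, b):
    """Two-point minmod: zero at sign changes, else the smaller slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def mc(a, b):
    """Monotonized central (MC) limiter."""
    if a * b <= 0.0:
        return 0.0
    s = 1.0 if a > 0 else -1.0
    return s * min(2.0 * abs(a), 2.0 * abs(b), 0.5 * abs(a + b))

def van_leer(a, b):
    """van Leer (harmonic mean) limiter."""
    if a * b <= 0.0:
        return 0.0
    return 2.0 * a * b / (a + b)

def L4(lim, a, b, c, d):
    """Four-point combination built by nesting a two-point limiter;
    each two-point limiter returns at most half the sum of the absolute
    values of its arguments, so |L4| <= (|a|+|b|+|c|+|d|)/4."""
    return lim(lim(a, b), lim(c, d))

def L4_avg(a, b, c, d):
    """Arithmetic average, as in the centered asymmetric method."""
    return 0.25 * (a + b + c + d)
```

Each two-point limiter returns zero across a sign change, which is what suppresses the spurious anti-diffusive transverse fluxes at temperature extrema.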
The second test problem (from \cite{Sovinec2004}) includes a source term and an explicit perpendicular diffusion coefficient ($\chi_\perp$). The steady state temperature gives a measure of the perpendicular numerical diffusion $\chi_{\perp,{\rm num}}$. \subsection{Diffusion of a hot patch in circular magnetic field} \begin{figure} \centering \includegraphics[width=2.5in,height=2.0in]{asymm_400.eps} \includegraphics[width=2.5in,height=2.0in]{symm_400.eps} \includegraphics[width=2.5in,height=2.0in]{asymm_monocen_400.eps} \includegraphics[width=2.5in,height=2.0in]{symm_monocen_400.eps} \includegraphics[width=2.5in,height=2.0in]{ent_only_400.eps} \includegraphics[width=2.5in,height=2.0in]{symm_minmod_400.eps} \caption{The temperature at $t=200$ for different methods initialized with the ring diffusion problem on a 400 $\times$ 400 grid. Shown from left to right and top to bottom are the temperatures for: asymmetric, symmetric, asymmetric-MC, symmetric-MC, entropy limited symmetric, and minmod methods. Both asymmetric and symmetric methods give temperatures below 10 (the initial minimum temperature) at late times. The result with a minmod limiter is very diffusive. The slope limited symmetric method is less diffusive than the slope limited asymmetric method. Entropy limited method does not show non-monotonic behavior at late times, but is diffusive compared to the better slope limited methods.\label{fig:fig6}} \end{figure} The circular diffusion test problem was proposed in \cite{Parrish2005}. A hot patch surrounded by a cooler background is initialized in circular field lines; the temperature drops discontinuously across the patch boundary. At late times, we expect the temperature to become uniform (and higher) in a ring along the magnetic field lines. The computational domain is a $[-1,1]\times[-1,1]$ Cartesian box. 
The initial temperature distribution is given by \begin{eqnarray} \label{eq:ring_problem} \nonumber T &=& 12 \hspace{0.25 in} \mbox{if} \hspace{0.1 in} 0.5<r<0.7 \hspace{0.1 in} \mbox{and} \hspace{0.1 in} \frac{11}{12}\pi<\theta<\frac{13}{12}\pi, \\ &=& 10 \hspace{0.25 in} \mbox{otherwise}, \end{eqnarray} where $r=\sqrt{x^2+y^2}$ and $\tan\theta=y/x$. Fixed circular magnetic field lines centered at the origin are initialized and the number density ($n$) is set to unity. Reflective boundary conditions are used for temperature; the magnetic field and conduction vanish outside $r=1$. The parallel conduction coefficient is $\chi=0.01$; there is no explicit perpendicular diffusion ($\chi_\perp=0$). We evolve the anisotropic conduction equation (\ref{eq:e_evolve}) until time $t=200$, by which time we expect the temperature to be almost uniform along the circular ring $0.5<r<0.7$. In steady state (at late times), energy conservation implies that the ring temperature should be 10.1667, while the temperature outside the ring should be maintained at 10. Figure \ref{fig:fig6} shows the temperature distribution for different methods at $t=200$. All methods result in a higher temperature in the annulus $r \in [0.5,0.7]$. The limited schemes show larger perpendicular diffusion compared to the symmetric and asymmetric schemes (see Tables \ref{tab:tab1}-\ref{tab:tab4}, which give errors, minimum and maximum temperatures, and numerical perpendicular diffusion at $t=200$; also see Figure \ref{fig:fig8}). The perpendicular numerical diffusion ($\chi_{\perp,{\rm num}}$) scales with the parallel diffusion coefficient $\chi$ for all methods. Notice that for Sovinec's test problem (discussed in the next section), where the temperature is smooth and an explicit $\chi_\perp$ is present, the perpendicular numerical diffusion for the symmetric method does not increase with increasing $\chi_\parallel$. The minmod limiter is much more diffusive than the van Leer and MC limiters.
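The setup above can be reproduced in a few lines of Python (an illustrative sketch, independent of the code used for the runs): the hot patch subtends an angle of $\pi/6$, i.e.\ $1/12$ of the ring, so energy conservation gives the quoted uniform ring temperature $10 + 2/12 \approx 10.1667$.

```python
import math

def ring_temperature(x, y):
    """Initial condition of Eq. (ring_problem): T = 12 in the hot patch, 10 elsewhere."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2.0 * math.pi)
    in_patch = (0.5 < r < 0.7) and (11.0 * math.pi / 12.0 < theta < 13.0 * math.pi / 12.0)
    return 12.0 if in_patch else 10.0

# Energy conservation: the patch covers 1/12 of the ring by area, so the
# late-time uniform ring temperature is 10 + (12 - 10)/12 = 10.1667.
T_ring_final = 10.0 + 2.0 / 12.0

# Cross-check on a 400 x 400 grid over [-1,1] x [-1,1]: average the initial
# temperature over the ring cells; conduction only redistributes this energy.
N = 400
total, count = 0.0, 0
for i in range(N):
    x = -1.0 + (i + 0.5) * 2.0 / N
    for j in range(N):
        y = -1.0 + (j + 0.5) * 2.0 / N
        if 0.5 < math.hypot(x, y) < 0.7:
            total += ring_temperature(x, y)
            count += 1
T_avg = total / count  # close to T_ring_final up to discretization of the patch boundary
```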
Both symmetric and asymmetric methods give a minimum temperature below the initial minimum of 10, even at late times. At late times the symmetric method gives a temperature profile full of non-monotonic oscillations (Figure \ref{fig:fig6}). Although the slope limited fluxes are more diffusive than the symmetric and asymmetric methods, they never show undershoots below the minimum temperature. The entropy limited symmetric method gives temperature undershoots at early times which are quickly damped, and the minimum temperature is still $10$ at late times (see Tables \ref{tab:tab1}-\ref{tab:tab4} \& Figure \ref{fig:fig7}). Entropy limiting combined with a slope limiter at temperature extrema behaves similarly to the slope limiter based schemes. \begin{figure} \centering \includegraphics[width=4in,height=3in]{minT.eps} \caption{Minimum temperature over the whole box as a function of time for the ring diffusion test problem: symmetric (dashed line), asymmetric (solid line), and entropy limited symmetric (dot dashed line) methods are shown. Initially the temperature of the hot patch is 10 and the background is at 0.1. Both asymmetric and symmetric methods result in negative temperature, even at late times. The non-monotonic behavior with the entropy limited method is considerably less pronounced; the minimum temperature quickly becomes equal to the initial minimum $0.1$. The slope limited heat fluxes maintain the minimum temperature at 0.1 at all times, as required physically.\label{fig:fig7}} \end{figure} Strictly speaking, a hot ring surrounded by a cold background is not a steady solution for the ring diffusion problem. Temperature in the ring will diffuse in the perpendicular direction (because of perpendicular numerical diffusion, although very slowly) until the whole box is at a constant temperature. A rough estimate for the time averaged perpendicular numerical diffusion $\langle \chi_{\perp,{\rm num}} \rangle$ follows from Eq.
(\ref{eq:anisotropic_conduction}), \begin{equation} \label{eq:chiperp_num} \langle \chi_{\perp,{\rm num}} \rangle = \frac{ \int (T_f - T_i) dV} {\int dt \left ( \int \nabla^2 T dV \right )}, \end{equation} where the space integral is taken over the hot ring $0.5<r<0.7$, and $T_i$ and $T_f$ are the initial and final temperature distributions in the ring. Figure \ref{fig:fig8} plots the numerical perpendicular diffusion (using Eq. \ref{eq:chiperp_num}) for the ring diffusion problem at different resolutions (see Tables \ref{tab:tab1}-\ref{tab:tab4}). The estimates for perpendicular diffusion agree roughly with the more accurate calculations using Sovinec's test problem described in the next section (compare Figures \ref{fig:fig8} \& \ref{fig:fig9}); as with Sovinec's test, the symmetric method is the least diffusive. Table \ref{tab:tab5} lists the convergence rate of $\chi_{\perp,{\rm num}}$ for the ring diffusion problem evolved with different methods. \begin{figure} \centering \includegraphics[width=4in,height=3in]{conv_ringdiff.eps} \caption{Convergence of $\chi_{\perp,{\rm num}}/\chi_\parallel$ as the number of grid points is increased for the ring diffusion problem. The numerical perpendicular diffusion $\chi_{\perp,{\rm num}}$ is calculated numerically by measuring the heat diffusing out of the circular ring (Eq. \ref{eq:chiperp_num}). The different schemes are: asymmetric ($\triangle$), asymmetric with minmod ($\triangledown$), asymmetric with MC ($\square$), asymmetric with van Leer ($\ast$), symmetric ($+$), symmetric with entropy limiting ($\diamond$), symmetric with entropy and extrema limiting ($\triangleright$), symmetric with minmod ($\star$), symmetric with MC ($\times$), and symmetric with van Leer limiter ($\triangleleft$). 
\label{fig:fig8}} \end{figure} \begin{table}[hbt] \centering \caption{Asymptotic slopes for convergence of $\chi_{\perp, {\rm num}}$ in the ring diffusion test problem \label{tab:tab5}} \begin{tabular}{cc} \hline Method & slope \\ \hline asymmetric & 1.066 \\ asymmetric minmod & 0.741 \\ asymmetric MC & 1.142 \\ asymmetric van Leer & 1.479 \\ symmetric & 1.181 \\ symmetric entropy & 0.220 \\ symmetric entropy extrema & 0.282 \\ symmetric minmod & 0.735 \\ symmetric MC & 1.636 \\ symmetric van Leer & 1.587 \\ \hline \end{tabular} \end{table} To study the very long time behavior of different methods (in particular to check whether the symmetric and asymmetric methods give negative temperatures even at very late times) we initialize the same problem with the hot patch at 10 and the cooler background at 0.1. Figure \ref{fig:fig7} shows the minimum temperature with time for the symmetric, asymmetric, and entropy limited symmetric methods; slope limited methods give the correct result for the minimum temperature ($T_{\rm min}=0.1$) at all times. With a large temperature contrast, both symmetric and asymmetric methods give negative minimum temperature even at late times. Such points where temperature becomes negative, when coupled with MHD equations, can give numerical instabilities because of an imaginary sound speed. The minimum temperature with the entropy limited symmetric method shows small undershoots at early times which are damped quickly and the minimum temperature is equal to the initial minimum ($0.1$) after time=1. \subsection{Convergence studies: measuring $\chi_{\perp, {\rm num}}$} \begin{figure} \centering \includegraphics[width=4 in, height=3 in]{kpar10_iso.eps} \includegraphics[width=4 in, height=3 in]{kpar100_iso.eps} \caption{A measure of perpendicular numerical diffusion $\chi_{\perp,{\rm num}} = |T^{-1}(0,0)-T^{-1}_{\rm iso}|$ for $\chi_\parallel/\chi_\perp=10$ (top) and $\chi_\parallel/\chi_\perp=100$ (bottom), using different methods. 
The different schemes are: asymmetric ($\triangle$), asymmetric with minmod ($\triangledown$), asymmetric with MC ($\square$), asymmetric with van Leer ($\ast$), symmetric ($+$), symmetric with entropy limiting ($\diamond$), symmetric with entropy and extrema limiting ($\triangleright$), symmetric with minmod ($\star$), symmetric with MC ($\times$), and symmetric with van Leer limiter ($\triangleleft$). The numerical diffusion scales with $\chi_\parallel$ for all methods except the symmetric differencing \cite{Gunter2005}. \label{fig:fig9}} \end{figure} We use the steady state test problem described in \cite{Sovinec2004} to measure the perpendicular numerical diffusion coefficient, $\chi_{\perp,{\rm num}}$. The computational domain is a unit square $[-0.5,0.5]\times[-0.5,0.5]$, with vanishing temperature at the boundaries; number density is set to unity. The source term $Q=2\pi^2 \cos(\pi x) \cos(\pi y)$ that drives the lowest eigenmode of the temperature distribution is added to Eq. (\ref{eq:anisotropic_conduction}). The anisotropic diffusion equation with a source term possesses a steady state solution. The equation that we evolve is \begin{equation} \label{eq:anisotropic_conduction_source} \frac{\partial e}{\partial t} = - \vec{\nabla} \cdot \vec{q} + Q .\end{equation} The magnetic field is derived from the flux function of the form $\psi \propto \cos(\pi x)$ $\cos(\pi y)$; this results in concentric field lines centered at the origin. The temperature eigenmode driven by the source function $Q$ is constant along the field lines. The steady state solution for the temperature is $T(x,y)=\chi_\perp^{-1} \cos(\pi x) \cos(\pi y)$, independent of $\chi_\parallel$. The perpendicular diffusion coefficient $\chi_\perp$ is chosen to be unity, thus $T^{-1}(0,0)$ gives a measure of total perpendicular diffusion: the sum of $\chi_\perp$ (the explicit perpendicular diffusion) and $\chi_{\perp, {\rm num}}$ (the perpendicular numerical diffusion). 
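The claimed steady state can be checked directly: since $T \propto \psi$, the parallel gradient $\vec{B}\cdot\vec{\nabla}T$ vanishes identically, so only the perpendicular diffusion survives and balances the source. A small illustrative Python check (here $\vec{B}=\hat{z}\times\vec{\nabla}\psi$ with $\psi=\cos(\pi x)\cos(\pi y)$ is an assumed normalisation):

```python
import math

chi_perp = 1.0

def T(x, y):
    """Claimed steady state, T = cos(pi x) cos(pi y) / chi_perp."""
    return math.cos(math.pi * x) * math.cos(math.pi * y) / chi_perp

def Q(x, y):
    """Source term driving the lowest temperature eigenmode."""
    return 2.0 * math.pi ** 2 * math.cos(math.pi * x) * math.cos(math.pi * y)

def B(x, y):
    """Field from the flux function psi = cos(pi x) cos(pi y): B = z-hat x grad(psi)."""
    return (math.pi * math.cos(math.pi * x) * math.sin(math.pi * y),
            -math.pi * math.sin(math.pi * x) * math.cos(math.pi * y))

def B_dot_gradT(x, y, h=1e-6):
    """Parallel gradient of T (central differences); vanishes since T is constant
    along field lines."""
    bx, by = B(x, y)
    tx = (T(x + h, y) - T(x - h, y)) / (2 * h)
    ty = (T(x, y + h) - T(x, y - h)) / (2 * h)
    return bx * tx + by * ty

def residual(x, y, h=1e-3):
    """chi_perp * Laplacian(T) + Q; zero in steady state because the parallel
    heat flux vanishes identically for this T."""
    lap = (T(x + h, y) + T(x - h, y) + T(x, y + h) + T(x, y - h) - 4 * T(x, y)) / h ** 2
    return chi_perp * lap + Q(x, y)
```

Both quantities vanish (to finite-difference accuracy) at arbitrary interior points, confirming that the steady-state solution is independent of $\chi_\parallel$.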
To account for $\chi_{\perp, {\rm num}}$ due to the errors in discretization of the parallel diffusion operator, we calculate $\chi_{\perp, {\rm num}} = |T^{-1}(0,0) - T^{-1}_{\rm iso}(0,0)|$, where $T_{\rm iso}(0,0)$ is the central temperature calculated by the discretized equations at the same resolution in the isotropic limit $\chi_\parallel=\chi_\perp$. The convention that we use is slightly different from (and more accurate than) that used in previous work, $\chi_{\perp, {\rm num}} = |T^{-1}(0,0) - 1|$, which effectively assumed that isotropic diffusion gives $T_{\rm iso}(0,0)=1$ exactly. Figure \ref{fig:fig9} shows the perpendicular numerical diffusivity $\chi_{\perp,{\rm num}}= |T^{-1}(0,0)-T^{-1}_{\rm iso}(0,0)|$ for $\chi_\parallel/\chi_\perp=10$, $100$ using different methods. The perpendicular diffusion ($\chi_{\perp,{\rm num}}$) for all methods except the symmetric method increases linearly with $\chi_\parallel$. This property has been emphasized by \cite{Gunter2005} to motivate the use of symmetric differencing for fusion applications, which require the perpendicular numerical diffusion to be small for $\chi_\parallel/\chi_\perp \sim 10^9$. The slope limited methods (at reasonable resolutions) are not suitable for applications that require $\chi_\parallel/\chi_\perp \gg 10^4$; this rules out the fusion applications mentioned in \cite{Gunter2005,Sovinec2004}. However, only the slope limited methods give physically appropriate behavior at temperature extrema, thereby avoiding negative temperatures in the presence of sharp temperature gradients. The error (the perpendicular numerical diffusion, $\chi_{\perp,{\rm num}}=|T^{-1}(0,0)-T^{-1}_{\rm iso}(0,0)|$) for most methods, except the ones based on the minmod limiter, shows roughly second order convergence (see Table \ref{tab:tab6}).
\begin{table}[hbt] \centering \caption{Asymptotic slopes for convergence of error $\chi_{\perp,{\rm num}} = |T^{-1}(0,0)-T^{-1}_{\rm iso}(0,0)|$ \label{tab:tab6}} \begin{tabular}{ccc} \hline Method & $\chi_\parallel/\chi_\perp=10$ & $\chi_\parallel/\chi_\perp=100$ \\ \hline asymmetric & 1.802 & 1.770 \\ asymmetric minmod & 0.9674 & 0.9406 \\ asymmetric MC & 1.9185 & 1.9076 \\ asymmetric van Leer & 1.706 & 1.728 \\ symmetric & 1.726 & 1.762 \\ symmetric entropy & 2.407 & 2.966 \\ symmetric entropy extrema & 1.949 & 1.953 \\ symmetric minmod & 0.9155 & 0.8761 \\ symmetric MC & 1.896 & 1.9049 \\ symmetric van Leer & 1.6041 & 1.6440 \\ \hline \end{tabular} \end{table} \section{Conclusions} It is shown that simple centered differencing of anisotropic conduction can result in negative temperatures in the presence of large temperature gradients. We present simple test problems where asymmetric and symmetric methods give heat flowing from lower to higher temperatures, leading to negative temperatures at some grid points. Negative temperature results in numerical instabilities, as the sound speed becomes imaginary. Numerical schemes based on slope limiters are proposed to solve this problem. The methods developed here will be useful in numerical studies of hot, dilute, anisotropic astrophysical plasmas \cite{Parrish2005,Sharma2006}, where large temperature gradients may be common. Anisotropic conduction can play a crucial role in determining the global structure of hot, non-radiative accretion flows (e.g., \cite{Balbus2001,Parrish2005,Sharma2006}). Therefore, it will be useful to extend ideal MHD codes used in previous global numerical studies (e.g., \cite{Stone2001}) to include anisotropic conduction. Slope limiting methods that prevent negative temperature can be particularly helpful in global disk simulations where there are huge temperature gradients that occur between a hot, dilute corona and the cold, dense disk. 
The slope limited method with an MC limiter appears to be the most accurate method that does not result in unphysical behavior with large temperature gradients (see Figures \ref{fig:fig6} \& \ref{fig:fig8}). Although we have tried a number of variations beyond the ones described here, there may be ways to further improve these algorithms. Future work might explore other combinations of limiters, limiters applied to combined fluxes instead of limiting the normal and transverse components independently, or the use of higher-order information to reduce the effects of limiters near extrema while preserving physical behavior. Although the slope and entropy limited methods in their present form are not suitable for fusion applications that require accurate resolution of perpendicular diffusion for huge anisotropy ($\chi_\parallel/\chi_\perp \sim 10^9$), they are appropriate for astrophysical applications with large temperature gradients. A relatively small anisotropy of thermal conduction may be sufficient to study the effects of anisotropic thermal conduction~\cite{Parrish2005}. The primary advantage of the limited methods is their robustness in the presence of large temperature gradients. Apart from simulations of dilute astrophysical plasmas with large temperature gradients (e.g., magnetized collisionless shocks), monotonicity-preserving methods may find use in diverse fields where anisotropic diffusion is important, e.g., image processing, biological transport, and geological systems. \section{Acknowledgments} Useful discussions with Tom Gardiner, Ian Parrish, and Ravi Samtaney are acknowledged. This work is supported by the US DOE under contract \# DE-AC02-76CH03073 and the NASA grant NNH06AD01I.
\section{Introduction} Yates's algorithm from 1937 is a kind of fast Fourier transform that computes for a function $f\colon \{0,1\}^n\rightarrow\mathbf R$ and another function $\upsilon\colon \{0,1\}\times\{0,1\}\rightarrow\mathbf R$ the values \begin{equation} \label{eq: yates sum intro} \widehat f(x_1,\ldots, x_n)\ =\ \sum_{y_1,\ldots,y_n \in\{0,1\}}\ \upsilon(x_1,y_1)\cdots \upsilon(x_n,y_n) f(y_1,\ldots,y_n) \end{equation} simultaneously for all $X=(x_1,\ldots,x_n)\in\{0,1\}^n$ using only $O(2^nn)$ operations, instead of the obvious $O(4^nn)$. The algorithm is textbook material in many sciences. Yet, though it appears in Knuth \cite[\S 3.2]{Knuth2}, it has received little attention in combinatorial optimisation. Recently, the authors \cite{BHKK07, BHK08} used Yates's algorithm in combination with Moebius inversion to give algorithms for a number of canonical combinatorial optimisation problems such as Chromatic Number and Domatic Number in $n$-vertex graphs, and $n$-terminal Minimum Steiner Tree, in running times within a polynomial factor of $2^n$. From the way it is normally stated, Yates's algorithm seems to face an inherent $2^n$ lower bound, up to a polynomial factor, and it also seems to be oblivious to the structural properties of the transform it computes. The motivation of the present investigation is to reduce the running time of Yates's algorithm for certain structures so as to get running times with a dominating factor of the form $(2-\epsilon)^n$. From the perspective of running times alone, our improvements are modest at best, but apart from providing evidence that the aesthetically appealing $2^n$ bound from \cite{BHK08} can be beaten, the combinatorial framework we present seems to be new and may offer a fruitful direction for exact exponential time algorithms. \subsection{Results} In a graph $G=(V,E)$, a set $D\subseteq V$ of vertices is \emph{dominating} if every vertex not in $D$ has at least one neighbour in $D$.
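For reference, the sum \eqref{eq: yates sum intro} factors coordinate by coordinate, and Yates's algorithm exploits exactly this to replace the obvious $O(4^n n)$ evaluation by $O(2^n n)$. An illustrative Python sketch (not the paper's implementation; points of $\{0,1\}^n$ are encoded as bitmasks):

```python
def yates(f, v, n):
    """Compute fhat[x] = sum_y prod_i v[x_i][y_i] * f[y] for all x in O(2^n n).

    f is a list of length 2^n indexed by the bitmask (y_1, ..., y_n);
    v is a 2x2 table with entries v[x_i][y_i]."""
    g = list(f)
    for i in range(n):  # mix in one coordinate at a time
        h = [0.0] * (1 << n)
        for x in range(1 << n):
            xi = (x >> i) & 1
            h[x] = v[xi][0] * g[x & ~(1 << i)] + v[xi][1] * g[x | (1 << i)]
        g = h
    return g

def yates_naive(f, v, n):
    """Direct O(4^n n) evaluation of the same sum, for comparison."""
    out = []
    for x in range(1 << n):
        s = 0.0
        for y in range(1 << n):
            p = f[y]
            for i in range(n):
                p *= v[(x >> i) & 1][(y >> i) & 1]
            s += p
        out.append(s)
    return out
```

With $\upsilon(x,y)=[y\le x]$, that is \texttt{v = [[1, 0], [1, 1]]}, the routine computes the zeta transform $\sum_{Y\subseteq X} f(Y)$ on the subset lattice.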
The \emph{domatic number} of $G$ is the largest $k$ for which $V$ can be partitioned into $k$ dominating sets. We show how to compute the domatic number of an $n$-vertex graph with maximum degree $\Delta$ in time \[ O^*\bigl((2^{\Delta+1}-2)^{n/(\Delta+1)}\bigr)\,; \] the $O^*$ notation suppresses factors that are polynomial in $n$. For constant $\Delta$, this bound is always better than $2^n$, though not by much: \medskip \centerline{\small \begin{tabular}{c|ccccccc} $\Delta$ & 3 & 4 & 5 & 6 & 7 & 8 & $\cdots$ \\\hline \rule{0ex}{1.5em}$(2^{\Delta+1}-2)^{1/(\Delta+1)}$ & $1.9344$ & $1.9744$ & $1.9895$ & $1.9956$ & $1.9981$ & $1.9992$ & $\cdots$ \end{tabular}} \medskip The \emph{chromatic number} of a graph is the minimum $k$ for which the vertex set can be covered with $k$ independent sets; a set $I\subseteq V$ is \emph{independent} if no two vertices in $I$ are neighbours. We show how to compute the chromatic number of an $n$-vertex graph with maximum degree $\Delta$ in time \[ O^*\bigl((2^{\Delta+1}-\Delta-1)^{n/(\Delta+1)}\bigr)\,. \] This is slightly faster than for Domatic Number: \medskip \centerline{\small \begin{tabular}{c|ccccccc} $\Delta$ & 3 & 4 & 5 & 6 & 7 & 8 & $\cdots$\\\hline \rule{0ex}{1.5em} $(2^{\Delta+1}-\Delta-1)^{1/(\Delta+1)}$ & $1.8613$ & $1.9332$ & $1.9675$ & $1.9840$ & $1.9921$ & $1.9961$ & $\cdots$ \end{tabular}} \medskip One notes that even for moderate $\Delta$, the improvement over $2^n$ is minute. Moreover, the colouring results for $\Delta\leq 5$ are not even the best known: by Brooks's Theorem \cite{Brooks41}, the chromatic number of a connected graph is bounded by its maximum degree unless the graph is complete or an odd cycle, both of which are easily recognised. It remains to decide if the chromatic number is 3, 4, or 5, and with algorithms from the literature, $3$- and $4$-colourability can be decided in time $O(1.3289^n)$ \cite{BeiEpp05} and $O(1.7504^n)$ \cite{Bys04}, respectively.
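The bases tabulated above can be reproduced directly (an illustrative snippet; the printed values agree with the tables to within rounding):

```python
# Reproduce the bases of the Domatic Number and Chromatic Number bounds
# for maximum degree d = 3, ..., 8.
rows = {}
for d in range(3, 9):
    domatic = (2 ** (d + 1) - 2) ** (1.0 / (d + 1))
    chromatic = (2 ** (d + 1) - d - 1) ** (1.0 / (d + 1))
    rows[d] = (domatic, chromatic)
    print(d, round(domatic, 4), round(chromatic, 4))
```

Both bases tend to $2$ as $\Delta\to\infty$, which is why the improvement over $2^n$ shrinks for denser graphs.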
However, this approach does stop at $\Delta=5$, since we know no $o(2^n)$ algorithm for $5$-colourability. Other approaches for colouring low-degree graphs are known via pathwidth: given a path decomposition of width $w$ the $k$-colourability can be decided in time $k^w n^{O(1)}$ \cite{FGSS06}; for $6$-regular graphs one can find a decomposition with $w \leq n(23+\epsilon)/45$ for any $\epsilon>0$ and sufficiently large $n$ \cite{FGSS06}, and for graphs with $m$ edges one can find $w \leq m/5.769 + O(\log n)$ \cite{KMRR05}. However, even these pathwidth based bounds fall short when $k \geq 5$---we are not aware of any previous $o(2^n)$ algorithm. For the general case, it took 30 years and many papers to improve the constant in the bound for Chromatic Number from $2.4423$~\cite{Law76} via $2.4151$~\cite{Epp03}, $2.4023$~\cite{Bys04}, $2.3236$~\cite{BH08}, to~$2$~\cite{BHK08}, and a similar (if less glacial) story can be told for the Domatic Number. None of these approaches was sensitive to the density of the graph. Moreover, what interests us here is not so much the size of the constant, but the fact that it is less than $2$, dispelling the tempting hypothesis that $2^n$ should be a `difficult to beat' bound for computing the Chromatic Number for sparse graphs. In \S\ref{sec: con} we present some tailor-made variants for which the running time improvement from applying the ideas of the present paper are more striking. Chromatic Number and Domatic Number are special cases of set partition problems, where the objective is to partition an $n$-element set $U$ (here, the vertices of a graph) with members of a given family $\EuScript F$ of its subsets (here, the independent or dominating sets of the graph). 
In full generality, we show how to compute the covering, packing, and partition numbers of $(U,\EuScript F)$ in time within a polynomial factor of \begin{equation} \label{eq: running time} |\{ T\subseteq U \colon \text{there exists an $S\in \EuScript F$ such that $S\subseteq T$}\}|\,, \end{equation} the number of supersets of the members of $\EuScript F$. In the worst case, this bound is not better than $2^n$, and the combinatorial challenge in applying the present ideas is to find good bounds on the above expression. \subsection{Techniques} The main technical contribution in this paper, sketched in Figure~1, is that Yates's algorithm can, for certain natural choices of $\upsilon\colon \{0,1\}\times\{0,1\}\rightarrow\mathbf R$, be trimmed by considering in a bottom-up fashion only those $X\in\{0,1\}^n$ that we are actually interested in, for example those $X$ for which $f(X)\neq 0$ and their supersets. (We will understand $X$ as a subset of $\{1,\ldots,n\}$ whenever this is convenient.) Among the transforms that are amenable to trimming are the zeta and Moebius transforms on the subset lattice. \begin{figure} \parbox{4cm}{\begin{tabular}{c}\includegraphics{trim.1} \includegraphics{trim.2} \includegraphics{trim.3}\end{tabular}} \parbox{11cm}{\small Figure 1: Trimmed evaluation. Originally, Yates's algorithm considers the entire subset lattice (left). We trim the evaluation from below by considering only the supersets of `interesting' points (middle), and from above by abandoning computation when we reach certain points (right).} \end{figure} We use the trimmed algorithms for the zeta and Moebius transforms to expedite Moebius inversion, a generalisation of the principle of inclusion--exclusion, which allows us to compute the cover, packing, and partition numbers.
The fact that these numbers can be computed via Moebius inversion was already used in \cite{BH08,BHKK07,BHK08}, and those parts of the present paper contain little that is new, except for a somewhat more explicit and streamlined presentation in the framework of partial order theory. The fact that we can evaluate both the zeta and Moebius transforms pointwise in such a way that we are done with $X$ before we proceed to $Y$ for every $Y\supset X$ also enables us to further trim computations from what is outlined above. For instance, if we seek a minimum set partition of sets from a family $\EuScript F$ of subsets of $U$, then it suffices to find the minimum partition of all $X$ such that $U\setminus X=S$ for some $S\in \EuScript F$. In particular, we need not consider how many sets it takes to partition $X$ for those $X$ so large that $U\setminus X$ contains no set from $\EuScript F$. The main combinatorial contribution in this paper is that if $\EuScript F$ is the family of maximal independent sets, or the family of dominating sets in a graph, then we show how to bound \eqref{eq: running time} in terms of the maximum degree $\Delta$ using an intersection theorem of Chung \emph{et al.} \cite{CFGS86} that goes back to Shearer's Entropy Lemma. For this we merely need to observe that the intersection of $\EuScript F$ and the closed neighbourhoods of the input graph excludes certain configurations. In summary, via \eqref{eq: running time} the task of bounding the running time for (say) Domatic Number reduces to a combinatorial statement about the intersections of certain families of sets. \subsubsection*{Notation} Yates's algorithm operates on the lattice of subsets of an $n$-element universe $U$, and we find it convenient to work with notation established in partial order theory.
For a family $\EuScript F$ of subsets of $U$, let $\min\EuScript F$ (respectively, $\max\EuScript F$) denote the family of minimal (respectively, maximal) elements of $\EuScript F$ with respect to subset inclusion. The \emph{upper closure} (sometimes called \emph{up-set} or \emph{filter}) of $\EuScript F$ is defined as \[ \upset\EuScript F = \{\, T\subseteq U\colon \text{there exists an $S\in \EuScript F$ such that $S\subseteq T$}\,\}\,. \] For a function $f$ defined on subsets of $U$, the \emph{support} of $f$ is defined as \[ \mathrm{supp}(f)=\{\,X\subseteq U\colon f(X)\neq 0\}\,. \] For a graph $G$, we let $\EuScript D$ denote the family of \emph{dominating sets} of $G$ and $\EuScript I$ the family of \emph{independent sets} of $G$. Also, for a subset $W\subseteq V$ of vertices, we let $G[W]$ denote the subgraph induced by $W$. For a proposition $P$, we use Iverson's bracket notation $[P]$ to mean $1$ if $P$ is true and $0$ otherwise. \section{Trimmed Moebius Inversion} For a family $\EuScript F$ of sets from $\{0,1\}^n$ and a set $X\in \{0,1\}^n$ we will consider $k$-tuples $(S_1,\ldots,S_k)$ with $S_i\in\EuScript F$ and $S_i\subseteq X$. Such a tuple is \emph{disjoint} if $S_{i_1}\cap S_{i_2}=\emptyset$ for all $1\leq i_1<i_2\leq k$, and \emph{covering} if $S_1\cup\cdots \cup S_k= X$. From these concepts we define for fixed $k$ \begin{enumerate} \item the \emph{cover number} $c(X)$, viz.\ the number of covering tuples, \item the \emph{packing number} $p(X)$, viz.\ the number of disjoint tuples, \item the \emph{partition number} or \emph{disjoint cover number} $d(X)$, viz.\ the number of tuples that are both disjoint and covering. \end{enumerate} In this section we show how to compute these numbers in time $|\upset\EuScript F|n^{O(1)}$, rather than $2^nn^{O(1)}$ as in \cite{BHKK07, BHK08}. The algorithms are concise but somewhat involved, and we choose to present them here starting with an explanation of Yates's algorithm. 
Thus, the first two subsections are primarily expository and aim to establish the new ingredients in our algorithms. At the heart of our algorithms lie two transforms of functions $f\colon \{0,1\}^n\rightarrow \mathbf R$ on the subset lattice. The \emph{zeta} transform $f\zeta$ is defined for all $X\in\{0,1\}^n$ by \begin{equation}\label{eq: zeta transform} (f\zeta)(X)\,=\sum_{Y\subseteq X} f(Y)\,. \end{equation} (The notation $f\zeta$ can be read either as a formal operator or as a product of the $2^n$-dimensional vector $f$ and the matrix $\zeta$ with entries $\zeta_{YX}=[Y\subseteq X]$.) The \emph{Moebius} transform $f\mu$ is defined for all $X\in\{0,1\}^n$ by \begin{equation}\label{eq: moebius transform} (f\mu)(X)\,=\sum_{Y\subseteq X}(-1)^{|X\setminus Y|} f(Y)\,. \end{equation} These transforms are each other's inverse in the sense that $f=f\zeta\mu=f\mu\zeta$, a fundamental combinatorial principle called \emph{Moebius inversion}. We can (just barely) draw an example in four dimensions for a function $f$ given by $f(\{4\})=f(\{1,2,4\})=1$, $f(\{1,3\})=2$ and $f(X)=0$ otherwise: \[\vcenter{\hbox{\includegraphics{lattices.1}}} \quad \begin{matrix} \stackrel{\textstyle\zeta}{\longrightarrow}\\ \stackrel{\textstyle\mu}{\longleftarrow} \end{matrix} \quad \vcenter{\hbox{\includegraphics{lattices.2}}}\] Another example that we will use later is the connection between the packing number and the disjoint cover number, \begin{equation} \label{eq: p=dzeta} p=d\zeta\,, \end{equation} which is easy to verify: By definition, \[ (d\zeta)(X)\,=\sum_{Y\subseteq X} d(Y)\,. \] Every disjoint $k$-tuple $(S_1,\ldots,S_k)$ with $S_1\cup\cdots\cup S_k\subseteq X$ appears once on the right hand side, namely for $Y=S_1\cup\cdots\cup S_k$, so this expression equals the packing number $p(X)$. 
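Before turning to Yates's algorithm, the transform pair can be made concrete with a short Python sketch (ours, purely illustrative; it evaluates the defining sums directly, at a cost of $3^n$ additions) that reproduces the four-dimensional example above and checks Moebius inversion.

```python
from itertools import combinations

def subsets(X):
    """All subsets of X, as frozensets."""
    elems = sorted(X)
    for r in range(len(elems) + 1):
        for c in combinations(elems, r):
            yield frozenset(c)

def zeta(f, n):
    """(f zeta)(X) = sum over Y subseteq X of f(Y), for every X."""
    U = frozenset(range(1, n + 1))
    return {X: sum(f.get(Y, 0) for Y in subsets(X)) for X in subsets(U)}

def moebius(f, n):
    """(f mu)(X) = sum over Y subseteq X of (-1)**|X - Y| * f(Y)."""
    U = frozenset(range(1, n + 1))
    return {X: sum((-1) ** len(X - Y) * f.get(Y, 0) for Y in subsets(X))
            for X in subsets(U)}

# The four-dimensional example: f({4}) = f({1,2,4}) = 1, f({1,3}) = 2.
f = {frozenset({4}): 1, frozenset({1, 2, 4}): 1, frozenset({1, 3}): 2}
fz = zeta(f, 4)
g = moebius(fz, 4)                 # Moebius inversion: f = f zeta mu
assert all(g[X] == f.get(X, 0) for X in g)
```

For instance, $(f\zeta)(\{1,2,3,4\})=1+1+2=4$, and applying $\mu$ recovers $f$ exactly.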
\subsection{Yates's algorithm} Yates's algorithm \cite{Yates1937} expects the transform in the form of a function $\upsilon\colon \{0,1\}\times\{0,1\}\rightarrow\mathbf R$ and computes the transformed values \begin{equation} \label{eq: yates sum} \widehat f(X)\ = \sum_{Y\in\{0,1\}^n} \upsilon(x_1,y_1)\cdots\upsilon(x_n,y_n) f(Y)\,. \end{equation} simultaneously for all $X\in\{0,1\}^n$. Here, we let $(x_1,\ldots,x_n)$ and $(y_1,\ldots,y_n)$ denote the binary representations (or, `incidence vectors') of $X$ and $Y$, so $x_j=[j\in X]$ and $y_j=[j\in Y]$. To obtain \eqref{eq: zeta transform} set $\upsilon(x,y)=[y\leq x]$ and to obtain \eqref{eq: moebius transform} set $\upsilon(x,y)=[y\leq x](-1)^{x-y}$. The direct evaluation of \eqref{eq: yates sum} would take $2^n$ evaluations of $f$ for each $X$, for a total of $O(2^n2^nn)=O(4^nn)$ operations. The zeta and Moebius transforms depend only on $Y\subseteq X$, so they would require only $\sum_{X}2^{|X|}=\sum_{0\leq i\leq n}\binom{n}{i} 2^i=3^n$ evaluations. Yates's algorithm is faster still and computes the general form in $O(2^nn)$ operations: \medskip\noindent{\small {\bf Algorithm~Y.} (\emph{Yates's algorithm.}) Computes $\widehat f(X)$ defined in \eqref{eq: yates sum} for all $X\in\{0,1\}^n$ given $f(Y)$ for all $Y\in \{0,1\}^n$ and $\upsilon(x,y)$ for all $x,y\in\{0,1\}$. \begin{itemize} \item[{\bf Y1:}] For each $X\in\{0,1\}^n$, set $g_0(X)=\!f(X)$. \item[{\bf Y2:}] For each $j=1,\ldots,n$ and $X\in\{0,1\}^n$, set \[ g_j(X)= \upsilon([j\in X],0) g_{j-1}(X\setminus \{j\})+ \upsilon([j\in X],1) g_{j-1}(X\cup \{j\})\,. \] \item[{\bf Y3:}] Output $g_n$. \end{itemize}} The intuition is to compute $\widehat f(X)$ `coordinate-wise' by fixing fewer and fewer bits of $X$ in the sense that, for $j=1,\ldots, n$, \begin{equation} \label{eq: y induction} g_j(X)\ = \sum_{y_1,\ldots, y_j\in\{0,1\}}\,\upsilon(x_1,y_1)\cdots \upsilon(x_j,y_j) f(y_1,\ldots,y_j,x_{j+1},\ldots,x_n)\,. 
\end{equation} Indeed, the correctness proof is a straightforward verification (by induction) of the above expression. \subsection{Trimmed pointwise evaluation} To set the stage for our present contributions, observe that both the zeta and Moebius transforms `grow upwards' in the subset lattice in the sense that $\mathrm{supp}(f\zeta),\mathrm{supp}(f\mu)\subseteq \upset \mathrm{supp}(f)$. Thus, in evaluating the two transforms, one ought to be able to trim off redundant parts of the lattice and work only with lattice points in $\upset\mathrm{supp}(f)$. We would naturally like trimmed evaluation to occur in $O(|\upset\mathrm{supp}(f)|n)$ operations, in the spirit of Algorithm~Y. However, to obtain the values at $X$ in Step Y2 of Algorithm~Y, at first sight it appears that we must both `look up' (at $X\cup\{j\}$) and `look down' (at $X\setminus\{j\}$). Fortunately, it suffices to only `look down'. Indeed, for the zeta transform, setting $\upsilon(x,y)=[y\leq x]$ and simplifying Step Y2 yields \begin{equation} \label{eq: y zeta recursion} g_j(X)= [j\in X] g_{j-1}(X\setminus \{j\})+g_{j-1}(X)\,. \end{equation} For the Moebius transform, setting $\upsilon(x,y)=[y\leq x](-1)^{x-y}$ and simplifying yields \begin{equation} \label{eq: y moebius recursion} g_j(X)= -[j\in X] g_{j-1}(X\setminus \{j\})+g_{j-1}(X)\,. \end{equation} Furthermore, it is not necessary to look `too far' down: for both transforms it is immediate from \eqref{eq: y induction} that \begin{equation} \label{eq: y support trimming} \text{$g_j(X)=0$ holds for all $X\notin\upset\mathrm{supp}(f)$ and $j=0,\ldots,n$}\,. \end{equation} In what follows we tacitly employ \eqref{eq: y support trimming} to limit the scope of \eqref{eq: y zeta recursion} and \eqref{eq: y moebius recursion} to $\upset\mathrm{supp}(f)$. The next observation is that the lattice points in $\upset\mathrm{supp}(f)$ can be evaluated in order of their rank, using sets $\EuScript L(r)$ containing the points of rank $r$.
Initially, the sets $\EuScript L(r)$ contain only $\mathrm{supp}(f)$, but we add elements from $\upset\mathrm{supp}(f)$ as we go along. These observations result in the following algorithm for evaluating the zeta transform; the algorithm for evaluating the Moebius transform is obtained by replacing \eqref{eq: y zeta recursion} in Step Z3 with \eqref{eq: y moebius recursion}. \medskip\noindent{\small{\bf Algorithm~Z.} (\emph{Trimmed pointwise fast zeta transform.}) Computes the nonzero part of $f\zeta$ given the nonzero part of $f$. The algorithm maintains $n+1$ families $\EuScript L(0),\ldots,\EuScript L(n)$ of subsets $X\in\{0,1\}^n$; $\EuScript L(r)$ contains only sets of size $r$. We compute auxiliary values $g_j(X)$ for all $1\leq j\leq n$ and $X\in\upset\mathrm{supp}(f)$; it holds that $g_n(X)=(f\zeta)(X)$. \begin{itemize} \item[{\bf Z1:}] For each $X\in\mathrm{supp}(f)$, insert $X$ into $\EuScript L(|X|)$. Set the current rank $r=0$. \item[{\bf Z2:}] Select any $X\in \EuScript L(r)$ and remove it from $\EuScript L(r)$. \item[{\bf Z3:}] Set $g_0(X)=f(X)$. For each $j=1,\ldots, n$, set \[ g_j(X)= [j\in X] g_{j-1}(X\setminus \{j\})+g_{j-1}(X)\,. \] [At this point $g_n(X)=(f\zeta)(X)$.] \item[{\bf Z4:}] If $g_n(X)\neq 0$, then output $X$ and $g_n(X)$. \item[{\bf Z5:}] For each $j\notin X$, insert $X\cup \{j\}$ into $\EuScript L(r+1)$. \item[{\bf Z6:}] If $\EuScript L(r)$ is empty then increment $r\leq n$ until $\EuScript L(r)$ is nonempty; terminate if $r=n$ and $\EuScript L(n)$ is empty. \item[{\bf Z7:}] Go to Z2. \end{itemize}} Observe that the evaluation at $X$ is complete once Step~Z3 terminates, which enables further trimming of the lattice `from above' in case the values at lattice points with higher rank are not required. By symmetry, the present ideas work just as well for transforms that `grow downwards', in which case one needs to `look up'. However, they do not work for transforms that grow in both directions, such as the Walsh--Hadamard transform. 
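A direct Python transcription of Algorithm~Z may help fix the bookkeeping; this sketch is ours (not the paper's implementation): it keeps the full vector $g_0(Y),\ldots,g_n(Y)$ for every processed point $Y$, treats $g_{j-1}(X\setminus\{j\})$ as zero when $X\setminus\{j\}$ lies outside $\upset\mathrm{supp}(f)$, and, unlike Step~Z4, records $g_n(X)$ even when it is zero.

```python
def trimmed_zeta(f, n):
    """Algorithm Z: (f zeta)(X) for every X in the upper closure of supp(f).

    f maps frozensets (subsets of {1,...,n}) to their nonzero values.
    """
    L = [set() for _ in range(n + 1)]       # L[r]: pending points of rank r (Step Z1)
    for X in f:
        L[len(X)].add(X)
    g = {}                                  # g[Y] = [g_0(Y), ..., g_n(Y)]
    out = {}
    for r in range(n + 1):                  # process points in order of rank (Z2, Z6)
        while L[r]:
            X = L[r].pop()
            gX = [f.get(X, 0)]              # g_0(X) = f(X)  (Step Z3)
            for j in range(1, n + 1):
                val = gX[j - 1]             # g_{j-1}(X)
                if j in X:
                    below = g.get(X - {j})  # `look down'; zero outside the closure
                    if below is not None:
                        val += below[j - 1]
                gX.append(val)
            g[X] = gX
            out[X] = gX[n]                  # g_n(X) = (f zeta)(X)
            for j in range(1, n + 1):       # enqueue all supersets of X (Step Z5)
                if j not in X:
                    L[r + 1].add(X | {j})
    return out

# Same example as before: the closure of supp(f) carries the whole transform.
f = {frozenset({4}): 1, frozenset({1, 2, 4}): 1, frozenset({1, 3}): 2}
fz = trimmed_zeta(f, 4)
assert fz[frozenset({1, 2, 3, 4})] == 4 and frozenset({1, 2}) not in fz
```

Replacing the `+=` in the `look down' step by `-=` gives the trimmed Moebius transform, exactly as in \eqref{eq: y moebius recursion}.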
In the applications that now follow, $f$ will always be the indicator function of a family $\EuScript F$. In this case having $\mathrm{supp}(f)$ quickly available translates to $\EuScript F$ being efficiently listable; for example, with polynomial delay. \subsection{Covers} The easiest application of the trimmed Moebius inversion computes for each $X\in\upset\EuScript F$ the cover number $c(X)$. This is a particularly straightforward function of the zeta transform of the indicator function $f$: simply raise each element of $f\zeta$ to the $k$th power and transform the result back using $\mu$. To see this, observe that both sides of the equation \begin{equation}\label{eq: moebius for cover} (c\zeta)(Y)= \bigl((f\zeta)(Y)\bigr)^k \end{equation} count the number of ways to choose $k$-tuples $(S_1,\ldots, S_k)$ with $S_i\subseteq Y$ and $S_i\in\EuScript F$. By Moebius inversion, we can recover $c$ by applying $\mu$ to both sides of \eqref{eq: moebius for cover}. \medskip\noindent{\small {\bf Algorithm~C.} (\emph{Cover number.}) Computes $c(X)$ for all $X\in\upset\EuScript F$ given $\EuScript F$. The sets $\EuScript L(r)$ and auxiliary values $g_j(X)$ are as in Algorithm~Z; also required are auxiliary values $h_j(X)$ for Moebius transform. \begin{itemize} \item[{\bf C1:}] For each $X\in\EuScript F$, insert $X$ into $\EuScript L(|X|)$. Set the current rank $r=0$. \item[{\bf C2:}] Select any $X\in \EuScript L(r)$ and remove it from $\EuScript L(r)$. \item[{\bf C3:}] [Zeta transform.] Set $g_0(X)=[X\in\EuScript F]$. For each $j=1,\ldots, n$, set \[ g_j(X) = [j\in X] g_{j-1} (X\setminus\{j\}) + g_{j-1} (X)\,. \] [At this point it holds that $g_n(X)=(f\zeta)(X)$.] \item[{\bf C4:}] [Evaluate zeta transform of $c(X)$.] Set $h_0(X)=g_n(X)^k$. \item[{\bf C5:}] [Moebius transform.] For each $j=1,\ldots, n$, set \[ h_j(X)= -[j\in X] h_{j-1} (X\setminus\{j\}) + h_{j-1} (X)\,. \] \item[{\bf C6:}] Output $X$ and $h_n(X)$. 
\item[{\bf C7:}] For each $j\notin X$, insert $X\cup \{j\}$ into $\EuScript L(r+1)$. \item[{\bf C8:}] If $\EuScript L(r)$ is empty, then increment $r\leq n$ until $\EuScript L(r)$ is nonempty; terminate if $r=n$ and $\EuScript L(n)$ is empty. \item[{\bf C9:}] Go to C2. \end{itemize}} \subsection{Partitions} What makes the partition problem slightly less transparent is the fact that we need to use dynamic programming to assemble partitions from sets with different ranks. To this end, we need to compute for each rank $s$ the `ranked zeta transform' \[ (f\zeta^{(s)})(X)\,=\sum_{Y\subseteq X,\, |Y|=s} f(Y)\,. \] For rank $s$, consider the number $d^{(s)}(Y)$ of tuples $(S_1,\ldots,S_k)$ with $S_i\in\EuScript F$, $S_i\subseteq Y$, $S_1\cup\cdots\cup S_k=Y$ and $|S_1|+\cdots+|S_k|=s$. Then $d(Y)=d^{(|Y|)}(Y)$. Furthermore, the zeta-transform $(d^{(s)}\zeta)(X)$ counts the number of ways to choose $(S_1,\ldots,S_k)$ with $S_i\subseteq X$, $S_i\in\EuScript F$, and $|S_1|+\cdots+|S_k|= s$. Another way to count the exact same quantity is \begin{equation}\label{eq: partition transform} q(k,s,X)\mspace{7mu}=\sum_{s_1+\cdots+ s_k=s}\mspace{5mu} \prod_{i=1}^k (f\zeta^{(s_i)})(X)\,. \end{equation} Thus we can recover $d^{(s)}(Y)$ from $q(k,s,X)$ by Moebius inversion. As it stands, \eqref{eq: partition transform} is time-consuming to evaluate even given all the ranked zeta transforms, but we can compute it efficiently using dynamic programming based on the recurrence \[ q(k,s,X)=\begin{cases} \sum_{t=0}^s q(k-1,s-t,X) (f\zeta^{(t)})(X),& \text{if $k>1$}\,,\\ (f\zeta^{(s)})(X), & \text{if $k=1$}\,.\end{cases} \] This happens in Step~D4. \medskip\noindent{\small {\bf Algorithm~D.} (\emph{Disjoint cover number.}) Computes $d(X)$ for all $X\in\upset\EuScript F$ given $\EuScript F$.
The sets $\EuScript L(r)$ are as in Algorithm~Z; we also need auxiliary values $g_j^{(s)}(X)$ and $h_j^{(s)}(X)$ for all $X\in\upset\EuScript F$, $1\leq j\leq n$, and $0\leq s\leq n$; it holds that $g_n^{(s)}(X)=(f\zeta^{(s)})(X)$ and $h_n^{(s)}(X)=d^{(s)}(X)$. \begin{itemize} \item[{\bf D1:}] For each $X\in\EuScript F$, insert $X$ into $\EuScript L(|X|)$. Set the current rank $r=0$. \item[{\bf D2:}] Select any $X\in \EuScript L(r)$ and remove it from $\EuScript L(r)$. \item[{\bf D3:}] [Ranked zeta transform.] For each $s=0,\ldots,n$, set $g_0^{(s)}(X)=[X\in\EuScript F][|X|=s]$. For each $j=1,\ldots, n$ and $s=0,\ldots, n$, set \[ g_j^{(s)}(X) = [j\in X] g_{j-1}^{(s)} (X\setminus\{j\}) + g_{j-1}^{(s)} (X)\,. \] [At this point it holds that $g_n^{(s)}(X)=(f\zeta^{(s)})(X)$ for all $0\leq s\leq n$.] \item[{\bf D4:}] [Evaluate zeta transform of $d^{(s)}$.] For each $s=0,\ldots, n$, set $q(1,s)=g_n^{(s)}(X)$. For each $i=2,\ldots,k$ and $s=0,\ldots,n$, set $q(i,s)= \sum_{t=0}^s q(i-1,s-t) g_n^{(t)}(X).$ \item[{\bf D5:}] [Ranked Moebius transform.] For each $s=0,\ldots,n$, set $h_0^{(s)}(X)=q(k,s)$. For each $j=1,\ldots,n$ and $s=0,\ldots,n$, set \[ h_j^{(s)}(X)= -[j\in X] h_{j-1}^{(s)} (X\setminus\{j\}) + h_{j-1}^{(s)} (X)\,. \] [At this point it holds that $h_n^{(s)}(X)=d^{(s)}(X)$ for all $0\leq s\leq n$.] \item[{\bf D6:}] Output $X$ and $h_n^{(|X|)}(X)$. \item[{\bf D7:}] For each $j\notin X$, insert $X\cup \{j\}$ into $\EuScript L(r+1)$. \item[{\bf D8:}] If $\EuScript L(r)$ is empty, then increment $r\leq n$ until $\EuScript L(r)$ is nonempty; terminate if $r=n$ and $\EuScript L(n)$ is empty. \item[{\bf D9:}] Go to D2. \end{itemize}} \subsection{Packings} According to \eqref{eq: p=dzeta}, to compute $p(X)$ it suffices to zeta-transform the partition number. This amounts to running Algorithm~Z after Algorithm~D. (For a different approach, see \cite{BHK08}.) 
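As an end-to-end illustration on a toy instance, the following Python sketch (ours; for clarity it uses the naive transforms rather than the trimmed algorithms) computes the cover numbers via \eqref{eq: moebius for cover}, that is, $c=\bigl((f\zeta)^k\bigr)\mu$, and checks one value against direct enumeration.

```python
from itertools import combinations, product

def subsets(X):
    elems = sorted(X)
    for r in range(len(elems) + 1):
        for c in combinations(elems, r):
            yield frozenset(c)

def zeta(f, n):
    U = frozenset(range(1, n + 1))
    return {X: sum(f.get(Y, 0) for Y in subsets(X)) for X in subsets(U)}

def moebius(f, n):
    U = frozenset(range(1, n + 1))
    return {X: sum((-1) ** len(X - Y) * f.get(Y, 0) for Y in subsets(X))
            for X in subsets(U)}

def cover_number(family, n, k):
    """c(X) for every X: the number of k-tuples from `family` whose union is X."""
    f = {S: 1 for S in family}                           # indicator of the family
    powered = {X: v ** k for X, v in zeta(f, n).items()} # (c zeta) = (f zeta)^k
    return moebius(powered, n)                           # Moebius inversion gives c

# Toy instance: F = {{1,2}, {2,3}, {3}} over U = {1,2,3}, with k = 2.
F = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3})]
c = cover_number(F, 3, 2)
U = frozenset({1, 2, 3})
assert c[U] == sum(1 for S1, S2 in product(F, repeat=2) if S1 | S2 == U)
```

Here $c(U)=4$: the ordered pairs $(\{1,2\},\{2,3\})$, $(\{2,3\},\{1,2\})$, $(\{1,2\},\{3\})$ and $(\{3\},\{1,2\})$.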
\section{Applications} \vskip-0.3cm \subsection{The number of dominating sets in sparse graphs} \label{sec: comb} This section is purely combinatorial. Let $\EuScript D$ denote the dominating sets of a graph. A complete graph has $2^n-1$ dominating sets, and sparse graphs can have almost as many: the $n$-star graph has more than $2^{n-1}$ dominating sets and average degree less than $2$. Thus we ask how large $|\EuScript D|$ can be for graphs with bounded maximum degree. An easy example is provided by the disjoint union of complete graphs of order $\Delta+1$: every vertex subset that includes at least one vertex from each component is dominating, so $|\EuScript D| = (2^{\Delta+1}-1)^{n/(\Delta+1)}$. We shall show that this is in fact the largest possible $|\EuScript D|$ for graphs of maximum degree $\Delta$. Our analysis is based on the following intersection theorem. \begin{lem}[Chung \emph{et al.} \cite{CFGS86}] \label{lem: chung} Let $U$ be a finite set with subsets $P_1,\ldots, P_m$ such that every $u\in U$ is contained in at least $\delta$ subsets. Let\/ $\EuScript F$ be a family of subsets of\/ $U$. For each\/ $1\leq \ell \leq m$, define the projections $\EuScript F_\ell= \{\, F\cap P_\ell \colon F\in \EuScript F\,\}$. Then \[ |\EuScript F|^\delta \leq \prod_{\ell=1}^m |\EuScript F_\ell|\,. \] \end{lem} \begin{thm}\label{thm: dom} The number of dominating sets of an $n$-vertex graph with maximum degree $\Delta$ is at most\/ \( (2^{\Delta+1}-1)^{n/(\Delta+1)}.\) \end{thm} \begin{pf} Let $G=(V,E)$ be a graph with $|V|=n$ and maximum degree $\Delta$. For each $v\in V$, let $A_v$ be the closed neighbourhood around vertex $v$, \begin{equation}\label{eq: Ai def} A_v = \{v\}\cup \{\,u\in V\colon uv\in E\,\}\,. \end{equation} Next, for each $u\in V$ with degree $d(u)<\Delta$, add $u$ to $\Delta-d(u)$ of the sets $A_v$ not already containing $u$ (it does not matter which). Let $a_v=|A_v|$ and note that $\sum_v a_v=(\Delta+1) n$. We want to apply Lemma~\ref{lem: chung}.
To this end, let $U=V$ and $m=n$. By construction, every $u\in V$ belongs to exactly $\delta=\Delta+1$ subsets $A_v$. To get a nontrivial bound on $\EuScript D$ we need to bound the size of $\EuScript D_v = \{\, D\cap A_v \colon D\in\EuScript D\,\}$. Every $D\cap A_v$ is one of the $2^{a_v}$ subsets of $A_v$, but none of the $D\cap A_v$ can be the empty set, because either $v$ or one of its neighbours must belong to the dominating set $D$. Thus $|\EuScript D_v|\leq 2^{a_v}-1$. By Lemma~\ref{lem: chung}, we have \begin{equation}\label{eq: from chung lemma} |\EuScript D|^{\Delta+1} \leq \prod_v (2^{a_v}-1)\,. \end{equation} Since $x\mapsto \log\,(2^x-1)$ is concave, Jensen's inequality gives \[ \frac{1}{n}\sum_v \log\,(2^{a_v}-1) \leq \log\,(2^{\sum_v a_v/n}-1) = \log\,(2^{\Delta+1}-1)\,. \] Taking exponentials and combining with \eqref{eq: from chung lemma} gives $|\EuScript D|^{\Delta+1} \leq (2^{\Delta+1}-1)^n$. \end{pf} \vskip-0.3cm \subsection{Domatic Number} We first observe that a graph can be packed with $k$ dominating sets if and only if it can be packed with $k$ \emph{minimal} dominating sets, so we can consider $k$-packings from $\min \EuScript D$ instead of $\EuScript D$. This has the advantage that $\min\EuScript D$ can be listed faster than $2^n$. \begin{lem}[Fomin \emph{et al.} \cite{FGPS05}] Any $n$-vertex graph has at most\/ $O^*(1.7170^n)$ minimal dominating sets, and they can be listed within that time bound. \end{lem} \begin{thm}\label{thm: applications} For an $n$-vertex graph $G$ with maximum degree $\Delta$ we can decide in time \[ O^*\bigl((2^{\Delta+1}-2)^{n/(\Delta+1)}\bigr) \] whether $G$ admits a packing with $k$ dominating sets. \end{thm} \begin{pf} We use Algorithm~D with $\EuScript F= \min \EuScript D$. By the above lemma, we can complete Step~D1 in time $O^*(1.7170^n)$. The rest of the algorithm requires time $O^*(|\upset \min\EuScript D|)$. 
Since every superset of a dominating set is itself dominating, $\upset\min\EuScript D$ is a sub-family of $\EuScript D$ (in fact, it is exactly $\EuScript D$), so Theorem~\ref{thm: dom} bounds the total running time by \[ O^*\bigl((2^{\Delta+1}-1)^{n/(\Delta+1)}\bigr)\,. \] We can do slightly better if we modify Algorithm~D in Step~D7 to insert $X\cup \{j\}$ only if it excludes at least one vertex for each closed neighbourhood. Put otherwise, we insert $X\cup \{j\}$ only if the set $V\setminus (X\cup \{j\})$ dominates the graph $G$. The graph then has Domatic Number at least $k+1$ if and only if the algorithm reports some $X$ for which $d(X)$ is nonzero. The running time can again be bounded as in Theorem~\ref{thm: dom} but now $D\cap A_v$ can neither be the empty set, nor be equal to $A_v$. Thus the application of Lemma~\ref{lem: chung} can be strengthened to yield the claimed result. \end{pf} \vskip-0.3cm \subsection{Chromatic Number} Our first argument for Chromatic Number is similar; we give a stronger and slightly more complicated argument in \S\ref{sec: chromatic via bip}. We consider the independent sets $\EuScript I$ of a graph. An independent set is not necessarily dominating, but it is easy to see that a \emph{maximal} independent set is dominating. Moreover, the Moon--Moser bound tells us they are few, and Tsukiyama \emph{et al.} tell us how to list them with polynomial delay: \begin{lem}[Moon and Moser \cite{MM65}; Tsukiyama \emph{et al.} \cite{TIAS77}] Any $n$-vertex graph has at most\/ $O^*(1.4423^n)$ maximal independent sets, and they can be listed within that bound. \end{lem} \begin{thm}\label{thm: k independent sets} For an $n$-vertex graph $G$ with maximum degree $\Delta$ we can decide in time \[ O^*\bigl((2^{\Delta+1}-1)^{n/(\Delta+1)}\bigr) \] whether $G$ admits a covering with $k$ independent sets. 
\end{thm} \begin{pf} It is easy to see that $G$ can be covered with $k$ independent sets if and only if it can be covered with $k$ \emph{maximal} independent sets, so we will use Algorithm~C on $\max\EuScript I$. Step~C1 is completed in time $O^*(1.4423^n)$, and the rest of the algorithm considers only the points in $\upset \max \EuScript I$, which all belong to $\EuScript D$. Again, Theorem~\ref{thm: dom} bounds the total running time. \end{pf} \vskip-0.3cm \subsection{Chromatic Number via bipartite subgraphs} \label{sec: chromatic via bip} We can do somewhat better by considering the family $\EuScript B$ of vertex sets of induced \emph{bipartite} subgraphs, that is, the family of sets $B\subseteq V$ for which the induced subgraph $G[B]$ is bipartite. As before, the literature provides us with a nontrivial listing algorithm: \begin{lem}[Byskov and Eppstein \cite{Byskov-thesis}] Any $n$-vertex graph has at most\/ $O^*(1.7724^n)$ maximal induced bipartite subgraphs, and they can be listed within that bound. \end{lem} The family $\max\EuScript B$ is more than just dominating, which allows us to use Lemma~\ref{lem: chung} in a stronger way. \begin{thm} \label{thm: c} For an $n$-vertex graph of maximum degree $\Delta$ it holds that \[ |\upset\max\EuScript B| \leq (2^{\Delta+1}-\Delta-1)^{n/(\Delta+1)}\,. \] \end{thm} \begin{pf} Let $G=(V,E)$ be a graph with $|V|=n$ and maximum degree $\Delta$. Let $\EuScript F=\upset\max\EuScript B$. Let $A_v$ be as in \eqref{eq: Ai def}. With the objective of applying Lemma~\ref{lem: chung}, we need to bound the number of sets in $\EuScript F_v=\{\,F\cap A_v \colon F\in \EuScript F\,\}$. Assume first that $G$ is $\Delta$-regular. Let $A_v=\{ v, u_1,\ldots, u_\Delta \}$. We will rule out $\Delta+1$ candidates for $F\cap A_v$, namely \begin{equation} \label{eq: bip ruled out} \emptyset, \{u_1\}, \ldots, \{u_\Delta\}\notin \EuScript F_v\,. 
\end{equation} This then shows that $|\EuScript F_v|\leq 2^{\Delta+1}-\Delta-1$ and thus the bound follows from Lemma~\ref{lem: chung}. To see that \eqref{eq: bip ruled out} holds, observe that $F\in\EuScript F$ contains a $B\subseteq F$ such that the induced subgraph $G[B]$ is maximal bipartite. To reach a contradiction, assume that there exists a $v\in V$ with $F\cap A_v\subseteq \{u_\ell\}$. Since $B\subseteq F$, we have $B\cap A_v\subseteq \{u_\ell\}$, implying that $v$ does not belong to $B$, and that at most one of its neighbours does. Consequently, $G[B\cup\{v\}]$ is also bipartite, since $v$ can be placed in the partite set opposite its at most one neighbour in $B$. This contradicts the fact that $G[B]$ is maximal bipartite. To establish the non-regular case, we can proceed as in the proof of Theorem~\ref{thm: dom}, adding each $u\in V$ with $d(u)<\Delta$ to some $\Delta-d(u)$ of the sets $A_v$ not already including $u$. Note that by adding $y$ new vertices to $A_v$ originally containing $x$ vertices, we get $|\{F\cap A_v:F\in \EuScript F\}|\leq 2^y(2^x-x-1)$. Next, since $2^y(2^x-x-1)\leq 2^{y+x}-(y+x)-1$ for all non-negative integers $y,x$ and $\log\,(2^x-x-1)$ is a concave function, the bound follows as before via Jensen's inequality. \end{pf} \begin{thm}\label{thm: k independent sets with bip} For an $n$-vertex graph $G$ with maximum degree $\Delta$ we can decide in time \[ O^*\bigl((2^{\Delta+1}-\Delta-1)^{n/(\Delta+1)}\bigr) \] whether $G$ admits a covering with $k$ independent sets. \end{thm} \begin{pf} When $k$ is even, it is easy to see that $G$ can be covered by $k$ independent sets if and only if it can be covered by $k'=k/2$ maximal bipartite sets, so we will use Algorithm~C on $\max\EuScript B$ and investigate whether $c(V)\neq 0$. When $k$ is odd, we again use Algorithm~C with $k'=(k-1)/2$ maximal bipartite sets, but this time we check whether an $X$ is output such that both $c(X)\neq 0$ and $V\setminus X$ is independent in $G$.
In both cases the running time bound follows from Theorem~\ref{thm: c}. \end{pf} \section{Concluding Remarks} \label{sec: con} Since the presented improvements on running time bounds are modest, one can ask whether this is because of weak bounds or because of inherent limitations of the technique. We observe that the running time bounds in Theorems \ref{thm: applications}, \ref{thm: k independent sets}, and \ref{thm: k independent sets with bip} are met by a disjoint union of complete graphs of order $\Delta+1$. Thus, either further trimming or splitting into connected components is required for improved algorithms in this context. We chose to demonstrate the technique for Chromatic and Domatic Number since these are well-known and well-studied. To briefly demonstrate some further application potential, more artificial problem variants such as determining if a $\Delta$-regular graph has domatic number at least $\Delta/2$, or if the square of a $\Delta$-regular graph has chromatic number at most $3\Delta/2$, admit stronger bounds. For example, if $G$ has domatic number at least $d$, $d$ even, then its vertices can be partitioned into two sets, both of which contain $d/2$ dominating sets. This suggests the following meet-in-the-middle strategy. Run Algorithm~D with $\EuScript F$ equal to all dominating sets and $k=d/2$, but modify Step~D7 to insert $X\cup \{j\}$ only if $|A_v\setminus (X\cup\{j\})|\geq d/2$ holds for all vertices $v$. At termination, we check whether the algorithm has output two sets $X$ and $Y$ such that $X\cup Y=V$ and $d(X),d(Y)>0$. (For example, one can check for duplicates in a table with entry $\{X,V\setminus X\}$ for each output $X$ with $d(X)>0$.) This algorithm variant considers only sets with many forbidden intersections with the neighbourhoods of vertices, which translates into stronger bounds via Lemma~\ref{lem: chung}.
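The extremal role of disjoint unions of complete graphs is easy to confirm by brute force; the Python sketch below (ours, purely for illustration) counts the dominating sets of two disjoint copies of $K_4$ (so $n=8$, $\Delta=3$) and finds exactly $(2^{\Delta+1}-1)^{n/(\Delta+1)}=15^2=225$ of them, matching Theorem~\ref{thm: dom} with equality.

```python
from itertools import combinations

def dominating_sets(n, edges):
    """Brute-force list of all dominating sets of a graph on vertices 0,...,n-1."""
    closed = [{v} for v in range(n)]             # closed neighbourhoods A_v
    for u, v in edges:
        closed[u].add(v)
        closed[v].add(u)
    doms = []
    for r in range(n + 1):
        for D in combinations(range(n), r):
            S = set(D)                           # D dominates iff it meets every A_v
            if all(closed[v] & S for v in range(n)):
                doms.append(frozenset(D))
    return doms

# Two disjoint copies of K_4: a set dominates iff it meets both components.
edges = [(u, v) for u, v in combinations(range(4), 2)]
edges += [(u + 4, v + 4) for u, v in combinations(range(4), 2)]
assert len(dominating_sets(8, edges)) == (2 ** 4 - 1) ** 2
```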
\section{Introduction} The Skyrme model of nuclear physics \cite{Skyrme} describes baryons as topological solitons in a non-linear field theory of $\pi$-mesons. It has enjoyed qualitative success in describing both single nucleon properties and the nucleon-nucleon interaction \cite{ANW,JackRho,Jacks}. It is thus of interest to try to apply the model to larger nuclei. In theory, it should be possible to calculate the binding energies and gamma ray spectra of all nuclei, without needing to introduce additional free parameters. This is a very attractive prospect. However, solutions of the classical field theory must be quantised before any comparison to the real world can be made. The Skyrme model is not renormalisable, so there is little hope of a full treatment as a quantum field theory. Instead, a semiclassical quantisation is usually attempted, based on a limited number of degrees of freedom about a given classical solution. The earliest efforts at quantisation included only zero mode collective coordinates: spin and isospin. For a single skyrmion, this produces a description of nucleons and the $\Delta$ resonance in modest agreement with experiment \cite{AN,ANW}. However, for solutions of higher baryon number $B$, a simple collective coordinate quantisation includes effects of order $\hbar^{2}$, while ignoring effects of order $\hbar$. For this reason, there has recently been a growing acceptance that it is necessary to include at least some low-lying vibrational modes. This requires a better understanding of the structure and dynamics of multi-skyrmion configurations, at least in the immediate neighbourhood of the minimal energy solutions. There has been considerable recent progress in this direction, based on Manton's old idea of representing low energy solitonic excitations as motion on a finite dimensional moduli space. 
Studies by Leese et al \cite{Leese} and Walet \cite{Walet}, employing an instanton approximation for the Skyrme fields, gave encouraging results. Recently, Barnes et al directly computed the normal mode spectra of $B=2$, 3 and 4 multiskyrmions \cite{deut,alpha,conf}. A remarkable structure emerged in all three spectra. The lowest frequencies corresponded to known attractive channel scatterings; the next mode up was the `breather' (a trivial size fluctuation), followed by various higher multipole breathing modes. Barnes et al were also able to classify the vibrations according to the symmetry groups of the respective static solitons. Remarkably, the $B=4$ vibrational modes below the breather fell into representations exactly corresponding to those for small zero-mode deformations of the BPS 4-monopole solution. The same phenomenon occurred for the low frequency vibrations of the deuteron. This led to further investigation of the connection between BPS monopoles and Skyrmions, which has now been understood by Houghton et al in terms of rational maps \cite{Houghton}. In fact, Houghton et al were able to show that the correspondence should continue for any baryon number (monopole charge), and were thus able to predict the lowest $4B-7$ vibrational modes for $B=3$ and 7. Their $B=3$ predictions were then confirmed by \cite{conf}. The higher multipole breathing modes were not so well understood. In the current Letter, we shall present a simple geometrical explanation for these modes, which relies only on the symmetry of the static solutions \cite{Battye}. We predict $4B-7$ such modes for a multiskyrmion of baryon number $B$, and show how these may be classified as representations of the symmetry group of the static soliton. Our predictions match exactly all known vibrations for $B=2$, 3 and 4; in addition, we predict a further triplet of modes for $B=4$. We also make detailed predictions for $B=5$, 6 and 7. Our idea is extremely simple, but has far-ranging consequences. 
Perhaps the most startling of these is the implication that the $N$-skyrmion moduli space may be of a higher dimension than was previously thought. Interestingly, our theory is in agreement with the $B=3$ results of \cite{conf}, which already contradicted the ``standard wisdom'' in this regard. We are thus able to clarify an outstanding puzzle. Overall, when added to the already considerable progress made by \cite{deut,alpha,conf,Houghton,Battye}, this represents a major new insight into the moduli space approach. \section{Classification of Multipole Breathing Modes} Classical multiskyrmion solutions are now known up to baryon number $B=9$ \cite{Braaten,Battye}. All display considerable symmetry: the $B=1$ solution is spherical, while the deuteron has axial symmetry. For $B \ge 3$, the minimal energy multiskyrmions are polyhedra, with $2B-2$ faces. For example, $B=3$ is a tetrahedron and $B=4$ a cube. Baryon density is peaked at the vertices of these polyhedra, and to a lesser extent along the edges joining them. Lines of zero baryon density run out from the origin through the midpoints of each face. These lines are rather special in another way. Consider the inverse map from the field space $SU(2)$ back to real space. In general, for a configuration of baryon number $B$, each point in field space maps to $B$ points in real space. But there are certain special field values which map to fewer points in real space. For a minimal energy multiskyrmion, the image of these points, which we shall refer to as the branch locus, corresponds to the zero baryon density lines (or branch lines) running through each face. Since there are $2B-2$ faces, there are also $2B-2$ branch lines. Note that for this purpose, a line of zero baryon density which runs straight through the origin counts as two branch lines. Our idea is to consider vibrations of the branch locus, while leaving the baryon density distribution of the multiskyrmion roughly intact. 
We leave the branch locus fixed at the origin, but allow the locations at which the branch lines intersect a sphere of fixed radius to vary. There are $2B-2$ branch lines, each of which has two angular degrees of freedom, giving a total of $4B-4$ modes. Three of these will correspond to global rotation, so we are left with $4B-7$ non-trivial vibrations. The latter clearly correspond to complex breathing motions: two branch lines moving towards each other will compress the baryon density between them, whereas motion away from baryon density peaks will cause expansion. The non-trivial vibrational modes must lie in multiplets (of degenerate frequency), transforming under irreducible representations of the symmetry group of the static soliton. These representations can be found by decomposing the $(4B-4)$-dimensional representation formed by the angles of the branch lines in spherical polar coordinates. The irreducible component(s) corresponding to the rotational zero modes can then be removed, leaving only the true vibrational modes. Note that while the spherical polar angles form the most convenient general definition of the $(4B-4)$-dimensional representation, in practice it is usually possible to find a simpler parametrisation for the movement of the branch lines. This is the case for most examples we will consider here. Let us now consider detailed predictions for individual multiskyrmions, beginning with the deuteron. The minimal energy $B=2$ solution is a torus, with axial symmetry plus a reflection symmetry in the plane of the torus. The symmetry group is $D_{\infty h}$, axial symmetry extended by inversion. The two branch lines occupy the axis of symmetry; take this to be the $z$-axis. Then the $x$- and $y$-axes at some points $z = \pm \alpha$ are equivalent to the 4-dimensional angular basis. 
A simple computation reveals that the characters of this representation are uniformly zero, other than the identity with value 4, and rotations by angle $\theta$ with value $4 \cos \theta$. So using the notation of \cite{deut}, we have two 2-dimensional irreducible representations: $1^{+}$ and $1^{-}$. The former corresponds to rotations around axes perpendicular to $z$, so we are left with $1^{-}$ as a true vibration. This is exactly the mode found by \cite{deut}. It is a ``dipole'' breathing motion, where one side of the torus inflates while the other is compressed. The two-fold degeneracy corresponds to motion aligned along the $x$- or $y$-axes. The classical $B=3$ multiskyrmion is a tetrahedron; its branch lines pass through a (dual) tetrahedron. There is no obviously simple way to parametrise the 8-dimensional angular representation in this case (although we have performed this calculation and checked the results below). However, the possible motions of the branch lines are quite limited, and are easy to see. One can have a dipole motion, where three of the branch lines move towards the fourth. There are four obvious directions for this motion, but exciting all four at once gives a trivial size fluctuation, so only three are independent. A quick check of the symmetry of this vibration under the tetrahedral group $T_{d}$ shows its representation to be $F_{2}$, using the notation of Hamermesh \cite{Hamermesh}. (We will use this notation throughout, except where otherwise explicitly noted.) It is also possible to split the branch locus into two pairs of lines, with the lines in each pair moving towards each other. This creates a ``quadrupole'' breathing motion, where two opposite edges of the multiskyrmion are compressed, while the other four inflate; and vice versa. This motion has an obvious threefold basis, but only two directions are independent, so the corresponding representation is $E$. 
Again, these two modes correspond to those observed by Barnes et al \cite{conf}. The $B=4$ multiskyrmion is a cube; symmetry group $O_{h}$. The branch lines are just the Cartesian axes. We label the irreducible representations by those of the group of rotations of a cube ($O$), with superscripts indicating parity. Decomposition of the 12-dimensional branch line angle representation gives $F_{1}^{+}$, $F_{1}^{-}$, $F_{2}^{+}$ and $F_{2}^{-}$, all 3-dimensional. The $F_{1}^{+}$ corresponds to rotational zero-modes. $F_{1}^{-}$ is a dipole breathing motion, and $F_{2}^{+}$ a quadrupole, similar to the modes described above for $B=2$ and $B=3$. Both of these modes were observed, with the right symmetries, by \cite{alpha}. The remaining vibration, $F_{2}^{-}$, corresponds to a twisting motion: grab diagonally opposite corners of the cube, and twist them in opposite directions. This mode was not seen by \cite{alpha}, but it may well have considerably higher energy than the other two. Finding it would provide striking confirmation of our theory. The next two multiskyrmions have rather less symmetry: $D_{2d}$ and $D_{4d}$ respectively. Pictures of them may be found in \cite{Battye}. The $B=5$ solution has four square and four pentagonal faces. The branch lines form two slightly distorted tetrahedral configurations: see Figure~\ref{fig:d2d} for a schematic diagram. The symmetry group $D_{2d}$ maps each of these ``tetrahedra'' to themselves; they cannot be interchanged. The 16-dimensional angular representation can therefore be reduced to two identical 8-dimensional representations. Each of these can then be decomposed into $A_{1}$, $A_{2}$, $B_{1}$, $B_{2}$ and $E$ (twice). The rotational zero modes are $E$ and $A_{2}$. Again, we have dipole and quadrupole type breathing motions. 
Since the $B=5$ multiskyrmion is slightly elongated about one axis, both the dipole and quadrupole motions split into a singlet and a doublet: $B_{2}$ and $E$ for the dipole, $A_{1}$ and $E$ for the quadrupole. There are three 1-dimensional twisting motions, two $B_{1}$'s and an $A_{2}$. The remaining motions correspond to two groups of four axes each vibrating in a dipole or quadrupole, but with the two groups out of phase. Hence the remaining $B_{2}$ is an axial dipole anti-dipole, $E$ a transverse dipole anti-dipole, and $A_{1}$ an axial quadrupole anti-quadrupole. The $B=6$ multiskyrmion has two square faces and eight pentagonal faces. Arrange four pentagons around each square, then rotate one of the squares by $45^{\circ}$ so that the two halves can be fitted together. This configuration, like $B=5$, is slightly elongated in one direction. The symmetry group $D_{4d}$ is not included in Hamermesh \cite{Hamermesh}, so we write out its character table (see Table~\ref{tab:char}) in order to define notation for the representations. Our standard decomposition of the 20-dimensional representation gives $1^{+}$, $1^{-}$, $A^{+}$, $A^{-}$, $C$ (twice), $B^{+}$ (three times) and $B^{-}$ (three times). The rotational modes to be discarded are $A^{+}$ and $B^{-}$. The situation is now too complicated for us to be confident of identifying the kind of motion corresponding to each representation. However, we can make a few obvious assignments. There will be axial and transverse dipole motions ($1^{-}$ and $A^{-}$); also axial and transverse quadrupole modes ($1^{+}$ and $C$). $B^{+}$ and the remaining $C$ could be twisting modes. Bending or wobbling the $z$-axis (the branch lines passing through the two squares) while leaving the other axes fixed should give $B^{+}$ and $B^{-}$ respectively. This leaves two modes unidentified; it will be interesting to examine them when they are computed. 
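As a consistency check on Table~\ref{tab:char}, its rows can be verified against the character orthogonality relations $\sum_c |c|\,\chi_i(c)\,\chi_j(c)=|G|\,\delta_{ij}$. The short script below is our own sketch; the conjugacy class sizes $1,2,1,2,2,4,4$ for $E$, $2C_4$, $C_2$, $2S_8$, $2S_8^3$, $4\sigma C$, $4\sigma S$ are an assumption of the check. It also confirms that the multiplicities quoted above sum to the required 20 dimensions:

```python
import math

# assumed class sizes for E, 2C4, C2, 2S8, 2S8^3, 4 sigma C, 4 sigma S
sizes = [1, 2, 1, 2, 2, 4, 4]
r2 = math.sqrt(2)
chars = {                       # rows of the D_4d character table
    "1+": [1, 1, 1, 1, 1, 1, 1],
    "1-": [1, 1, 1, -1, -1, 1, -1],
    "A+": [1, 1, 1, 1, 1, -1, -1],
    "A-": [1, 1, 1, -1, -1, -1, 1],
    "B+": [2, 0, -2, r2, -r2, 0, 0],
    "B-": [2, 0, -2, -r2, r2, 0, 0],
    "C":  [2, -2, 2, 0, 0, 0, 0],
}

def inner(u, v):
    """Class-weighted inner product of two characters."""
    return sum(s * a * b for s, a, b in zip(sizes, u, v))

order = sum(sizes)              # |D_4d| = 16
ok = all(abs(inner(chars[i], chars[j]) - (order if i == j else 0)) < 1e-9
         for i in chars for j in chars)

# stated decomposition of the 20-dimensional branch-line representation
mult = {"1+": 1, "1-": 1, "A+": 1, "A-": 1, "C": 2, "B+": 3, "B-": 3}
dim = sum(m * chars[r][0] for r, m in mult.items())   # chi(E) = dimension
```

Both checks pass: every row has norm $|G|=16$, distinct rows are orthogonal, and the multiplicities reproduce the full 20 dimensions.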
\begin{table}[htbp] \begin{center} \leavevmode \begin{tabular}[c]{|c|c|c|c|c|c|c|c|} \hline \ & $E$ & $C_{4}$ & $C_{2}$ & $S_{8}$ & $S_{8}^3$ & $\sigma C$ & $\sigma S$ \\ \hline $1^{+}$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ $1^{-}$ & 1 & 1 & 1 & -1 & -1 & 1 & -1 \\ $A^{+}$ & 1 & 1 & 1 & 1 & 1 & -1 & -1 \\ $A^{-}$ & 1 & 1 & 1 & -1 & -1 & -1 & 1 \\ $B^{+}$ & 2 & 0 & -2 & $\sqrt{2}$ & $-\sqrt{2}$ & 0 & 0 \\ $B^{-}$ & 2 & 0 & -2 & $-\sqrt{2}$ & $\sqrt{2}$ & 0 & 0 \\ $C$ & 2 & -2 & 2 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} \caption{Character table for $D_{4d}$ } \label{tab:char} \end{center} \end{table} Finally, we consider $B=7$. This is a perfect dodecahedron, with symmetry group $I_{h}$. We label the representations of this group by those of ${\rm Alt}_{5}$, the even permutations of five objects, with superscripts to indicate parity. We make no attempt to identify the physical appearances of these modes. We can, however, predict their symmetries. Our decomposition gives $5^{+}$, $5^{-}$, $4^{+}$, $4^{-}$, $3^{+}$ and $3^{-}$. The triplet $3^{+}$ corresponds to the discarded zero-modes. Interestingly, these representations correspond to those predicted by \cite{Houghton} for the $4B-7$ lower frequency scattering modes, although these authors do not give parity assignments. This also happens for the $B=3$ tetrahedron, which we attribute to the self-duality of a tetrahedron. In all other cases, however, the $4B-7$ higher breathing modes have quite different symmetries to the $4B-7$ scattering modes. \section{Discussion and conclusions} We have proposed a simple explanation of the origin of the higher multipole breathing modes observed in multiskyrmions; namely, that they correspond to vibrations of the branch locus, or lines of zero baryon density. Our results are summarised in Table~\ref{tab:predict}. We predict $4B-7$ such modes for a multiskyrmion of baryon number $B$ ($4B-6$ for the deuteron). 
Together with the $4B-7$ lower frequency scattering modes, plus nine zero modes and one trivial breather, this gives a total of $8B-4$ modes ($8B-3$ for the deuteron). This would appear to resolve a long-standing ``counting problem''. A single skyrmion has 6 degrees of freedom, which leads to the naive expectation that an $N$-nucleon system should have $6N$ degrees of freedom, and hence be described by a $6N$-dimensional moduli space. It can be argued that this dimensionality should be increased by one, since all Skyrme configurations have a trivial size fluctuation. Since minimal energy solutions for $B>1$ are single large solitons, they therefore have a maximum of 9 zero modes. This gave rise to the hope that multiskyrmions would have exactly $6B-9$ low-lying vibrational modes, and that these vibrations might in fact correspond directly to the ``broken zero modes'' of well-separated skyrmions. The results of Barnes et al for $B=2$ and $B=4$ seemed to support this notion; they found exactly the right number of vibrations in each case. However, they found one mode too many for $B=3$, and in a multiplet which prevented separating this ``extra mode'' from the others. Since the lower half of this spectrum perfectly matched the predictions of \cite{Houghton}, however, it was hard to discard their results. We now predict exactly the multiplets Barnes et al observed. This is strong evidence against a $(6B+1)$-dimensional moduli space. So what is going on? It would seem either that the moduli space approach is wrong, or that Skyrme configurations (for $B \ge 3$) have $2B-5$ more degrees of freedom than was previously thought. This means that knowing the positions and orientations of $N$ skyrmions is not sufficient information to determine the field everywhere in space. What other structure could there be? One possibility is that the branch locus contains additional information. 
Another is that additional dynamical variables are required, for example arising from interactions between angular velocities of different skyrmions. Further speculation is beyond the scope of this Letter, but the current results certainly indicate that further investigation of the structure of the branch loci would be worthwhile. RM would like to thank Wojtek Zakrzewski for useful discussions on the background to this work. KB is supported by PPARC; RM partially by the Department of Mathematical Sciences, Durham University. \begin{table}[htbp] \begin{center} \leavevmode \begin{tabular}{|c|c|c|c|c|} \hline B & Symmetry & Mode & Degeneracy & Description \\ \hline 2 & $D_{\infty h}$ & $1^{-}$ & 2 & dipole \\ \hline 3 & $T_{d}$ & $F_{2}$ & 3 & dipole \\ \ & \ & $E$ & 2 & quadrupole \\ \hline 4 & $O_{h}$ & $F_{1}^{-}$ & 3 & dipole \\ \ & \ & $F_{2}^{+}$ & 3 & quadrupole \\ \ & \ & $F_{2}^{-}$ & 3 & twist \\ \hline 5 & $D_{2d}$ & $B_{2}$ & 1 & axial dipole \\ \ & \ & $E$ & 2 & transverse dipole \\ \ & \ & $A_{1}$ & 1 & axial quadrupole \\ \ & \ & $E$ & 2 & transverse quadrupole \\ \ & \ & $B_{1}$ & 1 & twist \\ \ & \ & $B_{1}$ & 1 & twist \\ \ & \ & $A_{2}$ & 1 & twist \\ \ & \ & $B_{2}$ & 1 & axial `anti-dipole' \\ \ & \ & $E$ & 2 & transverse `anti-dipole' \\ \ & \ & $A_{1}$ & 1 & axial `anti-quadrupole' \\ \hline 6 & $D_{4d}$ & $1^{-}$ & 1 & axial dipole \\ \ & \ & $B^{+}$ & 2 & transverse dipole \\ \ & \ & $1^{+}$ & 1 & axial quadrupole \\ \ & \ & $C$ & 2 & transverse quadrupole \\ \ & \ & $A^{+}$ & 1 & twist \\ \ & \ & $C$ & 2 & twist \\ \ & \ & $B^{+}$ & 2 & $z$-axis bend \\ \ & \ & $B^{-}$ & 2 & $z$-axis wobble \\ \ & \ & $B^{+}$ & 2 & \ \\ \ & \ & $B^{-}$ & 2 & \ \\ \hline 7 & $I_{h}$ & $5^{-}$ & 5 & dipole \\ \ & \ & $5^{+}$ & 5 & quadrupole \\ \ & \ & $4^{+}$ & 4 & \ \\ \ & \ & $4^{-}$ & 4 & \ \\ \ & \ & $3^{-}$ & 3 & \ \\ \hline \end{tabular} \caption{Summary of Predictions for $B=2$ to $B=7$} \label{tab:predict} \end{center} \end{table} \begin{figure}[htbp] 
\begin{center} \leavevmode \psfig{file=d2d.eps,width=1.6in} \caption{Schematic diagram of branch lines of $B=5$ multiskyrmion. Solid lines pass through square faces; dotted lines through pentagons.} \label{fig:d2d} \end{center} \end{figure}
\section{Introduction} Colorectal cancer (CRC) is the third most commonly diagnosed cancer and the second leading cause of cancer death in the United States for men and women combined \cite{doi:10.3322/caac.21551}. According to the American Cancer Society's estimation, there are approximately 145,600 new cases of colorectal cancer and 51,020 deaths from the disease projected for 2019 in the United States \cite{doi:10.3322/caac.21551}. Fortunately, colorectal cancer usually develops slowly over many years, and the disease can be prevented if adenomas are detected and removed before they progress to cancer \cite{pmid:27493942}. Moreover, colorectal cancer is most curable if it is detected at an early stage \cite{pmid:27493942}. Therefore, early detection and diagnosis play a crucial role in colorectal cancer prevention and treatment. \begin{figure}[htbp] \centerline{\includegraphics[width=0.45\textwidth]{figure1.png}} \caption{Appearance variability of polyps: polyps vary in their shape, size, texture, and location within the large colon. They may have a similar color and shape to the colon intima, and are thus difficult to detect.} \label{fig} \end{figure} Colorectal polyps are abnormal tissues growing on the intima of the colon or rectum with a high risk of developing into colorectal cancer. Colorectal polyp detection and removal at an early stage is an effective way to prevent colorectal cancer. Currently, colonoscopy is the primary method for colorectal polyp detection, during which a tiny camera is navigated into the colon in order to find and remove polyps. However, according to Leufkens \emph{et al.}'s study \cite{factors}, one out of four polyps can be missed during colonoscopy procedures due to various human factors. Therefore, there is a critical need for an efficient and accurate computer-aided colorectal polyp detection system for colonoscopy. 
Accurate computer-aided colorectal polyp detection is a challenging problem since ({\romannumeral1}) colorectal polyps vary greatly in size, orientation, color, and texture and ({\romannumeral2}) many polyps do not stand out from the surrounding mucosa. As a result, some colorectal polyps are difficult to detect. Figure 1 shows a few examples to illustrate these challenges. Previous polyp detection methods adopt hand-crafted features such as texture, color, or shape and pass them to a detection framework to find the position of polyps \cite{6187710,Gross_polypsegmentation,bernal:towards}. However, these methods are not effective enough to be used in realistic clinical practice. Recently, with the renaissance of deep learning, CNN-based deep neural network architectures have been widely used and proven to be a powerful approach for colorectal polyp detection and segmentation. Some studies used object detection methods, such as Faster R-CNN \cite{DBLP:journals/corr/RenHG015} or YOLO \cite{DBLP:journals/corr/RedmonDGF15}, to find and indicate polyps with bounding boxes \cite{Mo2018AnEA,app9122404}. While these object-detection-based methods show excellent performance in recall and precision, they cannot localize the polyps accurately at the pixel level. Physicians still need to find the polyp boundaries within the proposed bounding box during colonoscopy in order to remove the polyps. Therefore, a computer-aided system for colorectal polyp segmentation has great clinical significance and can reduce physicians' workload as well as segmentation errors caused by subjectivity. Some other studies used semantic image segmentation methods such as FCN (Fully Convolutional Networks) \cite{DBLP:journals/corr/LongSD14}, U-Net \cite{DBLP:journals/corr/RonnebergerFB15}, or SegNet \cite{DBLP:journals/corr/BadrinarayananK15} for colorectal polyp detection and have shown great potential in this application \cite{Li2017ColorectalPS,Wang2018,guo2019giana,Brandao2017FullyCN}. 
However, these methods were constructed with the traditional CNN structure, which contains repeated max-pooling or downsampling (striding) operations. Note that max-pooling and downsampling operations were originally designed for the image classification problem. While they reduce the feature map resolution for high-level feature extraction, localization information that is very important for segmentation is decimated. Hence, these segmentation methods cannot produce accurate predictions and detailed segmentation maps along polyp boundaries. In this paper, we propose a novel convolutional neural network architecture for colorectal polyp segmentation. The network consists of an encoder that extracts multi-scale information from different layers and a decoder that expands the learned information into an output segmentation map of the same size as the original image. For the encoder, we remove the downsampling operation from the last convolution block to reduce feature map resolution loss and, hence, increase the localization accuracy. Meanwhile, we introduce dilated convolution to enlarge the field of view of the convolution kernels, so that high-level abstract features can be learned from the input colonoscopy image without downsampling. For the decoder, we first upsample all the feature maps of varied sizes at different layers to the size of the original image and then concatenate them together to generate an output segmentation map. Compared to U-Net's decoder architecture, our model concatenates multi-scale feature maps at the same time rather than combining consecutive feature maps step by step along the path. Furthermore, we apply only bilinear interpolation rather than deconvolution to upsample the feature maps to the desired size with fewer parameters. 
In summary, the key contributions of this paper include: \begin{itemize}[leftmargin=*,noitemsep,topsep=0pt] \item We propose a novel convolutional neural network based on U-Net's encoder-decoder architecture for colorectal polyp segmentation. We remove the downsampling from the last block of the backbone, which causes spatial resolution reduction, and meanwhile introduce dilated convolution to enlarge the field of view for learning high-level features. \item We apply several post-processing methods, such as morphological transformations that smooth the segmentation boundaries, and combine nearby objects that originally belong to one large polyp in the output segmentation map to improve polyp detection performance. \item We conduct extensive experiments on two mainstream datasets, and our experimental results show that our method significantly outperforms previous methods, with an F1-score of 96.11\% on CVC-ClinicDB and 80.86\% on ETIS-Larib Polyp DB. \end{itemize} \begin{figure*}[htbp] \centerline{\includegraphics[width=0.86\textwidth]{figure2_new.png}} \caption{Model architecture. Four blocks in the encoder represent the outputs of the last four stages of Resnet-50. Feature maps are concatenated together and transformed into a 1-dimensional segmentation map after a stack of $3\times3$ and $1\times1$ convolution layers in the decoder. Best viewed in color.} \label{fig} \end{figure*} \section{Related Work} Early polyp segmentation methods usually learn specific patterns from hand-crafted features and train a classifier to distinguish polyps from colonoscopy images. Gross \emph{et al.} \cite{Gross_polypsegmentation} and Bernal \emph{et al.} \cite{bernal:towards} used color and texture information as features to identify the location of polyps. Ganz \emph{et al.} \cite{6187710} and Mamonov \emph{et al.} \cite{6782378} used shape or contour information for polyp segmentation. 
However, these hand-crafted-feature-based methods only perform well for typical polyps with fixed patterns. To improve polyp segmentation performance, CNNs have been adopted to extract more powerful and discriminative features \cite{DBLP:journals/corr/TamakiSHRKKYMT16,7545996}. In recent years, deep learning has achieved remarkable performance in computer vision, making it possible to classify at the pixel level (semantic image segmentation). FCN, U-Net, and SegNet are three popular methods in this area. FCN \cite{DBLP:journals/corr/LongSD14} was proposed by Long \emph{et al.} and proven to be a powerful tool for semantic image segmentation. They replaced the fully-connected layers of traditional CNNs with convolution layers and used a deconvolution layer to upsample the feature maps to the size of the input image for pixel-level classification. Recently, FCN has become a popular technique in medical image segmentation. Because of its promising potential, a number of studies used FCN with different backbone networks for colorectal polyp segmentation and obtained promising results \cite{Brandao2017FullyCN,Li2017ColorectalPS}. U-Net \cite{DBLP:journals/corr/RonnebergerFB15}, proposed by Ronneberger \emph{et al.}, is another important method in semantic image segmentation; it was initially proposed for biomedical image segmentation. The authors developed an encoder-decoder architecture that contains two symmetric paths to combine the feature maps from each layer, and showed that the method achieved good performance on biomedical segmentation applications. Inspired by U-Net, Mohammed \emph{et al.} proposed Y-Net \cite{DBLP:journals/corr/abs-1806-01907}, which fuses two encoders with and without pre-trained VGG19 weights to fill the gap between the large variation of testing data and the limited training data, a common challenge in medical image analysis tasks. 
SegNet \cite{DBLP:journals/corr/BadrinarayananK15} was proposed by Badrinarayanan \emph{et al.} for semantic image segmentation. They built a model with the same encoder network as U-Net but used a different form of decoder network. SegNet uses the max pooling indices from the corresponding encoder feature maps to upsample (without learning) the feature maps in the decoder network, which improves boundary delineation and reduces the number of parameters. In \cite{Wang2018}, Wang \emph{et al.} applied the SegNet architecture in their method for colorectal polyp segmentation. Recently, DeepLab \cite{DBLP:journals/corr/ChenPK0Y16} has become one of the most popular frameworks for semantic image segmentation. Benefiting from dilated convolution, which was first proposed by Yu \emph{et al.} \cite{YuKoltun2016}, DeepLab can learn multi-scale features from different receptive fields without the spatial resolution reduction caused by consecutive pooling operations or convolution striding (downsampling). Therefore, the model can learn increasingly abstract feature representations at a higher resolution, which retains more localization information. Inspired by dilated convolution and DeepLab, Guo \emph{et al.} \cite{guo2019giana} used dilated convolution in their model and achieved good performance on polyp segmentation. \section{Method} \subsection{Model Architecture} The architecture of our model is shown in Figure 2. Inspired by U-Net \cite{DBLP:journals/corr/RonnebergerFB15}, we construct an end-to-end convolutional neural network which consists of a contracting part (encoder) on the left and an expansive part (decoder) on the right. The model takes a single colonoscopy image as the input and outputs on the last layer a binary segmentation mask of polyps of the same size as the input image. 
\subsubsection{Encoder} Different from U-Net, which constructs the encoder with repeated application of two $3\times3$ convolutions followed by ReLU and a $2\times2$ max pooling operation, we use a more powerful network, Resnet-50 \cite{he2016deep}, as the backbone of the encoder. Resnet-50 is built with ``bottleneck'' architectures \cite{he2016deep}, each consisting of a stack of 3 layers. The three layers are $1\times1$, $3\times3$, and $1\times1$ convolutions, where the $1\times1$ layers are responsible for reducing and then increasing (restoring) dimensions, leaving the $3\times3$ layer a bottleneck with smaller input and output dimensions. The last fully connected layer used for classification is truncated. Moreover, the Resnet-50 backbone uses weights pre-trained on the ImageNet \cite{imagenet_cvpr09} dataset, which brings already learned features or patterns, such as lines, curves, angles, or edges that commonly appear in real-world images. These common features and patterns help the model converge faster and more smoothly during the training process. The feature maps of each stage of Resnet-50 are stored as the input of the decoder. One of the problems of applying existing network frameworks as the backbone for semantic image segmentation is the usage of downsampling operations. The downsampling operation (convolution with stride 2 in Resnet), commonly used in deep convolutional neural networks to enlarge the field of view, was originally designed for image classification problems. Downsampling reduces the feature resolution to enlarge the field of view of the kernels and thus helps the model to learn high-level abstract features for classification. However, this benefit is achieved at the cost of localization accuracy, since detailed information important for segmentation is decimated. 
To compensate for this information loss, inspired by Chen \emph{et al.} \cite{DBLP:journals/corr/ChenPK0Y16}, we rebuild our backbone by introducing dilated convolution into the last block of the Resnet-50, shown as the red line in Figure 2. Since we use convolution with stride $s = 1$ at the end of stage 4 of Resnet-50, the size of the feature map after the last stage of Resnet-50 (stage 5) is the same as that of the previous feature map rather than $\frac{1}{4}$ of its size. In order to obtain the same field of view for the kernels in stage 5 as usual, we perform dilated convolution with dilation rate $r = 2$ on the feature map in this stage. \subsubsection{Decoder} Different from the U-Net decoder architecture, which is a symmetric path of the encoder, our newly designed decoder consists of four upsampling blocks and one final convolution block. There is no connection between consecutive upsampling blocks. ResNet-50 contains five stages, which we denote as $\{R1, R2, R3, R4, R5\}$. The four upsampling blocks take the feature maps from the last four stages ($\{R2, R3, R4, R5\}$) as input. The dimension of each feature map is reduced to 48, 48, 48, and 256 respectively by performing $1\times1$ convolutions, and then each feature map is upsampled with scale factor $s = 4, 8, 16$, and $16$ respectively to the original image size by interpolation. By reducing the dimension of the feature maps to a smaller size, shown as the yellow lines in Figure 2, we can save a considerable amount of computational resources. The reason we only reduce the first three feature maps to a smaller dimension of 48 is that although these low-level feature maps contain detailed \begin{figure}[htb!] \centerline{\includegraphics[width=0.45\textwidth]{figure3.png}} \caption{Dilated convolution with kernel size $3\times 3$ and different dilation rates of 1, 2, and 3. The left one, with dilation rate $r = 1$, corresponds to the standard convolution. 
The receptive field grows as the dilation rate increases. } \label{fig} \end{figure} localization information, they do not provide accurate pixel-level classification information due to their small field of view. Therefore, the last feature map, which contains high-level classification information, is reduced to 256 dimensions, while the low-level feature maps are reduced to 48. By fusing localization information from the lower levels with abstract information from the high level, we can improve the segmentation detail. To combine information from different levels, we concatenate the four feature maps from their corresponding encoder stages into one block with the original image size and $48\times3+256 = 400$ dimensions, shown as the rounded rectangle in Figure 2. Finally, two $3\times3$ convolutions are applied in the final convolution block to merge the multi-scale feature maps, followed by a $1\times1$ convolution that generates the output segmentation map. \subsection{Dilated Convolution} Dilated convolution was first proposed by \emph{Yu et al.} \cite{YuKoltun2016} to address dense prediction problems such as semantic segmentation. The goal of dense prediction is to compute a discrete or continuous label for each pixel in the image, which is structurally different from image classification. Dilated convolution allows us to enlarge the field of view to obtain high-level multi-scale contextual information without increasing the number of parameters or the computational cost. Consider a convolution operation with kernel size $K$. The output $y[i]$ is defined as: \begin{equation} y[i]=\sum_{k=1}^{K} x[i+r \cdot k] w[k] \ ,\label{eq} \end{equation} where $x[i]$ is the value of the input signal (image or feature map) at position $i$ and $w[k]$ is the value of the kernel at position $k$. The dilation rate parameter $r$ corresponds to the stride with which we sample the input signal. When the dilation rate $r = 1$, the formula represents the standard convolution operation.
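A small, self-contained illustration of this formula (with 0-based kernel indexing; the function name and the toy signal are ours):

```python
def dilated_conv1d(x, w, r):
    """y[i] = sum_k x[i + r*k] * w[k], evaluated wherever the dilated
    kernel fits entirely inside the signal (kernel indexed from 0)."""
    K = len(w)
    span = r * (K - 1) + 1          # the kernel's effective receptive field
    return [sum(x[i + r * k] * w[k] for k in range(K))
            for i in range(len(x) - span + 1)]

x = [1, 2, 3, 4, 5, 6, 7]
w = [1, 0, -1]                      # a simple difference kernel
print(dilated_conv1d(x, w, 1))      # r = 1: standard convolution
print(dilated_conv1d(x, w, 2))      # r = 2: same 3 weights, receptive field 5
```

The second call samples the input with gaps of $r-1$ positions, so the same three weights cover five input positions without any extra parameters.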
Figure 3 depicts a simple example of dilated convolution with different dilation rates. As it shows, dilated convolution with dilation rate $r$ enlarges the field of view of a $k\times k$ kernel from $k$ to $k+(k-1)\times(r-1)$ without downsampling. And since dilated convolution simply inserts $r - 1$ zeros between two consecutive kernel values along each dimension, no extra parameters are added to the model. Therefore, weights pre-trained with the original ResNet-50 can still be used in our model. \subsection{Post Processing} In order to compare our results with object-detection-based methods, we draw the minimum bounding box for each connected component in both the output segmentation map and the ground truth mask, and we use the resulting bounding boxes to calculate recall, precision, and F1-score. In the real-life application, we show both the segmentation result and its bounding box simultaneously on the output image. In order to improve display effectiveness during colonoscopy as well as the recall, precision, and F1-score, the following post-processing steps are adopted. \subsubsection{Smooth} To obtain a polyp segmentation with a smooth border, which is usually the case in the real situation, we apply a set of morphological transformations, including several opening and closing operations with different kernel sizes. The opening operation, an erosion followed by a dilation, removes noise from the segmentation image. The closing operation, a dilation followed by an erosion, closes small holes inside the segmented objects.
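A minimal sketch of this smoothing step using SciPy's binary morphology (the kernel sizes here are illustrative only; the paper applies several passes with different kernels):

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def smooth_mask(mask, open_size=3, close_size=5):
    """Opening removes isolated noise specks; closing fills small holes."""
    out = binary_opening(mask.astype(bool),
                         structure=np.ones((open_size, open_size), bool))
    out = binary_closing(out,
                         structure=np.ones((close_size, close_size), bool))
    return out.astype(np.uint8)

# A toy mask: an 8x8 "polyp" blob with a 1-pixel hole, plus a lone speck.
m = np.zeros((12, 12), np.uint8)
m[2:10, 2:10] = 1
m[5, 5] = 0          # hole inside the blob
m[0, 0] = 1          # isolated noise pixel
s = smooth_mask(m)
```

After smoothing, the speck is removed by the opening and the hole is filled by the closing, while the blob itself is preserved.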
\begin{figure} \centering \subfigure[Two small objects on the right side of the image (tiny green bounding boxes) are removed since they are smaller than 100 pixels.]{ \includegraphics[width=0.42\textwidth]{figure6a.png} } \subfigure[Two nearby objects are merged into a large polyp.]{ \hspace{0.03in}\includegraphics[width=0.42\textwidth]{figure6b.png} } \caption{Two examples of post processing. The left-hand sides of figures (a) and (b) are the results before post processing; the right-hand sides are the results after post processing.} \label{fig} \end{figure} \subsubsection{Drop Small Objects} After the smoothing operation, we still observe a small number of tiny segmented objects. These tiny objects reduce precision and have a negative effect on the display during colonoscopy. A statistical analysis of the dataset shows that all the polyps are larger than 150 pixels after we resize the images to $384 \times 384$. Therefore, we remove objects smaller than 100 pixels from the output segmentation image. \subsubsection{Combine Nearby Objects} In the test set, a large polyp may be split into several nearby small objects. If we draw a bounding box for each of these small objects, it results in a cluttered display and a reduction in precision.
Therefore, we combine the bounding boxes of two nearby objects A and B into one object if they satisfy the following condition: \begin{figure}[htbp] \centerline{\includegraphics[width=0.38\textwidth]{figure4.png}} \caption{A sample of augmented images produced by different data augmentation methods.} \label{fig} \end{figure} \begin{equation} \left \| \left ( x_{cA}, y_{cA} \right ),\left ( x_{cB}, y_{cB} \right ) \right \|\leqslant \left ( \frac{diag(A)}{2}+\frac{diag(B)}{2} \right ) \ ,\label{eq} \end{equation} where $\left \| \left ( x_{cA}, y_{cA} \right ),\left ( x_{cB}, y_{cB} \right ) \right \|$ is the distance between the centers of the two bounding boxes and $diag(A)$ and $diag(B)$ are the lengths of the diagonals of A and B. This process is repeated until no objects satisfy the above condition. \section{Implementation} \subsection{Dataset} The proposed method is developed and evaluated on the datasets from GIANA (Gastrointestinal Image ANAlysis) 2018 \cite{bernal2017comparative} and ETIS-Larib Polyp DB \cite{silva:hal-00843459}. {\bf Train set:} The train set consists of the following 3 datasets: \begin{itemize} \item 18 short videos from CVC-ClinicVideoDB (the train set of the GIANA polyp detection challenge), which contain 10,025 images of size 384 $\times$ 288 with a pixel-level annotated polyp mask for each frame. \item CVC-ColonDB: 300 images of size 574 $\times$ 500 from the train set of the GIANA polyp segmentation challenge with a pixel-level annotated polyp mask for each image. \item 56 high-definition images from the GIANA polyp segmentation challenge with a resolution of 1920 $\times$ 1080 and a pixel-level annotated polyp mask for each image at the same resolution.
\end{itemize} {\bf Test set:} All the results are evaluated on the following 2 test sets: \begin{itemize} \item CVC-ClinicDB: 612 images of size 384 $\times$ 288 from the test set of the GIANA polyp segmentation challenge, with a pixel-level annotated polyp mask corresponding to the region covered by the polyp in each image. \item ETIS-Larib Polyp DB: 196 images of size 1225 $\times$ 966 from ETIS-Larib Polyp DB, with a pixel-level annotated polyp mask corresponding to the region covered by the polyp in each image. \end{itemize} \subsection{Data Pre-processing} The data pre-processing consists of 3 steps. First, since semantic image segmentation is essentially a pixel-level classification problem, images without any polyp introduce an extreme class imbalance during training and are therefore removed from the train set. Among all the train sets, only the eighteenth video contains non-polyp frames (77 frames at the beginning); all other images contain from one up to three polyps. Second, we remove the black borders of the images, since they contain imperceptible random noise which could lead the CNN model to learn spurious features or patterns. Removing the black borders also increases the proportion of the image area that carries valuable information. Third, we resize the input images to $384\times384\times3$ and normalize them with the mean and standard deviation of the train set. The normalization makes the model converge faster during training, since the gradients can reach the minimum along a more direct path. \subsection{Data Augmentation} Data augmentation is an important technique that has been widely used in machine learning pipelines. By introducing variations of the images, such as different orientations, locations, scales, brightness, etc., to the existing data, we can increase the robustness of our model and reduce over-fitting.
We apply basic augmentation methods such as random rotation, random horizontal and vertical flips, and random zoom. We also apply tilt (skew), shearing, and random distortion to mimic the tortuous appearance of the intima caused by the movement of the colonoscope lens and the colon. Besides, we apply random contrast and random brightness changes to simulate the different lighting environments and photography equipment that may occur during colonoscopy procedures. Figure 5 shows a sample of the augmented images after crop, rotate, tilt, shear, and distort operations. After data augmentation, 90,000 images are created in the train set. The data augmentation is implemented with the image augmentation library `Augmentor' \cite{10.1093/bioinformatics/btz259}. \subsection{Training} Our model is implemented with the PyTorch library on a single NVIDIA GeForce GTX 1080 Ti GPU. We choose ADAM as the optimizer with a weight decay of $5\times10^{-4}$. For the learning rate scheduler, we choose CosineAnnealingLR with initial learning rate $lr = 10^{-5}$ and a maximum of 80 epochs. We use the binary cross-entropy loss function to calculate the loss for each pixel of the final output segmentation map. \subsection{Evaluation Measures} \subsubsection{Dice Coefficient} The Dice coefficient is a spatial overlap index and a reproducibility validation metric used in machine learning, especially in semantic image segmentation. It measures the similarity between the predicted binary segmentation result and the ground truth mask. Its value ranges from 0, indicating no spatial overlap between the two sets of binary segmentation results, to 1, indicating complete overlap, and equals twice the number of elements common to both sets divided by the sum of the numbers of elements in the two sets.
Formally, the Dice coefficient is defined as: \begin{equation} d=\frac{2|X \cap Y|}{|X|+|Y|} \ ,\label{eq} \end{equation} where $|X|$ and $|Y|$ are the cardinalities of the two sets (the number of pixels in each binary mask image). To evaluate and analyze the performance of the CNN model from the perspective of machine learning, all the Dice coefficient results are calculated before post processing. \subsubsection{Recall, Precision, and F1-score} To give an intuitive sense of our model's performance, we also follow the evaluation metrics of the MICCAI 2015 challenge \cite{bernal2017comparative} and calculate recall, precision, and F1-score, defined as follows: \begin{equation} recall=\frac{TP}{TP+FN} \;, \ precision = \frac{TP}{TP+FP}\label{eq} \end{equation} \begin{equation} F1 = \frac{2 \times precision \times recall}{precision + recall}\ ,\label{eq} \end{equation} where TP and TN denote the numbers of true positives and true negatives, and FP and FN denote the numbers of false positives and false negatives. A detection is considered a true positive when the center of the predicted bounding box is located within the ground truth bounding box. In binary classification, recall is also referred to as sensitivity; it reflects the model's ability to return most of the true positive samples, e.g., the polyps in our application. Precision reflects the model's ability to detect substantially more true positives than false positives, e.g., more real polyps than incorrectly detected normal tissue. Both high-recall and high-precision models have their own limitations: a model with high recall may return some false positives, while a model with high precision may miss some true positives. An ideal system with high precision and high recall returns most of the true positives, with the predictions labeled correctly. In real-life applications, there is always a trade-off between recall and precision.
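These measures follow directly from the definitions above; a minimal sketch (function names are ours, and the bounding-box matching that produces TP/FP/FN counts is omitted):

```python
def dice(pred, gt):
    """Dice coefficient of two binary masks given as flat 0/1 sequences."""
    inter = sum(p * g for p, g in zip(pred, gt))
    return 2 * inter / (sum(pred) + sum(gt))

def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true-positive, false-positive and
    false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```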
Therefore, we use the F-1 score that conveys the balance between precision and recall to evaluate the performance of our model. \renewcommand{\arraystretch}{1} \begin{table}[tp] \setlength{\abovecaptionskip}{-10pt} \caption{Comparison between our model and previous methods on CVC-ClinicDB.} \centering \fontsize{8.5}{10.5}\selectfont \begin{threeparttable} \label{tab:performance_comparison} \begin{tabular}{ccccc} \toprule[2pt] \multirow{2}{*}{Methods}& \multicolumn{4}{c}{ CVC-ClinicDB}\cr \cmidrule(lr){2-5} &Dice &Precision &Recall &F1-score\cr \midrule SNU \cite{bernal2017comparative} &-&26.80&26.40&26.50\cr PLS \cite{Riegle:how} &-&28.70&76.10&41.60\cr CVC-Clinic \cite{bernakL:wm-dova} &-&83.50&83.10&83.30\cr ASU \cite{tajbakhsh2015automated} &-&97.20&85.20&90.80\cr OUS \cite{bernal2017comparative} &-&90.40&94.40&92.30\cr CUMED \cite{Chen:2016:DCN:3015812.3015985} &-&91.70&98.70&95.00\cr Faster R-CNN \footnotemark[1] \cite{Mo2018AnEA} &-&86.60&98.50&92.20\cr SegNet \cite{Wang2018} &-&-&88.24&-\cr FCN \cite{Li2017ColorectalPS} &-&89.99&77.32&83.01\cr FCN-8S \cite{akbari2018polyp} &79.30&91.80&97.10&94.38\cr Hybrid \footnotemark[2] \cite{guo2019giana} &78.25&-&-&-\cr \midrule Ours&82.48&96.71&95.51&{\bf 96.11}\cr \bottomrule[2pt] \end{tabular} \end{threeparttable} \end{table} \renewcommand{\arraystretch}{1} \begin{table}[tp] \setlength{\abovecaptionskip}{-10pt} \caption{Comparison between our model and previous methods on ETIS-Larib.} \centering \fontsize{8.5}{10.5}\selectfont \begin{threeparttable} \label{tab:performance_comparison} \begin{tabular}{ccccc} \toprule[2pt] \multirow{2}{*}{Methods}& \multicolumn{4}{c}{ ETIS-Larib Polyp DB}\cr \cmidrule(lr){2-5} &Dice &Precision &Recall &F1-score\cr \midrule SNU \cite{bernal2017comparative} &-&10.20&9.60&9.70\cr ETIS-LARIB \cite{silva:hal-00843459} &-&6.90&49.50&12.20\cr CVC-Clinic \cite{bernakL:wm-dova} &-&10.00&49.00&16.50\cr PLS \cite{Riegle:how} &-&15.80&57.20&24.90\cr UNS+UCLAN \cite{bernal2017comparative} 
&-&32.70&52.80&40.40\cr CUMED \cite{Chen:2016:DCN:3015812.3015985} &-&72.30&69.20&70.70\cr OUS \cite{bernal2017comparative} &-&69.70&63.00&66.10\cr FCN-VGG \cite{Brandao2017FullyCN} &-&73.61&86.31&79.46\cr Faster R-CNN \footnotemark[3] \cite{app9122404} &-&72.93&80.29&76.43\cr \midrule Ours& 62.54&{\bf80.48}&81.25&{\bf80.86}\cr \bottomrule[2pt] \end{tabular} \end{threeparttable} \end{table} \renewcommand{\arraystretch}{1} \begin{table*}[tp] \setlength{\abovecaptionskip}{-10pt} \centering \fontsize{8.5}{10.5}\selectfont \begin{threeparttable} \caption{Results of Ablation Experiments.} \label{tab:performance_comparison} \begin{tabular}{cccccccc} \toprule[2pt] \multirow{2}{*}{Method}&\multirow{2}{*}{Dice Coefficient}& \multicolumn{3}{c}{ with post processing}&\multicolumn{3}{c}{ without post processing}\cr \cmidrule(lr){3-5} \cmidrule(lr){6-8} &&Precision&Recall&F1-score&Precision&Recall&F1-score\cr \midrule with U-Net Decoder &80.82&93.47&95.36&94.41&95.20&95.20&95.20\cr with original Resnet-50 &79.22&93.86&94.58&94.22&95.77&94.58&95.17\cr U-Net &74.24&92.03&91.18&91.60&91.14&90.71&90.92\cr U-Net with last 4 layers &74.79&93.03&93.19&93.11&92.33&93.19&92.76\cr \midrule Ours&{\bf 82.48}&{\bf 96.71}&{\bf 95.51}&{\bf 96.11}&94.66&96.13&95.39\cr \bottomrule[2pt] \end{tabular} \end{threeparttable} \end{table*} All the results of recall, precision, and F1-score are calculated after post processing. \section{Experiments and Results} \subsection{Comparison with State-of-the-Art} Tables 1 and 2 compare our results with current state-of-the-art methods on CVC-ClinicDB and ETIS-Larib Polyp DB respectively. Dice coefficient results are only available for segmentation methods such as FCN- and SegNet-based models. The results show that our model outperforms all the previous approaches. The detailed results are described as follows. On CVC-ClinicDB, our model achieves a Dice coefficient of 82.48\%.
After post processing, we achieve a recall of 95.51\%, a precision of 96.71\%, and an F1-score of 96.11\%. On ETIS-Larib Polyp DB, our model achieves a Dice coefficient of 62.54\%. After post processing, we achieve a recall of 81.25\%, a precision of 80.48\%, and an F1-score of 80.86\%. \footnotetext[1]{With VGG-16 backbone.} \footnotetext[2]{Dilated ResFCN + SE-Unet.} \footnotetext[3]{With Inception ResNet \cite{DBLP:journals/corr/SzegedyIV16} backbone.} \begin{figure}[htbp] \centerline{\includegraphics[width=0.45\textwidth]{figure5_new.png}} \caption{Sample segmentation results of our model. Purple masks are the polyp segmentation results. Green rectangles represent the minimum bounding boxes of the predicted polyp segmentations and red rectangles represent the minimum bounding boxes of the ground truth polyp masks. Best viewed in color.} \label{fig} \end{figure} From the results in Tables 1 and 2, we find that, compared to traditional methods, DCNN-based methods achieve much better results on precision or recall. However, these DCNN-based models cannot yield outstanding results on both precision and recall at the same time. Models with high recall but low precision, such as Faster R-CNN \footnotemark[1], FCN-8S, and CUMED, miss only a small fraction of polyps during colonoscopy, but physicians need to spend more time screening the prediction results to verify whether the detected polyps are true positives. On the other hand, models with high precision but low recall, such as ASU and FCN, rarely predict normal tissue as a polyp, but many polyps are missed during colonoscopy. That means many potential CRC patients cannot be diagnosed at an early stage, which is a more severe problem than the previous one. Therefore, the F1-score, which seeks a balance between precision and recall, is a more reasonable measure to evaluate the model performance.
Our model achieves the highest F1-score on both datasets, with precision and recall both higher than 95.00\% on CVC-ClinicDB and 80.00\% on ETIS-Larib Polyp DB, which shows the powerful performance and robustness of our model. Moreover, unlike object-detection-based models, our model provides a segmentation mask of the polyps, which can reduce the doctor's workload, segmentation errors, and subjectivity. From Table 1, we can observe that our model obtains a higher Dice coefficient than the other two models whose Dice coefficients are available. In summary, our model outperforms previous methods and achieves state-of-the-art performance for both polyp detection and segmentation. \subsection{Run Time Performance} Our polyp segmentation process consists of an inference step and a post-processing step. The inference step costs about 45 ms per image and the post processing costs about 5 ms per image. Therefore, the total time to generate one polyp segmentation map is about 50 ms (20 fps). All the tests are performed on a single NVIDIA GeForce GTX 1080 Ti GPU. \subsection{Ablation Study} To verify the effectiveness of the components of our proposed model, we run a number of ablation experiments. \subsubsection{Decoder Structure} Unlike the decoder of U-Net, which integrates two feature maps of consecutive layers at each step, we concatenate the feature maps from different layers after upsampling them to the size of the original image. We implement a decoder that follows U-Net's structure and replace our decoder with it. The result is shown in Table 3. We find that with our new decoder structure, the Dice coefficient, precision, recall, and F1-score improve by 1.66\%, 3.24\%, 0.15\%, and 1.70\% respectively. Moreover, the size of our model drops from 28.995M to 25.632M parameters.
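A minimal PyTorch sketch of this concatenation-style decoder (the reduced channel counts follow the text; the class and layer names and the 256-channel width of the fusion convolutions are our assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConcatDecoder(nn.Module):
    """Reduce R2..R5 with 1x1 convs to 48, 48, 48 and 256 channels,
    upsample everything to the input size, concatenate (48*3 + 256 = 400
    channels), then fuse with two 3x3 convs and a final 1x1 conv."""
    def __init__(self, in_chs=(256, 512, 1024, 2048), out_classes=1):
        super().__init__()
        reduced = (48, 48, 48, 256)
        self.reduce = nn.ModuleList(
            nn.Conv2d(c, r, kernel_size=1) for c, r in zip(in_chs, reduced))
        self.fuse = nn.Sequential(
            nn.Conv2d(sum(reduced), 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, out_classes, kernel_size=1))

    def forward(self, feats, out_size):
        # Reduce each encoder map, then upsample all of them to out_size.
        ups = [F.interpolate(conv(f), size=out_size, mode='bilinear',
                             align_corners=False)
               for conv, f in zip(self.reduce, feats)]
        return self.fuse(torch.cat(ups, dim=1))
```

Because every reduced map is interpolated to the same size before concatenation, there is no chain of consecutive upsampling blocks, which is the structural difference from U-Net's decoder compared above.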
\subsubsection{Dilated Convolution} To verify the effectiveness of the dilated convolution, we replace our encoder backbone with the original ResNet-50 network, which only uses standard $3\times3$ kernels with dilation rate $r = 1$. With the dilated convolution, the Dice coefficient, precision, recall, and F1-score improve by 3.26\%, 2.85\%, 0.93\%, and 1.89\% respectively. Detailed results are shown in Table 3. \subsubsection{Encoder Backbone} U-Net has proven to be a powerful tool for biomedical image segmentation. In order to improve the feature extraction ability of the encoder, we replace U-Net's encoder, which consists of repeated blocks of two $3\times3$ convolutions and a $2\times2$ max pooling operation, with the ResNet-50 network. To verify the effectiveness of this more complicated backbone, we implement a U-Net to test its performance. Since ResNet-50 consists of 5 stages, i.e., 5 downsampling operations, we add another block of two $3\times3$ convolutions and a $2\times2$ max pooling operation after the original U-Net (which has 4 stages). The Dice coefficient, precision, recall, and F1-score drop by 7.69\%, 4.68\%, 4.33\%, and 4.51\% respectively. Moreover, we try combining the feature maps from different layers and find that the model generates the best results by combining the last 4 feature maps. This shows that the feature map from the lowest layer (with stride $s=2$), which has the smallest field of view, is not beneficial for polyp segmentation, probably because there are no small polyps in the train set: statistics show that all the polyps are larger than 150 pixels after we resize the images to $384\times384$. This result confirms that our decoder architecture, which does not use the feature map of stage 1 (R1, stride $s = 2$), is beneficial for polyp segmentation. It also shows that the ResNet model (which applies a large $7\times7$ kernel in stage one) can enlarge the field of view and improve polyp segmentation performance.
The results of U-Net and U-Net with the last 4 layers are shown in Table 3. \subsubsection{Post Processing} To improve precision and recall, we apply three post-processing methods to the output segmentation map. In Table 3, we present the results with and without post processing in the left and right columns, respectively. We can observe that post processing clearly improves precision, recall, and F1-score. \section{Conclusion} In this paper, we proposed a novel convolutional neural network for colorectal polyp segmentation. The network consists of an encoder that extracts multi-scale semantic features and a decoder that expands the feature maps into a polyp segmentation map. For the encoder, we redesign the backbone structure of common encoders, originally optimized for image classification, by introducing dilated convolution to improve segmentation performance. For the decoder, we combine multi-scale features to improve segmentation performance with fewer parameters. Comparison with existing methods shows that our model achieves state-of-the-art performance for polyp segmentation with high recall, precision, and F1-score. \linespread{0.96}\bibliographystyle{IEEEtran}
\section{Introduction} Let $\vv(n)$ denote the number of \emph{compositions} of a positive integer $n$ into powers of $2$ (compositions are sometimes called \emph{ordered partitions}): this is the number of finite sequences $(q_{1},q_{2},\ldots,q_{\ell})$ of non-negative integers such that $n=2^{q_{1}}+2^{q_{2}}+\cdots+2^{q_{\ell}}$. Thus, for example, $ 3=1+1+1=2+1=1+2$ gives all possible compositions, hence $\vv(3)=3$. This sequence appears in Sloane's encyclopedia \cite{sloane} as A023359, where several of its properties are also listed. Let us call this function \emph{the binary composition function}. It is easy to see (with the help of the calculus of residues) that \begin{eqnarray*} \vv(n)\sim \frac{c}{\rho^{n+1}}, \end{eqnarray*} where $\rho$ is the unique zero of $f(x)=1-\sum_{k=0}^{\infty}x^{2^{k}}$ in the interval $(0,1)$, and $c=-\frac{1}{f'(\rho)}$. Nevertheless, in this note we are mainly concerned with $2-$adic rather than real asymptotics.\\ \emph{The binary partition function} $b(n)$ (which counts the partitions of $n$ into non-negative powers of $2$, neglecting the order of the summands) has been investigated by many authors, beginning with L. Euler (1750), and in the 20th century by A. Tanturi (1918) and K. Mahler (1940) (who explored its asymptotic behavior). See the sequences A018819 and A000123 in Sloane's Encyclopedia \cite{sloane}; numerous references can be found there. Congruence properties of $b(n)$ modulo powers of $2$ were first observed by R. F. Churchhouse \cite{church} (the main congruence was given without a proof, as a conjecture). This conjecture was later proved by H. Gupta \cite{gupta} and independently by {\O}. R{\o}dseth \cite{rodseth}. This result can also be found in Andrews' monograph \cite{andrews}. The paper by the author \cite{alkauskas1} gives another proof of this fact, along with one possible generalization of this congruence.
As an aside \cite{alkauskas1}, for every positive integer $s$ which is not divisible by $8$ there exists a finite algorithm to verify the fact that infinitely many terms of the sequence $b(n)$ are divisible by $s$. Calculations confirmed this for $2\leq s\leq 14$, $s\neq 8$ (as was noticed by Churchhouse himself, $b(n)$ is never divisible by $8$). Moreover, for every power of $2$ there exists a finite table which lists all the possible remainders of $b(n)$ modulo this power. For example, modulo $32$ one of the entries is \cite{alkauskas1} \begin{eqnarray*} b(4n+2)\equiv 2+4w(n)+8w\big{(}\lfloor n/2\rfloor\big{)}+16\tau(n)\text{ (mod }32). \end{eqnarray*} Here $\lfloor\star\rfloor$ stands for the ``floor'' function, $w(n)$ represents the Thue-Morse sequence with initial conditions $w(0)=0$, $w(1)=1$, and $\tau(n)$ stands for the Rudin-Shapiro sequence with conditions $\tau(0)=0$ and $\tau(3)=1$.\\ We will now formulate the R{\o}dseth-Gupta theorem. \begin{thmm} If $n$ is an odd positive integer, then for any integer $s\geq 1$ we have \begin{eqnarray*} b(2^{s+2}n)\equiv b(2^{s}n)\text{ (mod }2^{\mu(s)}),\quad \text{where }\mu(s)=\Big{\lfloor}\frac{3s+4}{2}\Big{\rfloor}; \end{eqnarray*} moreover, this congruence is exact. \end{thmm} On the other hand, the binary composition function has not yet been investigated arithmetically. The only papers (apart from The On-Line Encyclopedia of Integer Sequences) where this sequence appears are the papers by the author \cite{alkauskas2} and by Chinn and Niederhausen \cite{chinn}. The authors of the latter are concerned with finding an exact formula for the number of binary compositions of $n$ into exactly $n-k$ parts for small $k$.\\ \indent If we consider compositions of a positive integer $n$ with no restriction on the positive summands, then their number is equal to $2^{n-1}$. On the other hand, compositions with summands coming from a certain set reveal new congruence phenomena.
For example, one of our main results is the following surprising fact. Let us denote by $s_{2}(n)$ the number of $1$'s in the binary expansion of $n$. This is the sequence A000120. \begin{thm}Suppose that $n\geq 1$, $N\geq 1$, and $s_{2}(n+2^{N-1})\geq 2^{N}$. Then \begin{eqnarray*} \vv(n)\equiv0\text{ (mod }2^{N}). \end{eqnarray*} \label{teom} \end{thm} \indent Let us say that a property $\mathcal{A}$ is satisfied for \emph{almost all natural numbers} if \begin{eqnarray*} \lim\limits_{M\rightarrow\infty}\frac{\#\{n\leq M: n\text{ satisfies property }\mathcal{A}\}}{M}=1. \end{eqnarray*} \begin{cor} Let $N\in\mathbb{N}$. Then for almost all natural numbers the congruence $\vv(n)\equiv0\text{ (mod }2^{N})$ is satisfied. \end{cor} \begin{proof} This is clear: for a fixed $M\in\mathbb{N}$, almost all natural numbers have more than $M$ $1$'s in their binary expansion. \end{proof} \section{Congruence properties} We will now derive some basic facts about $\vv(n)$. Let us adopt the conventions $\vv(0)=1$ and $\vv(-n)=0$ for $n\in\mathbb{N}$. The binary compositions of $n$ can be divided into disjoint subsets, each consisting of the compositions with first summand equal to $2^k$, $1\leq 2^{k}\leq n$. This gives the recurrence relation, which also appears in \cite{alkauskas2,chinn,sloane}: \begin{eqnarray} \vv(n)=\sum_{k\geq0}\vv(n-2^k).\label{rec} \end{eqnarray} Hence, the generating function is given by \begin{eqnarray*} \sum_{n=0}^{\infty}\vv(n)x^{n}=(1-\sum_{k=0}^{\infty}x^{2^{k}})^{-1}. \end{eqnarray*} From the recurrence relation we can already determine the parity of $\vv(n)$. This is the only property which admits an easy proof directly from (\ref{rec}). \begin{prop} The number $\vv(n)$ is odd if and only if $n=2^{u}-1$, $u\geq0$.\label{elem} \end{prop} \begin{proof} Suppose we have already proved this statement for all positive integers $\leq n-1$.
From the recurrence relation and the inductive hypothesis it follows that the parity of $\vv(n)$ equals the parity of the number of odd terms among $\vv(n-2^{u})$, $u\geq 0$. Such a term is odd iff (according to the induction hypothesis) $n-2^{u}=2^{v}-1$; that is, iff $n+1=2^{u}+2^{v}$. Hence, if $s_{2}(n+1)>2$, this cannot occur. If $s_{2}(n+1)=2$, so that $n+1=2^{u}+2^{v}$, $u\neq v$, we have exactly two odd summands, $\vv(n-2^{u})$ and $\vv(n-2^{v})$, and therefore the sum is even. Finally, we have exactly one odd summand iff $n=2^{u}-1$, and this summand is $\vv(n-2^{u-1})=\vv(2^{u-1}-1)$. We finish by induction. \end{proof} \indent The congruence properties of the binary composition function modulo higher powers of $2$ were observed by the author \cite{alkauskas2}. One of these congruences claims that \begin{eqnarray} \vv(2^{k})\equiv 8\text{ (mod }16)\text{ for }k\geq 3.\label{ast} \end{eqnarray} Unfortunately, despite many efforts to manipulate (\ref{rec}), this and similar claims were not proved in \cite{alkauskas2} but rather extrapolated from numerical data. This failure suggests that the recurrence (\ref{rec}) alone is insufficient for proving these congruences. Luckily, one can derive other recurrence relations which are much more convenient and powerful. \begin{prop} For $n\geq1$, we have \begin{eqnarray} \vv(2n)&=&\vv^{2}(n)+\sum\limits_{{a+b=2n-2^{s}}\atop{a,b<n,s\geq 1}}\vv(a)\cdot\vv(b).\label{dvig} \end{eqnarray} In general, for $m,n\geq 1$, the following equality holds: \begin{eqnarray} \vv(m+n)&=&\vv(m)\cdot\vv(n)+\sum\limits_{{a+b=m+n-2^{s}}\atop{a<m,b<n,s\geq 1}}\vv(a)\cdot\vv(b).\label{sum} \end{eqnarray} Thus, the case $m=n$ gives (\ref{dvig}), and the case $m=1$ reduces exactly to (\ref{rec}). \end{prop} \begin{proof}As a matter of fact, this identity is valid for any function which counts compositions of $n$ into positive integers $1=a_{1}<a_{2}<a_{3}<\cdots$, provided $2^{s}$ is replaced with $a_{s}$ in the formula.
To prove the identity, consider any composition of $m+n$: \begin{eqnarray*} m+n=2^{q_{1}}+2^{q_{2}}+\cdots+2^{q_{\ell}}. \end{eqnarray*} Let $s$ be the largest non-negative integer such that $\sum_{i=1}^{s}2^{q_{i}}\leq m$ (if $s=0$, the empty sum is $0$ by convention). The number of compositions of $m+n$ for which $\sum_{i=1}^{s}2^{q_{i}}=m$ for some $s$ is obviously equal to $\vv(m)\cdot\vv(n)$. If $a=\sum_{i=1}^{s}2^{q_{i}}<m$, then $b=\sum_{i=s+2}^{\ell}2^{q_{i}}<n$, and $m+n-a-b=2^{q_{s+1}}$. Thus, for fixed $a<m$ and $b<n$, the number of such compositions is equal to $\vv(a)\cdot\vv(b)$. This proves formula (\ref{sum}). \end{proof} Now we are able to derive the following. \begin{prop} The sequence $\vv(n)$ can be completely described modulo $4$.\\ (i) For $n\geq 3$, let $\tau_{3}(n)$ denote the number of solutions of $n+1=2^{s}+2^{v}+2^{u}$ with $s\geq v>u\geq 0$, let $\tau_{2}(n)$ denote the number of solutions of $n+1=2^{s}+2^{u}$, and let $\tau_{1}(n)$ denote the number of solutions of $n+1=2^{s}$. Then \begin{eqnarray*} \vv(2n)\equiv2\tau_{3}(n)+\tau_{2}(n)+\tau_{1}(n)\text{ (mod }4). \end{eqnarray*} (ii) Similarly, $\vv(2^{k}-1)\equiv3\text{ (mod }4)$ for $k\geq 2$, and $\vv(2^{k}+2^{l}-1)\equiv2\text{ (mod }4)$ for $k>l\geq1$. In all other cases, $\vv(2n-1)\equiv0\text{ (mod }4)$ (the first occurrence is $n=7$). \label{proposition3} \end{prop} \begin{proof}{\it (i) }Note that (\ref{dvig}) can be rewritten as \begin{eqnarray} \vv(2n)&=&\vv^{2}(n)+2\sum\limits_{{a+b=2n-2^{s}}\atop{0\leq a<b<n,s\geq 1}}\vv(a)\cdot\vv(b)+\sum\limits_{s\geq 1}\vv^{2}(n-2^{s-1}).\label{even} \end{eqnarray} Consider this equality modulo $4$. Obviously, $\vv^{2}(n)\equiv \tau_{1}(n)\text{ (mod }4)$, as is implied by Proposition \ref{elem}. In the second sum, only the terms with $a=2^{u}-1$ and $b=2^{v}-1$, $u,v\geq 0$, contribute to the final result. In this case $2n=2^{s}+2^{u}-1+2^{v}-1$. Thus, suppose $n$ is of this form.
Since $a<b$, we have $u<v$, $u\geq 1$, and also, since $2^{v}-1<n$, it is easy to see that $s\geq v$. The number of such solutions is thus $\tau_{3}(n)$. Finally, the third sum contributes exactly $\tau_{2}(n)$.\\ \noindent{\it (ii) }Equally, (\ref{sum}) for $m=n-1$ reads as \begin{eqnarray} \vv(2n-1)&=&\vv(n-1)\cdot\vv(n)+\sum\limits_{{a+b=2n-1-2^{s}}\atop{a<n-1,b<n,s\geq 1}}\vv(a)\cdot\vv(b)\nonumber\\ &=&\vv(n-1)\cdot\vv(n)+\sum\limits_{s\geq 1}\vv(n-2^{s})\cdot\vv(n-1)+2\sum\limits_{{a+b=2n-1-2^{s}}\atop{a<b<n-1,s\geq 1}}\vv(a)\cdot\vv(b)\nonumber\\ &=&2\vv(n-1)\cdot\vv(n)-\vv^2(n-1)+2\sum\limits_{{a+b=2n-1-2^{s}}\atop{a<b<n-1,s\geq 1}}\vv(a)\cdot\vv(b).\label{odd} \end{eqnarray} Here we used (\ref{rec}). In the sum, we have zero contribution to $\vv(2n-1)$ modulo $4$ unless $a=2^{u}-1$, $b=2^{v}-1$, and $2^{u}-1+2^{v}-1=2n-1-2^{s}$. Thus, $u=0$ and $n=2^{s-1}+2^{v-1}$. Since $b=2^{v}-1<n-1$, this implies $s>v$. Thus, there exists at most one such solution, and this happens exactly when $s_{2}(n)=2$. If $n=2^{k}$, $k\geq 1$, we get that $\vv(2^{k+1}-1)\equiv-\vv^{2}(2^{k}-1)\equiv 3\text{ (mod }4)$. \end{proof} The following table summarizes the results of Proposition \ref{proposition3}.\\ \begin{center} \begin{tabular}{|r | r|r|} \hline \multicolumn{3}{|c|}{\textbf{1. Sequence $\vv(n)$ modulo $4$, $n\geq 2$}}\\ \hline $n$ & $\vv(n)\text{ (mod }4)$ & $\text{Condition}$\\ \hline $2^{k}+2^{l}+2^{m}-2$ & $2$& $k>l>m\geq 1$ \\ $2^{k}-2$ & $2$& $k\geq 2$\\ $3\cdot2^{k}-2$ & $2$& $k\geq 1$\\ $\text{Other even numbers}$ & $0$& $ $\\ $2^{k}-1$ & $3$ & $k\geq 2$\\ $2^{k}+2^{l}-1$ & $2$ & $k>l\geq 1$\\ $\text{Other odd numbers}$ & $0$ & $ $\\ \hline \end{tabular}\\ \end{center} A priori, this table should list all even numbers $n$ such that $s_{2}(n+2)\leq 3$.
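The entries of Table 1 can be checked numerically against the basic recurrence (\ref{rec}) (taken here in the form $\vv(n)=\sum_{s\geq 0,\,2^{s}\leq n}\vv(n-2^{s})$, $\vv(0)=1$, as follows from the case $m=1$ of (\ref{sum})). A minimal sketch; the helper names are ours, not from the text:

```python
# Check Table 1 (v(n) mod 4): v(n) counts compositions of n into powers of 2
# and satisfies v(n) = sum over s >= 0 with 2^s <= n of v(n - 2^s), v(0) = 1.

def binary_compositions(limit):
    """Return the list [v(0), v(1), ..., v(limit)] by the basic recurrence."""
    v = [1]  # v(0) = 1: the empty composition
    for n in range(1, limit + 1):
        s, total = 1, 0
        while s <= n:
            total += v[n - s]
            s <<= 1
        v.append(total)
    return v

def table1_residue(n):
    """Predicted value of v(n) mod 4 for n >= 2, transcribed from Table 1."""
    if n % 2 == 1:                    # odd rows: classify n + 1
        bits = bin(n + 1).count("1")
        return 3 if bits == 1 else (2 if bits == 2 else 0)
    m = n + 2                         # even rows: classify n + 2
    bits = bin(m).count("1")
    if bits in (1, 3):                # n = 2^k - 2 or 2^k + 2^l + 2^m - 2
        return 2
    if bits == 2 and (m & (m >> 1)):  # adjacent bits: n = 3 * 2^k - 2
        return 2
    return 0

v = binary_compositions(300)
assert all(v[n] % 4 == table1_residue(n) for n in range(2, 301))
```

The bit-counting classification simply restates the table's conditions: for even $n$ the rows are governed by $s_{2}(n+2)$, for odd $n$ by $s_{2}(n+1)$.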
However, two types of these numbers, namely $\{2^{k},k\geq 3\}$ and $\{2^{k}+2^{l}-2,k>l+1\geq 3\}$, fall under the qualification ``other even numbers", while the type $\{n=2^{k}+2^{l},k>l\geq 2\}$ is a special case of the first type listed in the table.\\ Let us now inspect the recurrences (\ref{even}) and (\ref{odd}) more carefully. We will use the following well-known implication which, as a matter of fact, makes the investigation of quadratic forms over the $2-$adic number field rather exceptional in $p-$adic analysis. Let $U,V\in\mathbb{Z}$, $N\geq 1$. Then \begin{eqnarray*} U\equiv V\text{ (mod }2^{N})\Rightarrow U^{2}\equiv V^{2}\text{ (mod }2^{N+1}). \end{eqnarray*} Suppose we know the sequence $\vv(n)$ modulo $2^{N}$. In this case the recurrences (\ref{even}), (\ref{odd}) and the above fact show that the sequence $\vv(n)$ is completely describable modulo $2^{N+1}$ as well. Further, note that Table 1 lists only those even and odd numbers $n$ such that $s_{2}(n+2)\leq3$. The recurrence (\ref{even}) then shows that the corresponding table for $\vv(n)$ modulo $8$ will list only those even numbers $n$ such that $s_{2}(n+4)\leq 7$. Exactly the same conclusion follows for odd $n$. Here is one tricky point. Consider an odd number $2n-1$ and the multiplier $\vv(n)$ of the term $2\vv(n-1)\vv(n)$ in (\ref{odd}). This multiplier matters if $s_{2}(n+2)\leq 3$. This shows that odd numbers $2n-1$ such that $s_{2}((2n-1)+5)\leq 3$ should also be considered as candidates to be listed in the table for $\vv(n)$ modulo $8$. Indeed, it can happen that $s_{2}((2n-1)+5)\leq 3$ and $s_{2}((2n-1)+4)\geq 8$ are satisfied simultaneously. But then elementary considerations show that $s_{2}(n+1)\geq 7$. Thus, $\vv(n-1)\equiv 0\text{ (mod }4)$ and, due to this multiplier, the term $2\vv(n-1)\vv(n)$ does not contribute to $\vv(2n-1)\text{ (mod }8)$. We can proceed by induction on $N$.
Therefore, a careful analysis of (\ref{even}) and (\ref{odd}) implies the following \begin{thm} For each positive integer $N$ there exists a finite table (analogous to Table 1) which lists a finite number of possibilities for $\vv(n)$ modulo $2^{N}$. The table encompasses only a finite number of classes of those positive integers $n$ such that \begin{eqnarray} s_{2}(n+2^{N-1})<2^{N}. \label{bound} \end{eqnarray} If the entry $n=2^{k_{1}}+2^{k_{2}}+\cdots+2^{k_{\ell}}-2^{N-1}$ is in this table, the corresponding residue depends solely on $\ell$ and the exact shape of the collection of inequalities or equalities (the number of these collections is also finite) satisfied by $k_{1},k_{2},\ldots,k_{\ell}$. These inequalities or equalities are of the form $k_{i}=k_{i+1}+d_{i}$, or $k_{i}\geq k_{i+1}+d_{i}$, for a fixed collection of $d_{i}\in\mathbb{N}$. For those positive integers $n$ which are not in this table, $\vv(n)\equiv0\text{ (mod }2^{N})$. \label{thm3} \end{thm} This result has numerous corollaries. One immediate corollary is Theorem \ref{teom}. Also, Theorem \ref{thm3} shows that to prove the congruence (\ref{ast}), or even to improve it to \begin{eqnarray*} \vv(2^{k})\equiv 8\text{ (mod }32)\text{ for }k\geq 8 \end{eqnarray*} (which does hold), one needs to perform only a finite number of calculations: all that is required is to check that the congruence holds for $k$ up to a given bound, to be assured that it holds throughout. Indeed, according to Theorem \ref{thm3}, the two numbers $2^{k+1}$ and $2^{k}$ will eventually qualify for the same entry in the table for $k$ large enough (or will both be left out of the table). For the very same reason, this allows us to make the following claim. Let $a\in\mathbb{Z}$. Then there exists \begin{eqnarray*} \lim\limits_{k\rightarrow\infty}\vv(2^{k}+a)=\Theta(a)\in\mathbb{Z}_{2}; \end{eqnarray*} here $\mathbb{Z}_{2}$ stands for the ring of $2-$adic integers, and the limit is taken in the $2-$adic topology.
A generalization of this is the following \begin{cor} Let $P(x)=\sum_{i=0}^{d}a_{i}x^{i}$ be a polynomial with non-negative integral coefficients. Then for every integer $N\in\mathbb{N}$ there exists $k_{0}\in\mathbb{N}$ such that \begin{eqnarray*} \vv\Big{(}P(2^{k+1})\Big{)}\equiv \vv\Big{(}P(2^{k})\Big{)}\text{ (mod }2^{N})\text{ for }k\geq k_{0}. \end{eqnarray*} \end{cor} According to Theorem \ref{teom}, if a polynomial $P(x)=\sum_{i=0}^{d}a_{i}x^{i}$, $a_{d}\geq 1$, has at least one negative coefficient $a_{i}<0$ with $i\geq 1$, then $\vv(P(2^{k}))\rightarrow0$ in the $2-$adic topology.\\ We finish by providing the table for $\vv(n)$ modulo $8$. \begin{center} \begin{tabular}{|r | r|r|} \hline \multicolumn{3}{|c|}{\textbf{2. Sequence $\vv(n)$ modulo $8$, $n\geq 7$}}\\ \hline $n$ & $\vv(n)\text{ (mod }8)$ & $\text{Condition}$\\ \hline $2^{k}+2^{l}+2^{m}-2$ & $6$& $k>l>m\geq 1$ \\ $2^{k}-2$ & $2$& $k\geq 2$\\ $3\cdot2^{k}-2$ & $6$& $k\geq 1$\\ $\text{Other even numbers}$ & $0$& $ $\\ $2^{k}-1$ & $7$ & $k\geq 3$\\ $2^{k}+1$ & $6$ & $k\geq 4$\\ $2^{k}+2^{l}-1$ & $2$ & $k>l\geq 2$\\ $2^{k}-3$ & $4$ & $k\geq 4$\\ $3\cdot2^{k}-3$ & $4$ & $k\geq 3$\\ $2^{k}+2^{l}+2^{m}-3$ & $4$ & $k>l>m\geq 2$\\ $\text{Other odd numbers}$ & $0$ & $ $\\ \hline \end{tabular}\\ \end{center} Thus, for example, $4|\vv(2n)\Rightarrow8|\vv(2n)$. The complete table for $2^{N}=16$ has the following entries, which are not members of larger classes: \begin{eqnarray*} \vv(7\cdot2^{k}-2)\equiv 14 \text{ (mod }16)\text{ for }k\geq 1, \quad\vv(5\cdot2^{k}-2)\equiv 8 \text{ (mod }16)\text{ for }k\geq 3. \end{eqnarray*} One could wonder, for example, whether these have combinatorial proofs.\\ \noindent Concerning the condition (\ref{bound}), it is certainly not sharp for $N>2$, as the above table for $2^{N}=8$ suggests. Employing this table, we can show that in fact $2^{N}$ on the right hand side can be replaced by $2^{N-1}+2^{N-3}-1$ for $N\geq 4$. This might also be far from optimal for larger $N$.
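The finite verifications discussed above are easy to carry out in exact integer arithmetic; the following sketch spot-checks the stated congruences modulo $16$ and $32$ directly from the basic recurrence (the checked ranges of $k$ are illustrative, limited only by the chosen bound):

```python
# Spot-check the congruences for v(n) modulo 16 and 32 stated in the text,
# using the basic recurrence v(n) = sum_{s >= 0, 2^s <= n} v(n - 2^s), v(0) = 1.

def binary_compositions(limit):
    """Exact values [v(0), ..., v(limit)]; Python integers are unbounded."""
    v = [1]
    for n in range(1, limit + 1):
        s, total = 1, 0
        while s <= n:
            total += v[n - s]
            s <<= 1
        v.append(total)
    return v

v = binary_compositions(1024)

# v(2^k) = 8 (mod 16) for k >= 3, and v(2^k) = 8 (mod 32) for k >= 8.
assert all(v[2 ** k] % 16 == 8 for k in range(3, 11))
assert all(v[2 ** k] % 32 == 8 for k in range(8, 11))

# The two extra entries of the table for 2^N = 16.
assert all(v[7 * 2 ** k - 2] % 16 == 14 for k in range(1, 8))
assert all(v[5 * 2 ** k - 2] % 16 == 8 for k in range(3, 8))
```

Such a check is, of course, only the computational half of the argument; the point of Theorem \ref{thm3} is that a finite check of this kind suffices.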
1,314,259,995,439
arxiv
\section{Introduction} Studies on the emergence of collective and synchronized dynamics in large ensembles of coupled units have been carried out since the beginning of the nineties in different contexts and in a variety of fields, ranging from biology, ecology, and semiconductor lasers, to electronic circuits \cite{pik,k2,zanette}. Collective synchronized dynamics has multiple applications in technology, and provides a common framework to investigate the crucial features in the emergence of critical phenomena in natural systems. For instance, it is relevant for fully understanding some diseases that appear as the result of a sudden and undesirable synchronization of a large number of neuronal units \cite{lglass}. Recently, synchronization phenomena have also proved to be helpful outside the traditional fields where they apply, for instance, in sociology, where they can be used to study the mechanisms leading to the formation of social collective behaviors \cite{vito1,vito2}. Among the many models that have been proposed to address synchronization phenomena, one of the most successful attempts to understand them is due to Kuramoto \cite{KuramotoLNP75,KuramotoBook84}, who capitalized on previous works by Winfree \cite{WinfreeJTB67} and proposed a model system of nearly identical, weakly coupled limit-cycle oscillators. The mean-field case of the Kuramoto model (KM), corresponding to uniform, all-to-all, sinusoidal coupling, is described by the equations of motion \begin{equation} \dot{\theta}_{i}=\omega_{i}+\frac{K}{N}\sum_{j=1}^{N} \sin{(\theta_{j}-\theta_{i})} \hspace{0.5cm} (i=1,...,N)\;, \label{eq:kuramodel} \end{equation} where the factor $1 / N$ is incorporated in order to ensure good behavior of the model in the thermodynamic limit, $N\rightarrow\infty$, $\omega_i$ stands for the natural frequencies of the oscillators, and $K$ is the coupling constant.
Moreover, the coherence of the population of $N$ oscillators is measured by the complex order parameter, \begin{equation} r(t)\exp{({\mbox i}\phi(t))}=\frac{1}{N}\sum_{j=1}^{N}\exp{({\mbox i}\theta_{j}(t))}\;, \label{eq:kuraorderparam} \end{equation} where the modulus $0\le r(t) \le 1$ measures the phase coherence of the population and $\phi(t)$ is the average phase. In what follows, we will focus on the synchronization of coupled oscillators described by the dynamics Eq.\ (\ref{eq:kuramodel}), because of its validity as an approximation for a large number of nonlinear equations and its ubiquity in the nonlinear literature \cite{conradrev}. The KM approach to synchronization was a great breakthrough for the understanding of the emergence of synchronization in large populations of oscillators; in particular, it exhibits a second-order phase transition from incoherence to synchronization in the order parameter Eq.\ (\ref{eq:kuraorderparam}) at a critical value of the coupling constant. However, a large number of real systems do not show the homogeneous pattern of interconnections among their parts \cite{newmanrev,physrep} for which the original KM assumptions apply. Many real natural \cite{JeongNat01,SolePRSLB01}, social \cite{NewmanPNAS01} and technological \cite{FaloutsosCCR99,vespignanibook,WangPRE06} systems are organized as networks of nodes with connectivity patterns that diverge considerably from homogeneity, and are usually characterized by a scale-free degree distribution, $P(k)\sim k^{-\gamma}$ (the degree $k$ is the number of connections of a node). The study of processes taking place on top of scale-free networks has led to the reconsideration of classical results obtained for regular lattices or random graphs, due to the radical changes of the system's dynamics when the heterogeneity of the connectivity patterns cannot be neglected \cite{cnsw00,ceah00,ceah01,pv00,pv01,mpv02}.
In this case one has to deal with two sources of complexity, the nonlinear character of the dynamics and the complex structure of the substrate, which are usually entangled. A contemporary effort to attack this entangled problem was due to Watts and Strogatz who, in 1998, trying to understand the synchronization of cricket chirps, which show a high degree of coordination over long distances as though the insects were ``invisibly'' connected, ended up with a seminal paper \cite{WattsNat98} about the small-world connectivity property. This work was the seed of the modern theory of complex networks \cite{newmanrev,physrep}. Nevertheless, the understanding of the synchronization dynamics in complex networks still remains a challenge. In recent years, scientists have addressed the problem of synchronization on complex networks capitalizing on the Master Stability Function (MSF) formalism \cite{PecoraPRL98}, which allows one to study the stability of the {\em fully synchronized state} \cite{BarahonaPRL02,NishikawaPRL03,HongPRE04,ChavezPRL05,MotterPRE05,LeePRE05,DonettiPRL05,ZhouPRL06}. The MSF is the result of a linear stability analysis for a completely synchronized system. While the MSF approach is useful to get a first insight into what is going on in the system as far as the stability of the synchronized state is concerned, it tells nothing about how synchronization is attained and whether or not the system under study exhibits a transition similar to that of the original KM. To this end, one must rely on numerical calculations and explore the {\em entire phase diagram}. Surprisingly, only a few works have dealt with the study of the whole synchronization dynamics in specific scenarios \cite{YamirEPL04,OhPRE05,McgrawPRE05,ArenasPRL06, ArenasPhysD,usijbc} as compared with those where the MSF is used, given that the onset of synchronization is richer in its behavioral repertoire than the state of complete synchronization.
In a previous work \cite{prljya}, we have shown how, for fixed coupling strengths, local patterns of synchronization emerge differently in homogeneous and heterogeneous complex networks, driving the process towards a certain degree of global synchronization along different paths. In this paper, we extend the previous work to different topologies, including those with modular structure, and report more results supporting the previous claim. First, we extend the analysis carried out in \cite{prljya} to networks in which the degree of heterogeneity can be tuned between the two limits of random scale-free networks and random graphs with a Poisson degree distribution. Second, in order to get further insights into the role of the structural properties on the route towards complete synchronization, we study the same dynamics on top of networks with a non-random structure at the mesoscopic level, i.e., networks with communities. The results support the usefulness of the tools developed and highlight the relevance of synchronization phenomena for studying in detail the relationship between structure and function in complex networks. \section{KM model on complex networks} \label{themodel} Let us now focus on the paradigmatic Kuramoto model. In order to deal with the KM on top of complex topologies we reformulate eq. (\ref{eq:kuramodel}) in the form \begin{equation} \frac{d\theta_i}{dt}=\omega_i + \sum_{j} \Lambda_{ij}A_{ij}\sin(\theta_j-\theta_i) \hspace{0.5cm} (i=1,...,N)\;, \label{ks} \end{equation} where $\Lambda_{ij}$ is the coupling strength between pairs of connected oscillators and $A_{ij}$ is the connectivity matrix ($A_{ij}=1$ if $i$ is linked to $j$ and $0$ otherwise). The original Kuramoto model introduced above assumed mean-field interactions, so that $A_{ij}=1, \forall i\neq j$ (all-to-all) and $\Lambda_{ij}=K/N, \forall i,j$. The first problem when dealing with the KM in complex networks is the definition of the dynamics.
In the seminal paper by Kuramoto \cite{KuramotoLNP75}, eq. (\ref{eq:kuramodel}), the coupling term on the right hand side of eq. (\ref{ks}) is an intensive quantity. The dependence on the number of oscillators $N$ is avoided by choosing $\Lambda_{ij}=K/N$. This prescription turns out to be essential for the analysis of the system in the thermodynamic limit $N\rightarrow \infty$. However, with the choice $\Lambda_{ij}=K/N$ the dynamics of the KM in a complex network becomes dependent on $N$. Therefore, in the thermodynamic limit, the coupling term tends to zero except for those nodes with a degree that scales with $N$ \cite{note1}. A second prescription consists in taking $\Lambda_{ij}=K/k_i$ (where $k_i$ is the degree of node $i$), so that $\Lambda_{ij}$ is a weighted interaction factor that also makes the right hand side of Eq. (\ref{ks}) intensive. This form has been used to resolve the so-called {\em paradox of heterogeneity}, which states that the heterogeneity in the degree distribution, which often reduces the average distance between nodes, may suppress synchronization in networks of oscillators coupled symmetrically with uniform coupling strength \cite{MotterPRE05}. One should consider this result carefully because it refers to the stability of the {\em fully synchronized state} (see below), not to the {\em whole evolution} of synchronization in the network. More importantly, the inclusion of weights in the interaction strongly affects the original KM dynamics in complex networks because it imposes a dynamic homogeneity that could mask the real topological heterogeneity of the network. Finally, the prescription $\Lambda_{ij}=K$ \cite{YamirEPL04,IchinomiyaPRE04,RestrepoPRE05}, which may seem more appropriate, also presents some conceptual problems because the sum on the right hand side of eq. (\ref{ks}) could eventually diverge in the thermodynamic limit if synchronization is achieved.
To our understanding, the most accurate interpretation of the KM dynamics in complex networks should preserve the essential fact of treating the heterogeneity of the network independently of the interaction dynamics, and at the same time, should remain calculable in the thermodynamic limit. Taking these factors into account, the interaction $\Lambda_{ij}$ in complex networks should be inversely proportional to the largest degree of the system, $\Lambda_{ij}=K/k_{max}=\lambda$, thus keeping the original formulation of the KM valid in the thermodynamic limit (in SF networks $k_{max}\sim N^{1/(\gamma-1)}$). In addition, the same order parameter, eq. (\ref{eq:kuraorderparam}), can be used to describe the coherence of the synchronized state. Since $k_{max}$ is constant for a given network, the physical meaning of this prescription is a re-scaling of the time units involved in the dynamics. Note, however, that for a proper comparison of the synchronizability of different complex networks, the global and local measures of coherence should be represented according to their respective time scales. Therefore, given two complex networks A and B with $k_{max}=k_A$ and $k_{max}=k_B$ respectively, the comparison between observables must be done for the same effective coupling $K_A/k_A=K_B/k_B=\lambda$. With this formulation in mind, eq. (\ref{ks}) reduces to \begin{equation} \frac{d\theta_i}{dt}=\omega_i + \lambda \sum_{j} A_{ij}\sin(\theta_j-\theta_i) \hspace{0.5cm} (i=1,...,N)\;, \label{eq:kscn} \end{equation} independently of the specific topology of the network. This allows us to study the dynamics of eq. (\ref{eq:kscn}) over different topologies, compare the results, and properly inspect the interplay between topology and dynamics as far as synchronization is concerned.
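As an illustration, eq. (\ref{eq:kscn}) can be integrated directly; the following sketch uses a simple Euler scheme on a small all-to-all test graph and then evaluates the modulus of the order parameter of eq. (\ref{eq:kuraorderparam}). The network, frequencies, step size and seeds are arbitrary choices for illustration, not those used in our simulations:

```python
# Euler integration of d(theta_i)/dt = omega_i + lam * sum_j A_ij sin(theta_j - theta_i),
# followed by the Kuramoto order parameter r = |(1/N) sum_j exp(i theta_j)|.
import math
import random

def order_parameter(theta):
    """Modulus r of the complex order parameter for a list of phases."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

def integrate(neighbors, omega, lam, dt=0.01, steps=3000, seed=0):
    """Synchronous Euler updates of the phases on a network given as adjacency lists."""
    rng = random.Random(seed)
    theta = [rng.uniform(-math.pi, math.pi) for _ in omega]
    for _ in range(steps):
        theta = [
            th + dt * (omega[i] + lam * sum(math.sin(theta[j] - th) for j in neighbors[i]))
            for i, th in enumerate(theta)
        ]
    return theta

# All-to-all test network (k_max = N - 1); a strong effective coupling
# should drive the population close to full coherence, r -> 1.
N = 12
neighbors = [[j for j in range(N) if j != i] for i in range(N)]
rng = random.Random(1)
omega = [rng.uniform(-0.5, 0.5) for _ in range(N)]
theta = integrate(neighbors, omega, lam=1.0)
r = order_parameter(theta)
assert 0.9 < r < 1.0
```

Replacing the adjacency lists by those of an ER or SF graph, and sweeping `lam`, reproduces qualitatively the $r(\lambda)$ curves discussed below.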
\section{Homogeneous {\em versus} heterogeneous topologies} \label{sec:Kura-heterogeneous} Recent results have shed light on the influence of the topology of the local interactions on the route to synchronization \cite{McgrawPRE05,usijbc}. However, in these studies at least two parameters (clustering and average path length) vary along the studied family of networks. This paired evolution, although yielding an interesting interplay between the two topological parameters, makes it difficult to distinguish which effects are due to which factor. Here, we first address the influence of heterogeneity, keeping the number of degrees of freedom to a minimum so that the comparison is meaningful. The networks used in the present section are comparable in their clustering, average distance and correlations, so that the only difference relies on the degree distribution, ranging from a Poissonian type to a scale-free distribution. Later on in this paper, we will relax these constraints and study networks in which the main topological feature is given at the mesoscopic scale, i.e., networks with community structure. Therefore, let us first scrutinize and compare the synchronization patterns in Erd\H{o}s-R\'enyi (ER) and Scale-Free (SF) networks. For this purpose we make use of the model proposed in \cite{presfer}, which allows a smooth interpolation between these two extremal topologies. Besides, we introduce a new parameter characterizing the synchronization paths, in order to unravel their differences. The results reveal that the synchronizability of these networks does depend on the coupling between units, and hence, that general statements about their synchronizability are eventually misleading. Moreover, we show that even in the incoherent solution, $r=0$, the system is self-organizing towards synchronization. We will analyze in detail how this self-organization is attained.
The first numerical study of the onset of synchronization of Kuramoto oscillators in SF networks \cite{YamirEPL04} revealed the great propensity of SF networks for synchronization, reflected in a non-zero but very small critical value $\lambda_c$ \cite{note2}. Besides, it was observed that in the synchronized state, $r=1$, hubs are extremely robust to perturbations, since the recovery time of a node as a function of its degree follows a power law with exponent $-1$. However, how do SF networks compare with homogeneous networks, and what are the roots of the different behaviors observed? We first concentrate on global synchronization for the Kuramoto model Eq. (\ref{eq:kscn}). For this we follow the evolution of the order parameter $r$, Eq. (\ref{eq:kuraorderparam}), as $\lambda$ increases, to capture the global coherence of the synchronization in networks. We will perform this analysis on the family of networks generated with the model introduced in \cite{presfer}. This model generates a one-parameter family of networks labeled by $\alpha\in[0,1]$. The parameter $\alpha$ measures the degree of heterogeneity of the final networks, so that $\alpha=0$ corresponds to the heterogeneous BA network and $\alpha=1$ to homogeneous ER graphs. For intermediate values of $\alpha$ one obtains networks that have been grown combining both preferential attachment and homogeneous random linking, so that each mechanism is chosen with probabilities $(1-\alpha)$ and $\alpha$, respectively. It is worth stressing that the growth mechanism preserves the total number of links, $N_{l}$, and nodes, $N$, for a proper comparison between different values of $\alpha$. Specifically, assuming the final size of the network to be $N$, the network is built up starting from a fully connected core of $m_{0}$ nodes and a set ${\cal S}(0)$ of $N-m_{0}$ unconnected nodes. Then, at each time step, a new node (not selected before) is chosen from ${\cal S}(0)$ and linked to $m$ other nodes.
Each of the $m$ links is attached with probability $\alpha$ to a randomly chosen node (avoiding self-connections) from the whole set of $N-1$ remaining nodes, and with probability $(1-\alpha)$ following a linear preferential attachment strategy \cite{doro}. After repeating this process $N-m_{0}$ times, networks interpolating between the limiting cases of ER ($\alpha=1$) and SF ($\alpha=0$) topologies are generated \cite{presfer}. Furthermore, with this procedure, the degree of heterogeneity of the grown networks varies smoothly between the two limiting cases. The curves $r(\lambda)$ for several network topologies ranging from ER to SF are shown in Fig.\ref{10000R}. We have performed extensive numerical simulations of eq. (\ref{eq:kscn}) for each network substrate starting from $\lambda=0$ and increasing it up to $\lambda=0.4$ in steps of $\delta\lambda=0.02$. A large number (at least $500$) of different network realizations and initial conditions were considered for every value of $\lambda$ in order to obtain an accurate phase diagram. The natural frequencies $\omega_i$ and the initial values of $\theta_i$ were randomly drawn from uniform distributions in the intervals $(-1/2,1/2)$ and $(-\pi,\pi)$, respectively. \begin{figure}[!t] \begin{center} \epsfig{file=Fig1pre.eps,width=3.3in,angle=0,clip=1} \end{center} \caption{Global synchronization curves $r(\lambda)$ for different network topologies labeled by $\alpha$ ($\alpha=0$ corresponds to the BA limit and $\alpha=1$ to ER graphs). The inset shows the region where the onset of synchronization takes place. The network sizes are $N=10^4$ and $\langle k\rangle=6$ ($N_l=3\cdot 10^4$), and the networks were generated using the model introduced in \cite{presfer}.} \label{10000R} \end{figure} Fig.\ref{10000R} reveals the differences in the critical behavior as a function of the substrate heterogeneity. The global coherence of the synchronized state, represented by $r$, shows that the onset of synchronization first occurs for SF networks.
As the network substrate becomes more homogeneous, the critical point $\lambda_c$ shifts to larger values and the system seems to be less synchronizable. On the other hand, it is also clear that the route to complete synchronization, $r=1$, is faster for homogeneous networks. That is, when $\lambda>\lambda_c(\alpha)$ the growth rate of $r$ increases with $\alpha$. To inspect the critical parameters of the system dynamics in depth, we perform a finite size scaling analysis. This allows us to determine with precision the curve $\lambda_c(\alpha)$ and to study the critical behavior near the synchronization transition. We assume a scaling relation of the form \begin{equation} r=N^{-\nu}f(N^{\beta}(\lambda-\lambda_c)), \label{eqfss} \end{equation} where $f(x)$ is as usual a universal scaling function bounded as $x \rightarrow \pm \infty$, and $\nu$ and $\beta$ are critical exponents to be determined. The detailed analysis performed for both SF and ER topologies shows that the critical value of the effective coupling, $\lambda_c$, corresponds in scale-free networks to $\lambda_c^{SF} = 0.051$, and in random networks to $\lambda_c^{ER} = 0.122$, in agreement with Fig.\ref{10000R}. In both cases, the transition strongly recalls the classical transition of the original KM \cite{KuramotoLNP75}, with a critical exponent near $1/2$ for the SF network \cite{YamirEPL04}. For intermediate values of $\alpha$, the results show that the critical point shifts to larger values as the degree of heterogeneity decreases. They are shown in Table\ \ref{table1} together with some topological properties of the networks. \begin{table} \caption{\label{table1} Topological properties of the networks used in this work and critical points for the onset of synchronization obtained from a FSS analysis (Eq.\ (\ref{eqfss})). The topological quantities reported are the result of an average over 1000 network realizations. $\langle k \rangle=4 $ and $N=10^4$ have been set for all networks.
Standard deviation of the mean values for $\lambda_c$ is $\pm 2$ units in the last significant digit.} \begin{ruledtabular} \begin{tabular}{llll} $\alpha$ & $ \langle k^2 \rangle $ & $k_{max}$ & $\lambda_c $\\ \hline 0.0 & 115.5 & 326.3 & 0.051\\ 0.2 & 56.7 & 111.6 & 0.066\\ 0.4 & 44.9 & 47.7 & 0.088\\ 0.6 & 41.1 & 25.6 & 0.103\\ 0.8 & 39.6 & 16.8 & 0.108\\ 1.0 & 39.0 & 14.8 & 0.122 \end{tabular} \end{ruledtabular} \end{table} The differences between ER and SF topologies observed when looking at global patterns of synchronization motivate a more detailed study of the synchronization onset for both topologies. The original work by Kuramoto pointed out that at the onset of synchronization small clusters of locked oscillators emerge, and that the recruitment of more oscillators into these clusters as the coupling is increased raises the global coherence $r$ of the system. Obviously, the emergence of these clusters depends on the underlying topology, which constrains the possible configurations that locked oscillators can eventually form. To see how this initial coherence is achieved we propose a new order parameter, $r_{link}$. This parameter measures the local construction of the synchronization patterns \cite{note3} and allows for the exploration of how global synchronization is attained. We define \begin{equation} r_{link}=\frac{1}{2N_l}\sum_{i}\sum_{j\in \Gamma_i}\left |\lim_{\Delta t\rightarrow\infty}\frac{1}{\Delta t}\int_{t_r}^{t_r+\Delta t}e^{i\left[\theta_i(t) -\theta_j(t)\right]}dt\right |, \label{r_link} \end{equation} where $\Gamma_i$ is the set of neighbors of node $i$. The parameter $r_{link}$ measures the fraction of all possible links that are synchronized in the network. The averaging time $\Delta t$ should be taken large enough in order to obtain good measures of the degree of coherence between each pair of physically connected nodes. Besides, $r_{link}$ is computed after the system relaxes, at some large time $t_r$.
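In practice, the time average in eq. (\ref{r_link}) is estimated from a discretely sampled trajectory; a minimal sketch follows (names are illustrative; summing each undirected link once and normalizing by the number of links is equivalent to the normalized double sum over neighbors):

```python
# Estimate r_link from sampled phases traj[t][i]: the finite-sample mean of
# exp(i (theta_i - theta_j)) over the recorded snapshots replaces the
# infinite-time average in the definition of r_link.
import cmath

def link_coherence(traj, i, j):
    """|sample mean of exp(i (theta_i - theta_j))| for one link (i, j)."""
    acc = sum(cmath.exp(1j * (snap[i] - snap[j])) for snap in traj)
    return abs(acc) / len(traj)

def r_link(traj, edges):
    """Average link coherence over a list of undirected edges (i, j)."""
    return sum(link_coherence(traj, i, j) for i, j in edges) / len(edges)

# Two oscillators with a constant phase lag are fully coherent (value 1),
# while a uniformly drifting phase difference averages to 0 over a full cycle.
locked = [[0.1 * t, 0.1 * t + 0.7] for t in range(8)]
drifting = [[0.0, 2 * cmath.pi * t / 8] for t in range(8)]
assert abs(r_link(locked, [(0, 1)]) - 1.0) < 1e-9
assert r_link(drifting, [(0, 1)]) < 1e-9
```

Note that a constant nonzero phase lag still yields coherence 1: the quantity detects phase locking, not phase equality.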
Note that in the limit of all-to-all coupling the information provided by $r_{link}$ is exactly the same as that provided by $r$, because in this case $r_{link}\propto r^2$. Therefore, no additional information would be provided by this new parameter in the all-to-all case. Here, however, it turns out to be the key parameter to characterize how synchronization emerges at a local scale. \begin{figure}[!t] \epsfig{file=Fig2pre.eps,width=2.8in,angle=0,clip=1} \caption{Evolution of the control parameters $r$ and $r_{link}$ as a function of the coupling strength for networks generated with the model introduced in \cite{presfer}, corresponding to $\alpha=0.0$, $0.25$, $0.5$, $0.75$ and $1.0$. The size of the networks is $N=10^3$ and their average degree is $\langle k \rangle=6$. The exponent of the SF networks increases from $\gamma=3$ ($\alpha=0$).} \label{R} \end{figure} In Fig.\ref{R} we represent the evolution of both order parameters, $r$ and $r_{link}$, as a function of the coupling strength $\lambda$ for several values of $\alpha$. The behavior of $r_{link}$ shows a change in synchronizability between ER and SF networks and provides additional information to that reported by $r$. Interestingly, the nonzero values of $r_{link}$ for $\lambda\leq\lambda_c$ indicate the existence of some local synchronization patterns even in the regime of global incoherence ($r \approx 0$). Right at the onset of synchronization for the SF network limit, its $r_{link}$ value deviates from that of the ER network, recovering the known result that SF networks synchronize at lower values of the coupling. In this region, while the synchronization patterns continue to grow for the ER network at the same rate, the formation of locally synchronized structures occurs at a faster rate in the SF network.
Finally, when the incoherent solution in the ER network destabilizes, the growth of its synchronization pattern increases drastically, up to values of $r_{link}$ comparable to those obtained in SF networks and even higher. For intermediate values of $\alpha$, the results show that the effect of varying the heterogeneity of the underlying network is twofold. On the one hand, the more heterogeneous the network is, the smaller the values of $\lambda$ needed for the onset of synchronization. Conversely, the increase in the degree of heterogeneity results in larger values of $\lambda$ being needed to achieve complete synchronization. In short, as the heterogeneity is increased, the onset of synchronization is anticipated, but at the same time, the appearance of the fully synchronized state is delayed. These results undoubtedly point out that statements about synchronizability depend on the value of the coupling strength. To shed new light on this phenomenon, we have studied the characteristics of the synchronization patterns along the evolution of $r_{link}$. Following the usual picture, synchronization patterns are formed by pairs of oscillators, physically connected, whose phase difference in the stationary state tends to zero. In order to determine which pairs of nodes are truly synchronized we should determine the coherence of their dynamics. Note that eq. (\ref{r_link}) is the average dynamical coherence over every pair of linked nodes, and hence the synchronization degree of every pair of connected oscillators can be written in terms of a symmetric matrix \begin{equation} {\cal D}_{ij} =A_{ij}\left |\lim_{\Delta t\rightarrow\infty}\frac{1}{\Delta t}\int_{t_r}^{t_r+\Delta t}e^{i\left[\theta_i(t)-\theta_j(t)\right]}dt\right |\;. \label{eq:Dij} \end{equation} One then has to analyze each matrix element $D_{ij}$ in order to label a link $(i,j)$ as synchronized or not.
As introduced above, from the computation of $r_{link}$ one determines the fraction of physical links that are synchronized, so one would expect $2r_{link}\cdot N_l$ elements of the matrix ${\bf {\cal D}}$ to be ${\cal D}_{ij}= 1$, while the remaining elements are ${\cal D}_{ij}= 0$. However, this is not the real situation, since the network dynamics is not well described in terms of a fully synchronized cluster plus a set of completely incoherent oscillators. On the other hand, the worst scenario would occur if there were $2N_l$ elements of matrix ${\bf {\cal D}}$ with ${\cal D}_{ij}=r_{link}$, implying that all the physically connected pairs are equally synchronized; in that case the parameter $r_{link}$ could not be interpreted as the fraction of links that are dynamically coherent, and no information about the topological patterns of synchronization could be extracted from matrix ${\bf {\cal D}}$. The situation found is neither as simple as the former possibility nor as dramatic as the latter. The contributions ${\cal D}_{ij}$ of the $N_{l}$ elements of matrix ${\bf {\cal D}}$ that correspond to physical links can be ordered from the highest to the lowest. We have checked, for two situations in a SF network, corresponding to the onset of synchronization ($\lambda=0.05$) and to high global coherence ($\lambda=0.13$), that synchronized links can be clearly identified. At the onset of synchronization, a subset of nearly $20 \%$ of links displaying coherent dynamics with a high degree of synchronization, ${\cal D}_{ij}>0.8$, is well separated from the behavior of the remaining links, for which a dramatic decrease of ${\cal D}_{ij}$ takes place. In this sense, it is clear that the dynamics of $20\%$ of the possible pairs can be regarded as synchronized, which is in agreement with the value $r_{link}=0.25$ obtained for $\lambda=0.05$, and supports the view that although macroscopic coherence is not observed ($r\simeq0$ at this point) the system is already organizing towards it.
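The ranking of link coherences just described amounts to sorting the values ${\cal D}_{ij}$ of the physical links in decreasing order and looking for the gap that separates the coherent subset from the rest. A minimal helper (our own naming, not from the paper) could look like:

```python
import numpy as np

def ranked_link_coherences(D, A):
    """Coherences D_ij of the physical links, sorted from highest to lowest,
    as used to look for the drop separating synchronized links from the rest.

    D : (N, N) symmetric coherence matrix
    A : (N, N) symmetric 0/1 adjacency matrix
    """
    i, j = np.triu_indices_from(A, k=1)   # each undirected link once
    vals = D[i, j][A[i, j] > 0]
    return np.sort(vals)[::-1]
```

Plotting the returned array against its rank reproduces the plateau-plus-drop shape discussed in the text.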
For $\lambda=0.13$ ($r_{link}\simeq 0.82$) a plateau of nearly $75 \%$ of links is observed, thus revealing the high degree of global coherence, $r\simeq 0.7$, at this point. Therefore, the shape of the ranked ${\cal D}_{ij}$ curves confirms that $r_{link}$ gives the fraction of synchronized links, and thus allows one to obtain information about synchronized patterns from ${\bf {\cal D}}$. To determine exactly which pairs of nodes are regarded as synchronized, the matrix ${\cal D}$ is filtered using a threshold $T$ such that the fraction of synchronized pairs equals $r_{link}$. In this way $T$ is a moving threshold: if ${\cal D}_{ij}>T$, oscillators $i$ and $j$ are considered synchronized. The value of $T$ depends on the particular realization and is determined by means of an iterative scheme starting from $T=1$. Decreasing it in steps of $\delta T=0.01$, one computes the number of links that fulfill the condition. In this way, the value of $T$ progressively decreases and more pairs of oscillators are selected. The process stops when $T$ is such that the fraction of selected links equals the value of $r_{link}$ previously computed from ${\cal D}$. Finally, once the synchronized links are identified, the clusters of synchronized nodes are reconstructed. \begin{figure}[!t] \epsfig{file=Fig3pre.eps,width=2.8in,angle=0,clip=1} \caption{Evolution of the number of synchronized clusters $N_c$ and the synchronized giant component size $GC$ as a function of $r_{link}$ for the different topologies considered. Small values of $r_{link}$ correspond to values of $\lambda$ for which $r\approx 0$. Although $r$ vanishes there, and hence no global synchronization is yet attained, a significant number of clusters shows up. This indicates that for any $\lambda>0$ the system self-organizes towards macroscopic synchronization.
The network parameters are as in Fig.\ \ref{R}.} \label{pattern} \end{figure} \begin{figure*}[!t] \begin{center} \epsfig{file=Fig4prev1.eps,height=1.5in,angle=0,clip=1} \end{center} \caption{Giant synchronized components for several values of $\lambda$ in the two limiting cases of the different topologies studied (ER and SF). The size of the underlying networks is small ($N=100$ nodes), in order to have a sizeable picture of the system. Note that for the SF case links and nodes are incorporated together into the $GC$, while for the ER network, what is added are links between nodes already belonging to the $GC$.} \label{nets} \end{figure*} Figure \ref{pattern} represents the number of synchronized clusters and the size of the giant component ($GC$) as a function of $r_{link}$ for the same values of $\alpha$ used in Fig.\ \ref{R}. The local information extracted from it points to an unexpected feature of the synchronization process that cannot be derived from Figs.\ \ref{10000R} and \ref{R}. The emergence of clusters of synchronized pairs of oscillators (links) shows that, already for values of $\lambda$ for which the incoherent solution $r=0$ is stable, the networks have developed a largest cluster of synchronized pairs of oscillators involving $50\%$ of the nodes, together with a number of smaller synchronization clusters. From this point on, the behavior of both $GC$ and $N_c$ depends on the specific value of $\alpha$. When heterogeneity dominates, the $GC$ grows and the number of smaller clusters goes down, whereas for less heterogeneous networks the growth of the $GC$ is more abrupt and nodes are incorporated into it faster.
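The moving-threshold filtering described above (start from $T=1$ and lower it in steps of $\delta T=0.01$ until the fraction of selected links matches $r_{link}$) and the subsequent cluster reconstruction can be sketched as follows. This is our own minimal implementation with invented function names, assuming the coherence matrix ${\cal D}$ and the adjacency matrix are given as dense arrays:

```python
import numpy as np

def filter_sync_links(D, A, rlink, dT=0.01):
    """Moving-threshold filter: lower T from 1 in steps of dT until the
    fraction of physical links with D_ij > T reaches the target r_link.
    Returns the symmetric 0/1 matrix of synchronized links."""
    n_links = A.sum() / 2.0
    T = 1.0
    S = (D > T) & (A > 0)
    while T > 0 and S.sum() / 2.0 < rlink * n_links:
        T -= dT
        S = (D > T) & (A > 0)
    return S.astype(int)

def sync_clusters(S):
    """Connected components of the synchronized-link graph: returns the
    number of clusters N_c and the giant component size GC (nodes with no
    synchronized link are ignored)."""
    N = S.shape[0]
    label = -np.ones(N, dtype=int)
    n_c = 0
    for s in range(N):
        if label[s] >= 0 or S[s].sum() == 0:
            continue
        stack, label[s] = [s], n_c          # depth-first flood fill
        while stack:
            u = stack.pop()
            for v in np.flatnonzero(S[u]):
                if label[v] < 0:
                    label[v] = n_c
                    stack.append(v)
        n_c += 1
    sizes = np.bincount(label[label >= 0]) if n_c else np.array([0])
    return n_c, int(sizes.max())
```

Running `sync_clusters` on the filtered matrix for increasing $\lambda$ gives exactly the $N_c$ and $GC$ curves of the kind shown in Fig.\ \ref{pattern}.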
Moreover, the results highlight the fact that although heterogeneous networks exhibit more coherence in terms of $r$ and $r_{link}$, the microscopic evolution of the synchronization patterns is faster in homogeneous networks, these networks being far more locally synchronizable than the heterogeneous ones once $\lambda>\lambda_c$. The observed differences in the behavior at a local scale are rooted in the growth of the $GC$. For homogeneous topologies, many small clusters of synchronized pairs of oscillators (note in Fig.\ \ref{pattern} the large number of clusters formed when $15 \%$ of the links are synchronized) merge together to form a $GC$ when the effective coupling is increased. This coalescence of many small clusters results in a giant component comprising almost the whole system once the incoherent state destabilizes. On the other hand, for heterogeneous graphs, the growth of the giant component is smoother, and the oscillators form new pairs starting from a core made up of half the nodes of the network. That is, in one case (ER-like networks), almost all the nodes of the network take part in the giant component from the beginning and later on, when $\lambda$ is increased, what is added to the $GC$ are the links among these nodes that were missing in the original cluster of synchronized nodes. For SF-like networks, the mechanism is the opposite. Nodes are added to the $GC$ {\em together} with most of their links, resulting in a growth of $r_{link}$ much slower than for the homogeneous topologies. \begin{figure}[!b] \begin{center} \epsfig{file=Fig5pre.eps,width=2.3in,angle=-90,clip=1} \end{center} \caption{Evolution of the ratio between the clustering coefficient of the giant synchronized cluster, $\langle c_{sync}\rangle$, and that of the substrate network $\langle c_{network}\rangle$, as a function of $\lambda$ for the two limiting cases of BA and ER networks.
Network parameters are those used in Fig.\ \ref{R}.} \label{clust} \end{figure} The above picture is confirmed in Fig.\ \ref{nets}, where we have represented the evolution of the local synchronization patterns of the giant components in ER and SF networks for several values of $\lambda$ \cite{note4}. It is clear that when $r\simeq 0$ the two networks follow different paths toward synchronization. In particular, the giant component of the SF network seems to retain the topological features of the substrate network, while this is not the case for the ER network (for instance, the small-world property is clearly lacking). This study of the patterns of self-organization towards synchronization reveals that the quantitative difference in macroscopic behavior, shown by the computation of the evolution of the global coherence $r$ for ER and SF networks, has its roots in qualitatively different routes at the microscopic level of description. The use of the new parameter $r_{link}$, which involves the computation of the degree of coherence between each pair of linked nodes, is a useful tool for describing such differences. Moreover, the results suggest that the degree of heterogeneity of the network is the key ingredient to explain the two different routes observed. The technique developed to extract the synchronization patterns allows the analysis of the topological features of such clusters of nodes. We can compute relevant average quantities such as the clustering coefficient or the degree distribution, and see how these magnitudes evolve from the uncoupled limit, where no synchronization occurs, to the coherent regime where the synchronized network coincides with the underlying substrate. It is then relevant to explore the regions where the onset of synchronization takes place and to characterize topologically these emergent synchronized clusters.
In Fig.\ \ref{clust} the evolution of the average clustering coefficient $\langle c_{sync}\rangle$ of the giant synchronized cluster, relative to $\langle c_{network}\rangle$ of the underlying network, is plotted as a function of $\lambda$ for both the BA and ER networks. It is worth mentioning that the results depicted in the figure have been computed taking into account that nodes with degree 1 do not contribute to the clustering coefficient of the $GC$, as $c$ is not properly defined for these nodes. The results are illustrative of the local organization of synchronized nodes. The figure shows that for both topologies the clustering decreases as the coupling is increased beyond the respective $\lambda_c$ or, in other words, as the giant component grows by the addition of new synchronized pairs of nodes. However, the effects of the two different routes to complete synchronization observed for ER and SF networks are clearly appreciated in the results. For the heterogeneous network there is a smooth decrease of the clustering coefficient for $\lambda>\lambda_c^{SF}$, and the emergence of global coherence has no dramatic effect on the behavior of $\langle c_{sync}\rangle$. This is because in this case the giant component mainly grows by recruiting new synchronized nodes and their links. On the other hand, for the ER graph the behavior observed for $\lambda<\lambda_c^{ER}$, i.e.\ when no macroscopic coherence is observed, is interrupted by a sudden jump near its critical value. In fact, for $\lambda>\lambda_c^{ER}$ the clustering of the synchronized cluster quickly approaches the value of $\langle c\rangle$ of the substrate network. This effect becomes clear if one keeps in mind the coalescence of small clusters, which happens around the critical point for ER graphs.
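The convention of excluding degree-1 nodes from the average clustering coefficient is easy to make explicit in code. The following sketch (our own helper, not from the paper) computes $\langle c\rangle$ of a 0/1 graph, skipping every node whose local coefficient is undefined:

```python
import numpy as np

def avg_clustering(S):
    """Average local clustering coefficient of the 0/1 graph S, skipping
    nodes with degree < 2, for which c_i is not properly defined."""
    deg = S.sum(axis=1)
    cs = []
    for i in range(S.shape[0]):
        k = int(deg[i])
        if k < 2:
            continue                       # c_i undefined: node contributes nothing
        nbrs = np.flatnonzero(S[i])
        links = S[np.ix_(nbrs, nbrs)].sum() / 2   # links among i's neighbours
        cs.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(cs)) if cs else 0.0
```

Applying this to the synchronized giant component and to the substrate network gives the ratio $\langle c_{sync}\rangle/\langle c_{network}\rangle$ plotted in Fig.\ \ref{clust}.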
In fact, for $\lambda<\lambda_c^{ER}$, considering the giant synchronized component of the ER network amounts to considering one of the several disjoint synchronized clusters of similar sizes present in this region. Moreover, the coalescence process leads to the formation of a giant cluster that contains almost all the nodes of the network (see Fig.\ \ref{pattern}), but with a significantly smaller number of links. Hence, when the clusters collapse into a much larger one, the topological features change dramatically, as observed from the evolution of the clustering coefficient. \begin{figure} \begin{center} \epsfig{file=Fig6pre.eps,width=3.4in,angle=0,clip=1} \end{center} \caption{(color online) The plot shows the fraction of links that a node with degree $k$ belonging to the synchronized cluster shares with other nodes of the same synchronized cluster. This fraction $k_{int}/k$ is plotted as a function of $k$ and $\lambda$. The figure shows how the hubs progressively incorporate their neighbors into the synchronized component as $\lambda$ grows. The network is SF with parameters as those used in Fig.\ \ref{R} and $\alpha=0$.} \label{ki} \end{figure} All the results reported above point out that the ultimate reason behind the two different routes to complete synchronization is the heterogeneous character of the SF network and the role played by the hubs. The natural cohesion that hubs provide to the SF network prevents the existence of independent macroscopic clusters of synchrony, as occurs for ER networks. It is then interesting to study how these hubs participate in the formation of the final synchronized state. To this end, we first study the evolution with $\lambda$ of the composition of the synchronized cluster in terms of the degrees of its components. In \cite{prljya}, we reported the probability that a node with degree $k$ belongs to the giant synchronized cluster as a function of its degree $k$ and the coupling $\lambda$ for the SF network.
This probability turns out to be an increasing function of $k$ for every value of $\lambda$, and therefore the more connected a node is, the more likely it is to take part in the cluster of synchronized links. In particular, the results confirm the hypothesis made above that the hubs participate in the formation of the synchronized cluster from the very beginning. A similar result was obtained in \cite{ZhouChaos06}, where Zhou and Kurths studied the hierarchical organization in complex networks, using the MSF and a mean-field approach in the weak coupling limit. The above characterization of the synchronized cluster in terms of the degrees of its components can be completed by studying their effective degree, $k_{int}$. The effective degree of a synchronized node is the number of links it shares with other nodes belonging to the same synchronized cluster. Obviously, in the completely synchronized regime a node with degree $k$ will have $k_{int}=k$. We have plotted in Fig.\ \ref{ki} the quantity $k_{int}/k$ (the fraction of links that a node has with synchronized neighbors) as a function of $\lambda$ and the degree $k$ of the nodes ($\alpha=0$). The results reveal that although hubs are the first to take part in the synchronized cluster, their neighbors are progressively incorporated into the cluster as $\lambda$ grows. Besides, if a node with small $k$ is synchronized, the probability that its neighbors are also synchronized grows very fast with $\lambda$, which is an effect of the network topology. These results further support the statement about the essential role played by the hubs in the recruitment of oscillators into the synchronized group and in the emergence of complete synchronization in SF networks. \section{Synchronization in structured networks} \label{sec:Kura-community} In light of the results of the above section we have extended the study beyond unstructured networks to structured, or modular, networks.
This is a limiting situation in which the local structure may greatly affect the dynamics, irrespective of whether we deal with homogeneous or heterogeneous networks, and thus such networks constitute a perfect framework for testing the new order parameter $r_{link}$ introduced in the last section. Many complex networks in nature are modular, i.e.\ composed of certain subgraphs with differentiated internal and external connectivity that form communities \cite{physrep,alexcomm}. The use of modular networks in which a proper comparison of synchronizability can be performed (same number of nodes and links) restricts us to the consideration of synthetic structured networks. To this end, we make use of a common benchmark of random networks with community structure, first proposed by Newman \cite{NewmanPRE04} for one hierarchical level and later extended to several hierarchical levels \cite{ArenasPRL06, ArenasPhysD}. The modular network structure we build is as follows: in a set of $N$ nodes, we prescribe $n$ compartments that represent our first community organizational level, and $m$ compartments, each one embedding four different compartments of the first level, that define the second organizational level of the network. The internal degree of nodes at the first level, $z_{in_1}$, the internal degree at the second level, $z_{in_2}$, and the external degree $z_{out}$ satisfy $z_{in_1}+z_{in_2}+z_{out}=\langle k\rangle$, so that these networks are strictly homogeneous in terms of the degree distribution, $P(k)=\delta(k-\langle k\rangle)$. Networks with two hierarchical levels are labeled $z_{in_1}$-$z_{in_2}$; e.g.\ a network labeled $i$-$j$ means that each node has $i$ links with the nodes of its first hierarchical community level (more internal), $j$ links with the rest of the communities that form the second hierarchical level (more external) and $(\langle k\rangle-i-j)$ links with any community of the rest of the network.
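The two-level benchmark just described can be generated roughly as follows. This is only a sketch under our own assumptions: it wires links greedily per node and therefore does not enforce the strictly $\delta$-shaped degree distribution of the original construction (degrees fluctuate slightly around $\langle k\rangle$); all function and parameter names are ours:

```python
import random
import numpy as np

def modular_network(n1=16, per=16, zin1=13, zin2=4, k=18, seed=0):
    """Rough sketch of a two-level modular benchmark.

    N = n1*per nodes split into n1 first-level communities of `per` nodes,
    grouped four-by-four into second-level blocks. Each node draws ~zin1
    links inside its community, ~zin2 inside its block (but outside its
    community) and ~(k - zin1 - zin2) links outside its block.
    """
    rng = random.Random(seed)
    N = n1 * per
    comm = np.repeat(np.arange(n1), per)   # first-level community label
    block = comm // 4                      # second-level block label
    A = np.zeros((N, N), dtype=int)

    def add_links(i, pool, n):
        # link i to up to n not-yet-linked nodes drawn from `pool`
        cand = [j for j in pool if j != i and not A[i, j]]
        for j in rng.sample(cand, min(n, len(cand))):
            A[i, j] = A[j, i] = 1

    for i in range(N):
        add_links(i, np.flatnonzero(comm == comm[i]), zin1)
        add_links(i, np.flatnonzero((block == block[i]) & (comm != comm[i])), zin2)
        add_links(i, np.flatnonzero(block != block[i]), k - zin1 - zin2)
    return A, comm, block
```

With the defaults this yields a 256-node network in the spirit of the $13$-$4$ structure; swapping `zin1=15, zin2=2` gives the $15$-$2$ analogue.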
\begin{figure}[!t] \begin{center} \epsfig{file=Fig7pre.eps,width=3.4in,angle=-90,clip=1} \end{center} \caption{(color online) Evolution of $r$ (top) and $r_{link}$ (bottom) as a function of $\lambda$ for structured modular networks. The networks are synthetically built with an {\em a priori} community structure. The network size is 256 nodes and the number of links is 4608. We prescribe 16 compartments that represent our first community organizational level, and four compartments, each one embedding four different compartments of the first level, that define the second organizational level of the network. Each node has 18 links distributed at random between its first community level, the second one, and the whole network. The network 13-4 has 13 internal connections in its first hierarchical level, 4 external connections in its second hierarchical level, and 1 connection with any other community in the network. The generation of the 15-2 structure is equivalent. The curves show that although 13-4 always displays better global synchronization, 15-2 exhibits better local synchronization, as shown by $r_{link}$.} \label{Communities} \end{figure} Synchronization processes on top of modular networks of this type have recently been studied as a mechanism for community detection \cite{ArenasPRL06,vitosync}. In \cite{ArenasPRL06}, the authors studied the situation in which, starting from a set of homogeneous (in terms of the natural frequencies) Kuramoto oscillators with different initial conditions, the system evolves after a transient to the synchronized state. It was shown that the community structure is progressively unveiled as the system's dynamics evolves toward the coherent state and synchronization is attained. In particular, the nodes belonging to the first community level are the first to get synchronized, subsequently the second level nodes achieve frequency entrainment, and finally the whole system shows global synchronization.
\begin{figure}[!t] \begin{center} \epsfig{file=Fig8pre.eps,width=3.1in,angle=-90,clip=1} \end{center} \caption{(color online) Size of the largest synchronized cluster $GC$ (top) and number of clusters $N_c$ (bottom) for the same networks of Fig.\ \ref{Communities}. See the text for details.} \label{gc_comm} \end{figure} \begin{figure}[!t] \begin{center} \epsfig{file=Fig9pre.eps,width=2.7in,angle=0,clip=1} \end{center} \caption{Number of links ($N_{links}$) that connect synchronized nodes in each of the three levels in the community hierarchy of the networks ($1$ means inner layer). The numbers are normalized by the total number of links at each level in each network.} \label{nlinks} \end{figure} Here we adopt a different perspective since, as previously, we consider a set of non-identical Kuramoto oscillators with random assignment of natural frequencies, so that the final degree of the system's synchronization depends on the strength of the coupling. It is then interesting to study how the degree of synchronization evolves as a function of $\lambda$ and whether the coherence between nodes is progressively distributed following the hierarchy imposed by the underlying topology. For this, we make use of the order parameters $r$, eq.\ (\ref{eq:kuraorderparam}), and $r_{link}$, eq.\ (\ref{r_link}), to characterize the synchronization transition on two slightly different modular networks with two well defined hierarchical levels, $13-4$ and $15-2$, the difference between them being the cohesion of the internal community core: 13 links out of 15 possible neighbors, or 15 links (i.e., all-to-all) at the most internal level. Both networks have $N=256$ and $\langle k\rangle=18$. Fig.\ \ref{Communities} shows the results for both kinds of networks, revealing that the path towards synchronization as a function of the interaction is again affected by the structure. They also show that the information provided by $r_{link}$ is essential to unveil the synchronization process.
While the global synchronization parameter $r$ reflects that the $13-4$ structure always synchronizes better globally, $r_{link}$ again informs us about local synchronization. It shows that local synchronization is indeed favored in the $15-2$ structure, since $r_{link}$ is larger for this topology at small values of $\lambda$, where the system is locally forming synchronized clusters. This result, not captured by the macroscopic indicator $r$, is expected since the internal cohesion of communities at the first hierarchical level is larger for the 15-2 than for the 13-4. The evolution of $r_{link}$ shows that when the coupling $\lambda$ is increased the number of synchronized links in the $13-4$ network becomes larger than in the $15-2$ structure, revealing that complete synchronization is then favored by the presence of more external links connecting the first level communities. \begin{figure}[!t] \epsfig{file=Fig10pre.eps,width=\columnwidth,angle=0,clip=1} \caption{(color online) We represent the degree of synchronization between pairs of connected nodes for several values of the coupling $\lambda$ in a $13-4$ modular network (with two organizational levels) of $N=256$ nodes. The color code denotes the value of the averaged (over different initial conditions) filtered matrix $\langle{\cal D}_{ij}\rangle\in [0,1]$. The values of the coupling are (from left to right and top to bottom) $\lambda =0.011$, $0.026$, $0.032$, $0.035$, $0.038$, $0.046$, $0.210$ (corresponding to full synchronization). The pictures show that the order of synchronization is given by the organizational levels.
The first community level is the first one to get synchronized; subsequently, second level nodes attain synchronization at a larger value of $\lambda$, and finally the fully synchronized state is reached when the outer links have $\langle D_{ij}\rangle=1$.} \label{13-4} \end{figure} Fig.\ \ref{gc_comm} shows the size of the giant component of synchronized clusters and the number of them as a function of $\lambda$. An interesting effect of the community structure of the networks and of the dynamics of the synchronization process is revealed in the figure. Right at the value of $\lambda$ where the onset of global coherence takes place, the size of $GC$ suddenly falls, only to increase again at larger values of the coupling strength. Additionally, note that this point coincides with that corresponding to a change in the concavity of the $r_{link}(\lambda)$ curves. This change at the microscopic level is due to the readjustment of links that connect synchronized nodes. In fact, as Fig.\ \ref{nlinks} illustrates for both networks, in this region of $\lambda$ values the number of links connecting synchronized nodes of the third level decreases, while the number of those ascribed to the second level rises. That is, the synchronization process takes place in such a way that the first to synchronize are the nodes of the inner community level, then the second, and so on until the whole network gets synchronized. The relevant fact is that in order for $r_{link}$ and $r$ to grow, the nodes and links of the second level adjust their phases at the expense of those of the outer layer, the third level. This is also reflected in the number of clusters of synchronized links ($N_c$), i.e., the network appears as if the nodes of the third level were ``temporarily'' disconnected. Moreover, as the $13-4$ network has more links connecting the first and the second hierarchical levels, $N_{links2}$ rises faster in this network than in the $15-2$.
We have further inspected the synchronization path in modular networks. This can be easily done and visualized by representing the filtered matrix ${\cal D}$. This amounts to reassigning the values of matrix ${\cal D}$ so that ${\cal D}_{ij}=1$ if ${\cal D}_{ij}>T$, and ${\cal D}_{ij}=0$ otherwise. Plotting this filtered matrix for different values of the coupling $\lambda$, one can easily determine which links are the first to synchronize, since the adjacency matrix (which includes all the physical links between nodes) is easy to interpret because of its nested structure. Fig.\ \ref{13-4} shows how the community structure determines the internal organization of the system in the route towards full synchronization for the $13-4$ network. For this study we have computed the value of the filtered matrix ${\cal D}$ for a number of initial conditions and then taken its average value, so that $\langle {\cal D}_{ij} \rangle\in [0,1]$ accounts for the synchronization strength of the network link $(i,j)$. The results point out that link synchronization depends on the organizational level the links belong to. Those connecting nodes belonging to the same first level community are the fastest (in terms of the coupling strength $\lambda$) to reach full synchronization. For larger values of $\lambda$, full synchronization is attained progressively for the subsequent organizational levels. One can then conclude that the more internal a link is, the faster it gets synchronized, in agreement with previous studies reported above \cite{ArenasPRL06}. \section{Conclusions} In this paper we have explored several issues about synchronization in complex networks of Kuramoto phase oscillators. Our main concern has been the study of the synchronization patterns that emerge as the coupling between non-identical oscillators increases. We have described the degree of synchronization between each pair of connected oscillators.
The use of a new parameter, $r_{link}$, allows one to reconstruct the synchronization clusters from the dynamical data. We have studied how the underlying topology (ranging from homogeneous to heterogeneous structures) affects the evolution of synchronization patterns. The results reveal that the route towards full synchronization depends strongly on whether one deals with homogeneous or heterogeneous topologies. In particular, it has been shown that a giant cluster of synchronization in heterogeneous networks comes out from a unique core formed by highly connected nodes (hubs), whereas for homogeneous networks several synchronization clusters of similar size can coexist. In the latter case, a coalescence of these clusters is observed in the synchronization path, which is macroscopically manifested by the sudden growth of global coherence. Another important effect of the underlying topology is manifested in an anticipated onset of global coherence for heterogeneous networks with respect to more homogeneous topologies. However, the latter reach the state of full synchronization at lower values of the coupling strength, therefore showing that statements about the synchronizability of complex networks are relative to the region of the phase diagram where they operate. Additionally, we have shown that these systems are seen to organize towards synchronization even when no macroscopic signs of global coherence are observed. Finally, the framework of structured networks has provided a useful benchmark for testing the validity of the new parameter $r_{link}$ and the information obtained from the computation of matrix ${\cal D}$. The results obtained by means of these quantities allow one to conclude that for modular networks synchronization is first locally attained at the most internal level of organization and, as the coupling is increased, it progressively evolves toward outer shells of the network.
The latter process is, however, achieved at the expense of partially readjusting some pairs of synchronized nodes between the inner and outer community levels. Besides, we have obtained evidence that a high cohesion of the first-level communities produces a high degree of local synchronization, although it delays the appearance of the globally coherent state. This study has extended the previous findings about the paths towards synchronization in complex networks \cite{prljya}, and provides a deeper understanding of phase synchronization phenomena on top of complex topologies. In general, the work supports the idea that, in the absence of analytical tools to confront the resolution of non-linear dynamical models in complex networks, the introduction of new parameters to describe the statistical properties of the emergence of local patterns is needed, as they give novel and useful information that might guide our comprehension of these phenomena. On more general grounds, this work adds to other recent findings \cite{chaos,games} about the topology emerging from dynamical processes. The evidence being accumulated points to a dynamical organization, both at the local and global scales, that is driven by the underlying topology. Whether or not this intriguing regularity has something to do with the ubiquity of complex heterogeneous networks in Nature is not clear yet. More work in this direction is needed, but we think that it may ultimately lead to uncovering important universal relations between the structure and function of complex natural systems that form networks. Another issue to explore in future works concerns the behavior of non-linear dynamical systems on top of directed networks \cite{timme}, which will allow deeper insights into the behavior of natural systems. \begin{acknowledgments} We thank J.A. Acebr\'on, S. Boccaletti, A. D\'{\i}az-Guilera, C.J. P\'{e}rez-Vicente and V. Latora for helpful comments. J.G.G. and Y.M.
are supported by MEC through a FPU grant and the Ram\'{o}n y Cajal Program, respectively. This work has been partially supported by the Spanish DGICYT Projects FIS2006-13321-C02-02, FIS2006-12781-C02-01 and FIS2005-00337 and by the European NEST Pathfinder project GABA under contract 043309. \end{acknowledgments}
\section{Introduction} We fix a base field $K$ of characteristic 0, an integer $m\geq 2$ and a set of symbols $X=\{x_1,\ldots,x_m\}$. We call the elements of $X$ variables. Sometimes we shall use other symbols, e.g. $y,z,y_i$, etc. for the elements of $X$. We denote by $V_m$ the vector space with basis $X$. \par Let $K\langle X\rangle=K\langle x_1,\ldots,x_m\rangle$ be the free unitary associative algebra freely generated by $X$ over $K$. The elements of $K\langle X\rangle$ are linear combinations of words $x_{j_1}\cdots x_{j_n}$ in the noncommuting variables $X$. The general linear group $GL_m=GL_m(K)$ acts naturally on the vector space $V_m$ and this action is extended diagonally on $K\langle X\rangle$ by the rule \[ g(x_{j_1}\cdots x_{j_n})=g(x_{j_1})\cdots g(x_{j_n}),\ g\in GL_m,\ x_{j_1},\ldots,x_{j_n}\in X. \] All associative algebras which we consider in this paper are homomorphic images of $K\langle X\rangle$ modulo ideals $I$ which are invariant under this $GL_m$-action. We shall use the same symbols $x_j$ and $X$ for the generators and the whole generating set of $K\langle X\rangle/I$. Most of the algebras in our considerations will be relatively free algebras in varieties of unitary associative algebras. Examples of relatively free algebras are the polynomial algebra $K[X]$ and the free algebra $K\langle X\rangle$ which are free, respectively, in the varieties of all commutative algebras and all associative algebras. We also shall consider Lie algebras which are homomorphic images of the free Lie algebra with $X$ as a free generating set modulo ideals which are also $GL_m$-invariant. \par Let $A$ be any (not necessarily associative or Lie) algebra over $K$. Recall that the $K$-linear operator $\delta$ acting on $A$ is called a derivation of $A$ if \[ \delta(uv)=\delta(u)v+u\delta(v)\ \text{\rm for all }\ u,v\in A. 
\] The elements $u\in A$ which belong to the kernel of $\delta$ are called constants of $\delta$ and form a subalgebra of $A$ which we shall denote by $A^{\delta}$. The derivation $\delta$ is locally nilpotent if for any $u\in A$ there exists a positive integer $n$ such that $\delta^n(u)=0$. If $\delta$ is a locally nilpotent derivation of $A$, then the linear operator of $A$ \[ \exp\delta=1+\frac{\delta}{1!}+\frac{\delta^2}{2!}+\cdots \] is well defined and is an automorphism of the $K$-algebra $A$. It is easy to see that $A^{\delta}$ coincides with the subalgebra of fixed points (or invariants) of $\exp\delta$ which we shall denote by $A^{\exp\delta}$. The mapping $\alpha\to\exp(\alpha\delta)$, $\alpha\in K$, defines an additive action of $K$ on $A$. It is well known that for polynomial algebras every additive action of $K$ is of this kind, see for more details Snow \cite{Sn}. See also Drensky and Yu \cite{DY1} for relations between exponents of locally nilpotent derivations and automorphisms $\varphi$ with the property that the orbit $\{\varphi^n(a)\mid n\in{\mathbb Z}\}$ of each $a\in A$ spans a finite dimensional vector space in the noncommutative case. \par If $A=K\langle X\rangle/I$ for some $GL_m$-invariant ideal $I$, then the derivation $\delta$ of $A$ is called triangular, if $\delta(x_j)$, $j=1,\ldots,m$, belongs to the subalgebra of $A$ generated by $x_1,\ldots,x_{j-1}$. Clearly, the triangular derivations are locally nilpotent. If $\delta$ acts linearly on the vector space $V_m=\sum_{j=1}^mKx_j\subset A$, then it is called linear. \par If $\delta$ is a triangular derivation, then $\exp\delta$ is a triangular automorphism of $A$, with the property \[ \exp\delta(x_j)=x_j+f_j(x_1,\ldots,x_{j-1}),\ j=1,\ldots,m, \] where $f_j(x_1,\ldots,x_{j-1})$ depends on $x_1,\ldots,x_{j-1}$ only. 
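As a concrete computational illustration (a sketch of ours, not part of the formal exposition), the basic triangular linear derivation $\delta(x)=0$, $\delta(y)=x$, $\delta(z)=y$ of the commutative polynomial algebra $K[x,y,z]$ can be realized symbolically: the exponential series terminates on every polynomial, $\exp\delta$ is an algebra automorphism, and it fixes constants of $\delta$ such as $2xz-y^2$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# basic triangular linear derivation: delta(x)=0, delta(y)=x, delta(z)=y,
# extended to K[x,y,z] by the Leibniz rule (realized via partial derivatives)
def delta(p):
    return sp.expand(x * sp.diff(p, y) + y * sp.diff(p, z))

# exp(delta) = 1 + delta + delta^2/2! + ... ; the series terminates
# because delta is locally nilpotent on polynomials
def exp_delta(p):
    total, term, k = sp.Integer(0), sp.expand(p), 0
    while term != 0:
        total += term / sp.factorial(k)
        term = delta(term)
        k += 1
    return sp.expand(total)

# exp(delta)(z) = z + y + x/2
image_z = exp_delta(z)

# 2xz - y^2 is a constant of delta, hence a fixed point of exp(delta)
const = 2*x*z - y**2
```

The multiplicative property $\exp\delta(uv)=\exp\delta(u)\exp\delta(v)$ can be checked on sample products in the same way.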
Every triangular automorphism $\varphi$ of this form can be presented in the form $\varphi=\exp\delta$ for some triangular derivation \[ \delta=\log(\varphi)=\frac{\varphi-1}{1}-\frac{(\varphi-1)^2}{2} +\frac{(\varphi-1)^3}{3}-\frac{(\varphi-1)^4}{4}+\cdots. \] (The $K$-linear operator $\delta$ of $A$ is well defined because the linear operators $(\varphi-1)^k$ map every $f(x_1,\ldots,x_j)\in A$ to a polynomial depending on $x_1,\ldots,x_j$ only, $\text{deg}_{x_j}(\varphi-1)^kf\leq \text{deg}_{x_j}f-k$ and $(\varphi-1)K=0$.) \par Every locally nilpotent linear derivation $\delta$ is triangular with respect to a suitable basis of $V_m$ and the automorphism $\exp\delta$ is a unipotent linear transformation (i.e. an automorphism of the algebra $A$ which acts as a unipotent linear operator on $V_m$). \par In commutative algebra, the triangular linear derivations of the polynomial algebra $K[X]=K[x_1,\ldots,x_m]$ are called Weitzenb\"ock derivations. The classical theorem of Weitzenb\"ock \cite{W} states that the algebra of constants of such a derivation is finitely generated. This algebra coincides with the algebra of invariants of a single unipotent transformation. \par In this paper we study the problem of finite generation of the algebras of constants of triangular linear derivations of (usually noncommutative) algebras $K\langle X\rangle/I$ where the ideal $I$ is $GL_m$-invariant. As in the commutative case, the algebra of constants coincides with the algebra of invariants of some unipotent transformation. The paper is organized as follows. Below we assume that $\delta$ is a nonzero triangular linear derivation of $K\langle X\rangle$ which induces a derivation (which we shall also denote by $\delta$) on the factor algebras of $K\langle X\rangle$ modulo $GL_m$-invariant ideals. 
\par In Section 2 we present a short survey on constants of locally nilpotent derivations and invariant theory both in the commutative and noncommutative case, giving some motivation for our investigations. We believe that some of the results exposed there can serve as a motivation and inspiration for further investigations on noncommutative algebras. \par Section 3 presents a summary of the results on the Weitzenb\"ock derivations of polynomial algebras which we need in the next sections. \par In Section 4 we are interested in the problem of lifting the constants: If $I\subset J$ are two $GL_m$-invariant ideals of $K\langle X\rangle$, then we show that the subalgebra of constants $(K\langle X\rangle/J)^{\delta}$ can be lifted to the subalgebra of constants $(K\langle X\rangle/I)^{\delta}$. In the special case of algebras with two generators $x,y$ we may assume that $\delta(x)=0$, $\delta(y)=x$. Then the subalgebra of constants is spanned by elements which have a very special behaviour under the action of the general linear group $GL_2$, the so-called highest weight vectors. This allows us to involve classical combinatorial techniques such as the theory of generating functions and the representation theory of general linear groups. \par In Section 5 we present various examples of subalgebras of constants of relatively free associative algebras. In particular, we handle the case of the free algebra $K\langle x,y\rangle$ and show that the algebra of constants is generated by $x$ and a set of $SL_2(K)$-invariants which we describe explicitly. As a consequence, we obtain a similar generating set for all factor algebras $K\langle x,y\rangle/I$. \par Section 6 considers relatively free algebras $F_m({\mathfrak W})$ in varieties $\mathfrak W$ of associative algebras. It is known that every variety $\mathfrak W$ is either nilpotent in the Lie sense or contains the algebra of $2\times 2$ upper triangular matrices.
We show that for all $\mathfrak W$ which are not nilpotent in the Lie sense the subalgebras of constants $F_m({\mathfrak W})^{\delta}$ are not finitely generated. \par In Section 7 we apply results from commutative algebra and construct classes of automorphisms of the relatively free algebra $F_2(\text{\rm var }M_2(K))$. This algebra is isomorphic to the algebra generated by two generic $2\times 2$ matrices $x$ and $y$. The centre of the associated generic trace algebra (which coincides with the algebra of invariants of two $2\times 2$ matrices under simultaneous conjugation by $GL_2$) is generated by the traces of $x$, $y$ and $xy$ and the determinants of $x$ and $y$ and is isomorphic to the polynomial algebra in five variables. We want to mention that up till now most of the investigations have been performed in the opposite direction. The automorphisms of $F_2(\text{\rm var }M_2(K))$ and of the trace algebra have been used to produce automorphisms of the polynomial algebra in five variables, see e.g. Bergman \cite{B}, Alev and Le Bruyn \cite{AL}, Drensky and Gupta \cite{DG}. \par Finally, we also obtain some partial results on relatively free Lie algebras. \section{Survey} \subsection{Motivation from Commutative Algebra} Locally nilpotent derivations of the polynomial algebra $K[X]=K[x_1,\ldots,x_m]$ have been studied for many decades and have had significant impact on different branches of algebra and invariant theory, see e.g. the books by Nowicki \cite{No} and van den Essen \cite{E2}. \par Let $G$ be a subgroup of $GL_m$ and let $K[X]^G=K[x_1,\ldots,x_m]^G$ be the algebra of $G$-invariants. The problem of the finite generation of $K[X]^G$ was the main motivation for the famous Hilbert Fourteenth Problem \cite{H}. The theorem of Emmy Noether \cite{N} gives the finite generation of $K[X]^G$ for finite groups $G$. More generally, the Hilbert-Nagata theorem states the finite generation of $K[X]^G$ for reductive groups $G$, see e.g. \cite{DC}.
\par The first counterexample of Nagata \cite{N1} to the Hilbert Fourteenth Problem was the non-finitely generated algebra of invariants $K[x_1,\ldots,x_{32}]^G$ of a specially constructed triangular linear group $G$. Today, most of the known counterexamples have been obtained (or can be obtained) as algebras of constants of some derivations. This includes the original counterexample of Nagata, see Derksen \cite{De} who was the first to recognize the connection between the Hilbert 14-th problem and constants of derivations (but his derivations were not always locally nilpotent) and the counterexample of Daigle and Freudenburg \cite{DF} of a triangular (but not linear) derivation of $K[x_1,\ldots,x_5]$ with non-finitely generated algebra of constants. For more counterexamples to the Hilbert 14-th problem we refer to the recent survey by Freudenburg \cite{Fr2}. \par The theorem of Weitzenb\"ock gives the finite generation of the algebra of constants for a triangular linear derivation or, equivalently, for the algebra of invariants of a single unipotent transformation. (This contrasts with the counterexample of Nagata described above.) The original proof of Weitzenb\"ock from 1932 was for $K={\mathbb C}$. Later Seshadri \cite{Se} found a proof for any field $K$ of characteristic 0. A simple proof for $K=\mathbb C$ using ideas from \cite{Se} has been recently given by Tyc \cite{T}. To the best of our knowledge, no constructive proof, with effective estimates of the degree of the generators of the algebra of constants, has been given up till now.
\par For each dimension $m$ there is only a finite number of essentially different Weitzen\-b\"ock derivations to study: Up to a linear change of the coordinates, the Weitzen\-b\"ock derivations $\delta$ are in one-to-one correspondence with the partitions $(p_1+1,p_2+1,\ldots,p_s+1)$ of $m$, where $p_1\geq p_2\geq\cdots\geq p_s\geq 0$, $(p_1+1)+(p_2+1)+\cdots+(p_s+1)=m$, and the correspondence is given in terms of the Jordan normal form $J(\delta)$ of the matrix of the derivation \[ J(\delta)=\begin{pmatrix} J_1&0&\cdots&0\\ 0&J_2&\cdots&0\\ \vdots&\vdots&\cdots&\vdots\\ 0&0&\cdots&J_s \end{pmatrix},\quad\ \text{ \rm where }\ J_i=\begin{pmatrix} 0&1&0&\cdots&0&0\\ 0&0&1&\cdots&0&0\\ \vdots&\vdots&\vdots&\cdots&\vdots&\vdots\\ 0&0&0&\cdots&0&1\\ 0&0&0&\cdots&0&0 \end{pmatrix}, \] is the $(p_i+1)\times(p_i+1)$ Jordan cell with zero diagonal. \par Another important application of locally nilpotent derivations is the construction of candidates for wild automorphisms of polynomial algebras, see e.g. the survey of Drensky and Yu \cite{DY2}. A typical example is the following. If $\delta$ is a Weitzenb\"ock derivation of $K[x_1,\ldots,x_m]$ and $0\not= w\in K[x_1,\ldots,x_m]^{\delta}$, then $\Delta=w\delta$ is also a locally nilpotent derivation of $K[x_1,\ldots,x_m]$ with the same algebra of constants as $\delta$ and $\exp\Delta$ is an automorphism of $K[x_1,\ldots,x_m]$. By the theorem of Martha Smith \cite{Sm}, all such automorphisms are stably tame and become tame if extended to $K[x_1,\ldots,x_m,x_{m+1}]$ by $(\exp\Delta)(x_{m+1})=x_{m+1}$. The famous Nagata automorphism of $K[x,y,z]$, see \cite{N2}, can also be obtained in this way: We define the derivation $\delta$ by \[ \delta(x)=-2y,\quad \delta(y)=z,\quad \delta(z)=0,\quad w=xz+y^2\in K[x,y,z]^{\delta}, \] and for $\Delta=w\delta$ the Nagata automorphism is $\nu=\exp\Delta$: \begin{align*} \nu(x)&=x+(-2y)\frac{w}{1!}+(-2z)\frac{w^2}{2!} =x-2(xz+y^2)y-(xz+y^2)^2z,\\ \nu(y)&=y+z\frac{w}{1!} =y+(xz+y^2)z,\\ \nu(z)&=z.
\end{align*} Recently Shestakov and Umirbaev \cite{SU} proved that the Nagata automorphism is wild. It is interesting to mention that their approach is based on Poisson algebras and methods of noncommutative, and even nonassociative, algebras. \par There are a few exceptions of locally nilpotent derivations and their exponents which do not arise immediately from triangular derivations: the derivations of Freudenburg (obtained with his local slice construction \cite{Fr1}) and the automorphisms of Drensky and Gupta (obtained by methods of noncommutative algebra, \cite{DG}). Later, Drensky, van den Essen and Stefanov \cite{DES} have shown that the automorphisms from \cite{DG} can also be obtained in terms of locally nilpotent derivations and are stably tame. \subsection{Noncommutative Invariant Theory} An important part of noncommutative invariant theory is devoted to the study of the algebra of invariants of a linear group $G\subset GL_m$ acting on the free associative algebra $K\langle X\rangle=K\langle x_1,\ldots,x_m\rangle$, relatively free algebras $F_m({\mathfrak W})$ in varieties of associative algebras $\mathfrak W$, the free Lie algebra $L_m=L(X)$ and relatively free algebras $L_m({\mathfrak V})$ in varieties of Lie algebras $\mathfrak V$. For a more detailed exposition we refer to the surveys on noncommutative invariant theory by Formanek \cite{F1}, Drensky \cite{D5} and the survey on algorithmic methods for relatively free semigroups, groups and algebras by Kharlampovich and Sapir \cite{KS}. \subsubsection{Free Associative Algebras} By a theorem of Lane \cite{L} and Kharchenko \cite{K1}, the algebra of invariants $K\langle X\rangle^G$ is always a free algebra (independently of the properties of $G\subset GL_m$). By the theorem of Dicks and Formanek \cite{DiF} and Kharchenko \cite{K1}, if $G$ is finite, then $K\langle X\rangle^G$ is finitely generated if and only if $G$ is cyclic and acts on $V_m=\sum_{j=1}^mKx_j$ as a group of scalar multiplications.
This result was generalized for a much larger class of groups by Koryukin \cite{Ko}, who also established the finite generation of $K\langle X\rangle^G$ if we equip it with a proper action of the symmetric group. \par Recall that if $V$ is a multigraded vector space which is a direct sum of its multihomogeneous components $V^{(n_1,\ldots,n_m)}$, then the Hilbert series of $V$ is defined as the formal power series \[ H(V,t_1,\ldots,t_m)=\sum_{n_1,\ldots,n_m\geq 0}\text{\rm dim}(V^{(n_1,\ldots,n_m)})t_1^{n_1}\cdots t_m^{n_m}. \] If $V$ is ``only'' graded with homogeneous components $V^{(n)}$, then its Hilbert series is \[ H(V,t)=\sum_{n\geq 0}\text{\rm dim}(V^{(n)})t^n. \] Dicks and Formanek \cite{DiF} also proved an analogue of the Molien formula for the Hilbert series of $K\langle X\rangle^G$, $\vert G\vert<\infty$, which was generalized for compact groups $G$ by Almkvist, Dicks and Formanek \cite{ADF} (an analogue of the Molien-Weyl formula in classical invariant theory). In particular, Almkvist, Dicks and Formanek showed that the Hilbert series of the algebra of invariants $K\langle X\rangle^g$ is an algebraic function if $g$ is a unipotent matrix. (Hence the same holds for the algebra of constants $K\langle X\rangle^{\delta}$ for a Weitzenb\"ock derivation $\delta$.) \subsubsection{Relatively Free Associative Algebras} Let $f(x_1,\ldots,x_m)\in K\langle x_1,x_2,\ldots\rangle$ be an element of the free algebra of countable rank. Recall that $f(x_1,\ldots,x_m)=0$ is a polynomial identity for the algebra $A$ if $f(a_1,\ldots,a_m)=0$ for all $a_1,\ldots,a_m\in A$. The algebra is called PI if it satisfies some nontrivial polynomial identity. The class of all algebras satisfying a given set $U\subset K\langle x_1,x_2,\ldots\rangle$ of polynomial identities is called the variety of associative algebras defined by the system $U$. We shall denote the varieties by German letters.
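To make the graded Hilbert series concrete on the simplest example (our illustration, not a claim from the sources cited above): the degree-$n$ homogeneous component of the free algebra $K\langle x_1,\ldots,x_m\rangle$ has the $m^n$ words of length $n$ as a basis, so $H(K\langle X\rangle,t)=1/(1-mt)$. A direct enumeration check for $m=2$:

```python
import sympy as sp
from itertools import product

t = sp.symbols('t')
m, N = 2, 6  # two free generators, check degrees 0..N

# words of length n in m noncommuting letters form a basis of the
# degree-n homogeneous component, so its dimension is m**n
dims = [len(list(product(range(m), repeat=n))) for n in range(N + 1)]

# coefficients of the closed form H(t) = 1/(1 - m*t) up to degree N
H = 1 / (1 - m*t)
coeffs = [H.series(t, 0, N + 1).removeO().coeff(t, n) for n in range(N + 1)]
```

The two lists agree, matching the geometric series $\sum_{n\geq 0} m^n t^n$.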
If $\mathfrak W$ is a variety, then $T({\mathfrak W})$ is the ideal of $K\langle x_1,x_2,\ldots\rangle$ consisting of all polynomial identities of $\mathfrak W$ and the algebra \[ F_m({\mathfrak W})=K\langle x_1,\ldots,x_m\rangle/ (K\langle x_1,\ldots,x_m\rangle\cap T({\mathfrak W})) \] is the relatively free algebra of rank $m$ in $\mathfrak W$. The ideals $K\langle x_1,\ldots,x_m\rangle\cap T({\mathfrak W})$ of $K\langle x_1,\ldots,x_m\rangle$ are invariant under all endomorphisms of $K\langle x_1,\ldots,x_m\rangle$ and, in particular, are $GL_m$-invariant. \par Most of the work on invariant theory of relatively free algebras is devoted to the description of the varieties $\mathfrak W$ such that $F_m({\mathfrak W})^G$ is finitely generated for all $m=2,3,\ldots$, and all groups $G\subset GL_m$ from a given class $\mathfrak G$. The description of such varieties for the class of all finite groups is given in different terms by several authors, see the surveys by Formanek \cite{F1}, Drensky \cite{D5}, Kharlampovich and Sapir \cite{KS}. In particular, the finite generation of $F_m({\mathfrak W})^G$ for all finite groups holds if and only if all finitely generated algebras of $\mathfrak W$ are weakly noetherian (i.e. noetherian with respect to two-sided ideals) which is equivalent to the fact that $\mathfrak W$ satisfies a polynomial identity of a very special form. One of the simplest descriptions is the following (see \cite{D3}): $F_m({\mathfrak W})^G$ is finitely generated for all $m\geq 2$ and all finite groups $G\subset GL_m$ if and only if $F_2({\mathfrak W})^g$ is finitely generated for the linear transformation $g$ defined by $g(x_1)=-x_1$, $g(x_2)=x_2$. 
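As a toy commutative analogue of the invariants of this particular $g$ (a sketch of ours, with the relatively free algebra replaced by $K[x_1,x_2]$ for easy symbolic computation): averaging over the two-element group $\{1,g\}$ projects any polynomial onto its $g$-invariant part, and the invariants are exactly the polynomials of even degree in $x_1$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# the linear transformation g(x1) = -x1, g(x2) = x2
def g(p):
    return sp.expand(p.subs(x1, -x1))

# Reynolds operator for the group {1, g}: averaging projects
# K[x1, x2] onto the subalgebra of g-invariants
def reynolds(p):
    return sp.expand((p + g(p)) / 2)

p = x1**2 + x1*x2 + x2**3  # sample polynomial
inv = reynolds(p)          # its invariant part: x1**2 + x2**3
```

The projection is idempotent and its image lies in $K[x_1^2,x_2]$, in line with the theorem of Emmy Noether for finite groups recalled in Section 2.1.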
\par If we consider the finite generation of $F_m({\mathfrak W})^G$ for the class of all reductive groups $G$, then the results of Vonessen \cite{V}, Domokos and Drensky \cite{DD} give that $F_m({\mathfrak W})^G$ is finitely generated for all reductive $G$ if and only if the finitely generated algebras in $\mathfrak W$ are one-sided noetherian. For unitary algebras this means that $\mathfrak W$ satisfies the Engel identity $[x_2,x_1,\ldots,x_1]=0$. \par Concerning the Hilbert series of subalgebras of invariants of relatively free algebras, Formanek \cite{F1} generalized the Molien-Weyl formula for the Hilbert series of $K[x_1,\ldots,x_m]^G$ for $G$ finite or compact to the case of any relatively free algebra, expressing the Hilbert series of $F_m({\mathfrak W})^G$ in terms of the Hilbert series $H(F_m({\mathfrak W}),t_1,\ldots,t_m)$. If $G$ is finite, then $H(F_m({\mathfrak W})^G,t)$ involves the eigenvalues of all $g\in G$. By a theorem of Belov \cite{Be}, the Hilbert series of $F_m({\mathfrak W})$ is always a rational function and this implies that $H(F_m({\mathfrak W})^G,t)$ is also rational for $G$ finite. For reductive $G$ the rationality of $H(F_m({\mathfrak W})^G,t)$ is known only for varieties $\mathfrak W$ satisfying a nonmatrix polynomial identity, see Domokos and Drensky \cite{DD}. \subsubsection{Lie Algebras} We shall mention only a few results. By a theorem of Bryant \cite{Br}, if $G$ is a nontrivial finite linear group, then the algebra of fixed points of the free Lie algebra $L_m^G$ is never finitely generated. This result was extended by Drensky \cite{D4} to the fixed points of all relatively free algebras $L_m({\mathfrak V})$ (and also for all finite $G\not=1$) for nonnilpotent varieties $\mathfrak V$ of Lie algebras. We refer also to the work done by several authors and mainly by Bryant, Kov\'acs and St\"ohr about fixed points of free Lie algebras in the modular case, see e.g. \cite{BKS} and the references there.
\subsection{Derivations of Free Algebras} The algebra of constants of the formal partial derivatives $\partial/\partial x_j$, $j=1,\ldots,m$, for $K\langle X\rangle=K\langle x_1,\ldots,x_m\rangle$ was described by Falk \cite{Fa}. It is generated by all Lie commutators $[[\ldots[x_{j_1},x_{j_2}],\ldots],x_{j_n}]$, $n\geq 2$. Specht \cite{Sp} applied products of such commutators in the study of algebras with polynomial identities, see also Drensky \cite{D2} or the book \cite{D6} for further application to the theory of PI-algebras. It is known, see Gerritzen \cite{G}, that in this case the algebra of constants is free, see also Drensky and Kasparian \cite{DK} for an explicit basis. (The freeness of the algebra of constants of the partial derivatives of $K\langle X\rangle$ does not follow immediately from the result of Lane \cite{L} and Kharchenko \cite{K1}. The derivations $\partial/\partial x_j$ are locally nilpotent and their exponents $\exp(\partial/\partial x_j)$ generate a group of automorphisms of $K\langle X\rangle$ which consists of all translations of the form $x_i\to x_i+a_i$, $a_i\in\mathbb Z$. Although this group is a subgroup of the affine group, we cannot directly apply \cite{L} and \cite{K1} because the group is not linear.) \par A similar study of the algebra of constants in a very large class of (not only associative) algebras was performed by Gerritzen and Holtkamp \cite{GH} and Drensky and Holtkamp \cite{DH}. We shall finish the survey section with the following lemma, which is probably folklore. \begin{lemma} \label{extending derivations} Let $\mathfrak W$ be any variety of algebras and let $F({\mathfrak W})$ be the relatively free algebra of any rank. Every mapping from the free generating set to $F({\mathfrak W})$ can be extended to a derivation. \end{lemma} \begin{proof} We shall prove the lemma for relatively free associative algebras of finite or countable rank only. The same considerations work in the case of any infinite rank.
Let $\delta_0:\{x_1,x_2,\ldots\}\to F_{\infty}({\mathfrak W})$ be any mapping and let $T({\mathfrak W})$ be the T-ideal of $K\langle x_1,x_2,\ldots\rangle$ of all polynomial identities of $\mathfrak W$. We fix $f_1,\ldots,f_m\in K\langle X\rangle$ such that $\delta_0(x_j)=f_j+T({\mathfrak W})\in F_{\infty}({\mathfrak W})$, $j=1,2,\ldots$. Since every mapping $\{x_1,x_2,\ldots\}\to K\langle x_1,x_2,\ldots\rangle$ can be extended to a derivation of $K\langle x_1,x_2,\ldots\rangle$, it is sufficient to show that the derivation $\Delta$ of $K\langle x_1,x_2,\ldots\rangle$ defined by $\Delta(x_j)=f_j$, $j=1,2,\ldots$, has the property $\Delta(T({\mathfrak W}))\subset T({\mathfrak W})$. Since the field $K$ is of characteristic 0, if $u(x_1,\ldots,x_m)$ belongs to $T({\mathfrak W})$, then the multihomogeneous components of $u$ also are in $T({\mathfrak W})$ and we may assume that $u(x_1,\ldots,x_m)\in T({\mathfrak W})$ is multihomogeneous. The partial linearization $u_j(x_1,\ldots,x_m,x_{m+1})$ in $x_j$ of $u(x_1,\ldots,x_m)$, i.e. the linear component in $x_{m+1}$ of $u(x_1,\ldots,x_{j-1},x_j+x_{m+1},x_{j+1},\ldots,x_m)$ also belongs to $T({\mathfrak W})$. It is easy to see that $\Delta$ acts on $u(x_1,\ldots,x_m)$ by \[ \Delta(u(x_1,\ldots,x_m))=\sum_{j=1}^mu_j(x_1,\ldots,x_m,\Delta(x_j)). \] Since $u_j(x_1,\ldots,x_m,\Delta(x_j))\in T({\mathfrak W})$ we obtain that $\Delta(u)\in T({\mathfrak W})$ and this means that $\Delta$ induces a derivation $\delta$ on $F_{\infty}({\mathfrak W})=K\langle x_1,x_2,\ldots\rangle/T({\mathfrak W})$ with the additional property $\delta(x_j)=f_j$, and $\delta$ extends $\delta_0$. This implies also the case of $F_m({\mathfrak W})$: If $f_1,\ldots,f_m\in F_m({\mathfrak W})$, then we extend the mapping to a derivation of $F_{\infty}({\mathfrak W})$ (e.g. by $\delta_0(x_j)=0$ for $j>m$). Then the restriction to $F_m({\mathfrak W})$ of the derivation of $F_{\infty}({\mathfrak W})$ is a derivation of $F_m({\mathfrak W})$. 
\end{proof} \section{Weitzenb\"ock Derivations of Polynomial Algebras} Since we consider nonzero Weitzenb\"ock derivations only, without loss of generality we may assume that the derivation $\delta$ is in its Jordan normal form, $\delta(x_1)=0$, $\delta(x_2)=x_1$ and the set of variables $X=\{x_1,\ldots,x_m\}$ is a Jordan basis of $V_m=\sum_{j=1}^mKx_j$. If the rank of $\delta$ is equal to $m-1$ (i.e. $\delta(x_j)=x_{j-1}$, $j=2,\ldots,m$), then $\delta$ is called the basic Weitzenb\"ock derivation of $K[X]$. The following proposition, see \cite{No}, gives the description of the algebras of constants of any Weitzenb\"ock derivation. (It is a very special case of the more general situation of an arbitrary locally nilpotent derivation.) For our purposes we work in the localization of the polynomial algebra $K[X][x_1^{-1}]=K[x_1,x_2,\ldots,x_m][x_1^{-1}]$ consisting of all polynomials in $x_1,\ldots,x_m$ allowing negative degrees of $x_1$. Since $x_1$ is a constant (i.e. $\delta(x_1)=0$), we may extend $\delta$ to a derivation of $K[X][x_1^{-1}]$. \begin{proposition} \label{generators Weitzenboeck} Let $\delta^{p_j+1}(x_j)=0$, $j=1,\ldots,m$, and let \[ z_j=\sum_{k=0}^{p_j}\frac{\delta^k(x_j)}{k!}(-x_2)^kx_1^{p_j-k},\quad j=3,4,\ldots,m. \] Then: \par {\rm (i)} $(K[X][x_1^{-1}])^{\delta}=K[x_1,z_3,z_4,\ldots,z_m][x_1^{-1}]$; \par {\rm (ii)} $K[X]^{\delta}=K[X]\cap(K[X][x_1^{-1}])^{\delta}$. \end{proposition} \begin{example} If $\delta$ is a basic Weitzenb\"ock derivation, then \[ z_3=x_3x_1^2-\frac{x_2^2x_1}{2}=\frac{x_1}{2}(2x_3x_1-x_2^2),\quad z_4=x_1\left(x_4x_1^2-x_3x_2x_1+\frac{x_2^3}{3}\right),\ldots, \] and, in general, \[ z_j=\sum_{k=0}^{j-1}\frac{(-1)^k}{k!}\,x_1^{j-1-k}x_2^kx_{j-k},\quad j=3,4,\ldots,m. \] \end{example} \begin{corollary} \label{transcendence degree} For any Weitzenb\"ock derivation $\delta$, the transcendence degree (i.e.
the maximal number of algebraically independent elements) of $K[x_1,\ldots,x_m]^{\delta}$ is equal to $m-1$. \end{corollary} The explicit form of the generators of $K[x_1,\ldots,x_m]^{\delta}$ is known for small $m$ only. Tan \cite{Ta} presented an algorithm for computing the generators of the algebra of constants of a basic derivation. It was generalized by van den Essen \cite{E1} to any locally nilpotent derivation, assuming that the finite generation of the algebra of constants is known. The algorithm involves Gr\"obner bases techniques. \begin{examples} \label{examples of concrete generators} We have selected a few examples of the generating sets of the algebra of constants, all of them taken from \cite{No}. For $\delta$ being a basic Weitzenb\"ock derivation (with $\delta(x_1)=0$ and $\delta(x_j)=x_{j-1}$, $j=2,\ldots,m$): \[ K[x_1,x_2]^{\delta}=K[x_1],\quad K[x_1,x_2,x_3]^{\delta}=K[x_1,x_2^2-2x_1x_3], \] \[ K[x_1,x_2,x_3,x_4]^{\delta}=K[x_1,x_2^2-2x_1x_3, x_2^3-3x_1x_2x_3+3x_1^2x_4, \] \[ x_2^2x_3^2-2x_2^3x_4+6x_1x_2x_3x_4-\frac{8}{3}x_1x_3^3-3x_1^2x_4^2], \] (see \cite{No}, Example 6.8.2), \[ K[x_1,x_2,x_3,x_4,x_5]^{\delta}=K[x_1,x_2^2-2x_1x_3, 2x_2x_4-x_3^2-2x_1x_5, \] \[ x_2^3-3x_1x_2x_3+3x_1^2x_4, 6x_2^2x_5-6x_2x_3x_4+2x_3^3-12x_1x_3x_5+9x_1x_4^2], \] (see \cite{No}, Example 6.8.4). \par For $\delta$ nonbasic, $\delta(x_2)=x_1$, $\delta(x_4)=x_3$, $\delta(x_1)=\delta(x_3)=0$ (see \cite{No}, Proposition 6.9.5): \[ K[x_1,x_2,x_3,x_4]^{\delta}=K[x_1,x_3, x_1x_4-x_2x_3], \] for $\delta$ defined by $\delta(x_3)=x_2$, $\delta(x_2)=x_1$, $\delta(x_5)=x_4$, $\delta(x_1)=\delta(x_4)=0$ (see \cite{No}, Example 6.8.5): \[ K[x_1,x_2,x_3,x_4,x_5]^{\delta}=K[x_1,x_4, x_1x_5-x_2x_4,x_2^2-2x_1x_3, 2x_3x_4^2-2x_2x_4x_5+x_1x_5^2]. \] \end{examples} \begin{remark} Springer \cite{Sr} found a formula for the Hilbert series of the algebra of invariants of $SL_2(K)$ acting on the forms of degree $d$.
This is equivalent to the description of the Hilbert series of the algebra of constants of the basic Weitzenb\"ock derivation of $K[x_1,\ldots,x_{d+1}]$. Almkvist \cite{A1, A2} related these invariants to the invariants of the modular action of a cyclic group of order $p$. \end{remark} \section{Lifting and Description of the Constants} We need the following easy lemma. \begin{lemma} \label{lifting in general case} Let $G\subset H$ be groups and let the $H$-module $M$ be completely reducible. If $N\subset M$ is an $H$-submodule and $\bar m\in M/N$ is a $G$-invariant, then $\bar m$ can be lifted to a $G$-invariant $m\in M$. \end{lemma} \begin{proof} Let $P$ be an $H$-complement of $N$ in $M$, i.e. $M=N\oplus P$. Since $M/N\cong P$, there exists an element $m\in P$ which maps onto $\bar m$ under the natural homomorphism $M\to M/N$. Since $\bar m$ is $G$-invariant, we obtain that $\overline{G(m)}=G(\bar m)=\bar m$. Taking into account that $m_1,m_2\in P$, $m_1\not= m_2$, implies that $\bar m_1\not= \bar m_2$ in $M/N$, and $G(P)=P$, we deduce that $G(m)=m$ in $M$, i.e. $m$ is $G$-invariant. \end{proof} \begin{proposition} \label{lifting of invariants of linear groups} Let $K\langle X\rangle=K\langle x_1,\ldots,x_m\rangle$ be the free associative algebra with the canonical $GL_m$-action, and let $I\subset J$ be $GL_m$-invariant two-sided ideals of $K\langle X\rangle$. Then for every subgroup $G$ of $GL_m$, the $G$-invariants of $K\langle X\rangle/J$ can be lifted to $G$-invariants of $K\langle X\rangle/I$. \end{proposition} \begin{proof} The statement follows immediately from Lemma \ref{lifting in general case} because, as a $GL_m$-module, $K\langle X\rangle$ is completely reducible.
\end{proof} \begin{corollary} \label{lifting of constants} If $I\subset J$ are $GL_m$-invariant two-sided ideals of $K\langle X\rangle$ and $\delta$ is a Weitzenb\"ock derivation on $K\langle X\rangle$, then the algebra of constants $(K\langle X\rangle/J)^{\delta}$ can be lifted to the algebra of constants $(K\langle X\rangle/I)^{\delta}$. \end{corollary} \begin{proof} The corollary is a straightforward consequence of Proposition \ref{lifting of invariants of linear groups} because the algebras of constants $(K\langle X\rangle/J)^{\delta}$ and $(K\langle X\rangle/I)^{\delta}$ coincide, respectively, with the algebras of $g$-invariants $(K\langle X\rangle/J)^g$ and $(K\langle X\rangle/I)^g$, where $g=\exp\delta$ is the linear transformation corresponding to $\delta$. \end{proof} \begin{corollary} \label{test for finite generation} Let $I\subset J$ be $GL_m$-invariant two-sided ideals of $K\langle X\rangle$ and let $\delta$ be a Weitzenb\"ock derivation on $K\langle X\rangle$. If the algebra of constants $(K\langle X\rangle/J)^{\delta}$ is not finitely generated, then $(K\langle X\rangle/I)^{\delta}$ is also not finitely generated. \end{corollary} \begin{remark} Corollary \ref{test for finite generation} holds also for Lie algebras and other free algebras including free (special or not) Jordan algebras and the absolutely free algebra $K\{x_1,\ldots,x_m\}$. \end{remark} Now we shall describe the algebras of constants in the case of two variables, assuming that $K\langle x_1,x_2\rangle=K\langle x,y\rangle$ and $\delta(x)=0$, $\delta(y)=x$. \par Recall that any irreducible polynomial $GL_2$-module $W(\lambda_1,\lambda_2)$ has a unique (up to a multiplicative constant) element $w(x,y)$ which is bihomogeneous of degree $(\lambda_1,\lambda_2)$ and is called the highest weight vector of $W(\lambda_1,\lambda_2)$. 
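To see a highest weight vector concretely (a sketch of ours, with noncommuting sympy symbols standing in for the free generators of $K\langle x,y\rangle$): the commutator $[x,y]=xy-yx$ spans the copy of $W(1,1)$ in $K\langle x,y\rangle$, and it is fixed by the unitriangular substitution $g=\exp\delta\colon x\mapsto x$, $y\mapsto x+y$, while the single word $xy$ is not.

```python
import sympy as sp

# noncommuting generators of the free algebra K<x, y>
x, y = sp.symbols('x y', commutative=False)

# the unitriangular substitution g = exp(delta): x -> x, y -> x + y
def g(p):
    return sp.expand(p.subs(y, x + y))

# [x, y] = xy - yx, the highest weight vector of W(1,1)
w = x*y - y*x
```

The same check applies to iterated commutators such as $[x,[x,y]]$, which are also $g$-invariant.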
For any $GL_2$-invariant homomorphic image $K\langle x,y\rangle/I$ of $K\langle x,y\rangle$ the algebra of constants $(K\langle x,y\rangle/I)^{\delta}$ coincides with the algebra of $g$-invariants $(K\langle x,y\rangle/I)^g$ where $g=\exp\delta$. Since $g(x)=x$, $g(y)=x+y$ and $\text{\rm char}K=0$, the algebra of $g$-invariants coincides with the algebra of invariants of the unitriangular group $UT_2(K)$. Hence, as in Almkvist, Dicks and Formanek \cite{ADF}, we may use Theorem 3.3 (i) of De Concini, Eisenbud and Procesi \cite{DEP} and obtain: \begin{theorem} \label{constants in two variables} For any $GL_2$-invariant ideal $I$ of $K\langle x,y\rangle$ the algebra of constants $(K\langle x,y\rangle/I)^{\delta}$ is spanned by the highest weight vectors of the $GL_2$-irreducible components of $K\langle x,y\rangle/I$. \end{theorem} \begin{remarks} \label{three remarks} 1. A direct proof of Theorem \ref{constants in two variables} can be obtained using the criterion of Koshlukov \cite{Ko1} which states: A multihomogeneous polynomial $w(x_1,\ldots,x_m)\in K\langle x_1,\ldots,x_m\rangle$ of degree $\lambda=(\lambda_1,\ldots,\lambda_m)$ is a highest weight vector of an irreducible $GL_m$-submodule $W(\lambda)$ of $K\langle x_1,\ldots,x_m\rangle$ if and only if for all partial linearizations $w_j(x_1,\ldots,x_m,x_{m+1})$ of $w(x_1,\ldots,x_m)$ one has $w_j(x_1,\ldots,x_m,x_i)=0$ for all $i<j$. \par 2. By Almkvist, Dicks and Formanek \cite{ADF} the algebra $(K\langle x_1,\ldots,x_m\rangle)^{UT_m(K)}$ of all $UT_m(K)$-invariants coincides with the vector space spanned by all highest weight vectors $w(x_1,\ldots,x_m)\in W(\lambda)\subset K\langle x_1,\ldots,x_m\rangle$, when $\lambda=(\lambda_1,\ldots,\lambda_m)$ runs over the set of all partitions in not more than $m$ parts. \par 3.
Following Almkvist, Dicks and Formanek \cite{ADF}, for any unipotent transformation $g$ of $K\langle x_1,\ldots,x_m\rangle$ (and hence for any Weitzenb\"ock derivation $\delta$) one can define a $GL_2$-action on $K\langle x_1,\ldots,x_m\rangle$ and on the factor algebras $K\langle x_1,\ldots,x_m\rangle/I$ modulo $GL_m$-invariant ideals, such that $(K\langle x_1,\ldots,x_m\rangle)^g$ and $(K\langle x_1,\ldots,x_m\rangle/I)^g$ are spanned by the highest weight vectors with respect to the $GL_2$-action. \end{remarks} The background on symmetric functions which we need can be found e.g. in the book by Macdonald \cite{M}. Any symmetric function in $m$ variables $f(t_1,\ldots,t_m)$ which can be expressed as a formal power series has the presentation \[ f(t_1,\ldots,t_m)=\sum_{\lambda}m(\lambda)S_{\lambda}(t_1,\ldots,t_m), \] where $S_{\lambda}(t_1,\ldots,t_m)$ is the Schur function corresponding to the partition $\lambda=(\lambda_1,\ldots,\lambda_m)$ and $m(\lambda)$ is the multiplicity of $S_{\lambda}(t_1,\ldots,t_m)$ in $f(t_1,\ldots,t_m)$. This presentation agrees with the theory of polynomial representations of $GL_m$ because the Schur functions play the role of characters of the irreducible polynomial $GL_m$-representations. In our case this relation gives the following: If $I$ is a $GL_m$-invariant ideal of $K\langle X\rangle$, then the Hilbert series of $K\langle X\rangle/I$ has the presentation \[ H(K\langle X\rangle/I,t_1,\ldots,t_m) =\sum_{\lambda}m(\lambda)S_{\lambda}(t_1,\ldots,t_m), \] if and only if $K\langle X\rangle/I$ is decomposed as a $GL_m$-module as \[ K\langle X\rangle/I\cong\sum_{\lambda}m(\lambda)W(\lambda). \] In the case of two variables the Schur functions have the following simple expression: \[ S_{(\lambda_1,\lambda_2)}(t_1,t_2)=(t_1t_2)^{\lambda_2} \frac{t_1^{\lambda_1-\lambda_2+1}-t_2^{\lambda_1-\lambda_2+1}}{t_1-t_2}.
\] Drensky and Genov \cite{DGn} defined the multiplicity series of \[f(t_1,t_2)=\sum_{\lambda}m(\lambda)S_{\lambda}(t_1,t_2) \] as the formal power series \[ M(f)(t,u)=\sum_{\lambda}m(\lambda)t^{\lambda_1}u^{\lambda_2}, \] or, if one introduces a new variable $v=tu$, as \[ M'(f)(t,v)=\sum_{\lambda}m(\lambda)t^{\lambda_1-\lambda_2}v^{\lambda_2}. \] The relation between the symmetric function and its multiplicity series is \[ f(t_1,t_2)=\frac{t_1M'(f)(t_1,t_1t_2)-t_2M'(f)(t_2,t_1t_2)}{t_1-t_2}. \] Theorem \ref{constants in two variables} gives that the Hilbert series of the algebra of constants $(K\langle x,y\rangle/I)^{\delta}$ (with respect to the bigrading) is equal to the multiplicity series of the Hilbert series of $K\langle x,y\rangle/I$: \begin{corollary} \label{Hilbert series of constants} For any $GL_2$-invariant ideal $I$ of $K\langle x,y\rangle$ and for the basic Weitzen\-b\"ock derivation $\delta$ \[ H((K\langle x,y\rangle/I)^{\delta},t,u) =M(H(K\langle x,y\rangle/I))(t,u). \] \end{corollary} If we consider the usual grading, Corollary \ref{Hilbert series of constants} has the form \[ H((K\langle x,y\rangle/I)^{\delta},t) =M(H(K\langle x,y\rangle/I))(t,t) =M'(H(K\langle x,y\rangle/I))(t,t^2). \] We shall apply Corollary \ref{Hilbert series of constants} in the next section in the concrete description of the generators of the constants in $K\langle x,y\rangle$ and, more generally, in any relatively free associative algebra. \section{Examples and Concrete Generators of Algebras of Constants} We start this section with several examples in which we determine completely the algebras of constants and their generators. We shall consider algebras of rank 2 and 3 only and shall denote the free generators by $x,y$ and $x,y,z$, respectively. We shall handle the case of basic Weitzenb\"ock derivations $\delta$ only, assuming that $\delta(x)=0$, $\delta(y)=x$ (and $\delta(z)=y$ if the rank of the algebra is equal to 3).
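The reconstruction formula relating a two-variable symmetric function to its multiplicity series can be sanity-checked numerically. The sketch below (plain Python with exact rational arithmetic; the helper names are ours, not from the text) verifies it for single Schur functions $S_{(\lambda_1,\lambda_2)}$, whose multiplicity series is the single monomial $t^{\lambda_1-\lambda_2}v^{\lambda_2}$.

```python
from fractions import Fraction

def schur(l1, l2, t1, t2):
    # S_{(l1,l2)}(t1,t2) = (t1 t2)^{l2} (t1^{a+1} - t2^{a+1})/(t1 - t2), a = l1 - l2,
    # expanded as (t1 t2)^{l2} (t1^a + t1^{a-1} t2 + ... + t2^a)
    a = l1 - l2
    return (t1 * t2) ** l2 * sum(t1 ** (a - j) * t2 ** j for j in range(a + 1))

def mprime(l1, l2, t, v):
    # multiplicity series of a single Schur function: M'(S_lambda) = t^{l1-l2} v^{l2}
    return t ** (l1 - l2) * v ** l2

# check f(t1,t2) = (t1 M'(f)(t1,t1t2) - t2 M'(f)(t2,t1t2)) / (t1 - t2)
t1, t2 = Fraction(2), Fraction(5)
for l1 in range(6):
    for l2 in range(l1 + 1):
        lhs = (t1 * mprime(l1, l2, t1, t1 * t2)
               - t2 * mprime(l1, l2, t2, t1 * t2)) / (t1 - t2)
        assert lhs == schur(l1, l2, t1, t2)
```

Since both sides are polynomials in $t_1,t_2$ of bounded degree, checking the identity at fixed rational points for each $\lambda$ is a reasonable spot check, not a proof.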
\begin{example} \label{Grassmann} Let ${\mathfrak L}_2$ be the variety of associative algebras defined by the identity $[[x,y],z]=0$. By the theorem of Krakowski and Regev \cite{KR} ${\mathfrak L}_2$ coincides with the variety generated by the infinite dimensional Grassmann algebra. The $S_n$-cocharacter sequence of ${\mathfrak L}_2$ is equal to \[ \chi_n({\mathfrak L}_2)=\sum_{k=1}^n\chi_{(k,1^{n-k})},\quad n\geq 1, \] see \cite{KR}. In virtue of the correspondence between cocharacters and Hilbert series, see \cite{Bl} and \cite{D1} (or the book \cite{D6}) the Hilbert series of the relatively free algebra $F_m({\mathfrak L}_2)$ is equal to \[ H(F_m({\mathfrak L}_2),t_1,\ldots,t_m) =1+\sum_{k\geq 1}\sum_{l=0}^{m-1}S_{(k,1^l)}(t_1,\ldots,t_m). \] It is well known that $F_m({\mathfrak L}_2)$ has a basis \[ x_1^{a_1}\cdots x_m^{a_m}[x_{i_1},x_{i_2}]\cdots[x_{i_{2p-1}},x_{i_{2p}}], \quad 1\leq i_1<i_2<\cdots<i_{2p-1}<i_{2p}\leq m, \] see for example Bokut and Makar-Limanov \cite{BM} or the book \cite{D6}. The commutators $[x_i,x_j]$ are in the centre of $F_m({\mathfrak L}_2)$ and satisfy the relations \[ [x_{\sigma(1)},x_{\sigma(2)}]\cdots[x_{\sigma(2p-1)},x_{\sigma(2p)}] =(\text{\rm sign}\sigma)[x_1,x_2]\cdots[x_{2p-1},x_{2p}],\quad \sigma\in S_{2p}. \] \par Let $m=2$. Then $F_2({\mathfrak L}_2)$ has a basis \[ \left\{ x^ay^b,x^ay^b[x,y]\mid a,b\geq 0\right\}. \] Its Hilbert series and the related multiplicity series are, respectively, \[ H(F_2({\mathfrak L}_2),t_1,t_2) =\frac{1+t_1t_2}{(1-t_1)(1-t_2)} =\sum_{n\geq 0}S_{(n)}(t_1,t_2) +\sum_{n\geq 2}S_{(n-1,1)}(t_1,t_2), \] \[ M(H(F_2({\mathfrak L}_2))(t,u)=\sum_{n\geq 0}t^n+\sum_{n\geq 2}t^{n-1}u =\frac{1+tu}{1-t}. \] By Corollary \ref{Hilbert series of constants}, \[ H((F_2({\mathfrak L}_2))^{\delta},t,u)=\frac{1+tu}{1-t}. 
\] Since the vector subspace of $F_2({\mathfrak L}_2)$ spanned by $x^n$, $n\geq 0$, and $x^{n-2}[x,y]$, $n\geq 2$, consists of $\delta$-constants and has the same Hilbert series as $(F_2({\mathfrak L}_2))^{\delta}$, we obtain that it coincides with the algebra of constants. This immediately implies that the algebra $(F_2({\mathfrak L}_2))^{\delta}$ is generated by $x$ and $[x,y]$. \par Let $m=3$. Then $F_3({\mathfrak L}_2)$ has a basis \[ \left\{x^ay^bz^c,x^ay^bz^c[x,y],x^ay^bz^c[x,z],x^ay^bz^c[y,z] \mid a,b,c\geq 0 \right\} \] and the commutator ideal $C$ of $F_3({\mathfrak L}_2)$ is a free $K[x,y,z]$-module with free generators $[x,y],[x,z],[y,z]$. By Examples \ref{examples of concrete generators}, $K[x,y,z]^{\delta}=K[x,y^2-2xz]$. We may choose $y^2-xz-zx$ as a lifting in $(F_3({\mathfrak L}_2))^{\delta}$ of $y^2-2xz\in (K[x,y,z])^{\delta}$. Hence $(F_3({\mathfrak L}_2))^{\delta}$ is generated by $x,y^2-xz-zx$ and some elements in the commutator ideal $C$. Every element of $K[x,y,z]$ can be written in a unique way as \[ f_0(x,y^2-2xz)+\sum_{n\geq 1}f_n(x,y^2-2xz)z^n+\sum_{n\geq 1}g_n(x,y^2-2xz)yz^{n-1}. \] Hence the elements in $C$ have the form \[ f=\alpha(x,y,z)[x,y]+\beta(x,y,z)[x,z]+\gamma(x,y,z)[y,z],\quad \alpha,\beta,\gamma\in K[x,y,z]. \] If $f$ is a $\delta$-constant, then \[ 0=\delta(f)=(\delta(\alpha)+\beta)[x,y]+(\delta(\beta)+\gamma)[x,z]+\delta(\gamma)[y,z]. \] In this way, $f\in (F_3({\mathfrak L}_2))^{\delta}$ if and only if \[ \delta(\gamma)=0,\quad \delta(\beta)=-\gamma,\quad \delta(\alpha)=-\beta. \] We present $\beta(x,y,z)$ in the form \[ \beta=f_0+\sum_{n\geq 1}(f_nz^n+g_nz^{n-1}y), \quad f_0,f_n,g_n\in (K[x,y,z])^{\delta}, \] and calculate, bearing in mind that $y^2=(y^2-2xz)+2xz$, \[ -\gamma=\delta(\beta) =\sum_{n\geq 1}\left(nf_nz^{n-1}y+(n-1)g_nz^{n-2}y^2+xg_nz^{n-1}\right) \] \[ =\sum_{n\geq 1}\left((n-1)g_n(y^2-2xz)z^{n-2} +(2n-1)xg_nz^{n-1}+nf_nz^{n-1}y\right). 
\] This easily implies that $f_n=0$, $n\geq 1$, $g_n=0$, $n\geq 2$, and $\beta=f_0+g_1y$, $f_0,g_1\in K[x,y^2-2xz]=(K[x,y,z])^{\delta}$. Hence $\gamma=-g_1x$. Continuing in this way, we obtain the final form of $\alpha,\beta,\gamma$: \[ \alpha=\alpha_0z+\alpha_1y+\alpha_2,\quad \beta=-\alpha_0y-\alpha_1x,\quad \gamma=\alpha_0x. \] Hence the part of the algebra of constants of $F_3({\mathfrak L}_2)$ which belongs to the commutator ideal $C$ is spanned as a $(K[x,y,z])^{\delta}$-module by \[ [x,y], \quad y[x,y]-x[x,z],\quad z[x,y]-y[x,z]+x[y,z], \] and $(F_3({\mathfrak L}_2))^{\delta}$ is generated by \[ x,\quad y^2-xz-zx,\quad [x,y], \quad y[x,y]-x[x,z],\quad z[x,y]-y[x,z]+x[y,z]. \] \end{example} \begin{example} \label{free metabelian associative algebra} Let us consider the variety $\mathfrak M$ of all ``metabelian'' associative algebras defined by the identity $[x_1,x_2][x_3,x_4]=0$. It is well known that $F_2({\mathfrak M})$ has a basis \[ \{x^ay^b,\quad x^ay^b[x,y]x^cy^d\mid a,b,c,d\geq 0\}. \] We shall write the element $x^ay^b[x,y]x^cy^d$ as $[x,y]x_1^ay_1^bx_2^cy_2^d$. In this way, the commutator ideal $C$ of $F_2({\mathfrak M})$ is a free cyclic $K[x,y]$-bimodule (or a free cyclic $K[x_1,y_1,x_2,y_2]$-module) with the $K[x,y]$-action defined by \[ x[x,y]=[x,y]x_1,\quad y[x,y]=[x,y]y_1,\quad [x,y]x=[x,y]x_2,\quad [x,y]y=[x,y]y_2. \] The Hilbert series of $F_2({\mathfrak M})$ is \[ H(F_2({\mathfrak M}),t_1,t_2) =\frac{1}{(1-t_1)(1-t_2)}+\frac{t_1t_2}{(1-t_1)^2(1-t_2)^2}. \] One can calculate directly the $S_n$-cocharacter of $\mathfrak M$ using the Young rule as in \cite{D6} or apply techniques of \cite{DGn} to see that the multiplicity series of $F_2({\mathfrak M})$ is \[ M'(F_2({\mathfrak M}))(t,v)=\frac{1}{1-t}+\frac{v}{(1-t)^2(1-v)}. \] By Corollary \ref{Hilbert series of constants} this is also the Hilbert series of the algebra of constants $(F_2({\mathfrak M}))^{\delta}$.
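Since $S_{(\lambda_1,\lambda_2)}(t_1,t_2)$ contributes each monomial $t_1^pt_2^q$ with $p+q=\lambda_1+\lambda_2$ and $\lambda_2\leq q\leq\lambda_1$ with coefficient 1, the multiplicities of a two-variable symmetric series with coefficients $c(p,q)$ are recovered as $m(p,q)=c(p,q)-c(p+1,q-1)$. The following sketch (helper names are ours) uses this standard fact to confirm the multiplicity series of $H(F_2({\mathfrak M}),t_1,t_2)$ above.

```python
def coeff_H(p, q):
    # coefficient of t1^p t2^q in 1/((1-t1)(1-t2)) + t1 t2/((1-t1)^2 (1-t2)^2):
    # the first summand contributes 1, the second contributes p*q for p, q >= 1
    return 1 + (p * q if p >= 1 and q >= 1 else 0)

def mult(p, q):
    # multiplicity of S_{(p,q)}: m(p,q) = c(p,q) - c(p+1,q-1)
    return coeff_H(p, q) - (coeff_H(p + 1, q - 1) if q >= 1 else 0)

def coeff_mprime(p, q):
    # coefficient of t^{p-q} v^q in 1/(1-t) + v/((1-t)^2 (1-v))
    return 1 if q == 0 else (p - q) + 1

for p in range(10):
    for q in range(p + 1):
        assert mult(p, q) == coeff_mprime(p, q)
```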
We consider the linearly independent highest weight vectors \[ x^n,\ n\geq 0,\quad [x,y]x_1^px_2^q(x_1y_2-y_1x_2)^r,\ p,q,r\geq 0. \] They span a graded vector subspace of $(F_2({\mathfrak M}))^{\delta}$ and its Hilbert series coincides with the Hilbert series of $(F_2({\mathfrak M}))^{\delta}$. Hence the above highest weight vectors span $(F_2({\mathfrak M}))^{\delta}$. Since the square of the commutator ideal $C$ is equal to 0, the element $x$ together with all $[x,y]x_1^px_2^q(x_1y_2-y_1x_2)^r$, $p,q,r\geq 0$, is a minimal generating set of $(F_2({\mathfrak M}))^{\delta}$ and the algebra of constants is not finitely generated. \end{example} Now we start with the description of the constants of the free algebra $K\langle x,y\rangle$ which also gives the description of the constants in any two-generated associative algebra. \begin{proposition} \label{Hilbert series of constants of free associative algebra} The Hilbert series of the algebra of constants $(K\langle x,y\rangle)^{\delta}$ are \[ H((K\langle x,y\rangle)^{\delta},t,u)=\sum_{(\lambda_1,\lambda_2)} \left(\binom{\lambda_1+\lambda_2}{\lambda_2} -\binom{\lambda_1+\lambda_2}{\lambda_2-1}\right)t^{\lambda_1}u^{\lambda_2} \] \[ =\frac{1-\sqrt{1-4v}}{2v}\cdot \frac{1}{1-\frac{1-\sqrt{1-4v}}{2v}t}, \] where $v=tu$ and, in one variable, \[ H((K\langle x,y\rangle)^{\delta},t)=\sum_{p\geq 0} \left(\binom{2p}{p}t^{2p}+\binom{2p+1}{p}t^{2p+1}\right). \] \end{proposition} \begin{proof} By Corollary \ref{Hilbert series of constants} the Hilbert series of the algebra of constants $(K\langle x,y\rangle)^{\delta}$ is equal to the multiplicity series of the Hilbert series of $K\langle x,y\rangle$. By representation theory of general linear groups, the multiplicity $m_{\lambda}$ of the irreducible $GL_m$-module $W(\lambda)$ in $K\langle x_1,\ldots,x_m\rangle$ for the partition $\lambda$ of $n$ is equal to the degree $d_{\lambda}$ of the irreducible $S_n$-character $\chi_{\lambda}$.
By the hook formula, for $\lambda=(\lambda_1,\lambda_2)$ \[ d_{\lambda}=\frac{(\lambda_1+\lambda_2)!(\lambda_1-\lambda_2+1)}{(\lambda_1+1)!\lambda_2!} =\binom{\lambda_1+\lambda_2}{\lambda_2}-\binom{\lambda_1+\lambda_2}{\lambda_2-1}. \] This gives the expression for $H((K\langle x,y\rangle)^{\delta},t,u)$. If we set there $u=t$ we obtain that the coefficient of $t^{2p}$ is equal to \[ \sum_{i=0}^p\left(\binom{2p}{i}-\binom{2p}{i-1}\right)=\binom{2p}{p}, \] and similarly for the coefficient of $t^{2p+1}$. In order to obtain the formula in terms of $t$ and $v$ we can either use the known formulas for the summation of formal power series with binomial coefficients or proceed in the following way using ideas from \cite{DGn}. The Hilbert series of $K\langle x,y\rangle$ is equal to \[ f(t_1,t_2)=H(K\langle x,y\rangle,t_1,t_2)=\frac{1}{1-(t_1+t_2)}. \] It is sufficient to show that the multiplicity series of $f(t_1,t_2)$ is \[ M'(f)(t,v)=\frac{1-\sqrt{1-4v}}{2v}\cdot \frac{1}{1-\frac{1-\sqrt{1-4v}}{2v}t}. \] Since the multiplicity series of any symmetric function $f(t_1,t_2)\in K[[t_1,t_2]]$ is a uniquely determined formal power series in $K[[t,v]]$, it is sufficient to show that the expansion of \[ \frac{1-\sqrt{1-4v}}{2v}\cdot \frac{1}{1-\frac{1-\sqrt{1-4v}}{2v}t} \] is in $K[[t,v]]$ (which is obvious because $1-\sqrt{1-4v}=\sum_{n\geq 1}a_nv^n$ for some $a_n\in K$ and $\frac{1-\sqrt{1-4v}}{2v}\in K[[v]]$) and to use the formula \[ \frac{t_1M'(f)(t_1,t_1t_2)-t_2M'(f)(t_2,t_1t_2)}{t_1-t_2}=f(t_1,t_2). \] Direct verification shows that for \[ g(t,v)=\frac{1-\sqrt{1-4v}}{2v}\cdot \frac{1}{1-\frac{1-\sqrt{1-4v}}{2v}t} \] \[ \frac{t_1g(t_1,t_1t_2)-t_2g(t_2,t_1t_2)}{t_1-t_2}=\frac{1}{1-(t_1+t_2)} \] which gives that $g(t,v)=M'(f)(t,v)$. \end{proof} By the theorem of Lane \cite{L} and Kharchenko \cite{K1}, the algebra of constants $(K\langle X\rangle)^{\delta}$ is a graded free algebra and hence has a homogeneous system of free generators.
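The closed form in Proposition \ref{Hilbert series of constants of free associative algebra} can be checked coefficientwise: expanding the Catalan series $\frac{1-\sqrt{1-4v}}{2v}=\sum_{n\geq 0}\frac{1}{n+1}\binom{2n}{n}v^n$, the coefficient of $t^{\lambda_1-\lambda_2}v^{\lambda_2}$ in $\frac{C(v)}{1-C(v)t}$ must equal the hook-formula value $d_{(\lambda_1,\lambda_2)}$. A minimal sketch in plain Python (helper names are ours):

```python
from math import comb

N = 12
cat = [comb(2 * n, n) // (n + 1) for n in range(N)]  # Catalan numbers C_n

def series_pow(c, k, n_terms):
    # coefficients of C(v)^k up to degree n_terms - 1, by repeated Cauchy products
    p = [1] + [0] * (n_terms - 1)
    for _ in range(k):
        p = [sum(p[i] * c[j - i] for i in range(j + 1)) for j in range(n_terms)]
    return p

# M'(t,v) = C(v)/(1 - C(v) t) = sum_{k>=0} C(v)^{k+1} t^k, so the coefficient of
# t^{l1-l2} v^{l2} must be d_lambda = binom(l1+l2, l2) - binom(l1+l2, l2-1)
for l1 in range(8):
    for l2 in range(l1 + 1):
        d = comb(l1 + l2, l2) - (comb(l1 + l2, l2 - 1) if l2 >= 1 else 0)
        assert series_pow(cat, l1 - l2 + 1, N)[l2] == d
```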
The following theorem describes the generating function of the set of free generators. \begin{theorem}\label{generating function of the algebra of constants} The generating function of any bihomogeneous system of free generators of $(K\langle X\rangle)^{\delta}$ with respect to the variables $t$ and $v=tu$ is \[ a(t,v)=t+\frac{1-\sqrt{1-4v}}{2}. \] \end{theorem} \begin{proof} If $a(t,v)$ is the generating function of the set of free generators of $(K\langle X\rangle)^{\delta}$, then the Hilbert series of $(K\langle X\rangle)^{\delta}$ is \[ H((K\langle X\rangle)^{\delta},t,v)=\frac{1}{1-a(t,v)}. \] Applying Proposition \ref{Hilbert series of constants of free associative algebra} we obtain that \[ \frac{1}{1-a(t,v)} =\frac{1-\sqrt{1-4v}}{2v}\cdot \frac{1}{1-\frac{1-\sqrt{1-4v}}{2v}t} \] and the expression of $a(t,v)$ is a result of easy calculations. \end{proof} \begin{corollary} \label{generatation by x and sl-invariants} The algebra of constants $(K\langle x,y\rangle)^{\delta}$, where $\delta(x)=0$, $\delta(y)=x$, is generated by $x$ and by $SL_2(K)$-invariants. \end{corollary} \begin{proof} An element $f(x,y)\in K\langle x,y\rangle$ is an $SL_2$-invariant if and only if it is a linear combination of highest weight vectors of $GL_2$-submodules $W(\lambda_1,\lambda_1)$. By Theorem \ref{constants in two variables}, the $\delta$-constants are linear combinations of highest weight vectors $w_{(\lambda_1,\lambda_2)}$, and $w_{(\lambda_1,\lambda_2)}$ is bihomogeneous of degree $(\lambda_1,\lambda_2)$. Hence we obtain that the set of $SL_2$-invariants coincides with the set of linear combinations of bihomogeneous constants of degree $(p,p)$. The only nonzero coefficients of the Hilbert series $H((K\langle x,y\rangle)^{SL_2},t,u)$ are for $v^n=(tu)^n$ and $H((K\langle x,y\rangle)^{SL_2},t,u)$ is obtained from $H((K\langle x,y\rangle)^{\delta},t,v)$ by replacing $t$ with 0.
Hence Theorem \ref{generating function of the algebra of constants} gives that the algebra of $\delta$-constants is generated by $x$ and by $SL_2$-invariants. \end{proof} Corollary \ref{lifting of constants} gives immediately: \begin{corollary} \label{generatation by sl-invariants for algebras of rank two} For any $GL_2$-invariant ideal $I$ of $K\langle x,y\rangle$ the algebra of constants $(K\langle x,y\rangle/I)^{\delta}$, where $\delta(x)=0$, $\delta(y)=x$, is generated by $x$ and by $SL_2(K)$-invariants. \end{corollary} \begin{remark}\label{Catalan} By Almkvist, Dicks and Formanek \cite{ADF} Example 5.10, the Hilbert series of the algebra of $SL_2$-invariants of $K\langle x,y\rangle$ is \[ H((K\langle x,y\rangle)^{SL_2},v)=\frac{1-\sqrt{1-4v}}{2v} =\sum_{n\geq 0}\frac{1}{n+1}\binom{2n}{n}v^n, \] and the coefficient of $v^n$ is the $(n+1)$-st Catalan number $c_{n+1}$. (By definition $c_n$ is the number of ways to distribute parentheses in the sum $1+1+\cdots+1$ of $n$ units, see e.g. \cite{Ha}.) This agrees with Proposition \ref{Hilbert series of constants of free associative algebra} because $H((K\langle x,y\rangle)^{SL_2},v)$ is obtained from \[ H((K\langle x,y\rangle)^{\delta},t,v) =\frac{1-\sqrt{1-4v}}{2v}\cdot \frac{1}{1-\frac{1-\sqrt{1-4v}}{2v}t} \] by replacing $t$ with 0. \par Theorem \ref{generating function of the algebra of constants} gives that the generating function of a homogeneous system of free generators of $(K\langle X\rangle)^{SL_2}$ is \[ b(v)=\frac{1-\sqrt{1-4v}}{2}=vH((K\langle x,y\rangle)^{SL_2},v). \] Since $v=tu$ is of second degree, the number of generators of $(K\langle x,y\rangle)^{SL_2}$ of degree $2n$ is equal to the $n$-th Catalan number. \end{remark} Below we give an inductive procedure to construct an infinite set of free generators of the algebra $(K\langle x,y\rangle)^{SL_2}$.
\begin{algorithm} The following infinite procedure gives a complete set $\{w_1,w_2,\ldots\}$ of free generators of the algebra $(K\langle x,y\rangle)^{SL_2}$. We set $w_1=[x,y]$. If we have already constructed all free generators $w_1,w_2,\ldots,w_k$ of degree $\leq 2n$, then we form all $c_{n+1}$ products $w_{i_1}\cdots w_{i_s}$ of degree $2n$, which we number as $\omega_j$, $j=1,\ldots,c_{n+1}$, and add to the system of generators the $c_{n+1}$ elements \[ w_{k+j}=x\omega_jy-y\omega_jx =xw_{i_1}\cdots w_{i_s}y-yw_{i_1}\cdots w_{i_s}x,\quad j=1,\ldots,c_{n+1}. \] \end{algorithm} The first several elements of the generating set are: \[ w_1=[x,y],\quad w_2=x[x,y]y-y[x,y]x, \] \[ w_3=xw_1^2y-yw_1^2x=x[x,y]^2y-y[x,y]^2x, \] \[ w_4=xw_2y-yw_2x=x(x[x,y]y-y[x,y]x)y-y(x[x,y]y-y[x,y]x)x. \] \begin{proof} By Remark \ref{Catalan} and by inductive arguments, we may assume that the number of products $\omega_j=w_{i_1}\cdots w_{i_s}$ of degree $2n$ is equal to the Catalan number $c_{n+1}$. Hence the number of words $x\omega_jy-y\omega_jx$, all of degree $2(n+1)$, is also equal to $c_{n+1}$, which agrees with the number of free generators of degree $2(n+1)$. Clearly, if $\omega_j$ is an $SL_2$-invariant, the element $x\omega_jy-y\omega_jx$ is also an $SL_2$-invariant. Hence it is sufficient to show that all products $w_{j_1}\cdots w_{j_p}$ of degree $2(n+1)$ and all $x\omega_jy-y\omega_jx$ are linearly independent. \par We introduce the lexicographic ordering on $K\langle x,y\rangle$ assuming that $x<y$. Then by induction we prove that the minimal monomials $z_{k_1}\cdots z_{k_{2n+2}}$, $z_k\in\{x,y\}$, of $w_{j_1}\cdots w_{j_p}$ and $x\omega_jy-y\omega_jx$ have the property that the number of $x$'s in every beginning $z_{k_1}\cdots z_{k_q}$ of $z_{k_1}\cdots z_{k_{2n+2}}$ is greater than or equal to the number of $y$'s.
For example, the minimal monomial of $w_2=x[x,y]y-y[x,y]x$ is $xxyy$, all its beginnings are $x,xx,xxy,xxyy$ and the number of entries of $x$ and $y$ are $(1,0),(2,0),(2,1),(2,2)$, respectively. Similarly, the minimal monomial of \[ w_1w_2^2=[x,y](x[x,y]y-y[x,y]x)(x[x,y]y-y[x,y]x) \] is $xyxxyyxxyy$ and the entries of $x$ and $y$ in the beginnings are \[ (1,0),(1,1),(2,1),(3,1),(3,2),(3,3),(4,3),(5,3),(5,4),(5,5). \] Note that the first place where the number of $x$'s is equal to the number of $y$'s, namely the beginning $xy$, corresponds to the beginning $w_1=[x,y]$ in $w_1w_2^2$ and the rest of the minimal monomial $xxyyxxyy$ has the same property. \par We shall show that the products $w_{j_1}\cdots w_{j_p}$ (including the case $p=1$ of a product of one free generator $x\omega_jy-y\omega_jx$) are in a 1-1 correspondence with the words $z_{k_1}\cdots z_{k_{2n+2}}$ in $x$ and $y$ with the property that the number of $x$'s in every beginning $z_{k_1}\cdots z_{k_q}$ is greater than or equal to the number of $y$'s. Let $\omega=w_{j_1}\cdots w_{j_p}$ be a product of elements of the constructed set. If $p=1$, i.e. $\omega=w_j$ is in the set, then $w_j=x\omega'y-y\omega'x$ and the minimal monomial $z_1\cdots z_{2n}$ of $\omega'$ has the property that the number of $x$'s in every beginning of $z_1\cdots z_{2n}$ is greater than or equal to the corresponding number of $y$'s. Since the minimal monomial of $w_j$ is $xz_1\cdots z_{2n}y$, we obtain that in each of its proper beginnings the number of occurrences of $x$ is strictly greater than the number of occurrences of $y$. If $p>1$, then, reading the minimal word from left to right, the first place where the numbers of the $x$'s and the $y$'s are the same is the end of $w_{j_1}$. These arguments combined with induction easily imply that the different products $w_{j_1}\cdots w_{j_p}$ have different minimal monomials and each word corresponds to some product $w_{j_1}\cdots w_{j_p}$.
Hence the products $w_{j_1}\cdots w_{j_p}$ are linearly independent and this completes the proof. \end{proof} \begin{corollary} \label{constants of Engel algebras} For any variety $\mathfrak W$ of associative algebras which does not contain the metabelian variety $\mathfrak M$, the algebra of constants $F_2({\mathfrak W})^{\delta}$ is finitely generated. \end{corollary} \begin{proof} It is well known that any variety $\mathfrak W$ which does not contain $\mathfrak M$ satisfies some Engel identity $[x_2,x_1,\ldots,x_1]=0$. By a theorem of Latyshev \cite{La} any finitely generated PI-algebra satisfying a non-matrix polynomial identity also satisfies an identity of the form $[x_1,x_2]\cdots[x_{2k-1},x_{2k}]=0$. Applying this result to $F_2({\mathfrak W})$ we obtain that $F_2({\mathfrak W})$ is solvable as a Lie algebra, and, by a theorem of Higgins \cite{Hi}, $F_2({\mathfrak W})$ is Lie nilpotent. (Actually Zelmanov \cite{Z} proved the stronger result that any Lie algebra over a field of characteristic zero satisfying the Engel identity is nilpotent.) \par By Drensky \cite{D2}, for any Lie nilpotent variety $\mathfrak W$, and for a fixed positive integer $m$, the vector space $B_m({\mathfrak W})$ of so-called proper polynomials in $F_m({\mathfrak W})$ is finite dimensional. Using the relation \[ F_m({\mathfrak W})\cong K[x_1,\ldots,x_m]\otimes_K B_m({\mathfrak W}) \] between the $GL_m$-module structure of $F_m({\mathfrak W})$ and $B_m({\mathfrak W})$ and the Young rule, we can derive the following. There exists a positive integer $p$ such that the nonzero irreducible components $W(\lambda_1,\ldots,\lambda_m)$ of the $GL_m$-module $F_m({\mathfrak W})$ satisfy the restriction $\lambda_2\leq p$. Hence the subalgebra $F_2({\mathfrak W})^{SL_2}$ of $SL_2$-invariants of $F_2({\mathfrak W})$ (which is spanned by the highest weight vectors of $W(\lambda_1,\lambda_1)$ with $\lambda_1\leq p$) is finite dimensional.
Now the statement follows from Corollary \ref{generatation by sl-invariants for algebras of rank two} because $F_2({\mathfrak W})^{\delta}$ is generated by $x$ and the finite dimensional vector space $F_2({\mathfrak W})^{SL_2}$. \end{proof} Corollary \ref{constants of Engel algebras} inspires the following: \begin{question} Is it true that, for $m\geq 2$ and for a fixed nonzero Weitzenb\"ock derivation $\delta$, the algebra of constants $F_m({\mathfrak W})^{\delta}$ is finitely generated if and only if the variety of associative algebras $\mathfrak W$ does not contain the metabelian variety $\mathfrak M$? \end{question} Corollary \ref{lifting of constants}, Example \ref{free metabelian associative algebra} and Corollary \ref{constants of Engel algebras} show that the answer to this question is affirmative for $m=2$. In the next section we shall show that the algebra of constants $F_m({\mathfrak W})^{\delta}$ is not finitely generated if $\mathfrak W$ contains $\mathfrak M$. \section{Constants of Relatively Free Associative Algebras} First we shall work in the free metabelian associative algebra $F_m({\mathfrak M})$ where the metabelian variety is defined by the polynomial identity $[x_1,x_2][x_3,x_4]=0$. We need an embedding of $F_m({\mathfrak M})$ into a wreath product. For this purpose, let $Y=\{y_1,\ldots,y_m\}$, $U=\{u_1,\ldots,u_m\}$ and $V=\{v_1,\ldots,v_m\}$ be three sets of commuting variables and let \[ M=\sum_{i=1}^ma_iK[U,V] \] be the free $K[U,V]$-module of rank $m$ generated by $\{a_1,\ldots,a_m\}$. Clearly, $M$ has also a structure of a free $K[Y]$-bimodule with the action of $K[Y]$ defined by \[ y_ja_i=a_iu_j,\quad a_iy_j=a_iv_j,\quad i,j=1,\ldots,m. \] Define the trivial multiplication $M\cdot M=0$ on $M$ and consider the algebra \[ W=K[Y]\rightthreetimes M, \] which is similar to the abelian wreath product of Lie algebras, see \cite{Sh2} ($M$ is an ideal of $W$ with multiplication by $K[Y]$ induced by the bimodule action of $K[Y]$ on $M$). 
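The multiplication in $W=K[Y]\rightthreetimes M$ can be prototyped directly from the defining rules $y_ja_i=a_iu_j$, $a_iy_j=a_iv_j$ and $M\cdot M=0$. In the sketch below (our own encoding, for $m=2$; not taken from the text) an element $g(Y)+a_1h_1(U,V)+a_2h_2(U,V)$ is stored as a pair of callables, and the commutator $[y_1+a_1,y_2+a_2]=a_1(v_2-u_2)+a_2(u_1-v_1)$ is checked at sample points.

```python
# An element of W (m = 2) is a pair (g, (h1, h2)):
# g = g(y1, y2) in K[Y], h_i = h_i(u1, u2, v1, v2) in K[U, V],
# representing g + a1*h1 + a2*h2.  Multiplication uses M*M = 0 and
# y_j * a_i = a_i * u_j,  a_i * y_j = a_i * v_j.
def wmul(w1, w2):
    g1, m1 = w1
    g2, m2 = w2
    g = lambda y1, y2: g1(y1, y2) * g2(y1, y2)
    m = tuple(
        (lambda u1, u2, v1, v2, i=i:
            g1(u1, u2) * m2[i](u1, u2, v1, v2)      # g1 acts on the left via U
            + m1[i](u1, u2, v1, v2) * g2(v1, v2))   # g2 acts on the right via V
        for i in range(2)
    )
    return (g, m)

def wsub(w1, w2):
    g1, m1 = w1
    g2, m2 = w2
    return (lambda y1, y2: g1(y1, y2) - g2(y1, y2),
            tuple((lambda *s, i=i: m1[i](*s) - m2[i](*s)) for i in range(2)))

# images of x1, x2 under x_j -> y_j + a_j
X1 = (lambda y1, y2: y1, (lambda u1, u2, v1, v2: 1, lambda u1, u2, v1, v2: 0))
X2 = (lambda y1, y2: y2, (lambda u1, u2, v1, v2: 0, lambda u1, u2, v1, v2: 1))

comm = wsub(wmul(X1, X2), wmul(X2, X1))   # image of [x1, x2]

# check [y1+a1, y2+a2] = a1*(v2 - u2) + a2*(u1 - v1) at sample points
for (u1, u2, v1, v2) in [(1, 2, 3, 4), (5, -1, 2, 7), (0, 3, -2, 1)]:
    assert comm[0](9, 11) == 0
    assert comm[1][0](u1, u2, v1, v2) == v2 - u2
    assert comm[1][1](u1, u2, v1, v2) == u1 - v1
```

Since all the expressions involved are polynomials, agreement at a handful of points is only a sanity check, but it makes the bimodule action easy to experiment with.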
Obviously $W$ satisfies the metabelian identity and hence belongs to $\mathfrak M$. The following proposition is a particular case of the main result of Lewin \cite{Le}, see also Umirbaev \cite{U} for further applications of this construction to automorphisms of relatively free associative algebras. \begin{proposition} \label{embedding in wreath products} The mapping $\iota:x_j\to y_j+a_j$, $j=1,\ldots,m$, defines an embedding $\iota$ of $F_m({\mathfrak M})$ into $W=K[Y]\rightthreetimes M$. \end{proposition} \begin{proposition} \label{metabelian algebras of any rank} For any nontrivial Weitzenb\"ock derivation $\delta$ of the free metabelian associative algebra $F_m({\mathfrak M})$ of rank $m\geq 2$, the algebra of constants $F_m({\mathfrak M})^{\delta}$ is not finitely generated. \end{proposition} \begin{proof} The derivation $\delta$ acts as a linear operator on the vector space with basis $\{x_1,\ldots,x_m\}$ and we define in a similar way the action of $\delta$ on the vector spaces with bases $\{y_1,\ldots,y_m\}$ and $\{a_1,\ldots,a_m\}$: If $\delta(x_j)=\sum_{i=1}^m\alpha_{ij}x_i$, $\alpha_{ij}\in K$, then $\delta(y_j)=\sum_{i=1}^m\alpha_{ij}y_i$ and $\delta(a_j)=\sum_{i=1}^m\alpha_{ij}a_i$, $j=1,\ldots,m$. As in the proof of Lemma \ref{extending derivations} we can show that this action $\delta$ defines a derivation on $W$ and on the polynomial algebra $K[U,V]$ (which we denote also by $\delta$). Additionally, we consider the embedding $\iota$ of $F_m({\mathfrak M})$ as a subalgebra in $W$, as stated in Proposition \ref{embedding in wreath products}. By definition $\delta(\iota(x_j))=\delta(y_j+a_j)=\iota(\delta(x_j))$ and hence if $\delta(f(X))=0$ in $F_m({\mathfrak M})$, then the same holds for the image $\iota(f)$ of $f$ in $W$. In this way, $\iota$ embeds the algebra of constants $F_m({\mathfrak M})^{\delta}$ into the algebra of constants $W^{\delta}$. \par As before, we assume that $\delta(x_1)=0$ and $\delta(x_2)=x_1$.
If the algebra of constants $F_m({\mathfrak M})^{\delta}$ is generated by a finite set $\{f_1,\ldots,f_n\}$, then, as elements of $W$, \[ \iota(f_k)=g_k(Y)+\sum_{i=1}^ma_ih_{ik}(U,V),\quad g_k(Y)\in K[Y], h_{ik}(U,V)\in K[U,V], \] $i=1,\ldots,m$, $k=1,\ldots,n$, and \[ g_k(Y),b_k=\sum_{i=1}^ma_ih_{ik}(U,V),\quad k=1,\ldots,n, \] are also constants. Hence $\iota\left(F_m({\mathfrak M})^{\delta}\right)$ is a subalgebra of the subalgebra of $W^{\delta}$ generated by the union of the finite sets \[ \{g_1,\ldots,g_n\}\subset K[Y]^{\delta},\quad \{b_1,\ldots,b_n\}\subset M^{\delta}. \] This implies that $F_m({\mathfrak M})^{\delta}$ is a subalgebra of \[ W_0=K[Y]^{\delta}\rightthreetimes \sum_{k=1}^nb_kK[U]^{\delta}K[V]^{\delta}. \] By Corollary \ref{transcendence degree} the transcendence degree of $K[Y]^{\delta}$ is equal to $m-1$ and hence the transcendence degree of $K[U]^{\delta}K[V]^{\delta}$ is equal to $2(m-1)$. Since, see e.g. the book by Krause and Lenagan \cite{KL}, the Gelfand-Kirillov dimension of a commutative algebra is equal to the transcendence degree of the algebra, we easily derive that the Gelfand-Kirillov dimension of the algebra $W_0$ is bounded from above by $2(m-1)$. On the other hand, the vector space $\iota\left([x_1,x_2]\right)K[U,V]$ is contained in $\iota\left(F_m({\mathfrak M})\right)$ and is a free $K[U,V]$-module generated by $a_1(v_2-u_2)+a_2(u_1-v_1)$. Since $\iota\left([x_1,x_2]\right)\in M^{\delta}$, we obtain that $\iota\left([x_1,x_2]\right)K[U,V]^{\delta}$ is a free $K[U,V]^{\delta}$-module. By Corollary \ref{transcendence degree} the transcendence degree of $K[U,V]^{\delta}$ is equal to $2m-1$, and hence the Gelfand-Kirillov dimension of this free $K[U,V]^{\delta}$-module is equal to $2m-1$. This is also a lower bound for the Gelfand-Kirillov dimension of $F_m({\mathfrak M})^{\delta}$ which contradicts the inequality $\text{\rm GKdim}(F_m({\mathfrak M})^{\delta})\leq \text{\rm GKdim}(W_0)\leq 2(m-1)$.
\end{proof} \begin{remark} \label{concrete elements for metabelian case} In the notation of Proposition \ref{metabelian algebras of any rank}, if $b_1,\ldots,b_k$ are finitely many elements of $M^{\delta}$, then the subalgebra of $K[Y]^{\delta}\rightthreetimes M^{\delta}$ generated by $K[Y]^{\delta}$ and $b_1,\ldots,b_k$ contains only a finite number of elements $\iota([x_1,x_2])(u_1v_2-u_2v_1)^n$. This can be seen in the following way. We consider the localization of the polynomial algebra $K[Y][y_1^{-1}]=K[y_1,y_2,\ldots,y_m][y_1^{-1}]$, and similarly $K[U][u_1^{-1}],K[V][v_1^{-1}]$. Then we define $W'=K[Y][y_1^{-1}]\rightthreetimes MK[u_1^{-1},v_1^{-1}]$. Since $y_1,u_1,v_1$ are $\delta$-constants, we can extend the action of $\delta$ as a derivation on $W$ to a derivation on $W'$. Let $\delta^{p_j+1}(y_j)=0$, $j=1,\ldots,m$, and let us define \[ \tilde y_j=\sum_{k=0}^{p_j}\frac{\delta^k(y_j)}{k!}(-y_2)^ky_1^{p_j-k},\quad j=3,4,\ldots,m, \] and similarly $\tilde u_j,\tilde v_j$. Let also $\tilde w_2=u_1v_2-u_2v_1$. By Proposition \ref{generators Weitzenboeck} \[ (K[Y][y_1^{-1}])^{\delta}=K[y_1,y_1^{-1}][\tilde y_3,\tilde y_4,\ldots,\tilde y_m], \] \[ (K[U,V][u_1^{-1},v_1^{-1}])^{\delta}=K[u_1,v_1,u_1^{-1},v_1^{-1}] [\tilde u_3,\ldots,\tilde u_m,\tilde v_3,\ldots,\tilde v_m,\tilde w_2]. \] The algebra generated by $K[Y]^{\delta}$ and $b_1,\ldots,b_k$ is a subalgebra of \[ (K[Y][y_1^{-1}])^{\delta}\rightthreetimes \sum_{j=1}^kb_j(K[U][u_1^{-1}])^{\delta}(K[V][v_1^{-1}])^{\delta} \] and hence its elements have the form \[ f(\tilde y_3,\ldots,\tilde y_m) +\sum_{j=1}^kb_jf_j(\tilde u_3,\ldots,\tilde u_m,\tilde v_3,\ldots,\tilde v_m), \] where $f$ and $f_j$ are polynomials with coefficients depending respectively on $y_1,y_1^{-1}$ and $u_1,v_1,u_1^{-1},v_1^{-1}$.
Since $\tilde u_3,\ldots,\tilde u_m,\tilde v_3,\ldots,\tilde v_m,\tilde w_2$ are algebraically independent over $K[u_1,v_1,u_1^{-1},v_1^{-1}]$, and each of the finitely many elements $b_1,\ldots,b_k$ has only finitely many summands, we cannot present all elements $\iota([x_1,x_2])(u_1v_2-u_2v_1)^n=(a_1(v_2-u_2)+a_2(u_1-v_1))\tilde w_2^n$ in the form \[ (a_1(v_2-u_2)+a_2(u_1-v_1))\tilde w_2^n= \sum_{j=1}^kb_jf_{jn}(\tilde u_3,\ldots,\tilde u_m,\tilde v_3,\ldots,\tilde v_m). \] \end{remark} \begin{theorem} \label{constants of relatively free associative algebras} Let $\mathfrak W$ be a variety of associative algebras containing the metabelian variety $\mathfrak M$. Then for any $m\geq 2$ and for any fixed nonzero Weitzenb\"ock derivation $\delta$, the algebra of constants $F_m({\mathfrak W})^{\delta}$ is not finitely generated. \end{theorem} \begin{proof} By Corollary \ref{lifting of constants} the algebra $F_m({\mathfrak M})^{\delta}$ is a homomorphic image of $F_m({\mathfrak W})^{\delta}$. Now the proof follows immediately because $F_m({\mathfrak M})^{\delta}$ is not finitely generated by Proposition \ref{metabelian algebras of any rank}. \end{proof} \begin{remark} Using the elements $\iota([x_1,x_2])(u_1v_2-u_2v_1)^n$, $n\geq 0$, from Remark \ref{concrete elements for metabelian case}, for any variety $\mathfrak W$ containing the metabelian variety $\mathfrak M$ and any nontrivial Weitzenb\"ock derivation $\delta$, we can construct an infinite set of constants which is not contained in any finitely generated subalgebra of $F_m({\mathfrak W})^{\delta}$. Again, we assume that $\delta(x_1)=0$, $\delta(x_2)=x_1$. Let $l_u$ and $r_u$ be, respectively, the operators of left and right multiplication by $u\in F_m({\mathfrak W})$. Consider the elements \[ (l_{x_1}r_{x_2}-l_{x_2}r_{x_1})^n[x_1,x_2],\quad n\geq 0.
\] All these elements are constants which are liftings of the constants from Remark \ref{concrete elements for metabelian case} and hence any finitely generated subalgebra of $F_m({\mathfrak W})^{\delta}$ does not contain $(l_{x_1}r_{x_2}-l_{x_2}r_{x_1})^n[x_1,x_2]$ for sufficiently large $n$. \end{remark} \begin{corollary} \label{unitriangular invariants} Let $\mathfrak W$ be a variety of associative algebras containing the metabelian variety $\mathfrak M$. Then for any $m\geq 2$ the algebra $F_m({\mathfrak W})^{UT_m}$ of $UT_m(K)$-invariants is not finitely generated. \end{corollary} \begin{proof} Assume that the algebra $F_m({\mathfrak W})^{UT_m}$ is finitely generated. By Remarks \ref{three remarks}, the algebra $(K\langle x_1,\ldots,x_m\rangle)^{UT_m}$, and hence also $F_m({\mathfrak W})^{UT_m}$, is spanned by all highest weight vectors. Hence $F_m({\mathfrak W})^{UT_m}$ is generated by a finite system of highest weight vectors $w(x_1,\ldots,x_m)\in W(\lambda)\subset F_m({\mathfrak W})^{UT_m}$. Hence $F_m({\mathfrak W})^{UT_m}$ is multigraded and has a finite multihomogeneous set of generators. The generators which depend only on $x_1$ and $x_2$ generate the subalgebra spanned by all highest weight vectors $w(x_1,\ldots,x_m)\in W(\lambda_1,\lambda_2,0,\ldots,0)$. This subalgebra coincides with the algebra of $UT_2$-invariants of $F_2({\mathfrak W})$ and hence with the algebra of constants of the Weitzenb\"ock derivation $\delta$ of $F_2({\mathfrak W})$ defined by $\delta(x_1)=0$, $\delta(x_2)=x_1$. By Theorem \ref{constants of relatively free associative algebras} for $m=2$ (or by Corollary \ref{lifting of constants} and Example \ref{free metabelian associative algebra}) $F_2({\mathfrak W})^{\delta}$ is not finitely generated. Hence the algebra $F_m({\mathfrak W})^{UT_m}$ cannot be finitely generated.
\end{proof} \begin{remark} \label{unitriangular invariants of Lie nilpotent varieties} Let $\mathfrak W$ be a Lie nilpotent variety of associative algebras and let $m$ be a fixed positive integer. Using the approach of \cite{D2} (as in the proof of Corollary \ref{constants of Engel algebras}), and the fact that $F_m({\mathfrak W})$ is a direct sum of $GL_m$-modules of the form $W(\lambda_1,\ldots,\lambda_m)$ with $\lambda_2\leq p$ for some $p$, one can show that there exists a finite system of highest weight vectors $w_i(x_1,\ldots,x_k)\in F_m({\mathfrak W})$, $i=1,\ldots,k$, such that all highest weight vectors of $F_m({\mathfrak W})$ are linear combinations of $x^nw_i(x_1,\ldots,x_k)$. Hence the algebra $F_m({\mathfrak W})^{UT_m}$ of $UT_m$-invariants is generated by $x$ and $w_i(x_1,\ldots,x_k)$, $i=1,\ldots,k$. Hence $F_m({\mathfrak W})^{UT_m}$ is finitely generated. \end{remark} \section{Generic $2\times 2$ Matrices} In this section we construct classes of automorphisms of the relatively free algebra $F_2(\text{\rm var }M_2(K))$. This algebra is isomorphic to the algebra generated by two generic $2\times 2$ matrices $x$ and $y$. So, the results are stated in the natural setup of the trace algebra. We start with the necessary background, see Formanek \cite{F2}, Alev and Le Bruyn \cite{AL}, or Drensky and Gupta \cite{DG}. \par We consider the polynomial algebra in 8 variables $\Omega=K[x_{ij},y_{ij}\mid i,j=1,2]$. The algebra $R$ of two generic $2\times 2$ matrices \[ x = \begin{pmatrix} x_{11}&x_{12}\\ x_{21}&x_{22}\\ \end{pmatrix}\quad \text{ \rm and }\quad y = \begin{pmatrix} y_{11}&y_{12}\\ y_{21}&y_{22}\\ \end{pmatrix} \] is the subalgebra of $M_2(\Omega)$ generated by $x$ and $y$. We denote by $C$ the centre of $R$ and by $\bar C$ the algebra generated by all the traces of elements from $R$. Identifying the elements of $\bar C$ with $2 \times 2$ scalar matrices we denote by $T$ the generic trace algebra generated by $R$ and $\bar C$. 
It is well known that $\bar C$ is generated by \[ \text{\rm tr}(x), \text{\rm tr}(y), \text{\rm det}(x), \text{\rm det}(y), \text{\rm tr}(xy) \] and is isomorphic to the polynomial algebra in five variables. \begin{proposition} \label{centre of generic matrix algebra} {\rm (Formanek, Halpin, Li \cite{FHL})} The vector subspace of $C$ consisting of all polynomials without constant term is a free $\bar C$-module generated by $[x,y]^2$. \end{proposition} For our purposes it is more convenient to replace in $T$ (as in \cite{AL}) the generic matrices $x$ and $y$ by the generic traceless matrices $$ x_0=x-\frac{1}{2}\text{\rm tr}(x),\, y_0=y-\frac{1}{2}\text{\rm tr}(y) $$ and assume that $T$ is generated by $x_0$, $y_0$, $\text{\rm tr}(x)$, $\text{\rm tr}(y)$, $\text{\rm det}(x_0)$, $\text{\rm det}(y_0)$, $\text{\rm tr}(x_0y_0)$. A further reduction is to use the formulas \[ \text{\rm det}(x_0)= -\frac{1}{2}\text{\rm tr}(x_0^2),\, \text{\rm det}(y_0)= -\frac{1}{2}\text{\rm tr}(y_0^2), \] and to replace the determinants by $\text{\rm tr}(x_0^2)$ and $\text{\rm tr}(y_0^2)$. In this way, we may assume that $\bar C$ is generated by \[ p = \text{\rm tr}(x), q = \text{\rm tr}(y), u = \text{\rm tr}(x_0^2), v = \text{\rm tr}(y_0^2), t = \text{\rm tr}(x_0y_0). \] Then $[x,y]^2=t^2-uv$ and \[ T=\bar C+\bar Cx_0+\bar Cy_0+\bar C[x_0,y_0] \] is a free $\bar C$-module generated by $1,x_0,y_0,[x_0,y_0]$. \par The defining relations of the algebra generated by the $2\times 2$ traceless matrices $x_0$ and $y_0$ are $[x_0^2,y_0]=[y_0^2,x_0]=0$, see e.g. \cite{LB} or \cite{DKo} for the case of characteristic 0 and \cite{Ko2} for the case of an arbitrary infinite base field. 
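The two facts just stated, the identity $[x,y]^2=t^2-uv$ and the defining relations $[x_0^2,y_0]=[y_0^2,x_0]=0$, are easy to check numerically. The following pure-Python sketch (ours, not part of the original argument; exact rational arithmetic, no external libraries) verifies both on random traceless $2\times 2$ matrices; both checks ultimately rest on the Cayley--Hamilton theorem, which makes the square of a traceless $2\times 2$ matrix a scalar matrix.

```python
from fractions import Fraction
import random

def mul(A, B):
    # product of 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def comm(A, B):
    # commutator [A, B] = AB - BA
    P, Q = mul(A, B), mul(B, A)
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def random_traceless(rnd):
    # traceless 2x2 matrix [[a, b], [c, -a]]
    a = rnd()
    return [[a, rnd()], [rnd(), -a]]

random.seed(1)
rnd = lambda: Fraction(random.randint(-9, 9), random.randint(1, 9))
ZERO = [[0, 0], [0, 0]]

for _ in range(100):
    x0, y0 = random_traceless(rnd), random_traceless(rnd)
    # defining relations: x0^2 = -det(x0)*I is scalar (Cayley-Hamilton), so it commutes
    assert comm(mul(x0, x0), y0) == ZERO
    assert comm(mul(y0, y0), x0) == ZERO
    # [x, y] = [x0, y0] and [x0, y0]^2 = (t^2 - u*v) * I
    u, v, t = tr(mul(x0, x0)), tr(mul(y0, y0)), tr(mul(x0, y0))
    c = comm(x0, y0)
    s = t * t - u * v
    assert mul(c, c) == [[s, 0], [0, s]]
print("ok")
```

Of course this only tests the relations on point evaluations of the generic matrices; it is a sanity check, not a substitute for the cited proofs.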
More generally, the defining relations of the algebra generated by $m$ generic $2\times 2$ traceless matrices $y_1,\ldots,y_m$ are $[v_1^2,v_2]=0$, where $v_1,v_2$ run over the set of all Lie elements in $K\langle y_1,\ldots,y_m\rangle$, which is a restatement of the theorem of Razmyslov \cite{R} for the weak polynomial identities of $M_2(K)$. An explicitly written system of defining relations consists of $[y_i^2,y_j]=0$, $i,j=1,\ldots,m$, and the standard polynomials $s_4(y_{i_1},y_{i_2},y_{i_3},y_{i_4})=0$, $1\leq i_1<i_2<i_3<i_4\leq m$, see \cite{DKo}. \begin{lemma} \label{derivations of generic matrices} Every mapping $\delta: \{p,q,x_0,y_0\}\to T$ such that \[ \delta(p),\delta(q)\in \bar C,\quad \delta(x_0),\delta(y_0)\in \bar Cx_0+\bar Cy_0+\bar C[x_0,y_0] \] can be extended to a derivation of $T$. \end{lemma} \begin{proof} The defining relations of $T$ are \[ [p,q]=[p,x_0]=[p,y_0]=[q,x_0]=[q,y_0]=0, \] together with the defining relations of the subalgebra generated by $x_0,y_0$. It is sufficient to see that the extension of $\delta$ (inductively, by the rule $\delta(fg)=\delta(f)g+f\delta(g)$) to a derivation on $T$ is well defined, i.e. sends the defining relations to 0. For the relations involving $p$ and $q$ this can be checked directly: \[ \delta([p,q])=[\delta(p),q]+[p,\delta(q)]=0, \] analogously for $\delta([p,x_0]),\delta([p,y_0]),\delta([q,x_0]),\delta([q,y_0])$, because $p,q,\delta(p),\delta(q)$ are in the centre of $T$. The condition for the defining relations of the algebra generated by $x_0,y_0$ can be proved using the universal properties of this algebra or directly: Since $x_0^2, y_0^2, x_0y_0+y_0x_0,[x_0,y_0]^2$ are in the centre of $T$, and $x_0[x_0,y_0]+[x_0,y_0]x_0=y_0[x_0,y_0]+[x_0,y_0]y_0=0$, if $\delta(x_0)=ax_0+by_0+c[x_0,y_0]$, $a,b,c\in\bar C$, then \[ (\delta(x_0))^2=a^2x_0^2+b^2y_0^2+c^2[x_0,y_0]^2+ab(x_0y_0+y_0x_0), \] \[ \delta(x_0)x_0+x_0\delta(x_0)=ax_0^2+b(x_0y_0+y_0x_0) \] are in the centre of $T$ and $\delta([x_0^2,y_0])=0$.
In the same way $\delta([y_0^2,x_0])=0$. \end{proof} \begin{example}\label{automorphisms fixing x} Let us consider the basic Weitzenb\"ock derivation $\delta$ defined on the relatively free algebra $F_2(\text{\rm var }M_2(K))$ in its realization as the generic trace algebra generated by generic $2\times 2$ matrices $x$ and $y$ by $\delta(x)=0$, $\delta(y)=x$. We extend $\delta$ to the trace algebra $T$ by \[ \delta(p)=\delta(\text{\rm tr}(x))=\text{\rm tr}(\delta(x)), \] \[ \delta(q)=\delta(\text{\rm tr}(y))=\text{\rm tr}(\delta(y)), \] \[ \delta(x_0)=0,\quad \delta(y_0)=x_0, \] \[ \delta(u)=\delta(\text{\rm tr}(x_0^2))=\text{\rm tr}(\delta(x_0^2)), \] \[ \delta(v)=\delta(\text{\rm tr}(y_0^2))=\text{\rm tr}(\delta(y_0^2)) \] \[ \delta(t)=\delta(\text{\rm tr}(x_0y_0))=\text{\rm tr}(\delta(x_0y_0)). \] By Lemma \ref{derivations of generic matrices} this is possible. Direct calculations give that \[ \delta(p)=0,\quad \delta(q)=p,\quad \delta(u)=0,\quad \delta(t)=u,\quad \delta(v)=2t. \] Replacing $v$ with $2v_1$, we obtain that the action of $\delta$ on $\bar C=K[p,q,u,t,v_1]$ is as in Examples \ref{examples of concrete generators}. Hence \[ (\bar C)^{\delta}=K[p,u,pt-qu,t^2-2uv_1,2p^2v_1-2pqt+q^2u] \] \[ =K[p,u,pt-qu,t^2-uv,q^2u-2pqt+p^2v]. \] The generators of $(\bar C)^{\delta}$ satisfy the relation \[ u(q^2u-2pqt+p^2v)+p^2(t^2-uv)=(pt-qu)^2. \] If $w\in(\bar C)^{\delta}$, then $\exp(w\delta)$ is an automorphism of $T$. If $t^2-uv$ divides $w$, then $\exp(w\delta)$ is an automorphism also of $R$. This automorphism acts on $R$ as \[ \exp(w\delta): x\to x,\quad \exp(w\delta): y\to y+wx, \] where $w=(t^2-uv)w_1(p,u,pt-qu,t^2-uv,q^2u-2pqt+p^2v)$ for some polynomial $w_1$. Such automorphisms (fixing $x$) were studied in the Ph. D. Thesis of Chang \cite{C}. \end{example} \begin{example} Now we shall modify Example \ref{automorphisms fixing x} in the following way. 
We use Lemma \ref{derivations of generic matrices} and define the derivation $\delta$ of $T$ by \[ \delta(p)=\alpha_1u+\beta_1t+\gamma_1v, \quad \delta(q)=p+\alpha_2u+\beta_2t+\gamma_2v, \] $\alpha_i,\beta_i,\gamma_i\in \bar C$, $i=1,2$, \[ \delta(u)=0,\quad \delta(t)=u,\quad \delta(v)=2t. \] This derivation is locally nilpotent and acts on the generic matrices $x=\frac{1}{2}\text{\rm tr}(x)+x_0$ and $y=\frac{1}{2}\text{\rm tr}(y)+y_0$ by \[ \delta(x)=\frac{1}{2}(\alpha_1u+\beta_1t+\gamma_1v),\quad \delta(y)=x+\frac{1}{2}(\alpha_2u+\beta_2t+\gamma_2v). \] The matrix of the linear operator $\delta$ acting on the vector space $Kp+Kq+Ku+Kt+Kv$ (with respect to the basis $\{p,q,u,t,v\}$) is \[ \begin{pmatrix} 0&1&0&0&0\\ 0&0&0&0&0\\ \alpha_1&\alpha_2&0&1&0\\ \beta_1&\beta_2&0&0&2\\ \gamma_1&\gamma_2&0&0&0 \end{pmatrix} \] and has rank 3 or 4 depending on whether $\gamma_1=0$ or $\gamma_1\not=0$. Hence its Jordan normal form is one of the following matrices: \[ \begin{pmatrix} 0&1&0&0&\\ 0&0&1&0&\\ 0&0&0&1&\\ 0&0&0&0&\\ &&&&0 \end{pmatrix}, \quad \begin{pmatrix} 0&1&0&&\\ 0&0&1&&\\ 0&0&0&&\\ &&&0&1\\ &&&0&0 \end{pmatrix}, \quad \begin{pmatrix} 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\\ 0&0&0&0&0 \end{pmatrix}. \] Examples \ref{examples of concrete generators} give concrete systems of generators of the algebras of constants $(\bar C)^{\delta}$ and hence automorphisms of the algebras $T$ and $R$. \par For example, if we fix $\delta(p)=v$, $\delta(q)=p$, then $\delta$ is a basic derivation with \[ \delta(q)=p,\quad \delta(p)=v,\quad \delta(v)=2t,\quad \delta(t)=u,\quad \delta(u)=0. \] Considering $\bar C=K[q/2,p/2,v/2,t,u]$, we obtain after some easy calculations that the algebra of constants is generated by \[ u,\quad t^2-uv,\quad tp-qu-\frac{v^2}{4}, \] \[ t^3-\frac{3}{2}utv+\frac{3}{2}u^2p, \quad 3t^2q-\frac{3}{2}tvp+\frac{v^3}{4}-3uvq+\frac{9}{4}up^2.
\] In this case $\delta$ acts on $x$ and $y$ by \[ \delta(x)=\frac{1}{2}\text{\rm tr}(y_0^2)=\frac{1}{2}v,\quad \delta(y)=x. \] If $w$ is in $(\bar C)^{\delta}$, then \[ \exp(w\delta):x\to x+\frac{wv}{2\cdot 1!}+\frac{w^2t}{2!}+\frac{w^3u}{3!}, \] \[ \exp(w\delta):y\to y+\frac{wx}{1!}+\frac{w^2v}{2\cdot 2!}+\frac{w^3t}{3!}+\frac{w^4u}{4!}. \] If $w$ is divisible by $t^2-uv$, then $\exp(w\delta)$ is also an automorphism of $R$. Since all these automorphisms $\exp(w\delta)$ are obtained by the construction of Martha Smith \cite{Sm}, they induce stably tame automorphisms of $\bar C=K[p,q,u,t,v]$. \end{example} \section{Relatively Free Lie Algebras} We start with a few examples of the algebras of constants of relatively free algebras. By the well-known dichotomy a variety of Lie algebras either satisfies the Engel condition (and by the theorem of Zelmanov \cite{Z} is nilpotent) or contains the metabelian variety $\mathfrak A^2$ (which consists of all solvable Lie algebras of class 2 and is defined by the identity $[[x_1,x_2],[x_3,x_4]]=0$). Since the finitely generated nilpotent Lie algebras are finite dimensional, the problem of the finite generation of the algebras of constants of relatively free nilpotent Lie algebras is solved trivially. \par The bases of the free polynilpotent Lie algebras were described by Shmelkin \cite{Sh1}. Considering relatively free algebras of rank 2, we assume that the algebra is generated by $x$ and $y$ and the basic Weitzenb\"ock derivation $\delta$ is defined by $\delta(x)=0$, $\delta(y)=x$. \begin{example} Let $L_2({\mathfrak A}^2)=L_2/L''_2$ be the free metabelian Lie algebra of rank 2. It has a basis \[ \{x,y,[y,x,\underbrace{x,\ldots,x}_{a\text{ \rm times}}, \underbrace{y,\ldots,y}_{b\text{ \rm times}}]\mid a,b\geq 0\}.
\] It is well known (and can also be obtained by simple arguments from the Hilbert series of $L_m({\mathfrak A}^2)$) that the $n$-th cocharacter of the variety ${\mathfrak A}^2$ is \[ \chi_1({\mathfrak A}^2)=\chi_{(1)},\quad \chi_n({\mathfrak A}^2)=\chi_{(n-1,1)},\, n\geq 2. \] The corresponding highest weight vectors are \[ w_{(1)}=x,\quad w_{(n-1,1)}=[y,x,\underbrace{x,\ldots,x}_{n-2\text{ \rm times}}],\, n\geq 2. \] Hence the algebra of constants $L_2({\mathfrak A}^2)^{\delta}$ is generated by $x$ and $[x,y]$. \end{example} \begin{example} The free abelian-by-\{nilpotent of class 2\} Lie algebra $L_2({\mathfrak A}{\mathfrak N}_2)=L_2/[L_2,L_2,L_2]'$ satisfies the identity \[ [[x_1,x_2,x_3],[x_4,x_5,x_6]]=0 \] and has a basis \[ \{x,y,[x,y],[y,x,\underbrace{x,\ldots,x}_{a\text{ \rm times}}, \underbrace{y,\ldots,y}_{b\text{ \rm times}}, \underbrace{[x,y],\ldots,[x,y]}_{c\text{ \rm times}}]\mid a+b>0,c\geq 0\}. \] Its Hilbert series is \[ H(L_2({\mathfrak A}{\mathfrak N}_2),t_1,t_2)=t_1+t_2+t_1t_2 +\frac{t_1t_2(t_1+t_2)}{(1-t_1)(1-t_2)(1-t_1t_2)} \] \[ =S_{(1)}(t_1,t_2)+S_{(1^2)}(t_1,t_2) +\sum_{\lambda_1>\lambda_2\geq 1}S_{(\lambda_1,\lambda_2)}(t_1,t_2) \] and the highest weight vectors of $L_2({\mathfrak A}{\mathfrak N}_2)$ are \[ x,\quad [x,y],\quad [y,x,\underbrace{x,\ldots,x}_{a\text{ \rm times}}, \underbrace{[x,y],\ldots,[x,y]}_{c\text{ \rm times}}],\ a>0,c\geq 0. \] Hence the algebra $L_2({\mathfrak A}{\mathfrak N}_2)^{\delta}$ is generated by $x$ and $[x,y]$. \end{example} \begin{example} We consider the relatively free algebra $L_2(\text{\rm var }sl_2(K))$ of the variety of Lie algebras generated by the algebra of $2\times 2$ traceless matrices. This algebra is isomorphic to the Lie algebra generated by the generic $2\times 2$ traceless matrices $x_0,y_0$ considered in Section 7.
By Drensky \cite{D1}, as a $GL_2$-module $L_2(\text{\rm var }sl_2(K))$ has the decomposition \[ L_2(\text{\rm var }sl_2(K))\cong W(1)\bigoplus\sum W(\lambda_1,\lambda_2), \] where the summation runs over all $\lambda=(\lambda_1,\lambda_2)$ such that $\lambda_2>0$ and at least one of the integers $\lambda_1,\lambda_2$ is odd. The highest weight vectors of $W(\lambda_1,\lambda_2)$ are given in \cite{D1} but we do not need their concrete form for our purposes. The algebra of constants $L_2(\text{\rm var }sl_2(K))^{\delta}$ is bigraded. Assuming that the degree of $x$ corresponds to $t$ and the degree of $y$ is $u=v/t$, the Hilbert series of $L_2(\text{\rm var }sl_2(K))^{\delta}$ is \[ H(L_2(\text{\rm var }sl_2(K))^{\delta},t,v)= t+v\left(\sum_{p,q\geq 0}t^pv^q-\sum_{p,q\geq 0}t^{2p}v^{2q+1}\right) \] \[ =t+\frac{v}{(1-t)(1-v)}-\frac{v^2}{(1-t^2)(1-v^2)}. \] If $L_2(\text{\rm var }sl_2(K))^{\delta}$ is finitely generated, we may fix a finite system of bigraded generators. For every homogeneous $f\in L_2(\text{\rm var }sl_2(K))^{\delta}$ we have $\text{deg}_xf\geq\text{deg}_yf$. Hence the subalgebra spanned by the homogeneous components of bidegree $(n,n)$, $n$ odd, is also finitely generated. This subalgebra is infinite dimensional and its Hilbert series is obtained from the Hilbert series $H(L_2(\text{\rm var }sl_2(K))^{\delta},t,v)$ by the substitution $t=0$, i.e. \[ H(L_2(\text{\rm var }sl_2(K))^{\delta},0,v) =\frac{v}{1-v}-\frac{v^2}{1-v^2}. \] Besides, the subalgebra is abelian because the commutator of any two highest weight vectors $w_{(2p+1,2p+1)}$ and $w_{(2q+1,2q+1)}$ is a highest weight vector $w_{(2(p+q+1),2(p+q+1))}$ which does not participate in the decomposition of $L_2(\text{\rm var }sl_2(K))^{\delta}$. Since the finitely generated abelian Lie algebras are finite dimensional, we obtain a contradiction which gives that $L_2(\text{\rm var }sl_2(K))^{\delta}$ cannot be finitely generated.
\end{example} \begin{example} \label{abelian-by-nilpotent of class 3} The free abelian-by-\{nilpotent of class 3\} Lie algebra $L_2({\mathfrak A}{\mathfrak N}_3)=L_2/[L_2,L_2,L_2,L_2]'$ has a basis consisting of $x,y$ and commutators of the form \[ [y,x,\underbrace{x,\ldots,x}_{a\text{ \rm times}}, \underbrace{y,\ldots,y}_{b\text{ \rm times}}, \underbrace{[x,y],\ldots,[x,y]}_{c\text{ \rm times}}, \underbrace{[y,x,x],\ldots,[y,x,x]}_{d\text{ \rm times}}, \underbrace{[y,x,y],\ldots,[y,x,y]}_{e\text{ \rm times}}], \] with some natural restrictions on $a,b,c,d,e\geq 0$ which guarantee that these commutators are different from zero and, up to a sign, pairwise different. If the algebra of constants $L_2({\mathfrak A}{\mathfrak N}_3)^{\delta}$ is finitely generated, then it has a generating set consisting of a finite number of bihomogeneous elements $w_1,\ldots,w_k$ of degree $\geq 4$ (and bidegree $(n_1,n_2)$, where $n_1\geq n_2$) and constants of degree $\leq 3$ (i.e. $x,[x,y],[y,x,x]$). Since the commutators of length $\geq 4$ commute, we derive that $L_2({\mathfrak A}{\mathfrak N}_3)^{\delta}$ is a sum of the Lie subalgebra $N$ generated by $x,[x,y],[y,x,x]$ and the $N$-module generated by $w_1,\ldots,w_k$. The following elements are constants: \[ u_n=\sum_{\rho,\sigma,\ldots,\tau\in S_2}\text{\rm sign}(\rho\sigma\cdots\tau) [y,x,x,z_{\rho(1)},z_{\sigma(1)},\ldots,z_{\tau(1)}, \] \[ [x,y,z_{\rho(1)}],[x,y,z_{\sigma(1)}],\ldots,[x,y,z_{\tau(1)}]], \] where $\{z_1,z_2\}=\{x,y\}$ and, in the summation, $\rho,\sigma,\ldots,\tau$ run over $n$ copies of the symmetric group $S_2$. They are homogeneous of bidegree $(2n+2,2n+1)$ and hence can be written as linear combinations of commutators involving a $w_i$, several $[x,y]$ and not more than one $x$ or $[y,x,x]$. But this is impossible because for sufficiently large $n$ one cannot obtain the summands of $u_n$ \[ [y,x,x,\underbrace{x,\ldots,x}_{n\text{ \rm times}},
\] Hence the algebra $L_2({\mathfrak A}{\mathfrak N}_3)^{\delta}$ is not finitely generated. \end{example} \begin{example} Let $m>2$ and let $\delta$ be the Weitzenb\"ock derivation of the free metabelian Lie algebra $L_m({\mathfrak A}^2)$ defined by $\delta(x_2)=x_1$, $\delta(x_j)=0$ for $j\not=2$. Since $L_m({\mathfrak A}^2)$ has a basis consisting of $x_j$ and all commutators $[x_{i_1},x_{i_2},\ldots,x_{i_n}]$ with $i_1>i_2\leq i_3\leq\cdots\leq i_n$, the free generators $x_j$, $j\not=2$, and the commutators which do not include $x_2$ are constants. It is easy to see that the commutators with $x_2$ are of the form \[ u'=[x_2,\underbrace{x_1,\ldots,x_1}_{a\text{ \rm times}}, \underbrace{x_2,\ldots,x_2}_{b\text{ \rm times}},x_{i_k},\ldots,x_{i_n}],\quad a>0,b\geq 0,i_k>2, \] \[ u''=[x_i,\underbrace{x_1,\ldots,x_1}_{a\text{ \rm times}}, \underbrace{x_2,\ldots,x_2}_{b\text{ \rm times}}, x_{i_k},\ldots,x_{i_n}],\quad a\geq 0,b>0,i,i_k>2. \] It is easy to see that a linear combination of $u'$ and $u''$ is a constant if and only if it contains as summands only $u'$ with $b=0$ and does not contain any $u''$. Hence the algebra of constants $L_m({\mathfrak A}^2)^{\delta}$ is generated by $x_1,x_j$, $j>2$, and $[x_1,x_2]$. \end{example} \begin{example} Let $m>2$ and let $\delta$ be the Weitzenb\"ock derivation of the free abelian-by-\{nilpotent of class 2\} Lie algebra $L_m({\mathfrak A}{\mathfrak N}_2)$ defined, as in the previous example, by $\delta(x_2)=x_1$, $\delta(x_j)=0$ for $j\not=2$. We define a $GL_2$-action on $L_m({\mathfrak A}{\mathfrak N}_2)$ assuming that $GL_2$ fixes $x_3,\ldots,x_m$ and acts canonically on the linear combinations of $x_1,x_2$. Then the subspaces of $L_m({\mathfrak A}{\mathfrak N}_2)$ which are homogeneous in each variable $x_3,\ldots,x_m$ are $GL_2$-invariant.
This easily implies that the algebra of constants $L_m({\mathfrak A}{\mathfrak N}_2)^{\delta}$ is multigraded and $\text{\rm deg}_{x_1}f\geq \text{\rm deg}_{x_2}f$ for each multihomogeneous constant $f$. If the algebra $L_m({\mathfrak A}{\mathfrak N}_2)^{\delta}$ is finitely generated, then as in Example \ref{abelian-by-nilpotent of class 3}, it is generated by $x_1,[x_1,x_2],x_3,x_4,\ldots,x_m$ and a finite system $w_1,\ldots,w_k$ of homogeneous elements of degree $\geq 3$. Then $L_m({\mathfrak A}{\mathfrak N}_2)^{\delta}$ is a sum of the subalgebra $N$ generated by $x_1,[x_1,x_2],x_3,x_4,\ldots,x_m$ and the $N$-module generated by $w_1,\ldots,w_k$. The constants \[ u_n=\sum_{\rho,\sigma,\ldots,\tau\in S_2}\text{\rm sign}(\rho\sigma\cdots\tau) [x_1,x_2,x_{\rho(1)},x_{\sigma(1)},\ldots,x_{\tau(1)}, \] \[ [x_3,x_{\rho(1)}],[x_3,x_{\sigma(1)}],\ldots,[x_3,x_{\tau(1)}]], \] where in the summation $\rho,\sigma,\ldots,\tau$ run over $n$ copies of the symmetric group $S_2$, are homogeneous of degree $(n+1,n+1,n,0,\ldots,0)$ and arguments as in Example \ref{abelian-by-nilpotent of class 3} show that this is impossible. Hence the algebra $L_m({\mathfrak A}{\mathfrak N}_2)^{\delta}$ cannot be finitely generated. \end{example} In the above examples, the matrix of the Weitzenb\"ock derivation $\delta$ (as a linear operator acting on the vector space with basis $\{x_1,\ldots,x_m\}$) is of rank 1. This gives rise to the following natural problem. \begin{problem} If the matrix of the Weitzenb\"ock derivation $\delta$ is of rank $1$, find the exact frontier where the algebra of constants $L_m({\mathfrak W})^{\delta}$ becomes finitely generated, i.e. describe all varieties of Lie algebras $\mathfrak W$ and all integers $m>1$ such that the algebra $L_m({\mathfrak W})^{\delta}$ is finitely generated. \end{problem} Finally, we shall give the solution of this problem in the case of rank $\geq 2$.
\begin{theorem} Let $\mathfrak W$ be a nonnilpotent variety of Lie algebras and let $\delta$ be a Weitzenb\"ock derivation of the relatively free algebra $L_m({\mathfrak W})$, $m\geq 3$. If the rank of the matrix of $\delta$ is $\geq 2$, then the algebra of constants $L_m({\mathfrak W})^{\delta}$ is not finitely generated. \end{theorem} \begin{proof} As in the associative case, it is sufficient to establish the theorem for the metabelian variety of Lie algebras only. We consider the abelian wreath product of Lie algebras \[ W_m=(Ky_1\oplus \cdots\oplus Ky_m)\rightthreetimes \sum_{j=1}^ma_jK[y_1,\ldots,y_m], \] where $[y_i,y_j]=[a_if_i,a_jf_j]=0$ and $[a_if_i,y_j]=a_if_iy_j$ ($f_i,f_j\in K[y_1,\ldots,y_m]$). Then by the theorem of Shmelkin \cite{Sh2} the mapping $\iota:x_j\to a_j+y_j$, $j=1,\ldots,m$, defines an embedding of the free metabelian Lie algebra $L_m({\mathfrak A}^2)$ into $W_m$. We assume that $\delta$ is in its Jordan normal form (and $\delta(x_2)=x_1$, $\delta(x_1)=0$). Hence the fixed part of $Ky_1\oplus \cdots\oplus Ky_m$ is of dimension $m-\text{\rm rank}(\delta) \leq m-2$ and is spanned by some of the free generators $x_{j_1}=x_1,x_{j_2},\ldots,x_{j_p}$, $p\leq m-2$. If the algebra $L_m({\mathfrak A}^2)^{\delta}$ is finitely generated, then it is a sum of $Kx_1\oplus Kx_{j_2}\oplus\cdots\oplus Kx_{j_p}$ and a finitely generated $K[x_1,x_{j_2},\ldots,x_{j_p}]$-submodule of the commutator ideal $L_m({\mathfrak A}^2)'$. But, as in the associative case, this is impossible because the image of this module under $\iota$ should contain for example $\iota([x_2,x_1])K[y_1,\ldots,y_m]^{\delta}$ and the transcendence degree of $K[y_1,\ldots,y_m]^{\delta}$ is equal to $m-1$. \par One can see directly that if $\delta(x_3)=x_2$, then a finitely generated subalgebra of $\iota\left(L_m({\mathfrak A}^2)^{\delta}\right)$ cannot contain all constants \[ \iota([x_2,x_1])(x_2^2-2x_1x_3)^n,\quad n\geq 0.
\] Similarly, if $\delta(x_4)=x_3,\delta(x_3)=0$, then $\iota\left(L_m({\mathfrak A}^2)^{\delta}\right)$ cannot contain all \[ \iota([x_2,x_1])(x_1x_4-x_2x_3)^n,\quad n\geq 0. \] \end{proof} \section*{Acknowledgements} This project was carried out when the first author visited the Department of Mathematics of the University of Manitoba in Winnipeg. He is very thankful for the kind hospitality and the creative atmosphere. The first author is also very grateful to Andrzej Nowicki for the useful discussions on Weitzenb\"ock derivations of polynomial algebras.
\section{An upper bound for $f(n,a)$} The goal of this section is to provide an upper bound for $f(n,a)$ for all $a,n\in \mathbb{N}^+$. To do so, we consider an integer program that outputs $f(n,a)$ in Section~\ref{sec:results}. We first introduce some necessary concepts from linear and integer programming in Section~\ref{sec:IP}. \subsection{Linear and Integer Programming}\label{sec:IP} Let $A\in \mathbb{R}^{m\times n}$, $\mathbf{b}\in \mathbb{R}^m$ and $\mathbf{c}\in \mathbb{R}^n$. The following is an \emph{integer program}: $\max\{\mathbf{c}^\top\mathbf{x}| A\mathbf{x} \leq \mathbf{b}, \mathbf{x}\in \mathbb{Z}^n\}$. Let $\lambda$ be the resulting \emph{optimal value} and say $\mathbf{x}^*$ is an \emph{optimal solution} if $A\mathbf{x}^* \leq \mathbf{b}$, $\mathbf{x}^* \in \mathbb{Z}^n$ and $\mathbf{c}^\top \mathbf{x}^* = \lambda$. Let $P:= \{\mathbf{x}\in \mathbb{Z}^n | A\mathbf{x}\leq \mathbf{b}\}$. Any $\mathbf{x}\in P$ is said to be a \emph{solution} of the integer program. We now consider a \emph{linear relaxation} of the previous integer program: $\max\{\mathbf{c}^\top\mathbf{x}| A\mathbf{x} \leq \mathbf{b}, \mathbf{x}\in \mathbb{R}^n\}$. This program is a \emph{linear program}. Let $\bar{\lambda}$ be the resulting optimal value and say $\bar{\mathbf{x}}$ is an \emph{optimal solution} if $A\bar{\mathbf{x}} \leq \mathbf{b}$, $\bar{\mathbf{x}} \in \mathbb{R}^n$ and $\mathbf{c}^\top \bar{\mathbf{x}} = \bar{\lambda}$. Let $\bar{P}:= \{\mathbf{x}\in \mathbb{R}^n | A\mathbf{x}\leq \mathbf{b}\}$. Any $\mathbf{x}\in \bar{P}$ is said to be a \emph{solution} of the linear program. First note that $P \subseteq \bar{P}$. Thus any optimal solution $\mathbf{x}^*$ for the integer program is in $\bar{P}$ and $\mathbf{c}^\top \mathbf{x}^* \leq \bar{\lambda}$. Thus $\lambda \leq \bar{\lambda}$, that is, the linear relaxation yields an upper bound to the original integer program.
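As a toy illustration of the inequality $\lambda\leq\bar{\lambda}$ (our own example, unrelated to the program for $f(n,a)$ below), consider $\max\{x_1+x_2 \mid 2x_1+2x_2\leq 3,\ 0\leq x_1,x_2\leq 2\}$: the integer optimum is $1$, while the relaxation attains $3/2$ at the fractional point $(3/4,3/4)$. A short pure-Python check, with the integer program solved by enumeration and the relaxation optimum computed by hand (no LP solver is used):

```python
from fractions import Fraction
from itertools import product

# Toy instance: max x1 + x2  s.t.  2*x1 + 2*x2 <= 3,  0 <= x1, x2 <= 2.
def feasible(x1, x2):
    return 2 * x1 + 2 * x2 <= 3 and 0 <= x1 <= 2 and 0 <= x2 <= 2

# Integer optimum lambda: enumerate all integer points of the bounded box.
lam = max(x1 + x2 for x1, x2 in product(range(3), repeat=2) if feasible(x1, x2))
assert lam == 1

# Relaxation optimum lambda_bar = 3/2 (by hand: the constraint 2*x1 + 2*x2 <= 3
# is tight at any optimum); it is attained at the fractional point (3/4, 3/4).
lam_bar = Fraction(3, 2)
assert feasible(Fraction(3, 4), Fraction(3, 4))

# The relaxation upper-bounds the integer program.
assert lam <= lam_bar
print(lam, lam_bar)
```

For instances as large as the $f(n,a)$ program below, the enumeration is of course hopeless, which is exactly why one passes to the linear relaxation and its dual.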
One advantage of the linear relaxation is that it can be solved in polynomial time through the \emph{ellipsoid method}, for example. No polynomial-time algorithm to solve integer programs is known in general. Further, one can apply the theory of \emph{strong duality} to the linear relaxation. Through it, we obtain that, if $\bar{P}$ is not empty and $\bar{\lambda}$ is finite, then $\bar{\lambda}=\min\{\mathbf{b}^\top \mathbf{y} | A^\top \mathbf{y} = \mathbf{c}, \mathbf{y} \geq \mathbf{0}, \mathbf{y}\in \mathbb{R}^m\}$, i.e., there exists a different linear program that yields the same optimal value. Finally, consider one last linear program where we add additional constraints $$\min\{\mathbf{b}^\top \mathbf{y} | A^\top \mathbf{y} = \mathbf{c}, C\mathbf{y}=\mathbf{d}, \mathbf{y} \geq \mathbf{0}, \mathbf{y}\in \mathbb{R}^m\}$$ where $C\in \mathbb{R}^{k\times m}$ and $\mathbf{d}\in \mathbb{R}^k$. Let the optimal value of this linear program be $\tilde{\lambda}$. Note that any optimal solution $\tilde{\mathbf{y}}$ of this linear program is a solution of the previous linear program. Therefore, $\tilde{\lambda}\geq\bar{\lambda}\geq \lambda$ and this last linear program also yields an upper bound to the original integer program. \subsection{Results}\label{sec:results} In \cite{PRT}, the authors introduced the following integer program that computes $f(n,a)$ for any fixed $n, a\in \mathbb{N}^+$. \begin{align*} f(n,a) = \max & \sum_{S\in \mathcal{S}_n} x_S & \\ \textup{s.t. } & x_S+x_T \leq 1+x_{S\cup T} & \forall S, T \in \mathcal{S}_n \\ & \sum_{\substack{S\in \mathcal{S}_n: \\ e\in S}} x_S \leq a & \forall e\in [n] \\ & x_S \in \{0,1\} & \forall S\in \mathcal{S}_n \end{align*} The claim is that $\mathcal{S}:=\{S \in 2^{[n]}|x_S=1\}$ is a union-closed family. Indeed, from the first set of constraints, if sets $S$ and $T$ are present in $\mathcal{S}$, then the associated variables are one, and thus $x_{S\cup T}$ must also be one, meaning that $S\cup T$ must also be in $\mathcal{S}$.
If either $S$ or $T$ is not present, then there is no restriction on whether $S\cup T$ must be in the collection. The second set of constraints ensures that each element is in at most $a$ sets of the collection. Finally, the total number of sets in such a union-closed collection is maximized by the objective function. The following lemma from \cite{PRT} (Proposition 12.1 and Theorem 20 in that paper) is useful in restricting which $f(n,a)$'s need to be studied. \begin{lemma}{\cite{PRT}}\label{diagonal} In general, $f(n,a)\leq f(n+1,a)$ for all $a, n\in \mathbb{N}$. Moreover, $f(n,a)=f(n+1,a)$ for all $n\geq a-1$. \end{lemma} Note that this implies that for a fixed $a\in \mathbb{N}$, $f(n,a)\leq f(a,a)$ for all $n\in \mathbb{N}$. Integer programming solvers such as Gurobi or Cplex can compute the values of $f(a,a)$ for $a$ up to $8$. \[ \begin{array}{|c|c|} \hline a & f(a,a)\\ \hline 1 & 2\\ 2 & 4\\ 3 & 5\\ 4 & 8\\ 5 & 9\\ 6 & 10\\ 7 & 12\\ 8 & 16\\ \hline \end{array} \] For other values, we give the following upper bound by applying the concepts of linear and integer programming discussed in the previous subsection. We will consider the dual of the linear relaxation of $f(n,a)$ and add constraints requiring all variables corresponding to union-closed inequalities involving sets of some fixed cardinalities to be the same. \begin{theorem}\label{thm:upperbound} We have that $f(n,a)\leq \frac{5a^4-12a^3+31a^2-24a+48}{12(a^2-3a+4)}$ for all integers $a\geq 7$ and all $n\in \mathbb{N}^+$. \end{theorem} \begin{proof} Let \begin{align*} \alpha & = 1-\frac{2\binom{n-1}{2}}{3+3\binom{n-1}{2}} = \frac{n^2-3n+8}{3n^2-9n+12}\\ \beta & = \frac{2}{3+3\binom{n-1}{2}}=\frac{4}{3n^2-9n+12}\\ \gamma &=\frac{1}{\binom{n-2}{2}}\left(-1 + \frac{2(n-2)^2}{3+3\binom{n-1}{2}}\right). \end{align*} Note that $\gamma\geq 0$ if $n\geq 7$ and $\alpha, \beta \geq 0$ for all $n\geq 0$.
We claim that the linear combination obtained by taking \begin{align*} &\sum_{e\in [n]} \alpha \left(\sum_{\substack{S\in \mathcal{S}_n:\\e\in S}} x_S \leq a\right)\\ + &\sum_{\substack{S,T\in \mathcal{S}_n:\\|S|=1, |T|=2\\ |S\cup T|= 3}} \beta \left(x_S+x_T -x_{S\cup T} \leq 1\right) \\ + &\sum_{\substack{S, T\in \mathcal{S}_n:\\|S|=2, |T|=2\\ |S\cup T|= 4}} \gamma \left(x_S+x_T -x_{S\cup T} \leq 1 \right) \\ + & x_\emptyset \leq 1 \end{align*} yields $$\sum_{S\in \mathcal{S}_n} c_S x_S \leq \bar{f}(n,a)$$ where each $c_S \geq 1$ and $\bar{f}(n,a)=n\cdot a \cdot \alpha + 3\binom{n}{3} \cdot 1 \cdot \beta + 3\binom{n}{4}\cdot 1\cdot \gamma + 1$. Let's check this by calculating the coefficient for sets $S$ of different sizes. Let's call inequalities $\sum_{\substack{S\in \mathcal{S}:\\e\in S}} x_S \leq a$ \emph{frequency inequalities}, $x_S+x_T -x_{S\cup T}\leq 1$ where $|S|=1$, $|T|=2$, $|S\cup T|= 3$ \emph{123-union-closed inequalities}, and $x_S+x_T -x_{S\cup T}\leq 1$ where $|S|=2$, $|T|=2$, $|S\cup T|= 4$ \emph{224-union-closed inequalities}. \begin{itemize} \item $|S|=0$: the empty set only appears once with a coefficient of 1. \item $|S|=1$: any $1$-element set will appear in exactly one frequency inequality and $\binom{n-1}{2}$ 123-union-closed inequalities, and no 224-union-closed inequalities. Thus the coefficient for any $1$-element set will be $1\alpha + \binom{n-1}{2} \beta = 1$. \item $|S|=2$: any $2$-element set will appear in exactly two frequency inequalities and $\binom{n-2}{1}$ 123-union-closed inequalities and $\binom{n-2}{2}$ 224-union-closed inequalities. Note that it always appears positively. Thus the coefficient for any $2$-element set will be $2\alpha + \binom{n-2}{1} \cdot \beta + \binom{n-2}{2}\gamma = 1$. \item $|S|=3$: any $3$-element set will appear in exactly three frequency inequalities. It will also appear negatively in three 123-union-closed inequalities, and in no 224-union-closed inequality.
Thus any $3$-element set will have coefficient $3\alpha-3\beta=1$. \item $|S|=4$: any $4$-element set will appear in exactly four frequency inequalities. It will also appear negatively in three 224-union-closed inequalities, and in no 123-union-closed inequality. Thus any $4$-element set will have coefficient $4\alpha - 3\gamma\geq 1$. \item $|S|=i, i\geq 5$: any $i$-element set will appear in exactly $i$ frequency inequalities and nowhere else. Then $c_S=i\alpha\geq 5\alpha=\frac{5n^2-15n+40}{3n^2-9n+12}$, which is always at least 1. \end{itemize} Finally, note that in our linear combination, we take $n$ frequency inequalities, $3\binom{n}{3}$ 123-union-closed inequalities, $3\binom{n}{4}$ 224-union-closed inequalities and one empty set inequality. Thus $$\bar{f}(n,a)=n\cdot a \cdot \alpha + 3\binom{n}{3} \cdot 1 \cdot \beta + 3\binom{n}{4}\cdot 1\cdot \gamma + 1.$$ Since $x_S\geq 0$ for all $S\in \mathcal{S}_n$, $\sum_{S\in \mathcal{S}_n} x_S \leq \sum_{S\in \mathcal{S}_n} c_S x_S$, and so $\bar{f}(n,a)$ is an upper bound for $f(n,a)$. By Lemma~\ref{diagonal}, we know that, for a fixed $a\in \mathbb{N}$, $f(n,a)\leq f(a,a)$ for all $n\in \mathbb{N}$. Thus, finding an upper bound for $f(a,a)$ yields an upper bound for all $f(n,a)$. Thus $$\bar{f}(a,a)=\frac{5a^4-12a^3+31a^2-24a+48}{12(a^2-3a+4)}$$ is an upper bound for $f(n,a)$ for all $n\in \mathbb{N}$. \end{proof} To give the reader a better grasp on this upper bound, here is a table compiling a few values of $\lfloor \bar{f}(a,a) \rfloor$. \[ \begin{array}{|c|c|} \hline a & \lfloor \bar{f}(a,a) \rfloor \\ \hline 7 & 24\\ 8 & 30\\ 9 & 37\\ 10 & 46\\ 11 & 55\\ 12 & 64\\ 13 & 75\\ 14 & 86 \\ 15 & 99\\ 16 & 112\\ \hline \end{array} \] \subsection{Future directions}\label{sec:future} We first note that the result we found in Theorem \ref{thm:upperbound} is an upper bound for the linear relaxation of $f(n,a)$ where we replace $x_S \in \{0,1\}$ by $0\leq x_S \leq 1$.
In other words, we are giving an upper bound to an upper bound of $f(n,a)$, namely to its linear relaxation $f_r(n,a)$. For example, $\lfloor f_r(8,8)\rfloor =20<\lfloor \bar{f}(8,8) \rfloor = 30$ and $\lfloor f_r(9,9)\rfloor =26 < \lfloor \bar{f}(9,9)\rfloor =37$. To find this upper bound for the linear relaxation, we considered its dual and added constraints requiring that all variables corresponding to union-closed inequalities involving sets of some fixed cardinalities $a$, $b$ and $c$ be the same. This is very restrictive. Thus it might be possible to give a better upper bound for the linear relaxation of $f(n,a)$, or even to find its exact value. Furthermore, the linear relaxation itself gets weaker as $n$ increases. By adding valid linear inequalities, one can strengthen the linear relaxation. A few are discussed in \cite{thesis}. Finally, note that the formula we found for $\bar{f}(n,a)$ in the proof of Theorem~\ref{thm:upperbound} holds for any $n,a$ with $n\geq 7$, and not only for $n=a$. In \cite{PRT}, the authors conjectured that $f(n,a)=f(n+1,a)$ for all $n\geq \lceil \log_2 a \rceil +1$, i.e., for all values of $n$ for which it makes sense to compute $f(n,a)$ given some particular $a$. If that conjecture is true, then $f(n,a)$ is upper bounded by $f(\lceil \log_2 a \rceil + 1,a)$ and thus by $\bar{f}(\lceil \log_2 a \rceil+1, a)$ for all $a$. Note that $\bar{f}(\lceil \log_2 a \rceil+1, a)$ yields an upper bound similar to Knill's lower bound, which states that for any union-closed family with $m$ sets, there exists an element in at least $\frac{m-1}{\log_2 m}$ sets. We believe that these techniques have much more to offer. Despite all the simplifications we used, they still led to some results. By removing the harshest of these simplifications, one might obtain new and interesting results for the Frankl conjecture. \section{A proof of $f(n,2^{n-1}-1)=2^n-n$}\label{sec:summer} \begin{definition} Fix $n$ and $m$.
Then let $g(n,m)$ be the minimum number of sets containing the most frequent element in a union-closed family of $m$ sets on $n$ elements. \end{definition} \begin{lemma}\label{lem:missingsubsets} Consider a union-closed family that does not contain some set $S$ where $|S|\geq 2$. Then the family contains at most one set $T\subset S$ such that $|T|=|S|-1$. \end{lemma} \proof Suppose not: suppose there exist sets $T_1$ and $T_2$ in the family such that $T_1,T_2\subset S$ and $|T_1|=|T_2|=|S|-1$. Then $T_1\cup T_2=S$, and so $S$ would have to be in the family as well since it is union-closed, a contradiction. \qed \smallskip \begin{lemma}\label{lem:missingcovering} Let $\mathcal{S}$ be a union-closed family on $n$ elements, and let $\mathcal{S}_n\backslash \S=\{S_1, \ldots, S_k\}$. If $S_1\cup \ldots\cup S_k \supseteq \{e_1, \ldots, e_l\}\neq \emptyset$ for some $e_1, e_2, \ldots, e_l\in [n]$, then $k\geq l$. \end{lemma} \begin{proof} We show this by induction on $l$. If $l=1$, then $\mathcal{S}_n\backslash \S$ cannot be empty, so $k\geq 1$. (Similarly, if $l=2$, one cannot remove a single set $S_1$ containing both $e_1$ and $e_2$: since $|S_1|\geq 2$, two of its subsets of size $|S_1|-1$ would remain in $\S$, and their union $S_1$ would then have to be in $\S$ as well, so $\S$ would not be union-closed. Thus $k\geq 2$.) Now suppose the statement holds up to $l-1$; we show it for $l$. Among $S_1, S_2, \ldots, S_k$, let $S^*$ be a set of maximum cardinality. \textbf{Case 1:} Suppose $2 \leq |S^*| \leq l-1$. Let $S_{i_1}, \ldots, S_{i_t} \in \mathcal{S}_n \backslash \S$ be the sets such that $S_{i_j}$ contains no other nonempty set of $\mathcal{S}_n\backslash\S$ as a subset and $S_{i_j}\not\subseteq S^*$. Note that $S^*\cup S_{i_1}\cup \ldots \cup S_{i_t} \supseteq \{e_1, e_2, \ldots, e_l\}$ since any element $e_j$ is in at least one set of $\mathcal{S}_n\backslash\S$, and certainly a set $\bar{S}$ in $\mathcal{S}_n\backslash\S$ of minimum cardinality containing $e_j$ contains no other nonempty set of $\mathcal{S}_n\backslash\S$ as a subset.
Either we picked $\bar{S}$ or it is a subset of $S^*$; in both cases, $e_j$ will be in the union. By Lemma~\ref{lem:missingsubsets}, since $|S^*|\geq 2$, there are at least $|S^*|-1$ subsets of $S^*$ of size $|S^*|-1$ that are also in $\mathcal{S}_n \backslash \S$. Note that none of these subsets is a set that we kept, since we did not keep any set that is a subset of $S^*$. Note that the collection of sets $S_{i_1}, \ldots, S_{i_t}$ is such that $$S_{i_1}\cup \ldots \cup S_{i_t} \supseteq \{e_1, e_2, \ldots, e_l\}\backslash S^*.$$ Furthermore, note that there is a union-closed family $\S'$ on $n$ elements such that $\mathcal{S}_n\backslash\S'=\{S_{i_1}, \ldots, S_{i_t}\}$. Indeed, it cannot be that there exist sets $T_1$ and $T_2$ in $\S'$ such that $T_1\cup T_2=S_{i_j}$ for some $1\leq j \leq t$. If either $T_1$ or $T_2$ had been in $\mathcal{S}_n\backslash \S$, then we would not have kept $S_{i_j}$, since $T_1$ and $T_2$ are subsets of $S_{i_j}$. That means that $T_1$ and $T_2$ were both in $\S$, but then $\S$ would not have been union-closed since their union $S_{i_j}$ was not in $\S$. Thus, we can use the induction hypothesis to deduce that $t\geq l-|S^*|$. So we have found at least these $l-|S^*|$ sets in $\mathcal{S}_n\backslash \S$, as well as $S^*$ itself and its $|S^*|-1$ subsets, for a total of $l$ sets, as desired. \textbf{Case 2:} Suppose $|S^*|=1$. Then all sets $S_1, \ldots, S_k$ are singletons (or the empty set), so to cover $l$ elements, one needs at least $l$ sets, so $k\geq l$ as desired. \textbf{Case 3:} Suppose $|S^*|\geq l$. By Lemma~\ref{lem:missingsubsets}, at least $|S^*|-1\geq l-1$ subsets of $S^*$ of cardinality $|S^*|-1$ are also in $\mathcal{S}_n\backslash\S$, meaning that there are also at least $l$ sets in $\mathcal{S}_n \backslash \S$, as desired. \end{proof} \begin{theorem}\label{thm:g} The following holds: $g(n,2^n-i)=2^{n-1}$ for $0\leq i \leq n-1$.
\end{theorem} \begin{proof} We will show that for any union-closed family $\S$ of $m$ sets on $n$ elements where $m=2^n-i$ for some $0 \leq i \leq n-1$, there exists an element in $2^{n-1}$ sets. In other words, we will show that there is an element that is not in any of the sets in $\mathcal{S}_n\backslash \S$. By Lemma~\ref{lem:missingcovering}, if the sets in $\mathcal{S}_n\backslash\S$ covered $[n]$, there would have to be at least $n$ sets in $\mathcal{S}_n\backslash \S$. But we know that $|\mathcal{S}_n\backslash\S|=i$ for some $0\leq i \leq n-1$. Thus the sets in $\mathcal{S}_n\backslash\S$ cannot cover $[n]$, and there is an element that is not in any of the sets in $\mathcal{S}_n\backslash\S$. \end{proof} Similarly, one can show that $f(n,2^{n-1}-1)=2^n-n$. Note that it is clear that $f(n,2^{n-1})=2^n$, as one can take the whole power set of $[n]$. \begin{theorem} We have that $f(n,2^{n-1}-1)=2^n-n$ for all $n\in \mathbb{N}^+$. \end{theorem} \begin{proof} Suppose that $f(n,2^{n-1}-1)=2^n-n+k$ for some $k\in [n]$. Then, by Proposition 12.5 of \cite{PRT}, $g(n,2^n-n+k)=2^{n-1}-1$, which contradicts Theorem~\ref{thm:g}. Thus, we have that $f(n,2^{n-1}-1)\leq 2^n-n$. Now consider the power set $\mathcal{S}_n$. Each element is in exactly $2^{n-1}$ sets. Remove the $n$ singletons, i.e., the $n$ sets containing exactly one element. The family one thus obtains is still union-closed, has $2^n-n$ sets, and each element is in $2^{n-1}-1$ sets. Therefore, we also have that $f(n,2^{n-1}-1)\geq 2^n-n$, and so the theorem holds. \end{proof} \bibliographystyle{alpha}
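The lower-bound construction in the last proof is straightforward to check by brute force for small $n$; a short Python sketch (the helper names are ours):

```python
# Brute-force check of the construction: for small n, removing the n
# singletons from the power set of [n] leaves a union-closed family of
# 2^n - n sets in which every element occurs in exactly 2^(n-1) - 1 sets.
from itertools import combinations

def power_set(n):
    elems = range(n)
    return {frozenset(c) for k in range(n + 1) for c in combinations(elems, k)}

def is_union_closed(family):
    return all(a | b in family for a in family for b in family)

for n in range(2, 6):
    family = {s for s in power_set(n) if len(s) != 1}  # drop the singletons
    assert is_union_closed(family)
    assert len(family) == 2 ** n - n
    freq = {e: sum(1 for s in family if e in s) for e in range(n)}
    assert all(f == 2 ** (n - 1) - 1 for f in freq.values())
print("construction verified for n = 2..5")
```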
\section{Introduction} \label{Sec:intro} During the past forty years it has become clear that galaxy properties and evolution can be driven as much by environment as by initial conditions, even if the details of environmental influence are not yet well quantified. Some observed properties show a strong dependence on the environment, for instance, on the optical and ultraviolet luminosity and atomic gas mass function \citep{2009ARA&A..47..159B}, on the infrared luminosity function \citep{1991ApJ...374..407X} or the associated stellar mass function \citep{2001ApJ...557..117B}, on the morphology-mass relation \citep{2011MNRAS.tmpL.354C}, and on the galaxy colours \citep{2004ApJ...601L..29H,2006MNRAS.373..469B}. Isolated galaxies are located in environments of such low density that they have not been appreciably affected by their closest neighbours during a past crossing time $t_{\rm{cc}} = $\,3\,Gyr \citep{2005A&A...436..443V}. The observed physical properties of these systems are expected to be mainly determined by initial formation conditions and secular evolutionary processes. A representative sample of isolated galaxies is therefore needed to test models of galaxy formation and evolution. It may also serve as a reference sample in studies of galaxies in pairs, triplets, groups, and clusters. This will aid our understanding of the effects of the environment on fundamental galaxy properties. Statistical studies of isolated galaxies require a large and morphologically diverse sample. Few good samples of isolated galaxies exist, one of the largest being the Catalogue of Isolated Galaxies \citep[CIG;][]{1973AISAO...8....3K}. The original visual systematic search for isolated galaxies using the First Palomar Observatory Sky Survey (POSS-I) employs a visual projected isolation criterion. Since the redshift distances of only a few galaxies were known at that time, isolation in the third dimension could not be directly estimated. 
Instead, any galaxy with nearby similar-size neighbours was rejected. The resulting CIG includes 1050 galaxies (plus CIG 781, a globular cluster mistakenly included in the original list). Despite the importance of analysing pure 'nature' samples, not many additional studies of isolated galaxies were carried out in the following three decades \citep{1977ApJ...216..694H,1980AJ.....85.1010A,1981Afz....17...53A,1982ApJ...253..526B,2004A&A...420..873V}. This led many scientists to assume that no real isolated galaxy population exists. It is natural therefore that the AMIGA (\textbf{A}nalysis of the interstellar \textbf{M}edium of \textbf{I}solated \textbf{GA}laxies\footnote{\texttt{http://amiga.iaa.es}}) project \citep{2005A&A...436..443V} is based upon a re-evaluation of the CIG. It is a first step in trying to identify and better understand isolated galaxies in the local Universe. \citet{2008MNRAS.390..881D} used a subsample of 100 typical CIG galaxies (Sb, Sbc, and Sc) and found that most of them have a bulge-to-total luminosity ratio $B/T~<$~0.1. If $B/T$ is a measure of environmental dynamical processing \citep{2010ApJ...709L..53M}, galaxies in the CIG sample appear to be very little affected by it. The late-type population that dominates the AMIGA sample \citep{2006A&A...449..937S} may indicate that they have been alone for most of their lives. \citet{2007A&A...472..121V,2007A&A...470..505V} calculated two isolation parameters (the local number density and the tidal strength) for 950 galaxies in the CIG sample using an automated search for neighbours on the first and second digitised POSS (DPOSS-I and II) based on photographic plates. They provided an exhaustive list of $\sim$\,54,000 possible satellites that were used to identify several CIG galaxies failing the CIG isolation criteria. 
The first data release of the Sloan Digital Sky Survey \citep[SDSS-DR1;][]{2003AJ....126.2081A} rekindled interest in isolated galaxy studies in the past decade \citep{2005AJ....129.2062A}. In this context, newly available photometric data from the SDSS motivated us to perform a fully digital revision of the isolation degree for the CIG galaxies. The SDSS-III \citep{2011AJ....142...72E} maps one third of the sky using CCD detectors. The SDSS also provides spectra that allow us to estimate galaxy distances, enabling an improved revision of the degree of isolation for 411 CIG galaxies with a fairly complete spectroscopic coverage in the last Data Release \citep[DR9;][]{2012ApJS..203...21A} catalogue. Some other recent catalogues of isolated galaxies have been compiled, introducing isolation criteria that use the spectroscopic information from earlier SDSS data releases. \citet{2009MNRAS.394.1409E} applied three-dimensional Voronoi tessellation to volume-limited galaxy samples, using spectroscopic data from SDSS-DR5, and identified 2394 isolated galaxies. \citet{2009AN....330.1004V} refined the sample by selecting galaxies with the highest level of isolation, the QIsol sample, composed of 600 galaxies. These two samples suffer from the incompleteness of the SDSS spectroscopic sample, limited to $m_{\rm{r,Petrosian}} < 17.77$\,mag. To compensate for the SDSS spectroscopic incompleteness, other authors used statistical techniques \citep{2012MNRAS.424.1454E}, photometric redshifts also provided by the SDSS \citep{2011MNRAS.417..370G}, or selected different volume-limited samples \citep{2011ApJ...738..102T}. Outside the volume-limited samples considered, a revision for possible photometric companions is still needed \citep{2010AJ....139.2525H}. Here we perform a photometric and spectroscopic census and quantify the environment of the CIG galaxies covered by the SDSS-DR9.
In Sect.~\ref{Sec:amiga}, we present the CIG, as well as the revisions and improvements on isolation performed within the AMIGA project. In Sect.~\ref{Sec:data}, we describe in detail the data and methodology used to revise the isolation of the CIG galaxies in the SDSS, including a description of our automated pipeline used to produce a catalogue of their potential neighbours. The method to quantify the isolation degree, as well as the selection of the comparison samples used, are explained in Sect.~\ref{Sec:isolparam}. Results from the study using the photometric and spectroscopic catalogues of the SDSS are presented in Sect.~\ref{Sec:result}. A revision of the CIG is presented in Sect.~\ref{Sec:diss} to determine how many galaxies remain isolated based on the recent SDSS-DR9 data from both the photometric and the spectroscopic catalogues. Neighbours considered in each study are then used for the estimation of the isolation degree. We present our conclusions in Sect.~\ref{Sec:con}. \section{AMIGA project} \label{Sec:amiga} The AMIGA project adopts the Catalogue of Isolated Galaxies \citep[CIG;][]{1973AISAO...8....3K} as a starting point and proceeds to extract a refined subset of this historically most significant sample of isolated galaxies in the local Universe. All CIG galaxies are found in the Catalogue of Galaxies and Clusters of Galaxies \citep[CGCG;][]{1968cgcg.bookR....Z} with apparent photographic magnitudes $m_{\rm{pg}} < 15.7$\,mag. These very isolated systems represent $\sim$\,3\,\% of the CGCG. The CIG isolation criteria (Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1}) consider a primary galaxy of angular diameter $D_P$ as isolated if there is no neighbour $i$ with an angular diameter $D_{i}$ between $0.25\times D_P$ and $4\times D_P$ lying within a projected distance of 20 times the diameter of the neighbour: \begin{equation} \label{Eq:kara2} \frac{1}{4} \,D_{P} \leq D_{i} \leq 4 \,D_{P}\quad; \end{equation} \begin{equation} \label{Eq:kara1} R_{iP} \geq 20 \,D_{i}\quad.
\end{equation} The AMIGA project refines the pioneering CIG in several ways, including a revision of all galaxy positions \citep{2003yCat..34110391L}, an optical study, including sample redefinition, magnitude correction, and full-sample analysis of the optical luminosity function \citep{2005A&A...436..443V}, a morphological revision and type-specific optical luminosity function analysis \citep{2006A&A...449..937S}, a study on H{$\alpha$} morphology \citep{2007A&A...474...43V}, and a re-evaluation of the degree of isolation of the CIG \citep{2007A&A...472..121V,2007A&A...470..505V}. The original CIG contains 1051 items, but one of the compiled objects is a globular cluster \citep[CIG 781;][]{2005A&A...436..443V} so the size of the sample considered in the rest of this paper is N = 1050. The AMIGA project also started several multiwavelength studies for galaxies in the CIG: characterisation of the $B$-band luminosity function \citep{2006A&A...449..937S}; Fourier photometric decomposition, optical asymmetry, and photometric clumpiness and concentration \citep{2008MNRAS.390..881D,2009MNRAS.397.1756D}; characterisation of the FIR luminosity function \citep{2007A&A...462..507L}, radio-continuum \citep{2008A&A...485..475L}, molecular gas \citep{2012A&A...538C...1L}, and atomic gas \citep{2011A&A...532A.117E}; characterisation of nuclear activity \citep{2008A&A...486...73S,2012A&A...545A..15S}; optical colours \citep{2012A&A...540A..47F}; and optical study of the stellar mass-size relation \citep{2013MNRAS.434..325F}. \subsection{Previous revision of the CIG environment} \label{Sec:sample} One of the AMIGA improvements of the CIG involves the revision and quantification of the CIG isolation criteria. \citet{2007A&A...470..505V} used DPOSS-I and DPOSS-II images for this revision. 
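The CIG isolation criteria (Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1}) translate directly into a per-neighbour test; a minimal sketch with our own helper names, assuming diameters and separations share the same angular units:

```python
# Sketch of the CIG isolation criteria: a primary galaxy of apparent
# diameter d_p is flagged as non-isolated if any neighbour with diameter
# d_i in [d_p / 4, 4 * d_p] lies at a projected separation r_ip < 20 * d_i.

def violates_cig_criteria(d_p, neighbours):
    """neighbours: iterable of (d_i, r_ip) pairs in the same angular units."""
    return any(
        d_p / 4.0 <= d_i <= 4.0 * d_p and r_ip < 20.0 * d_i
        for d_i, r_ip in neighbours
    )

def is_isolated(d_p, neighbours):
    return not violates_cig_criteria(d_p, neighbours)

# A much smaller neighbour nearby and a similar-size neighbour far away
# leave the primary isolated; a similar-size neighbour nearby does not.
print(is_isolated(2.0, [(0.3, 10.0), (1.5, 40.0)]))  # -> True
print(is_isolated(2.0, [(1.5, 20.0)]))               # -> False
```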
The digitised images from photographic plates enabled them to revise the environment description for all 950 CIG galaxies with radial velocities higher than 1500\,km\,s$^{-1}$ within a minimum physical radius of 0.5\,Mpc. All neighbour candidates brighter than $m_{B}~=~17.5$\,mag were identified in each field with a fair degree of confidence, using the LMORPHO software \citep{1995PASP..107..770O, 1996ApJ...472L..13O, 2002ApJ...568..539O}. A catalogue of approximately 54,000 neighbours was created, but redshifts are available for only $\sim$30\% of this sample. \citet{2007A&A...472..121V} used two complementary parameters to quantify the isolation degree of the CIG galaxies: the local number density of neighbour galaxies $\eta_{k}$, and the tidal strength $Q$ exerted on the central galaxy by its neighbourhood. The local number density $\eta_{k}$ is defined as follows: \begin{equation} \label{Eq:etak} \eta_{k} \propto {\rm log}\left(\frac{k - 1}{V(r_{k})}\right)\quad, \end{equation} where $V(r_{k}) = \frac{4}{3}\,\pi\,r_{k}^{3}$ and $r_{k}$ is the projected distance to the $k^{\rm{th}}$ nearest neighbour, with $k$ equal to 5, or lower if there are not enough neighbours in the field. The tidal strength exerted by one companion is defined as \begin{equation} \label{Eq:Qip} Q_{iP} \equiv \frac{F_{\rm{tidal}}}{F_{\rm{bind}}} \propto {\frac{M_{i}}{M_{P}}}\left(\frac{D_{P}}{R_{iP}}\right)^{3}\quad, \end{equation} where $M_{i}$ and $M_{P}$ are the masses of the neighbour and the primary galaxy, respectively, $D_{P}$ is the apparent diameter of the primary galaxy, and $R_{iP}$ the projected distance between the neighbour and the primary galaxy. Using the apparent diameter as an approximation for galaxy mass, \begin{equation} \label{Eq:Q2007} Q_{iP} \equiv \frac{F_{\rm{tidal}}}{F_{\rm{bind}}} \propto \left(\frac{\sqrt{D_{P}D_{i}}}{R_{iP}}\right)^{3}\quad.
\end{equation} This approximation is based on the dependence of galaxy mass $M$ on size: $M \varpropto D^{\gamma}$, with $\gamma = 1.5$ \citep{1984AJ.....89..966D,2004ApJ...604..521T}. The final tidal parameter considered is a dimensionless estimation of the gravitational interaction strength, calculated as the logarithm of the sum of the tidal strengths created by all the neighbours in the field, $Q=$\,log$( \sum Q_{iP} )$. In this paper we calculate modified, improved versions of these two parameters for the CIG using photometry and spectroscopy from the SDSS (see Sect.~\ref{Sec:isolparam}). \section{Data and methodology} \label{Sec:data} The SDSS-DR9\footnote{\texttt{http://www.sdss3.org/}} \citep{2011AJ....142...72E,2012ApJS..203...21A} provides images and spectra covering $14,555$ square degrees, mostly in the northern sky. The SDSS database provides homogeneous and moderately deep photometry in five pass-bands. The 95\% completeness limits for the images in $(u,g,r,i,z)$ are $(22.0, 22.2, 22.2, 21.3, 20.5)$\,mag, respectively. The images are mostly taken under good or average seeing conditions (the median is about 1$\farcs$4 in $r$-band) on moonless nights. The photometric catalogue of detected objects was used to identify the targets for spectroscopy: a) the main galaxy sample \citep{2002AJ....124.1810S}, with a target magnitude limit of $m_{r,\rm{Petrosian}}~<~17.77$\,mag corrected for Galactic dust extinction, and b) the Baryon Oscillation Spectroscopic Survey \citep[BOSS;][]{2013AJ....145...10D}, which uses a new spectrograph \citep{2012arXiv1208.2233S} to obtain spectra of galaxies with $0.15 < z < 0.8$ and quasars with $2.15 < z < 3.5$, which is useful for rejecting background objects in our study. The data are processed using automatic pipelines \citep{2011AJ....142...31B}. \subsection{CIG galaxies in the SDSS} \label{Sec:CIGselect} The scheme of the pipeline we followed is presented in Fig.~\ref{Fig:pipeline}.
We found $N = $~799 CIG galaxies included in the SDSS photometric catalogue, of which ten were removed because the photometric data were unreliable (due to a nearby bright star or because the galaxy is too close to an edge of the field): CIG 13, 95, 388, 402, 573, 713, 736, 781, 802, and 810. We used recession velocities from the AMIGA database\footnote{\texttt{http://amiga.iaa.es/p/139-amiga-public-data.htm}} \citep{2012A&A...540A..47F} for CIG galaxies. We chose a projected physical radius of 1\,Mpc ($H_{0}$=75\,km\,s$^{-1}$\,Mpc$^{-1}$) to evaluate the isolation degree\footnote{Note that the search radius is larger than in \citet{2007A&A...470..505V}, who used a minimum physical radius of 0.5\,Mpc due to technical limitations.}. If we were to assume a typical field velocity dispersion of the order of 190\,km\,s$^{-1}$ \citep{2000ApJ...530..625T}, it would require about $t_{\rm{cc}}\sim\,5.2$\,Gyr for a companion to cross this distance, guaranteeing that the galaxy has been isolated for most of its lifetime. We focused our study on CIG galaxies with recession velocities $\varv \geq 1500$\,km\,s$^{-1}$, which additionally reduced our sample to $N = $~693, to avoid an overwhelming search for potential neighbours (the angular size on the sky of 1\,Mpc at a distance of 1500\,km\,s$^{-1}$ is approximately 2{\textdegree}.9). The CIG isolation criteria (Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1}) require one to examine the isolation in a field as large as 80 times the diameter of each CIG galaxy. For a typical CIG galaxy, with diameter $D_{P} = 30$\,kpc, this translates into a projected distance of $R_{iP} = 2.4$\,Mpc. Even selecting a reasonable and constant search radius, the variable radius resulting from the CIG isolation criteria usually represents a very large field, and 1\,Mpc often corresponds to only a part of it.
Therefore, we cannot verify the isolation for the entire field used by \citet{1973AISAO...8....3K}, but we are able to determine if neighbour galaxies close to the primary CIG galaxy are violating the CIG isolation criteria. Model magnitudes in $r$-band (the deepest images) were used in our study. We used $r_{90}$, the Petrosian radius containing 90\,\% of the total flux of the galaxy in the $r$-band\footnote{\texttt{http://www.sdss3.org/dr9/algorithms/magnitudes.php}}, as explained in Sect.~\ref{Sec:diameter}. Our final sample is composed of $N = $~636 CIG galaxies whose 1\,Mpc radius fields are completely covered by the photometric SDSS-DR9 catalogue. \begin{figure*} \centering \includegraphics[width=.9\textwidth]{images/pipeline_cig.pdf} \caption[Diagram of the methodology]{Diagram of the methodology. The scheme used to select primary galaxies is shown in the left column, and the selection for the neighbours is shown in the right column.} \label{Fig:pipeline} \end{figure*} \subsection{Catalogues of neighbours} \label{Sec:neighselect} We used the CasJobs\footnote{\texttt{http://skyservice.pha.jhu.edu/CasJobs/}} tool to search for neighbour galaxies within a 1\,Mpc radius around each of the 636 CIG galaxies\footnote{To allow the reproducibility of this work, initial tables are available at http://amiga.iaa.es, CDS, and SDSS-DR9 websites and requests on demand.} (see right column in Fig.~\ref{Fig:pipeline}). Neighbour galaxies were selected with the following criteria to extract a sample as clean as possible: 1) galaxies with $11.0 \leq m_{r} \leq 21.0$ and without flags on size measures, 2) removal of suspicious detections, checking that the object has pixels detected in the first pass and a valid radial profile, and 3) objects flagged as non-saturated sources. A first sample of 1,241,442 candidate neighbour galaxies was compiled using these conditions.
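The three selection conditions above can be expressed as a simple per-object filter. The sketch below is purely illustrative: the record fields are placeholders, not actual SDSS column names:

```python
# Illustrative filter mirroring the three neighbour-selection conditions
# (magnitude range and reliable size, credible detection, non-saturated).
# Field names below are our own placeholders, not SDSS catalogue columns.

def keep_neighbour(obj):
    return (
        11.0 <= obj["m_r"] <= 21.0      # condition 1: r-band magnitude range
        and obj["size_ok"]              # condition 1: no flags on size measures
        and obj["detected_first_pass"]  # condition 2: pixels detected in first pass
        and obj["valid_profile"]        # condition 2: valid radial profile
        and not obj["saturated"]        # condition 3: non-saturated source
    )

candidates = [
    {"m_r": 15.2, "size_ok": True, "detected_first_pass": True,
     "valid_profile": True, "saturated": False},
    {"m_r": 15.2, "size_ok": True, "detected_first_pass": True,
     "valid_profile": True, "saturated": True},   # e.g. a bright star
]
print([keep_neighbour(obj) for obj in candidates])  # -> [True, False]
```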
Without imposing the condition for non-saturated objects, we find a contamination of nearly 50\% by saturated stars with magnitudes brighter than $m_{r} \sim 17$\,mag. Galaxies with a very bright nucleus can also be flagged as saturated sources in the SDSS, which makes it necessary to complete our sample by adding saturated galaxies from the spectroscopic catalogue (66,387 galaxies). Our final sample contains 1,307,829 neighbour galaxies selected by an automated method from the SDSS. We found a contamination by multiple identifications of the same object for nearby and extended galaxies. A clean sample of 1,305,130 galaxies was obtained by selecting the brightest (typically also the largest) object in these cases. We also improved the star-galaxy separation provided by the SDSS with an empirical selection of objects using a size/magnitude diagram (see Fig.~\ref{Fig:SGsep}). Objects situated in the horizontal bottom part are mostly stars misclassified as galaxies. Bright objects in the upper part of the diagram are saturated stars; fainter objects in the upper part (and below log[Area]~=~1.1) are spurious detections. Objects fainter than $m_{r} = 19$\,mag were removed, since the star-galaxy separation becomes difficult and inaccurate. Our final photometric sample is composed of a total of 479,504 neighbour galaxies around 636 CIG galaxies. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{images/SGsep.png} \caption[Star-galaxy separation]{Star-galaxy separation. Considering the object area from the Petrosian radius $r_{90}$, we carried out an empirical inspection, performing a selection in apparent magnitude and size. Resolved objects within the red contour are very likely galaxies.
The dashed blue line corresponds to the selected cut at $m_{r} = 19$\,mag.} \label{Fig:SGsep} \end{figure} \subsection{Estimation of apparent diameters from the SDSS} \label{Sec:diameter} The CIG isolation criteria defined by \citet{1973AISAO...8....3K} are based on apparent diameters of galaxies, which makes these measurements critical in our study. The SDSS provides different radius measurements (for the five photometric bands): 1) de Vaucouleurs and exponential radii, which depend directly on the galaxy intensity profile, and 2) Petrosian radii $r_{\rm{Petrosian}}$, using a modification of the \citet{1976ApJ...209L...1P} system. Petrosian values measure galaxy fluxes within a circular aperture and define the radius using the shape of the azimuthally averaged light profile. Petrosian radii containing 90\% ($r_{90}$) and 50\% ($r_{50}$) of the total flux are provided by the SDSS. We adopted Petrosian values for this study because they do not depend on model fits \citep{2010MNRAS.404.2087B}. However, a visual inspection of SDSS three-colour images for CIG galaxies showed that Petrosian diameters do not recover the total galaxy major axis, and are generally smaller than the projected major axis of a galaxy at the 25 mag/arcsec$^{2}$ isophotal level ($D_{25}$) originally used by Karachentseva (values measured in the $B$-band), even though a new approach for background subtraction was applied in the last two data releases of the SDSS \citep{2011AJ....142...31B}. To transform Petrosian sizes into more accurate optical measurements equivalent to the original $D_{25}$ used by Karachentseva, we compared Petrosian diameters from the SDSS ($D_{\rm{SDSS}} = 2r_{90}$) with apparent optical diameters given in ancillary databases.
We performed a linear regression analysis for CIG galaxies using measures from 1) HyperLeda\footnote{\texttt{http://leda.univ-lyon1.fr/}}; 2) $isoA_{\rm{r}}$ (isophotal major axis at the 25 mag/arcsec$^2$ isophote in $r$-band) from SDSS-DR7 \citep{2009ApJS..182..543A}, because we visually verified that in general it covers the total galaxy; 3) the major axis from the NASA/IPAC Extragalactic Database (NED\footnote{\texttt{http://ned.ipac.caltech.edu/}}); and 4) the Kron radii, running SExtractor \citep{1996A&AS..117..393B} on $i$-band images of CIG galaxies from the SDSS-DR9, which, after visual examination of the FITS images, is the measurement that best recovers the total size of the CIG galaxy. However, CIG galaxies are not representative of the sizes of the galaxies we are interested in: neighbour galaxies are typically smaller than CIG galaxies. To avoid bias, we focused the size correction on neighbour galaxies. We performed a cross-match with the catalogue of neighbours compiled by \citet{2007A&A...470..505V}, based on $D_{25}$ and SExtractor diameters. Since this correlation is made from digitised photographic measurements, other correlations were calculated: one based on $D_{25}$ from HyperLeda for galaxies in the neighbour sample brighter than $m_{r,\rm{model}}=16$\,mag and with $r_{90} > 5$\,arcsec, and one based on the SExtractor Kron radii from SDSS-DR9 images in the $i$-band for four CIG fields at different recession velocities. Correction factors for SDSS diameters as a function of the above measurements are shown in Table~\ref{tab:Dsdss}, including the corresponding number of galaxies used. In the rest of this study, we use a corrected apparent diameter $D_{\rm{SDSS,\,corr}} = 1.43\,D_{\rm{SDSS}}$ \citep[see also][]{2013MNRAS.430..638S} both for neighbours and for CIG galaxies. This factor was obtained as the median of the values in Table~\ref{tab:Dsdss} to approximate the original $D_{25}$ used by \citet{1973AISAO...8....3K}.
\begin{table} \caption[Estimation of apparent diameters from the SDSS]{Estimation of apparent diameters from the SDSS using different comparison samples. Col. 1: Galaxy samples considered for the estimation. Col. 2: Apparent diameter measure used for the estimation. Col. 3: Apparent diameter source. Col. 4: Number of objects used. Col. 5: Correction factor for each estimation $D_{25}\simeq \rm{factor}\times D_{\rm{SDSS}}$.} \label{tab:Dsdss} \centering \begin{tabular}{lllcc} \hline \hline Objects & Apparent & Database & \# of & Factor \\ & diameter & & matches & \\ \hline CIGs & $D_{25}$ & HyperLeda & 636 & 1.58 \\ CIGs & $isoA_{\rm{r}}$ & SDSS-DR7 & 560 & 1.41 \\ CIGs & Major axis & NED & 567 & 1.83 \\ CIGs & Kron radius & SDSS-DR9 & 719 & 2.17 \\ \hline Neighbours & $D_{25}$ & Verley+07c & 28,209 & 1.41 \\ Neighbours & $D_{25}$ & HyperLeda & 13,972 & 1.23 \\ Neighbours & Kron radius & SDSS-DR9 & 27,719 & 1.43 \\ \hline \end{tabular} \end{table} \section{Quantification of the isolation} \label{Sec:isolparam} \subsection{Isolation parameters} \label{Sec:defQeta} We used modified isolation parameters from Sect.~\ref{Sec:amiga} to quantify the isolation degree throughout. An estimate of $\eta_{k}$ was calculated taking into account the distance of the $k^{\rm{th}}$ nearest neighbour to the CIG galaxy. We calculated this parameter according to Eq.~\ref{Eq:etak}, choosing $k$ equal to 5 or lower when there were not enough neighbours in the field. The farther the $k^{\rm{th}}$ nearest neighbour, the smaller the local number density $\eta_{k}$. We calculated a second independent parameter involving a cumulative measure of the tidal strength produced by neighbour galaxies. To improve the quantification of the isolation degree, we adopted a modified version of the $Q$ parameter (Eq.~\ref{Eq:Qip}), where apparent magnitudes from the SDSS-DR9 were used to estimate galaxy masses. 
This methodology therefore minimises the effect of the correction factor used for estimating apparent diameters. Assuming that the stellar mass is proportional to the $r$-band flux, that is, a linear mass-luminosity relation \citep{2003ApJS..149..289B,2006ApJ...652..270B}, we considered $Flux_{r}\propto \mathscr{M}ass$ at a fixed distance, with $m_{r}$~=~$-$2.5\,log$(Flux_{r})$. Then, for one companion, from Eq.~\ref{Eq:Qip}: \begin{equation} \label{Eq:Q2012} {\rm log}Q_{iP} \propto 0.4\,(m_{r}^{P}-m_{r}^{i}) + 3\,{\rm log}\left(\frac{D_{P}}{R_{iP}}\right)\quad, \end{equation} where $m_{r}^{P}$ and $m_{r}^{i}$ are the apparent magnitudes in $r$-band of the primary CIG galaxy and the $i^{\rm{th}}$ neighbour, respectively. The total tidal strength created by all the neighbours is then defined as \begin{equation} \label{Eq:Q2012tot} Q = {\rm log}\left(\sum_{i}Q_{iP}\right)\quad. \end{equation} The higher the value of $Q$, the more the galaxy is affected by external influence, and vice versa. Given that the CIG is assembled with the requirement that no similar-size neighbours be found close to the CIG galaxy, companion galaxies are expected to be faint (mostly dwarf companions), and no companions of similar brightness are expected close to the CIG galaxy. \subsection{Comparison with denser environments} \label{Sec:denser} We selected other samples of galaxies from denser environments for comparison with the isolation degree of galaxies in the CIG sample: 1) isolated pairs of galaxies \citep[KPG; ][]{1972SoSAO...7....1K}, 2) galaxy triplets \citep[KTG; ][]{1979AISAO..11....3K}, 3) galaxies in compact groups \citep[HCG; ][]{1982ApJ...255..382H}, and 4) galaxies in Abell clusters \citep[ACO; ][]{1958ApJS....3..211A,1989ApJS...70....1A}. The KPG sample allowed us to separate effects of galaxy environment density from effects of one-on-one interactions, while the KTG, HCG, and ACO galaxy samples show the effects of increasingly richer environments.
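For reference, the two isolation parameters defined above (Eqs.~\ref{Eq:etak}, \ref{Eq:Q2012}, and \ref{Eq:Q2012tot}) can be sketched as follows, ignoring the unspecified proportionality constants; the function names, inputs, and units are illustrative:

```python
# Sketch of the two isolation parameters, up to additive constants.
# Inputs: projected distances to neighbours for eta_k; per-neighbour
# r-band magnitudes m_i and separations r_ip, plus the primary's m_p and
# apparent diameter d_p, for the tidal strength Q.
from math import pi, log10

def local_number_density(r_neighbours, k=5):
    """eta_k ~ log10((k - 1) / V(r_k)) with V(r) = 4/3 * pi * r^3."""
    k = min(k, len(r_neighbours))
    if k < 2:
        return float("-inf")       # not enough neighbours in the field
    r_k = sorted(r_neighbours)[k - 1]
    return log10((k - 1) / (4.0 / 3.0 * pi * r_k ** 3))

def tidal_strength(m_p, d_p, neighbours):
    """Q = log10(sum_i Q_iP), log10(Q_iP) ~ 0.4 (m_p - m_i) + 3 log10(d_p / r_ip)."""
    total = sum(10.0 ** (0.4 * (m_p - m_i)) * (d_p / r_ip) ** 3
                for m_i, r_ip in neighbours)
    return log10(total)

# One equally bright neighbour at 10 primary diameters: Q ~ log10(10^-3) = -3.
q = tidal_strength(m_p=14.0, d_p=0.03, neighbours=[(14.0, 0.3)])
assert abs(q - (-3.0)) < 1e-9
```

A fainter or more distant neighbour lowers $Q$, while a closer fifth nearest neighbour raises $\eta_{k}$, matching the qualitative behaviour described in the text.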
The KPG, KTG, and HCG catalogues were also compiled using visual isolation criteria; accordingly, they complement the CIG sample nicely. The KTG, HCG, and ACO samples were adopted from \citet{2007A&A...472..121V} because they were selected to sample a volume of space roughly equivalent to the one covered by the CIG, and to avoid possible biases. For consistency, we also followed the same selection criterion as for CIG galaxies, keeping galaxies with recession velocities $\varv \geq 1500$\,km\,s$^{-1}$. The final comparison sample is composed of 360 KPGs out of 603 pairs listed by \citet{1972SoSAO...7....1K}, 30 KTGs out of 84 triplets listed by \citet{1979AISAO..11....3K}, 24 HCGs out of 100 compact groups compiled by \citet{1982ApJ...255..382H}, and 12 ACOs out of more than 2,700 galaxy clusters listed by \citet{1958ApJS....3..211A} and \citet{1989ApJS...70....1A}. \section{Results} \label{Sec:result} We performed a photometric and, for the first time, spectroscopic revision of the CIG isolation criteria around each CIG galaxy within a projected radius of 1\,Mpc. The available redshift information allowed us to identify possible physical companions down to the SDSS spectroscopic completeness. Neighbour galaxies were used to estimate the isolation degree for the CIG galaxies. \subsection{Photometric study} \label{Sec:resultphoto} \subsubsection{Photometric revision of the CIG isolation criteria} \label{Sec:resultphotokara} We applied both of the CIG isolation criteria to the neighbours around each CIG galaxy within a projected field radius of 1\,Mpc. First, we identified neighbour galaxies with an apparent diameter in the range defined by Eq.~\ref{Eq:kara2}. They represent the galaxies considered as potential perturbers by \citet{1973AISAO...8....3K}. The second step was to determine which of these objects were projected at a distance lower than the one defined in Eq.~\ref{Eq:kara1}.
When a galaxy was found to have no neighbour violating Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1}, that galaxy was considered isolated according to the CIG isolation criteria. In Sect.~\ref{Sec:neighselect} we have compiled an automatically selected sample of 479,504 neighbours; of these, 2,109 are potential companions according to the CIG isolation criteria (Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1}) within 1\,Mpc. After an additional visual inspection we found that 89 candidates were contaminated by saturated stars and removed them, in line with our aim of obtaining a sample of neighbour galaxies free from such contamination. The revision of the CIG isolation criteria was performed for 636 CIG galaxies using 479,415 neighbour galaxies within a 1\,Mpc radius around each CIG galaxy. Of these, 121,872 neighbour galaxies violate Eq.~\ref{Eq:kara2} within 1\,Mpc, and a small number, 3,433 neighbour galaxies, violate Eq.~\ref{Eq:kara1} within 1\,Mpc. The total number of potential companions violating Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc is very small, 2,020 galaxies. We found 86 CIG galaxies without a neighbour, 117 CIG galaxies with one possible companion, and 433 CIG galaxies with more than one possible companion after applying the CIG isolation criteria within 1\,Mpc. There are 13 CIG galaxies with more than ten possible companions. CIG 589 has the largest number of companions (14 possible companions). The search radius of 1\,Mpc for each of the 636 CIG galaxies considered in the photometric study (Sect.~\ref{Sec:resultphoto}) covers the area defined by \citet{1973AISAO...8....3K} for 59 fields only; of these, four CIG galaxies are isolated (CIG 50, 299, 651, and 1032) according to the CIG isolation criteria. The results of this revision are listed in columns 2, 3, and 4 of Table~\ref{tab:resultphoto}.
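As an illustration, the screening just described can be sketched as follows. The exact forms of Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} are given earlier in the paper; here we assume their standard \citet{1973AISAO...8....3K} form (a neighbour diameter within a factor 4 of the primary, and a projected separation below 20 neighbour diameters):

```python
def is_potential_perturber(d_p, d_i, sep):
    """Screen one neighbour against the two CIG isolation criteria,
    assuming their standard Karachentseva (1973) form: similar size
    (within a factor 4 of the primary diameter, Eq. kara2) AND projected
    separation below 20 neighbour diameters (Eq. kara1)."""
    similar_size = d_p / 4.0 <= d_i <= 4.0 * d_p
    too_close = sep < 20.0 * d_i
    return similar_size and too_close

def is_isolated(d_p, neighbours):
    """A galaxy passes if no neighbour violates both criteria.
    `neighbours` is a list of (D_i, sep) pairs in the same units."""
    return not any(is_potential_perturber(d_p, d_i, sep)
                   for d_i, sep in neighbours)
```

Under this form, a small neighbour ($D_{i} < D_{P}/4$) never breaks the isolation, however close it is projected, which is precisely the population of possible satellites discussed below.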
\begin{table} \caption[Revision of the isolation degree using photometric data]{Revision of the isolation degree using photometric data.} \label{tab:resultphoto} \centering \begin{tabular}{cccccc} \hline \hline (1) & (2) & (3) & (4) & (5) & (6) \\ CIG & $r_{1\rm{Mpc}}$ & $\frac{r_{1\rm{Mpc}}}{r_{80D_{P}}}$ & isol & $Q_{\rm{Kar, p}}$ & $\eta_{k,\rm{p}}$ \\ \hline 1 & 35.32 & 0.31 & 0 & -3.35 & 2.03 \\ 2 & 36.92 & 0.49 & 1 & -3.19 & 1.70 \\ 4 & 111.58 & 0.65 & 0 & -2.78 & 1.92 \\ 5 & 32.78 & 0.62 & 0 & -1.60 & 3.02 \\ 6 & 56.94 & 0.71 & 0 & -2.84 & 2.34 \\ 7 & 20.22 & 0.26 & 1 & -3.37 & 2.11 \\ 8 & 40.65 & 0.49 & 0 & -1.83 & 2.65 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \hline \end{tabular} \tablefoot{The full table is available in electronic form at http://amiga.iaa.es and in CDS. The columns correspond to (1) the galaxy identification according to the CIG, (2) the projected angular radius, in arcmin, corresponding to the adopted distance of 1\,Mpc, and (3) the ratio of the projected angular radius at 1\,Mpc (in arcmin) to the original field radius used in the CIG isolation criteria. When the ratio is greater than or equal to 1, the fixed physical radius of 1\,Mpc covers the entire original area. (4) Result of the CIG isolation criteria: ``1'' if the galaxy passes, ``0'' if it fails. (5) $Q_{\rm{Kar, p}}$, tidal strength estimate of similar-size neighbours. (6) $\eta_{k,\rm{p}}$, local number density of similar-size neighbours.} \end{table} \subsubsection{Photometric isolation parameters} \label{Sec:resultphotoparam} The isolation parameters, the local number density $\eta_{k,\rm{p}}$ (Eq.~\ref{Eq:etak}) and the tidal strength $Q_{\rm{Kar,p}}$ (Eq.~\ref{Eq:Q2012tot}), were calculated using the photometric data. Only galaxies within a factor 4 in apparent diameter with respect to the CIG galaxy were considered (Eq.~\ref{Eq:kara2}) to minimise the contamination of background/foreground galaxies, following \citet{1973AISAO...8....3K}.
We calculated ${R_{iP}}$ using projected angular distances on the sky. For the local number density, the projected distance to the $k^{\rm{th}}$ nearest neighbour, $r_{k}$, was calculated as the angular separation in arcmin normalised by the apparent diameter of the central CIG galaxy. The values of the isolation parameters are listed in Table~\ref{tab:resultphoto} and are plotted in Fig.~\ref{Fig:photoparam}a. The tidal strength $Q_{\rm{Kar,p}}$ and the local number density $\eta_{k,\rm{p}}$ were also calculated for the comparison samples KPG, KTG, HCG, and ACO (see Fig.~\ref{Fig:photoparam}b). Means and standard deviations are shown in Table~\ref{tab:compdenser}. As expected, the trend of the mean values, from isolated to denser environments, shows that the isolation parameters are sensitive to the effects of the environment. \begin{figure*} \centering \includegraphics[width=\textwidth]{images/photoparam.pdf} \caption[Photometric isolation parameters]{Photometric isolation parameters. {\it (a):} Calculated isolation parameters (local number density $\eta_{k,\rm{p}}$ and tidal strength $Q_{\rm{Kar,p}}$) for similar-size neighbour galaxies using the photometric data. Symbols and colours in the legend correspond to the number of neighbours that violate the CIG isolation criteria. {\it (b):} Comparison between isolation parameters (local number density $\eta_{k,\rm{p}}$ and tidal strength $Q_{\rm{Kar,p}}$) for the CIG and the comparison samples using photometric data. Pairs (KPG) are depicted by violet pluses, triplets (KTG) by blue triangles, compact groups (HCG) by green rectangles, and Abell clusters (ACO) by red diamonds.
The mean values of each sample are indicated following the same colour code.} \label{Fig:photoparam} \end{figure*} \begin{table} \caption[Means and standard deviations of the isolation parameters for the CIG and for the comparison samples]{Means and standard deviations of the isolation parameters for the CIG and for the comparison samples.} \label{tab:compdenser} \centering \begin{tabular}{lrrrrr} \hline \hline & CIG & KPG & KTG & HCG & ACO \\ \hline N & 636 & 360 & 30 & 24 & 12 \\ mean($Q_{\rm{Kar, p}}$) & $-$2.51 & $-$0.95 & $-$1.11 & $-$0.32 & $-$0.75 \\ std($Q_{\rm{Kar, p}}$) & 0.68 & 1.11 & 0.79 & 0.89 & 0.70 \\ mean($\eta_{k,\rm{p}}$) & 2.39 & 2.85 & 3.09 & 3.49 & 3.51 \\ std($\eta_{k,\rm{p}}$) & 0.45 & 0.56 & 0.46 & 0.57 & 0.59 \\ \hline \end{tabular} \end{table} \subsection{Spectroscopic study} \label{Sec:resultspec} \subsubsection{Spectroscopic revision of the CIG isolation criteria} \label{Sec:resultspeckara} Because large spectroscopic surveys have become available only relatively recently, the environment of all types of samples (isolated galaxies, pairs, triplets, groups, and clusters) has long been estimated with photometric analysis alone. Only during the past decade, despite the inhomogeneity and incompleteness of the spectroscopic surveys at very low and high redshifts, have some spectroscopic studies been performed \citep{2009MNRAS.394.1409E,2009AN....330.1004V}. In this section we present a spectroscopic revision and improvement of the CIG isolation criteria. Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc were applied to identify the CIG galaxies that appear to be physically isolated using the spectroscopic sample of the SDSS.
For this study, we selected fields with a redshift completeness greater than 80\% with respect to the photometric sample at $m_{r} \leq 17.7$\,mag (the percentage of extended neighbours down to $m_r<17.7$\,mag lying within a 1\,Mpc projected separation from the CIG galaxy that have a measured redshift), which is approximately the redshift completeness limit of the SDSS spectroscopic main galaxy sample \citep{2002AJ....124.1810S}. Four hundred and eleven CIG fields fulfil this requirement, surrounded by 70,169 cleaned neighbour galaxies with spectroscopic information. To evaluate the physical association of the projected neighbours, we introduce a third condition based on the velocity difference of the neighbour galaxies with respect to each CIG galaxy, $|\Delta\,\varv| = |\varv_{i} - \varv_{P}|$. Surprisingly, the velocity difference distribution shows a peak close to $|\Delta\,\varv|~=~0$\,km\,s$^{-1}$ (see Fig.~\ref{Hist:diffvel}). From the figure, we are able to separate a flat continuum distribution of foreground/background neighbours, considered as the fraction of galaxies that are probably not linked to the central galaxy, from physically linked satellites. More than one third of the neighbours within $|\Delta\,\varv|~\leq~3,000$\,km\,s$^{-1}$ have a velocity difference $|\Delta\,\varv|~\leq~250$\,km\,s$^{-1}$ (36\%, see Fig.~\ref{Hist:diffvel}). To recover all of these probable physical companions, based on the figure and adopting a conservative velocity difference selection, we considered that a CIG galaxy fulfils the CIG isolation criteria if it has no neighbour violating Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ \citep[see also][]{2009MNRAS.394.1409E,2010Ap.....53..462K,2011AstBu..66..389K}. The results of the revision of the CIG isolation criteria, using spectroscopic data, are listed in column 2 of Table~\ref{Tab:resultspec}.
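In outline, this third, kinematic condition amounts to a simple filter on the catalogue of projected neighbours. The following sketch uses hypothetical velocities; the thresholds are the ones adopted above:

```python
def physical_companions(v_primary, v_neighbours, dv_max=500.0):
    """Keep neighbours whose velocity difference |dv| = |v_i - v_P|
    is below dv_max (km/s), as in the third CIG condition."""
    return [v for v in v_neighbours if abs(v - v_primary) <= dv_max]

# Hypothetical field: a primary at 5000 km/s with a mix of probable
# satellites and background galaxies.
field = [5100.0, 5450.0, 7800.0, 12000.0]
linked = physical_companions(5000.0, field)  # keeps 5100 and 5450
```

Tightening `dv_max` to 250\,km\,s$^{-1}$, as in the peak of Fig.~\ref{Hist:diffvel}, removes the higher-velocity-difference candidates from the same field.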
The number of galaxies that appear as isolated increases when the third condition is introduced, because some galaxies have spectroscopic neighbours that violate Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc but have discordant redshifts. We found that 347 CIG galaxies appear to be isolated according to the CIG isolation criteria and have no companion within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$. The search radius of 1\,Mpc covers the area defined by \citet{1973AISAO...8....3K} for 35 fields only. Of these, 32 CIG galaxies pass, while 3 fail the CIG isolation criteria within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ (CIG 264, 480, and 637). A total of 30,222 neighbour galaxies of the 70,169 galaxies with available redshift within the spectroscopic magnitude limit violate Eq.~\ref{Eq:kara2} within 1\,Mpc, and a very small number, 643 neighbours, also violate Eq.~\ref{Eq:kara1} within 1\,Mpc; 75 of these also fulfil the third condition, that is, $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ within 1\,Mpc. \begin{figure*} \centering \includegraphics[width=\textwidth]{images/hist_diffz.pdf} \caption[Velocity difference distributions]{Comparison of the velocity difference distributions $|\Delta\,\varv|$ for neighbour galaxies with respect to the central CIG galaxy (411 fields): for the neighbour galaxies violating Eq.~\ref{Eq:kara2} (i.e., within a factor 4 in apparent diameter with respect to their associated CIG galaxy); for the remaining neighbours (outside the factor 4 in apparent diameter); and for the whole sample of neighbours (sum of the previous two samples).
The vertical line corresponds to the selected value of reference at $|\Delta\,\varv| = 500$\,km\,s$^{-1}$.} \label{Hist:diffvel} \end{figure*} \begin{table*} \caption[Revision of the isolation degree using spectroscopic data]{Revision of the isolation degree using spectroscopic data.} \label{Tab:resultspec} \centering \begin{tabular}{ccccccccccccc} \hline \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13)\\ CIG & isol & $k_{500}$ & $Q_{500}$ & $f_{Q_{500}}$ & $\eta_{k,500}$ & $f_{\eta_{k,500}}$ & $z_{\rm{comp}}$\,[\%] & $k_{500,\rm{ul}}$ & $Q_{500,\rm{ul}}$ & $f_{Q_{500,\rm{ul}}}$ & $\eta_{k,500,\rm{ul}}$ & $f_{\eta_{k,500,\rm{ul}}}$ \\ \hline 11 & 1 & 0 & NULL & 2 & NULL & 2 & 87.16 & 1 & -6.11 & 0 & NULL & 1 \\ 33 & 1 & 1 & -5.41 & 0 & NULL & 1 & 98.78 & 1 & -5.41 & 0 & NULL & 1 \\ 56 & 1 & 5 & -4.12 & 0 & 0.02 & 0 & 94.31 & 5 & -4.12 & 0 & 0.02 & 0 \\ 60 & 1 & 5 & -4.94 & 0 & 0.32 & 0 & 98.16 & 5 & -4.94 & 0 & 0.32 & 0 \\ 187 & 1 & 0 & NULL & 2 & NULL & 2 & 87.72 & 0 & NULL & 2 & NULL & 2 \\ 198 & 0 & 2 & -2.92 & 0 & 0.15 & 0 & 93.94 & 2 & -2.92 & 0 & 0.15 & 0 \\ 199 & 0 & 4 & -3.78 & 0 & -0.12 & 0 & 86.21 & 4 & -3.78 & 0 & -0.12 & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \hline \end{tabular} \tablefoot{The full table is available in electronic form at http://amiga.iaa.es and in CDS. 
The columns correspond to (1) the galaxy identification according to the CIG; (2) the result of the CIG isolation criteria for neighbours within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$: ``1'' if the galaxy passes, ``0'' if it fails; (3) $k_{500}$, number of neighbours within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$; (4) $Q_{500}$, tidal strength estimation using neighbours within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$; (5) $f_{Q_{500}}$, flag in $Q_{500}$: ``0'' if $k_{500} \geq 1$, ``2'' if $k_{500} = 0$; (6) $\eta_{k, 500}$, local number density using neighbours within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$; (7) $f_{\eta_{k,500}}$, flag in $\eta_{k, 500}$: ``0'' if $k_{500} \geq 2$, ``1'' if $k_{500} = 1$, ``2'' if $k_{500} = 0$; (8) $z_{\rm{comp}}$, redshift completeness in the field; (9) $k_{500,\rm{ul}}$, number of neighbours within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ using upper limits; (10) $Q_{500,\rm{ul}}$, tidal strength upper limit estimation using neighbours within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$; (11) $f_{Q_{500,\rm{ul}}}$, flag in $Q_{500,\rm{ul}}$: ``0'' if $k_{500,\rm{ul}} \geq 1$, ``2'' if $k_{500,\rm{ul}} = 0$; (12) $\eta_{k,500,\rm{ul}}$, local number density upper limit using neighbours within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$; (13) $f_{\eta_{k,500,\rm{ul}}}$, flag in $\eta_{k,500,\rm{ul}}$: ``0'' if $k_{500,\rm{ul}} \geq 2$, ``1'' if $k_{500,\rm{ul}} = 1$, ``2'' if $k_{500,\rm{ul}} = 0$.} \end{table*} \subsubsection{Spectroscopic isolation parameters} \label{Sec:resultspecparam} The available redshift data allowed us to calculate the two isolation parameters, the local number density and tidal strength, using physical size and physical projected distance.
We estimated the isolation parameters $\eta_{k,500}$ and $Q_{500}$, from Eqs.~\ref{Eq:etak} and \ref{Eq:Q2012tot} respectively, taking into account all the neighbour galaxies within 1\,Mpc and $|\Delta\,\varv| \leq 500$\,km\,s$^{-1}$ with respect to the central CIG galaxy. To compare this with the photometric estimate, we also calculated $Q_{\rm{Kar, s}}$ and $\eta_{k,\rm{s}}$, including only similar-size galaxies with spectroscopy (within a factor 4 in apparent diameter) within 1\,Mpc. We calculated the isolation parameters for the 411 CIG fields considered in the spectroscopic revision, with more than 80\% completeness in redshift. When the redshift information in a field was incomplete, we estimated upper limits using the photometric redshifts also available in the SDSS. The values of the isolation parameters are listed in Table~\ref{Tab:resultspec}. \section{Discussion} \label{Sec:diss} \subsection{Photometric study} \label{Sec:dissphoto} \subsubsection{Photometric revision of the CIG isolation criteria} \label{Sec:dissphotokara} As mentioned in Sect.~\ref{Sec:resultphoto}, we performed a photometric revision of the CIG isolation criteria around each CIG galaxy within a projected radius of 1\,Mpc, finding that 2,020 neighbour galaxies are potential companions that violate Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1}. The left panel in Fig.~\ref{Fig:photokara} shows that these potential neighbours tend to be smaller than their corresponding CIG galaxy and tend to concentrate at larger distances to the central galaxy. This means that small neighbours ($\frac{D_{i}}{D_{P}}<0.25$) can be located at closer distances to the CIG, since the CIG isolation criteria consider their effect on the evolution of the central galaxy almost negligible. In contrast, larger neighbours are only permitted at correspondingly larger distances. This is why we need to estimate the isolation degree: to quantify the effect of these missed neighbours on the evolution of the central CIG galaxy.
The 2,020 potential companions are distributed around 550 CIG fields, of which 55 cover the original search area used by \citet{1973AISAO...8....3K}. The right panel in Fig.~\ref{Fig:photokara} clearly shows that about 90\% of the CIG fields do not cover the original search area ($\frac{r_{1\rm{Mpc}}}{r_{80D_{P}}}\ <\ 1$). \citet{2007A&A...470..505V} estimated that about 1/3 of the AMIGA sample (284 out of 950) fails the CIG isolation criteria within a minimum physical distance of 0.5\,Mpc. Although we were unable to search for companions within the original area used by \citet{1973AISAO...8....3K}, we can state that about 1/6 of the sample fails the CIG isolation criteria within a fixed field radius of 1\,Mpc. The sample of neighbour galaxies inspected originally by \citet{1973AISAO...8....3K} in the construction of the CIG is not available. Nevertheless, we can compare our results with the catalogue of neighbours compiled by \citet{2007A&A...470..505V}, who revised the CIG on the same original material (Palomar Observatory Sky Survey, POSS). Compared with \citet{2007A&A...470..505V}, we found a much larger number of neighbours around each CIG galaxy. Indeed, \citet{2007A&A...470..505V} extracted neighbour galaxies brighter than $B = 17.5$\,mag. In panels c and d of Fig.~\ref{Fig:compverley}, we show that the SDSS identification of neighbours goes deeper than the POSS and also detects smaller neighbours. We also found that the 2,020 potential companions according to the CIG isolation criteria within 1\,Mpc are mostly faint, with $\Delta m_{r}\geq3$\,mag, which suggests that they are nearby and low-luminosity galaxies missed by the CIG isolation criteria. The POSS search for companions misses the faintest and smallest galaxies relative to the primary CIG galaxies.
In fact, the mean magnitude difference $\Delta m_{r}$ and size ratio $\frac{D_{P}}{D_{i}}$ between the neighbours and the central CIG galaxy are 1.58\,mag fainter and $0.18$\,dex lower, respectively, in the SDSS than in the POSS. The presence of faint galaxies does not violate the CIG isolation criteria because these systems are smaller than $1/4\times D_{P}$. The SDSS also has a redshift incompleteness at bright magnitudes, $m_{r,\rm{Petrosian}}~<~14.5$\,mag \citep{2002AJ....124.1810S}. After a visual inspection of the neighbours in common with \citet{2007A&A...470..505V} that were missed by the SDSS search, we estimate that at apparent magnitudes $m_{r} < 15$\,mag we missed approximately one galaxy per field. These missing neighbours are usually projected close to saturated stars, which were not considered in our selection of neighbours in the SDSS. Other studies of isolated galaxies claim that equivalent isolation criteria can be obtained by selecting neighbours within a magnitude range. \citet{2003ApJ...598..260P} modified Eq.~\ref{Eq:kara2} by selecting a magnitude difference of $\Delta m=2$; \citet{2010AJ....139.2525H} and \citet{2011ApJ...732...92T} considered neighbours within a magnitude difference of $\Delta m=2.5$, which translates into a factor 10 in brightness; and \citet{2005AJ....129.2062A}, the most restrictive, selected neighbours within a magnitude difference of $\Delta m=3$, which is about a factor of 16 in brightness. If we replace the approximate factor 4 in size in Eq.~\ref{Eq:kara2} with a magnitude difference of $\Delta m = 3$, we find that 231 CIG galaxies appear to be isolated instead of 86 galaxies (see Sect.~\ref{Sec:resultphotokara}). The CIG isolation criteria are thus more restrictive and consider very faint galaxies as possible minor companions.
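The correspondence between the magnitude differences and brightness ratios quoted above follows directly from the definition of the magnitude scale, $Flux_{2}/Flux_{1} = 10^{0.4\,\Delta m}$. As a quick check (a generic relation, not tied to any particular survey):

```python
def flux_ratio(delta_m):
    """Brightness ratio corresponding to a magnitude difference."""
    return 10.0 ** (0.4 * delta_m)

# delta_m = 2.5 mag is exactly a factor 10 in brightness;
# delta_m = 3 mag is about a factor 16 (10**1.2 ~ 15.85).
```

Thus the $\Delta m = 3$ criterion of \citet{2005AJ....129.2062A} tolerates companions down to roughly 1/16 of the primary brightness.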
Although the two definitions in the search for neighbours are not fully equivalent \citep[see][]{2007A&A...470..505V}, we found that 65\% of neighbour galaxies that violate Eq.~\ref{Eq:kara2} within 1\,Mpc have $\Delta m_{r}\geq3$, hence are low-mass objects; this means that we are able to observe faint associated satellite galaxies. This result justifies the need to quantify the isolation degree using the isolation parameters. The quantification of the tidal strength takes into account the size of the neighbour, and the effect of a satellite can be different from the effect of a similar-size neighbour galaxy. \begin{figure*} \centering \includegraphics[width=\textwidth]{images/fig_referee.pdf} \caption[Visualisation of photometric results]{{\it (a):} Characterisation of the 2,020 potential companions violating Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc. {\it (b):} Distribution of the ratio of the projected angular radius at 1\,Mpc (in arcmin) to the original field radius used by \citet{1973AISAO...8....3K} ($\frac{r_{1\rm{Mpc}}}{r_{80D_{P}}}$). The vertical line corresponds to the reference value at $r_{1\rm{Mpc}}=r_{80D_{P}}$, i.e., the search area of 1\,Mpc radius is equal to the original search area used in the construction of the CIG.} \label{Fig:photokara} \end{figure*} \subsubsection{Photometric isolation parameters} \label{Sec:dissphotoparam} The isolation parameters, the local number density ($\eta_{k,\rm{p}}$) and the tidal strength that affects the CIG galaxy ($Q_{\rm{Kar, p}}$), were estimated using photometric data (see Sect.~\ref{Sec:resultphotoparam}). These two parameters are complementary in quantifying the isolation degree and give consistent results, as shown in Fig.~\ref{Fig:photoparam}a. When a galaxy presents low values for both the local number density and the tidal strength estimate, the galaxy is well isolated from any sort of external influence.
In contrast, when the two values are high, the evolution of the galaxy can be perturbed by the environment, and this galaxy is not suitable to represent the normal features of isolated galaxies. Galaxies in denser environments, such as isolated pairs or triplets (see Fig.~\ref{Fig:photoparam}b), typically present relatively low values for the local number density, but high tidal strength. Studies that only use a density estimator can misclassify interacting galaxies as isolated because they do not take into account the mass of the neighbour galaxy; therefore, another complementary parameter (the tidal strength) is needed. On the other hand, if the local number density is high and the tidal strength low, the environment of the galaxy is composed of nearby small neighbours. The CIG isolation criteria are also represented in Fig.~\ref{Fig:photoparam}a. The most isolated CIG galaxies, without a neighbour and depicted by black pluses, show the lowest values for both parameters. With a growing number of neighbours, CIG galaxies move to the upper right in the diagram. However, a galaxy apparently not isolated might appear in the lower left part if its $k^{\rm{th}}$ nearest neighbour is far away from the CIG and if it does not have many similar-size neighbours. CIG galaxies that fail Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc and have a higher number of potential neighbours represent a population that interacts more strongly with its environment. According to numerical simulations, the evolution of a galaxy may be affected by external influence when the corresponding tidal force amounts to 1\% of the internal binding force \citep{1984PhR...114..321A,1992AJ....103.1089B}, that is, $\frac{F_{\rm{tidal}}}{F_{\rm{bind}}} = 0.01$, which corresponds to a tidal strength of $Q = -2$. For the local number density, this approximately translates into a value of $\eta_{k,\rm{p}}=2.7$ (see Fig.~\ref{Fig:photoparam}a).
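The correspondence between the 1\% force ratio and the adopted threshold follows directly from the logarithmic definition of the tidal strength (cf. Eq.~\ref{Eq:Q2012tot}):

```latex
Q = {\rm log}\left(\frac{F_{\rm tidal}}{F_{\rm bind}}\right)
  = {\rm log}(0.01) = -2 \, .
```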
Note that the limit value for the local number density differs from previous AMIGA works \citep[$\eta_{k}=2.4$ in][]{2007A&A...472..121V}. This theoretical value allows us to separate the interactions that might affect the evolution of the primary galaxy. Figure~\ref{Fig:photoparam}a shows that the whole subsample of 86 CIG galaxies isolated according to the CIG isolation criteria within 1\,Mpc (represented by black pluses) satisfies the threshold $Q_{\rm{Kar, p}} < -2$. Of the 550 CIG galaxies that violate the CIG isolation criteria within 1\,Mpc, 433 CIG galaxies have $Q_{\rm{Kar, p}} < -2$, and 340 CIG galaxies also have a relatively low number density environment ($\eta_{k,\rm{p}}<2.7$); therefore, they can be considered to be only mildly affected by their environment. Hence, from the photometric study, 426 CIG galaxies are suitable to represent a reference sample of isolated galaxies (67\% of the sample of CIG galaxies found in the photometric catalogue of the SDSS), since their evolution is dominated by internal processes. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{images/comparison_verley.pdf} \end{center} \caption[Isolation parameters in comparison to \citet{2007A&A...472..121V}]{Isolation parameter differences between this study and \citet{2007A&A...472..121V}. {\it (a):} Difference in $\eta_{k,\rm{p}}$ isolation parameter for neighbour galaxies within a factor 4 in size. {\it (b):} Difference in $Q_{\rm{0.5,Kar,p}}$ isolation parameter for neighbour galaxies within a factor 4 in size. {\it (c):} Apparent magnitude distribution for neighbour galaxies in common with \citet{2007A&A...470..505V} and galaxies in this study.
{\it (d):} Apparent diameter distribution for neighbour galaxies in common with \citet{2007A&A...470..505V} and SDSS galaxies in this study.} \label{Fig:compverley} \end{figure*} Figure~\ref{Fig:photoparam}\,b shows the comparison of the local number density and tidal strength estimate for the CIG and for galaxies in denser environments: KPG, KTG, HCG, and ACO. Both estimates of the parameters increase from isolated galaxies to denser environments. These results show that the isolation parameters, even for photometric studies suffering from projection effects, are sensitive enough to distinguish between environments dominated by different numbers of similar-size galaxies. Quantitatively, it is important to note that the mean values of the tidal strength for denser environments are at least one dex higher than $Q=-2$, which means that their evolution is clearly affected by their environment. To compare the quantification of the isolation in this study with that of \citet{2007A&A...472..121V}, we performed another calculation of the isolation parameters, restricting our fields to 0.5\,Mpc (which is the minimum physical radius used in the previous AMIGA work). When comparing tidal strengths calculated using Eqs.~\ref{Eq:Q2007} and \ref{Eq:Q2012}, we found a correlation (with a systematic shift of nearly 0.5\,dex), albeit with a large scatter. This scatter is directly related to the differences found between the neighbours of the databases used, as explained in Sect.~\ref{Sec:dissphotokara}. \citet{2007A&A...472..121V} provided a final catalogue of 791 isolated galaxies, based on an estimate of the best limits for selecting the sample ($Q~<~-2$ and $\eta_{k}~<~2.4$), of which 620 galaxies are in common with the present study. Of these, 486 also fulfil the new selection criteria defined in this study; hence, despite the large scatter between the isolation parameters, only 22\% of these galaxies fail the new selection criteria defined here.
In general, the galaxies in the samples studied here appear to be less isolated according to the new method and data. Mean values of the isolation parameters for galaxies in the CIG, KTG, HCG, and ACO are higher (except for the tidal strength for the HCG and ACO) than in previous AMIGA works \citep[see Table 8 in][compared with Table~\ref{tab:compdenser}]{2007A&A...472..121V}, which means fewer isolated galaxies. This result is directly related to the number of neighbours found in the SDSS compared with the POSS. We consider our modification of the tidal strength a better estimate because it is based on the less scattered mass-luminosity relation. The SDSS provides linear photometric data (CCD), higher sensitivity, and better resolution than digitised photographic plates. \subsection{Spectroscopic study} \label{Sec:dissspec} \subsubsection{Spectroscopic revision of the CIG isolation criteria} \label{Sec:dissspeckara} As obtained in Sect.~\ref{Sec:resultspeckara}, 347 CIG galaxies out of the 411 fields with redshift completeness higher than 80\% fulfil the CIG isolation criteria within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ when the redshift is taken into account. The first isolation criterion of the CIG, proposed by \citet{1973AISAO...8....3K} to remove fore- and background galaxies (Eq.~\ref{Eq:kara2}), is not fully efficient. About 50\% of the neighbours considered as potential companions using Eq.~\ref{Eq:kara2} within 1\,Mpc have very high recession velocities with respect to the central CIG galaxy, so this criterion is too restrictive and could classify as not isolated galaxies that are only mildly affected by their environment (see Fig.~\ref{Hist:diffvel}). On the other hand, this criterion, which requires companions of similar apparent diameter, accounts for most of the physical neighbours.
However, we also found that about 92\% of the neighbour galaxies with recession velocities similar to the corresponding CIG galaxy are not considered as potential companions by the CIG isolation criteria. We considered a different isolation criterion using the spectroscopic data, taking into account only neighbour galaxies within 1\,Mpc and $|\Delta\,\varv| \leq 500$\,km\,s$^{-1}$ with respect to the velocity of the central galaxy, that is, without imposing any difference in size. We found that 105 CIG galaxies have no physical companions, instead of the 347 obtained with the CIG isolation criteria within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ (see Sect.~\ref{Sec:resultspeckara}); this criterion is indeed too restrictive. According to Fig.~\ref{Hist:diffvel}, we can consider that only neighbours in the peak of the distribution are physical companions of their corresponding CIG galaxy. In this case, nearly a third of the CIG sample (126 galaxies) have no physical companions (within 1\,Mpc and $|\Delta\,\varv| \leq 250$\,km\,s$^{-1}$). This means that nearby dwarf galaxies linked to the corresponding CIG galaxy were not taken into account by the CIG isolation criteria. However, some of the similar-redshift neighbours might be background galaxies that do not affect the central CIG galaxy. We were able to recover the brightest dwarfs in the spectroscopic study, down to the $m_{r}=17.77$ magnitude limit for SDSS spectra. A more extended study will be performed in a future work, taking into account nearby and similar-redshift companions to identify the physical satellites that affect the evolution of the central CIG galaxy and, as a consequence, to obtain a more physical estimate of the isolation degree of the CIG.
\subsubsection{Spectroscopic isolation parameters} \label{Sec:dissspecparam} The isolation parameters, the local number density ($\eta_{k,500}$) and the tidal strength ($Q_{500}$), were estimated for the 411 fields considered in the spectroscopic study with a redshift completeness higher than 80\% at $m_{r}=17.7$\,mag within 1\,Mpc (see Sect.~\ref{Sec:resultspec}). Redshift information is necessary to reject fore- and background galaxies, which reduces projection effects. There is no correlation between the photometric and spectroscopic estimates (see Fig.~\ref{Fig:specparam}); in general, we were unable to predict the spectroscopic parameters from the photometric estimates. Overall, the values of the isolation parameters are much lower in the spectroscopic estimate, showing that projection effects lower the number of isolated candidates in the photometric study. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{images/specparam.pdf} \end{center} \caption[Photometric {\it vs.} spectroscopic estimates of the isolation parameters]{Photometric {\it vs.} spectroscopic estimates of the isolation parameters when using the velocity differences to reject fore- and background galaxies (vertical axis) instead of galaxies within a factor 4 in size (horizontal axis). {\it (a):} Difference in the local number density estimate. {\it (b):} Difference in the tidal strength estimate.} \label{Fig:specparam} \end{figure} The upper-limit estimates of the isolation parameters were calculated considering photometric redshifts, as explained in Sect.~\ref{Sec:resultspecparam}. When the added neighbour is small and close to the CIG galaxy, the local number density changes, but the tidal strength remains almost the same. If the neighbour is similar in size, however, both parameters increase markedly. The displacement due to the upper limits, represented by solid grey lines in Fig.~\ref{Fig:specparamdzUL}, is independent of the redshift completeness.
Only ten CIG galaxies show changes in the parameters, the highest for CIG 492, with an increase of 0.07\,dex in the tidal strength and 0.12\,dex in the local number density. This change is due to the addition of one close ($R_{iP} \simeq 470$\,kpc) and faint ($\Delta\,m_{r}\geq-3.3$) companion with $|\Delta\,\varv| \simeq 500$\,km\,s$^{-1}$. CIG 254 and CIG 418 show an increase in the tidal strength of 0.53\,dex and 0.47\,dex, respectively. The local number density in these cases changes from being flagged as $-99$ to $\eta_{k,500} = 0.21$ and $\eta_{k,500} = -0.46$, respectively, due to the addition of a first nearest neighbour. We conclude that even if the redshift completeness of the SDSS is limited to $m_{\rm{r,Petrosian}} < 17.77$\,mag, the spectroscopic estimate of the isolation parameters is more realistic than the photometric one, in which contaminating objects are difficult to remove with an automated pipeline. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{images/isolparam_upper_zcomp} \caption[Spectroscopic isolation parameters]{Estimate of the isolation parameters, local number density $\eta_{k,500}$ and tidal strength $Q_{500}$, for 306 CIG galaxies with at least one neighbour within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ using the spectroscopic data. Upper-limit estimates are depicted by solid grey lines. Colours, according to the legend, correspond to the redshift completeness of each CIG field (the percentage of extended neighbours, down to $m_r<17.7$\,mag, lying within a projected separation of 1\,Mpc from the CIG galaxy with a measured redshift).} \label{Fig:specparamdzUL} \end{figure} \subsection{Photometric versus spectroscopic studies} We assessed the validity of some assumptions that were used during the construction of the CIG and, in light of the SDSS-DR9 spectroscopic information, systematically quantified the differences between the photometric and spectroscopic studies.
For the first time, we thus highlighted the quantified differences, strengths, and weaknesses of the two approaches and applied them to one common sample. Clearly, the spectroscopic information provides a better physical view of the environment of the galaxies. Nevertheless, spectroscopic coverage of the neighbour galaxies of the CIG is still incomplete, therefore a purely photometric estimation is still needed to obtain a lower limit of the isolation parameters that is homogeneously defined and consistent for the whole CIG in the SDSS footprint. In addition, since the source material of \citet{2007A&A...470..505V} (digitised photographic POSS-I and POSS-II plates) is very different from ours, it was necessary to repeat the photometric estimation of the isolation parameters. This allowed us to perform a fair comparison between photometric and spectroscopic isolation parameters, without being biased by discrepancies in the constructed databases (star-galaxy separation, magnitudes, completeness limits, sizes, etc.). We found that of the 411 CIG galaxies with more than 80\% redshift completeness (i.e., the percentage of extended neighbours down to $m_r<17.7$\,mag that lie within a projected separation of 1\,Mpc from the CIG galaxy with a measured redshift), 54 CIG galaxies previously identified as isolated according to the CIG isolation criteria within 1\,Mpc actually have similar-redshift companions (i.e., neighbours with $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ within 1\,Mpc). When considering only neighbours with $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ within 1\,Mpc, we found that 105 of the 411 CIG galaxies show no similar-redshift neighbours. Only nine CIG galaxies are isolated according to these two modifications of the CIG isolation criteria (CIG 314, 451, 473, 541, 545, 608, 613, 655, and 668).
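The two isolation parameters used throughout can be sketched as follows. The exact AMIGA definitions are given in the works cited above; the conventions below ($k=5$ neighbours for $\eta_k$, mass proportional to $D^{1.5}$, and a tidal term proportional to $(M_i/M_P)(D_P/S_{iP})^3$) are assumptions for this illustration only:

```python
# Illustrative sketch of the two isolation parameters discussed in the text.
# The exact AMIGA definitions differ in detail; the conventions here
# (k = 5 neighbours, mass proportional to D^1.5, tidal term
# (M_i/M_P) * (D_P/S_iP)^3) are assumptions for this example only.
import math

def local_number_density(neighbour_distances_mpc, k=5):
    """eta_k ~ log10((k - 1) / volume enclosing the k-th nearest neighbour)."""
    d = sorted(neighbour_distances_mpc)
    k = min(k, len(d))            # fewer than k neighbours: use them all
    r_k = d[k - 1]
    volume = 4.0 / 3.0 * math.pi * r_k ** 3
    return math.log10((k - 1) / volume)

def tidal_strength(d_primary_kpc, neighbours):
    """Q ~ log10(sum over neighbours of (M_i/M_P) * (D_P/S_iP)^3).

    neighbours: list of (diameter_kpc, projected_separation_kpc) tuples;
    masses are taken proportional to D^1.5 (an assumed mass-size relation).
    """
    m_p = d_primary_kpc ** 1.5
    total = 0.0
    for d_i_kpc, sep_kpc in neighbours:
        m_i = d_i_kpc ** 1.5
        total += (m_i / m_p) * (d_primary_kpc / sep_kpc) ** 3
    return math.log10(total)

# A close, similar-size companion dominates a distant dwarf by orders of
# magnitude, which is why the tidal strength barely reacts to faint additions.
print(tidal_strength(25.0, [(25.0, 100.0)]))
print(tidal_strength(25.0, [(5.0, 800.0)]))
```

The steep $S_{iP}^{-3}$ dependence is what makes the upper-limit estimates above nearly insensitive to small, distant additions while reacting strongly to similar-size neighbours.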
\section{Summary and conclusions} \label{Sec:con} We used the SDSS-DR9 photometric and spectroscopic databases to re-evaluate the degree of isolation of 636 galaxies in the Catalogue of Isolated Galaxies \citep[CIG; ][]{1973AISAO...8....3K}. This re-evaluation using CCD images and spectra continues and improves upon the work of \citet{2007A&A...472..121V,2007A&A...470..505V}, which was based on the digitised photographic plates from POSS-1 and POSS-2. We used the SDSS-DR9 to search for neighbour galaxies within a projected physical radius of 1\,Mpc, which doubles the radius used in previous AMIGA works. We first applied the CIG isolation criteria within 1\,Mpc to the SDSS photometric database. Using the SDSS spectroscopic database, we then refined the study for 411 fields, for which more than 80\% of the extended neighbours down to $m_r<17.7$\,mag lying within a projected separation of 1\,Mpc from the CIG galaxy have a measured redshift. The isolation degree was quantified using two different and complementary parameters: the local number density $\eta_{k}$ and the tidal strength $Q$ exerted on the central CIG galaxy. A summary of the different samples used in the photometric and spectroscopic studies is shown in Table~\ref{tab:summarysamples}.
\begin{table*} \caption[Summary of samples used in the study]{\textbf{Summary of the samples used in the photometric and spectroscopic studies.}} \label{tab:summarysamples} \centering \begin{tabular}{cl} \hline \hline Number of entries & Definition of the sample \\ \hline 1050 & CIG galaxies in the original catalogue (Karachentseva 1973) \\ 799 & CIG galaxies found in the photometric catalogue of the SDSS-DR9 \\ 789 & CIG galaxies of 799 after removing 10 galaxies with unreliable photometric data from the SDSS-DR9 \\ 693 & CIG galaxies of 789 after removing 96 galaxies with $\varv < 1500$\,km\,s$^{-1}$ \\ 636 & CIG galaxies of 693 after removing 57 galaxies with a field radius of 1\,Mpc not covered in the photometric\\ & SDSS-DR9 catalogue \\ 411 & CIG galaxies of 636 of which more than 80\% have extended neighbours down to $m_r<17.7$\,mag that lie within \\ & a projected separation of 1\,Mpc from the CIG galaxy with a measured redshift \\ \hline \hline 86 & CIG galaxies of 636 without neighbours that violate Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc \\ 117 & CIG galaxies of 636 with one neighbour that violates Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc \\ 433 & CIG galaxies of 636 with more than one neighbour that violates Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc \\ 550 & CIG galaxies of 636 (117+433), with neighbours that violate Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within 1\,Mpc \\ 231 & CIG galaxies of 636 without neighbours that violate Eq.~\ref{Eq:kara1} and for which, additionally, the approximate factor 4 \\ & in size was replaced in Eq.~\ref{Eq:kara2} by a factor 3 in magnitude within 1\,Mpc \\ \hline \hline 433 & CIG galaxies of 550 with a tidal strength $Q_{\rm{Kar, p}} < -2$ \\ 340 & CIG galaxies of 550 with a tidal strength $Q_{\rm{Kar, p}} < -2$ and a local number density $\eta_{k,\rm{p}}<2.7$ \\ 426 & CIG galaxies of 636 (86+340) without neighbours that violate Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1} within
1\,Mpc \\ & or with tidal strength $Q_{\rm{Kar, p}} < -2$ and local number density $\eta_{k,\rm{p}}<2.7$ \\ \hline \hline 347 & CIG galaxies of 411 without neighbours that violate Eqs.~\ref{Eq:kara2} and \ref{Eq:kara1}, and with $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ within 1\,Mpc \\ 105 & CIG galaxies of 411 without neighbours with $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ within 1\,Mpc \\ 308 & CIG galaxies of 411 with at least one neighbour with $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ within 1\,Mpc \\ \hline \hline \end{tabular} \end{table*} Our conclusions are the following: \begin{enumerate} \item Of the 636 CIG galaxies considered in the photometric study, 426 galaxies appear to be isolated in projection: 86 CIG galaxies are isolated according to the CIG isolation criteria within a projected field radius of 1\,Mpc, and 340 appear to be only mildly affected by their environment. \item The use of the SDSS database permits the identification of faint companions that were not found in previous AMIGA papers \citep{2007A&A...470..505V}. The SDSS provides linear photometry, improved sensitivity, and better spatial resolution than digitised photographic plates. Consequently, the isolation assessment of the revised AMIGA sample is improved, which reduces the sample of isolated galaxies by about 20\%. \item On average, galaxies in the AMIGA sample show lower values of the local number density and the tidal strength parameters than galaxies in denser environments such as pairs, triplets, compact groups, and clusters. In general, however, galaxies in the studied samples show higher values of the isolation parameters than those reported by \citet{2007A&A...472..121V}. \item Of the 411 fields considered in the spectroscopic study with more than 80\% redshift completeness, 347 galaxies are isolated according to the CIG isolation criteria within a radius of 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$ with respect to the central CIG galaxy.
\item The upper-limit estimates of the isolation parameters were calculated considering photometric redshifts: 103 CIG galaxies have no neighbours within 1\,Mpc, within the specified apparent-diameter range, and with $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$. \item The spectroscopic local number density and tidal strength were calculated for 308 CIG galaxies with at least one neighbour within 1\,Mpc and $|\Delta\,\varv|~\leq~500$\,km\,s$^{-1}$. This estimate improves the quantification of the isolation degree with respect to the photometric study, which is only a rough first approximation. \item The availability of the spectroscopic data allowed us to check the validity of the CIG isolation criteria within a field radius of 1\,Mpc, which turn out not to be fully efficient. About 50\% of the neighbours considered as potential companions in the photometric study are in fact background objects. On the other hand, we also found that about 92\% of the neighbour galaxies that show recession velocities similar to the corresponding CIG galaxy are not considered by the CIG isolation criteria as potential companions. These neighbours are most likely dwarf systems, with $D_{i}\ <\ 0.25\ D_{P}$, which may have a considerable influence on the evolution of the central CIG galaxy. \end{enumerate} \begin{acknowledgements} The authors thank the referee for a very detailed and useful report, which helped to clarify and improve the quality of this work. This work has been supported by Grant AYA2011-30491-C02-01, co-financed by MICINN and FEDER funds, by the Junta de Andalucía (Spain) grants P08-FQM-4205 and TIC-114, and by the EU 7th Framework Programme in the area of Digital Libraries and Digital Preservation (ICT-2009.4.1; project reference 270192). This work was partially supported by a Junta de Andalucía Grant FQM108 and a Spanish MEC Grant AYA-2007-67625-C02-02. Funding for SDSS-III has been provided by the Alfred P.
Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, University of Florida, the French Participation Group, the German Participation Group, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. This research has made use of data obtained using, or software provided by, the UK's AstroGrid Virtual Observatory Project, which is funded by the Science and Technology Facilities Council and through the EU's Framework 6 programme. We also acknowledge the use of the STILTS and TOPCAT tools \citep{2005ASPC..347...29T}. This research made use of Python ({\tt http://www.python.org}) and of Matplotlib \citep{Hunter:2007}, a suite of open-source Python modules that provides a framework for creating scientific plots. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We acknowledge the usage of the HyperLeda database ({\tt http://leda.univ-lyon1.fr}) \citep{2003A&A...412...45P}. \end{acknowledgements}
\section{Introduction} \textit{Named Entity Recognition} (NER) is a crucial part of various \textit{Natural Language Processing} (NLP) tasks like entity linking, relation extraction, machine reading and ultimately \textit{Question Answering} (QA). With the recent rise of neural networks, much emphasis has been put on high-resource languages like English or Chinese, leading to fast advances in many foundational tasks, in particular NER, which in many areas reaches near-human performance for these languages \cite{glample2016, ouyang2017chinese}. However, for other, less-resourced languages like German, neural NER did not attract similar attention from the deep learning community, leading to performance that lags behind by a margin of up to 11\% F-score. In this paper, we look for the reasons and take steps towards resolving them. Using German as an example, we bridge the current performance gap of neural NER between languages and establish a new state of the art. We report evidence that the inferior quality of German text data and its small size are the major reasons for the observed lack of progress. To tackle this problem, we use a larger corpus for training the foundational word embeddings, namely \textit{Leipzig40} \cite{Goldhahn2012BuildingLM} (including the whole German Wikipedia up to 2016) combined with the \textit{WMT 2010 German monolingual training data} \cite{callison2010findings}, and contrast its use with the \textit{COW corpus} \cite{Schaefer2015b}, the largest collection of German texts extracted from web documents, with over 617 Mio.\ sentences. Besides, we bring together all scattered (open-source) annotated NER datasets for German that are available to date, and prepare and merge them to increase the amount of the final training data.
This includes the major NER datasets of \textit{CoNLL-2003} \cite{tjong2003introduction} and \textit{GermEval-2014} \cite{Benikova2014NoStaDNE}, and the smaller datasets of \textit{Europarl-2010} \cite{faruqui10:_training} and \textit{EuropeanaNewspapers-2016} \cite{neudecker16.110}. To this collection, we add the dataset of the T\"ubingen Treebank (\textit{T\"{u}Ba-D/Z\xspace}) \cite{telljohann2006stylebook}, which, to the best of our knowledge, is used here for the first time for the task of neural NER. Making models openly accessible is an increasingly common scientific practice. New models appear almost daily, for example in the \textit{Deep Learning} (DL) community. As a consequence, modifying existing models and trying out different hybrid setups is becoming a scientific practice involving more and more researchers. This is advantageous, since attempts to improve existing models can contribute to their validation. However, it is often forgotten that \textit{data is the gold of scientists}: it is the availability of curated resources such as CoNLL, SNLI \cite{bowman2015large} and SQuAD \cite{rajpurkar2016squad} for the tasks of NER, \textit{natural language inference} and QA that leads to significant improvements in these areas and stands behind the recent success of neural networks in NLP. Therefore, it is important to collect sufficient resources, to annotate them according to the task, and to optimize them if necessary. This is often time-consuming and costly. The present paper assesses the impact of resources on NER, using a comparatively low-resource language, German, as an example. We show the influence on the performance of neural NER of different training sets, of different combinations of these datasets and, above all, of different levels of their preprocessing.
We deal with the aspect of resource optimization with regard to lemmatization and \textit{Part-of-Speech} (POS) tagging and analyze their influence alongside the training of word embeddings and task-specific neural networks. Our main finding is: an increase in size and quality of the (task-independent) word-embedding corpus and of the (task-specific) training dataset leads to a significant improvement on sequence-labeling tasks like NER, which can be larger than an amendment of the underlying neural architecture alone. For the future of neural NER in less-resourced languages, this means that collecting unlabeled corpora for training morphology-aware, high-quality embeddings is a good way to increase the performance of downstream tasks. The remainder of the paper is organized as follows: Section 2 reviews related work, Section 3 presents a sketch of the underlying model, Section 4 describes our threefold experimental setup of a) single, b) joint, and c) resource-optimized training, Section 5 reports and discusses our results, and, finally, Section 6 draws a conclusion. \section{Related Work} Compared to high-resource languages, considerably less emphasis has been put on the task of neural NER for German. Noteworthy work has so far been done only by \cite{nreimers2014} on GermEval and by \cite{glample2016} on CoNLL; both will be used as baselines here. Reimers et al. \cite{nreimers2014} were among the first to apply neural networks to German NER. However, they did not consider GermEval in combination with CoNLL. Apart from them, the remaining studies (predominantly conducted by non-native speakers) consider this task as a by-product of dealing with various other languages. The state of the art in German neural NER was thus established by \cite{glample2016} in 2016. Gillick et al.
\cite{Gillick2016MultilingualLP} consider German as a variant in a multilingual training setup, additionally considering the datasets of two Germanic languages (English and Dutch) and one Romance language (Spanish) from the CoNLL shared task; as a result, they reach an F-score of 76.22\%. However, for the single training on the German part of CoNLL, their results stay below those of \cite{nreimers2014}. From the point of view of resource optimization, the recent work of \cite{klimek:2018:germeval:analysis} is worth mentioning. Klimek et al.\ also observe the gap between the languages and therefore carry out a detailed analysis of the difficulties of the German NER task using the GermEval dataset as an example. They come to the conclusion that \textit{``the task of German NER could benefit from integrating morphological processing''} \cite{klimek:2018:germeval:analysis}. To this end, we start our analysis and apply our morphological processing approach to all text corpora and NER datasets. \section{Model}\label{sec:Model} Our neural model consists of two separately trained components: a) foundational word embeddings, modeling general knowledge from large unlabeled text corpora, and b) task-specific neural networks, modeling domain knowledge from the labeled training data. In this section, both components are presented briefly. \paragraph{Word Embeddings} The language model of continuous-space word representations (\textit{word2vec}) \cite{mikolov2013distributed} and its variations by \cite{levy2014dependency,komninos2016dependency} are the foundation of most ongoing research in NLP with neural networks. Based on the context, the model embeds words, phrases or sentences into high-dimensional vector spaces.
In such a space, the semantics of associations of words and phrases are captured to such an extent that algebraic operations lead to meaningful relationships (e.g.\ $\text{vec(\textit{king})} - \text{vec(\textit{man})} + \text{vec(\textit{woman})} \approx \text{vec(\textit{queen})}$ \cite{mikolov2013distributed}). This property is immensely useful for our application. We use the model of \textit{word2vec} and its extension \textit{wang2vec} \cite{Ling:2015:naacl}, which exploits syntactic information and thus better suits the task of NER. \paragraph{Neural Model} We give a brief sketch of the neural model \textit{LSTM-CRF}, which we use throughout this paper. The model is similar to the one used in \cite{glample2016}, which goes back to the works of \cite{Chiu2016NamedER,Huang2015BidirectionalLM,collobert2011natural}. We use a neural model consisting of stacked LSTM and CRF layers. The \textit{base layer} is made of two parts: (i) a preprocessing sublayer generating the character-based embeddings with a cell of forward and backward LSTMs (\textit{biLSTM}) \cite{graves2013speech}, and the word embeddings from the input sentence, (ii) followed by an encoding sublayer, again with a biLSTM cell, extracting features and generating compressed hidden representations. The \textit{prediction layer} is made of CRFs and takes the previous hidden representations to finally produce the \textit{Named Entity} (NE) tag predictions. Let $ (w_1,\ldots, w_{N_s}) = [w_i]$ be the list of words of a sentence from the input corpus of texts. Furthermore, let $ (c_{i,1}, \ldots, c_{i,N_{w_i}})= [c_{i,l}] $ be the list of characters of the word $ w_i $ consisting of $ N_{w_i} $ characters with $ c_{i,l} $ being its $l$\textsuperscript{th} character.
For a given word $ w_i $ and its NE-tag (gold label) $ t_i \in $ \textit{\{PER, LOC, ORG, MISC, O\}} the data flow within the neural network is as follows: \begin{IEEEeqnarray}{c} \text{char2vec}(c_{i,l})\mapsto \vec{c_{i,l}}\\ \text{biLSTM}([\vec{c_{i,l}}]) \mapsto \vec{h^c_i}\\ \text{word2vec}(w_i)\mapsto \vec{w_i}\\ \text{biLSTM}([(\vec{w_i}, \vec{h^c_i})]) \mapsto [\vec{h^w_i}]\\ \text{CRF}([\vec{h^w_i}]) \mapsto [t_i] \end{IEEEeqnarray} where char2vec is a (randomly initialized) lookup table embedding all characters into a corresponding vector space, and $ (\vec{w_i},\vec{h^c_i}) $ is the concatenation of the embedding vector of word $w_i$ and its character-based hidden representation. The model is trained to predict the NE-tag $ t_i $ for each word after seeing the whole input sentence at once. \section{Experimental Setup} \subsection{Datasets} In order to evaluate the model of Section \ref{sec:Model} for neural NER on German data, we put emphasis on the major datasets of CoNLL (German part) and GermEval. However, more German resources are available that have so far gone unnoticed in the DL community. In Table \ref{tab-datasets-ner}, we gather all these NER datasets that are freely accessible to date and list them along with their number of sentences. Additionally, for each dataset the total number of NE tokens is provided for the four categories from the standards defined in the CoNLL shared task 2003 (CoNLL format). Table \ref{tab-datasets-ner} shows that the T\"{u}Ba-D/Z\xspace dataset is the largest of these, both in terms of the number of sentences and of tokens, ideally fitting the needs of deep neural networks.
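The final CRF step of the model described above selects the globally best tag sequence rather than labeling each token independently. A minimal Viterbi decoder over toy emission and transition scores illustrates this prediction step; it is a plain-Python sketch, not the trained LSTM-CRF (the scores would come from the learned network parameters):

```python
# Minimal Viterbi decoding for a linear-chain CRF: given per-token emission
# scores and tag-transition scores, recover the best-scoring tag sequence.
# This sketches only the prediction step of the LSTM-CRF; in the real model
# the scores come from the trained biLSTM and CRF parameters.
def viterbi(emissions, transitions, tags):
    """emissions: list of {tag: score} per token; transitions: {(prev, cur): score}."""
    # Initialise with the first token's emission scores.
    best = {t: (emissions[0][t], [t]) for t in tags}
    for em in emissions[1:]:
        new_best = {}
        for cur in tags:
            # Pick the predecessor tag maximising score-so-far + transition.
            prev = max(tags, key=lambda p: best[p][0] + transitions[(p, cur)])
            score = best[prev][0] + transitions[(prev, cur)] + em[cur]
            new_best[cur] = (score, best[prev][1] + [cur])
        best = new_best
    return max(best.values(), key=lambda sp: sp[0])[1]

tags = ["O", "B-PER", "I-PER"]
# Toy scores: "I-PER" is heavily penalised directly after "O".
transitions = {(p, c): -10.0 if (c == "I-PER" and p == "O") else 0.0
               for p in tags for c in tags}
emissions = [
    {"O": 2.0, "B-PER": 0.0, "I-PER": 0.0},   # e.g. "heute"
    {"O": 0.0, "B-PER": 2.0, "I-PER": 1.5},   # e.g. "Angela"
    {"O": 0.0, "B-PER": 0.5, "I-PER": 2.0},   # e.g. "Merkel"
]
print(viterbi(emissions, transitions, tags))
```

The transition scores are what let the CRF enforce tagging constraints (such as `I-PER` never directly following `O`) over the whole sentence, which a per-token softmax cannot do.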
\begin{table}[htbp] \caption{NER Datasets} \begin{center} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{|c|c||c|c|c|c|} \hline \textbf{Corpus} & \textbf{\textit{Sent.}}&\textbf{\textit{PER}}&\textbf{\textit{LOC}}&\textbf{\textit{ORG}}&\textbf{\textit{MISC}} \\ \hline\hline CoNLL-2003&\textcolor{white}{0}18,024&\textcolor{white}{0}\textbf{8,309}&\textcolor{white}{0}7,864&\textcolor{white}{0}7,621&\textcolor{white}{0}4,748\\ \hline Europarl-2010&\textcolor{white}{00}4,395&\textcolor{white}{00}514&\textcolor{white}{00}724&\textcolor{white}{00}874&\textcolor{white}{00}\textbf{966}\\ \hline GermEval-2014&\textcolor{white}{0}31,300&16,204&\textbf{16,675}&12,885&9,254\\ \hline Europ.Newsp.-2016&\textcolor{white}{00}8,879&\textcolor{white}{0}\textbf{7,914}&\textcolor{white}{0}6,143&\textcolor{white}{0}2,784&\textcolor{white}{0000}3\\ \hline T\"{u}Ba-D/Z\xspace-2018&\textbf{104,787}&\textbf{55,746}&28,582&32,224&12,865\\ \hline \end{tabular} } \label{tab-datasets-ner} \end{center} \end{table} \paragraph{Preprocessing of Training Data} Apart from CoNLL, most corpora had to be further processed to fit the CoNLL format. For GermEval, we consider only the top-level NEs, refraining from nested NEs to stay in line with the remaining datasets. As a tagging scheme, we preferred the BIO (IOB2) scheme, as it has been shown to perform better \cite{Reimers2017ReportingSD}. All datasets are given in the BIO scheme, except CoNLL (IOB1) and Europarl (IOB1), which we converted into the target scheme. For EuropeanaNewspapers, we take the two datasets written in standard German orthography, namely \textit{enp\_DE.lft.bio} and \textit{enp\_DE.sbb.bio}, based on historic newspapers from the Dr.\ Friedrich Tessmann Library and the Berlin State Library, respectively, and omit the Austrian historic newspapers, which use an orthography differing heavily from the former samples.
The original dataset is not provided in the 4-column CoNLL format, which lists each word of a sentence on its own line along with its lemma, POS tag and NE-label, and separates sentences by an empty line. Therefore, we convert the data into our target format by using \textit{spaCy V2.0}\footnote{http://spacy.io}, which by its recent release supports preprocessing German texts by providing language models for sentence boundary detection, lemmatization and POS tagging. For T\"{u}Ba-D/Z\xspace, we extracted the NE-tags from the \textit{tuebadz-11.0-conll2010} version. In the case of nested NEs, we use a filtering heuristic to extract the longest spanning NE, which allowed us to obtain more robust training data by not splitting well-known entities into parts (e.g.\ \textit{[Goethe Universit\"at Frankfurt]}\_ORG vs. \textit{[Goethe]}\_PER \textit{Universit\"at} \textit{[Frankfurt]}\_LOC). We converted the tagging scheme of T\"{u}Ba-D/Z\xspace to our target format. Lastly, to allow comparisons with the other NER datasets, we mapped the NE category \textit{Geo Political Entity} (GPE) to \textit{LOC}. \paragraph{Data Splitting \& Merging} For CoNLL and Germ\-Eval we use the splits as provided in the original datasets. Further, we split T\"{u}Ba-D/Z\xspace into train/dev/test sets according to the common ratio of 80/10/10 percent. Due to the smaller size of the Europarl and EuropeanaNewspapers datasets, we did not consider them for the first experimental setup of single training; rather, we merged them with the training data for the second experimental setup of joint training. For this setup, we aligned all datasets by mapping the NE category \textit{OTH} to \textit{MISC} to fit the CoNLL format.
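The IOB1-to-BIO (IOB2) scheme conversion mentioned above can be sketched as follows. In IOB1, `B-` is used only to separate two adjacent chunks of the same type, whereas BIO marks every chunk-initial token with `B-`:

```python
# Convert a tag sequence from IOB1 to BIO (IOB2).  In IOB1 a chunk may start
# with "I-"; in BIO every chunk-initial token must carry "B-".  A sketch of
# the scheme conversion applied to CoNLL and Europarl, not the exact script.
def iob1_to_bio(tags):
    bio = []
    prev = "O"
    for tag in tags:
        if tag.startswith("I-"):
            # Chunk-initial if the previous tag is "O" or of a different type.
            if prev == "O" or prev[2:] != tag[2:]:
                tag = "B-" + tag[2:]
        bio.append(tag)
        prev = tag
    return bio

print(iob1_to_bio(["I-PER", "I-PER", "O", "I-LOC", "B-LOC", "I-LOC"]))
```

The conversion is purely local (it only inspects the previous tag), which is why it can be applied line by line to CoNLL-formatted files.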
In this way, we generated the currently largest training dataset for German NER, comprising $133,258$ sentences.\footnote{CoNLL (12,152) + GermEval (24,000) + Europarl (4,395) + EuropeanaNewspaper (8,879) + T\"{u}Ba-D/Z\xspace (83,832)} \subsection{Word Embeddings} German is a highly inflected language compared to English or Chinese, whose syntax is more analytic. For languages like German, the embedding of a single word (e.g.\ \textit{klein}) is dispersed across its various morphological and spelling variants (stem: \textit{klein} $\rightarrow$ \textit{kleiner}, \textit{kleinste}, \textit{kleine}, \textit{kleines}, \textit{kleinen}, \textit{kleinem}, \textit{Klein} etc.), thereby reducing the number of its occurrences and weakening its information value if not lemmatized appropriately. Languages with a rather analytic syntax, on the other hand, show such morphological variants to a lesser extent, if at all. We assume that this difference is the reason why their embeddings are of higher quality and why their performance in downstream tasks is considerably higher than for less analytic languages. \begin{table}[htbp] \caption{Text Corpora} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Corpus} & \textbf{\textit{Sentences}} \\ \hline\hline Leipzig40-2018&\textcolor{white}{0}40.00 Mio.\\ WMT-2010-German&\textcolor{white}{0}19.36 Mio.\\ \hline COW-2016&617.28 Mio.\\ \hline \end{tabular} \label{tab-datasets-w2v} \end{center} \end{table} To mitigate this negative effect for the German language, we therefore have to use embeddings of higher quality. In the experimental setup of single training, we tackle this by using more text data. Table \ref{tab-datasets-w2v} lists the corpora we use for training our word embeddings.
Leipzig40-2018 contains the largest possible extract from the so-called Leipzig Corpora Collection in 2018, which was generated by its maintainers on demand for our study, omitting any possible duplicate sentences. To increase the corpus size, we combine this extract with WMT-2010-German, forming our so-called \textit{LeipzigMT} corpus. Besides, we consider the COW-2016 corpus, arguably the largest text collection for German. This corpus is not limited to textbook-like language, as found for example in Wikipedia. Therefore, we assume that it fits well with the NER datasets used here, which in turn come from various sources (news, web, wikis, etc.). Both corpora are already preprocessed and split into sentences, containing words, numbers and punctuation marks. We do not remove punctuation marks, but separate them from words and numbers by surrounding them with spaces to avoid introducing variants of words with attached punctuation. In addition, as a preprocessing step, we write all words in lowercase to account for spelling and morphological variations. In a third variant of our experiment, we deepen the optimization of resources by taking lemmatization and POS tagging into account in combination with lowercasing. While lemmatization increases the observation frequency of words, POS tagging allows a more precise specification of their syntactic roles in sentences and consequently differentiates individual observations that enter the calculation of the embeddings. Lowercasing, in turn, removes ambiguities that are induced in German especially by capitalization at the beginning of sentences. Table \ref{tab-w2v-params} shows the variants we use for this setup. We apply lemmatization and POS tagging in combination with lowercasing to all resources before they are used in training.
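The punctuation separation and lowercasing described above can be sketched as follows; this is an illustrative normalisation, not the exact preprocessing pipeline:

```python
# Sketch of the corpus normalisation described above: lowercase all words
# and put spaces around punctuation so that "Haus." and "Haus" do not end
# up as two distinct vocabulary entries.  Illustrative, not the exact pipeline.
import re

PUNCT = re.compile(r"([^\w\s])")  # any punctuation-like character

def normalise(sentence):
    spaced = PUNCT.sub(r" \1 ", sentence)    # separate punctuation marks
    return " ".join(spaced.lower().split())  # lowercase, squeeze whitespace

print(normalise("Kleine Kinder sind mutiger."))
```

Because `\w` matches Unicode word characters by default in Python 3, umlauts in words like \textit{mutiger} are left untouched while the sentence-final period is split off.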
% These conversions are coupled with an identical transformation of the NER datasets in the respective experiment, to avoid mismatches and to increase the overlap with the trained embeddings. Again, we use spaCy for these tasks, relying on its language models for lemmatization and POS tagging. Listing \ref{lst:lemma-pos} shows an example of this approach. % \begin{lstlisting}[frame=single, basicstyle=\scriptsize, caption=Example for Lemma \& POS, label=lst:lemma-pos] raw sentence : Kleine Kinder sind mutiger. lemma : Klein Kind sein mutig . lemmapos : Klein_ADJA Kind_NN sein_VAFIN mutig_ADJD ._$ lemmapos_lower: klein_ADJA kind_NN sein_VVFIN mutig_ADJD ._$ \end{lstlisting} % These conversions are intended to standardize the text input and thus to address the problems with morphological variation mentioned above. \begin{table}[htbp] \caption{Embedding Variants per Experimental Setup} \begin{center} \begin{tabular}{|c||c|c|} \hline \textbf{Experimental Setup} & \textbf{\textit{Variant}}& \textbf{\textit{Features}} \\ \hline\hline \textbf{Single Training}&1&lower\\ \textbf{Joint Training}&1&lower\\ \hline &2&lemma\\ \textbf{Optimized}& 3& lemma\_lower\\ \textbf{Training}& 4& lemmapos\\ &5& lemmapos\_lower\\ \hline \end{tabular} \label{tab-w2v-params} \end{center} \end{table} \subsection{Training Parameters} To remain comparable with the baseline models on CoNLL \cite{glample2016} and GermEval \cite{nreimers2014}, we train the word embeddings with dimension 100\footnote{Lample et al.\ \cite{glample2016} use dimension 100 for English, but 64 for German. We increase this dimension to close the gap.}, a window size of 8 and a minimum word count threshold of 4, consequently setting the LSTM dimension to 100 as well\footnote{For word2vec, we performed an extensive search on numerous embeddings with dimension values $ (50,100,150,200,300) $ along with minimum word count threshold and window size values in the range of $ [4,200] $ and $ [5,10] $, respectively.
However, no major differences were observed in the final results.}. We choose dimension 25 for the character-based embeddings and the final CRF layer, and train the network for 100 epochs with a batch size of 1 and a dropout rate of 0.5. As the optimization method, we use stochastic gradient descent with a learning rate of 0.005. Apart from setting the LSTM dimension to 300 when using the 300-dimensional pretrained German fastText embeddings \cite{bojanowski2017enriching}, the model is fixed to these settings throughout our experiments. Any further sophisticated hyperparameter tuning (e.g.\ \textit{Population Based Training}) is left for future work. \section{Results} In this section, we present the results we obtained for our three experimental settings. % As described in \cite{Reimers2017ReportingSD}, we perform every experiment up to 6 times, starting from different random seeds, in order to arrive at statistically meaningful final values on the respective test dataset. We evaluate the NER results using the official evaluation script from the CoNLL 2003 shared task. All our experiments were run on Nvidia \textit{GTX 1080 Ti} GPUs. \subsection{Single Training} We compare our results with the current top performing models on CoNLL and GermEval. Table \ref{tab-single-training} shows the best results we achieve in the single training setup (first experimental setting).
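The lemma/POS input variants shown in Listing~\ref{lst:lemma-pos} are plain string transformations of tagger output. A minimal Python sketch, assuming (lemma, tag) pairs from a tagger such as spaCy (the tag values used below are illustrative, STTS-like):

```python
def lemma_pos_variants(tokens):
    """Build the embedding-input variants from (lemma, tag) pairs.

    `tokens` is a list of (lemma, tag) tuples as produced by a POS
    tagger; the tag set is illustrative, not a fixed inventory.
    """
    lemma = " ".join(l for l, _ in tokens)
    lemmapos = " ".join(f"{l}_{t}" for l, t in tokens)
    return {
        "lemma": lemma,
        "lemma_lower": lemma.lower(),
        "lemmapos": lemmapos,
        "lemmapos_lower": " ".join(f"{l.lower()}_{t}" for l, t in tokens),
    }
```

Note that only the lemma is lowercased in the `lemmapos_lower` variant; the POS tag is kept as-is so that the syntactic role stays distinguishable.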
\begin{table}[htbp] \caption{Single Training} \begin{center} \begin{tabular}{|c|c|c|l|} \hline \textbf{Data} & \textbf{Embeddings}& \textbf{Features}& \textbf{\textit{F-score} [\%]} \\ \hline \hline CoNLL&\textit{pre-trained \textbf{Leipzig}}&wang2v&78.76 \cite{glample2016}\\ \hline GermEval&\textit{pre-trained \textbf{UKP2014}}&word2v&75.9\textcolor{white}{0} \cite{nreimers2014}\\ \hline\hline CoNLL&\textit{self-trained \textbf{LeipzigMT}}&wang2v&80.81\\ CoNLL&\textit{self-trained \textbf{COW}}&wang2v&\textbf{83.29}\\ \hline GermEval&\textit{self-trained \textbf{LeipzigMT}}&wang2v&81.97\\ GermEval&\textit{self-trained \textbf{COW}}&wang2v&\textbf{83.14}\\ \hline T\"{u}Ba-D/Z\xspace&\textit{self-trained \textbf{LeipzigMT}}&wang2v&88.95\\ T\"{u}Ba-D/Z\xspace&\textit{self-trained \textbf{COW}}&wang2v&\textbf{89.26}\\ \hline \end{tabular} \label{tab-single-training} \end{center} \end{table} We achieve an improvement throughout the datasets, outperforming all previous results on German neural NER and establishing a new state-of-the-art on each of them. Increasing the corpus size by means of the LeipzigMT corpus already yields a clear performance increase over the CoNLL baseline. Increasing the corpus size further through the COW corpus finally gives us the best results on CoNLL. From this perspective, looking at the three data points for CoNLL (or GermEval), we observe a logarithmic growth of the F-score as a function of the size of the underlying embedding corpus. Corpora even larger than COW would be needed to further support this observation. On the side of the training data, we observe a similar but stronger effect. On LeipzigMT, increasing the training data size from CoNLL to GermEval, and then to T\"{u}Ba-D/Z\xspace, leads to improvements of +1.16\% and +6.98\% in F-score. For COW this behavior re-emerges for T\"{u}Ba-D/Z\xspace, closing the gap to high-resource languages like English and almost crossing the 90\% barrier on T\"{u}Ba-D/Z\xspace.
Moreover, we see that performance on the larger training dataset T\"{u}Ba-D/Z\xspace depends only weakly on the embedding corpus size, implying that it is beneficial to invest in annotation efforts. We also find that wang2vec generally performs better than word2vec. This shows that a task-specific embedding algorithm is important (in our case, one that takes syntax into account for NER). Last but not least, our experiments show that keeping capitalization information can even degrade the quality of word embeddings. Likewise, we observe that integrating capitalization information as an additional input feature to our neural network does not lead to better results. We assume that this is due to the orthography of German, in which all nouns are capitalized, in contrast to English, where mainly proper names (named entities) are written this way. \subsection{Joint Training} As a first step towards joint training, we report the best results for fastText embeddings and compare them to UKP2014 embeddings, using only the two datasets from the baseline models. Next, we approach the full joint setup and perform the training on all German NER datasets. Based on the results of the last section, we consider only COW for this setup. Table \ref{tab-joint-training} shows the top results for this setup. For fastText, we get the best results among all settings we examined (the results on single training were worse than for this setup). However, they are still below the ones with UKP2014, which themselves were trained with the original word2vec model back in 2014. This shows that the fastText algorithm, although a promising extension of word2vec, is not well suited to our NER task, even though it uses a more informative vector space with 300 dimensions. Hence, we discard it for further experiments. For COW, the transfer learning on a single task works well and the performance for CoNLL and GermEval is improved further, lying slightly above the single training values.
It can be noted that the final joint performance is drawn towards the lower single-training values. We assume that it depends more on the datasets with the lower single training performance (which make up $\sim37$\% of the joint training dataset), since the data merging introduces additional variety into the final training dataset. This makes the task more difficult and brings it closer to a real-world scenario. Still, the slightly improved performance indicates that the neural network is generalizing, successfully performing \textit{task-related transfer learning across datasets}, i.e.\ the model improves on the same task on a heterogeneous dataset, given that it performs well on a single large homogeneous dataset. Overall, the results are promising; they indicate that we have a good candidate for applying a jointly trained tagger to large resources where labeled data is scarce. \begin{table}[htbp] \caption{Joint Training} \begin{center} \begin{tabular}{|c|c|c|l|} \hline \textbf{Data} & \textbf{Embeddings}& \textbf{Features}& \textbf{\textit{F-score} [\%]} \\ \hline \hline CoNLL+GermEval&\textit{pre-trained \textbf{UKP2014}}&word2v&78.06\\ CoNLL+GermEval&\textit{pre-trained \textbf{fastText}}&300dim&77.00\\ \hline \hline \textbf{all}&\textit{self-trained \textbf{COW}}&wang2v&\textbf{83.47}\\ \hline \end{tabular} \label{tab-joint-training} \end{center} \end{table} \subsection{Resource Optimization via Lemmatization \& POS tagging} In this final setup of resource optimization, we examine various constellations. Table \ref{tab-lemma-pos-result} reports the corresponding list of results.
\begin{table}[htbp] \caption{Optimized Training via Lemma \& POS} \begin{center} \begin{tabular}{|c|c|c|l|} \hline \textbf{Data} & \textbf{Embeddings}& \textbf{Features}&\textbf{\textit{F-score} [\%]} \\ \hline \hline &\textit{\textbf{LeipzigMT}}&lemma&82.57\\ &\textit{\textbf{LeipzigMT}}&lemma\_lower&82.94\\ &\textit{\textbf{LeipzigMT}}&lemmapos&81.22\\ CoNLL&\textit{\textbf{LeipzigMT}}&lemmapos\_lower&81.20\\ &\textit{\textbf{COW}}&lemma&\textbf{83.64}\\ &\textit{\textbf{COW}}&lemma\_lower&83.14\\ &\textit{\textbf{COW}}&lemmapos&82.38\\ &\textit{\textbf{COW}}&lemmapos\_lower&82.47\\ \hline &\textit{\textbf{LeipzigMT}}&lemma&82.53\\ &\textit{\textbf{LeipzigMT}}&lemma\_lower&82.47\\ &\textit{\textbf{LeipzigMT}}&lemmapos&81.46\\ GermEval&\textit{\textbf{LeipzigMT}}&lemmapos\_lower&81.05\\ &\textit{\textbf{COW}}&lemma&\textbf{82.87}\\ &\textit{\textbf{COW}}&lemma\_lower&82.53\\ &\textit{\textbf{COW}}&lemmapos&81.96\\ &\textit{\textbf{COW}}&lemmapos\_lower&81.38\\ \hline &\textit{\textbf{LeipzigMT}}&lemma&88.50\\ &\textit{\textbf{LeipzigMT}}&lemma\_lower&88.27\\ &\textit{\textbf{LeipzigMT}}&lemmapos&87.85\\ T\"{u}Ba-D/Z\xspace&\textit{\textbf{LeipzigMT}}&lemmapos\_lower&87.83\\ &\textit{\textbf{COW}}&lemma&89.08\\ &\textit{\textbf{COW}}&lemma\_lower&\textbf{89.24}\\ &\textit{\textbf{COW}}&lemmapos&88.43\\ &\textit{\textbf{COW}}&lemmapos\_lower&88.02\\ \hline \end{tabular} \label{tab-lemma-pos-result} \end{center} \end{table} Intuitively, using POS-tagged sentences for training word embeddings may appear unusual; the results, however, show a different picture. We obtain results very close to the top performances of the previous sections, and a common pattern emerges across all experiments. The lemmatization variant on COW consistently delivers top scores for the three major datasets, and even produces the highest value for CoNLL across all setups. Lemmatization alone performs better than lemmatization combined with POS tagging.
This shows that dispersing the semantics of a given word across the various roles it can take does not improve the quality of the final embeddings. Rather, it is better to reduce the (redundant) variety in the vector space by first collapsing all morphological variants into a common base form, which is then mapped to a single semantic vector. Once lemmatization is performed, we see that lowercasing does not lead to a notable further improvement. We assume that lemmatization already filters the raw text well, making lowercasing almost ineffective. Regarding the size of the corpus used for generating the word embeddings, we come to the conclusion that lemmatization and POS tagging reduce the performance differences observed in the previous sections, which so far depended on that size. This confirms our assumption that the word2vec algorithm in its original form is not well suited to morphologically rich languages. The results of this setup show that the values for LeipzigMT and COW now lie closer to each other, making the performance to some extent independent of the size of the embedding corpus. This is an important finding, giving rise to promising opportunities and applications for low-resource languages. \section{Conclusion \& Future Work} In this paper, we performed a far-reaching study of neural NER using German as an example of a low-resource language. The study focused on a monolingual experimental setup. Nevertheless, the improved results pave the way for related languages with characteristics similar to German. There are various ways to improve existing neural models. Instead of just designing deeper and wider hybrid models, we showed the high importance of gathering and merging resources and how their careful optimization can drive further progress.
In particular, we found that increasing the size and improving the quality of the raw corpora for word embeddings by applying morphological processing such as lemmatization \& POS tagging leads to meaningful improvements. In addition, we demonstrated the effect of transfer learning by merging datasets for a joint training setup, which also produced good results and makes this approach a promising candidate for NER applications in settings where annotated data is scarce. Overall, we conducted the first comprehensive study of German NER on all existing training datasets and resources, including the study of common pre-trained embeddings such as fastText. % In this context, we established a new state-of-the-art using all open source datasets for German NER, which exceeds the 80\% F-score limit for German NER and closes the gap to other high-resource languages such as English. For future work we plan to further refine the training process of word embeddings and in particular to investigate how the performance of downstream tasks can become more independent of the size of embedding corpora using linguistic methods such as lemmatization and POS tagging. % To this end, we intend to examine the recently published ELMo embeddings \cite{Peters:2018} for German. % Finally, we will examine the role of the multilingual COW corpus for word embeddings using other languages such as Dutch, French, Spanish and English. \section*{Acknowledgment} This work was funded by the German Research Foundation (DFG) as part of the \textit{BIOfid} project (DFG-326061700). We plan to upload our source code and the trained embeddings to GitHub for the research community. Special thanks goes to G.\ Lample for his directions on the procedure for training the embeddings, and to Prof.\ G.\ Heyer and F.\ Helfer for providing the extract of the Leipzig40-2018 corpus. \bibliographystyle{IEEEtran}
\section{introduction}\label{sec:intro} A massive perturber moving through some background medium creates in it a density wake. The wake trails the perturber and exerts on it a gravitational force in the direction opposite to the motion, thus acting as a brake and earning this interaction a name: ``dynamical friction." In his seminal work, \citet{chandra43} studied the efficiency of dynamical friction for a massive perturber moving through a stellar background and found the drag force to be maximized when the velocity of the perturber is comparable to the velocity dispersion of stars, $v\approx \sigma$. \citet{ostriker99} later evaluated the dynamical friction force acting on a perturber traveling through a uniform gaseous medium and found that the gaseous drag is more efficient than stellar drag for transonic perturbers. This work also established that the gaseous drag operates most efficiently when $1 < \mathcal{M} \lesssim {\rm few}$, where the Mach number $\mathcal{M} \equiv v/c_{s,\infty}$ is defined as the ratio between the perturber velocity $v$ and the sound speed of the ambient gas ``at infinity" $c_{s,\infty}$, i.e., in a distant region unaffected by the perturber. In this mildly supersonic regime, the dynamical friction force takes the form \begin{equation} F_{\rm DF} = - \frac{4\pi(GM_{\rm BH})^2 \rho_\infty}{v^2} \ln{\left[\Lambda \left( 1-\frac{1}{\mathcal{M}^2}\right)^{1/2}\right]} \label{eq_Fdf} \end{equation} where $M_{\rm BH}$ is the mass of the perturber (hereafter assumed to be a black hole; BH) and $\rho_\infty$ is the density of the gas at infinity. $\ln \Lambda= \ln{(r_{\rm max} / r_{\rm min})}$ is the Coulomb logarithm, where $r_{\rm min}$ and $r_{\rm max}$ represent the smallest and largest spatial scales in the gas wake that contribute to dynamical friction, respectively.
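Equation~(\ref{eq_Fdf}) is straightforward to evaluate numerically. The Python sketch below (with fiducial values $M_{\rm BH} = 10^6\,M_\sun$, $n_\infty = 1\,{\rm cm^{-3}}$, $c_{s,\infty} = 9.1\,{\rm km\,s^{-1}}$, and an assumed Coulomb factor $\Lambda = 100$, none of which are prescribed at this point in the text) illustrates how the supersonic drag declines as $v^{-2}$ well above the sound speed:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30  # solar mass, kg

def f_df(mach, m_bh=1e6 * MSUN, rho=1.67e-21, cs=9.1e3, lam=100.0):
    """Magnitude of the supersonic gaseous drag of Eq. (1), in SI units.

    Fiducial values (assumed for illustration): M_BH = 1e6 Msun,
    rho = n_inf * m_p with n_inf = 1 cm^-3, c_s = 9.1 km/s, and an
    assumed Coulomb factor lam. Valid only for mach > 1.
    """
    v = mach * cs
    return (4.0 * math.pi * (G * m_bh) ** 2 * rho / v**2
            * math.log(lam * math.sqrt(1.0 - 1.0 / mach**2)))
```

With these values the drag at $\mathcal{M}=2$ exceeds that at $\mathcal{M}=4$, reflecting the $v^{-2}$ suppression at high speeds.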
Based on Equation~(\ref{eq_Fdf}), it follows that the effect of dynamical friction is expected to be stronger for massive perturbers, given that they can interact with a sufficiently dense pool of ambient gas. Because of its efficiency, gaseous drag has been extensively investigated in simulations that follow the pairing of massive black holes (MBHs) in gas-rich mergers of galaxies \citep[see][for a review]{mayer13}. The most important questions for this research area are: in which galaxies does gaseous dynamical friction lead to successful gravitational pairing of MBHs, and on what timescales? \cite{callegari09,callegari11}, for example, find that MBH pairs with mass ratios $q<0.1$ are less likely to form gravitationally bound binaries within a Hubble time, largely due to the inefficiency of the gas drag on the smaller of the two MBHs. When pairing is successful, gaseous dynamical friction is capable of transporting the MBHs from galactic radii of a few hundred pc to $\sim1$\,pc on timescales as short as $\sim10^7$\,yr, tens of times faster than stellar dynamical friction \citep[e.g.,][]{escala05, dotti06, mayer07}. Interestingly, the gravitational interaction of the MBH with the surrounding gas is quite local. For example, for a BH with mass $\sim 10^6\,M_\sun$ most of the gravitational drag force is contributed by the gas that resides within only a few parsecs of the MBH \citep{chapon13}. This proximity implies that the dynamical friction wake can be strongly affected, and possibly obliterated, by irradiation and feedback from an accreting MBH.
\khp{Indeed, studies of the dynamical evolution of MBHs find that dynamical friction can be significantly reduced by the dispersal of the wake caused by purely thermal feedback from MBHs, both in simulations that follow gravitationally recoiled MBHs \citep{Sijacki:11} and in simulations of BH pairing \citep{SouzaLima:2016}, where the term ``wake evacuation" was coined.} These results bring into question the assumed efficacy of gaseous dynamical friction and call for further exploration of the effects of radiative feedback powered by MBH accretion. Here, we investigate this question by considering the interaction of matter and radiation in the gravitational potential well of a moving MBH. The most important finding of this work is that there are regimes, set by the properties of the MBH and its ambient medium, in which gas dynamical friction is rendered inefficient by the MBH feedback. We lay out the relevant physical regimes and scales in Section~\ref{sec:regimes} and evaluate the efficiency of dynamical friction using local radiation hydrodynamic simulations in Section~\ref{sec:method}. We discuss implications of our findings and conclude in Section~\ref{sec:conclusions}.
\begin{table*} \begin{center} \caption{Simulation Parameters} \begin{tabular}{ccccc} \hline \hline ID & $N_{\rm r}\times{N}_{\theta}$ & $\Delta r/r$ & $\mathcal{M}$ & $t_{\rm end}\,$(Myr) \\ \hline HR & 400$\times$128 & 0.026 & 0.5, 1.0, 2.0 & 201.0, 74.9, 74.9 \\ MR & 300$\times$96 & 0.035 & 0.5, 1.0, 2.0 & 201.0, 201.0, 201.0 \\ LR & 200$\times$64 & 0.053 & 0.5, 1.0, 1.5, 2.0 & 201.0, 201.0, 201.0, 201.0 \\ \hline \end{tabular} \label{table:para} \end{center} \end{table*} \section{Regimes of dynamical friction in the presence of radiative feedback}\label{sec:regimes} Photoionization and radiation pressure exerted by radiation escaping from the innermost parts of the BH accretion flow can create an ionized region (the Str\"omgren sphere or \rm H\,{\textsc{ii}}~region) around the BH and strongly alter the properties of the surrounding gas. This radiation ``response", also referred to as the BH radiative feedback, has been found to lower the accretion rate by orders of magnitude relative to a fiducial case in which radiative feedback is neglected \citep{MiloCB:09,Li:11,ParkR:11,ParkR:12, PacucciF:2015, Park:2016}. In the context of dynamical friction, this accretion regime is also likely to correspond to a reduced efficiency of the gas drag, due to the impact of ionizing radiation on the BH's density wake. \citet{Park:2014a} and \cite{Inayoshi:2016}, however, find a physical regime at higher gas densities in which the nature of accretion onto the BH fundamentally changes from this picture, as the \rm H\,{\textsc{ii}}~region collapses under the gravity of the surrounding gas and accretion continues, unaffected by radiation, at super-Eddington rates. The two studies differ in their treatment of radiation transport: the latter considers the effects of radiation trapping at high accretion rates \citep{Begelman:78, Begelman:79} while the former neglects them.
Both nevertheless indicate that conditions for hyper-accretion arise for a BH immersed in neutral gas with number density $n_\infty$ when $n_\infty\,M_{\rm BH} > 10^9 - 10^{10} M_\sun\,{\rm cm^{-3}}$. This finding implies that the density distribution of gas in the hyper-accretion regime is only weakly affected by radiation pressure, and thus the efficiency of dynamical friction is likely to be restored to that predicted classically, i.e., in the absence of radiative feedback. For moving BHs, the accretion rate, and thus the accretion luminosity, also depends on the BH velocity. We draw on the results of \citet[][hereafter PR13]{ParkR:13}, who used radiation hydrodynamic simulations to investigate the growth and luminosity of BHs moving through a uniform gaseous medium with temperature $T_\infty = 10^4$\,K. PR13 show that in this case the BH radiative feedback causes the formation of an \rm H\,{\textsc{ii}}~region elongated along the direction of the BH motion and filled with ionized hydrogen (or H--He) gas of temperature $T_{\rm in} \approx 4\times 10^4$\,K ($6\times 10^4$\,K, respectively). A shell of gas with increased density, corresponding to $(1+\mathcal{M}^2)\, n_\infty$, forms in front of the BH as a consequence of the ``snowplow" effect caused by radiation pressure. For a wide range of simulated gas densities the shell becomes gravitationally unstable and collapses when $\mathcal{M} \gtrsim 4$, restoring the properties of the gas flow around the BH to the classical Bondi--Hoyle--Lyttleton solution \citep{HoyleL:39,BondiH:44}. While PR13 did not explicitly investigate the properties of the gas wake in their simulations, it follows that in this regime the efficiency of dynamical friction is restored to its classically derived value.
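The two boundaries just described (hyper-accretion at high $n_\infty M_{\rm BH}$, shell collapse at $\mathcal{M} \gtrsim 4$) can be summarized in a small helper. This is only an illustrative restatement of the quoted criteria, adopting the conservative $10^9\,M_\sun\,{\rm cm^{-3}}$ end of the hyper-accretion threshold:

```python
def radiative_feedback_regulates_df(m_bh_msun, n_cm3, mach):
    """True if radiative feedback is expected to suppress the DF wake.

    Illustrative summary of the regime boundaries quoted in the text;
    thresholds are approximate and the 1e9 Msun cm^-3 value is the
    conservative end of the quoted range.
    """
    if m_bh_msun * n_cm3 >= 1e9:
        # Hyper-accretion: the H II region collapses and classical
        # (feedback-free) dynamical friction is restored.
        return False
    if mach >= 4.0:
        # The dense shell in front of the BH collapses, restoring the
        # Bondi--Hoyle--Lyttleton flow and the classical drag.
        return False
    return True
```

For example, a $10^6\,M_\sun$ BH moving at $\mathcal{M}=2$ through gas with $n_\infty = 1\,{\rm cm^{-3}}$ falls in the feedback-regulated regime, while the same BH in gas with $n_\infty = 10^4\,{\rm cm^{-3}}$ does not.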
We therefore focus on scenarios described by $1 \le \mathcal{M} < 4$, when gas dynamical friction is classically expected to be most efficient, and $(1+\mathcal{M}^2)\, M_{\rm BH}\,n_\infty < 10^9\, M_\sun\,{\rm cm^{-3}}$, when the influence of the BH radiative feedback on the dynamical friction wake is expected to be significant. Having established the relevant regime, we estimate the extent of the dynamical friction wake and compare it to the size of the \rm H\,{\textsc{ii}}~region. \subsection{Relevant scales} The extent of the dynamical friction wake that would form behind the MBH in the absence of radiative feedback can be estimated as the radius of the gravitational influence of the MBH \begin{equation} R_{\rm DF} \approx \frac{GM_{\rm BH}}{c_{\rm s,\infty}^2} = 52.1\,{\rm pc} \left(\frac{T_{\infty}}{10^4\,{\rm K}}\right)^{-1} \left(\frac{M_{\rm BH}}{10^6\,M_\odot} \right). \label{eq:Rw} \end{equation} Here $c_{s,\infty} = \sqrt{{\gamma k_{\rm B} T_\infty / \mu m_p}} = 9.1\,{\rm km\,s^{-1}} (T_\infty/10^4\,{\rm K})^{1/2}$, assuming isothermal gas of hydrogen composition characterized by the adiabatic index $\gamma =1$ and mean atomic weight $\mu =1$. The constants have their usual meaning. As mentioned in Section~\ref{sec:intro}, most of the gravitational drag force is contributed by gas residing within a region an order of magnitude smaller than that estimated in equation~(\ref{eq:Rw}), so this value can be considered a conservative upper limit. The size of the elongated \rm H\,{\textsc{ii}}~region sensitively depends on the MBH accretion rate and velocity. For the purpose of the estimates presented here, we neglect the elongation of the \rm H\,{\textsc{ii}}~region and estimate the radius of the spherically symmetric ionization sphere that would form around a stationary MBH accreting hydrogen gas, $R_{\rm HII} = (3\dot{N}/4\pi \alpha_{\rm rec})^{1/3} n_{\infty}^{-2/3}$ \citep{osterbrock06}.
The number of ionizing photons per unit time emitted from the innermost parts of a BH's accretion flow is \begin{equation} \dot{N} = \int_{\nu_0}^\infty \frac{L_\nu}{h\nu} d\nu \approx \frac{\alpha - 1}{\alpha} \left(\frac{L_{\rm bol}}{h\nu_0}\right) \label{eq:Ndot} \end{equation} where the emission is characterized by the bolometric luminosity $L_{\rm bol} = \int_0^\infty L_\nu d\nu$ and $L_\nu \propto \nu^{-\alpha}$. Here, $\alpha =1.5$ is the spectral index representative of the spectral energy distribution (SED) of active galactic nuclei (AGN) in the high accretion state, and $h\nu_0 = 13.6\,$eV for hydrogen. \khp{$\alpha_{\rm rec} = 4\times 10^{-13}{\rm cm^3\,s^{-1}}$ corresponds to the Case A recombination coefficient for hydrogen gas at the temperature $T_\infty = 10^4$\,K, used here so as to match the simulations described in the next section. Note that the Case B coefficient is a more appropriate choice for the high gas densities in this problem, which would result in a small change, the Str\"{o}mgren radius being proportional to $\alpha_{\rm rec}^{-1/3}$.} We consider accretion-powered BH luminosity, $L_{\rm bol} = \eta \dot{M} c^2$, and assume a radiative efficiency $\eta =0.1$ \citep{ShakuraS:73}. \begin{figure}[t] \epsscale{1.15} \plotone{f1.pdf} \caption{Snapshots of gas number density at different times for the HR run with $\mathcal{M}=0.5$. The flow of gas is from top to bottom and the BH is located at $(x,y)= (0,0)$ in each panel. The shape of the \rm H\,{\textsc{ii}}~region and dynamical friction wake reach steady state after 150\,Myr in this simulation. } \label{fig:snapshot_m05} \end{figure} These ingredients provide an estimate for the size of the \rm H\,{\textsc{ii}}~region, given a known MBH accretion rate.
In the case of moving MBHs with the Mach number in the range $1 < \mathcal{M} < 4$, PR13 find that the accretion rate mediated by radiation feedback can be expressed as \begin{equation} \dot{M} \approx 1.2\times 10^{-2} \mathcal{M}^2 \dot{M}_{\rm B} \left(\frac{T_\infty}{T_{\rm in}} \right)^{5/2} \label{eq:Mdot} \end{equation} where $\dot{M}_{\rm B}$ is the nominal Bondi accretion rate for isothermal gas \citep{BondiH:44} \begin{equation} \begin{split} &\dot{M}_{\rm B}= \frac{\pi e^{3/2} \rho_\infty (GM_{\rm BH})^2}{c_{s, \infty}^3} \\ &=8.7\!\times\!10^{-3}\! \left(\frac{n_\infty}{1\,{\rm cm^{-3}}} \right) \left(\frac{T_\infty}{10^4\,{\rm K}} \right)^{-\frac{3}{2}}\! \left(\frac{M_{\rm BH}}{10^6 M_\odot} \right)^2\, M_\sun\,{\rm yr}^{-1}. \label{eq:M_B} \end{split} \end{equation} Combining Equations~(\ref{eq:Ndot})--(\ref{eq:M_B}), we obtain the estimate for the radius of the \rm H\,{\textsc{ii}}~sphere \begin{equation} \small{ R_{\rm HII} \approx 430\,{\rm pc}\; \mathcal{M}^{2/3} \left(\frac{n_\infty}{1\,{\rm cm^{-3}}} \right)^{-\frac{1}{3}}\! \left(\frac{T_{\rm in}}{4\!\times\!10^4\,{\rm K}} \right)^{-\frac{1}{2}}\! \left(\frac{M_{\rm BH}}{10^6 M_\odot} \right)^{\frac{2}{3}}. } \label{eq:R_HII} \end{equation} Considering the ratio of Equations~(\ref{eq:R_HII}) and (\ref{eq:Rw}) we get \begin{equation} \small{ \frac{R_{\rm HII}}{R_{\rm DF}} \sim 8 \mathcal{M}^{\frac{2}{3}} \left(\frac{T_\infty}{10^4\,{\rm K}} \right) \left(\frac{T_{\rm in}}{4\!\times\!10^4\,{\rm K}} \right)^{-\frac{1}{2}} \left(\frac{M_{\rm BH} n_\infty}{10^6 M_\odot\, {\rm cm^{-3}}} \right)^{-\frac{1}{3}} } \end{equation} where we find that $R_{\rm HII} \gtrsim R_{\rm DF}$ for all values of the Mach number in the range $1 < \mathcal{M} < 4$ and when $(1+\mathcal{M}^2)\, M_{\rm BH}\,n_\infty < 10^9\,M_\odot\,{\rm cm^{-3}}$. 
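As a quick consistency check, the characteristic numbers in Equations~(\ref{eq:Rw}) and (\ref{eq:M_B}) can be reproduced directly from the definitions with rounded physical constants:

```python
import math

# Physical constants and unit conversions (SI, rounded)
G = 6.674e-11     # gravitational constant
K_B = 1.381e-23   # Boltzmann constant
M_P = 1.673e-27   # proton mass
MSUN = 1.989e30   # solar mass, kg
PC = 3.086e16     # parsec, m
YR = 3.156e7      # year, s

def sound_speed(t_inf=1e4):
    # Isothermal sound speed for gamma = mu = 1, in m/s
    return math.sqrt(K_B * t_inf / M_P)

def r_df(m_bh=1e6 * MSUN, t_inf=1e4):
    # Eq. (2): radius of gravitational influence, in pc
    return G * m_bh / sound_speed(t_inf) ** 2 / PC

def mdot_bondi(m_bh=1e6 * MSUN, n_inf=1e6, t_inf=1e4):
    # Eq. (5): isothermal Bondi rate, in Msun/yr (n_inf in m^-3,
    # i.e. 1e6 m^-3 corresponds to 1 cm^-3)
    rho = n_inf * M_P
    cs = sound_speed(t_inf)
    return math.pi * math.e**1.5 * rho * (G * m_bh) ** 2 / cs**3 / MSUN * YR
```

For the fiducial parameters this recovers $c_{s,\infty} \approx 9.1\,{\rm km\,s^{-1}}$, $R_{\rm DF} \approx 52.1$\,pc, and $\dot{M}_{\rm B} \approx 8.7\times10^{-3}\,M_\sun\,{\rm yr^{-1}}$, matching the coefficients quoted in the text.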
It follows that in this regime the dynamical friction wake is likely to be fully ionized and dispersed by the MBH radiative feedback, especially given the proximity of the wake mentioned before. In the next section we examine the detailed structure of the wake and the \rm H\,{\textsc{ii}}~region, and numerically evaluate the resulting dynamical friction force from radiation hydrodynamic simulations. \begin{figure}[t] \epsscale{1.15} \plotone{f2.pdf} \caption{Accretion rate onto the MBH as a function of time shown in units of Bondi accretion rate defined in Equation~(\ref{eq:M_B}). Different line styles mark MR runs with $\mathcal{M}$ = 0.5, 1.0, and 2.0. } \label{fig:accrate} \end{figure} \begin{figure*}[t] \epsscale{1.2} \plotone{f3.pdf} \caption{{Top}: 2D snapshots of the number density of gas at t = 150\,Myr for MR runs $\mathcal{M}$ = 0.5, 1.0, and 2.0 (left to right). All snapshots show steady state gas distributions. {Bottom}: contribution to dynamical friction from the gas surrounding the MBH, evaluated as the gravitational acceleration per unit volume along the direction of the MBH motion. } \label{fig:snapshot} \end{figure*} \begin{figure*}[t] \epsscale{1.15} \plottwo{f4a.pdf}{f4b.pdf} \caption{{Left}: magnitude of the MBH acceleration along the direction of motion (either positive or negative) contributed by the gas within polar radius $r$ for the LR $\mathcal{M}=1.0$ run. The MBH is located at the origin $r=0$. Dashed lines mark the locations of the upstream bow shock and the tail of the downstream \rm H\,{\textsc{ii}}~region, respectively. {Right}: magnitude of the positive component of acceleration (resulting in MBH speed-up) contributed by the gas within $r$. 
Different lines mark LR runs with $\mathcal{M}$ = 0.5, 1.0, 1.5, and 2.0.} \label{fig:delta_acc} \end{figure*} \section{Radiation-hydrodynamic simulations} \label{sec:method} \subsection{Numerical Setup} \label{sec:setup} We run a set of 2D radiation hydrodynamic simulations using a parallel version of the non-relativistic code ZEUS-MP \citep{StoneN:92,Hayes:06}. Simulations are carried out in a polar coordinate system defined by coordinates ($r$, $\theta$) and assuming axisymmetry with respect to the MBH's direction of motion. The extent of the computational domain is given by $r \in (0.1\,{\rm pc}, 3.0\,{\rm kpc})$ and $\theta \in (0,\pi)$ and other relevant quantities are shown in Table~\ref{table:para}. In the calculation of gas dynamics we consider only the gravitational potential of the MBH and neglect the self-gravity of the gas. An MBH with mass $M_{\rm BH}=10^6\,M_\sun$, located at the origin of the coordinate system ($r=0$), is placed in a ``wind tunnel." In this setup, the gas of uniform density $n_\infty=1.0\,{\rm cm}^{-3}$ and temperature $T_\infty=10^4$\,K is assumed to flow into the computational domain in direction $\theta=\pi$. We evaluate the accretion rate onto the MBH by calculating the mass flux through the inner boundary of the computational domain, defined by the sphere with radius $r_{\rm min} = 0.1$\,pc, and convert it to MBH luminosity as $L_{\rm bol}=0.1 \dot{M} c^2$. The SED of ionizing radiation is described as $L_{\nu} \propto \nu^{-1.5}$ using 50 energy bins ranging from 13.6\,eV to $100$\,keV. The composition of the H--He gas is evolved by following the species of $\rm H\,{\textsc i}$, $\rm H\,{\textsc{ii}}$, ${\rm He}\,{\textsc {i}}$, ${\rm He}\,{\textsc {ii}}$, ${\rm He}\,{\textsc {iii}}$, and $e^-$. \khp{The radiation transport is coupled with hydrodynamics and includes photo-heating, photo-ionization, Compton heating by UV and X-ray photons from the BH, and gas cooling \citep{ParkR:11,ParkR:12,Park:2014a,Park:2014b}. 
We assume that BH radiation is the only heating source and adopt a simple analytic form of atomic cooling function $\Lambda(T)$ due to neutral and ionized H and He, which includes cooling by recombination, collisional ionization/excitation, free-free transitions, and di-electric recombination of ${\rm He}\,{\textsc {ii}}$ \citep{ShapiroKang:87}. Molecular cooling is neglected and the cooling rate is set to zero below $T=10^4$\,K.} The calculation of radiation transport accounts for the radiation pressure and thus allows us to accurately capture the effects of both energy and momentum feedback \citep[see][for implementation details]{ParkR:12}. Hydrodynamic and radiative transfer equations are solved at every time step defined as $\Delta t=$min$(\Delta_{\rm hydro}, \Delta_{\rm chem})$, where $\Delta_{\rm hydro}$ is the hydrodynamical time step and $\Delta_{\rm chem}$ is the time step required to calculate the change of chemical abundances \citep{RicottiGS:01,WhalenN:06}. We investigate the effect of radiative feedback on the dynamical friction wake by varying the MBH Mach number from 0.5 (subsonic) to 2.0 (supersonic), while keeping $M_{\rm BH}$, $n_\infty$, and $T_\infty$ fixed. Each simulation is run for several sound crossing timescales, defined as $t_{\rm cross} = R_{\rm HII}/c_{s,\infty}$, which corresponds to 46\,Myr for the size of the \rm H\,{\textsc{ii}}~region defined by equation~(\ref{eq:R_HII}). This ensures that the accretion rate onto the MBH and density distribution of the gas in the dynamical friction wake reach steady state. The length of each simulation is recorded as $t_{\rm end}$ in Table~\ref{table:para}. We test numerical convergence with the set of high (HR), medium (MR), and low resolution (LR) runs. The HR runs are carried out with resolution of $(N_r \times N_\theta) = (400\times128)$, MR with $(300 \times 96)$, and LR with $(200 \times 64)$. 
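The computational grids at these resolutions can be sketched as follows; this is an illustrative reconstruction (the function name and defaults are ours, not the ZEUS-MP grid generator), using logarithmically spaced radial bins and uniform polar bins over the domain extent quoted above, with the MR resolution as default:

```python
import numpy as np

def make_grid(n_r=300, n_theta=96, r_min_pc=0.1, r_max_pc=3000.0):
    """Build polar cell edges for the wind-tunnel setup: logarithmically
    spaced radial bins (constant Delta r / r everywhere) and evenly
    spaced polar bins with Delta theta = pi / n_theta."""
    r = np.geomspace(r_min_pc, r_max_pc, n_r + 1)   # radial cell edges [pc]
    theta = np.linspace(0.0, np.pi, n_theta + 1)    # polar cell edges [rad]
    return r, theta
```

With these edges, `np.diff(r) / r[:-1]` is constant, which keeps the relative radial resolution fixed from 0.1\,pc out to 3\,kpc.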
The radial bins are logarithmically spaced so that $\Delta r/r$ is constant everywhere on the grid, while the bins in polar angle are evenly spaced and have size $\Delta\theta = \pi/N_{\theta}$. The outer boundary of the computational domain is defined as inflow where $0 \le \theta \le \pi/2$ and outflow for $\pi/2 < \theta \le \pi$. We apply the outflow boundary conditions at the inner domain boundary. \subsection{Evolution of the \rm H\,{\textsc{ii}}~region and overdensity wake} \label{sec:evolution} The snapshots in Figure~\ref{fig:snapshot_m05} illustrate the evolution of the gas density in the HR run with $\mathcal{M} = 0.5$. A low-density \rm H\,{\textsc{ii}}~region with $T_{\rm in} \approx 6\times10^4$\,K forms promptly around the MBH, while the distribution of gas outside of it reaches steady state 150\,Myr after the beginning of the simulation. This timescale is consistent with $\sim7$ sound crossing times for the \rm H\,{\textsc{ii}}~region of size 200\,pc. Once steady state is achieved, the shape of the $\rm H\,{\textsc{ii}}$~region shows minor deviation from spherical symmetry. Figure~\ref{fig:accrate} shows the accretion rate as a function of time for the MR runs with $\mathcal{M}$ = 0.5, 1.0, and 2.0. In all simulations the MBH initially exhibits a relatively high accretion rate, which decreases as the \rm H\,{\textsc{ii}}~region expands into the background medium as a consequence of the MBH radiative feedback. The expansion of the \rm H\,{\textsc{ii}}~sphere is brought to a halt $\sim15-30$\,Myr after the beginning of the simulation, at which point the accretion rate reaches a turning (minimum) point and readjusts to a steady state value after $\sim 20-35$\,Myr. This happens first in the run with $\mathcal{M}$ = 2.0, followed by $\mathcal{M}$ = 1.0 and 0.5. This hierarchy of timescales is determined by the inflow rate of the gas into the \rm H\,{\textsc{ii}}~region as seen by the MBH.
Figure~\ref{fig:accrate} also illustrates the dependence $\dot{M} \propto \mathcal{M}^2$ captured by Equation~(\ref{eq:Mdot}) and applicable in the range $1 < \mathcal{M} < 4$. This dependence of accretion rate on the Mach number was first reported by PR13, who noted that accretion onto the MBH is mediated by a dense shell, which forms upstream from the MBH, at the interface of the \rm H\,{\textsc{ii}}~region and ambient gas. This dense shell and the associated bow shock are discernible in Figure~\ref{fig:snapshot}, in the MR runs with $\mathcal{M}$ = 1.0 and 2.0, and are absent for the subsonic run with $\mathcal{M}$ = 0.5. The same figure shows that the \rm H\,{\textsc{ii}}~region becomes more elongated in the direction of the MBH motion with the increasing Mach number, engulfing a larger volume in the region where the dynamical friction wake is supposed to form. Despite this effect, a noticeable overdensity of gas persists at the tail of the \rm H\,{\textsc{ii}}~region in all simulations, indicating that some fraction of the dynamical friction wake remains present. The bottom panels of Figure~\ref{fig:snapshot} illustrate the strength of gravitational interaction between the MBH and fluid elements in the computational domain, where we evaluate the magnitude of the acceleration per unit volume along the direction of the BH motion as $GM_{\rm BH}\, n | \cos{\theta} | /r^2$. The net effect of a given fluid element on the MBH depends on its location. The gas in the upstream ($0 \leq \theta \leq \pi/2$) acts to accelerate the MBH, while the gas in the downstream ($\pi/2 \leq \theta \leq \pi$) decelerates it. The shade of the color indicates that the MBH most strongly interacts with the gas in its immediate vicinity, within a radius of a few parsecs, but this distribution appears front--back symmetric and is thus not expected to significantly contribute to the acceleration of the MBH. 
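The volume integration behind the bottom panels of Figure~\ref{fig:snapshot} can be sketched as follows. By Newton's third law, the net MBH acceleration is $\int G\rho\cos\theta\,r^{-2}\,dV$, independent of $M_{\rm BH}$; the cell-centered log-$r$/uniform-$\theta$ grid and the pure-hydrogen mass density used below are simplifying assumptions, not the exact discretization of the simulations.

```python
import numpy as np

G_CGS = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
M_P = 1.673e-24      # proton mass [g]
PC = 3.086e18        # parsec [cm]

def net_mbh_acceleration(n, r_pc, theta):
    """Net acceleration of the MBH along its direction of motion [cm s^-2]
    from an axisymmetric number-density field n[i, j] = n(r_i, theta_j)
    in cm^-3, given at cell centers of a log-spaced r / uniform theta grid.
    Positive values mean speed-up (dominant pull from upstream gas)."""
    rr, tt = np.meshgrid(r_pc * PC, theta, indexing="ij")
    dlnr = np.log(r_pc[1] / r_pc[0])            # constant on a log grid
    dth = theta[1] - theta[0]
    # axisymmetric volume element: dV = 2 pi r^2 sin(theta) (r dlnr) dtheta
    dV = 2.0 * np.pi * rr**3 * np.sin(tt) * dlnr * dth
    # cos(theta) > 0 upstream: that gas pulls the MBH forward
    return np.sum(G_CGS * (n * M_P) * dV * np.cos(tt) / rr**2)
```

For a uniform medium the upstream and downstream pulls cancel, while an upstream overdensity (such as the bow-shock shell) yields a net positive value.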
Similarly, the fluid elements perpendicular to the MBH's line of motion at $\theta \approx \pi/2$ make a negligible contribution. The overdensity of gas that forms at the front and tail of the \rm H\,{\textsc{ii}}~region is thus expected to contribute the most to the net acceleration of the MBH. In the run with $\mathcal{M}$ = 0.5 this distribution is approximately front-back symmetric, indicating a lower magnitude of acceleration. The $\mathcal{M}$ = 1.0 and 2.0 runs, on the other hand, show enhanced contribution to MBH acceleration from the front of the \rm H\,{\textsc{ii}}~region, while contribution from the tail appears less significant. In order to determine whether the MBH accelerates or decelerates, once the steady state distribution of gas is achieved, we integrate contributions to the dynamical friction force from all fluid elements in the computational domain. \subsection{Efficiency of dynamical friction} \label{sec:efficiency} The left panel of Figure~\ref{fig:delta_acc} shows the steady state magnitude of the MBH acceleration (either positive or negative) due to the gas enclosed within the sphere of radius $r$ for the LR run with $\mathcal{M}=1.0$. Positive acceleration is defined to be in the direction of the MBH motion, thus resulting in the speed-up of the MBH. Note that the assumption of azimuthal symmetry guarantees that the components of acceleration perpendicular to the MBH's line of motion cancel out, leaving the parallel components as the only contribution. The magnitude of acceleration is relatively low in the immediate vicinity of the MBH ($r < 3$\,pc), consistent with the earlier observation of the front--back symmetry in this region. At $r \ga 3$\,pc the magnitude of acceleration gradually increases as the contrast between the upstream and downstream distribution of gas increases with radius. At $r \approx 100$\,pc, acceleration shows a sudden jump coinciding with the upstream overdensity and associated bow shock. 
The acceleration magnitude levels off beyond 400\,pc, which marks the spatial extent of the tail of the \rm H\,{\textsc{ii}}~sphere. Evidently, the dynamical friction wake does not contribute to the MBH acceleration beyond this point. The right panel of Figure~\ref{fig:delta_acc} shows the magnitude of the positive component of MBH acceleration contributed by the gas enclosed within radius $r$. The figure illustrates that in all simulations the largest contribution to the MBH acceleration originates from the region $r \gtrsim 10$\,pc. This acceleration is net positive, thus speeding up the MBH. In all cases, the MBH deceleration due to the wake beyond the tail of the \rm H\,{\textsc{ii}}~region is negligible. The acceleration originating from $r < 10$\,pc can be either positive or negative but is orders of magnitude smaller than the contribution from larger spatial scales and can be neglected. \begin{figure}[t] \epsscale{1.15} \plotone{f5.pdf} \caption{Evolution of the net acceleration, integrated over the entire computational domain, for the HR (solid lines), MR (dashed), and LR (dotted) runs with $\mathcal{M}$ = 0.5, 1.0, and 2.0. Positive acceleration implies speed-up of the MBH.} \label{fig:cross} \end{figure} Figure~\ref{fig:cross} shows the evolution of the net acceleration integrated over the entire computational domain as a function of time. In all cases acceleration initially peaks at $15-30$\,Myr. This is the same point in time where the accretion rate reaches its minimum (Figure~\ref{fig:accrate}), indicating a buildup of the dense shell of gas in the upstream of the MBH, which exerts gravitational influence and initially suppresses accretion onto the MBH. The gravitational influence of the upstream gas shell is counterbalanced by the gas in the downstream of the MBH, where the overdensity wake develops with a delay corresponding to a few sound crossing times across the elongated \rm H\,{\textsc{ii}}~region.
Consequently, the MBH acceleration reaches steady state with a delay of $\sim 150$\,Myr, long after the accretion rate has settled into steady state. An inspection of Figure~\ref{fig:cross} shows that the net acceleration for the scenario $\mathcal{M} = 1.0$ continues to gently increase even after 200\,Myr. For $\mathcal{M} = 1.0$ and 2.0 the final net acceleration is positive, indicating a speed-up of the MBH. As expected, the magnitude of acceleration is smallest for the $\mathcal{M}$ = 0.5 scenario, where the gas flow around the MBH exhibits a large degree of front--back symmetry. Figure~\ref{fig:cross} illustrates that the sign of net acceleration in this case is resolution dependent: the final outcome of the LR run is MBH speed-up, while in the MR and HR runs it is deceleration. The net accelerations in the MR and HR runs track each other closely and differ by only $\sim 0.05\,{\rm km\,yr^{-2}}$, indicating numerical convergence of the MR run. We therefore use this value as a measure of error in net acceleration due to the finite numerical resolution of our simulations. \begin{figure}[t] \epsscale{1.1} \plotone{f6.pdf} \caption{Dynamical friction force as a function of the Mach number. The red dashed line illustrates the magnitude of the dynamical friction force in the absence of radiative feedback as calculated by \citet{ostriker99}. The blue dashed line is the same as the red one but plotted with a positive sign and arbitrarily scaled in magnitude to match data points. Different symbols mark results of the runs with radiative feedback at $t=201.0$\,Myr. Note the different magnitudes of the positive and negative $y$-axis.} \label{fig:df_mach} \end{figure} Figure~\ref{fig:df_mach} shows the values of the dynamical friction force measured at the end of each simulation for different values of $\mathcal{M}$.
As a comparison, the red dashed line shows the dynamical friction force in the absence of radiative feedback, as predicted by \citet{ostriker99} for $\ln(c_{s, \infty}\, t/r_{\rm min})=4$. As noted before, the dynamical friction force calculated from simulations presented in this study is net positive for $1.0 \le \mathcal{M} < 4$ and negative for $\mathcal{M} = 0.5$. The magnitude of this force is, however, about nine orders of magnitude lower than that experienced by the MBHs in the absence of radiative feedback and is therefore negligible for all values of the Mach number $\mathcal{M} < 4$. Interestingly, the dynamical friction force in the presence of radiative feedback appears to mirror its classical counterpart: it peaks around $\mathcal{M} \approx 1$ and decreases at larger values of the Mach number. \section{Discussion and conclusions} \label{sec:conclusions} We investigate the efficiency of dynamical friction in the presence of radiative feedback from an MBH moving through a uniform density gas. Ionizing radiation that emerges from the innermost parts of the MBH's accretion flow results in the formation of the \rm H\,{\textsc{ii}}~region, which strongly affects the dynamical friction wake and renders dynamical friction inefficient for a range of physical scenarios. We summarize our main findings below: \begin{itemize} \item We identify a physical regime in which the dynamical friction wake is likely to be fully ionized and dispersed by the MBH radiative feedback as a regime in which the radius of the \rm H\,{\textsc{ii}}~sphere is larger than the extent of the dynamical friction wake. This condition is fulfilled when $\mathcal{M} < 4$ and $(1+\mathcal{M}^2)\, M_{\rm BH}\,n_\infty < 10^9 M_\sun\,{\rm cm^{-3}}$. Outside of this regime the formation of the \rm H\,{\textsc{ii}}~region is suppressed and the properties of the gas flow around the MBH are restored to the classical Bondi--Hoyle--Lyttleton solution, thus restoring the effect of dynamical friction.
These criteria can be utilized as a sub-resolution model for dynamical friction in large scale simulations that do not resolve the scale of the \rm H\,{\textsc{ii}}~region or MBH density wake. \item Based on radiation hydrodynamic simulations we find that the net acceleration experienced by the MBHs in this regime tends to be positive, meaning that they speed up, contrary to the expectations for gaseous dynamical friction in the absence of radiative feedback. This reversal happens because the dominant contribution to the MBH acceleration comes from the dense shell of gas that forms in front of the MBH as a consequence of the snowplow effect caused by radiation pressure. \item The magnitude of MBH acceleration peaks at $\mathcal{M} \approx 1$ and decreases for larger values of the Mach number, similar to the dynamical friction force in the absence of radiative feedback. The magnitude of acceleration is, however, negligibly small, implying that MBHs in this regime will not significantly change their velocity over timescales that determine their motion and the properties of their environment. \item Our results suggest that suppression of dynamical friction by radiative feedback should be more severe at the lower mass end of the MBH spectrum, because these BHs must reside in regions of relatively high gas density in order to experience efficient dynamical friction in the regime when $(1+\mathcal{M}^2)\, M_{\rm BH}\,n_\infty > 10^9 M_\sun\,{\rm cm^{-3}}$. Compounded with the general inefficiency of gas drag for lower mass objects, this implies that $<10^7\,M_\odot$ MBHs have fewer means to reach the centers of merged galaxies. \end{itemize} This study makes several idealized assumptions, such as the uniform gas density and infinite background medium. In reality, however, MBHs in the aftermath of galactic mergers are likely to find themselves immersed in an inhomogeneous and clumpy medium, which can perturb their otherwise smooth orbital decay \citep{fiacconi13}.
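The dispersal criterion from the first item above can be wrapped into a simple check, e.g. as a sub-resolution switch for dynamical friction in large scale simulations; this is a sketch (the function name is ours) with the thresholds taken directly from the text.

```python
def wake_dispersed(mach, m_bh_msun, n_inf_cm3):
    """True if the dynamical friction wake is expected to be ionized and
    dispersed by MBH radiative feedback: M < 4 and
    (1 + M^2) * M_BH * n_inf < 1e9 Msun cm^-3 (thresholds from the text)."""
    return mach < 4.0 and (1.0 + mach**2) * m_bh_msun * n_inf_cm3 < 1.0e9
```

For the fiducial setup ($M_{\rm BH}=10^6\,M_\sun$, $n_\infty=1\,{\rm cm^{-3}}$, $\mathcal{M}=1$) the criterion is satisfied, so the classical dynamical friction force should not be applied there.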
\khp{Furthermore, the geometry and properties of the \rm H\,{\textsc{ii}}~region will be affected in scenarios where the \rm H\,{\textsc{ii}}~region is not confined by the gaseous medium. An example of this would be an \rm H\,{\textsc{ii}}~region around the MBH orbiting within the galactic gas disk. When the radius of the \rm H\,{\textsc{ii}}~region exceeds the half thickness of the disk, the radiation can escape outside of the disk plane, making the problem fully three-dimensional. Even in such scenarios, the proximity of the dynamical friction wake to the MBH is likely to result in the wake obliteration by high energy radiation. Indeed, 3D hydrodynamic simulations that capture the dynamics of MBHs in proper galactic setups also find a weakening of the dynamical friction force in the presence of MBH feedback \citep{Sijacki:11, SouzaLima:2016}. The high resolution, local simulations presented here support these findings and provide a set of criteria, formulated in terms of the properties of the gas and MBH, under which the wake evacuation is efficient.} Along similar lines, we consider an isolated MBH on a linear trajectory, whereas the front--back asymmetry of the density wake for a perturber on a circular orbit has been shown to cause small differences in the dynamical friction force in absence of radiative feedback \citep{kim07}. A system consisting of multiple MBHs would further add to the complexity in cases when dynamical friction wakes can mutually affect one another \citep[e.g.,][]{KimKimS:2008}. This type of study may require a transition from 2D to 3D simulations in the future in order to relax the assumption of azimuthal symmetry and properly capture the interplay of inhomogeneities, radiative feedback, and dynamical friction. \acknowledgments This work is supported in part by the National Science Foundation under the Theoretical and Computational Astrophysics Network (TCAN) grant AST-1333360. T.B. 
acknowledges support from the Research Corporation for Science Advancement through a Cottrell Scholar Award. T.B. is a member of the MAGNA project (http://www.issibern.ch/teams/agnactivity) supported by the International Space Science Institute (ISSI) in Bern, Switzerland. Numerical simulations presented in this paper were performed using the high-performance computing cluster PACE, administered by the Office of Information and Technology at the Georgia Institute of Technology. \bibliographystyle{aasjournal}
\section{Introduction} In the theory of mappings called quasiconformal in the mean, conditions of the type \begin{equation}\label{eq2} \int\limits_{D} \Phi (Q(z))\ dx\,dy\ <\ \infty\end{equation} are standard for various characteristics $Q$ of these mappings, see e.g. \cite{Ah}, \cite{Bi}, \cite{Gol}, \cite{GMSV}, \cite{Kr$_1$}--\cite{Ku}, \cite{Per}, \cite{Pes}, \cite{Rya} and \cite{UV}. The study of classes with the integral conditions (\ref{eq2}) is also topical in connection with the recent development of the theory of degenerate Beltrami equations and the so--called mappings with finite distortion, see e.g. related references in the monographs \cite{IM$_1$} and \cite{MRSY}.\medskip In the present paper we study the problems of equicontinuity and normality for wide classes of homeomorphisms with finite distortion under the conditions that $K_{f}(z)$ has finite mean oscillation, has singularities of logarithmic type, or satisfies integral constraints of the type (\ref{eq2}) in a domain $D\subset{\C}.$ \medskip The concept of the generalized derivative was introduced by Sobolev in \cite{So}. Given a domain $D$ in the complex plane $\C$, the {\bf Sobolev class} $W^{1,1}(D)$ consists of all functions $f:D\to\C$ in $L^1(D)$ with first partial generalized derivatives which are integrable in $D$. A function $f:D\to\C$ belongs to $W^{1,1}_{\mathrm{loc}}(D)$ if $f\in W^{1,1}(D_*)$ for every open set $D_*$ with compact closure $\overline{D_*} \subset D$. Recall that a homeomorphism $f$ between domains $D$ and $D'$ in $\C$ is called of {\bf finite distortion} if $f\in W^{1,1}_{\mathrm{loc}}$ and \begin{equation}\label{eq1.0KR}||f'(z)||^2\leqslant K(z)\cdot J_{f}(z)\end{equation} with an a.e. finite function $K$, where $||f'(z)||$ denotes the matrix norm of the Jacobian matrix $f'$ of $f$ at $z\in D$ and $J_{f}(z)=\det f'(z)$, see \cite{IM$_1$}. Later on, we use the notation $K_{f}(z)$ for the minimal function $K(z)\geqslant1$ in (\ref{eq1.0KR}).
Note that $||f'(z)||=|f_z|+|f_{\bar{z}}|$ and $J_f(z)=|f_z|^2-|f_{\bar{z}}|^2$ at the points of total differentiability of $f$. Thus, $K_{f}(z)=\frac{||f'(z)||^2}{|J_{f}(z)|}=\frac{|f_z|+|f_{\bar{z}}|}{|f_z|-|f_{\bar{z}}|}$ if $J_{f}(z)\neq0$, $K_{f}(z)=1$ if $f'(z)=0$, i.e. $|f_z|=|f_{\bar{z}}|=0$, and $K_{f}(z)=\infty$ at the remaining points. Recall that the {\bf (conformal) modulus} of a family $\Gamma$ of curves $\gamma$ in ${\C}$ is the quantity \begin{equation}\label{eq4132} M(\Gamma)=\inf_{\rho \in \,{\rm adm}\,\Gamma} \int\limits_{{\C}} \rho ^2 (z)\ \ dx\,dy \end{equation} where a Borel function $\rho:{\C}\,\rightarrow [0,\infty]$ is {\bf admissible} for $\Gamma$, write $\rho \in {\rm adm} \,\Gamma $, if \begin{equation}\label{eq4133} \int\limits_{\gamma}\rho \,\,ds\ge 1\ \ \ \ \ \ \ \forall\ \gamma \in \Gamma\ , \end{equation} where $s$ is the natural (length) parameter on $\gamma$. One of the equivalent geometric definitions of {\bf $K-$quasiconformal mappings} $f$ with $K\in [1,\infty)$ given in a domain $D$ in ${\C}$ reduces to the inequality \begin{equation}\label{eq4*} M(f\Gamma)\le K\,M(\Gamma) \end{equation} that holds for an arbitrary family $\Gamma$ of curves $\gamma$ in the domain $D$. \medskip Similarly, given a domain $D$ in ${\C}$ and a (Lebesgue) measurable function $Q:D\to[1,\infty]$, a homeomorphism $f:D\to{\overline{{\C}}}$, ${\overline{{\C}}}={\C}\cup\{\infty\}$, is called a {\bf $Q(z)$ -- homeomorphism} if \begin{equation} \label{eq19*} M(f\Gamma )\le \int\limits_D Q(z)\cdot \rho^2 (z)\ \ dx\,dy \end{equation} for every family $\Gamma$ of curves $\gamma$ in $D$ and every $\rho \in {\rm adm} \,\Gamma $, see e.g. \cite{MRSY}.\medskip In the case $Q(z)\le K$ a.e., we again come to the inequality (\ref{eq4*}). In the general case, the latter inequality means that the conformal modulus of the family $f\Gamma$ is estimated by the modulus $M_Q$ of $\Gamma$ with the weight $Q$, $M(f\Gamma)\le M_{Q}(\Gamma)$, see e.g. \cite{AC$_1$}.
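As a simple illustration of the distortion coefficient introduced above, consider the affine stretch $f(z)=z+k\bar{z}$ with a constant $0\le k<1$ (a standard example, not treated in the sequel). Here $f_z=1$ and $f_{\bar{z}}=k$, so that $J_f(z)=1-k^2>0$ and $$K_f(z)\ =\ \frac{\left(|f_z|+|f_{\bar{z}}|\right)^2}{|f_z|^2-|f_{\bar{z}}|^2}\ =\ \frac{(1+k)^2}{1-k^2}\ =\ \frac{1+k}{1-k}\,,$$ i.e. $f$ is $K$-quasiconformal with $K=(1+k)/(1-k)$, and $K_f\to\infty$ as $k\to1$.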
The inequality of the type (\ref{eq19*}) was first stated by O. Lehto and K. Virtanen for quasiconformal mappings in the plane, see Section V.6.3 in \cite{LV}. Throughout this paper, $B(z_{0},\,r)=\{ z\in\C: |z_{0}-z|<r\}$, $S(z_{0},\,r)=\{ z\in\C: |z_{0}-z|=r\}$, $S(r)=S(0,\,r),$ $\mathbb{D}=B(0,\,1)$, $R(r_1,r_2,z_0)$ $=\{ z\,\in\,{\C} : r_1<|z-z_0|<r_2\} $ and $S^2(x,\,r)=\{ y\in \mathbb{R}^{3}: |x-y|=r\}$. Let $E,$ $F\subset\overline{{\C}}$ be arbitrary sets. Denote by $\Gamma(E,F,D)$ the family of all curves $\gamma:[a,b]\rightarrow \overline{{\C}}$ joining $E$ and $F$ in $D,$ i.e. $\gamma(a)\in E,\gamma(b) \in F$ and $\gamma(t)\in D$ as $t \in (a, b).$ The following notion generalizes and localizes the above notion of a $Q$--ho\-me\-o\-mor\-phism. It is motivated by Gehring's ring definition of qua\-si\-con\-for\-mal mappings, see e.g. \cite{Ge$_3$}; it was introduced first in the plane, see \cite{RSY$_3$}, and extended later on to the space case in \cite{RS}, see also Chapters 7 and 11 in \cite{MRSY}. Given a domain $D$ in ${\C }$, a (Lebesgue) measurable function $Q:D\rightarrow\,[0,\infty]$, and a point $z_0\in D,$ a homeomorphism $f:D\rightarrow \overline{{\C}}$ is said to be a {\bf ring $Q$--homeomorphism at the point $z_0$} if \begin{equation}\label{eq1} M\left(f\left(\Gamma\left(S_1,\,S_2,\,R(r_1,r_2,z_0)\right)\right)\right)\ \le \int\limits_{R(r_1,r_2,z_0)} Q(z)\cdot \eta^2(|z-z_0|)\ dx\,dy \end{equation} for every ring $R(r_1,r_2,z_0)$ and the circles $S_i=S(z_0, r_i)$, where $0<r_1<r_2< r_0\,\colon =\,{\rm dist}\, (z_0,\partial D),$ and every measurable function $\eta : (r_1,r_2)\rightarrow [0,\infty ]$ such that $$\int\limits_{r_1}^{r_2}\eta(r)\ dr\ \ge\ 1\,.$$ $f$ is called a {\bf ring $Q$--homeomorphism in the domain} $D$ if $f$ is a ring $Q$--ho\-meo\-mor\-phism at every point $z_0\in D$.
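For example, the choice $\eta(r)=\left(r\log\frac{r_2}{r_1}\right)^{-1}$, which clearly satisfies $\int_{r_1}^{r_2}\eta(r)\,dr=1$, yields from (\ref{eq1}) in the case $Q(z)\le K$ a.e. the classical ring estimate $$M\left(f\left(\Gamma\left(S_1,\,S_2,\,R(r_1,r_2,z_0)\right)\right)\right)\ \le\ K\int\limits_{R(r_1,r_2,z_0)} \frac{dx\,dy}{|z-z_0|^2\log^2\frac{r_2}{r_1}}\ =\ \frac{2\pi K}{\log\frac{r_2}{r_1}}\,,$$ i.e. exactly the bound $K\,M(\Gamma)$ for the modulus of the ring $R(r_1,r_2,z_0)$.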
Note that, in particular, homeomorphisms $f:D\rightarrow \overline{{\C}}$ in the class $W_{loc}^{1,2}$ with $K_f(z)\in L_{loc}^1(D)$ are ring $Q$--ho\-me\-o\-mor\-phisms with $Q(z)=K_f(z),$ see e.g. Theorem 4.1 in \cite{MRSY}. A regular homeomorphism of the Sobolev class $W^{1,1}_{loc}$ in the plane is a ring $Q$--homeomorphism with $Q(z)$ equal to the so-called tangential dilatation, see Theorem 3.1 in \cite{Sa}, cf. Lemma 20.9.1 in \cite{AIM}. \medskip The notion of ring $Q$--ho\-me\-o\-mor\-phism can be extended in the natural way to $\infty$. More precisely, for $\infty\in D \subseteq \overline{{\C}}$ a homeomorphism $f:D\rightarrow \overline{{\C}}$ is called a {\bf ring $Q$--ho\-me\-o\-mor\-phism at ${\bf \infty}$} if the mapping $\widetilde{f}=f\left(\frac{z}{\,|z|^2}\right)$ is a ring $Q^{\,\prime}$--ho\-me\-o\-mor\-phism at the origin with $Q^{\,\prime}(z)=Q\left(\frac{z}{\,|z|^2}\right).$ In other words, a mapping $f:{\C}\rightarrow \overline{{\C}}$ is a ring $Q$--ho\-me\-o\-mor\-phism at $\infty$ iff $$M\left(f\left(\Gamma\left(S(R_1), S(R_2), R(R_1, R_2, 0)\right)\right)\right)\le \int\limits_{R(R_1, R_2, 0)} Q(w)\cdot \eta^2\left(|w|\right)du\,dv$$ holds for every ring $R(R_1, R_2, 0)$ in $D$ with $0<R_1<R_2<\infty,$ the circles $S(R_i)$, and for every measurable function $\eta : (R_1,R_2)\rightarrow [0,\infty ]$ with $\int\limits_{R_1}^{R_2}\eta(r)\ dr\ \ge\ 1\,.$ A continuous mapping $\gamma$ of an open subset $\Delta$ of the real axis $\mathbb{R}$ or of a circle into $D$ is called a {\bf dashed line}, see e.g. 6.3 in \cite{MRSY}. The notion of the modulus of a family $\Gamma$ of dashed lines $\gamma$ can be given by analogy, see (\ref{eq4132}). We say that a property $P$ holds for {\bf a.e.} (almost every) $\gamma\in\Gamma$ if the subfamily of all lines in $\Gamma$ for which $P$ fails has modulus zero, cf. \cite{Fu}.
Later on, we also say that a Lebesgue measurable function $\varrho:\C\to[0,\infty]$ is {\bf extensively admissible} for $\Gamma$, write $\varrho\in\mathrm{ext\,adm}\,\Gamma$, if (\ref{eq4133}) holds for a.e. $\gamma\in\Gamma$, see e.g. 9.2 in \cite{MRSY}. Given domains $D$ and $D'$ in $\lC=\C\cup\{\infty\}$, $z_0\in\overline{D}\setminus\{\infty\}$, and a measurable function $Q:D\to(0,\infty)$, we say that a homeomorphism $f:D\to D'$ is a {\bf lower $Q$-homeomorphism at the point} $z_0$ if \begin{equation}\label{eq1.4KR}M(f\Sigma_{\varepsilon})\ \geqslant\ \inf\limits_{\varrho\in\mathrm{ext\,adm}\,\Sigma_{\varepsilon}} \int\limits_{D\cap {R (\varepsilon,\,\varepsilon_0,\,z_0)}} \frac{\varrho^2(z)}{Q(z)}\,dx\,dy\end{equation} for every ring $R(\varepsilon,\,\varepsilon_0,\,z_0),\, \varepsilon\in(0,\varepsilon_0),\, \varepsilon_0\in(0,d_0)\,,$ where $d_0=\sup\limits_{z\in D}\,|z-z_0|\,,$ and $\Sigma_{\varepsilon}$ denotes the family of all intersections of the circles $S(z_0,r), r\in(\varepsilon,\varepsilon_0)\,,$ with $D$. The notion can be extended to the case $z_0=\infty\in\overline{D}$ in the standard way by applying the inversion $T$ with respect to the unit circle in $\lC$, $T(z)=z/|z|^2$, $T(\infty)=0$, $T(0)=\infty$. Namely, a homeomorphism $f:D\to D'$ is a {\bf lower $Q$-homeomorphism at} $\infty\in\overline{D}$ if $F=f\circ T$ is a lower $Q_*$-homeomorphism with $Q_*=Q\circ T$ at $0$. We also say that a homeomorphism $f:D\to{\lC}$ is a {\bf lower $Q$-homeomorphism on} $\partial D$ if $f$ is a lower $Q$-homeomorphism at every point $z_{0}\in\partial D$. Below we show that every homeomorphism of finite distortion in the plane is a lower $Q$-homeomorphism with $Q(z)=K_{f}(z)$ and, thus, the whole theory of the boundary behavior in \cite{KR$_2$}, see also Chapter 9 in \cite{MRSY}, can be applied. The following term was introduced in \cite{IR}.
Let $D$ be a domain in the complex plane $\mathbb{C}.$ Recall that a function ${\varphi }:D\rightarrow \mathbb{R}$ has {\bf finite mean oscillation at a point} $z_{0}\in {D}$ if \begin{equation} \overline{\lim\limits_{\varepsilon \rightarrow 0}}\ \ \frac{1}{\pi \varepsilon ^{2}}\int\limits_{D(z_{0},\,\varepsilon )}|{\varphi }(z)-\overline{{\varphi }}_{\varepsilon }(z_{0})|\ dx\,dy\ <\ \infty \,, \label{eq12.2.4} \end{equation} where \begin{equation} \overline{{\varphi }}_{\varepsilon }(z_{0})\ =\ \frac{1}{\pi \varepsilon ^{2}}\int\limits_{D(z_{0},\,\varepsilon )}{\varphi }(z)\ dx\,dy\ <\ \infty \label{eq12.2.5} \end{equation} is the mean value of the function ${\varphi }(z)$ over the disk $D(z_{0},\varepsilon )=B(z_{0},\varepsilon )$. We also say that a function ${\varphi }:D\rightarrow \mathbb{R}$ is of {\bf finite mean oscillation in} $D$, abbr. ${\varphi }\in $ FMO$(D)$ or simply ${\varphi }\in $ FMO, if ${\varphi }$ has finite mean oscillation at every point $z_{0}\in {D}$.
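Note that every locally bounded measurable function ${\varphi }:D\to\mathbb{R}$ belongs to FMO, since the integral means in (\ref{eq12.2.4}) do not exceed $2\sup_{D(z_0,\varepsilon)}|{\varphi }|$. Moreover, as is well known, ${\rm BMO}(D)\subset{\rm FMO}(D)$, because the upper limit in (\ref{eq12.2.4}) is dominated by the BMO norm of ${\varphi }$; thus, e.g., ${\varphi }(z)=\log\frac{1}{|z-z_{0}|}\in{\rm FMO}$.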
\section{Preliminaries} \setcounter{equation}{0} Recall that the {\bf spherical (chordal) metric} $h(z^{\prime},z^{\prime\prime})$ in $\overline{{{\C}}}$ is equal to $|\pi(z^{\prime})-\pi(z^{\prime\prime})|$ where $\pi$ is the stereographic projection of $\overline{{{\C}}}$ onto the sphere $S^2(\frac{1}{2}e_{3},\frac{1}{2})$ in ${{\Bbb R}}^{3},$ i.e., in the explicit form, $$h(z^{\prime},\infty)=\frac{1}{\sqrt{1+{|z^{\prime}|}^2}}, \ \ h(z^{\prime},z^{\prime\prime})=\frac{|z^{\prime}-z^{\prime\prime}|}{\sqrt{1+{|z^{\prime}|}^2} \sqrt{1+{|z^{\prime\prime}|}^2}}\,, \ \ z^{\prime}\ne \infty\ne z^{\prime\prime}\,.$$ The {\bf spherical diameter of a set} $E$ in $\overline{{\C}}$ is the quantity $h(E)=\sup\limits_{z^{\prime}, z^{\prime\prime}\in E} h(z^{\prime}, z^{\prime\prime}).$ A family $\frak{F}$ of continuous mappings from $\C$ into $\overline{\C}$ is said to be {\bf normal} if every sequence of mappings $f_m$ in $\frak{F}$ has a subsequence $f_{m_k}$ converging to a continuous mapping $f:\C \to \overline{\C}$ uniformly on each compact set $C\subset \C$. Normality is closely related to the following notion. A family $\frak{F}$ of mappings $f:\C \rightarrow \overline{\C}$ is said to be {\bf equicontinuous at a point} $z_0 \in \C$ if for every $\varepsilon > 0$ there is $\delta > 0$ such that $h \left(f(z),f(z_0)\right)<\varepsilon$ for all $f \in \frak{F}$ and $z \in \C$ with $|z-z_0|<\delta$. The family $\frak{F}$ is called {\bf equicontinuous} if $\frak{F}$ is equicontinuous at every point $z_0 \in \C.$ The following version of the Arzel\`a--Ascoli theorem will be useful later on, see e.g. Section 20.4 in \cite{Va}.
\bigskip \begin{proposition}\label{pr3**!}{\it\, A family $\frak{F}$ of mappings $f:\C\rightarrow \overline{\C}$ is normal if and only if $\frak{F}$ is equicontinuous.} \end{proposition} \bigskip For every non-decreasing function $\Phi:[0,\infty ]\to [0,\infty ] ,$ the {\bf inverse function} $\Phi^{-1}:[0,\infty ]\to [0,\infty ]$ can be well defined by setting \begin{equation}\label{eq5.5CC} \Phi^{-1}(\tau)\ =\ \inf\limits_{\Phi(t)\ge \tau}\ t\ . \end{equation} As usual, here $\inf$ is equal to $\infty$ if the set of $t\in[0,\infty ]$ such that $\Phi(t)\ge \tau$ is empty. Note that the function $\Phi^{-1}$ is non-decreasing, too.\medskip \begin{remark}\label{rmk3.333} Immediately by the definition it is evident that \begin{equation}\label{eq5.5CCC} \Phi^{-1}(\Phi(t))\ \le\ t\ \ \ \ \ \ \ \ \forall\ t\in[ 0,\infty ] \end{equation} with equality in (\ref{eq5.5CCC}) except on intervals of constancy of the function $\Phi(t)$. \end{remark} \medskip Since the mapping $t\mapsto t^p$ for every positive $p$ is a sense--preserving homeomorphism of $[0, \infty]$ onto $[0, \infty]$, we may rewrite Theorem 2.1 from \cite{RSY$_1$} in the following form, which is more convenient for further applications. Here, in (\ref{eq333Y}) and (\ref{eq333F}), we set the integrals equal to $\infty$ if $\Phi_p(t)=\infty ,$ correspondingly, $H_p(t)=\infty ,$ for all $t\ge T\in[0,\infty) .$ The integral in (\ref{eq333F}) is understood as the Lebesgue--Stieltjes integral and the integrals in (\ref{eq333Y}) and (\ref{eq333B})--(\ref{eq333A}) as the ordinary Lebesgue integrals. \medskip \begin{proposition} \label{pr4.1aB}{\it\, Let $\Phi:[0,\infty ]\to [0,\infty ]$ be a non-decreasing function.
Set \begin{equation}\label{eq333E} H_p(t)\ =\ \log \Phi_p(t)\ , \qquad \Phi_p(t)=\Phi\left(t^p\right)\,,\quad p\in (0,\infty)\,.\end{equation} Then the equality \begin{equation}\label{eq333Y} \int\limits_{\delta}^{\infty} H^{\,\prime}_p(t)\ \frac{dt}{t}\ =\ \infty \end{equation} implies the equality \begin{equation}\label{eq333F} \int\limits_{\delta}^{\infty} \frac{dH_p(t)}{t}\ =\ \infty \end{equation} and (\ref{eq333F}) is equivalent to \begin{equation}\label{eq333B} \int\limits_{\delta}^{\infty}H_p(t)\ \frac{dt}{t^2}\ =\ \infty \end{equation} for some $\delta>0,$ and (\ref{eq333B}) is equivalent to each of the equalities: \begin{equation}\label{eq333C} \int\limits_{0}^{\Delta}H_p\left(\frac{1}{t}\right)\ {dt}\ =\ \infty \end{equation} for some $\Delta>0,$ \begin{equation}\label{eq333D} \int\limits_{\delta_*}^{\infty} \frac{d\eta}{H_p^{-1}(\eta)}\ =\ \infty \end{equation} for some $\delta_*>H(+0),$ \begin{equation}\label{eq333A} \int\limits_{\delta_*}^{\infty}\ \frac{d\tau}{\tau \Phi_p^{-1}(\tau )}\ =\ \infty \end{equation} for some $\delta_*>\Phi(+0).$ \medskip Moreover, (\ref{eq333Y}) is equivalent to (\ref{eq333F}), and hence (\ref{eq333Y})--(\ref{eq333A}) are equivalent to each other, if $\Phi$ is in addition absolutely continuous. In particular, all the conditions (\ref{eq333Y})--(\ref{eq333A}) are equivalent if $\Phi$ is convex and non--decreasing.} \end{proposition} \medskip It is easy to see that the conditions (\ref{eq333Y})--(\ref{eq333A}) become weaker as $p$ increases, see e.g. (\ref{eq333B}). One more explanation is necessary: the right-hand sides in the conditions (\ref{eq333Y})--(\ref{eq333A}) are understood as $+\infty$. If $\Phi_p(t)=0$ for $t\in[0,t_*]$, then $H_p(t)=-\infty$ for $t\in[0,t_*]$ and we complete the definition by $H_p'(t)=0$ for $t\in[0,t_*]$.
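The conventions above all revolve around the generalized inverse (\ref{eq5.5CC}). The following Python sketch (illustrative only; the step function $\Phi$ below is a hypothetical example) implements (\ref{eq5.5CC}) on a grid and checks the two features noted in Remark \ref{rmk3.333}: the infimum over an empty set equals $\infty$, and the inequality (\ref{eq5.5CCC}) is strict inside an interval of constancy.

```python
def phi_inverse(phi, tau, t_grid):
    """Discrete sketch of (eq5.5CC): Phi^{-1}(tau) = inf{ t : Phi(t) >= tau },
    with the infimum over the empty set equal to infinity."""
    hits = [t for t in t_grid if phi(t) >= tau]
    return min(hits) if hits else float('inf')

# A hypothetical non-decreasing step function, constant on [1, 2):
def phi(t):
    return 0.0 if t < 1 else (1.0 if t < 2 else 4.0)

grid = [k / 100.0 for k in range(501)]   # t in [0, 5]
```

For this $\Phi$ one finds $\Phi^{-1}(1)=1$, $\Phi^{-1}(4)=2$, and $\Phi^{-1}(\Phi(3/2))=1<3/2$, since $3/2$ lies in an interval of constancy.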
Note that the conditions (\ref{eq333F}) and (\ref{eq333B}) exclude $t_*$ from the interval of integrability because otherwise the left-hand sides in (\ref{eq333F}) and (\ref{eq333B}) are either equal to $-\infty$ or indeterminate. Hence we may assume in (\ref{eq333Y})--(\ref{eq333C}) that $\delta>t_0$, correspondingly, $\Delta<1/t_0$, where $t_0\colon =\sup\limits_{\Phi_p(t)=0}t$, $t_0=0$ if $\Phi_p(0)>0$. \section{The main results} \begin{proposition} {}\label{prKPR3.1} Let $f:D\to\C$ be a homeomorphism with finite distortion. Then $f$ is a lower $Q$-homeomorphism at each point $z_0\in\overline{D}$ with $Q(z)=K_{f}(z)$, see Theorem 3.1 in \cite{KPR}. \end{proposition} \begin{proposition}{}\label{pr8.4.8} Let $D$ and $D'$ be domains in $\C$, let $z_0\in\overline{D}\setminus\{\infty\}$, and let $Q:D\to(0,\infty)$ be a measurable function. A homeomorphism $f:D\to D'$ is a lower $Q$-homeomorphism at $z_0$ if and only if \begin{equation} M(f\Sigma_{\varepsilon})\ \geq\ \int\limits_{\varepsilon}^{\varepsilon_0} \frac{dr}{||\,Q||\,_{1}(r)}\quad\forall\ \varepsilon\in(0,\varepsilon_0)\,,\quad\varepsilon_0\in(0,d_0), \label{eq8.4.9} \end{equation} where \begin{equation} d_0\ =\ \sup\limits_{z\in D}\, |z-z_0|, \label{eq8.4.10} \end{equation} $\Sigma_{\varepsilon}$ denotes the family of all the intersections of the circles $S(z_0,\,r)$, $r\in(\varepsilon,\varepsilon_0)$, with $D$, and \begin{equation} ||\,Q||\,_{1}(r)=\int\limits_{D(z_0,r)}Q(z)\ ds \label{eq8.4.11} \end{equation} is the $L_{1}$-norm of $Q$ over $D(z_0,r)=\{z\in D:|\,z-z_0|=r\}=D\cap S(z_0,r)$. The infimum of the expression from the right-hand side in (\ref{eq1.4KR}) is attained only for the function $$\varrho_0(z)\ =\ \frac{Q(z)}{||\,Q||_{1}(|\,z|)}\,,$$ see Theorem 2.1 in \cite{KR$_2$}. \end{proposition} \begin{proposition} {}\label{pr6.4.10} Let $D$ be a domain in $\C$ and $Q:D\rightarrow \lbrack 0,\infty ]$ a measurable function.
A homeomorphism $f:D\rightarrow {\C}$ is a ring $Q$-homeomorphism at a point $z_{0}$ if and only if, for every $0<r_{1}<r_{2}<d_{0}=\mathrm{dist}\,(z_{0},\partial D),$ \begin{equation} M({\Delta }(fS_{1},fS_{2},fD))\ \leq \ \frac{2\pi}{I}, \label{eq6.3.111} \end{equation} where $q_{z_{0}}(r)$ is the mean value of $Q(z)$ over the circle $|z-z_{0}|=r,$ $S_{j}=S(z_{0},\,r_{j}),$ $j=1,2,$ and $$ I\ =\ I(r_{1},r_{2})\ =\ \int\limits_{r_{1}}^{r_{2}}\ \frac{dr}{rq_{z_{0}}(r)}\ . $$ Moreover, the infimum of the expression from the right-hand side in (\ref{eq1}) is attained for the function \begin{equation} \eta _{0}(r)=\frac{1}{Irq_{z_{0}}(r)}\ , \label{eq6.3.116} \end{equation} see Theorem 3.15 in \cite{RS}. \end{proposition} The above results now yield the following. \begin{lemma} {}\label{lem6.3.2} Let $D$ and $D'$ be domains in $\C$, let $z_0\in\overline{D}\setminus\{\infty\}$, and let $Q:D\to(0,\infty)$ be a measurable function. If a homeomorphism $f:D\to D'$ is a lower $Q$-homeomorphism at $z_0$, then $f$ is a ring $Q$--homeomorphism at $z_0$. \end{lemma} {\it Proof of Lemma \ref{lem6.3.2}.} Denote by $\Sigma_{\varepsilon}$ the family of all circles $S(z_0,\,r)$, $r\in(\varepsilon,\varepsilon_0)$, $\varepsilon_0\in(0,d_0)\,.$ By Theorem 3.13 in \cite{Zi} and Proposition \ref{pr8.4.8}, we have \begin{equation} M\left(\Delta\left(fS_{\varepsilon},\,fS_{\varepsilon_{0}},\,f(D)\right)\right)\leq \frac{1}{M\left(f \Sigma_{\varepsilon}\right)}\leq\frac{2\pi}{\int\limits_{\varepsilon}^{\varepsilon_{0}}\frac{dr}{rq_{z_{0}}(r)}} \label{eq6.3.1162} \end{equation} because $f\Sigma_{\varepsilon}\subset\Sigma\left(fS_{\varepsilon},\,fS_{\varepsilon_{0}}\right)$, where $\Sigma\left(fS_{\varepsilon},\,fS_{\varepsilon_{0}}\right)$ consists of all closed curves in $f(D)$ that separate $fS_{\varepsilon}$ and $fS_{\varepsilon_{0}}$. $\Box$ \medskip Proposition \ref{prKPR3.1} and Lemma \ref{lem6.3.2} imply the following result.
\begin{theorem} \label{th3.1989898} Let $f:D\to\C$ be a homeomorphism with finite distortion. Then $f$ is a ring $Q$-homeomorphism at each point $z_0\in\overline{D}$ with $Q(z)=K_{f}(z)$. \end{theorem} \section{Estimates of Distortion} The results of this section can be obtained on the basis of Theorem \ref{th3.1989898} and the corresponding theorems of the work \cite{RS}. \begin{lemma} {}\label{lem6.3.23} Let $D$ be a domain in ${\C},$ let $D^{\prime }$ be a domain in $\overline{\C}$ with $h(\overline{\C}\setminus D^{\prime })\geq {\Delta }>0,$ and let $f:D\rightarrow D^{\prime }$ be a homeomorphism with finite distortion at a point $z_{0}\in D.$ If, for $0<\varepsilon _{0}<\mathrm{dist}\,(z_{0},\partial D),$ \begin{equation} \int\limits_{\varepsilon <|z-z_{0}|\ <\ \varepsilon _{0}}K_{f}(z)\cdot \psi _{{\varepsilon }}^{2}(|z-z_{0}|)\ dx\,dy\ \leq \ c\cdot I^{p}(\varepsilon )\ ,\ \ \ \ \ \ \varepsilon \in (0,\varepsilon _{0}), \label{eq6.3.24} \end{equation} where $p\leq 2$ and $\psi _{{\varepsilon }}(t)$ is a nonnegative function on $(0,\infty )$ such that \begin{equation} 0\ <\ I(\varepsilon )\ =\ \int\limits_{\varepsilon }^{\varepsilon _{0}}\psi _{{\varepsilon }}(t)\ dt<\infty ,\ \ \ \ \ \ \varepsilon \in (0,\varepsilon _{0}), \label{eq6.3.25} \end{equation} then \begin{equation} h(f(z),f(z_{0}))\ \leq \ \frac{32}{{\Delta }}\ \exp \left\{-\left(\frac{2\pi}{c}\right)I^{2-p}(|z-z_{0}|)\right\} \label{eq6.3.26} \end{equation} for all $z\in B(z_{0},{{\varepsilon }_{0}}).$ \end{lemma} \begin{corollary} \label{cor6.3.28} Under the conditions of Lemma \ref{lem6.3.23} and for $p = 1$, \begin{equation} h(f(z),f(z_{0}))\ \leq \ \frac{32}{{\Delta }}\ \exp \left\{-\left(\frac{2\pi}{c}\right)I(|z-z_{0}|)\right\}.
\label{eq6.3.29} \end{equation} \end{corollary} \begin{theorem} {}\label{th6.4.1} Let $D$ be a domain in ${\C},$ let $D^{\prime }$ be a domain in $\overline{\C}$ with $h(\overline{\C}\setminus D^{\prime })\geq {\Delta }>0,$ and let $f:D\rightarrow D^{\prime }$ be a homeomorphism with finite distortion at a point $z_{0}\in D.$ Then \begin{equation} h(f(z),f(z_{0}))\ \leq \ \frac{32}{{\Delta }}\ \exp \left\{ -\int\limits_{|z-z_{0}|}^{\varepsilon (z_{0})}\frac{dr}{rq_{z_{0}}(r)}\right\} \label{eq6.4.2} \end{equation} for $z\in B(z_{0},\varepsilon (z_{0})),$ where $\varepsilon (z_{0})<\mathrm{dist}\,(z_{0},\partial D)$ and $q_{z_{0}}(r)$ is the mean integral value of $K_{f}(z)$ over the circle $|z-z_{0}|=r.$ \end{theorem} \begin{corollary} \label{cor6.4.5} If \begin{equation} q_{z_{0}}(r)\leq { \log {\frac{1}{r}} } \label{eq6.4.6} \end{equation} for $r<{\varepsilon }(z_{0})<\mathrm{dist}\,(z_{0},\partial D),$ then \begin{equation} h(f(z),f(z_{0}))\leq \,\frac{32}{{\Delta }}\,\frac{\log \frac{1}{\varepsilon _{0}}}{\log \frac{1}{|z-z_{0}|}} \label{eq6.4.7} \end{equation} for all $z\in B(z_{0},\varepsilon (z_{0}))$. \end{corollary} \begin{corollary} \label{cor6.4.8} If \begin{equation} K_{f}(z)\leq { \log {\frac{1}{|z-z_{0}|}} }\ ,\qquad z\in B(z_{0},\varepsilon (z_{0})), \label{eq6.4.9} \end{equation} then (\ref{eq6.4.7}) holds in the ball $B(z_{0},\varepsilon (z_{0}))$. \end{corollary} \begin{remark} \label{rmk6.4.10} If, instead of (\ref{eq6.4.6}) and (\ref{eq6.4.9}), we have the conditions \begin{equation} q_{z_{0}}(r)\leq c\cdot {\log {\frac{1}{r}} } \label{eq6.4.11} \end{equation} and, correspondingly, \begin{equation} K_{f}(z)\leq c\cdot { \log {\frac{1}{|z-z_{0}|}} }, \label{eq6.4.12} \end{equation} then \begin{equation} h(f(z),f(z_{0}))\ \leq \ \frac{32}{{\Delta }}\ \left[ \frac{\log \frac{1}{\varepsilon _{0}}}{\log \frac{1}{|z-z_{0}|}}\right] ^{1/c}.
\label{eq6.4.13} \end{equation} \end{remark} Choosing in Lemma \ref{lem6.3.23} $\psi_{\varepsilon} (t)=1/t$ and $p=1,$ we also have the following conclusion. \begin{corollary} \label{4.23} Let $f:{\mathbb{D}}\rightarrow {\mathbb{D}}$ be a homeomorphism with finite distortion such that $f(0)=0$ and \begin{equation} \int\limits_{\varepsilon <|z|<1}K_{f}(z)\ \ \frac{dx\,dy}{{|z|}^{2}}\ \leq \ c\cdot \log {\frac{1}{\varepsilon }}\ ,\ \ \ \ \varepsilon \in (0,1). \label{eq6.4.24} \end{equation} Then \begin{equation} |f(z)|\ \leq \ 64\cdot {|z|}^{\frac{2\pi}{c}}. \label{eq6.4.25} \end{equation} \end{corollary} \begin{theorem} {}\label{th6.5.11} Let $D$ be a domain in ${\C},$ let $D^{\prime }$ be a domain in $\overline{\C}$ with $h(\overline{\C}\setminus D^{\prime })\geq {\Delta }>0,$ and let $f:D\rightarrow D^{\prime }$ be a homeomorphism with finite distortion at a point $z_{0}\in D.$ If $K_{f}(z)$ has finite mean oscillation at the point $z_{0}\in D$, then \begin{equation} h(f(z),f(z_{0}))\leq \frac{32}{{\Delta }}{\left\{ {\frac{\log \,\frac{1}{\varepsilon _{0}}}{\log \,{\frac{1}{|z-z_{0}|}}}}\right\} }^{\beta _{0}} \label{eq6.5.12} \end{equation} for some $\varepsilon _{0}<\mathrm{dist}\,(z_{0},\partial D)$ and every $z\in B(z_{0},\varepsilon _{0}),$ where $\beta _{0}>0$ depends only on the function $K_{f}$. \end{theorem} \section{On Normal Families of Homeomorphisms with Finite Distortion} The results stated below can be proved by means of Theorem \ref{th3.1989898} and the corresponding criteria of normality from the paper \cite{RS}. Given a domain $D$ in ${\C}$, let $\mathfrak{F}_{K_{f},\Delta }(D)$ be the class of all homeomorphisms $f$ with finite distortion $K_{f}$ in $D$ with $h(\overline{\C}\setminus f(D))\geq {\Delta }>0.$ \begin{theorem} {}\label{th6.6.1} If $K_{f}\in \mathrm{FMO}$, then $\mathfrak{F}_{K_{f},\Delta }(D)$ is a normal family.
\end{theorem} \begin{corollary} \label{cor6.6.2} The class $\mathfrak{F}_{K_{f},\Delta }(D)$ is normal if \begin{equation} \overline{\lim\limits_{\varepsilon \rightarrow 0}}\ \ \mathchoice {{\setbox0=\hbox{$\displaystyle{\textstyle -}{\int}$} \vcenter{\hbox{$\textstyle -$}}\kern-.5\wd0}} {{\setbox0=\hbox{$\textstyle{\scriptstyle -}{\int}$} \vcenter{\hbox{$\scriptstyle -$}}\kern-.5\wd0}} {{\setbox0=\hbox{$\scriptstyle{\scriptscriptstyle -}{\int}$} \vcenter{\hbox{$\scriptscriptstyle -$}}\kern-.5\wd0}} {{\setbox0=\hbox{$\scriptscriptstyle{\scriptscriptstyle -}{\int}$} \vcenter{\hbox{$\scriptscriptstyle -$}}\kern-.5\wd0}}\!\int_{B(z_{0},\varepsilon )}K_{f}(z)\ \ dx\,dy\ <\ \infty \ \ \ \ \ \ \forall \ z_{0}\in D. \label{eq6.6.3} \end{equation} \end{corollary} \begin{corollary} \label{cor6.6.4} The class $\mathfrak{F}_{K_{f},\Delta}(D)$ is normal if every $z_0\in D$ is a Lebesgue point of $K_{f}(z)$. \end{corollary} \begin{theorem} {}\label{th6.6.5} Let $\Delta >0$ and suppose that \begin{equation} \int\limits_{0}^{{\varepsilon }(z_{0})}\frac{dr}{rq_{z_{0}}(r)}=\infty \label{eq6.6.6} \end{equation} holds at every point $z_{0}\in D$ and for every $f\in \mathfrak{F}_{K_{f},\Delta }(D)$, where ${\varepsilon }(z_{0})=\mathrm{dist}\,(z_{0},\partial D)$ and $q_{z_{0}}(r)$ denotes the mean integral value of $K_{f}(z)$ over the circle $|z-z_{0}|=r$. Then $\mathfrak{F}_{K_{f},\Delta }(D)$ forms a normal family. \end{theorem} \begin{corollary} \label{cor6.6.7} The class $\mathfrak{F}_{K_{f},\Delta }(D)$ is normal if $K_{f}(z)$ has singularities of the logarithmic type of order not greater than 1 at every point $z\in D$. \end{corollary} \medskip \section{On some integral conditions} \setcounter{equation}{0} The following results can be found in \cite{RS$_2$}.
Recall that a function $\Phi :[0,\infty ]\to [0,\infty ]$ is called {\bf convex} if $$ \Phi (\lambda t_1 + (1-\lambda) t_2)\ \le\ \lambda\ \Phi (t_1)\ +\ (1-\lambda)\ \Phi (t_2)$$ for all $t_1$ and $t_2\in[0,\infty ] $ and $\lambda\in [0,1]$.\medskip In what follows, ${\R}(\varepsilon),$ $\varepsilon\in (0,1),$ denotes the ring in the plane ${\C}$, \begin{equation}\label{eq5.5Cf} {\R}(\varepsilon)\ =R\,(\varepsilon,\,1,\,0).\end{equation} \noindent The following statement is a generalization and strengthening of Lemma 3.1 from \cite{RSY$_1$}. \begin{lemma} \label{lem5.5C} {\it\, Let $Q:{\Bbb B}\to [0,\infty ]$ be a measurable function and let $\Phi:[0,\infty ]\to (0,\infty ]$ be a non-decreasing convex function. Suppose that the mean value $M(\varepsilon)$ of the function $\Phi\circ Q$ over the ring ${\R}(\varepsilon),$ $\varepsilon\in (0, 1),$ is finite. Then \begin{equation}\label{eq3.222} \int\limits_{\varepsilon}^{1}\ \frac{dr}{rq^{\frac{1}{p}}(r)}\ \ge\ \frac{1}{2}\ \int\limits_{eM(\varepsilon)}^{\frac{M(\varepsilon)}{\varepsilon^2}}\ \frac{d\tau}{\tau \left[\Phi^{-1}(\tau )\right]^{\frac{1}{p}}}\qquad\qquad\forall\quad p\in (0, \infty) \end{equation} where $q(r)$ is the average of the function $Q(z)$ over the circle $|z|=r.$ }\end{lemma} \bigskip \begin{remark}\label{rmk3.333A} Note that (\ref{eq3.222}) is equivalent for each $p\in (0, \infty)$ to the inequality \begin{equation}\label{eq3.1!} \int\limits_\varepsilon^1\frac{dr}{rq^{\frac{1}{p}}(r)}\ \ge\ \frac{1}{2}\int\limits_{eM(\varepsilon)}^{\frac{M(\varepsilon)}{\varepsilon^2}}\frac{d\tau}{\tau\Phi_p^{\,-1}(\tau)}\ ,\qquad \Phi_p(t)\ \colon =\ \Phi(t^p)\ . \end{equation} Note also that $M(\varepsilon)$ converges as $\varepsilon\to 0$ to the average of $\Phi\circ Q$ over the unit disk ${\Bbb B}$.
\end{remark} \medskip \begin{corollary} \label{cor3.1}{\,\it Let $\Phi:[0,\infty ]\rightarrow (0,\infty ]$ be a non-decreasing convex function, $Q:{\Bbb B}\rightarrow [0,\infty ]$ a measurable function, $Q_*(z)=1$ if $Q(z)<1$ and $Q_*(z)=Q(z)$ if $Q(z)\ge 1$. Suppose that the mean $M_*(\varepsilon)$ of the function $\Phi\circ Q_*$ over the ring ${\R}(\varepsilon),$ $\varepsilon\in (0, 1),$ is finite. Then \begin{equation}\label{eq3.1} \int\limits_{\varepsilon}^{1}\ \frac{dr}{rq^{\frac{\lambda}{p}}(r)}\ \ge\ \frac{1}{2}\ \int\limits_{eM_*(\varepsilon)}^{\frac{M_*(\varepsilon)}{\varepsilon^2}}\ \frac{d\tau}{\tau \left[\Phi^{-1}(\tau )\right]^{\frac{1}{p}}}\ \qquad \ \ \forall\ \lambda\ \in\ (0,1), \qquad p\in (0, \infty) \end{equation} where $q(r)$ is the average of the function $Q(z)$ over the circle $|z|=r.$ } \end{corollary} \medskip Indeed, let $q_*(r)$ be the average of the function $Q_*(z)$ over the circle $|z|=r$. Then $q(r)\le q_*(r)$ and, moreover, $q_*(r)\ge 1$ for all $r\in (0,1)$. Thus, $q^{\frac{\lambda}{p}}(r)\le q_*^\frac{\lambda}{p}(r)\le q_*^\frac{1}{p}(r)$ for all $\lambda\in (0,1)$ and hence by Lemma \ref{lem5.5C} applied to $Q_*(z)$ we obtain (\ref{eq3.1}).
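The circle averages $q(r)$ used throughout this section are easy to approximate numerically. The following Python sketch (with a hypothetical non-radial $Q$, added here for illustration) computes $q(r)$ by a midpoint rule in the angle and also checks the two facts used in the argument above: $q(r)\le q_*(r)$ and $q_*(r)\ge 1$.

```python
import math

def circle_mean(Q, r, n=20000):
    """Numerical average q(r) of Q over the circle |z| = r
    (midpoint rule in the angle)."""
    total = 0.0
    for k in range(n):
        th = 2.0 * math.pi * (k + 0.5) / n
        total += Q(r * math.cos(th), r * math.sin(th))
    return total / n

Q = lambda x, y: x * x                    # a hypothetical non-radial majorant
Q_star = lambda x, y: max(Q(x, y), 1.0)   # its truncation from below by 1

q = circle_mean(Q, 0.5)                   # exact value: r^2 / 2 = 0.125
q_star = circle_mean(Q_star, 0.5)         # here Q < 1 on |z| = 0.5, so q_* = 1
```

For this $Q$ the exact average over $|z|=r$ is $r^2/2$, since the mean of $\cos^2\theta$ over the circle is $1/2$.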
\medskip \begin{theorem} \label{th5.555}{\it\, Let $Q:{\Bbb B}\to [0,\infty ]$ be a measurable function such that \begin{equation}\label{eq5.555} \int\limits_{{\Bbb B}} \Phi (Q(z))\ dx\,dy\ <\ \infty\end{equation} where $\Phi:[0,\infty ]\to [0,\infty ]$ is a non-decreasing convex function such that \begin{equation}\label{eq3.333a} \int\limits_{\delta_0}^{\infty}\ \frac{d\tau}{\tau \left[\Phi^{-1}(\tau )\right]^{\frac{1}{p}}}\ =\ \infty\,,\qquad p\in (0, \infty)\,, \end{equation} for some $\delta_0\ >\ \tau_0\ \colon =\ \Phi(0).$ Then \begin{equation}\label{eq3.333A} \int\limits_{0}^{1}\ \frac{dr}{rq^{\frac{1}{p}}(r)}\ =\ \infty \end{equation} where $q(r)$ is the average of the function $Q(z)$ over the circle $|z|=r$.} \end{theorem} \begin{remark}\label{rmk4.7www} Since $\left[\Phi^{\,-1}(\tau)\right]^{\frac{1}{p}}= \Phi_p^{\,-1}(\tau)$ where $\Phi_p(t)=\Phi(t^p),$ (\ref{eq3.333a}) implies that \begin{equation}\label{eq3.a333} \int\limits_{\delta}^{\infty}\ \frac{d\tau}{\tau \Phi^{-1}_p(\tau )}\ =\ \infty\ \ \ \ \ \ \ \ \ \ \forall\ \delta\ \in\ [0,\infty) \end{equation} but (\ref{eq3.a333}) for some $\delta\in[0,\infty)$, generally speaking, does not imply (\ref{eq3.333a}). Indeed, for $\delta\in [0,\delta_0),$ (\ref{eq3.333a}) evidently implies (\ref{eq3.a333}) and, for $\delta\in(\delta_0,\infty)$, we have that \begin{equation}\label{eq3.e333} 0\ \le\ \int\limits_{\delta_0}^{\delta}\ \frac{d\tau}{\tau \Phi_p^{-1}(\tau )}\ \le\ \frac{1}{\Phi_p^{-1}(\delta_0)}\ \log\ \frac{\delta}{\delta_0}\ <\ \infty \end{equation} because $\Phi_p^{-1}$ is non-decreasing and $\Phi_p^{-1}(\delta_0)>0$. Moreover, by the definition of the inverse function $\Phi_p^{-1}(\tau)\equiv 0$ for all $\tau \in [0,\tau_0],$ $\tau_0=\Phi_p(0)$, and hence (\ref{eq3.a333}) for $\delta\in[0,\tau_0),$ generally speaking, does not imply (\ref{eq3.333a}).
If $\tau_0 > 0$, then \begin{equation}\label{eq3.c333} \int\limits_{\delta}^{\tau_0}\ \frac{d\tau}{\tau \Phi_p^{-1}(\tau )}\ =\ \infty\ \ \ \ \ \ \ \ \ \ \forall\ \delta\ \in\ [0,\tau_0)\,. \end{equation} However, (\ref{eq3.c333}) gives no information on the function $Q(z)$ itself and, consequently, (\ref{eq3.a333}) for $\delta < \Phi(0)$ cannot imply (\ref{eq3.333A}) at all. \end{remark} \medskip In view of (\ref{eq3.a333}), Theorem \ref{th5.555} follows immediately from Lemma \ref{lem5.5C}. \medskip \begin{corollary} \label{cor555}{\it\, If $\Phi:[0,\infty ]\to [0,\infty ]$ is a non-decreasing convex func\-tion and $Q$ satisfies the condition (\ref{eq5.555}), then each of the conditions (\ref{eq333Y})--(\ref{eq333A}) for $p\in (0, \infty)$ implies (\ref{eq3.333A}). Moreover, if in addition $\Phi(1)<\infty$ or $q(r)\ge 1$ on a subset of $(0,1)$ of positive measure, then each of the conditions (\ref{eq333Y})--(\ref{eq333A}) for $p\in (0, \infty)$ implies \begin{equation}\label{eq3.3} \int\limits_{0}^{1}\ \frac{dr}{rq^{\frac{\lambda}{p}}(r)}\ =\ \infty\ \ \ \ \ \ \ \ \ \forall\ \lambda\ \in\ (0,1) \end{equation} and also \begin{equation}\label{eq3.3AB} \int\limits_{0}^{1}\ \frac{dr}{r^{\alpha}q^{\frac{\beta}{p}}(r)}\ =\ \infty\ \ \ \ \ \ \ \ \ \forall\ \alpha\ge 1 ,\ \beta\ \in\ (0,\alpha]\,. \end{equation}} \end{corollary} \section{Sufficient conditions for equicontinuity} \setcounter{equation}{0} \medskip Let $D$ be a fixed domain in the extended plane $\overline{{\C}}={\C}\cup\{\infty\}.$ Given a function $\Phi:[0, \infty]\rightarrow [0, \infty],$ $M>0,$ $\Delta>0$, $\frak{F}^{\Phi}_{M,\Delta}$ denotes the collection of all homeomorphisms $f$ with finite distortion in $D$ such that $h\left(\overline{{\C}}\setminus f(D)\right)\ge \Delta$ and \begin{equation}\label{eq2!!} \int\limits_D\Phi\left(K_{f}(z)\right)\frac{dx\,dy}{\left(1+|z|^2\right)^2}\ \le\ M\,.
\end{equation} \medskip \begin{theorem}\label{th1!}{\it\, Let $\Phi:[0, \infty]\rightarrow [0, \infty]$ be a non-decreasing convex function. If \begin{equation}\label{eq3!} \int\limits_{\delta_0}^{\infty} \frac{d\tau}{\tau\Phi^{-1}(\tau)}\ =\ \infty \end{equation} for some $\delta_0>\tau_0:=\Phi(0),$ then the class $\frak{F}^{\Phi}_{M,\Delta}$ is equicontinuous and, consequently, forms a normal family of mappings for every $M\in(0, \infty)$ and $\Delta\in(0, 1).$ } \end{theorem} \medskip \begin{remark}\label{rem1} Note that the condition \begin{equation}\label{eq3!!} \int\limits_D \Phi\left(K_{f}(z)\right)dx\,dy\le M \end{equation} implies (\ref{eq2!!}). Thus, the condition (\ref{eq2!!}) is more general than (\ref{eq3!!}) and homeomorphisms with finite distortion satisfying (\ref{eq3!!}) form a subclass of $\frak{F}^{\Phi}_{M,\Delta}.$ Conversely, if the domain $D$ is bounded, then (\ref{eq2!!}) implies the condition \begin{equation}\label{eq4!} \int\limits_D \Phi\left(K_{f}(z)\right)dx\,dy\le M_* \end{equation} where $M_*=M\cdot\left(1+\delta_*^2\right)^2,$ $\delta_*=\sup\limits_{z\in D}|z|.$ \end{remark} \medskip \begin{corollary}\label{cor1!}{\,\it Each of the conditions (\ref{eq333Y})--(\ref{eq333A}) for $p\in (0, 1] $ implies equicontinuity and normality of the classes $\frak{F}^{\Phi}_{M,\Delta}$ for all $M\in (0, \infty)$ and $\Delta\in (0, 1).$ } \end{corollary} \medskip Given a function $\Phi:[0, \infty]\rightarrow [0, \infty],$ $M>0$ and $\Delta>0,$ $S^{\Phi}_{M, \Delta}$ denotes the class of all homeomorphisms $f$ of $D$ in the Sobolev class $W_{loc}^{1,2}$ with a locally integrable $K_{f}(z)$ such that $h\left(\overline{{\C}}\setminus f(D)\right)\ge\Delta$ and (\ref{eq2!!}) holds for $K_{f}(z).$ Note that if $\Phi$ is non-decreasing, convex and non--constant on $[0,\infty)$, then (\ref{eq2!!}) itself implies that $K_{f}(z)\in L_{loc}^1.$ Note also that $S^{\Phi}_{M, \Delta}\subset \frak{F}^{\Phi}_{M, \Delta},$ see e.g. Theorem 4.1 in \cite{MRSY}.
Thus, we have the following consequence. \medskip \begin{corollary}\label{cor2!}{\,\it Each of the conditions (\ref{eq333Y})--(\ref{eq333A}) for $p\in (0, 1]$ implies equicontinuity and normality of the class $S^{\Phi}_{M,\Delta}$ for all $M\in (0, \infty)$ and $\Delta\in (0, 1).$ } \end{corollary} \section{Necessary conditions for equicontinuity} \setcounter{equation}{0} \begin{theorem}\label{th3}{\it\, Suppose that the classes $S^{\Phi}_{M,\Delta}\subset \frak{F}^{\Phi}_{M,\Delta}$ are equicontinuous (normal) for a non--decreasing convex function $\Phi:[0, \infty]\rightarrow [0, \infty]$ and for all $M\in (0,\infty)$ and $\Delta\in (0,1)$. Then \begin{equation}\label{eq3} \int\limits_{\delta_*}^{\infty}\frac{d\tau}{\tau \Phi^{\,-1}(\tau)}\ =\ \infty \end{equation} for all $\delta_*\in (\tau_0, \infty)$ where $\tau_0\ \colon=\ \Phi(0).$} \end{theorem}\medskip It is evident that the function $\Phi(t)$ in Theorem \ref{th3} cannot be constant because otherwise we would have no real restrictions on $K_{f}$, except for the case $\Phi(t)\equiv\infty$ when the classes $S^{\Phi}_{M,\Delta}$ are empty. Moreover, by the known criterion of convexity, see e.g. Proposition 5 in I.4.3 of \cite{Bou}, the slope $[\Phi(t)-\Phi(0)]/t$ is nondecreasing.
Hence the proof of Theorem \ref{th3} follows from the next statement.\medskip \begin{lemma}\label{th3!}{\it\, Let a function $\Phi : [0,\infty]\to[0,\infty]$ be non-decreasing and \begin{equation}\label{eq4!!} \Phi(t)\ \ge\ C\cdot t\qquad\forall\ t\in [T, \infty] \end{equation} for some $C>0$ and $T\in (0, \infty).$ If the classes $S_{M,\Delta}^{\Phi}\subset \frak{F}_{M,\Delta}^{\Phi}$ are equicontinuous (normal) for all $M\in (0,\infty)$ and $\Delta\in (0,1)$, then (\ref{eq3}) holds for all $\delta_*\in (\tau_0, \infty)$ where $\tau_0\ \colon=\ \Phi(+0).$} \end{lemma} \medskip \begin{remark}\label{rem4} Theorem \ref{th3} shows that the condition (\ref{eq3!}) in Theorem \ref{th1!} is not only sufficient but also necessary for the equicontinuity (normality) of classes with integral constraints of the type either (\ref{eq2!!}) or (\ref{eq4!}) with a convex non--decreasing $\Phi.$ In view of Proposition \ref{pr4.1aB}, the same concerns all the conditions (\ref{eq333Y})--(\ref{eq333A}) with $p=1.$ \end{remark} \medskip \begin{corollary}\label{cor3!} {\it\, The equicontinuity (normality) of the classes $S^{\Phi}_{M,\Delta}\subset \frak{F}^{\Phi}_{M,\Delta}$ for $M\in (0, \infty)$, $\Delta\in (0,1)$ and non--decreasing convex $\Phi$ implies that \begin{equation}\label{eq6!!} \int\limits_{\delta}^{\infty}\log \Phi(t)\ \frac{dt}{t^{2}}\ =\ \infty \end{equation} for all $\delta>t_0$ where $t_0:=\sup\limits_{\Phi(t)=0}t,$ $t_0=0$ if $\Phi(0)>0.$ } \end{corollary} \medskip The condition (\ref{eq6!!}) is also sufficient for the equicontinuity (normality) of the classes $S^{\Phi}_{M,\Delta}$ and $\frak{F}^{\Phi}_{M,\Delta}$.
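As a numerical illustration of the dichotomy behind the condition (\ref{eq6!!}), the truncated integrals $\int_\delta^T \log\Phi(t)\,dt/t^2$ can be computed for a convex function for which the condition holds, e.g. $\Phi(t)=e^t$, and for one for which it fails, e.g. $\Phi(t)=(1+t)^2$. The Python sketch below (illustrative only, added here; it is not part of the proofs) uses the substitution $t=e^u$ to resolve the long range of integration.

```python
import math

def tail_integral(H, delta, T, n=20000):
    """Midpoint approximation of the truncation of (eq6!!):
    the integral of H(t) / t^2 over [delta, T], via the substitution t = e^u."""
    a, b = math.log(delta), math.log(T)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        u = a + (i + 0.5) * h
        total += H(math.exp(u)) * math.exp(-u)
    return total * h

H_exp = lambda t: t                        # log Phi(t) for Phi(t) = e^t
H_pow = lambda t: 2.0 * math.log(1.0 + t)  # log Phi(t) for Phi(t) = (1 + t)^2

grow = [tail_integral(H_exp, 1.0, 10.0 ** k) for k in (2, 4, 6)]
stay = [tail_integral(H_pow, 1.0, 10.0 ** k) for k in (2, 4, 6)]
```

For $\Phi(t)=e^t$ the truncations grow like $\log T$ (divergence), while for $\Phi(t)=(1+t)^2$ they saturate at the finite value $4\log 2$.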
\section*{References} \bibliographystyle{plain} \section{Reminders on the BLASSO} \subsection{Recovery of the Support in Presence of Noise} Let $\posO\in \Poso{}^N$, $\ampO\in (\RR\setminus\{0\})^N$ and $\measO=\sum_{i=1}^N\ampOi \dirac{\posOi}$. The BLASSO is the variational problem \begin{align}\label{eq:defblasso} \umin{m \in \radon} \frac{1}{2} \normObs{\Phi m - \obsw}^2 + \lambda \normTVX{m} \tag{$\blasso$}, \end{align} where $\obsw\eqdef\Phi\measO+w$ are the noisy observations of a measure composed of a sum of Dirac masses. The optimality of a measure $m_\la$ for~$\blasso$ is characterized by the fact that the function \begin{align} \label{eq:certifdual} \eta_\la\eqdef \Phi^* p_\la \qwhereq p_\la \eqdef \frac{1}{\la}(\obsw-\Phi m_\la) \end{align} satisfies $\normLi{\eta_\la}\leq 1$. The function $\eta_\la$ is then called a dual certificate. When one is interested in the recovery of the support, \ie finding a solution $\meas$ of $\blasso$ composed of exactly the same number of Dirac masses as the initial measure $\measO$, in a small noise regime, an important object is the so-called vanishing derivatives precertificate introduced in~\cite{duval-exact2013}. \begin{defn}[Vanishing Derivatives Precertificate, \cite{duval-exact2013}]\label{sec:blasso-def:etaV} If $\GaxO$ has full column rank, there is a unique solution to the problem \begin{align*} \inf\enscond{\normObs{p}}{\forall i=1,\ldots,N, \; (\Phi^*p)(\posOi)=\sign(\ampOi), (\Phi^*p)'(\posOi)=0_{\RR^d}}. \end{align*} Its solution $\pVV$ is given by \begin{align}\label{eq-vanish-closed-form} \pVV=(\GaxO^{+})^* \begin{pmatrix} \sign(\ampO) \\ 0_{(\RR^d)^N}\end{pmatrix}, \end{align} and we define the vanishing derivatives precertificate as $\etaVV\eqdef\Phi^*\pVV$ ($\GaxO$ is defined in Equation~\ref{sec:intro-def:Gax}).
\end{defn} One can show that if $\normLi{\etaVV}\leq 1$ then $\etaVV$ is a so-called valid certificate, which ensures that $\measO$ is a solution to the constrained problem (corresponding to setting $w=0$ and $\la\to0$ in $\blasso$) \begin{align*} \umin{\Phi m=\obsO} \normTVX{m} \qwhereq \obsO\eqdef\Phi\measO \tag{$\bpursuit$}. \end{align*} More importantly, if it satisfies a stronger nondegeneracy condition detailed in Definition~\ref{sec:blasso-def:etaVnondegen} below, then $\etaVV$ also ensures the stable recovery of the support in a small noise regime when solving the BLASSO. This result, proved in~\cite{duval-exact2013}, is stated in Theorem~\ref{sec:blasso-thm:supportrecovery}. \begin{defn}[Nondegeneracy of $\etaVV$, \cite{duval-exact2013}]\label{sec:blasso-def:etaVnondegen} We say that $\etaVV$ is \emph{nondegenerate} if \begin{equation}\label{eq-etaV-nondegen} \left\{\begin{split} \foralls \pos\in \Pos \setminus \bigcup_{i=1}^N\{\posOi\}, \quad \abs{\etaVV(\pos)}&<1,\\ \foralls i\in\{1,\ldots,N\}, \quad \det(D^2\etaVV(\posOi))&\neq 0. \end{split}\right. \end{equation} \end{defn} \begin{thm}[Exact Support Recovery, \cite{duval-exact2013}]\label{sec:blasso-thm:supportrecovery} Assume that $\phi\in\kernel{2}$, $\GaxO$ has full column rank and $\etaVV$ is nondegenerate. Then there exists $C>0$ such that if $(\la,w)\in\RR_+^*\times\Obs$ satisfies: \eq{\max\pa{\la,\normObs{w}/\la} \leq C,} then there is a unique solution $\meas$ to $\blasso$ composed of $N$ Dirac masses such that $(\amp,\pos)=\fimpp(\la,w)$ where $\fimpp$ is $\Ccr^{1}$. In particular, by taking the regularization parameter $\lambda = \normObs{w}/C$ proportional to the noise level, one obtains \eq{ \normLiVec{(\amp,\pos)-(\ampO,\posO)}=O(\normObs{w}), } where $\normLiVec{\cdot}$ is the $\ell^{\infty}$ norm for vectors. \end{thm} Figure~\ref{fig:EtaV_microscopy} displays some examples of $\etaVV$ associated to several $\Phi$ operators for 3-D super-resolution fluorescence microscopy.
This shows that for these inverse problems, the BLASSO stably recovers the support of the input measure if the noise level is not too high. \subsection{The Super-Resolution Problem} In this section, $\Pos$ is considered to be $1$-dimensional and we now tackle the super-resolution problem in presence of noise using the BLASSO. In this setting, we assume that the Dirac masses of the initial measure have positive amplitudes and cluster at some point $\poscluster\in \Poso$. We parametrize this cluster as \eq{ \meastO\eqdef\sum_{i=1}^N \ampOi \dirac{\poscluster+t\poszOi} \qwhereq \ampOi>0, \ \poszOi\in\RR, } and where the parameter $t>0$ controls the separation distance between the spikes of the input measure. In~\cite{Denoyelle2017}, the authors proved that the recovery of the support in presence of noise in the limit $t\to0$ is controlled by the $2N-1$ vanishing derivatives precertificate. \begin{prop}[$2N-1$ Vanishing Derivatives Precertificate,~\cite{Denoyelle2017}]\label{sec:blasso-def:etaW} If $\injdn$ holds at $\poscluster$ (see Definition~\ref{sec:intro-def:injectivity}), there is a unique solution to the problem \begin{align*} \inf\enscond{\normObs{p}}{(\Phi^*p)(\poscluster)=1, (\Phi^*p)'(\poscluster)=0, \ldots , (\Phi^*p)^{(2N-1)}(\poscluster)=0}. \end{align*} We denote by $\pW$ its solution, given by \begin{equation}\label{def-pW} \pW=(\Fdn^{+})^*\dirac{2N} \qwhereq \dirac{2N} \eqdef (1,0,\ldots,0)^T \in \RR^{2N}, \end{equation} and we define the $2N-1$ vanishing derivatives precertificate as $\etaW\eqdef\Phi^*\pW$ (see Equation~\ref{sec:intro-eq:Fk} for the definition of $\Fdn$). \end{prop} Figure~\ref{sec:blasso-fig:etaWgaussian} shows $\etaW$ in the case of a Gaussian convolution kernel.
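Such minimal-norm precertificates, e.g. the closed form \eqref{eq-vanish-closed-form}, can be reproduced numerically on a discretized operator. The following Python sketch (an illustration with hypothetical parameters: a Gaussian kernel with $\sigma=1$ sampled on a fine grid, and a single spike at $x_0=0$, so $N=1$) computes the minimal-norm vector $p$ satisfying the interpolation constraints and checks the nondegeneracy $|\eta_V(x)|<1$ away from $x_0$; for the Gaussian one expects, in the continuous limit, $\eta_V(x)=e^{-(x-x_0)^2/(4\sigma^2)}$.

```python
import numpy as np

# Hypothetical discretization of a Gaussian convolution operator:
# phi(x) is the vector of samples t -> exp(-(t - x)^2 / (2 sigma^2)).
sigma = 1.0
t = np.linspace(-10.0, 10.0, 2001)

def phi(x):
    return np.exp(-(t - x) ** 2 / (2.0 * sigma ** 2))

def dphi(x):
    # derivative of phi(x) with respect to the position x
    return (t - x) / sigma ** 2 * phi(x)

# Vanishing derivatives precertificate for a single spike at x0 = 0:
# minimal-norm p with <phi(x0), p> = 1 and <phi'(x0), p> = 0.
x0 = 0.0
Gamma = np.stack([phi(x0), dphi(x0)])                 # 2 x M constraint matrix
p, *_ = np.linalg.lstsq(Gamma, np.array([1.0, 0.0]), rcond=None)

xs = np.linspace(-5.0, 5.0, 401)
eta = np.array([phi(x) @ p for x in xs])              # eta_V(x) = <phi(x), p>
```

The call to \texttt{np.linalg.lstsq} on the underdetermined system returns the minimum-norm solution, which matches the pseudoinverse formula \eqref{eq-vanish-closed-form} in this discretized setting.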
\begin{figure}[!htb] \centering \subfigure[$N=1$]{\includegraphics[width=0.23\linewidth]{etaW/etaW-gaussian-n1sigma10}} \subfigure[$N=2$]{\includegraphics[width=0.23\linewidth]{etaW/etaW-gaussian-n2sigma10}} \subfigure[$N=4$]{\includegraphics[width=0.23\linewidth]{etaW/etaW-gaussian-n4sigma10}} \subfigure[$N=7$]{\includegraphics[width=0.23\linewidth]{etaW/etaW-gaussian-n7sigma10}}\caption{\label{sec:blasso-fig:etaWgaussian}$\etaW$ for a Gaussian convolution ($x\in\RR$, $\phi(x)=e^{-\frac{(\cdot-x)^2}{2\sigma^2}}$) for several numbers of spikes and $\sigma=1$.} \end{figure} \begin{rem} From Proposition~\ref{sec:blasso-def:etaW}, one can easily see that $\etaW$ can equivalently be written as \begin{align}\label{sec:blasso-eq:etaWdefbis} \forall\pos\in\Pos,\quad\etaW(\pos)=\sum_{k=0}^{2N-1} \al_k\partial^{(k)}_2\Co(\pos,\poscluster), \end{align} where $\Co$ is the correlation kernel associated to the correlation operator $\Phi^*\Phi$, namely $\Co(\pos,\pos')=\dotObs{\phi(\pos)}{\phi(\pos')}$, and the coefficients $\al_k$ are defined by the equations \begin{align}\label{sec:blasso-eq:etaWequations} \forall k\in\{0,\ldots,2N-1\},\quad\etaW^{(k)}(\poscluster)=\delta_0^k. \end{align} \end{rem} If $\etaW$ satisfies some nondegeneracy property (see Definition~\ref{sec:blasso-def:etaWnondegen}) then one can prove that the recovery of the support in a small noise regime when $t\to0$ is possible. Theorem~\ref{sec:blasso-thm:superresol} (see~\cite{Denoyelle2017}) makes this statement precise by quantifying the scaling between the noise level and the separation $t$ to ensure the recovery. \begin{defn}\label{sec:blasso-def:etaWnondegen} Assume that $\injdn$ holds at $\poscluster$ and $\phi\in\kernel{2N}$. We say that $\etaW$ is $(2N-1)$-nondegenerate if $\etaW^{(2N)}(\poscluster)\neq 0$ and for all $\pos\in \Pos\setminus\{\poscluster\}$, $|\etaW(\pos)| <1$. 
\end{defn} \begin{thm}\label{sec:blasso-thm:superresol} Suppose that $\phi\in\kernel{2N+1}$ and that $\etaW$ is $(2N-1)$-nondegenerate. % Then there exist positive constants $t_0,C,M$ (depending only on $\phi$, $\ampO$ and $\poszO$) such that for all $0<t<t_0$, for all $(\la,w)\in \ball{0}{C t^{2N-1}}$ with $\normObs{w}/\la\leq C$, \begin{itemize} \item the BLASSO has a unique solution, \item that solution has exactly $N$ spikes, and it is of the form $m_{a,\poscluster+tz}$, with $(\amp,\posz)=g(\la,w)$ (where $g$ is a $\Cder{2N}$ function), \item the following inequality holds \eq{ \normLiVec{(\amp,\posz)-(\ampO,\poszO)}\leq\constdg\pa{\frac{|\la|}{t^{2N-1}}+\frac{\normObs{w}}{t^{2N-1}}}. } \end{itemize} \end{thm} In the next section, we prove that the main assumption of Theorem~\ref{sec:blasso-thm:superresol} (the nondegeneracy of $\etaW$) is satisfied for some operators $\Phi$ associated to Laplace measurements. \section*{Conclusion} This paper demonstrated, from both theoretical and practical perspectives, the interest of the Sliding Frank-Wolfe algorithm, in particular when facing challenging non-translation invariant operators such as Laplace kernels. Such operators lead to difficulties in estimating the spikes positions, which are efficiently addressed by the non-convex step updating the spikes locations off the grid. The BLASSO method, coupled with this Sliding Frank-Wolfe solver, is well adapted to these non-convolutive operators because it does not rely on spectral (Fourier) methods and can be analyzed theoretically through the prism of convex duality and vanishing certificates. \section*{Acknowledgement} The authors would like to thank Laure Blanc-F\'eraud for initiating this collaboration and for stimulating discussions. The work of Gabriel Peyr\'e has been supported by the European Research Council (ERC project NORIA). The work of Emmanuel Soubies has been supported by the European Research Council (ERC project GlobalBioIm).
\section{Introduction} \subsection{Super-Resolution using the BLASSO} \label{sec-intro-blasso-pw} Super-resolution consists in retrieving the fine scale details of a possibly noisy signal from coarse scale information. The importance of recovering the high frequencies of a signal comes from the fact that there is often a physical blur in the acquisition process, such as diffraction in optical systems, wave reflection in seismic imaging or spikes recording from neuronal activity. In resolution theory~\cite{den-resolution1997}, the two-point resolution criterion defines the ability of a system to resolve two points of equal intensities. It is defined as a distance, namely the Rayleigh criterion, which only depends on the system. In the case of the ideal low-pass filter (\ie convolution with the Dirichlet kernel) with cutoff frequency $f_c$, the Rayleigh criterion is $1/f_c$. Then, super-resolution in signal processing consists in developing techniques which make it possible to retrieve information below the Rayleigh criterion. Let us introduce in a more formal way the problem which will be the framework of this article. Let $\Pos$ be a connected subset of $\RR^d$ with non-empty interior or the $d$-dimensional torus $\TT^d$ ($d\in\NN^*$) and $\radon$ the Banach space of bounded Radon measures on $\Pos$. The latter can be seen as the topological dual of $\ContX(\Pos,\RR)$, the space of continuous functions on $\Pos$ that vanish at infinity. We consider a given integral operator $\Phi:\radon\to\Obs$, where $\Obs$ is a separable Hilbert space, whose kernel $\phi$ is supposed to be a smooth function (see Definition~\ref{sec:intro-def:admkernel} for the technical assumptions made on $\phi$), \ie \begin{align}\label{def-Phi} \forall m\in\radon,\quad\Phi m\eqdef\int_\Pos \phi(\pos)\d m(\pos). \end{align} The operator $\Phi$ models the acquisition process.
It includes translation-invariant operators such as convolutions (\ie $\phi(x)=\tilde{\phi}(\cdot-x)$) as well as non-translation invariant operators such as the Laplace transform ($X=\RR_+^*$ and $\phi(x) = (t\mapsto e^{-tx} ) \in L^2(\RR_+)$) considered in the present paper. The sparse spikes super-resolution problem aims at recovering an approximation of an unknown input discrete measure $\measO\eqdef\sum_{i=1}^{N} \ampOi \dirac{\posOi}$ from noisy measurements $\obsw\eqdef\obsO+w$ where $\obsO \eqdef \Phi\measO$ and $w \in \Obs$ models the acquisition noise. Here $\ampOi\in\RR$ are the amplitudes of the Dirac masses at positions $\posOi\in\Pos$. This is an ill-posed inverse problem and the BLASSO is a way to solve it stably by introducing a sparsity-enforcing convex regularization. \subsubsection{From the LASSO to the BLASSO} The common practice in sparse spike recovery relies on $\lun$ regularization which is known as LASSO in statistics~\cite{tibshirani-regression1994} or basis pursuit in the signal processing community~\cite{chen-atomic1998}. Given a grid of possible positions, the reconstruction problem is addressed as the minimization of a quadratic error subject to an $\lun$ penalization. The $\lun$ prior provides solutions with few nonzero coefficients and can be computed efficiently with convex optimization methods. Moreover, recovery guarantees have been proved under certain assumptions~\cite{donoho-superresolution1992}. Following recent works (see for instance \cite{bhaskar-atomic2011,bredies-inverse2013,candes-towards2013,deCastro-exact2012,duval-exact2013}), we consider instead sparse spike estimation methods which operate over a continuous domain, \ie without resorting to some sort of discretization on a grid. The inverse problem is solved over the space of Radon measures which is a non-reflexive Banach space.
This continuous \guillemet{grid-free} setting makes the mathematical analysis easier and allows us to make precise statements about the location of the recovered spikes. The technique that we are considering in this paper consists in solving a convex optimization problem that uses the total variation norm, which is the counterpart of the $\lun$ norm for measures. It favors the emergence of spikes in the solution and is defined by \begin{align}\label{sec:intro-eq:TV} \forall m \in \radon, \quad \normTVX{m} \eqdef \usup{\psi \in \ContX} \enscond{ \int_\Pos \psi \d m }{ \normLi{\psi} \leq 1 }. \end{align} In particular, for $\measO\eqdef\sum_{i=1}^{N} \ampOi \dirac{\posOi}$, \eq{ \normTVX{m_{\ampO,\posO}}=\normu{\ampO}, } which shows in a way that the total variation norm generalizes the $\lun$ norm to the continuous setting of measures (\ie no discretization grid is required). When no noise is contaminating the data, one considers the classical basis pursuit, defined originally in~\cite{chen-atomic1998} in a finite dimensional setting, written here over the space of Radon measures \begin{align}\label{eq-blasso-noiseless} \umin{m \in \radon} \normTVX{m} \quad \mbox{s.t.}\quad \Phi m=\obsO \tag{$\bpursuit$}. \end{align} This problem is studied in~\cite{candes-towards2013}, in the case where $\Phi$ is an ideal low-pass filter on the torus $X=\TT$. When the signal is noisy, \ie when one observes $y=\obsO+w$, with $w\in \Obs$, we may rather consider the problem \begin{align}\label{eq-blasso-noisy} \umin{m \in \radon} \frac{1}{2} \normObs{\Phi m - y}^2 + \lambda \normTVX{m} \tag{$\blasso$}. \end{align} Here $\lambda>0$ is a parameter that should be adapted to the noise level $\normObs{w}$. This problem is coined as BLASSO~\cite{deCastro-exact2012}.
\subsubsection{BLASSO performance analysis} In order to quantify the recovery performance of the methods $\bpursuit$ and $\blasso$, the following two questions arise: \begin{enumerate} \item Do the solutions of $\bpursuit$ recover the input measure $\measO$? \item How close is the solution of $\blasso$ to the solution of $\bpursuit$? \end{enumerate} When the amplitudes of the spikes are arbitrary complex numbers, the answers to the above questions require a large enough minimum separation distance $\De(\measO)$ between the spikes, where \begin{align}\label{min-sep-dist} \De(\measO) \eqdef \umin{i \neq j} \dX(\posOi,\posOj). \end{align} When $\Pos=\TT$, $\dX$ is the geodesic distance on the circle \begin{align}\label{dist-T} \forall x,y\in\RR, \quad \dX(x+\ZZ,y+\ZZ)=\min_{k\in\ZZ} |x-y+k|. \end{align} In~\cite{candes-towards2013}, the authors show that for the ideal low-pass filter, $\measO$ is the unique solution of $\bpursuit$ provided that $\De(\measO)\geq \frac{C}{f_c}$ where $C>0$ is a universal constant and $f_c$ the cutoff frequency of the ideal low-pass filter. In the same paper, it is shown that $C\leq 2$ when $\ampO\in\CC^N$ and $C\leq 1.87$ when $\ampO\in \RR^N$. In~\cite{carlos-super2015}, the constant $C$ is further refined to $C\leq 1.26$ when $\ampO\in \RR^N$. Suboptimal lower bounds on $C$ were given in~\cite{duval-exact2013,tang2015resolution}. Moreover, it was recently shown in~\cite{ferreira-2018tight} that necessarily $C\geq 1$ in the sense that for all $\varepsilon>0$, and for $f_c$ large enough, there exist measures with $\De(\measO)\geq (1-\varepsilon)/f_c$ which are not identifiable using~\eqref{eq-blasso-noiseless}. The second question receives partial answers in~\cite{azais-spike2014,bredies-inverse2013,candes-superresolution2013,fernandez-support2013}.
In~\cite{bredies-inverse2013}, it is shown that if the solution of $\bpursuit$ is unique then the measures recovered by $\blasso$ converge in the weak-* sense to the solution of $\bpursuit$ when $\lambda \rightarrow 0$ and $\normObs{w}/\lambda \rightarrow 0$. In~\cite{candes-superresolution2013}, the authors measure the reconstruction error using the $\Ldeux$ norm of an ideal low-pass filtered version of the recovered measures. In~\cite{azais-spike2014,fernandez-support2013}, error bounds are given on the locations of the recovered spikes with respect to those of the input measure $\measO$. However, those works provide little information about the geometrical structure of the measures recovered by $\blasso$. That point is addressed in~\cite{duval-exact2013} where the authors show that under the \emph{Non Degenerate Source Condition}, there exists a unique solution to $\blasso$ with the exact same number of spikes as the original measure provided that $\lambda$ and $\normObs{w}/\lambda$ are small enough. Moreover, in that regime, this solution converges to the original measure when the noise drops to zero. \paragraph{BLASSO for positive spikes.} For positive spikes (\ie $\ampOi>0$), the picture is radically different. Exact recovery of $\measO$ without noise (\ie $(w,\la)=(0,0)$) holds whatever the distance between the spikes~\cite{deCastro-exact2012}, but stability constants explode as $\De(\measO) \to 0$. However, the authors in~\cite{candes-stable2014} show that stable recovery is obtained if the signal-to-noise ratio grows faster than $O(1/\De^{2N})$. This closely matches the optimal lower bounds of $O(1/\De^{2N-1})$ obtained by combinatorial methods~\cite{demanet-recoverability2014}. Finally, provided that a certain nondegeneracy condition holds, it was recently shown in~\cite{Denoyelle2017} that support recovery is guaranteed in the presence of noise if the signal-to-noise ratio grows faster than $O(1/\De^{2N-1})$.
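The blow-up of the stability constants as $\De(\measO)\to0$ can be observed numerically on the conditioning of the measurement operator restricted to the spikes positions. A small Python sketch for two spikes seen through a (discretized, purely illustrative) Laplace kernel, using the closed-form eigenvalues of the $2\times2$ Gram matrix:

```python
import math

def gram_2spikes(x1, x2, samples):
    """2x2 Gram matrix of the columns s -> exp(-s*x) for two spike positions."""
    def dot(u, v):
        return sum(math.exp(-s * (u + v)) for s in samples)
    return [[dot(x1, x1), dot(x1, x2)], [dot(x2, x1), dot(x2, x2)]]

def cond_from_gram(G):
    """Condition number sqrt(lmax/lmin) computed from the 2x2 Gram matrix,
    whose eigenvalues are available in closed form via trace and determinant."""
    tr = G[0][0] + G[1][1]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    return math.sqrt((tr + disc) / (tr - disc))

samples = [k / 10.0 for k in range(50)]   # illustrative sampling of s
conds = [cond_from_gram(gram_2spikes(1.0, 1.0 + d, samples))
         for d in (0.4, 0.2, 0.1, 0.05)]
```

The condition number grows steadily as the separation $d$ shrinks, which is the quantitative counterpart of the exploding stability constants mentioned above.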
\subsection{Solving the BLASSO} As the BLASSO is an optimization problem over the infinite dimensional space of Radon measures $\radon$, its resolution is challenging. We review in this section the existing approaches to tackle this problem. They can be roughly divided into three main families although there exists a flurry of generalizations and extensions that must be considered separately. \myparagraph{Fixed spatial discretization} A common approach consists in constraining the measure to be supported on a grid. This leads to a finite dimensional convex optimization problem---known as LASSO~\cite{tibshirani-regression1994} or basis pursuit~\cite{chen-atomic1998}---for which there exist numerous solvers. These include the block-coordinate descent (BCD) algorithm~\cite{tseng-convergence2001,wu-coordinate2008}, the homotopy/LARS algorithm~\cite{efron-lars2004,soussen-homotopy2015}, and proximal forward-backward splitting algorithms \cite{combettes-fb2005} such as the Iterative Soft Thresholding (IST)~\cite{daubechies-ist}. Although simple to implement, the latter are in general slow to converge (the error in the objective function is typically of the order of $O(1/k)$, where $k$ is the number of iterations)~\cite{daubechies-ist,donoho-adapting1999,figueiredo-em2003}. However, there exist accelerated versions such as FISTA~\cite{beck-fista2009}, which benefit from a better non-asymptotic rate of convergence ($O(1/k^2)$). Finally, it is noteworthy that these proximal methods enjoy a linear asymptotic rate (see for instance~\cite{LiangLinearFB}), but this regime can be slow to reach. The main limitation of these grid-based methods is that, in order to go below the Rayleigh limit and perform super-resolution, the grid must be thin enough. This leads to theoretical and practical issues. Indeed, refining the grid not only increases the computational cost of each iteration, but it also deteriorates the conditioning of the linear operator to invert.
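This deterioration can be observed on the mutual coherence of the discretized operator, \ie the maximal correlation between distinct normalized columns; a small Python sketch with an illustrative Gaussian kernel (values close to $1$ indicate near-collinear columns):

```python
import math

samples = [k / 400.0 for k in range(401)]   # fine sampling of the observation
sigma = 0.1                                  # illustrative Gaussian width

def coherence(grid):
    """Maximal correlation between distinct (normalized) columns of the
    discretized Gaussian measurement matrix on the given grid."""
    def col(x):
        c = [math.exp(-((s - x) ** 2) / (2 * sigma ** 2)) for s in samples]
        n = math.sqrt(sum(v * v for v in c))
        return [v / n for v in c]
    cols = [col(x) for x in grid]
    return max(sum(u * v for u, v in zip(cols[i], cols[j]))
               for i in range(len(cols)) for j in range(i + 1, len(cols)))

# Coherence for grids of stepsize 0.2, 0.05 and 0.0125 on [0, 1].
mus = [coherence([j / n for j in range(n + 1)]) for n in (5, 20, 80)]
```

As the grid is refined the coherence approaches $1$, so neighbouring columns become nearly indistinguishable and the linear system to invert becomes severely ill-conditioned.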
Hence, in practice, these methods provide solutions which are composed of small clusters of non-zero coefficients around each ``true'' spike. A way to mitigate this issue is to perform a post processing by replacing each cluster of spikes by its center of mass, as proposed in~\cite{tang-justdiscretize2013,flinthweiss2018}. This drastically reduces the number of false positive spikes although it is hard to analyze theoretically and can be unstable. Instead, one can also consider methods based on safe rules~\cite{elghaoui-safe2010} which perform a progressive pruning of the grid and keep only active sets of weights~\cite{salmon-screening2017}. Finally, it has been shown in~\cite{duval-thingridsI2017,duval-thingridsII2017} that the solution of the LASSO, in a small noise regime and when the grid stepsize tends to zero, contains pairs of spikes around the true ones. \myparagraph{Fixed spectral discretization and semidefinite programming (SDP) formulation} In~\cite{candes-towards2013}, the authors propose a reformulation of the Basis Pursuit for measures into an equivalent finite dimensional SDP for which solvers exist. Similarly, one can get an SDP formulation of the BLASSO. However, these equivalences are only true in a $1$-dimensional setting. In higher dimensions ($d\geq2$), one needs to use the so-called Lasserre's hierarchy~\cite{lasserre-moments2009,lasserre-global2004}. This principle has been used for the super-resolution problem in~\cite{decastro-semi2015}. The resolution of SDPs can be tackled through proximal splitting methods~\cite{toh-fistasvd2010} as well as interior point methods~\cite{boyd2004convex}. However, the overall complexity of the latter is polynomial in $O(f_c^{2d})$, where $d$ is the dimension of the domain $\Pos$, which restricts its application to small dimensional problems.
This limitation has led to recent developments~\cite{catala-rank2017} where the authors proposed a relaxed low rank SDP formulation of the BLASSO in order to use a Frank-Wolfe-type method (see below). The resulting method enjoys the better overall complexity of $O(f_c^d\log(f_c))$ per iteration. Finally, note that these SDP-based approaches are restricted to certain types of forward operators (typically Fourier measurements). In contrast, grid-based proximal methods as well as Frank-Wolfe (directly on the BLASSO, see below) can be used for a larger class of operators $\Phi$. \myparagraph{Optimization over the space of measures} In order to directly solve the BLASSO, one needs to design algorithms that do not use any Hilbertian structure and can instead deal with measures. The benefit is the fact that one can advantageously exploit the continuous setting of the problem (typically moving spikes continuously over the domain). In contrast to fixed spatial or spectral discretization methods, these algorithms proceed by iteratively adding new spikes, \ie Dirac masses, to the recovered measure. The Frank-Wolfe (FW) algorithm~\cite{frank-fw1956} (see Section~\ref{sec:sfw}), also called the Conditional Gradient Method (CGM)~\cite{levitin-constrained1966}, solves optimization problems of the form $\min_{m \in C}\ f(m)$, where $C$ is a weakly compact convex set of a topological vector space and $f$ is a differentiable convex function (in the case of the BLASSO, $m$ is a Radon measure). It proceeds by iteratively minimizing a linearized version of $f$. No Hilbertian structure is used which makes it well suited to work on the space of Radon measures. It has been proven under a curvature condition on $f$ (which holds on a Banach space for smooth functions having a Lipschitz gradient) that the rate of convergence of this algorithm in the objective function is $O(1/k)$ (see for instance~\cite{demyanov-1970approximate}).
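A minimal instance of this scheme, for the constrained counterpart $\min_{\normu{a}\leq\tau}\frac{1}{2}\normObs{\Phi a-y}^2$ on a grid, reads as follows. This is only an illustrative pure-Python sketch (Gaussian measurement matrix, exact line search): the key point is that the linearized subproblem over the $\lun$-ball is minimized at a signed vertex $\pm\tau e_j$, \ie each iteration adds at most one new spike.

```python
import math

def frank_wolfe_l1(Phi, y, tau, n_iter=300):
    """Frank-Wolfe for min 0.5*||Phi a - y||^2 over the l1-ball ||a||_1 <= tau.
    The step is an exact line search for the quadratic objective, and the
    iteration stops when the Frank-Wolfe duality gap (nearly) vanishes."""
    K, G = len(Phi), len(Phi[0])
    a = [0.0] * G
    for _ in range(n_iter):
        r = [sum(Phi[i][j] * a[j] for j in range(G)) - y[i] for i in range(K)]
        g = [sum(Phi[i][j] * r[i] for i in range(K)) for j in range(G)]
        j_star = max(range(G), key=lambda j: abs(g[j]))    # best new spike
        v = [0.0] * G
        v[j_star] = -tau if g[j_star] > 0 else tau         # vertex of the ball
        d = [vj - aj for vj, aj in zip(v, a)]
        if -sum(gj * dj for gj, dj in zip(g, d)) < 1e-12:  # FW duality gap
            break
        Phid = [sum(Phi[i][j] * d[j] for j in range(G)) for i in range(K)]
        step = -sum(ri * w for ri, w in zip(r, Phid)) / sum(w * w for w in Phid)
        a = [aj + max(0.0, min(1.0, step)) * dj for aj, dj in zip(a, d)]
    return a

# Illustrative example: two well-separated positive spikes, Gaussian kernel.
grid = [j / 20.0 for j in range(21)]
sigma = 0.1
Phi = [[math.exp(-((s - x) ** 2) / (2 * sigma ** 2)) for x in grid] for s in grid]
y = [row[6] + 0.5 * row[14] for row in Phi]    # spikes at x=0.3 and x=0.7
a = frank_wolfe_l1(Phi, y, tau=1.5)
```

On this example the iterates zig-zag between the two active vertices, which is the sublinear $O(1/k)$ behavior that the improved variants discussed next manage to avoid.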
However, it is possible to improve the convergence speed of FW by replacing the current iterate by any \guillemet{better} candidate $m \in C$ that further decreases the objective function $f$. This simple idea has led to several successful variations of the standard FW algorithm. For instance, the authors of~\cite{bredies-inverse2013} proposed a modified Frank-Wolfe algorithm for the BLASSO where the final step updates the amplitudes and positions of spikes by a gradient descent on a non-convex optimization problem. Moving the spikes positions takes advantage of the continuous framework of the problem (the domain $\Pos$ is not discretized) which is the main ingredient that leads to a typical $N$-step convergence observed empirically. Finally, this approach has later been used in~\cite{boyd-adcg2015} and provides state of the art results in many sparse inverse problems such as matrix completion or Single Molecule Localization Microscopy (SMLM)~\cite{SMLM,sage-quantitative2015}. % \subsection{Other methods for super-resolution} The Prony method~\cite{prony} and its successors such as MUSIC (MUltiple SIgnal Classification) \cite{schmidt-multiple1986}, ESPRIT (Estimation of Signal Parameters by Rotational Invariance Techniques) \cite{Kailath_1990}, or Matrix Pencil \cite{hua-pencils1990}, are spectral methods which perform spikes localization from low frequency measurements. They do not need any discretization and make it possible to recover exactly the initial signal in the noiseless case as long as there are enough observations compared to the number of distinct frequencies \cite{liao-music2014}. Extensions to deal with noise have been developed in \cite{cadzow-signal1988,condat-cadzow2015} and stability is known under a minimum separation distance \cite{liao-music2014}. Greedy algorithms constitute another class of popular methods for sparse super-resolution.
The Matching Pursuit (MP)~\cite{mallat-mp1994} adds new spikes by finding the ones that best correlate with the residual. The Orthogonal Matching Pursuit (OMP)~\cite{tropp-omp2008,soussen-omp2013,herzet-ols2014} is similar to MP but imposes that the current estimate of the observations, \ie $\Phi(\sum_{i=1}^k \amp_i \dirac{\pos_i})$, is always orthogonal to the residual. Hence, the amplitudes of the Dirac masses are updated by an orthogonal projection after every support update (\ie addition of a new spike). It is noteworthy that there exist many generalizations/variants of OMP. For instance, the results of OMP can be improved with a backtracking step at each iteration, allowing the removal of unreliable spikes from the support of the reconstructed measure~\cite{huang-backtracking2011}. These greedy pursuit algorithms can be applied without grid discretization~\cite{jacques2008geometrical} which enables the use of local optimizations over the spikes' positions~\cite{eftekhari2015greed}. Finally, let us mention the class of nonconvex optimization methods which include the well known Iterative Hard Thresholding (IHT)~\cite{davies-it2008,davies-iht2009}. \subsection{Contributions} Our first set of contributions, detailed in Section~\ref{sec:etaW-laplace}, studies the BLASSO performance in the special case of several types of Laplace transforms. This theoretical study is motivated by the use of these Laplace transforms for certain types of fluorescence microscopy imaging devices. Our main finding is that for positive spikes, these operators can be stably inverted without minimum separation distance. This study makes use of the theoretical tools developed in our previous work~\cite{Denoyelle2017}. Our algorithmic contributions are detailed in Section~\ref{sec:sfw}, where we introduce the \ADCG, which is an extension of the initial FW solver proposed in~\cite{bredies-inverse2013}.
Proposition~\ref{sec:sfw-prop:cvfaible} shows that this algorithm, used to solve the BLASSO, enjoys the same convergence property as the classical Frank-Wolfe algorithm (weak-* convergence with a rate in the objective function of $O(1/k)$). Our main theoretical contribution is Theorem~\ref{sec:sfw-thm:cvksteps} which proves that our algorithm converges towards the unique solution of the BLASSO in a finite number of iterations. Section~\ref{sec:microscopy} makes the connection between these two sets of contributions, by showcasing the \adcgshort~algorithm for 3-D PALM/STORM super-resolution fluorescence microscopy. We study its performance for several imaging operators, some of which rely on the inversion of a Laplace transform along the depth axis. The code to reproduce the numerical illustrations of this article can be found online at~\url{https://github.com/qdenoyelle}. \subsection{Notations and Definitions} This section gathers some useful notations and definitions. \myparagraph{Ground space and measures} We frame our theoretical and numerical analysis of the BLASSO on the space of Radon measures over a set $\Pos$. \begin{defn}[Set $\Pos$ of positions of spikes]\label{def:Pos} The set of positions of spikes, denoted $\Pos$, is supposed to be a subset of $\RR^d$ with non-empty interior $\Poso$, or $\TT^d$ with $d\in\NN^*$. \end{defn}% Definition~\ref{def:Pos} covers the particular case of $\Pos=\RR^d$, $\Pos=\TT^d$ or any compact subset with non-empty interior of $\RR^d$. \begin{defn}[Continuous functions on $\Pos$]\label{def:ContX} Let $(Y,\norm{\cdot}_Y)$ be a normed space.
We denote by $\Cder{}_c(X,Y)$ the space of $Y$-valued continuous functions with compact support, by $\ContX(\Pos,Y)$ the set of continuous functions that vanish at infinity \ie \eq{% \forall \varepsilon>0, \exists K\subset\Pos \mbox{ compact}, \quad \underset{x\in\Pos\setminus K}{\sup} \ \norm{\phi(x)}_Y \leq \varepsilon, } and by $\Contk{k}(\Pos,Y)$ the set of $k$-times differentiable functions on $\Pos$. Note that when $\Pos$ is compact, $\Cder{}_c(X,Y)$ and $\ContX(\Pos,Y)$ are simply the set $\Cont(\Pos,Y)$ of continuous functions on $\Pos$. \end{defn} Now we can define rigorously the space of real bounded Radon measures on $\Pos$. \begin{defn}[Set $\radon$ of Radon measures]\label{def:radon} We denote by $\radon$ the set of real bounded Radon measures on $\Pos$ which is the topological dual of $\ContX(\Pos,\RR)$ endowed with $\normLi{\cdot}$ (the supremum norm for functions defined on $\Pos$). \end{defn} By the Riesz representation theorem, $\radon$ is also the set of regular real Borel measures with finite total mass on $\Pos$. See~\cite{rudin-real1987} for more details on Radon measures. \myparagraph{Kernels} This paragraph details the assumptions that we use in the following on the kernel $\phi$. We recall that the operator $\Phi: \radon\rightarrow \Obs$, which models the acquisition process of the source signal, has the form: \begin{align}\label{eq-defnPhi} \forall m\in\radon, \quad \Phi m &\eqdef \int_\Pos \varphi(\pos)\d m(\pos). \end{align} The above quantity is well-defined (as a Bochner integral) as soon as $\phi$ is continuous and bounded. In order to apply some results of~\cite{Denoyelle2017}, we add the hypotheses that are summarized below. \begin{defn}[Admissible kernels $\phi$]\label{sec:intro-def:admkernel} We denote by $\kernel{k}$\index{kernel}, the set of admissible kernels of order $k$. 
A function $\phi:\Pos\to\Obs$ belongs to $\kernel{k}$ if: \begin{itemize} \item $\phi\in\Contk{k}(\Pos,\Obs)$, \item For all $p\in\Obs$, $x\in\Pos\mapsto\dotObs{\phi(\pos)}{p}$ vanishes at infinity, \item for all $0\leq i\leq k$, $\underset{\pos\in\Pos}{\sup} \normObs{D^i\phi(x)} <+\infty$. \end{itemize} where $D^i\phi$ is the $i$-th differential of $\phi$. \end{defn} \myparagraph{Operators} Given $\pos=(\pos_1,\ldots,\pos_N)\in\Poso{}^N$, we denote by $\Phi_{\pos}:\RR^N\rightarrow \Obs$ the linear operator such that: \begin{align}\label{sec:intro-def:Phix} \forall a\in \RR^N, \quad \Phi_{\pos}(a) \eqdef \sum_{i=1}^{N} a_i \phi(\pos_i), \end{align} and by $\Ga_{\pos}:(\RR^{N}\times\underbrace{\RR^N\times\cdots\times\RR^N}_{d})\rightarrow \Obs$ the linear operator defined by: \begin{align}\label{sec:intro-def:Gax} \forall (a,b_1,\ldots,b_d)\in \RR^N\times(\RR^N)^d, \quad \Ga_{\pos}\begin{pmatrix}a\\b_1\\\vdots\\b_d\end{pmatrix} \eqdef \sum_{i=1}^{N}\left( a_i \phi(\pos_i) + \sum_{j=1}^d b_{j,i} \partial_j \varphi(\pos_i)\right). \end{align} We may also write $\Ga_{\pos}=\pa{\Phi_{\pos} \ \pa{\Phi_{\pos}}^{(1)}}$, where $\pa{\Phi_{\pos}}^{(1)}$ (sometimes denoted by $\Phi_{\pos}{}'$) stacks all the first order derivatives of $\phi$ for the different positions $\pos_i$. Similarly we define $\pa{\Phi_{\pos}}^{(k)}$ for $k\geq 1$ by stacking all the derivatives of order $k$. Finally, $\Ga_{\pos}^+$ refers to the pseudo-inverse of $\Ga_{\pos}$. When $d=1$, given $\poscluster\in \Poso$, we denote by $\phiD{k} \in \Obs$ the $k^{th}$ derivative of $\phi$ at $\poscluster$, \textit{i.e.} \begin{equation}\label{sec:intro-eq:phider} \phiD{k} \eqdef \phi^{(k)}(\poscluster). \end{equation} In particular, $\phiD{0} = \phi(\poscluster)$. Given $k \in \NN$, we then define: \begin{equation}\label{sec:intro-eq:Fk} \Fk \eqdef \begin{pmatrix} \phiD{0} & \phiD{1} & \ldots & \phiD{k} \end{pmatrix}. 
\end{equation} \myparagraph{Injectivity Assumption} In order to avoid degeneracy issues we sometimes assume the following injectivity assumption of the operator when restricted to discrete spikes. \begin{defn}\label{sec:intro-def:injectivity} Let $\varphi : \Pos\to\Obs$. For all $k\in\NN$, we say that the hypothesis $\injk$ holds at $\poscluster\in\Poso$ if and only if \begin{equation} \phi\in\kernel{k} \text{ and } (\phiD{0}, \ldots,\phiD{k}) \text{ are linearly independent in } \Obs. \tag{$\injk$} \end{equation} \end{defn} \myparagraph{Norms} We use the $\ell^{\infty}$ norm, $\normLiVec{\cdot}$, for vectors of $\RR^N$ or $\RR^{2N}$, whereas the notation $\norm{\cdot}$ refers to an operator norm (on matrices, or bounded linear operators). $\normObs{\cdot}$ is the norm on $\Obs$ associated to the inner product $\dotObs{\cdot}{\cdot}$. $\normLi{\cdot}$ denotes the $\Linf$ norm for functions defined on $\Pos$. \section{BLASSO for Laplace Inversion}\label{sec:etaW-laplace} Most existing theoretical studies of super-resolution are focussed on translation-invariant operator $\Phi$ (convolution or Fourier measurements), see Section~\ref{sec-intro-blasso-pw}. In contrast, this section presents new results for one of the most fundamental non-translation invariant operator: the Laplace transform (and variants). The behavior of the Laplace transform is radically different from the one of the Fourier transform, and understanding the impact of the lack of translation invariance on super-resolution is relevant for many applications in imaging, including those considered in Section~\ref{sec:microscopy}. A first argument in favor of the BLASSO for the Laplace transform is the study provided in~\cite{duval-ndsc2017}. It essentially shows that the recovery of $N$ positive spikes with stability of the support is possible using at least $2N$ measurements, regardless of the spacing of the spikes (and the spacings of the samples). 
The stability is asserted by showing that $\etaV$ and $\etaW$ are nondegenerate, using abstract T-systems arguments. Our strategy here is different, as we provide closed form expressions for $\etaW$ for these operators in order to show its nondegeneracy. The results presented here are thus complementary to those of~\cite{duval-ndsc2017}, providing additional theoretical guarantees which backup our numerical observations. The main differences are \begin{itemize} \item we provide closed form expressions for $\etaW$, \item some of the impulse responses we consider are $L^2$-normalized, a case which is not covered by the theory of~\cite{duval-ndsc2017}, \item we cannot deal with arbitrary samplings $\mu$, contrary to~\cite{duval-ndsc2017}. \end{itemize} In this section, we suppose that $N$ spikes are clustered at the position $\poscluster\in\Poso$ (which appears in the following results because of the non translation invariance of the kernel). In the next section, we first detail the different continuous operators considered. Then, Section~\ref{subsec:etaW-unnorm} gives explicit formulas for $\etaW$ in two different setups and shows that $\etaW$ is $(2N-1)$-nondegenerate. Finally, Section~\ref{subsec-laplaceinversion} provides some numerical material concerning $\etaW$ when the continuous kernels are approximated by a sampling. \subsection{Laplace Operators}\label{subsec:laplace-models} We suppose in this section that $\Pos=[\xmin,\xmax]\subset \RR_+^*$ is a compact interval, and that $\Obs = \Ldeux(\RR_+,\mu)$ for some Radon measure $\mu$ on $\RR_+$. A generic Laplace measurement kernel is defined as \begin{align}\label{sec:basso-eq:phi-laplace} \forall \pos\in\Pos,\quad\phi(\pos) \eqdef \pa{s \mapsto \xi(\pos) e^{-s\pos}} \in \Obs.
\end{align} This choice ensures that $\phi$ defines a valid operator $\Phi$ for all the Laplace-like transform models presented below, provided $e^{-\xmin s}\d\mu(s)$ has sufficiently many finite moments (in the following we require finite moments of order $4N-1$). The kernel is parametrized by a positive Radon measure $\mu$ on $\RR_+$ (which models the sampling pattern) and a non-negative weighting function $\xi \in \Cont(\Pos,\RR)$ (which takes care of the normalization of the measurement). The adjoint operator is thus defined as \eq{ (\Phi^*p)(\pos) = \xi(\pos) \int_{\RR_+} e^{-s\pos} p(s) \d \mu(s). } The choice of $\mu$ is left to the experimentalist and corresponds to the way samples are chosen. A discrete measure $\mu = \sum_{k=1}^K \mu_k \de_{s_k}$ corresponds to using a finite set of samples values $s_k$. In this case, one can equivalently consider finite-dimensional observations $\Obs=\RR^K$ and define $\phi(\pos) \eqdef ( \xi(\pos)\mu_k e^{-s_k \pos} )_{k=1}^K \in \Obs$. A continuous measure $\d\mu(s)=h_\mu(s) \d s$ is a mathematical idealization, where a high value of $h_\mu(s)$ indicates that a high number of measurements have been taken for the index $s$ (or equivalently that there is less noise for this measurement). Conversely, a vanishing density $h_\mu(s)=0$ indicates that this measurement is not available. In contrast, $\xi$ can be freely chosen but strongly impacts the BLASSO problem by weighting the contribution of each position. The design of such a spatially-varying weighting is crucial (and non trivial) here because the operator $\Phi$ is not translation-invariant. The most frequent normalization for LASSO-type problems is \begin{equation}\label{eq-normalization} \xi(\pos)^2 = \frac{1}{ \int_{\RR_+} e^{-2 s\pos} \d\mu(s) }, \end{equation} which guarantees that $\normObs{\phi(\pos)}=1$ for all $\pos\in\Pos$. See Section~\ref{par:L2-norm-laplace} for more details on this normalization.
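As a sanity check, the discrete counterpart of this normalization can be implemented in a few lines of Python. The sampling $s_k=k/10$ with uniform weights is purely illustrative (a Riemann sum approximating the Lebesgue measure on $[0,10]$); the resulting correlation then approaches the closed form $2\sqrt{\pos\pos'}/(\pos+\pos')$ derived in Section~\ref{par:L2-norm-laplace}:

```python
import math

s = [k / 10.0 for k in range(101)]   # illustrative samples of s on [0, 10]
mu = [0.1] * len(s)                  # uniform weights (Riemann sum)

def xi(x):
    """Discrete version of the normalization (eq-normalization):
    xi(x)^2 = 1 / sum_k mu_k e^{-2 s_k x}, so that ||phi(x)||_{L2(mu)} = 1."""
    return 1.0 / math.sqrt(sum(m * math.exp(-2.0 * sk * x) for sk, m in zip(s, mu)))

def correlation(x, xp):
    """C(x, x') = xi(x) xi(x') sum_k mu_k e^{-(x + x') s_k}."""
    return xi(x) * xi(xp) * sum(m * math.exp(-(x + xp) * sk) for sk, m in zip(s, mu))
```

By construction $\Co(\pos,\pos)=1$ exactly, and for instance `correlation(1.0, 2.0)` is within a fraction of a percent of $2\sqrt{2}/3$.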
Note that both $\mu$ and $\xi$ can be independently chosen, since they operate separately on the input and output variables $\pos$ and $s$. \myparagraph{Correlation kernel} The properties of the BLASSO problem (and also the implementation of BLASSO solvers) only depend on the correlation operator $\Phi^*\Phi$ (rather than on the operator $\Phi$ itself). This operator reads $(\Phi^*\Phi m)(x) = \int_X \Co(x,x') \d m(x')$ where $\Co$ is a symmetric positive kernel. For Laplace-type operators, it reads \eq{ \forall\pos,\pos'\in\Pos,\quad\Co(\pos,\pos') = \xi(\pos)\xi(\pos') \int_{\RR_+} e^{-(\pos+\pos')s} \d \mu(s). } The choice of normalization~\eqref{eq-normalization} ensures that $\Co(\pos,\pos)=1$. We now detail in the following sections several particular cases covered by Equation~\eqref{sec:basso-eq:phi-laplace} and study the associated $\etaW$. \subsection{Preliminary Results} This section gathers preliminary results useful for the computation of $\etaW$. We begin with two elementary lemmas. Their proofs are left to the reader. The first one is a simple consequence of the Fa\`a di Bruno formula. \begin{lem} \label{lem:faadibruno} Let $I$, $I'\subset \RR$ be open intervals, and $h:I'\rightarrow I$ be a smooth diffeomorphism. Let $\poscluster\in I$, $\tcluster:=h^{-1}(\poscluster)\in I'$, and let $\eta: I\rightarrow \RR$ be a smooth function.
% Then $\eta$ satisfies \begin{equation} \eta(\poscluster)=1, \ \eta'(\poscluster)=0, \ldots, \eta^{(2N-1)}(\poscluster)=0, \end{equation} if and only if $\nu\eqdef \eta\circ h$ satisfies \begin{equation} \nu(\tcluster)=1, \ \nu'(\tcluster)=0, \ldots, \nu^{(2N-1)}(\tcluster)=0.\label{eq:laplacefaadibruno} \end{equation} Moreover, in that case, $\nu^{(2N)}(\tcluster)=\eta^{(2N)}(\poscluster)(h'(\tcluster))^{2N}$. \end{lem} The next one follows from the general Leibniz rule. \begin{lem} \label{lem:leibniz} Let $I$ be an open interval, $\poscluster\in I$ and let $g:I\rightarrow \RR$, $\eta: I\rightarrow \RR$ be two smooth functions. If $\eta$ satisfies: \begin{equation} \eta(\poscluster)=1, \ \eta'(\poscluster)=0, \ldots, \eta^{(2N-1)}(\poscluster)=0, \end{equation} then $P\eqdef \eta \times g$ satisfies: \begin{equation} P(\poscluster)=g(\poscluster), \ P'(\poscluster)=g'(\poscluster), \ldots,\ P^{(2N-1)}(\poscluster)=g^{(2N-1)}(\poscluster). \end{equation} In particular, if $P\in \RR_{2N-1}[T]$, then $P$ is the Taylor expansion of $g$ at $\poscluster$ of order $2N-1$, and $\eta^{(2N)}(\poscluster)=-g^{(2N)}(\poscluster)/g(\poscluster)$ provided that $g(\poscluster)\neq 0$. \end{lem} \subsection{Explicit Formulas for $\etaW$ in Continuous Settings}\label{subsec:etaW-unnorm} \subsubsection{Classical Laplace Operator}\label{par:laplace} We suppose that $\mu=\Ll$, where $\Ll$ is the Lebesgue measure on $\RR_+$, and $\xi=1$. Then one has \begin{align}\label{sec:laplace-eq:correllapl1} \Co(\pos,\pos')=\frac{1}{\pos+\pos'}. \end{align} The following Proposition provides a formula for $\etaW$ in this unnormalized continuous setting and proves that it is nondegenerate. \begin{prop}\label{prop:etaW-unnorm-Laplace} $\etaW$ is $(2N-1)$-nondegenerate. More precisely, we have \begin{align}\label{etaW-expr} \forall \pos\in\Pos, \quad \etaW(\pos)=1-\pa{\frac{\pos-\poscluster}{\pos+\poscluster}}^{2N}.
\end{align} \end{prop} \begin{figure}[!h] \centering \subfigure[$N=2$]{\includegraphics[width=0.48\linewidth]{etaW/etaW_unnormalized_xc}} \subfigure[$\poscluster=1$]{\includegraphics[width=0.48\linewidth]{etaW/etaW_unnormalized_N}} \caption{\label{sec:blasso-fig:etaW-laplace-unnorm}$\etaW$ for the unnormalized Laplace model for a varying $\poscluster$ with fixed $N=2$ and a fixed $\poscluster=1$ with varying $N\in\{2,4,6\}$.} \end{figure} In Figure~\ref{sec:blasso-fig:etaW-laplace-unnorm}, one sees that when the position $\poscluster$ where the spikes cluster increases, the curvature of $\etaW$ at $\poscluster$ decreases. This means that the recovery is harder in this situation. It reflects the exponential decay of the kernel $\phi$. \begin{proof}[Proof of Proposition~\ref{prop:etaW-unnorm-Laplace}] From Equations~\eqref{sec:blasso-eq:etaWdefbis} and~\eqref{sec:laplace-eq:correllapl1}, one sees that $\etaW$ has the form \begin{align*} \etaW(\pos)=\sum_{k=1}^{2N}\frac{\beta_k}{(\pos+\poscluster)^k}, \qwhereq \beta_k\in \RR. \end{align*} We set $h:t\mapsto (1/t-\poscluster)$ and $\nu\eqdef \etaW\circ h$, so that \begin{align*} \nu(t) =\sum_{k=1}^{2N}\beta_k t^k \end{align*} is a polynomial of degree at most $2N$ with $\nu(0)=0$. By Lemma~\ref{lem:faadibruno}, $\nu$ satisfies~\eqref{eq:laplacefaadibruno} at $\tcluster\eqdef \frac{1}{2\poscluster}$. As a result, $\nu(t)=1+\beta_{2N}(t-\tcluster)^{2N}$. The constant $\beta_{2N}$ is fixed by the condition $\nu(0)=0$, so that $\nu(t)=1-\left(\frac{t-\tcluster}{\tcluster}\right)^{2N}$, and $\etaW$ is given by~\eqref{etaW-expr}. The $2N$-th derivative is $\nu^{(2N)}(\tcluster)=-\frac{(2N)!}{(\tcluster)^{2N}}$, so that $\etaW^{(2N)}(\poscluster)=-\frac{(2N)!}{(2\poscluster)^{2N}}<0$. 
\end{proof} \subsubsection{$L^2$-Normalized Laplace Operator}\label{par:L2-norm-laplace} We choose $\mu=\Ll$, where $\Ll$ is the Lebesgue measure on $\RR_+$, and \eq{ \forall \pos\in\Pos, \quad \xi(\pos) = \sqrt{\frac{1}{ \int_{\RR_+} e^{-2 s\pos} \d s }}=\sqrt{2\pos}, } so that for all $\pos\in\Pos$, $\phi(\pos): s\mapsto \sqrt{2\pos}e^{-s\pos}$ and $\normObs{\phi(\pos)}=1$. One gets \begin{equation}\label{eq-normalized-lapl-correl} \forall \pos,\pos'\in\Pos,\quad\Co(\pos,\pos')\eqdef\dotObs{\phi(\pos)}{\phi(\pos')}=\frac{2\sqrt{\pos\pos'}}{\pos+\pos'}. \end{equation} The following Proposition provides a formula for $\etaW$ in this normalized setting and proves that it is nondegenerate. \begin{prop}\label{prop:etaW-normL2-Laplace} $\etaW$ is $(2N-1)$-nondegenerate. More precisely, we have the following formula: \begin{equation} \forall \pos\in\Pos,\quad \etaW(\pos)=\frac{2\sqrt{\pos\poscluster}}{\pos+\poscluster}\sum_{k=0}^{N-1}\frac{(2k)!}{2^{2k}(k!)^2}\left(\frac{\pos-\poscluster}{\pos+\poscluster}\right)^{2k}. \end{equation} \end{prop} \begin{figure}[!h] \centering \subfigure[$N=2$]{\includegraphics[width=0.48\linewidth]{etaW/etaW_normalized_xc}} \subfigure[$\poscluster=1$]{\includegraphics[width=0.48\linewidth]{etaW/etaW_normalized_N}} \caption{\label{sec:blasso-fig:etaW-laplace-norm}$\etaW$ for the normalized Laplace model for a varying $\poscluster$ with fixed $N=2$ and a fixed $\poscluster=1$ with varying $N\in\{2,4,6\}$.} \end{figure} In Figure~\ref{sec:blasso-fig:etaW-laplace-norm}, one sees that when the position $\poscluster$ where the spikes cluster increases, the curvature of $\etaW$ at $\poscluster$ decreases. The interpretation is the same as in the previous paragraph. 
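Both closed forms can also be validated numerically, by solving the $2N$ interpolation conditions defining $\etaW$ directly in the basis $(\partial_2^i\Co(\cdot,\poscluster))_{0\le i<2N}$ and comparing with the explicit expression. The following sketch (in Python with NumPy; the variable names are ours) does so for the unnormalized kernel of Proposition~\ref{prop:etaW-unnorm-Laplace} with $N=2$:

```python
import numpy as np
from math import factorial

# Numerical check of the closed form for Co(x, x') = 1/(x + x'):
# eta_W is the combination of d^i/dx'^i Co(., xc), i = 0..2N-1, satisfying
# eta(xc) = 1 and eta^(j)(xc) = 0 for j = 1..2N-1; it should equal
# 1 - ((x - xc)/(x + xc))^(2N).
N, xc = 2, 1.0
n = 2 * N

# d^j/dx^j d^i/dx'^i [1/(x+x')] at (xc, xc) = (-1)^(i+j) (i+j)! / (2 xc)^(i+j+1)
G = np.array([[(-1) ** (i + j) * factorial(i + j) / (2 * xc) ** (i + j + 1)
               for i in range(n)] for j in range(n)])
alpha = np.linalg.solve(G, np.eye(n)[0])  # enforce eta(xc)=1, vanishing derivatives

def eta_w(x):
    # eta_W(x) = sum_i alpha_i * d^i/dx'^i Co(x, xc)
    return sum(a * (-1) ** i * factorial(i) / (x + xc) ** (i + 1)
               for i, a in enumerate(alpha))

def closed_form(x):
    return 1 - ((x - xc) / (x + xc)) ** (2 * N)

for x in [0.1, 0.5, 1.0, 2.0, 5.0]:
    assert abs(eta_w(x) - closed_form(x)) < 1e-8
```

The same procedure applies verbatim to the normalized kernel, with the derivatives of $2\sqrt{\pos\pos'}/(\pos+\pos')$ evaluated numerically.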
\begin{proof}[Proof of Proposition~\ref{prop:etaW-normL2-Laplace}] From the general Leibniz rule, we have for all $n\in\{0,\ldots,2N-1\}$ and for all $\pos,\pos'\in\Pos$: \begin{equation*} \frac{\d^{n}}{\d {\pos'}^{n}}\pa{\Co(\pos,\pos')}=2\sqrt{\pos}\sum_{k=0}^{n}\binom{n}{k}\frac{\d^{n-k}}{\d {\pos'}^{n-k}}\left(\sqrt{\pos'}\right)\frac{\d^k}{\d {\pos'}^k}\left(\frac{1}{\pos+\pos'}\right). \end{equation*} Evaluating this expression at $\pos'=\poscluster$, one gets that: \eq{ \partial_2^{n}\Co(\pos,\poscluster)=\sqrt{\pos}\sum_{k=0}^{n} \frac{\alpha_k}{(\pos+\poscluster)^{k+1}}, } for some coefficients $\alpha_k\in \RR$. As a result, $\etaW$ is the unique function of the form \begin{equation*} \etaW(\pos)=\sqrt{\pos}\sum_{k=0}^{2N-1} \frac{\be_k}{(\pos+\poscluster)^{k+1}} \end{equation*} for some coefficients $\beta_k\in \RR$, which satisfies~\eqref{sec:blasso-eq:etaWequations}. As before, we set $t=\frac{1}{\pos+\poscluster}$, that is $\pos=h(t)\eqdef \frac{1}{t}-\poscluster$, and $h$ is a diffeomorphism of $(0,1/\poscluster)$ onto $(0,+\infty)$. Then: \eq{ \etaW\circ h(t)=\sqrt{\frac{1}{t}-\poscluster}\,tP(t)=\sqrt{t-t^2\poscluster}P(t),} where $P(T)=\sum_{k=0}^{2N-1}\beta_k T^k\in \RR_{2N-1}[T]$. By Lemma~\ref{lem:faadibruno} and Lemma~\ref{lem:leibniz}, $P$ is the Taylor expansion of order $2N-1$ of $g:t\mapsto \frac{1}{\sqrt{t-t^2\poscluster}}$ at $\tcluster=h^{-1}(\poscluster)=\frac{1}{2\poscluster}$. Setting $t=u+\frac{1}{2\poscluster}$, we note that: \begin{align*} \frac{1}{\sqrt{t-t^2\poscluster}}&=\frac{2\sqrt{\poscluster}}{\sqrt{1-(2u\poscluster)^2}} \qandq \frac{1}{\sqrt{1-z^2}} =\sum_{k=0}^{N-1}\frac{(2k)!}{2^{2k}(k!)^2}z^{2k} + o(z^{2N-1}). \end{align*} One deduces that \begin{align*} \frac{1}{\sqrt{t-t^2\poscluster}}&= 2\sqrt{\poscluster}\sum_{k=0}^{N-1}\frac{(2k)!}{2^{2k}(k!)^2}\left[2\poscluster(t-\tcluster)\right]^{2k}+o(\pa{t-\tcluster}^{2N-1}). 
\end{align*} As a result, $P$ is given by $P(t)=2\sqrt{\poscluster}\sum_{k=0}^{N-1}\frac{(2k)!}{2^{2k}(k!)^2}\left[2\poscluster(t-\tcluster)\right]^{2k}$ and \begin{align} \etaW\circ h(t)&=\sqrt{t-t^2\poscluster}P(t)\\ &=1-\frac{\sum_{k=N}^{+\infty}\frac{(2k)!}{2^{2k}(k!)^2}\left[2\poscluster(t-\tcluster)\right]^{2k}}{\sum_{k=0}^{+\infty}\frac{(2k)!}{2^{2k}(k!)^2}\left[2\poscluster(t-\tcluster)\right]^{2k}}. \end{align} One sees that $\abs{\etaW\circ h(t)}<1$ for all $t\in (0,\frac{1}{\poscluster})\setminus\{\frac{1}{2\poscluster}\}$, and by Lemma~\ref{lem:leibniz}, \begin{equation} (\etaW\circ h)^{(2N)}(\tcluster)=-g^{(2N)}(\tcluster)/g(\tcluster)=-\frac{((2N)!)^2}{(N!)^2}\poscluster^{2N}<0 \end{equation} so that $\etaW\circ h$ (hence $\etaW$) is $(2N-1)$-nondegenerate. One recovers $\etaW$ by composing with $h^{-1}$, noting that $2\poscluster(t-\tcluster)=\frac{\poscluster-\pos}{\pos+\poscluster}$. \end{proof} \subsection{Sampled Approximations}\label{subsec-laplaceinversion} The previous two cases (normalized and unnormalized versions of the Laplace transform) correspond to mathematical idealizations. In practice, one needs to restrict the sampling patterns by limiting their ranges and considering discrete samples. The following two setups are involved in the application of Section~\ref{sec:microscopy}. \myparagraph{Discretized Unnormalized Laplace}~ We assume that $\mu=\sum_{k=0}^{K-1} \dirac{s_k}$ and $\xi=1$. Then $\phi(\pos)=(e^{-s_k\pos})_{k=0}^{K-1} \in\RR^K$ and: \eq{ \Co(\pos,\pos') = \sum_{k=0}^{K-1} e^{-s_k(\pos+\pos')}. } \myparagraph{Discretized $L^2$-normalized Laplace}~ We let $\mu=\sum_{k=0}^{K-1} \dirac{s_k}$ and $\xi(\pos)=\pa{\sum_{k=0}^{K-1} e^{-2s_k \pos}}^{-1/2}$. Then $\phi(\pos)=\xi(\pos)(e^{-s_k\pos})_{k=0}^{K-1} \in\RR^K$, $\normObs{\phi(\pos)}=1$ and: \eq{ \Co(\pos,\pos') = \xi(\pos)\xi(\pos')\sum_{k=0}^{K-1} e^{-s_k(\pos+\pos')}. } In contrast to the continuous setups of Section~\ref{subsec:etaW-unnorm}, we do not have closed-form expressions for $\etaW$. However, if a sequence of measures, \eg{} $\mun=\sum_{k=0}^{K_{n}-1} \munk \dirac{s_{n,k}}$, converges in a suitable sense towards the Lebesgue measure $\mu=\Ll$, the following proposition shows that the corresponding $\etaW$ must be nondegenerate for $n$ large enough. 
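Before stating the result, the discretized kernels themselves are easy to validate numerically. The sketch below (plain Python; names are ours) uses uniform samples $s_k=k\,\Delta s$ with weights $\munk=\Delta s$, \ie a Riemann-sum approximation of the Lebesgue measure, and checks both the convergence towards $1/(\pos+\pos')$ and the normalization $\Co(\pos,\pos)=1$ of the $L^2$-normalized kernel:

```python
import math

def kernel_unnorm(x, xp, s, w):
    # Co(x, x') = sum_k w_k exp(-s_k (x + x')) for mu = sum_k w_k delta_{s_k}
    return sum(wk * math.exp(-sk * (x + xp)) for sk, wk in zip(s, w))

def kernel_norm(x, xp, s, w):
    # L2-normalized version: xi(x) xi(x') sum_k w_k exp(-s_k (x + x'))
    xi = lambda y: kernel_unnorm(y, y, s, w) ** (-0.5)
    return xi(x) * xi(xp) * kernel_unnorm(x, xp, s, w)

# Uniform samples s_k = k * ds with weights ds: a Riemann-sum
# approximation of the Lebesgue measure on [0, S].
ds, K = 1e-3, 20_000
s = [k * ds for k in range(K)]
w = [ds] * K

x, xp = 1.0, 2.0
assert abs(kernel_unnorm(x, xp, s, w) - 1.0 / (x + xp)) < 1e-2  # close to 1/(x+x')
assert abs(kernel_norm(x, x, s, w) - 1.0) < 1e-12               # Co(x, x) = 1
```

The paper's discretized setups correspond to unit weights $\munk=1$; the weighted form above matches the sequences $(\mun)_n$ considered in the proposition.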
We consider both the unnormalized and $L^2$-normalized setups, corresponding respectively to \begin{align*} \Con(\pos,\pos')&=\int_{\RR_+}e^{-(x+x')s}\d\mun(s), \mbox{ and}\\ \Con(\pos,\pos')&=\xi_n(x)\xi_n(x')\int_{\RR_+}e^{-(x+x')s}\d\mun(s) \qwhereq \xi_n(x)=\pa{\int_{\RR_+}e^{-2xs}\d\mun(s)}^{-1/2}, \end{align*} and similarly for $\Co$ and $\mu=\Ll$. \begin{prop} Let $(\mu_n)_{n\in\NN}$ be a sequence of positive measures which converges towards the Lebesgue measure $\mu$ in the local weak-* topology, \ie{} \begin{align*} \forall \psi\in\Cder{}_c(\RR_+),\quad \lim_{n\to +\infty}\int_{\RR_+}\psi(s)\d\mu_n(s) = \int_{\RR_+}\psi(s)\d s, \end{align*} and such that \begin{align}\label{eq:cvphimoment} \sup_{n\in\NN} \int_{\RR_+} (1+s^{4N-1})e^{-\xmin s}\d\mun(s)<+\infty. \end{align} Then, both in the unnormalized and the $L^2$-normalized case, for $n$ large enough, the $2N-1$ vanishing derivatives precertificate $\etaWk{n}$ is $(2N-1)$-nondegenerate. \end{prop} \begin{proof} Let us denote by $\Fdn^{[n]}=(\phiD{0}, \ldots,\phiD{2N-1})$ (resp. $\Fdn$) the impulse response derivatives corresponding to $\mun$ (resp. $\mu=\Ll$), and by $\etaW$ the $2N-1$ vanishing derivatives precertificate for $\mu=\Ll$. First, in view of Sections~\ref{par:laplace} and~\ref{par:L2-norm-laplace}, we observe that the result follows immediately if we prove that \begin{align} \lim_{n\to +\infty}\Fdn^{[n]*}\Fdn^{[n]} &= \Fdn^*\Fdn, \label{eq:cvfdn} \end{align} (as it implies the linear independence of $(\phiD{0}, \ldots,\phiD{2N-1})$ for $n$ large enough), and that \begin{align} \forall i\in \{0,1,\ldots, 2N\},\quad \lim_{n\to +\infty} \normLi{\etaWk{n}^{(i)}-\etaW^{(i)}}&=0,\label{eq:cvetawk} \end{align} (as it implies $\abs{\etaWk{n}(\pos)}<1$ for $\pos\neq \poscluster$ and $\etaWk{n}^{(2N)}(\poscluster)<0$ for $n$ large enough). 
We recall from~\eqref{sec:blasso-eq:etaWdefbis} that $\etaWk{n}$ is given by $\etaWk{n}(\pos)=\sum_{i=0}^{2N-1} \al_{i}^{[n]}\partial^{(i)}_2\Con(\pos,\poscluster)$ where $\alpha^{[n]} =(\Fdn^{[n]*}\Fdn^{[n]})^{-1} \dirac{2N}$ (provided the matrix is invertible), and the $(i,j)$-entry of $(\Fdn^{[n]*}\Fdn^{[n]})$ is $\partial^{(i)}_1\partial^{(j)}_2\Con(\poscluster,\poscluster)$. As a consequence, both~\eqref{eq:cvfdn} and~\eqref{eq:cvetawk} are established if we can prove that \begin{align}\label{eq:cvuphi} \lim_{n\to+\infty} \sup_{x,x'\in [\xmin,\xmax]} \abs{\partial^{(i)}_1\partial^{(j)}_2\Con(x,x')-\partial^{(i)}_1\partial^{(j)}_2\Co(x,x')}=0, \end{align} for all $i\in \{0,\ldots, 2N\}$, $j\in \{0,\ldots, 2N-1\}$. First, we prove~\eqref{eq:cvuphi} in the unnormalized case, \ie{} $\Con(\pos,\pos')=\int_{\RR_+}e^{-(x+x')s}\d\mun(s)$. The dominated convergence theorem ensures that $\partial^{(i)}_1\partial^{(j)}_2\Con(x,x')=\int_{\RR_+}s^{i+j}e^{-(x+x')s}\d\mun(s)$ (and similarly for $\Co$ and $\mu$). Let $(x,x')\in [\xmin,\xmax]^2$ and let $\psi\in \Cder{}_c(\RR_+)$ such that $\psi(s)=1$ for $s\in [0,1]$, $\psi(s)=0$ for $s\geq 2$, and $0\leq \psi\leq 1$ on $\RR_+$. We denote by $C$ the supremum in~\eqref{eq:cvphimoment}. Let $\varepsilon>0$ and $A>0$. Then, \begin{align*} &\abs{\int_{\RR_+}s^{i+j} e^{-{(x+x')} s}\d\mun(s)- \int_{\RR_+} s^{i+j}e^{-{(x+x')} s}\d s}\\ &\leq \underbrace{\abs{\int_{\RR_+} s^{i+j}e^{-{(x+x')} s}\psi\left(\frac{s}{A}\right)\d\mun(s)- \int_{\RR_+}s^{i+j} e^{-{(x+x')} s}\psi\left(\frac{s}{A}\right)\d s}}_{\eqdef a}\\ & + \underbrace{\abs{ \int_{\RR_+} s^{i+j} e^{-{(x+x')} s}(1-\psi\left(\frac{s}{A}\right))\d \mun(s)}}_{=b}+ \underbrace{\abs{ \int_{\RR_+} s^{i+j}e^{-{(x+x')} s}(1-\psi\left(\frac{s}{A}\right))\d s}}_{=c}. \end{align*} We have \begin{align*} c&\leq \int_A^{+\infty} (1+s^{4N-1})e^{-2\xmin s}\d s,\\ \qandq b &\leq e^{-\xmin A} \int_{\RR_+} (1+s^{4N-1}) e^{-{\xmin} s}\d\mun(s) \leq e^{-\xmin A} C. 
\end{align*} We choose $A>0$ sufficiently large so that $\int_A^{+\infty} (1+s^{4N-1})e^{-2{\xmin} s}\d s\leq\varepsilon$ and $ e^{-\xmin A} C\leq \varepsilon$, hence $\max(b,c)\leq \varepsilon$. Now, to prove that $a$ is uniformly small for $(x,x')\in[\xmin,\xmax]^2$ as $n\to +\infty$, we apply Lemma~\ref{lem:uniformcv} to $((x,x'),s)\mapsto s^{i+j}e^{-(x+x')s}\psi\left(\frac{s}{A}\right)$ defined on $[\xmin,\xmax]^2\times [0,2A]$. This yields the desired result. The proof for the normalized case readily follows from the uniform convergence of the unnormalized case and the fact that the normalization factors $\xi_n(\pos)=\left(\int_{\RR_+}e^{-2s\pos}\d\mu_n(s)\right)^{-1/2}\leq \left(\int_{\RR_+}e^{-2s\xmax}\d\mu_n(s)\right)^{-1/2}$ are upper bounded by some positive constant independent of $n$. \end{proof} \begin{lem}\label{lem:uniformcv} Let $X$ and $S$ be two compact metric spaces, and $\psi\in \Cder{}(X\times S)$. If $\{\mun\}_{n\in\NN}$ and $\mu$ are Radon measures such that $\mun \stackrel{*}{\rightharpoonup} \mu$ in the weak-* convergence of $\Mm(S)$, then \begin{align*} \lim_{n\to +\infty} \int_{S}\psi(x,s)\d\mun(s) = \int_{S}\psi(x,s)\d\mu(s), \end{align*} uniformly in $x\in X$. \end{lem} \begin{proof} We note that the mapping $(\eta,\nu)\mapsto \int_S \eta\d \nu$ is continuous on $\Cder{}(S)\times \Mm(S)$. Since $x\mapsto \psi(x,\cdot)$ is continuous from $X$ to $\Cder{}(S)$, the mapping \begin{align} F: (x,\nu)\longmapsto \int_S \psi(x,s)\d \nu(s) \end{align} is continuous on $X\times \Mm(S)$. Now, since $S$ is compact, $\Mm(S)$ is the dual of the Banach space $\Cder{}(S)$, and the Banach-Steinhaus theorem implies that there exists $R>0$ such that $\sup_n |\mun|(S)\leq R$ (and $|\mu|(S)\leq R$). The subspace $\Bb_R\eqdef\enscond{\nu\in\Mm(S)}{ |\nu|(S)\leq R}$ is metrizable for the weak-* topology and compact. As a result, the mapping $F$ is uniformly continuous on the compact $X\times \Bb_R$. 
In particular, as $\mun\to \mu$ in $\Bb_R$, \begin{align*} \sup_{x\in X} \abs{\int_{S}\psi(x,s)\d\mun(s)-\int_{S}\psi(x,s)\d\mu(s)}\to 0. \end{align*} \end{proof} Figure~\ref{sec:laplace-fig:etaWapprox} illustrates this convergence of the precertificates in the unnormalized case. \begin{figure}[!htb] \centering \subfigure[$K=10$]{\includegraphics[width=0.32\linewidth]{etaW/etaW-approx-1}} \subfigure[$K=120$]{\includegraphics[width=0.32\linewidth]{etaW/etaW-approx-2}} \subfigure[$K=800$]{\includegraphics[width=0.32\linewidth]{etaW/etaW-approx-3}} \caption{\label{sec:laplace-fig:etaWapprox}Approximation of $\etaW$ for the unnormalized continuous Laplace operator (see Proposition~\ref{prop:etaW-unnorm-Laplace}) by the $\etaW$ obtained for discretized unnormalized Laplace operators.} \end{figure} \section{Single Molecule Localization Microscopy}\label{sec:microscopy} The field of fluorescence microscopy has experienced an important revolution during the past two decades with the emergence of super-resolution techniques. These modalities, such as structured illumination microscopy (SIM) \cite{gustafsson-surpassing2000}, stimulated emission depletion (STED) \cite{Hell:94}, or single molecule localization microscopy (SMLM)---which includes photoactivated localization microscopy (PALM) \cite{Betzig1642,hess-ultra2007} and stochastic optical reconstruction microscopy (STORM) \cite{rust2006sub}---bypass the diffraction limit so as to reach unprecedented nanoscale resolution. The main principle behind these methods relies on a combined use of optics and numerical processing, which is commonly called computational imaging. The resolution improvement is thus directly related to the performance of the reconstruction algorithms employed to process the acquired data. SMLM techniques use photoactivatable fluorescent probes to sequentially image subsets of activated molecules. Then, dedicated algorithms are deployed to precisely extract the positions of these molecules. 
While the difficulty of the localization problem increases with the density of activated molecules per acquisition, low density activations drastically reduce the temporal resolution of the system, which limits the method for live imaging. Hence, current trends in SMLM concern the development of efficient algorithms dealing with high density data for which classical point-spread function (PSF) fitting or centroid localization methods \cite{henriques2010quickpalm} fail. In particular, off-the-grid sparse regularized methods have shown their efficiency for high density settings \cite{huang2017super,boyd-adcg2015}. For a complete review and comparisons of existing methods, we refer the reader to the two recent SMLM challenges \cite{sage-quantitative2015,Sage362517}. Initially introduced for two-dimensional imaging, SMLM has been extended to 3D thanks to PSF engineering. The principle relies on the design of PSFs which vary in the axial direction (\ie $z$) in order to encode information about the depth of molecules. Conventional PSF models include astigmatism \cite{huang-three2008} and double-helix \cite{rama-three2009}. An alternative to PSF engineering is to record simultaneously multiple focal planes, as in the biplane modality \cite{juette-three2008}. It is noteworthy that these two approaches can also be combined, as in \cite{huang-3dastmultifp} where the authors use both an astigmatism PSF and multi-focal acquisitions. \\ In this section, we study the performance of the SFW algorithm on both astigmatism and double-helix modalities with various numbers of focal planes (typically from $1$ to $4$). We emphasize that conventional astigmatism and double-helix SMLM devices---in particular commercial ones---use a single focal plane. As opposed to single-focal acquisitions, multi-focal acquisitions require mounting and synchronizing several cameras in parallel. 
To the best of our knowledge, such a setting has only been reported by Huang et al.~\cite{huang-3dastmultifp} for astigmatism SMLM. Moreover, we propose to compare these two modalities to an alternative approach where depth information is extracted from multi-angle total internal reflection fluorescence (MA-TIRF) microscopy acquisitions. Such an approach has not been reported before, and we expect our numerical simulations to serve as a proof of concept for further developments. One of the main interests of combining SMLM with MA-TIRF is that classical PSFs, which are better localized laterally than astigmatism or double-helix PSFs, can be used. This would reduce the difficulty of lateral molecule localization in high density settings while recovering the depth through the MA-TIRF acquisitions. \subsection{Forward Operators} \label{sec:FwdModels} In this section, we define the forward operator $\Phi$ for the three modalities considered in this paper. The first two correspond to conventional three-dimensional SMLM with astigmatism or double-helix PSFs. The third one, on the contrary, uses a MA-TIRF excitation in order to get information about the depth of molecules. The operator $\Phi : \radon \rightarrow \RR^{N_1N_2K}$ maps a Radon measure $m\in\radon$ to the discrete noiseless measurements $\Phi m \in \RR^{N_1N_2K}$, \begin{equation}\label{eq:Fwd} \Phi m=\int_X \phi(x)\d m(x). \end{equation} It is fully characterized by the function $\phi : X \rightarrow \RR^{N_1N_2K}$. Hence, for each modality, we only have to define $\phi$. In the following, $X \eqdef [0,b_1] \times [0,b_2] \times [0,b_3]$ is a subset of $\RR^3$, and we write $x=(x_1,x_2,x_3)\in X$. Then, we consider a camera containing $N_1 \times N_2$ pixels and we denote the center of the $i$-th pixel by $(c_{i,1},c_{i,2})$. 
Finally, we provide expressions of $\phi$ which include the integration over the camera pixels $$ \Omega_i \eqdef (c_{i,1},c_{i,2})+\left[-\frac{b_1}{2N_1},\frac{b_1}{2 N_1}\right]\times \left[-\frac{b_2}{2 N_2},\frac{b_2}{2 N_2}\right] \subset \Omega \eqdef [0,b_1] \times [0,b_2]. $$ \paragraph{Astigmatism model.} This modality provides depth information using an astigmatism deformation of the PSF with respect to the axial direction $z$. It is customary to model the latter with a Gaussian function whose variances $\sigma_1$ and $\sigma_2$ vary with $z$ according to \cite{huang2017super,kirshner2013} \begin{equation}\label{eq:astig_var} \sigma_1(z) \eqdef \sigma_0\sqrt{1+\pa{\frac{\alpha z-\beta}{d}}^2} \qandq \sigma_2(z) \eqdef \sigma_1 (-z). \end{equation} The constants involved in~\eqref{eq:astig_var} can be calibrated from real data~\cite{huang-three2008,kirshner2013}. Then, integrating this Gaussian model over camera pixels, we have for all $i\in \{1,\ldots, N_1 N_2\}$ and $k \in \{1,\ldots,K\}$ $$ [\phi(x)]_{i,k} \eqdef\frac{1}{2\pi\sigma_{1}(x_3-z_k)\sigma_{2}(x_3-z_k)} \int_{\Omega_i} e^{-\left(\frac{(x_1-s_1)^2}{2\sigma^2_{1}(x_3-z_k)}+\frac{(x_2-s_2)^2}{2\sigma^2_{2}(x_3-z_k)}\right)} \d s_1 \d s_2, $$ where $(z_k)_{k=1}^K$ are the positions of the considered focal planes. \paragraph{Double-helix model.} Here, depth information is obtained by using a PSF formed out of two lobes which coil around each other along $z$ to form a double-helix shape. In this paper, we model these lobes by two Gaussian functions with fixed variances $\sigma_1= \sigma_2$, and with centers whose lateral positions $(r_1,r_2)$ (respectively, $(-r_1,-r_2)$) vary with $z$ according to \begin{equation} r_1(z) \eqdef \frac{\om}{2}\cos(\theta(z)) \; \text{ and } \; r_2(z) \eqdef -\frac{\om}{2}\sin(\theta(z)) \; \text{ where } \; \theta(z)=\theta_{\mathrm{speed}} z \label{sec:microscopy-eq:theta}. 
\end{equation} Parameters $\omega >0$ and $\theta_{\mathrm{speed}}>0$ correspond to the distance between the two Gaussians and the rotation speed of the double-helix (rad/nm), respectively. Then, integrating this model over camera pixels, we have for all $i\in \{1,\ldots, N_1 N_2\}$ and $k \in \{1,\ldots,K\}$ $$ [\phi(x)]_{i,k} \eqdef \frac{1}{{2\pi}\sigma_{1}\sigma_{2}} \sum_{u\in\{-1,1\}} \int_{\Omega_i} e^{-\left( \frac{(x_1+u r_1(x_3-z_k)-s_1)^2}{2\sigma^2_{1}} +\frac{(x_2+u r_2(x_3-z_k)-s_2)^2}{2\sigma^2_{2}}\right) } \d s_1 \d s_2, $$ where $(z_k)_{k=1}^K$ are the positions of the considered focal planes. \paragraph{MA-TIRF model.} With this modality, each activated set of molecules is imaged using $K\in \NN$ TIRF illuminations with incident angles $(\al_k)_{k =1}^K$. Let $\nii>0$ and $n_t>0$ be the refractive indices of the incident (\ie glass coverslip) and the transmitted (\ie sample) media, respectively. A TIRF excitation is obtained when the incident angle $\al$ is greater than the critical angle $\alc = \arcsin (n_t/\nii)$ above which the light is totally reflected within the incident medium. This phenomenon produces an evanescent wave which decays in the transmitted medium as $ \exp (-s x_3) $, where $s = \frac{4\pi\nii}{\lamb}\sqrt{\sin^2(\al)-\sin^2(\alc)}$ is the inverse of the penetration depth and $\lamb$ is the wavelength of the incident laser beam~\cite{axelrod1981cell,axelrod-total2008}. Because the decay of this evanescent excitation varies with the incident angle, the depth of biological structures can be recovered with a nanometric precision from multi-angle acquisitions~\cite{boulanger2014fast,dos2016,Zheng2018}. 
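To fix orders of magnitude, the decay constant $s(\al)=\frac{4\pi\nii}{\lamb}\sqrt{\sin^2(\al)-\sin^2(\alc)}$ and the corresponding penetration depth $1/s$ can be evaluated for typical values (glass $\nii\approx 1.518$, water-like sample $n_t\approx 1.33$, $\lamb=660$~nm; these numbers are illustrative assumptions, not necessarily those used in our simulations):

```python
import math

# Evanescent decay constant s(alpha) and penetration depth 1/s for
# illustrative (assumed) optical parameters.
n_i, n_t, lam = 1.518, 1.33, 660.0  # refractive indices, wavelength in nm
alpha_c = math.asin(n_t / n_i)      # critical angle

def decay_constant(alpha_deg):
    # s(alpha) = (4 pi n_i / lambda) sqrt(sin^2 alpha - sin^2 alpha_c), in 1/nm
    a = math.radians(alpha_deg)
    return (4 * math.pi * n_i / lam) * math.sqrt(math.sin(a) ** 2
                                                 - math.sin(alpha_c) ** 2)

angles = [61.63, 67.61, 73.6, 79.58]                # incident angles (degrees)
depths = [1.0 / decay_constant(a) for a in angles]  # penetration depths (nm)

# The larger the angle above alpha_c, the shallower the evanescent field.
assert math.degrees(alpha_c) < angles[0]
assert all(d1 > d2 for d1, d2 in zip(depths, depths[1:]))
assert all(50 < d < 500 for d in depths)
```

With these assumed indices, the critical angle is slightly above $61^\circ$, consistent with the incident angles used in Figure~\ref{fig:NoiselessEx}.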
Combining this principle with SMLM techniques leads to a forward model $\Phi$ defined, for all $i \in \{1,\ldots , N_1 N_2\}$ and $k \in \{1,\ldots,K\}$, by \begin{equation}\label{eq:MATIRF_Fwd} [\phi(x)]_{i,k} \eqdef \frac{\xi(x_3) e^{-s_k x_3}}{{2\pi}\sigma_{1}\sigma_{2}} \int_{\Omega_i} e^{-\left( \frac{(x_1 - s_1)^2}{2\sigma^2_{1}} +\frac{(x_2 -s_2)^2}{2\sigma^2_{2}}\right) } \d s_1 \d s_2, \end{equation} where $ \xi(z)=\pa{\sum_{k=1}^{K} e^{-2s_k z}}^{-1/2}.$ This model comes from the combination of a lateral convolution with the axial TIRF excitation. Here the PSF of the system is assumed to be a Gaussian with variances $\sigma_1=\sigma_2$, and to be constant along $x_3$ (because only a thin layer of a few hundred nanometers is excited by the evanescent wave). The values $(s_k)_{k=1}^K$ are the decay constants associated with the incident angles $(\alpha_k)_{k=1}^K$. \begin{rem}\label{sec:microscopy-rem:separable-lap} One particularity of the MA-TIRF modality is that the kernel $\phi$ in~\eqref{eq:MATIRF_Fwd} is separable. This can be exploited numerically to reduce the overall algorithm complexity. \end{rem} \paragraph{Illustrations and numerical computation of $\etaVV$.} Examples of noiseless measurements $\obsO=\Phi\measO$, with \begin{align}\label{sec:microscopy-eq:measO} \measO=\dirac{(1.5,2.5,0.1)}+\dirac{(1.5,3,0.5)}+\dirac{(2,5,0.7)}+\dirac{(4.5,3.5,0.4)}+\dirac{(5,1,0.2)}, \end{align} are presented in Figure~\ref{fig:NoiselessEx} for the three modalities. The parameters used for these simulations are provided in Table~\ref{Table:rparametersSImu}. One can observe the effect of the three modalities on molecules at different depths. For the astigmatism modality, the orientation along which the PSF is defocused indicates the position of the molecule with respect to the focal plane (above/below). Moreover, the larger this defocusing, the deeper the molecule. 
In the case of the double-helix modality, we can clearly see the rotation of the PSF with depth. Finally, for the MA-TIRF modality, we can observe that the recorded intensities for deep molecules decrease, with the incident angle, faster than the intensity for molecules which are close to the glass coverslip (\ie $x_3=0$). \begin{figure}[t] \centering \begin{tikzpicture} \begin{groupplot}[group style={group size= 4 by 3, horizontal sep=0.3cm, vertical sep=0.3cm}, xmin=0,xmax=6.4, ymin=0,ymax=6.4, title style={yshift=-0.10cm}, grid=both, xtick={0,2,4,6}, xticklabels={0,2,4,6}, ytick={0,2,4,6}, yticklabels={0,2,4,6}, axis equal image, grid style={black}, width=0.35\textwidth] \nextgroupplot[enlargelimits=false,xticklabels={,,},ylabel style={align=center},ylabel={{Astigmatism \\[0.2cm] $x_2$ ($\micron$)}}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_ast_1-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_1=0.16\micron$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_ast_2-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_2=0.32\micron$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_ast_3-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); 
\addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_3=0.48\micron$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_ast_4-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_4=0.64\micron$}}; \nextgroupplot[enlargelimits=false,xticklabels={,,},ylabel style={align=center},ylabel={{Double-helix \\[0.2cm] $x_2$ ($\micron$)}}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_dh_1-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_1=0.16\micron$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_dh_2-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_2=0.32\micron$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_dh_3-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); 
\addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_3=0.48\micron$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,}] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_dh_4-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$z_4=0.64\micron$}}; \nextgroupplot[enlargelimits=false,ylabel style={align=center},ylabel={{MA-TIRF \\[0.2cm] $x_2$ ($\micron$)}},xlabel=$x_1$ ($\micron$)] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_laplace_1-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$\alpha_1=61.63^o$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xlabel=$x_1$ ($\micron$)] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_laplace_2-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$\alpha_2=67.61^o$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xlabel=$x_1$ ($\micron$)] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_laplace_3-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); 
\addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$\alpha_3=73.6^o$}}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xlabel=$x_1$ ($\micron$)] \addplot[] graphics[xmin=0,ymin=0,xmax=6.4,ymax=6.4] {microscopy/models/psf_laplace_4-up}; \addplot[blue!12.5!red,mark=*,mark size=1] (1.5,2.5); \addplot[blue!62.5!red,mark=*,mark size=1] (1.5,3); \addplot[blue!87.5!red,mark=*,mark size=1] (2,5); \addplot[blue!50!red,mark=*,mark size=1] (4.5,3.5); \addplot[blue!25!red,mark=*,mark size=1] (5,1); \node[white,anchor=west] at (axis cs:0.1,5.9) {{$\alpha_4=79.58^o$}}; \end{groupplot} \end{tikzpicture} \caption{\label{fig:NoiselessEx} Noiseless acquisitions $\obsO$ for the measure $\measO$ given in~\eqref{sec:microscopy-eq:measO} and $K=4$. The parameters used for these simulations are given in Table~\ref{Table:rparametersSImu}. The color of the molecules represents their depths: 0 (red) -- $0.8\micron$ (blue). } \end{figure} Although, for these three-dimensional models, an explicit expression of $\etaVV$ seems challenging to come by, the latter can be computed numerically for specific points $x \in X$. A representation of $\etaVV$ for the measure given in~\eqref{sec:microscopy-eq:measO} at $x_3=0.1$ and $x_3=0.5$ is depicted in Figure~\ref{fig:EtaV_microscopy}. For the three modalities, we have that $\etaVV(1.5,2.5,0.1)=\etaVV(1.5,3,0.5)=1$ and otherwise $\etaVV$ is smaller than $1$. Hence, $\etaVV$ seems nondegenerate and a measure composed of the same number of Dirac masses as $\measO$ can be recovered by the SFW algorithm. 
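Note that the pixel integrals appearing in the three models are separable and reduce to one-dimensional Gaussian integrals, which admit a closed form in terms of the error function. A minimal sketch (plain Python; the pixel grid and PSF width are illustrative, not the calibrated values):

```python
import math

def gauss_cdf(z):
    # Standard normal cumulative distribution function via erf.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pixel_value(x1, x2, a1, b1, a2, b2, sig1, sig2):
    # Integral over the pixel [a1,b1] x [a2,b2] of the normalized 2-D Gaussian
    # centered at (x1, x2): the double integral factors into two 1-D terms.
    f1 = gauss_cdf((b1 - x1) / sig1) - gauss_cdf((a1 - x1) / sig1)
    f2 = gauss_cdf((b2 - x2) / sig2) - gauss_cdf((a2 - x2) / sig2)
    return f1 * f2

# Sanity check: summing over a pixel grid covering the molecule recovers
# (almost) the unit mass of the normalized PSF.
h = 0.1  # pixel size (arbitrary units)
total = sum(pixel_value(0.0, 0.0, i * h, (i + 1) * h, j * h, (j + 1) * h, 1.0, 1.0)
            for i in range(-50, 50) for j in range(-50, 50))
assert abs(total - 1.0) < 1e-3
```

For the astigmatism model, the same function is called with the depth-dependent widths $\sigma_1(x_3-z_k)$ and $\sigma_2(x_3-z_k)$; for the double-helix model, it is called twice with shifted centers.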
\begin{figure}[t] \centering \begin{tikzpicture} \begin{groupplot}[group style={group size= 3 by 2, horizontal sep=0.3cm, vertical sep=0.3cm}, xmin=0,xmax=64, ymin=0,ymax=64, title style={yshift=-0.10cm}, grid=both, xtick={0,20,40,60}, xticklabels={0,2,4,6}, ytick={0,20,40,60}, yticklabels={0,2,4,6}, axis equal image, grid style={black}, width=0.4\textwidth] \nextgroupplot[enlargelimits=false,xticklabels={,,},ylabel style={align=center},ylabel={{$x_3=0.1\micron$ \\[0.2cm] $x_2$ ($\micron$)}},title=Astigmatism] \addplot[] graphics[xmin=-11,ymin=-9,xmax=80,ymax=74] {microscopy/models/etaV_ast_1}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,},title=Double-helix] \addplot[] graphics[xmin=-11,ymin=-9,xmax=80,ymax=74] {microscopy/models/etaV_dh_1}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xticklabels={,,},title=MA-TIRF] \addplot[] graphics[xmin=-11,ymin=-9,xmax=80,ymax=74] {microscopy/models/etaV_laplace_1}; \nextgroupplot[enlargelimits=false,ylabel style={align=center},ylabel={{$x_3=0.5\micron$ \\[0.2cm] $x_2$ ($\micron$)}},xlabel=$x_1$ ($\micron$)] \addplot[] graphics[xmin=-11,ymin=-9,xmax=80,ymax=74] {microscopy/models/etaV_ast_2}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xlabel=$x_1$ ($\micron$)] \addplot[] graphics[xmin=-11,ymin=-9,xmax=80,ymax=74] {microscopy/models/etaV_dh_2}; \nextgroupplot[enlargelimits=false,yticklabels={,,},xlabel=$x_1$ ($\micron$)] \addplot[] graphics[xmin=-11,ymin=-9,xmax=80,ymax=74] {microscopy/models/etaV_laplace_2}; \end{groupplot} \end{tikzpicture} \caption{\label{fig:EtaV_microscopy} Numerical computation of $\etaVV$ at $x_3=0.1$ (top) and $x_3=0.5$ (bottom) for the three models and the measure $\measO$ given in~\eqref{sec:microscopy-eq:measO}. 
The colormap ranges from 0 (blue) to 1 (red).} \end{figure} \subsection{Simulation setting}\label{subsec:simul-setting} \paragraph{Imaged Structure.} Simulations were performed using the microtubule-like structure depicted in Figure~\ref{fig:sec-models-tubs}. It was generated within the volume \begin{equation} \Pos=[0,b_1]\times[0,b_2]\times[0,b_3] \subset\RR^3 \qwhereq b_1=b_2=6.4~\micron \text{ and } b_3=0.8~\micron. \end{equation} The filaments were obtained by randomly sampling many points along four curves defined by polynomial equations. To ensure a uniform distribution of the points along the curves, we first parametrized each curve by a piecewise linear function (with very small steps). Then, in order to give a width to the filaments, each point $x \in \Pos$ randomly chosen on one of the curves is replaced by a point randomly chosen in a ball centered at $x$ with radius $10$ nm. Thus, the simulated filaments have a diameter of $20$ nm. \begin{figure}[t] \centering \includegraphics[trim=3cm 3cm 3cm 3cm,width=0.5\linewidth,clip=true]{microscopy/models/tub_with_initial_meas} \caption{Microtubules structure used for the simulations. The diameter of the filaments is $20$ nm. The color encodes the depth of molecules within the range $0-0.8~\micron$. Black crosses represent a subset of activated molecules (\ie a measure $\measO$). }\label{fig:sec-models-tubs} \end{figure} \paragraph{Simulation of noiseless acquisitions.} The $\nMol_{\text{tot}}\in\NN^*$ molecules of the simulated structure are divided into $\nGT\in\NN^*$ sparse sets of $\nMol\in\NN^*$ molecules using a random permutation (\ie $\nMol_{\text{tot}}=\nGT \times \nMol$). This models the sequential stochastic activation of fluorophores used in SMLM.
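For concreteness, the random partition into frames and the amplitude sampling described above can be sketched as follows. This is an illustrative sketch with hypothetical names (\eg \texttt{make\_frames}), not the code actually used for the simulations:

```python
# Illustrative sketch (not the authors' simulation code) of the frame
# generation described above: the n_tot molecules are split into
# disjoint sparse subsets of n_mol molecules through a random
# permutation, and each molecule receives a random amplitude in [1, 1.5].
import numpy as np

rng = np.random.default_rng(0)

def make_frames(positions, n_mol):
    """Split the molecules into n_tot / n_mol disjoint frames."""
    n_tot = len(positions)
    assert n_tot % n_mol == 0, "n_tot must equal n_frames * n_mol"
    perm = rng.permutation(n_tot)
    frames = []
    for k in range(n_tot // n_mol):
        idx = perm[k * n_mol:(k + 1) * n_mol]
        amps = rng.uniform(1.0, 1.5, size=n_mol)  # amplitudes in [1, 1.5]
        frames.append((amps, positions[idx]))     # discrete measure (a_i, x_i)
    return frames

# toy structure: 12 molecules in [0, 6.4]^2 x [0, 0.8] (microns)
pos = rng.uniform([0.0, 0.0, 0.0], [6.4, 6.4, 0.8], size=(12, 3))
frames = make_frames(pos, n_mol=4)
```

Each returned pair $(a_i,x_{0,i})_{i=1}^{\nMol}$ then defines one discrete measure $\measO$.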
For each of the $\nGT$ subsets of molecules, we define a Radon measure composed of a sum of Dirac masses---located at the positions of the molecules---with positive amplitudes \eq{ \measO=\sum_{i=1}^\nMol \ampOi\dirac{\posOi} \qwhereq \ampOi>0 \qandq \posOi \in\Pos.} The amplitudes are randomly generated within $[1,1.5]$. An example of a set of activated molecules is shown in Figure~\ref{fig:sec-models-tubs} (black crosses). Now let $N_1 \times N_2$ be the size of the grid of pixels on the detector plane, and $K$ be the number of focal planes (or the number of TIRF ``angles'', see Section~\ref{sec:FwdModels}) which are recorded. Then, the noiseless measurements $\obsO$ for an activated measure $\measO$ follow the model \begin{equation} \obsO = \Phi \measO, \end{equation} where $\Phi$ is defined in~\eqref{eq:Fwd}. Finally, it is noteworthy that in practice the number of activated molecules varies from one activation to another around an average value (which depends on the power of the excitation laser beam). However, fixing this number to $\nMol$ for each activated set of molecules allows us to better control the density of spikes in order to study the behaviour of the algorithm as this density increases. \paragraph{Noise Model.} There are two predominant sources of noise in microscopy data. \begin{itemize} \item The shot noise, which is inherent to the quantum nature of light (random emission of photons). It is well modeled by a Poisson distribution whose intensity is the number of photons collected at each pixel. Given the noiseless acquisition $\obsO$, we normalize it such that \begin{equation}\label{eq:photNoise} \max_{i \in \{1,\ldots,N_1N_2\}} \left( \sum_{k=1}^{K} [\obsO]_{i,k} \right) = \nPhoton, \end{equation} where $\nPhoton > 0$ denotes the maximal photon budget per pixel and controls the noise level. Then, each entry of $\obsO$ is replaced by a realization of a Poisson distribution $\Pp$ with parameter $[\obsO]_{i,k}$.
It is noteworthy from \eqref{eq:photNoise} that the level of noise not only increases as $\nPhoton$ decreases, but also increases with $K$. \item The readout noise $w_G$ of the camera. It is usually modeled by a Gaussian distribution with variance~$\sigma^2$. \end{itemize} Finally, the noisy data are given by \begin{equation} \obsw = \Pp(\obsO)+w_G. \end{equation} \subsection{Results} For each of the three modalities presented in Section \ref{sec:FwdModels} (Double-Helix, Astigmatism, MA-TIRF), acquisitions were simulated using the optical parameters gathered in Table~\ref{Table:rparametersSImu}. These parameters have been tuned according to the experimental PSF used in the SMLM challenge \cite{Sage362517}. We then generated different experiments by varying the density of molecules $\nMol \in \{5,10,15\}$ as well as the number of focal planes (or angles for the TIRF model) $K \in \{1,2,3,4\}$. \begin{table}[t] \centering \caption{\label{Table:rparametersSImu} Parameters used for data simulation.
} \begin{tabular}{l|ccl} \toprule \toprule & Parameter & Value & Description \\ \midrule \multirow{9}{*}{\rotatebox{90}{All modalities}} &$b_1=b_2$ & $6.4\micron$ & Region of interest \\ & $b_3$ & $0.8\micron$ & Maximal depth of molecules \\ & $N_1 = N_2$ & $64$ & Detector grid size \\ &$\NA$ & $1.49$ & Objective numerical aperture \\ &$\nii$ & $1.515$ & Refractive index incident medium \\ &$\nt$ & $1.333$ & Refractive index transmitted medium \\ &$\lamb$ & $0.66\micron$ & Excitation wavelength \\ &$\nPhoton$ & $1000$ & Photon budget\\ &$\sigma$ & $10^{-4}$ & Variance of Gaussian noise \\ \midrule \multirow{5}{*}{\rotatebox{90}{Astigmatism}} &$\sigma_0$ & ${0.42\lamb}/{\NA}$ & PSF variance at focus\\ &$\beta$ & $0.2\micron$ & Depth for which the variance is minimal \\ &$d$ & ${\lamb\nii}/({2\NA^2})$ & Parameter related to the depth-of-field\\ &$\alpha$ &$-0.79$ & Scaling constant \\ & $(z_k)_{k=1}^K$ & $k b_3 /(K+1)$ & Focal planes \\ \midrule \multirow{4}{*}{\rotatebox{90}{Double-Helix}} & $\sigma_1=\sigma_2$ & ${0.42\lamb}/{\NA}$ & PSF variance \\ & $\om$ & $1\micron$ & Distance between the two PSF lobes\\ & $\theta_{\mathrm{speed}}$ & $ 0.3846 \pi$ rad/$\micron$ & Rotation speed of the PSF\\ & $(z_k)_{k=1}^K$ & $k b_3 /(K+1)$ & Focal planes \\ \midrule \multirow{3}{*}{\rotatebox{90}{MA-TIRF}} & $\sigma_1=\sigma_2$ & ${0.42\lamb}/{\NA}$ & PSF variance \\ & $(\alpha_k)_{k=1}^K$ & $\alc + \frac{\alm-\alc}{K-1}(k-1)$ & Incident angles \\ & $\alm$ & $\sin^{-1}(\NA / \nii)$ & Maximal incident angle \\ \bottomrule \bottomrule \end{tabular} \end{table} \subsubsection{Metrics for evaluation} In order to assess the quality of the reconstructed volumes, we consider standard metrics which reflect both the detection rate and the localization error \cite{Sage362517,sage-quantitative2015}. Given a recovered frame and a tolerance radius $r >0$, we pair estimated molecules and ground truth (GT) molecules when the distance between them is lower than~$r$. 
Paired estimated molecules are then referred to as true positives (TP), while unpaired ones are counted as false positives (FP). Finally, the unpaired GT molecules are identified as false negatives (FN). Once these quantities are determined for each frame, we can compute the Jaccard index (Jac), the Recall (Rec) and the Precision (Pre) metrics, \begin{equation} \mathrm{Jac} = \frac{\#\mathrm{TP}}{\#\mathrm{TP}+ \#\mathrm{FP}+\# \mathrm{FN}} \quad \mathrm{Rec}=\frac{\#\mathrm{TP}}{\#\mathrm{TP}+\# \mathrm{FN}} \quad \mathrm{Pre}= \frac{\#\mathrm{TP}}{\#\mathrm{TP}+ \#\mathrm{FP}}. \end{equation} The Jaccard index measures the overall performance of detection by giving a measure of similarity between the two sets of points. The Recall and Precision metrics can then be used to measure the ability of an algorithm to minimize FN and FP detections, respectively. Finally, the TP molecules are used to compute the root mean squared error (RMSE) along each dimension \begin{equation} \mathrm{RMSE}_{x_1} = \sqrt{\frac{1}{\# \mathrm{TP}} \sum_{i \in \mathrm{TP}} ([x_i]_1 - [x_{0,i}]_1)^2}, \end{equation} and similarly for $\mathrm{RMSE}_{x_2}$ and $\mathrm{RMSE}_{x_3}$. Note that, by construction, the RMSE is bounded by the radius $r$. Hence, in the following, we use different values for $r$ depending on the metric of interest. \subsubsection{Choice of the regularization parameter $\lambda$} For each experiment (\ie $\nMol\in \{5,10,15\}$ and $\K \in \{1,2,3,4\}$), we choose the value of the regularization parameter $\lambda$ which maximizes the Jaccard index for a radius of $r=0.02$ (\ie $20$ nm). This training step was performed over a small subset of initial measures $\measO$ (\ie frames). The recovery was then performed on the complete dataset using the optimal $\lambda$ found. \subsubsection{Discussion} The evolution of the Jaccard, Recall, and Precision metrics with respect to $K$ is depicted in Figure~\ref{sec:microscopy-fig:indices}. As expected, they all increase with $K$.
However, although the improvement is significant from $K=1$ to $K=2$, higher values only provide marginal gains. This can be explained by the fact that the photon budget $\nPhoton$ is distributed over the $K$ acquisitions (see equation \eqref{eq:photNoise}). Hence, the additional axial information brought by increasing the number of acquisitions per activation should be balanced by the higher noise corrupting the data. Another observation from these plots concerns the degradation of the performance as the density (\ie the number of molecules $\nMol$) increases. \begin{figure}[t] \centering \begin{tikzpicture} \begin{groupplot}[group style={group size= 3 by 1, horizontal sep=0.3cm, vertical sep=0.3cm}, xmin=0.5,xmax=4.5, ymin=0.2,ymax=1, grid=both, xtick={1,2,3,4}, ytick={0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1}, xlabel={$K$}, legend columns=3, legend style={legend cell align=left,at={(3.15,-0.4)},font=\footnotesize}, width=0.4\textwidth] \nextgroupplot[title={$N=5$}] \addplot[blue,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/DHRec_N5.dat}; \addplot[darkgreen,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/AstRec_N5.dat}; \addplot[red,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/LapRec_N5.dat}; \addplot[blue,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/DHPre_N5.dat}; \addplot[darkgreen,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/AstPre_N5.dat}; \addplot[red,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/LapPre_N5.dat}; \addplot[blue,mark=*,mark size=0.8] table{figures/microscopy/results/DHJac_N5.dat}; \addplot[darkgreen,mark=*,mark size=0.8] table{figures/microscopy/results/AstJac_N5.dat}; 
\addplot[red,mark=*,mark size=0.8] table{figures/microscopy/results/LapJac_N5.dat}; \legend{Double-Helix (Recall),Astigmatism (Recall),MA-TIRF (Recall),Double-Helix (Precision),Astigmatism (Precision),MA-TIRF (Precision),Double-Helix (Jaccard),Astigmatism (Jaccard),MA-TIRF (Jaccard)} \nextgroupplot[title={$N=10$},yticklabels={,,}] \addplot[blue,mark=*,mark size=0.8] table{figures/microscopy/results/DHJac_N10.dat}; \addplot[blue,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/DHRec_N10.dat}; \addplot[blue,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/DHPre_N10.dat}; \addplot[darkgreen,mark=*,mark size=0.8] table{figures/microscopy/results/AstJac_N10.dat}; \addplot[darkgreen,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/AstRec_N10.dat}; \addplot[darkgreen,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/AstPre_N10.dat}; \addplot[red,mark=*,mark size=0.8] table{figures/microscopy/results/LapJac_N10.dat}; \addplot[red,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/LapRec_N10.dat}; \addplot[red,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/LapPre_N10.dat}; \nextgroupplot[title={$N=15$},yticklabels={,,}] \addplot[blue,mark=*,mark size=0.8] table{figures/microscopy/results/DHJac_N15.dat}; \addplot[blue,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/DHRec_N15.dat}; \addplot[blue,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/DHPre_N15.dat}; \addplot[darkgreen,mark=*,mark size=0.8] table{figures/microscopy/results/AstJac_N15.dat}; \addplot[darkgreen,densely 
dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/AstRec_N15.dat}; \addplot[darkgreen,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/AstPre_N15.dat}; \addplot[red,mark=*,mark size=0.8] table{figures/microscopy/results/LapJac_N15.dat}; \addplot[red,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/LapRec_N15.dat}; \addplot[red,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/LapPre_N15.dat}; \end{groupplot} \end{tikzpicture} \caption{\label{sec:microscopy-fig:indices} Evolution of Jaccard, Recall and Precision metrics with respect to $K$, for a radius of detection $r=0.02$ ($20$nm).} \end{figure} These results also provide useful insights for improving existing systems. Let us recall that current commercial systems include the Astigmatism and Double-Helix modalities with one focal plane (\ie $K=1$). Hence, it can be inferred from our simulations that recording an image at two focal planes for each activation of molecules would not only significantly improve the reconstruction quality but also make the reconstructions more robust as the density of molecules increases. These observations corroborate the study in \cite{huang-3dastmultifp}, where the authors use a multi-focus astigmatism system. However, to preserve a reasonable temporal resolution, multi-focal acquisitions require synchronizing several cameras \cite{huang-3dastmultifp}, which can be expensive and lead to delicate calibration procedures (\textit{e.g.} alignment and PSF aberrations for each camera). In that respect, the proposed combination of SMLM with MA-TIRF offers an interesting alternative to improve existing systems.
First, it has the potential to provide reconstructions whose quality compares favorably with the Double-Helix model while improving over the Astigmatism modality. Second, it only requires the use of galvanometric mirrors to control the incident angle \cite{boulanger2014fast}. It is noteworthy that commercial SMLM systems generally use a single TIRF illumination to limit the illumination depth. Finally, as for the multi-focus strategy, MA-TIRF requires some calibrations (\textit{e.g.} incident angles) for which there exist dedicated procedures \cite{boulanger2014fast,Soubies2016}. \begin{rem} Although the PSFs used for these simulations have been adjusted using experimental PSFs, they remain idealistic. This is particularly the case for the Double-Helix which in practice deviates from two Gaussian lobes that coil around each other along $z$ \cite{Sage362517}. In contrast, the Gaussian model yields a precise approximation of the MA-TIRF (\ie widefield) PSF \cite{Zhang2007}. The main simplification for the latter lies in the fact that each molecule is activated only during one set of multi-angle acquisitions. This would not be the case with a real implementation of the system and the model should be improved by considering the temporal aspect of the acquisition. However, the present study constitutes a first proof-of-concept and future developments will consider a more sophisticated model. 
\end{rem} \begin{figure}[!h] \centering \begin{tikzpicture} \begin{groupplot}[group style={group size= 3 by 1, horizontal sep=0.3cm, vertical sep=0.3cm}, xmin=0.5,xmax=4.5, ymin=0,ymax=35, grid=both, xtick={1,2,3,4}, ytick={5,10,15,20,25,30,35}, xlabel={$K$}, legend columns=3, legend style={legend cell align=left,at={(3.15,-0.4)},font=\footnotesize}, width=0.4\textwidth] \nextgroupplot[title={$N=5$}] \addplot[blue,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/DH_RMSEx_N5.dat}; \addplot[darkgreen,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/Ast_RMSEx_N5.dat}; \addplot[red,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/Lap_RMSEx_N5.dat}; \addplot[blue,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/DH_RMSEy_N5.dat}; \addplot[darkgreen,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/Ast_RMSEy_N5.dat}; \addplot[red,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/Lap_RMSEy_N5.dat}; \addplot[blue,mark=*,mark size=0.8] table{figures/microscopy/results/DH_RMSEz_N5.dat}; \addplot[darkgreen,mark=*,mark size=0.8] table{figures/microscopy/results/Ast_RMSEz_N5.dat}; \addplot[red,mark=*,mark size=0.8] table{figures/microscopy/results/Lap_RMSEz_N5.dat}; \legend{Double-Helix ($\mathrm{RMSE}_{x_1}$),Astigmatism ($\mathrm{RMSE}_{x_1}$),MA-TIRF ($\mathrm{RMSE}_{x_1}$),Double-Helix ($\mathrm{RMSE}_{x_2}$),Astigmatism ($\mathrm{RMSE}_{x_2}$),MA-TIRF ($\mathrm{RMSE}_{x_2}$),Double-Helix ($\mathrm{RMSE}_{x_3}$),Astigmatism ($\mathrm{RMSE}_{x_3}$),MA-TIRF ($\mathrm{RMSE}_{x_3}$)} \nextgroupplot[title={$N=10$},yticklabels={,,}] \addplot[blue,mark=*,mark size=0.8] table{figures/microscopy/results/DH_RMSEz_N10.dat}; 
\addplot[blue,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/DH_RMSEx_N10.dat}; \addplot[blue,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/DH_RMSEy_N10.dat}; \addplot[darkgreen,mark=*,mark size=0.8] table{figures/microscopy/results/Ast_RMSEz_N10.dat}; \addplot[darkgreen,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/Ast_RMSEx_N10.dat}; \addplot[darkgreen,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/Ast_RMSEy_N10.dat}; \addplot[red,mark=*,mark size=0.8] table{figures/microscopy/results/Lap_RMSEz_N10.dat}; \addplot[red,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/Lap_RMSEx_N10.dat}; \addplot[red,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/Lap_RMSEy_N10.dat}; \nextgroupplot[title={$N=15$},yticklabels={,,}] \addplot[blue,mark=*,mark size=0.8] table{figures/microscopy/results/DH_RMSEz_N15.dat}; \addplot[blue,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/DH_RMSEx_N15.dat}; \addplot[blue,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/DH_RMSEy_N15.dat}; \addplot[darkgreen,mark=*,mark size=0.8] table{figures/microscopy/results/Ast_RMSEz_N15.dat}; \addplot[darkgreen,densely dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/Ast_RMSEx_N15.dat}; \addplot[darkgreen,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/Ast_RMSEy_N15.dat}; \addplot[red,mark=*,mark size=0.8] table{figures/microscopy/results/Lap_RMSEz_N15.dat}; \addplot[red,densely 
dashed,mark=triangle*,every mark/.append style={solid},mark size=0.9] table{figures/microscopy/results/Lap_RMSEx_N15.dat}; \addplot[red,loosely dashed,mark=square*,every mark/.append style={solid},mark size=0.8] table{figures/microscopy/results/Lap_RMSEy_N15.dat}; \end{groupplot} \end{tikzpicture} \caption{\label{sec:microscopy-fig:rmse} Evolution of the RMSE (nm) with respect to $K$, for a radius of detection $r=0.1$ ($100$nm).} \end{figure} The results in terms of RMSE presented in Figure~\ref{sec:microscopy-fig:rmse} lead to similar interpretations. First, the localization accuracy increases with $K$ and decreases with $\nMol$. Second, we can observe that the differences between the Double-Helix and the MA-TIRF models mainly come from the precision in $x_3$. Indeed, they both lead to the same lateral RMSE (around $5$nm when $\nMol=5$ and $12$nm at the highest density $\nMol=15$), but the Double-Helix enjoys a better axial RMSE. This reflects the difficulty of inverting the Laplace transform, to which the MA-TIRF model is related. Nevertheless, the SFW algorithm performs quite well at this task (see also Figures~\ref{sec:microscopy-fig:tub-Kfixe} and~\ref{sec:microscopy-fig:tub-Nfixe}). Another observation is that the Double-Helix can reach a better axial than lateral RMSE. This fact, which was also observed in the recent SMLM challenge \cite{Sage362517}, can be explained by the large lateral support of the Double-Helix PSF as well as its good axial discrimination. Finally, three-dimensional representations of the recovered structures are presented in Figures~\ref{sec:microscopy-fig:tub-Kfixe} and~\ref{sec:microscopy-fig:tub-Nfixe} for a fixed $K=4$ and $\nMol=10$, respectively. These figures complete and illustrate the observations made with the computed metrics.
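As a reference for how such metrics can be evaluated, here is a minimal sketch of the pairing and of the Jaccard, Recall, Precision and RMSE computations. It assumes a greedy nearest-pair matching and is an illustrative reimplementation, not the challenge's official assessment software:

```python
# Minimal sketch of the evaluation described above: estimated and GT
# molecules are greedily paired when their distance is below the
# tolerance r, then Jaccard, Recall, Precision and the per-axis RMSE
# are computed from the TP/FP/FN counts.
import numpy as np

def metrics(est, gt, r):
    est, gt = np.atleast_2d(est), np.atleast_2d(gt)
    d = np.linalg.norm(est[:, None, :] - gt[None, :, :], axis=2)
    pairs = []
    while d.size and d.min() <= r:            # greedy nearest-pair matching
        i, j = np.unravel_index(d.argmin(), d.shape)
        pairs.append((i, j))
        d[i, :], d[:, j] = np.inf, np.inf     # each molecule is paired once
    tp = len(pairs)
    fp, fn = len(est) - tp, len(gt) - tp
    jac = tp / (tp + fp + fn)
    rec = tp / (tp + fn)
    pre = tp / (tp + fp)
    err = np.array([est[i] - gt[j] for i, j in pairs])  # TP localization errors
    rmse = np.sqrt((err ** 2).mean(axis=0))             # RMSE along each axis
    return jac, rec, pre, rmse

gt  = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.5]])
est = np.array([[0.01, 0.0, 0.0], [3.0, 3.0, 0.2]])
jac, rec, pre, rmse = metrics(est, gt, r=0.02)  # one TP, one FP, one FN
```

In this toy call, one estimated molecule is paired (TP) and one is not (FP), while one GT molecule remains unpaired (FN), so $\mathrm{Jac}=1/3$ and $\mathrm{Rec}=\mathrm{Pre}=1/2$.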
\begin{figure} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=0.33\linewidth]{microscopy/results/laplace_1000_4_5}}; \node at (4,0) {\includegraphics[width=0.33\linewidth]{microscopy/results/astigmatism_1000_4_5}}; \node at (8,0) {\includegraphics[width=0.33\linewidth]{microscopy/results/doublehelix_1000_4_5}}; \node at (0,-4) {\includegraphics[width=0.33\linewidth]{microscopy/results/laplace_1000_4_10}}; \node at (4,-4) {\includegraphics[width=0.33\linewidth]{microscopy/results/astigmatism_1000_4_10}}; \node at (8,-4) {\includegraphics[width=0.33\linewidth]{microscopy/results/doublehelix_1000_4_10}}; \node at (0,-8) {\includegraphics[width=0.33\linewidth]{microscopy/results/laplace_1000_4_15}}; \node at (4,-8) {\includegraphics[width=0.33\linewidth]{microscopy/results/astigmatism_1000_4_15}}; \node at (8,-8) {\includegraphics[width=0.33\linewidth]{microscopy/results/doublehelix_1000_4_15}}; \node at (0,2.1) {MA-TIRF};\node at (4,2.1) {Astigmatism};\node at (8,2.1) {Double-Helix}; \node[rotate=90] at (-2.1,0) {$\nMol=5$};\node[rotate=90] at (-2.1,-4) {$\nMol=10$};\node[rotate=90] at (-2.1,-8) {$\nMol=15$}; \end{tikzpicture} \caption{\label{sec:microscopy-fig:tub-Kfixe} Recovered structures for $\K=4$.} \end{figure} \begin{figure} \centering \begin{tikzpicture} \node at (4,0) {\includegraphics[width=0.33\linewidth]{microscopy/results/astigmatism_1000_1_10}}; \node at (8,0) {\includegraphics[width=0.33\linewidth]{microscopy/results/doublehelix_1000_1_10}}; \node at (0,-4) {\includegraphics[width=0.33\linewidth]{microscopy/results/laplace_1000_2_10}}; \node at (4,-4) {\includegraphics[width=0.33\linewidth]{microscopy/results/astigmatism_1000_2_10}}; \node at (8,-4) {\includegraphics[width=0.33\linewidth]{microscopy/results/doublehelix_1000_2_10}}; \node at (0,-8) {\includegraphics[width=0.33\linewidth]{microscopy/results/laplace_1000_3_10}}; \node at (4,-8) {\includegraphics[width=0.33\linewidth]{microscopy/results/astigmatism_1000_3_10}}; \node 
at (8,-8) {\includegraphics[width=0.33\linewidth]{microscopy/results/doublehelix_1000_3_10}}; \node at (0,2.1) {MA-TIRF};\node at (4,2.1) {Astigmatism};\node at (8,2.1) {Double-Helix}; \node[rotate=90] at (-2.1,0) {$\K=1$};\node[rotate=90] at (-2.1,-4) {$\K=2$};\node[rotate=90] at (-2.1,-8) {$\K=3$}; \end{tikzpicture} \caption{\label{sec:microscopy-fig:tub-Nfixe} Recovered structures for $\nMol=10$.} \end{figure} \section{The \adcgshort~Algorithm}\label{sec:sfw} In this section, we present the \adcg~(see Algorithm~\ref{sec:sfw-alg:sfw}), a new version of the modified Frank-Wolfe algorithm introduced in~\cite{bredies-inverse2013}. Moreover, we prove in Theorem~\ref{sec:sfw-thm:cvksteps} that it converges in a finite number of steps under mild assumptions. The code can be found at~\url{https://github.com/qdenoyelle}. We suppose in this section that $\Pos\subset\RR^d$ is compact, or $\Pos=\TT^d$ with $d\in\NN^*$, and $\phi\in\kernel{2}$ (see Definition~\ref{sec:intro-def:admkernel}). \subsection{The Algorithm}\label{sec:sfw-subsec:greedy} \myparagraph{Frank-Wolfe Algorithm} The Frank-Wolfe (FW) algorithm~\cite{frank-fw1956}, also called the Conditional Gradient Method (CGM)~\cite{levitin-constrained1966}, solves the following optimization problem \begin{equation}\label{eq:minfw} \min_{m \in C}\ f(m), \end{equation} where $C$ is a weakly compact convex subset of a Banach space and $f$ is a differentiable convex function. For instance, in the case of sparse recovery problems, $m$ is a measure and $C$ is a subset of $\Mm(X)$. A chief advantage of FW over most first-order optimization schemes (such as gradient descent or proximal splitting methods) is that it does not rely on any underlying Hilbertian structure and only makes use of directional derivatives. It is thus particularly well adapted to optimization over the space of Radon measures. The algorithm is detailed in Algorithm~\ref{sec:sfw-alg:fw}.
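As a toy illustration of this generic scheme, consider the deliberately simple finite-dimensional stand-in where $C$ is the probability simplex and $f(m)=\frac{1}{2}\|Am-y\|^2$; the linear minimization step is then attained at a vertex of $C$, and the predefined step $\gamma_k=2/(k+2)$ can be used. The following sketch (an assumption, not the BLASSO setting of this section) makes this concrete:

```python
# Toy finite-dimensional FW illustration: minimize f(m) = 0.5*||A m - y||^2
# over the probability simplex C. The linear minimization is attained
# at a vertex, and the predefined step gamma_k = 2/(k+2) is used.
import numpy as np

def frank_wolfe(A, y, n_iter=2000):
    n = A.shape[1]
    m = np.full(n, 1.0 / n)                  # initial point in C
    for k in range(n_iter):
        grad = A.T @ (A @ m - y)             # df(m)
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # vertex minimizing the linear form
        if grad @ (s - m) >= -1e-12:         # stopping criterion (FW gap ~ 0)
            break
        m = m + 2.0 / (k + 2) * (s - m)      # convex update keeps m in C
    return m

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5))
m_true = np.array([0.2, 0.0, 0.5, 0.3, 0.0])
m_hat = frank_wolfe(A, A @ m_true)           # noiseless observations y = A m_true
```

Since every update is a convex combination, the iterates stay in the simplex, and the classical $O(1/k)$ bound on the objective gap applies.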
\begin{algorithm} \caption{Frank-Wolfe Algorithm} \label{sec:sfw-alg:fw} \begin{algorithmic}[1] \For{$k=0,\ldots, n$} \State\label{fw-greedy-step} Minimize: $\iter{s}\in\mbox{argmin}_{s\in C} f(\iter{m})+df(\iter{m})[s-\iter{m}]$. \If{$df(\iter{m})[\iter{s}-\iter{m}]=0$}\label{stopping-conditionFW} \State $\iter{m}$ is a solution of \eqref{eq:minfw}. Stop. \Else{} \State\label{fw-stepresearch} Line search: $\iter{\ga}\gets \frac{2}{k+2}$ or $\iter{\ga}\in\mbox{argmin}_{\gamma\in [0,1]} f(\iter{m}+\gamma(\iter{s}-\iter{m}))$. \State\label{fw-tentativeupdate} Update: $\iterpo{m}\gets \iter{m}+\iter{\ga}(\iter{s}-\iter{m})$. \EndIf \EndFor \end{algorithmic} \end{algorithm} Let us note that the FW algorithm is naturally endowed with a stopping criterion in Step~\ref{stopping-conditionFW} (see for instance~\cite[Ch. 3, Sec.1.2]{demyanov-1970approximate}), which is equivalent to the standard optimality condition for constrained convex problems \begin{align}\label{eq:fwoptimality} \forall s\in C,\quad df(\iter{m})[s-\iter{m}]\geq 0. \end{align} \myparagraph{Frank-Wolfe for the BLASSO} The FW algorithm cannot be applied directly to the BLASSO: it is an optimization problem over $\radon$, which is not bounded, and the objective function \begin{align}\label{sec:sfw:eq-fobj} \forall m\in\radon, \quad \fobj(m)\eqdef\frac{1}{2}\normObs{\Phi m-\obsw}^2+\la\normTVX{m}, \end{align} is not differentiable. Instead, we consider a problem equivalent to the BLASSO, obtained through an epigraphical lift (following an idea of~\cite{harchaoui-2015conditional}), which is presented in Lemma~\ref{sec:sfw-lem:blasso-eq}.
\begin{lem}\label{sec:sfw-lem:blasso-eq} The BLASSO \begin{align}\label{sec:sfw-def:blasso} \umin{m\in\radon} \fobj(m) \eqdef \frac{1}{2}\normObs{\Phi m-y}^2+\la \normTVX{m}.\tag{$\blasso$} \end{align} is equivalent to \begin{align}\label{sec:sfw:blassoeq}\tag{$\blassoeq$} \umin{(t,m)\in C} \tfobj(m,t) \eqdef \frac{1}{2}\normObs{\Phi m-y}^2+\la t, \end{align} where we defined $C \eqdef \enscond{(t,m)\in\RR_+\times\radon}{ \normTVX{m}\leq t\leq M }$ and $M\eqdef\frac{\normObs{\obsw}^2}{2\la}$. \end{lem} The equivalence stated in Lemma~\ref{sec:sfw-lem:blasso-eq} is to be understood in the following sense: $m$ is a solution to~\eqref{sec:sfw-def:blasso} if and only if $(t,m)$ is a solution to~\eqref{sec:sfw:blassoeq} for some $t\geq 0$. Moreover, in that case $t=\normTVX{m}$ and $\tfobj(m,t)= \fobj(m)$. As a result, one can directly translate the FW algorithm (see Algorithm~\ref{sec:sfw-alg:fw}) to $\blasso$. \begin{proof} Let $\mlimit$ be a minimizer of $\fobj$ on $\radon$, then we have \begin{equation} \fobj(\mlimit)\leq \fobj(0)=\la M. \end{equation} Hence, one can restrict the BLASSO to the set of measures $m\in\radon$ such that $\normTVX{m}\leq M$ and $\blassoeq$ is obtained using an epigraphical representation. \end{proof} The next two remarks discuss the applicability of standard results on FW to the BLASSO. \begin{rem}[Well-posedness] The FW algorithm is well defined for $\blassoeq$. Indeed, $\tfobj$ is a differentiable functional on the Banach space $\RR\times\radon$, with differential \begin{equation} \d\tfobj(t,m): (t',m')\longmapsto\int_\Pos \Phi^*(\Phi m-\obsw)\d m' + \la t'. \end{equation} Although $C$ is not weakly compact (otherwise, by the Eberlein-Shmulyan theorem, $\radon$ would be reflexive), it is compact for the weak-* topology: as $\d\tfobj(t,m)$ is represented by $(\la,\Phi^*(\Phi m-\obsw))\in \RR\times \ContX(\Pos)$, it does reach its minimum on $C$. 
\end{rem} \begin{rem}[Rate of convergence] Let us note that $\d\tfobj$ is Lipschitz continuous (because $\phi\in\kernel{2}$), hence classical convergence results for the FW algorithm yield the $O(1/k)$ rate of convergence in the objective function for any minimizing sequence of the BLASSO. \end{rem} \begin{lem}[\protect{\cite[Th. 3.1.7]{demyanov-1970approximate}}]\label{sec:sfw-lem:fw-blasso-rate} Let $(\iter{t},\iter{m})_{k\in\NN}$ be a sequence generated by Algorithm~\ref{sec:sfw-alg:fw} applied to $\blassoeq$. Then, there exists $C_1>0$ such that for any $\mlimit$ solution of $\blasso$ we have \begin{equation} \forall k\in\NN^*, \quad \fobj(\iter{m})-\fobj(\mlimit)\leq \frac{C_1}{k}. \end{equation} \end{lem} Next, we discuss how the minimization step yields a greedy approach and a natural stopping criterion. The following two remarks also crucially relate the algorithm to the dual certificate of~\eqref{eq:defblasso}. \begin{rem}[Greedy approach] Obviously, the FW algorithm is only interesting if, in Step~\ref{fw-greedy-step} of Algorithm~\ref{sec:sfw-alg:fw}, one is able to minimize the linear form $s\mapsto \d\tfobj(\iter{t},\iter{m})[s]$ on $C$. That linear form reaches its minimum at least at one extreme point of $C$, \ie{} $s=(0,0)$ or points of the form $s=\pa{M,\pm M\dirac{\pos}}$ for $x\in \Pos$. Finding a minimizer among those points amounts to finding a point $x$ in \begin{align*} &\argmin_{x\in \Pos} \left(\pm \frac{1}{\la}\left(\Phi^*(\obsw-\Phi \iter{m})\right)(x)+1\right) \la M,\\ \mbox{or equivalently in}\quad &\argmax_{x\in\Pos}\left(\abs{\iter{\eta}(x)}-1\right) \qwhereq \iter{\eta}\eqdef {\frac{1}{\la}\left(\Phi^*(\obsw-\Phi \iter{m})\right)} \end{align*} (note the similarity of $\iter{\eta}$ with the dual certificate defined in~\eqref{eq:certifdual}).
As a consequence, at each Step~\ref{fw-tentativeupdate} of Algorithm~\ref{sec:sfw-alg:fw}, a new spike is created at some point in $\argmax_\Pos\abs{\iter{\eta}}$ (unless $s=(0,0)$ is optimal, which means that $\normLi{\iter{\eta}}\leq 1$). This spike creation step is at the core of the algorithms in \cite{bredies-inverse2013} and~\cite{boyd-adcg2015}. \end{rem} \begin{rem}[Stopping criterion]\label{rem:stopfw} It is interesting to relate the stopping criterion \eq{(\iter{t},\iter{m})\in \argmin_{s\in C} \d\tfobj(\iter{t},\iter{m})[s],} with the dual certificates for~\eqref{eq:defblasso}. As noted above (see Equation~\ref{eq:fwoptimality}), the stopping criterion is equivalent to $(\iter{t},\iter{m})$ being a solution, hence $\iter{t}=\normTVX{\iter{m}}$. If $\iter{m}\neq 0$, without loss of generality we write $\iter{m}=\sum_{i=1}^{\iter{N}} \iter{a}_i\delta_{\iter{x}_i}$ where the $\iter{x}_i$'s are distinct, so that $\iter{t}=\normTVX{\iter{m}}=\sum_{i}\abs{\iter{a}_i}$. We also set $\iter{\varepsilon}_i\eqdef \sign(\iter{a}_i)$ and $L\eqdef \d\tfobj(\iter{t},\iter{m})$. Assume first that $\normTVX{\iter{m}}<M$, so that the smallest face of $C$ which contains $(\iter{t},\iter{m})$ is \begin{align*} F\eqdef \mathrm{conv} \left\{(0,0),(M,M\iter{\varepsilon}_1\delta_{\iter{x}_1}),\ldots, (M,M\iter{\varepsilon}_{\iter{N}}\delta_{\iter{x}_{\iter{N}}})\right\}. \end{align*} Since $\argmin_{s\in C} L$ is a face of $C$ containing $(\iter{t},\iter{m})$ (see~\cite[Sec. 18]{rockafellar2015convex}), it must contain $F$. Hence \begin{align}\label{eq:sfwminL} L(0,0)=L(M,M\iter{\varepsilon}_1\delta_{\iter{x}_1})=\cdots= L(M,M\iter{\varepsilon}_{\iter{N}}\delta_{\iter{x}_{\iter{N}}})=\min_{C}L. \end{align} Now, if $\normTVX{\iter{m}}=M$, it means that $\tfobj(\iter{t},\iter{m})=\tfobj(0,0)$, so that by convexity of $\tfobj$ and optimality of $(\iter{t},\iter{m})$ one has $L(\iter{t},\iter{m})=\d\tfobj(\iter{t},\iter{m})[\iter{t},\iter{m}]=0=L(0,0)$. 
As the smallest face which contains $(\iter{t},\iter{m})$ is \eq{ F'\eqdef \mathrm{conv} \left\{(M,M\iter{\varepsilon}_1\delta_{\iter{x}_1}),\ldots, (M,M\iter{\varepsilon}_{\iter{N}}\delta_{\iter{x}_{\iter{N}}})\right\}, } we deduce as above that~\eqref{eq:sfwminL} holds. In particular, $L(0,0)\leq \inf_{x\in\Pos} L(M,\pm M\delta_{x})$ yields \begin{align}\label{eq:sfwzeroopt} 0\leq \inf_{x\in\Pos} \left(-\abs{\iter{\eta}(x)}+1 \right), \end{align} that is $\normLi{\iter{\eta}}\leq 1$. Moreover, $L(\iter{t},\iter{m})=\frac{1}{M}\sum_{j=1}^{\iter{N}} \abs{\iter{a}_j} L\left(M,M\iter{\varepsilon}_j\delta_{\iter{x}_j}\right)\leq \frac{1}{M}\sum_{j=1}^{\iter{N}}\abs{\iter{a}_j} L(M,\pm M\delta_{\iter{x}_j})$ yields \begin{align*} -\sum_{j=1}^{\iter{N}} \iter{a}_j\iter{\eta}(\iter{x}_j) \leq -\sum_{j=1}^{\iter{N}} \abs{\iter{a}_j}\abs{\iter{\eta}(\iter{x}_j)}, \end{align*} from which we deduce $\iter{\eta}(\iter{x}_j)=\sign(\iter{a}_j)$. As a result, when the FW algorithm stops (if it does), we observe that \emph{the quantity $\iter{\eta}$ it has constructed is the dual certificate} for~\eqref{eq:defblasso}. If $\iter{m}=0$, the argument is similar (as~\eqref{eq:sfwzeroopt} must hold). \end{rem} \myparagraph{The \adcg~algorithm} Applying Algorithm~\ref{sec:sfw-alg:fw} directly yields a sequence of measures $(\iter{m})_{k\in \NN}$ which weakly-* converges towards some solution $\mlimit$ in a greedy way. But the generated measures $\iter{m}$ are not very sparse compared to $\mlimit$, each Dirac mass of $\mlimit$ being approximated by a multitude of Dirac masses of $\iter{m}$ with inexact positions. It is therefore suggested in~\cite{bredies-inverse2013}, and strongly advocated in~\cite{boyd-adcg2015}, to modify the Frank-Wolfe algorithm for the resolution of the BLASSO and to let the Dirac positions move.
One important feature of the FW algorithm, as noted in~\cite{jaggi2013revisiting,boyd-adcg2015}, is that in the update step~\ref{fw-tentativeupdate}, \emph{the point $\iterpo{m}$ may be replaced with any point $m\in C$ which has lower energy}, without breaking the convergence property and the convergence rate. The Frank-Wolfe algorithm with our modified update step is described in Algorithm~\ref{sec:sfw-alg:sfw}; we call it the \adcg{} (\adcgshort)~algorithm. Since the $t$ variable is only auxiliary in~\eqref{sec:sfw:blassoeq}, we omit it and formulate Algorithm~\ref{sec:sfw-alg:sfw} directly in terms of $m$ only. \begin{algorithm}[t] \caption{\ADCG~Algorithm} \label{sec:sfw-alg:sfw} \begin{algorithmic}[1] \State Initialize with $\iterO{m}=0$ and $n=0$. \For{$k=0,\ldots,n$} \State\label{computeNextPos}$\iter{m}=\sum_{i=1}^{\iter{N}} \iter{\amp_i} \dirac{\iter{\pos_i}}$, $\iter{\amp_i}\in\RR$, $\iter{\pos_i}$ pairwise distinct, find $\iter{\pos_*}\in\Pos$ s.t.: \eq{% \iter{\pos_*}\in \mathrm{arg} \, \underset{\pos\in\Pos}{\mathrm{max}}\ |\iter{\eta}(\pos)| \qwhereq \iter{\eta}\eqdef\frac{1}{\la}\Phi^*(y-\Phi \iter{m}), } \If{$|\iter{\eta}(\iter{\pos_*})|\leq 1$}\label{stopping-condition} \State $\iter{m}$ is a solution of $\blasso$. Stop.
\Else{} \State\label{computeNextAmp} Obtain $\iterph{m}=\sum_{i=1}^{\iter{N}} \iterph{\amp_i}\dirac{\iter{\pos_i}}+\iterph{\amp_{\iter{N}+1}}\dirac{\iter{\pos_*}}$, s.t.: \begin{align*} \iterph{\amp}\in \mathrm{arg} \, \underset{{\amp} \in\RR^{\iter{N}+1}}{\mathrm{min}}\ \frac{1}{2}\normObs{\Phi_{\iterph{\pos}} {\amp}-y}^2+\la \normu{{\amp}} \\ \qwhereq \iterph{\pos}=(\iter{\pos_1},\ldots,\iter{\pos_{\iter{N}}},\iter{\pos_*}) \end{align*} \State\label{BFGS} Obtain $\iterpo{m}=\sum_{i=1}^{\iter{N}+1} \iterpo{\amp_i}\dirac{\iterpo{\pos_i}}$, s.t.: \eq{% (\iterpo{\amp},\iterpo{\pos})\in\underset{({\amp},{\pos})\in\RR^{\iter{N}+1}\times\Pos^{\iter{N}+1}}{\mathrm{arg \, min}}\ \frac{1}{2}\normObs{\Phi_{{\pos}} {\amp}-y}^2+\la \normu{{\amp}}, } using a non-convex solver initialized with $(\iterph{\amp},\iterph{\pos})$. \State Remove any zero-amplitude Dirac masses from $\iterpo{m}$. \EndIf \EndFor \end{algorithmic} \end{algorithm} As we detail below, the algorithm slightly (but crucially) differs from the one in~\cite{boyd-adcg2015}. The main ingredient is to replace the final update with a non-convex minimization that updates both the positions and the amplitudes of the spikes (whereas~\cite{boyd-adcg2015} updates the amplitudes and the positions successively). \begin{rem}[Links between FW applied to $\blassoeq$ and the~\adcgshort]\label{sec:sfw-rem:links} Algorithm~\ref{sec:sfw-alg:sfw} is a valid variant of FW, as its update step decreases the energy more than the standard convex combination using $\iter{\ga}$. Indeed, \eq{% \fobj(\iterpo{m})\leq \fobj(\iterph{m})\leq \fobj(\iter{m}+\iter{\ga}(\sign(\iter{\eta}(\iter{\pos_*}))M\dirac{\iter{\pos_*}}-\iter{m})). } It is noteworthy that other forms were previously used in~\cite{bredies-inverse2013,boyd-adcg2015}, but, to our knowledge, the update procedure (Steps~\ref{computeNextAmp} and~\ref{BFGS}) described in the present paper is new.
As we show in Theorem~\ref{sec:sfw-thm:cvksteps}, optimizing over \emph{both the amplitudes and the positions} is essential to prove the convergence of the algorithm in a finite number of iterations. \end{rem} \begin{rem}[Stopping criterion of the~\adcgshort] One may observe that the condition $\d f(\iter{m})[\iter{s}-\iter{m}]=0$ of Algorithm~\ref{sec:sfw-alg:fw} (or equivalently $\iter{m}\in \argmin_{s\in C} \d f(\iter{m})[s]$) has been replaced with $|\iter{\eta}(\iter{\pos_*})|\leq 1$. In fact the optimality conditions for the non-convex local descent (Step~\ref{BFGS}) at iteration $k-1$ imply \eq{ \forall i\in\{1,\ldots,\iter{N}\}, \quad \iter{\eta}(\iter{\pos}_i)=\sign(\iter{\amp}_i), } whereas $|\iter{\eta}(\iter{\pos_*})|\leq 1$ implies $\normLi{\iter{\eta}}\leq 1$, hence $\iter{\eta}$ is a valid dual certificate. In the terms of Remark~\ref{rem:stopfw}, Step~\ref{BFGS} implies that $L\left(M,M\iter{\varepsilon}_j\delta_{\iter{x}_j}\right)=0$ for $1\leq j\leq \iter{N}$, whereas the condition $|\iter{\eta}(\iter{\pos_*})|\leq 1$ means $0=L(0,0)=\min_{C}L$. As $(\normTVX{\iter{m}},\iter{m})$ is a convex combination of those points, we deduce that $(\normTVX{\iter{m}},\iter{m})\in\argmin_C L$, that is the optimality condition~\eqref{eq:fwoptimality}. \end{rem} \begin{rem}[Adaptation for the positive BLASSO] In many applications, one is interested in recovering positive spikes (see for example Section~\ref{sec:microscopy}). In such cases it is better to add a positivity constraint $m\geq 0$ to the BLASSO. This leads to several changes in Algorithm~\ref{sec:sfw-alg:sfw}: \begin{itemize} \item the stopping condition $|\iter{\eta}(\iter{\pos_*})|\leq 1$ becomes $\iter{\eta}(\iter{\pos_*})\leq 1$, \item the LASSO is solved on $\RR_+^{\iter{N}+1}$, \item the optimization problem of Step~\ref{BFGS} is solved on $\RR_+^{\iter{N}+1}\times\Pos^{\iter{N}+1}$.
\end{itemize} \end{rem} \myparagraph{Implementation details} \begin{itemize} \item A Newton method, initialized by a grid search, is used to find the maximum of $|\iter{\eta}|$ over the compact domain $\Pos$ in step~\ref{computeNextPos}. The size of the grid depends on the operator~$\Phi$. For example, when $\Phi$ is the convolution by the Dirichlet kernel with cutoff frequency $f_c$, we choose a number of points proportional to $f_c$. \item The LASSO problem at step~\ref{computeNextAmp} is solved using the fast iterative shrinkage thresholding algorithm (FISTA)~\cite{beck-fista2009}. \item To solve the non-convex optimization problem at step~\ref{BFGS}, we use a bound-constrained BFGS. It allows us to constrain the positions $x_i$ to the compact domain $\Pos$ and to preserve the sign of the amplitudes $a_i$. These constraints ensure the differentiability of the objective function, which is required by BFGS. \end{itemize} \subsection{Study of the Convergence of the \adcgshort~Algorithm}\label{sec:sfw-subsec:conv-sfw} We now study the convergence properties of the \adcg~algorithm presented in the previous section (see Algorithm~\ref{sec:sfw-alg:sfw}). Our main result is Theorem~\ref{sec:sfw-thm:cvksteps}, which shows that if $\meas=\sum_{i=1}^N \amp_i \dirac{\pos_i}$ is the unique solution of $\blasso$ and $\etaLL=\frac{1}{\lambda}\Phi^*(y-\Phi \meas)$ is nondegenerate (see Equation~\eqref{sec:sfw-eq:etaLnondegen}), then Algorithm~\ref{sec:sfw-alg:sfw} recovers $\meas$ in a finite number of iterations. But first, we show that our algorithm produces a sequence of measures $(\iter{m})_{k\in\NN}$ that converges towards $\mlimit$ (if $\mlimit\in\radon$ is the unique solution of the BLASSO) for the weak-* topology on $\radon$. \begin{prop}\label{sec:sfw-prop:cvfaible} Let $(\iter{m})_{k\in\NN}$ be the sequence obtained from the \adcg~algorithm.
Then it has an accumulation point for the weak-* topology on $\radon$, and that point is a solution to~\eqref{sec:sfw-def:blasso}. \end{prop} \begin{proof} By Remark~\ref{sec:sfw-rem:links}, we know that $(\iter{m})_{k\in\NN}$ is a sequence obtained by applying Algorithm~\ref{sec:sfw-alg:fw} to $\blassoeq$ where the final update consists of Steps~\ref{computeNextAmp} and~\ref{BFGS} of the \adcgshort. As a result, using Lemma~\ref{sec:sfw-lem:fw-blasso-rate}, one gets that for any $\mlimit$ solution of $\blasso$, \eq{% \forall k\in\NN^*, \quad \fobj(\iter{m})-\fobj(\mlimit)\leq\frac{C_1}{k}. } Hence $(\iter{m})$ is a bounded minimizing sequence. One can extract from it a subsequence that converges towards some $m\in \radon$ (with $\normTVX{m}\leq M$) for the weak-* topology. Since $\fobj$ is convex and l.s.c., it is also weak-* l.s.c.\ so that one obtains: \eq{% \fobj(m)=\fobj(\mlimit). } Hence $m$ is a solution of $\blasso$. \end{proof} From this Proposition, one easily deduces the following Corollary. \begin{cor}\label{sec:sfw-cor:weakstarcv} If $\mlimit\in\radon$ is the unique solution of $\blasso$ then $(\iter{m})_{k\in\NN}$ weak-* converges towards $\mlimit$. \end{cor} In fact, under mild assumptions, our algorithm even converges towards the solution of the BLASSO in a finite number of iterations, thanks to the displacement of the spikes over the continuous domain $\Pos$. For the sake of clarity, we state and prove this Theorem in the case $d=1$, but the extension to arbitrary $d\in\NN^*$ is straightforward. \begin{thm}\label{sec:sfw-thm:cvksteps} Suppose that $\phi\in\kernel{2}$, that $\meas=\sum_{i=1}^N\amp_i\dirac{\pos_i}$ is the unique solution of $\blasso$, and that $\etaLL=\frac{1}{\la}\Phi^*(y-\Phi \meas)$ is nondegenerate, \ie \begin{align}\label{sec:sfw-eq:etaLnondegen} \forall \pos\in\Pos\setminus\bigcup_{i=1}^N\{\pos_i\}, \quad |\etaLL(\pos)|<1 \qandq \forall i\in\{1,\ldots,N\}, \quad \etaLL''(\pos_i)\neq 0.
\end{align} Then Algorithm~\ref{sec:sfw-alg:sfw} recovers $\meas$ after a finite number of steps (\ie there exists $k\in\NN$ such that $\iter{m}=\meas$). \end{thm} \begin{proof} Since $\meas$ is the unique solution of $\blasso$, one knows by Corollary~\ref{sec:sfw-cor:weakstarcv} that the sequence $(\iter{m})_{k\in\NN}$ produced by Algorithm~\ref{sec:sfw-alg:sfw} converges for the weak-* topology towards $\meas$. As $\Phi$ is weak-* to weak continuous, defining $\iter{p}\eqdef\frac{1}{\la}(y-\Phi \iter{m})$, one gets that $(\iter{p})_{k\in\NN}$ converges towards $\pLL$ in the weak topology of $\Obs$ and that $\iter{\eta}\eqdef\Phi^*\iter{p}$ converges pointwise towards $\etaLL$. Then one can show that $\Phi^*$ is a compact operator. Indeed, for any bounded subset $A\subset\Obs$, one can easily check that $\Phi^*A$ is equicontinuous and pointwise relatively compact, so that by the Arzel\`a--Ascoli theorem $\Phi^*A$ is relatively compact for the strong topology of $\ContX(\Pos,\RR)$. As a result, one can extract a subsequence of $(\iter{\eta})_{k\in\NN}$ that converges towards $\etaLL$ in uniform norm. Then $\etaLL$ is the unique accumulation point, in the uniform norm, of the bounded sequence $(\iter{\eta})_{k\in\NN}$; hence the whole sequence converges towards $\etaLL$ in uniform norm. One can repeat this argument for $(\iter{\eta}{}')_{k\in\NN}$ and $(\iter{\eta}{}'')_{k\in\NN}$ (since $\phi\in\kernel{2}$), obtaining for all $j\in\{0,1,2\}$ \begin{align}\label{sec:sfw-eq:cvetak} (\iter{\eta})^{(j)}\overset{\normLi{\cdot}}{\underset{k\to+\infty}{\longrightarrow}}\etaLL^{(j)}. \end{align} Because $\etaLL$ is nondegenerate, there exists a small neighborhood around each $\pos_i$ on which $\etaLL''\neq0$. Hence, we deduce from Equation~\eqref{sec:sfw-eq:cvetak} that there exist $\epsilon>0$ and $k_1\in\NN$ such that: \eq{ \forall k\geq k_1,\forall i\in\{1,\ldots,N\},\forall\pos\in]\pos_i-\epsilon,\pos_i+\epsilon[, \quad \iter{\eta}{}''(\pos)\neq 0.
} We denote in the following \eq{ I_{\pos_i,\varepsilon}\eqdef]\pos_i-\epsilon,\pos_i+\epsilon[, \quad \forall i\in\{1,\ldots,N\}. } Since $\iter{m}$ converges towards $\meas$ in the weak-* topology and $\abs{\meas}$ does not charge the boundary of $I_{\pos_i,\varepsilon}$, we have \eq{\forall i\in\{1,\ldots,N\},\quad \iter{m}(I_{\pos_i,\varepsilon})\to \meas(I_{\pos_i,\varepsilon})=\amp_i\neq 0,} so that there exists $k_2\in\NN$ such that for all $k\geq k_2$, $\iter{m}$ has at least one spike in each $I_{\pos_i,\varepsilon}$. In particular $\iter{m}$ has at least $N$ spikes. Again, from Equation~\eqref{sec:sfw-eq:cvetak}, since $(\iter{\eta})_{k\in\NN}$ converges uniformly towards $\etaLL$, one deduces that there exists $k_3\in\NN$ such that for all $k\geq k_3$: \eq{ \sat{\iter{\eta}}\subset \pa{\sat{\etaLL}}\oplus\pa{]-\epsilon,\epsilon[\times\{0\}}, } where the set of saturation points of a given $\eta\in\ContX(\Pos,\RR)$ is defined as: \begin{align*} \sat{\eta}\eqdef\left\{(\pos,v)\in\Pos\times\{-1,1\}; \ \eta(\pos)=v\right\}. \end{align*} Moreover, \eq{ \forall \pos\in\Pos\setminus\bigcup_{i=1}^N I_{\pos_i,\varepsilon}, \quad |\iter{\eta}(\pos)|<1. } In particular, for $k\geq k_3$, $\iter{m}$ has no spikes in $\Pos\setminus\bigcup_{i=1}^N I_{\pos_i,\varepsilon}$, since such a spike would contradict the optimality conditions of Step~\ref{BFGS} of Algorithm~\ref{sec:sfw-alg:sfw}: for all $i\in\{1,\ldots,\iter{N}\}$, $\iter{\eta}(\iter{\pos_i})=\sign(\iter{\amp_i})$. Suppose now that $k\geq \max(k_1,k_2,k_3)$. Then $\iter{m}$ has at least one spike in each neighborhood of $\pos_i$ and no spikes outside. Moreover $|\iter{\eta}|<1$ outside the neighborhoods and $\iter{\eta}{}''\neq 0$ inside. Let $i\in\{1,\ldots,N\}$ and denote by $\iter{\pos_j}\in I_{\pos_i,\varepsilon}$ a position of a spike of $\iter{m}$. From the optimality conditions of Step~\ref{BFGS}, one also has $\iter{\eta}{}'(\iter{\pos_j})=0$.
This, combined with $\iter{\eta}{}''\neq 0$ on $I_{\pos_i,\varepsilon}$, implies that $|\iter{\eta}|<1$ on $I_{\pos_i,\varepsilon}$, except at $\iter{\pos_j}$. Hence, $\iter{m}$ has exactly one spike in this neighborhood. As a consequence, we proved that $\iter{m}$ has exactly $N$ spikes (one inside each neighborhood) and: \eq{ \forall \pos\in\Pos\setminus\bigcup_{i=1}^N\{\iter{\pos_i}\}, \quad |\iter{\eta}(\pos)|<1. } Hence $\iter{m}$, composed of $N$ spikes, is a solution of $\blasso$. Since $\meas$ is supposed to be the unique solution of $\blasso$, one concludes that: \eq{ \iter{m}=\meas, } \ie the algorithm recovers $\meas$ in a finite number of iterations. \end{proof} Note that we proved convergence in a \emph{finite} number of iterations, but not in exactly $N$ iterations when $\meas$ is composed of $N$ spikes. In practice, however, this is exactly what we observe. \subsection{Illustration of the $N$-Steps Convergence of the \adcgshort}\label{sec:sfw-subsec:algonumexp} We now illustrate how the algorithm works and show that it converges in exactly $N$ iterations in practice (when the noise level and the regularization parameter are appropriate, \ie $\max(\la,\normObs{w}/\la)$ is low enough). We consider $\Pos=[0,1]$ and a convolution operator with a sampled Gaussian kernel for $\Phi$ \eq{ \Phi: m\in\Mm(\Pos)\mapsto \int_{[0,1]} \phi\d m\in\RR^K \qwhereq \phi(\pos)=\pa{\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(\frac{i-1}{K-1}-\pos)^2}{2\sigma^2}}}_{1\leq i\leq K}. } We set $\sigma=0.05$ and $K=100$. The initial measure used is $\measO=1.3\dirac{0.3}+0.8\dirac{0.37}+1.4\dirac{0.7}$ and the noise is small ($w=10^{-4}w_0$ where $w_0=\mbox{randn}(K)$). \begin{figure}[!htb] \centering \includegraphics[width=.5\linewidth]{sec-sfw/ex_algo/etaV_gaussian} \vspace{-0.3cm} \caption{$\etaVV$ for $\measO=1.3\dirac{0.3}+0.8\dirac{0.37}+1.4\dirac{0.7}$.}\label{sec:sfw-fig:Nstep-etaV} \end{figure} Figure~\ref{sec:sfw-fig:Nstep-etaV} shows $\etaVV$ for this configuration.
One can see that it is nondegenerate. Hence, in a small-noise regime, with the appropriate choice of $\lambda$, there is a unique measure solution of $\blassoplus$ which is composed of the same number of spikes as $\measO$. Moreover, by Theorem~\ref{sec:sfw-thm:cvksteps}, the \adcgshort~algorithm recovers it in a finite number of iterations. \begin{figure}[!htb] \centering \includegraphics[width=.5\linewidth]{sec-sfw/ex_algo/ex_gaussian_fobj} \vspace{-0.3cm} \caption{Values of the objective function throughout the \adcgshort~algorithm (cumulative iterations of the BFGS). The vertical black lines separate the main outer iterations of the algorithm.}\label{sec:sfw-fig:Nstep-fobj} \end{figure} The decrease of the objective function throughout the algorithm iterations (cumulative iterations of BFGS) is presented in Figure~\ref{sec:sfw-fig:Nstep-fobj}. As indicated by the two vertical black lines, which show the intermediate iterations, the algorithm converges in exactly $3$ iterations. One can observe a substantial decrease of the objective function each time a spike is added. Also, it is noteworthy that BFGS converges within very few iterations when $k=0$ and $k=1$ (first two spikes added) and that the main computational load for the non-convex step occurs for $k=2$ (more iterations of BFGS). Figure~\ref{sec:sfw-fig:Nstep-cv} shows $\iter{m}$ and $\iter{\eta}$ at different stages of the algorithm. More precisely, for $k\in\{0,1,2\}$, we display the initial measure $\measO$, the recovered measure, and the associated $\eta$. Moreover, we present them after the LASSO step (\ie $\iterph{m}$ and $\iterph{\eta}$) as well as after the BFGS step (\ie $\iterpo{m}$ and $\iterpo{\eta}$). One observes, as expected, that for all $i$, $\iterph{\eta}(\pos_i)=1$, $\iterpo{\eta}(\pos_i)=1$ and $\iterpo{\eta}{}'(\pos_i)=0$. In the first two main iterations, the spikes are barely moved by the BFGS.
However, at the last iteration, the displacement of the positions and amplitudes of the spikes is crucial to obtain $\iterpo{\eta}\in\partial \normTVX{\iterpo{m}}$, and thus recover the solution of $\blassoplus$ in three steps. \begin{figure} \centering \begin{tikzpicture} \node at (0,0) {\includegraphics[width=.4\linewidth]{sec-sfw/ex_algo/ex_gaussian_0}}; \node[fill=white] at (0,-1.8) {$k=0$. Start of the loop.}; \node at (7,0) {\includegraphics[width=.4\linewidth]{sec-sfw/ex_algo/ex_gaussian_1}}; \node[fill=white] at (7,-1.8) {$k=0$. End of the loop.}; \node at (0,-3.7) {\includegraphics[width=.4\linewidth]{sec-sfw/ex_algo/ex_gaussian_2}}; \node[fill=white] at (0,-5.5) {$k=1$. End of the loop.}; \node at (7,-3.8) {\includegraphics[width=.4\linewidth]{sec-sfw/ex_algo/ex_gaussian_3}}; \node[fill=white] at (7,-5.5) {$k=2$. End of the loop.}; \end{tikzpicture} \caption{Main steps of the \adcgshort~algorithm.}\label{sec:sfw-fig:Nstep-cv} \end{figure}
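The greedy loop above can be sketched numerically on the Gaussian-kernel example of this subsection ($\sigma=0.05$, $K=100$, $\measO=1.3\dirac{0.3}+0.8\dirac{0.37}+1.4\dirac{0.7}$). The following Python snippet is a deliberately simplified sketch, not the actual implementation: the argmax of $|\eta|$ is taken on a fixed grid instead of a Newton refinement, the amplitude LASSO is solved by plain ISTA rather than FISTA, the joint non-convex refinement (Step 5) is omitted — so the iterates need not converge in exactly $N$ steps — and the value of $\lambda$ and the grid size are illustrative choices.

```python
import numpy as np

K, sigma, lam = 100, 0.05, 1e-2
samples = np.linspace(0.0, 1.0, K)

def phi(x):
    # One column of the forward operator: sampled Gaussian centred at x.
    return np.exp(-(samples - x) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def forward(amps, pos):
    # Phi m for a discrete measure m = sum_i amps[i] * delta_{pos[i]}.
    return sum(a * phi(x) for a, x in zip(amps, pos)) if len(pos) else np.zeros(K)

# Noiseless observation of the three-spike measure used in the text.
y = forward([1.3, 0.8, 1.4], [0.3, 0.37, 0.7])

def ista_lasso(pos, a0, n_iter=2000):
    # Amplitude LASSO on fixed positions (Step 4), solved by plain ISTA
    # (monotone proximal-gradient descent), warm-started at a0.
    A = np.stack([phi(x) for x in pos], axis=1)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    a = a0.copy()
    for _ in range(n_iter):
        z = a - A.T @ (A @ a - y) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

grid = np.linspace(0.0, 1.0, 2001)
pos, amps, objectives = [], np.zeros(0), []
for k in range(5):
    res = y - forward(amps, pos)
    eta = np.array([phi(x) @ res for x in grid]) / lam
    if np.abs(eta).max() <= 1.0:           # stopping criterion (Step 4)
        break
    pos = pos + [grid[np.argmax(np.abs(eta))]]   # greedy spike insertion (Step 3)
    amps = ista_lasso(pos, np.append(amps, 0.0))
    r = forward(amps, pos) - y
    objectives.append(0.5 * r @ r + lam * np.abs(amps).sum())
```

Because the positions stay on the grid and are never refined, the first spike typically lands between the two close sources at $0.3$ and $0.37$ and is only compensated later by extra spikes — precisely the behaviour that motivates the non-convex position update of the \adcgshort.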
\section{Introduction} The solar chromosphere is filled with fibrils in plages and mottles in network seen on the disk and with spicules seen at the limb. Although the mutual correspondence of fibrils, mottles, and spicules has not yet been established directly, we believe that they represent the same feature seen under different circumstances (cf.\ \cite{koza-Christopoulouetal2001}). Their ubiquity is especially evident in images taken at the center of strong spectral lines. So far most effort has been directed towards understanding their nature, internal structure \citep{koza-DePontieuetal2004,koza-Tziotziouetal2003,koza-Tziotziouetal2004}, and drivers \citep{koza-Hansteenetal2006,koza-DePontieuetal2007}. Less attention has been paid to their less obvious tangential motions (i.e., perpendicular to the axis) which may betray braiding of chromospheric magnetic fields due to vortex granular flows underneath \citep{koza-Brandtetal1988}. The first spectroscopic observation of spicule motions parallel to the limb was made by \citet{koza-Pasachoffetal1968}. \citet{koza-NikolskyandPlatova1971} reported on the quasi-periodic motion of spicules along the limb with tangential velocities of 10--15\,km\,s$^{-1}$ and amplitudes about 1\,arcsec. \citet{koza-MamedovandOrudzhev1983a,koza-MamedovandOrudzhev1983b} pointed out the similarity between radial (i.e., along the line-of-sight) and tangential velocities of spicules along the limb and speculated on the motion of spicules as a whole. In this paper, we study tangential motions of fibrils in a small isolated plage and nearby network using a 10-min time sequence of H$\alpha$ filtergrams obtained with the Dutch Open Telescope (DOT). We concentrate on relatively short-lived straight dynamic fibrils (henceforth DFs, \cite{koza-deWijnandDePontieu2006}) exhibiting conspicuous elongation and/or retraction within a few minutes. Longer, more static or more curved fibrils are not considered here. 
We present, for the first time, measurements of temporal variations in DF orientations. They suggest a relation between angular velocity and DF length. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{small-koza-fig1top}\\ \vspace{-2mm} \includegraphics[width=0.7\textwidth]{small-koza-fig1bottom} \caption[]{\label{koza-fig:orientations} The orientations and lengths of two fibrils highlighted by arrows at intervals of 2.8\,min (upper panels) and 1.6\,min (lower panels). The angle $\varphi$ is measured clockwise from the fibril to the righthand $y$-axis. } \end{figure} \section{Observations and Measurements} We use data from the DOT obtained on April 24, 2006 for an isolated plage at $\mu=0.768$. A tomographic multiwavelength image sequence was recorded during excellent seeing from 10:53:05~UT until 11:02:54~UT. The seeing quality, measured at the G band by the average, maximum, and minimum Fried parameter $r_{\rm 0}$, was 13.0, 16.2, and 8.6\,cm, respectively. The processed data and movies are available at {\tt http://dotdb.phys.uu.nl/DOT/}. The burst cadence was 12\,s. The resulting time sequence was reconstructed by speckle masking and other steps as summarized in \citet{koza-Ruttenetal04}. In this study, we analyse the 10-min sequence of H$\alpha$ filtergrams taken by the DOT Lyot filter \citep{koza-Gaizauskas1976} with a FWHM passband of 0.025\,nm at $\Delta\lambda=-0.03$\,nm from line center. Inspecting the H$\alpha$ movie frame by frame, we focused on relatively short-lived ($\approx$5\,min), straight DFs exhibiting apparent elongation and/or retraction. We identified fourteen such DFs and measured the image coordinates of the apparent feet and tops per individual frame. Because the feet locations also vary with time, we used the time-averaged feet positions as references. We measured the DF orientations with respect to the $y$-axis (terrestrial north) and the DF lengths along lines connecting their tops and reference foot locations.
Figure~\ref{koza-fig:orientations} shows two examples. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{small-koza-fig2left} \includegraphics[width=0.45\textwidth]{small-koza-fig2right} \caption[]{\label{koza-fig:dphi-omegal} {\em Left:} Temporal variations in fibril orientation $\Delta \varphi$ measured as difference with the average value represented by dashed lines. The shifts of the curves in time correspond to the fibril occurrence within the image sequence. {\em Right:} Average angular velocities of the orientation variation versus average fibril length. The filled and empty diamonds represent fibrils turning counterclockwise and clockwise, respectively. } \end{figure} \section{Results} Figure~\ref{koza-fig:dphi-omegal} shows the measured variations in orientation for the fourteen selected DFs and the rate of change in orientation inferred from linear fits to the curves at left plotted against average DF length. Although the measurements suffer from uncertainty and subjectivity, systematic trends appear. The orientation variations of some short-lived DFs can be characterised as a progression between sign changes, i.e., orientation change from one limit position to another. In contrast, the DFs present during the whole time sequence show stable orientation with only episodic deviations from the average value. Figure~\ref{koza-fig:dphi-omegal} suggests that the angular turning speed may be related to DF length, and that shorter DFs turn faster than longer ones. The average fibril lengths and angular velocities yield an estimate of centrifugal acceleration of 1\,m\,s$^{-2}$. \section{Discussion} Because of projection and the magnetic nature of DFs \citep{koza-DePontieuetal2004,koza-DePontieuetal2007,koza-Hansteenetal2006}, we interpret the temporal variation of the measured fibril orientation as a sum of variations in azimuth and inclination of magnetic flux tubes with respect to the local vertical. 
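The quoted centrifugal estimate can be reproduced with a one-line calculation, $a=\omega^2 r$ for rigid rotation, taking the $\approx 1$\,deg\,min$^{-1}$ turning rate reported in this paper; note that the fibril length of 10\,Mm used below is an assumed, illustrative value, not one taken from the measurements.

```python
import math

omega = math.radians(1.0) / 60.0   # angular velocity of 1 deg/min, in rad/s
r = 1.0e7                          # assumed representative fibril length of 10 Mm, in metres
a = omega ** 2 * r                 # centrifugal acceleration for rigid rotation about the foot
```

With $\omega \approx 2.9\times10^{-4}$\,rad\,s$^{-1}$ this gives $a \approx 0.8$\,m\,s$^{-2}$, consistent with the order-of-magnitude estimate of 1\,m\,s$^{-2}$ quoted above.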
In the context of force-free fields \citep[e.g.][]{koza-LustandSchluter1954}, the phenomenon may indicate field twisting and braiding by vortex granular flows beneath \citep{koza-Brandtetal1988}, injection of twist into the corona, and twisted coronal fans seen in TRACE EUV movies. On the basis of the indicated relation between angular velocity and length (Fig.\,\ref{koza-fig:dphi-omegal}), we suggest that granulation with larger vortical flows produces more upright flux tubes, i.e., faster turning and shorter in projection on the disk, but better seen as long spicules at the limb with conspicuous tangential motions as reported in \citet{koza-Pasachoffetal1968}, \citet{koza-NikolskyandPlatova1971}, and \citet{koza-MamedovandOrudzhev1983a,koza-MamedovandOrudzhev1983b}. In contrast, granulation with smaller or no vortical flows permits more slanted flux tubes with smaller angular velocities, longer in on-disk projection but hardly observable at the limb because of crowding. Measurements of the horizontal flow fields in the photosphere and elimination of the projection effects are needed to test this scenario. The observed angular velocity of $\approx 1$\,deg\,min$^{-1}$ implies considerably faster twisting of the field lines than in sunspots, which have typical rotation velocities of about 1\,deg\,h$^{-1}$ \citep{koza-Kucera1982, koza-Brownetal2003}. \section{Summary} Using a time sequence of high-resolution H$\alpha$ filtergrams obtained by the DOT, we have searched for temporal variations in the azimuthal orientation of fourteen dynamic fibrils. They show significant variation, for shorter-lived fibrils indicating turning motions at about 1\,deg\,min$^{-1}$. Shorter DFs turn faster than longer ones, which may indicate differences in granulation vorticity. This conjecture suggests measurements of horizontal flow fields in the photosphere together with elimination of the projection effects in fibril imaging.\\ \acknowledgements We thank Rob Rutten for improvements to the text.
J.\,Koza's research is supported by an EC Marie Curie Intra European Fellowship. This research was part of the European Solar Magnetism Network and is supported by Slovak agency VEGA (2/6195/26).
\section*{Introduction} Constructive theories of Abelian and modular functions associated with algebraic curves have seen an upsurge of interest in recent times. These classical functions have been of crucial importance in mathematics since their definition at the hands of Abel, Jacobi, Poincar\'e and Riemann, but their relevance in physics and applied mathematics has greatly developed over the past three decades. Algebraic curves are here intended as Riemann surfaces, unless specified to be singular. The study of the simplest hyperelliptic curves, namely curves of genus two, goes back to the beginning of the 20th century, and these are treated in much detail in advanced textbooks, see for example Baker (1907) \cite{ba07} and more recently Cassels and Flynn (1996) \cite{cafl96}. Not so much is known about the simplest trigonal curves, which have genus three. The study of modular functions of these curves was originated by Picard, and recently revived by Shiga \cite{shiga88} and his school. In this paper we study Abelian functions associated with the simplest general type of curve, the general (3,4) curve. This is an $(n,m)$-curve in the sense of Burchnall-Chaundy \cite{bc28}. Our work is based on the realization of Abelian functions as logarithmic derivatives of the multi-dimensional $\sigma$-function. This approach is due to Weierstrass and Klein and was developed by Baker \cite{ba97}; for recent developments of the theory of multi-dimensional $\sigma$-functions, see Grant \cite{gr90}, Buchstaber, Enolskii, and Leykin \cite{bel97}, Buchstaber and Leykin \cite{bl02}, \cite{bl05}, Eilbeck, Enolskii and Previato \cite{eep03}, and Baldwin and Gibbons \cite{bg06}, among others. We shall adopt as a template the Weierstrass theory of elliptic functions, trying to extend these results as far as possible to the case of the trigonal genus-three curve. Let $\sigma(u)$ and $\wp(u)$ be the standard functions in Weierstrass elliptic function theory.
They satisfy the well-known formulae \begin{equation} \wp(u) = - \frac{d^2}{du^2}\log \sigma (u), \quad (\wp')^2=4\wp^3-g_2\wp-g_3, \quad \wp''=6\wp^2-\tfrac12 g_2 \lbl{WP} \end{equation} and the addition formula, which is a basic formula of the theory, \begin{equation} -\frac{\sigma(u+v)\sigma(u-v)}{\sigma(u)^2\sigma(v)^2}=\wp(u)-\wp(v). \lbl{eq0.1} \end{equation} We present here two addition formulae (Theorems \ref{T7.1} and \ref{T8.1}). The first of these is for the general trigonal curve of degree four, whereas the second is restricted to a ``purely trigonal'' curve of degree four (see (\ref{eq5.1})). The first main Theorem \ref{T7.1} is the natural generalization of (\ref{eq0.1}). The authors realized the existence of the formula of the second main Theorem \ref{T8.1} from \cite{on05}. However, we were not able to use that paper to establish our result, instead working from results by Cho and Nakayashiki \cite{cn06}, Grant's paper \cite{gr90}, p.100, (1.6), or a calculation using \cite{bel00}. The crucial part is to identify the coefficients of the right-hand sides of these two formulae. To calculate these, we used a power-series expansion of the $\sigma$-function, stimulated by the works of Buchstaber and Leykin \cite{bl02} for the hyperelliptic case and of Baldwin and Gibbons \cite{bg06} for a purely trigonal curve of genus four. The $\sigma$-functional realization of Abelian functions of a trigonal curve of arbitrary genus $g$ was previously developed in \cite{bel00} and \cite{eel00}. Using these results in the case of $g=3$, we present explicit formulae for 6 canonical meromorphic differentials and the symmetric bi-differential which allow us to derive a complete set of relations for trigonal $\wp$-functions, generalizing the above relations for the Weierstrass $\wp$-function.
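As an aside, the consistency of the last two relations in (\ref{WP}) — the formula for $\wp''$ follows from the differential equation for $(\wp')^2$ by differentiating once and cancelling the common factor $\wp'$ — can be checked symbolically. The following sympy snippet is purely illustrative and not part of the original derivation:

```python
import sympy as sp

u, g2, g3 = sp.symbols('u g2 g3')
P = sp.Function('P')(u)  # stands for the Weierstrass function wp(u)

# Differentiate (P')^2 = 4 P^3 - g2 P - g3 with respect to u,
# then solve for P'' (sympy cancels the common factor P').
lhs = sp.diff(sp.diff(P, u) ** 2, u)
rhs = sp.diff(4 * P ** 3 - g2 * P - g3, u)
Pdd = sp.solve(sp.Eq(lhs, rhs), sp.Derivative(P, (u, 2)))[0]

# Recovers the second relation wp'' = 6 wp^2 - g2/2.
assert sp.simplify(Pdd - (6 * P ** 2 - g2 / 2)) == 0
```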
We note that we have recently developed a parallel, but more limited theory, for purely trigonal curves of genus {\em four} in \cite{bego06}, a paper which draws heavily on the results presented here. It is perhaps useful to compare and contrast these two cases. As demonstrated in Schilling's generalization of the Neumann system \cite{sc89}, there are basically two cases of trigonal cyclic covers, the order of a related linear differential operator that commutes with the given one of order three being congruent to 1 or 2 modulo 3, respectively. In each case, the action variables of the integrable system parametrize a family of curves of the same type, thus the family of curves in the $(3,4)$-case cannot be obtained as a limit of that in the $(3,5)$-case, as they have different dimensions. In the present paper, we develop the method and prove the addition formulae, together with the characterising differential equations, for the former case, in that the highest power of $x$ appearing in the equation of the curve is 4 ($\equiv 1$ modulo 3); this corresponds to the `base' case of the Boussinesq equation, the smallest-genus spectral curve of an algebro-geometric third-order operator. In \cite{bego06}, the case where the highest power of $x$ appearing in the equation of the curve is 5 ($\equiv 2$ modulo 3) is addressed. The differences in the two cases manifest themselves in a number of ways, for example the parity of the $\sigma$ function is different in the two cases, and the two-term addition formulae are antisymmetric in the genus 3 case and symmetric in the genus 4 case. Also the results are given for the {\em general} $(3,4)$-curve here, whereas only for the {\em purely} trigonal $(3,5)$-case in \cite{bego06}. It may be possible with some work to relate the $(3,5)$-case to the $(3,4)$-case, but this would not be straightforward and we have not yet attempted this. Our study is far from complete, and a number of questions still remain. 
One of the first problems still to be considered should be the explicit recursive construction of the $\sigma$-series generalizing the one given by Weierstrass; for a hyperelliptic curve of genus two, this result was found by Buchstaber and Leykin \cite{bl02}, who also devised a procedure to derive such recursions for the whole family of $(n,m)$-curves \cite{bl02}, \cite{bl05}. Another problem is the deeper understanding of the algebraic structure of the addition theorems developed here, in order to generalize results to higher genera. As a pattern one can consider the addition formula of \cite{bel97} for hyperelliptic $\sigma$-functions of arbitrary genera written in terms of certain Pfaffians. Also, the description of Jacobi and Kummer varieties as projective varieties, whose coordinates are given in terms of (derivatives of) trigonal $\wp$-functions, is far from complete. We hope the results we present will be the first steps towards a general theory of trigonal curves of arbitrary genus, as well as a tool in the study of projective varieties which are images of Jacobians. The paper is organized as follows. We first discuss the basic properties of the general $(3,4)$-curve in Section \ref{BasicProps}, and define a restricted version of this curve, the ``purely trigonal case'', in Section \ref{PureTrig}. In Section \ref{sigma}, we introduce the $\sigma$ function for the general curve, and in Section \ref{Abelian} the Abelian functions $\wp_{ij}$ and their derivatives. Section \ref{PDEs} of the paper is devoted to the various differential relations satisfied by these Abelian functions, and the series expansion of the $\sigma$ function is discussed in Section \ref{sigma_expan}, in which the result (Theorem \ref{L6.1}) is new, is proved quite constructively, and is the key for the rest of the paper. Let $\Theta^{[2]}$ be the standard theta divisor, namely the image of the Abelian map of the symmetric square of the curve that we consider, in its Jacobian variety $J$.
The bases of the spaces $\varGamma(J,\mathcal{O}(n\Theta^{[2]}))$ of functions on $J$ whose poles are at most of order $n$ along $\Theta^{[2]}$ are discussed in Section \ref{BasisG}, as a preliminary to the two main addition Theorems in Sections \ref{Add1} and \ref{Add2}, respectively. The first addition theorem is a two-term relation for the general $(3,4)$-curve, and the second a three-term relation for the ``purely trigonal'' $(3,4)$-curve. Appendix A has some formulae for the fundamental bi-differential, and Appendix B has a list of quadratic three-index relations for the ``purely trigonal'' case only, as the full relations would require too much space. The web site \cite{Weier34} contains more details of the relations omitted through lack of space. While Sections 1 and 2 overlap somewhat with material in \cite{on05}, we believe that the results are useful to make the present paper reasonably self-contained. \newpage \tableofcontents \section{Trigonal curves of genus three} \setcounter{equation}{0} \label{BasicProps} Let $C$ be the curve defined by $f(x,y)=0$, where \begin{align} \begin{split} f(x,y) = y^3 + &(\mu_1 x + \mu_4)y^2 + (\mu_2 x^2 + \mu_5 x + \mu_8)y \\ & - (x^4 + \mu_3x^3 + \mu_6x^2 + \mu_9x + \mu_{12}), \quad \text{($\mu_j$ are constants)}, \end{split}\lbl{eq1.1} \end{align} with the unique point $\infty$ at infinity. This curve is of genus $3$ if it is non-singular. We consider the set of 1-forms \begin{equation} \omega_1=\frac{ dx}{f_y(x,y)}, \quad \omega_2=\frac{xdx}{f_y(x,y)}, \quad \omega_3=\frac{ydx}{f_y(x,y)}, \lbl{eq1.2} \end{equation} where $f_y(x,y)=\frac{\partial}{\partial y}f(x,y)$. This is a basis of the space of differentials of the first kind on $C$.
We denote the vector consisting of the forms (\ref{eq1.2}) by \begin{equation} \omega=(\omega_1,\omega_2,\omega_3). \lbl{eq1.2.5} \end{equation} We know, by the general theory, that for three variable points $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$ on $C$, the sum of integrals from $\infty$ to these three points \begin{equation} \begin{aligned} u &=(u_1,u_2,u_3) \\ & =\int_{\infty}^{(x_1,y_1)}\omega+\int_{\infty}^{(x_2,y_2)}\omega +\int_{\infty}^{(x_3,y_3)}\omega \end{aligned} \lbl{eq1.3} \end{equation} fills the whole space $\mathbb{C}^3$. We denote the points in $\mathbb{C}^3$ by $u$ and $v$ etc., and their natural coordinates in $\mathbb{C}^3$ by the subscripts $(u_1,u_2,u_3)$, $(v_1,v_2,v_3)$. We denote by $\Lambda$ the lattice in $\mathbb{C}^3$ generated by the integrals $\oint\omega$ of the basis (\ref{eq1.2}) along any closed paths on $C$, and by $J$ the manifold $\mathbb{C}^3/\Lambda$, the Jacobian variety of $C$ over $\mathbb{C}$. We denote by $\kappa$ the natural map to the quotient group, \begin{equation} \kappa:\mathbb{C}^3\rightarrow \mathbb{C}^3/\Lambda=J. \lbl{eq1.4} \end{equation} We define for $k=1$, $2$, $3$, $\dots$, the map \begin{equation} \begin{aligned} \iota : \text{Sym}^k(C)&\rightarrow J \\ (P_1,\cdots,P_k) &\mapsto \left(\int_{\infty}^{P_1}\omega+\cdots+ \int_{\infty}^{P_k}\omega\right) \text{ mod}\,\Lambda, \end{aligned}\lbl{eq1.5} \end{equation} and denote its image by $W^{[k]}$. ($W^{[k]} = J$ for $k\ge3$ by the Abel-Jacobi theorem.) We will use the same symbol $u = (u_1, u_2, u_3)$ for a point $u \in \mathbb{C}^3$ in $\kappa^{-1}(W^{[k]})$. Let \begin{equation} [-1](u_1,u_2,u_3)=(-u_1,-u_2,-u_3), \lbl{eq1.6} \end{equation} and \begin{equation} \Theta^{[k]}:=W^{[k]}\cup[-1]W^{[k]}. \lbl{eq1.7} \end{equation} We call this $\Theta^{[k]}$ the $k$-th {\it standard theta subset}.
In particular, if $k=1$, then (\ref{eq1.5}) gives an embedding of $C$: \begin{equation} \begin{aligned} \iota : &C\rightarrow J \\ & P \mapsto \int_{\infty}^P\omega \text{ mod } \Lambda. \end{aligned} \lbl{eq1.8} \end{equation} We note that \begin{equation} \Theta^{[2]}= W^{[2]}, \quad\Theta^{[1]}\neq W^{[1]}, \end{equation} differing from the genus-3 hyperelliptic case in a suitable normalization \cite{bel97}. If $u=(u_1,u_2,u_3)$ varies on the inverse image $\kappa^{-1}\iota(C)=\kappa^{-1}(W^{[1]})$ of the embedded curve, we can take $u_3$ as a local parameter at the origin $(0,0,0)$. Then we have (see \cite{on05}, e.g.) Laurent expansions with respect to $u_3$ as follows: \begin{equation} u_1=\tfrac15{u_3}^5+\cdots,\quad u_2=\tfrac12{u_3}^2+\cdots \lbl{u3expansion1} \end{equation} and \begin{equation} x(u)=\frac1{{u_3}^3}+\cdots, \quad y(u)=\frac1{{u_3}^4}+\cdots. \lbl{u3expansion2} \end{equation} We introduce a weight for several variables as follows: \begin{definition} We define a {\rm weight} for constants and variables appearing in our relations as follows. The weights of the variables $u_1$, $u_2$, $u_3$ for every $u=(u_1, u_2, u_3)$ of $W^{[k]}, (k=1,2,\dots)$ are $5$, $2$, $1$, respectively, the weight of each coefficient $\mu_j$ in {\rm(\ref{eq1.1})} is $-j$, and the weights of $x$ and $y$ of each point $(x, y)$ of $C$ are $-3$ and $-4$, respectively. So, the weights of the variables are nothing but the orders of vanishing at $\infty$, while the weight assigned to the coefficients is a device to render $f(x,y)$ homogeneous. This is the reason why $\mu_7$, $\mu_{10}$, $\mu_{11}$ are absent. \end{definition} We remark that the weights of the variables $u_k$ are precisely the Weierstrass gap numbers of the Weierstrass gap sequence at $\infty$, whilst the weights of monomials of $x(u)$ and $y(u)$ correspond to the Weierstrass non-gap numbers in the sequence.
In particular, in the case considered the Weierstrass gap sequence is of the form \[ \overline{0},\;1,\; 2,\;\overline{ 3,\; 4},\; 5,\; \overline{6,\; 7,\; 8,\; 9,\;10,\ldots} \] where the orders of existing functions of the form $x^{p}y^{q}$, $p,q\in \mathbb{N}\cup \{0\}$, are overlined. The definition above is compatible, for instance, with the Laurent expansion of $x(u)$ and $y(u)$ with respect to $u_3$, etc.\ for $u \in W^{[1]}$. Moreover, all the equalities in this paper are homogeneous with respect to this weight. In the next section, we use the discriminant of $C$. Axiomatically, the discriminant $D$ of $C$ is defined as (one of) the simplest polynomial(s) in the $\mu_j$'s such that $D=0$ if and only if $C$ has a singular point. Here we are regarding $C$ as a family of curves over $\mathbb{Z}$. While no concrete expression of the discriminant is necessary for the main results in this paper, we put forward a conjecture based on the results of experimentation on special cases of $C$ using computer algebra. \begin{conjecture} Let \begin{equation} \begin{aligned} R_1&=\mathrm{rslt}_x\big(\mathrm{rslt}_y\big(f(x,y), f_x(x,y)\big), \mathrm{rslt}_y\big(f(x,y), f_y(x,y)\big)\big), \\ R_2&=\mathrm{rslt}_y\big(\mathrm{rslt}_x\big(f(x,y), f_x(x,y)\big), \mathrm{rslt}_x\big(f(x,y), f_y(x,y)\big)\big), \\ R_3&=\gcd(R_1,R_2), \end{aligned} \end{equation} where $\mathrm{rslt}_z$ represents the resultant, namely, the determinant of the Sylvester matrix with respect to the variable $z$. Then $R_3$ is of weight $144$ and a perfect square in the ring \begin{equation*} \mathbb{Z}[\mu_1,\mu_4,\mu_2,\mu_5,\mu_8,\mu_3,\mu_6,\mu_9,\mu_{12}]. \end{equation*} \end{conjecture} Unfortunately, checking this condition directly is a computing task presenting considerable difficulties due to the size of the intermediate expressions involved. We leave this as a conjecture and remark only that work on a full calculation is continuing.
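Although the conjecture itself resists direct computation, one piece of it can be confirmed in the purely trigonal specialization $\mu_1=\mu_2=\mu_4=\mu_5=\mu_8=0$: the bracketed factor in the expression for $R_3$ displayed below is exactly the classical discriminant of the quartic in $x$, and it is homogeneous with respect to the weights just introduced. A sympy sketch of this special-case check (it does not, of course, prove the conjecture):

```python
import sympy as sp

x, t = sp.symbols('x t')
m3, m6, m9, m12 = sp.symbols('mu3 mu6 mu9 mu12')

quartic = x**4 + m3*x**3 + m6*x**2 + m9*x + m12

# The bracketed factor of R_3 in the purely trigonal case mu1=mu2=mu4=mu5=mu8=0.
bracket = (256*m12**3 - 27*m12**2*m3**4 - 128*m12**2*m6**2
           + 144*m12**2*m6*m3**2 - 192*m12**2*m9*m3 + 16*m12*m6**4
           - 80*m12*m9*m6**2*m3 - 4*m12*m3**2*m6**3 + 18*m12*m9*m3**3*m6
           + 144*m12*m9**2*m6 - 6*m12*m9**2*m3**2 - 4*m9**2*m6**3
           - 4*m9**3*m3**3 + m9**2*m3**2*m6**2 + 18*m9**3*m6*m3 - 27*m9**4)

# It coincides with the classical discriminant of the quartic ...
assert sp.expand(sp.discriminant(quartic, x) - bracket) == 0

# ... and is homogeneous of weight 36 under the scaling mu_j -> t^j mu_j.
scaled = bracket.subs({m3: t**3*m3, m6: t**6*m6, m9: t**9*m9, m12: t**12*m12},
                      simultaneous=True)
assert sp.expand(scaled - t**36*bracket) == 0
```

In particular the bracket vanishes exactly when the quartic has a repeated root, i.e.\ when the purely trigonal curve is singular.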
This result is not crucial to this paper, but we will adopt it as a working hypothesis (see Remark \ref{varepsilon}). To continue, we define here the {\it discriminant} $D$ of $C$ by a square root of $R_3$: \begin{equation} D=\sqrt{R_3}. \lbl{discriminant} \end{equation} We comment on the choice of this root in Remark \ref{varepsilon}. If the conjecture is true, $D$ is of weight $72$. For the convenience of the reader we give $R_3$ \[ \begin{aligned} R_3=\big(&256{\mu_{12}}^3-27{\mu_{12}}^2{\mu_3}^4-128{\mu_{12}}^2{\mu_6}^2 +144{\mu_{12}}^2{\mu_6}{\mu_3}^2-192{\mu_{12}}^2{\mu_9}{\mu_3} +16{\mu_{12}}{\mu_6}^4\\ &-80{\mu_{12}}{\mu_9}{\mu_6}^2{\mu_3} -4{\mu_{12}}{\mu_3}^2{\mu_6}^3+18{\mu_{12}}{\mu_9}{\mu_3}^3{\mu_6} +144{\mu_{12}}{\mu_9}^2{\mu_6}-6{\mu_{12}}{\mu_9}^2{\mu_3}^2 \\ &-4{\mu_9}^2{\mu_6}^3-4{\mu_9}^3{\mu_3}^3 +{\mu_9}^2{\mu_3}^2{\mu_6}^2 +18{\mu_9}^3{\mu_6}{\mu_3}-27{\mu_9}^4\big)^6 \end{aligned} \] for the special case of $\mu_1=\mu_2=\mu_4=\mu_5=\mu_8=0$ (see Section \ref{PureTrig}). \begin{definition} The $2$-form $\Omega((x,y),(z,w))$ on $C\times C$ is called the {\rm fundamental 2-form of the second kind} (or {\rm fundamental bi-differential of the second kind}) if it is symmetric, namely, \begin{equation} \Omega((x,y),(z,w))=\Omega((z,w),(x,y)), \lbl{eq3.1.6} \end{equation} it has its only pole {\rm(}of second order{\rm)} along the diagonal of $C\times C$, and in the vicinity of each point $(x,y)$ it is expanded in a power series as \begin{equation} \Omega((x,y),(z,w))=\left(\frac{1}{(\xi-\xi')^2 } +O(1)\right)d\xi d\xi' \quad (\text{as } (x,y)\rightarrow (z,w)), \lbl{expansion} \end{equation} where $\xi$ and $\xi'$ are local coordinates of the points $(x,y)$ and $(z,w)$. \end{definition} We shall look for a realization of $\Omega((x,y),(z,w))$ in the form \begin{equation} \Omega((x,y),(z,w))=\frac{F((x,y),(z,w))dx dz} {(x-z)^2 f_y(x,y)f_w(z,w)},\lbl{realization} \end{equation} where $F((x,y),(z,w))$ is a polynomial in its variables.
\begin{lemma} {\rm (Fundamental 2-form of the second kind)} \quad Let $\Sigma\big((x,y),(z,w)\big)$ be the meromorphic function on $C\times C$, \begin{equation} \Sigma\big((x,y),(z,w)\big) =\frac{1}{(x-z)f_y(x,y)} \sum_{k=1}^3y^{3-k}\bigg[\frac{f(Z,W)}{W^{3-k+1}}\bigg]_W \bigg|_{(Z,W)=(z,w)} \lbl{eq2.3}, \end{equation} where $[\quad ]_W$ means removing the terms of negative powers with respect to $W$. Then there exist differentials $\eta_j=\eta_j(x,y)$ $(j=1, 2, 3)$ of the second kind that have their only pole at $\infty$ such that the fundamental $2$-form of the second kind is given as\footnote{Since $x$ and $y$ are related, we do not use $\partial$.}, \begin{align} \Omega((x,y),(z,w))=\left({\frac{d}{dx} \Sigma((z,w),(x,y))+\sum_{k=1}^3 \frac{\omega_k(z,w)}{dz}\frac{\eta_k(x,y)}{dx}}\right)dxdz. \lbl{eq3.7} \end{align} The set of differentials $\{\eta_1$, $\eta_2$, $\eta_3\}$ is determined modulo the space spanned by the $\omega_j$s of {\rm(\ref{eq1.2})}. \end{lemma} \begin{proof} The 2-form \begin{equation} \frac{d}{dz}\Sigma\big((x,y),(z,w)\big) dx dz \lbl{eq3.1.6a} \end{equation} satisfies the condition on the poles as a function of $(x,y)$; indeed, one can check that (\ref{eq3.1.6a}) has only a second order pole at $(x,y)=(z,w)$ whenever $(z,w)$ is an ordinary point or a Weierstrass point; at infinity the expansion (\ref{u3expansion2}) should be used. However, the form (\ref{eq3.1.6a}) has unwanted poles at infinity as a form in the $(z,w)$-variables. To restore the symmetry given in (\ref{eq3.1.6}) we complement (\ref{eq3.1.6a}) by the second term to obtain (\ref{eq3.7}) with polynomials $\eta_j(x,y)$ which should be found from (\ref{eq3.1.6}). That results in a system of linear equations for the coefficients of $\eta_j(x,y)$ which is always solvable. As a result, the polynomials $\eta_i(x,y)$ as well as $F((x,y),(z,w))$ are obtained explicitly.
\end{proof} \begin{remark} The 1-form \[ \Pi_{(z_1,w_1)}^{(z_2,w_2)}(x,y)= \Sigma((x,y),(z_1,w_1))dx - \Sigma((x,y),(z_2,w_2))dx \] is the differential of the third kind, with first order poles at the points $(x,y)=(z_1,w_1)$ and $(x,y)=(z_2,w_2)$, and residues $+1$ and $-1$ respectively. \end{remark} \begin{remark} The realization of the fundamental 2-form in terms of the Schottky-Klein prime-form and $\theta$-functions is given in \cite{ba97}, no.272, and the theory based on the $\theta$-functional representation is developed in \cite{fa73}. Here we deal with an equivalent algebraic representation of the fundamental 2-form which goes back to Klein and exhibit an algebraic expression for it, that is also mentioned by Fay in \cite{fa73} where the prime-form was defined. The above derivation of the fundamental 2-form is done in \cite{ba97}, around p.~194, and it was reconsidered in \cite{eel00} for a large family of algebraic curves. The case of a trigonal curve of genus four was developed in \cite{bg06}, pp.~3617--3618. \end{remark} It is easily seen that each $\eta_j$ above can be written as \begin{equation} \eta_j(x,y)=\frac{h_j(x,y)}{f_y(x,y)}dx,\quad j=1,2,3, \end{equation} where $h_j(x,y) \in \mathbb{Q}[\mu_1,\mu_2, \mu_{4},\mu_5,\mu_8, \mu_3,\mu_6,\mu_9,\mu_{12}][x,y]$, and $h_j$ is of homogeneous weight. The differentials $\eta_j$ are defined modulo the space of holomorphic differentials with the same weight, but it is possible to choose the standard $\eta_j\,$s uniquely by requiring that for each $j=1$, $2$, $3$ the polynomial $h_j(x,y)$ does not contain monomials corresponding to non-gaps with bigger $j$. Moreover there exist {\it precisely} $2g=6$ monomials defining standard differentials; for more details see \cite{bl05}, Chapter 4.
In particular, straightforward calculations lead to the following expressions \begin{align} \begin{split} h_3(x,y)&=-x^2,\lbl{denom-eta}\\ h_2(x,y)&=-2xy+\mu_1x^2,\\ h_1(x,y)&=-(5x^2+(\mu_1\mu_2-3\mu_3)x+\mu_2\mu_4+\mu_6)y +\mu_{2}y^2+3\mu_1x^3 \\ & \qquad -({\mu_2}^2+2\mu_3\mu_1-2\mu_4)x^2-(\mu_5\mu_2+\mu_6\mu_1+\mu_3\mu_4)x+\tfrac34 \mu_1 f_x(x,y)\\ & \qquad -\left(\tfrac13\mu_2-\tfrac14{\mu_1}^2\right)f_y. \end{split} \end{align} The orders of monomials defining standard differentials are printed in bold: \[ \overline{\mathbf{0}},\;1,\; 2,\;\overline{ \mathbf{3},\; \mathbf{4}},\; 5,\; \overline{\mathbf{6},\;\mathbf{ 7},\; 8,\; 9,\;\mathbf{10},\ldots}, \] these can be written as $3i+4j$, $0\leq i\leq 2$, $0\leq j\leq 1 $. We remark that the last two terms in the definition of $h_1(x,y)$ are chosen to provide the standard differentials described above. The polynomial $F\big((x,y),(z,w)\big)$ in (\ref{realization}) is of homogeneous weight (weight $-8$), and is given explicitly in Appendix A. \section{Purely trigonal curve of degree four} \label{PureTrig} In Section \ref{Add2} of this paper, we restrict ourselves to the curve \begin{equation} C:\, y^3 = x^4 + \mu_3 x^3 + \mu_6 x^2 + \mu_9 x + \mu_{12} \lbl{eq5.1} \end{equation} specialized from (\ref{eq1.1}). We also restrict results given in Appendix B to this case to save space. This curve is called a {\it purely trigonal curve} of degree four. Equivalently we can represent the curve (\ref{eq5.1}) in the form \begin{equation} C:\, y^3 = \prod_{k=1}^4(x-a_k), \lbl{eq5.1a} \end{equation} and evaluate the discriminant $D$ according to (\ref{discriminant}) as \begin{equation} D=\prod_{1\leq i<j \leq 4 } (a_i-a_j)^4. \end{equation} The curve $C$ is smooth if and only if $a_i\neq a_j$ for all $1\leq i<j\leq 4$. While we assume this to be the case, results in the singular cases are obtained by a suitable limiting process.
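For the purely trigonal curve the truncation $[\;\cdot\;]_W$ appearing in (\ref{eq2.3}) is simply the quotient of polynomial division by the relevant power of $W$, and the general expression for $\Sigma$ collapses to $(y^2+yw+w^2)/(3(x-z)y^2)$, as quoted just below. A sympy sketch of this reduction (the helper `positive_part` is our naming, purely for illustration):

```python
import sympy as sp

x, y, z, w, Z, W = sp.symbols('x y z w Z W')
m3, m6, m9, m12 = sp.symbols('mu3 mu6 mu9 mu12')

# Purely trigonal curve: f(Z, W) = W^3 - (Z^4 + mu3 Z^3 + mu6 Z^2 + mu9 Z + mu12).
f = W**3 - (Z**4 + m3*Z**3 + m6*Z**2 + m9*Z + m12)

def positive_part(expr, m):
    """[expr/W^m]_W: the part of expr/W^m with nonnegative powers of W,
    i.e. the quotient of the polynomial division of expr by W^m."""
    q, _ = sp.div(expr, W**m, W)
    return q

# Sigma((x,y),(z,w)) of (eq2.3), with f_y = 3 y^2 for the purely trigonal curve.
Sigma = sum(y**(3 - k) * positive_part(f, 3 - k + 1).subs({Z: z, W: w})
            for k in (1, 2, 3)) / ((x - z) * 3 * y**2)

# It collapses to the compact purely trigonal form.
assert sp.simplify(Sigma - (y**2 + y*w + w**2) / (3*(x - z)*y**2)) == 0
```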
For the curve (\ref{eq5.1}), the basis (\ref{eq1.2}) of differential forms of first kind and the function $\Sigma$ in (\ref{eq2.3}) can be written explicitly as \begin{equation} \omega_1=\frac{dx}{3y^2}, \quad \omega_2=\frac{xdx}{3y^2}, \quad \omega_3=\frac{ydx}{3y^2}=\frac{dx}{3y}, \end{equation} and \begin{equation} \Sigma\big((x,y),(z,w)\big) =\frac{y^2+yw+w^2}{3(x-z)y^2}, \end{equation} respectively. The function $\sigma(u)$ is defined by using these. Let \[ \zeta=e^{2\pi \sqrt{-1}/3}. \] The curve $C$ has an automorphism $(x,y)\mapsto (x, {\zeta}y)$, and for $u=(u_1,u_2,u_3)\in\kappa^{-1}\iota(C)$, ${\zeta}^j$ acts by \begin{equation} [{\zeta}^j]u=({\zeta}^ju_1,{\zeta}^ju_2,{\zeta}^{2j}u_3) =\int_{\infty}^{(x,\,{\zeta}^jy)}(du_1,du_2,du_3). \lbl{eq5.2} \end{equation} This action naturally induces an action on $\kappa^{-1} \Theta^{[k]}, (k=2,3,\dots)$, implying that the set $\Theta^{[k]}$ is stable under the action of $[{\zeta}^j]$. \section{The $\sigma$-function} \label{sigma} We construct here the {\it $\sigma$-function} \begin{equation} \sigma(u)=\sigma(u_1,u_2,u_3) \lbl{eq2.1} \end{equation} associated with $C$ for $u \in {\mathbb C}^3$ (see also \cite{bel97}, Chap.1). We choose closed paths \begin{equation} \alpha_i, \beta_j \ (1\leqq i, j\leqq 3) \lbl{eq2.2} \end{equation} on $C$ which generate $H_1(C,\mathbb{Z})$ such that their intersection numbers are $\alpha_i\cdot\alpha_j=\beta_i\cdot\beta_j=0$, $\alpha_i\cdot\beta_j=\delta_{ij}$. Define the period matrices by \begin{equation} \left[\,\omega' \ \omega'' \right]= \left[\int_{\alpha_i}\omega_j \quad \int_{\beta_i}\omega_j\right]_{i,j=1,2,3}, \,\, \left[\,\eta' \ \eta'' \right]= \left[\int_{\alpha_i}\eta_j \quad \int_{\beta_i}\eta_j\right]_{i,j=1,2,3}. \lbl{eq2.5} \end{equation} We can combine these two matrices into \begin{equation} M=\left[\begin{array}{cc}\omega' & \omega'' \\ \eta' & \eta'' \end{array}\right]. 
\lbl{eq2.6} \end{equation} Then $M$ satisfies \begin{equation} M\left[\begin{array}{cc} & -1_3 \\ 1_3 & \end{array}\right]{}^t {M} =2\pi\sqrt{-1}\left[\begin{array}{cc} & -1_3 \\ 1_3 & \end{array}\right]. \lbl{eq2.7} \end{equation} This is the {\it generalized Legendre relation} (see (1.14) on p.\,11 of \cite{bel97}). In particular, ${\omega'}^{-1}\omega''$ is a symmetric matrix. We know also that \begin{equation} \mathrm{Im}\,({\omega'}^{-1}\omega'') \qquad \mbox{is positive definite.} \qquad\qquad\ \lbl{positive-definiteness} \end{equation} By looking at (\ref{eq1.2}), we see that the canonical divisor class of $C$ is given by $4\infty$; since we are taking $\infty$ as the base point, the Riemann constant is an element of $\big(\frac12\mathbb{Z}\big)^6$ (see \cite{mu83}, Coroll.3.11, p.166). Let \begin{equation} \delta:=\left[\begin{array}{cc}\delta'\ \\ \delta''\end{array}\right]\in \left(\tfrac12\mathbb{Z}\right)^{6} \lbl{eq2.9} \end{equation} be the theta characteristic which gives the Riemann constant with respect to the base point $\infty$ and to the period matrix $[\omega'\ \omega'']$. Note that we use $\delta',\delta''$ as well as $n$ in (\ref{def-sigma}) as columns, to keep the notation a bit simpler. We define \begin{equation} \begin{aligned} \sigma(u)&=\sigma(u;M)=\sigma(u_1,u_2,u_3;M) \\ &=c\,\text{exp}(-\tfrac{1}{2}u\eta'{\omega'}^{-1}\ ^t\negthinspace u) \vartheta\negthinspace \left[\delta\right]({\omega'}^{-1}\ ^t\negthinspace u;\ {\omega'}^{-1}\omega'') \lbl{def-sigma}\\ &=c\,\text{exp}(-\tfrac{1}{2}u\eta'{\omega'}^{-1}\ ^t\negthinspace u) \times\\ & \qquad \times \sum_{n \in \mathbb{Z}^3} \exp \big[2\pi i\big\{ \tfrac12 \ ^t\negthinspace (n+\delta'){\omega'}^{-1}\omega''(n+\delta') + \ ^t\negthinspace (n+\delta')({\omega'}^{-1}\,^tu+\delta'')\big\}\big], \end{aligned} \end{equation} where \begin{equation} c=\frac{1}{\sqrt[8]{D}}\bigg(\frac{\pi^3}{|\omega'|}\bigg)^{1/2} \lbl{sigma-const} \end{equation} with $D$ from (\ref{discriminant}).
Here the choice of a root of (\ref{sigma-const}) is explained in the remark \ref{varepsilon} below. The series (\ref{def-sigma}) converges because of (\ref{positive-definiteness}). In what follows, for a given $u\in\mathbb{C}^3$, we denote by $u'$ and $u''$ the unique elements in $\mathbb{R}^3$ such that \begin{equation} u=u'\omega'+u''\omega''. \lbl{eq2.12} \end{equation} Then for $u$, $v\in\mathbb{C}^3$, and $\ell$ ($=\ell'\omega'+\ell''\omega''$) $\in\Lambda$, we define \begin{align} L(u,v) &:={u}(\eta'{}^tv'+\eta''{}^tv''),\nonumber \\ \chi(\ell)&:=\exp[\pi\sqrt{-1}\big(2({\ell'}\delta''- {\ell''}\delta') +{\ell'}{}^t\ell''\big)] \ (\in \{1,\,-1\}). \lbl{eq2.13} \end{align} In this situation the most important properties of $\sigma(u;M)$ are as follows: \begin{lemma The function $\sigma(u)$ is an entire function. For all $u\in\mathbb{C}^3$, $\ell\in\Lambda$ and $\gamma\in\mathrm{Sp}(6,{\mathbb{Z}})$, we have \lbl{L2.14} \begin{align} \sigma(u+\ell;M) & =\chi(\ell)\sigma(u;M)\exp L(u+\tfrac12\ell,\ell),\lbl{L2.14.1}\\ \sigma(u;\gamma M)&=\sigma(u;M),\lbl{L2.14.2}\\ u\mapsto\sigma(u;M)& \text{ has zeroes of order $1$ along } \Theta^{[2]}, \lbl{L2.14.3}\\ \sigma(u;M)&=0 \iff u\in\Theta^{[2]}.\lbl{L2.14.4} \end{align} \end{lemma} \begin{proof The function $\sigma$ is clearly entire from its definition and from the known property of theta series. The formula (\ref{L2.14.1}) is a special case of the equation from \cite{ba97} (p.286 in the 1995 reprint, $\ell$.22). The statement (\ref{L2.14.2}) is easily shown by using the definition of $\sigma(u)$ since $\gamma$ corresponds to changing the choice of the paths of integration given in (\ref{eq2.5}). The statements (\ref{L2.14.3}) and (\ref{L2.14.4}) are explained in \cite{ba97}, (p.252). These facts are partially described also in \cite{bel97}, (p.12, Th.1.1 and p.15). \end{proof} \begin{lemma The function $\sigma(u)$ is either odd or even, i.e. 
\lbl{parity} \begin{equation} \sigma([-1]u)=-\sigma(u)\quad \mathrm{or}\quad \sigma([-1]u)= \sigma(u). \lbl{L2.14.0} \end{equation} \end{lemma} \begin{proof} We fix a matrix $M$ satisfying (\ref{eq2.7}) and (\ref{positive-definiteness}). Therefore the bilinear form $L(\ ,\ )$ is fixed. Then the space of the solutions of (\ref{L2.14.1}) is one dimensional over $\mathbb{C}$, because the Pfaffian of the Riemann form attached to $L(\ ,\ )$ is $1$ (see \cite{on98}, Lemma 3.1.2 and \cite{la82}, p.93, Th.3.1). Hence, such non-trivial solutions automatically satisfy (\ref{L2.14.2}) and (\ref{L2.14.4}); while (\ref{L2.14.3}) requires the constant factor to be the same, this is guaranteed by the definition of $\sigma$ and the fact that (\ref{sigma-const}) is independent of $\gamma$. In this sense, (\ref{L2.14.1}) characterizes the function $\sigma(u)$ up to a constant, which depends only on the $\mu_j$s. Now considering the loop integrals for $\omega$ in the reverse direction, we see that \[ [-1]\Lambda=\Lambda. \] Hence $u\mapsto \sigma([-1]u)$ satisfies (\ref{L2.14.1}) also. So there exists a constant $K$ such that \[ \sigma([-1]u)=K\,\sigma(u). \] Since $[-1]^2$ is trivial, we must have $K^2=1$. \lbl{R2.15} \end{proof} \begin{remark} In fact $\sigma(u)$ is an {\it odd function}, as we see in Theorem \ref{L6.1}. \end{remark} We need the power series expansion of $\sigma(u)$ with respect to $u_1$, $u_2$, $u_3$. To obtain the expansion, we first need to investigate the Abelian functions given by logarithmic (higher) derivatives of $\sigma(u)$. We shall examine this in the next Section.
\section{Standard Abelian functions} \label{Abelian} \begin{definition} A meromorphic function $u\mapsto\mathfrak{P}(u)$ on $\mathbb{C}^3$ is called a {\it standard Abelian function} if it is holomorphic outside $\kappa^{-1}(\Theta^{[2]})$ and is multi-periodic, namely, if it satisfies \begin{equation} \mathfrak{P}(u+\omega'n+\omega''m)=\mathfrak{P}(u) \end{equation} for all integer vectors $n,m\in \mathbb{Z}^3$ and all $u\not\in \kappa^{-1}(\Theta^{[2]})$. \lbl{D4.1} \end{definition} To realize the standard Abelian functions in terms of the $\sigma$-function, we first let \begin{equation} \Delta_i=\tfrac{\partial}{\partial u_i}-\tfrac{\partial}{\partial v_i} \end{equation} for $u=(u_1,u_2,u_3)$ and $v=(v_1,v_2,v_3)$. This operator occurs in what is now known as Hirota's bilinear operator, but in fact was introduced much earlier in the PDE case by Baker (\cite{ba02}, p.151, \cite{ba07}, p.49) (see also \cite{ee00}). We define fundamental Abelian functions on $J$ by \begin{equation} \wp_{ij}(u)=-\tfrac{1}{2\sigma(u)^2}\Delta_i\Delta_j\,\sigma(u)\sigma(v)|_{v=u} =-\tfrac{\partial^2}{\partial u_i\partial u_j}\log\sigma(u). \lbl{wpij} \end{equation} It follows from (\ref{L2.14.0}) that these functions are even. For the benefit of the reader familiar with the genus one case, we should point out that the Weierstrass function $\wp(u)$ described in eqn.\ (\ref{WP}) would be written as $\wp_{11}(u)$ in this notation. It is clear that they belong to $\varGamma(J, \mathcal{O}(2\Theta^{[2]}))$. Moreover, we define \begin{equation} \wp_{ijk}(u)=\tfrac{\partial}{\partial u_k}\wp_{ij}(u),\quad \wp_{ijk\ell}(u)=\tfrac{\partial}{\partial u_{\ell}}\wp_{ijk}(u). \lbl{eq3.1} \end{equation} The three-index $\wp$-functions are odd and the four-index ones are even. The functions (\ref{wpij}) and (\ref{eq3.1}) are standard Abelian functions from Lemma \ref{L2.14}.
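The second equality in (\ref{wpij}) — and the analogous reduction for the four-index combination introduced just below — is a formal identity, valid with any smooth non-vanishing function in place of $\sigma$. A sympy sketch confirming this symbolically (with an arbitrary function `s` standing in for $\sigma$; an illustration, not a statement about the specific $\sigma$):

```python
import itertools
import sympy as sp

u = sp.symbols('u1:4')
v = sp.symbols('v1:4')
s = sp.Function('s')
su, sv = s(*u), s(*v)
on_diag = dict(zip(v, u))          # the substitution v = u

def Delta(expr, i):                # Hirota-type operator Delta_i
    return sp.diff(expr, u[i]) - sp.diff(expr, v[i])

def wp(*idx):                      # wp_{idx} = -(log s) differentiated
    return -sp.diff(sp.log(su), *[u[i] for i in idx])

# Two-index case: -(1/(2 s^2)) Delta_i Delta_j s(u)s(v)|_{v=u} = wp_{ij}.
for i, j in itertools.combinations_with_replacement(range(3), 2):
    lhs2 = (-Delta(Delta(su*sv, i), j) / (2*su**2)).subs(on_diag).doit()
    assert sp.simplify(lhs2 - wp(i, j)) == 0

# Four-index case: the same bilinear expression equals
# wp_{ijkl} - 2(wp_ij wp_kl + wp_ik wp_jl + wp_il wp_jk).
i, j, k, l = 0, 1, 2, 2
expr = su*sv
for m in (i, j, k, l):
    expr = Delta(expr, m)
lhs = (-expr / (2*su**2)).subs(on_diag).doit()
rhs = wp(i, j, k, l) - 2*(wp(i, j)*wp(k, l) + wp(i, k)*wp(j, l) + wp(i, l)*wp(j, k))
assert sp.simplify(lhs - rhs) == 0
```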
Following (and generalizing) Baker (\cite{ba02}, p.151, \cite{ba07}, pp.49--50) (see also \cite{bel00}, pp.18--19, or \cite{cn06}), we define \begin{equation} \begin{aligned} Q_{ijk\ell}(u)&=-\tfrac{1}{2\sigma(u)^2}\Delta_i\Delta_j\Delta_k\Delta_{\ell} \,\sigma(u)\sigma(v)|_{v=u}\\ &= \wp_{ijk\ell}(u)-2(\wp_{ij}\wp_{k\ell}+\wp_{ik}\wp_{j\ell}+ \wp_{i\ell}\wp_{jk})(u), \end{aligned} \lbl{eq3.2a} \end{equation} which specializes to \begin{align*} & Q_{ijkk} = \wp_{ijkk} - 2 \wp_{ij}\wp_{kk}-4\wp_{ik}\wp_{jk},\quad && Q_{iikk} = \wp_{iikk} - 2 \wp_{ii}\wp_{kk}-4{\wp_{ik}}^2,\\ & Q_{ikkk} = \wp_{ikkk} - 6\wp_{ik}\wp_{kk},\quad && Q_{kkkk} = \wp_{kkkk} - 6{\wp_{kk}}^2. \end{align*} A short calculation shows that $Q_{ijk\ell}$ belongs to $\varGamma(J,\mathcal{O}(2\Theta^{[2]}))$, whereas $\wp_{ijk\ell}$ belongs to $\varGamma(J,\mathcal{O}(4\Theta^{[2]}))$. In particular $Q_{1333}$ plays a key role in what follows. Note that although the subscripts in $\wp_{ijk\ell}$ {\em do} denote differentiation, the subscripts in $Q_{ijk\ell}$ do {\em not} denote direct differentiation, and the latter notation is introduced for convenience only. This is important to bear in mind when we use cross-differentiation, for example the $\wp_{ijk\ell}$ satisfy \[ \tfrac{\partial}{\partial u_m}\wp_{ijk\ell}(u) = \tfrac{\partial}{\partial u_\ell}\wp_{ijkm}(u), \] whereas the $Q_{ijk\ell}$ do not. The following useful formula (\ref{klein}) involving fundamental Kleinian functions, for the case of the general curve (\ref{eq1.1}), was derived in \cite{bel00}. It would be helpful for the reader to read \cite{ba98}, p.\ 377, for the case of hyperelliptic curves. The formula (\ref{klein}) below is proved similarly. \begin{prop} Let $u\in\mathbb{C}^3$ and $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ be Abelian preimages of $u$, i.e. \[ u=\int_{\infty}^{(x_1,y_1)}\omega+\int_{\infty}^{(x_2,y_2)}\omega+ \int_{\infty}^{(x_3,y_3)}\omega \] with appropriate paths of the integrals.
Let $(x,y)$ be an arbitrary point on the curve $C$. Then, for each $k=1$, $2$, $3$, the following formula holds \lbl{Tklein} \begin{equation} [1 \ x \ y]\Bigg[\wp_{ij}\bigg(\int_{\infty}^{(x,y)}\omega -u \bigg)\Bigg] \Bigg[\begin{matrix} \ 1 \ \\ x_k \\ y_k \end{matrix}\Bigg]= \frac{F\big((x,y),(x_k,y_k)\big)}{(x-x_k)^2}\lbl{klein}, \end{equation} where $F\big((x,y),(z,w)\big)$ is a polynomial defined by {\rm(\ref{realization})} or {\rm(\ref{polar})}. \end{prop} \begin{proof} Using (\ref{L2.14.3}) and relations between differentials of the second kind on $C$ and those of the third kind (see \cite{ba97}, p.22,$\ell$.15 and p.22,$\ell$.11), we have an equation connecting the theta series appearing in (\ref{def-sigma}) and differentials of the third kind (see \cite{ba97}, p.275, $\ell$.$-11$, for example). This equation is then rewritten in a form involving $\sigma(u)$ and the 2-form $\Omega((x,y),(z,w))$ of (\ref{eq3.7}). Finally, taking the logarithm and applying $\frac{\partial^2}{\partial u_i\partial u_j}$ gives the desired equation. \end{proof} \begin{prop} Suppose the $(x_i,y_i)$s and $u$ are related as in Proposition {\rm \ref{Tklein}}. Let $(x,y)$ be any one of the $(x_i,y_i)$s. Then we have infinitely many relations, of homogeneous weight, linear in \lbl{inversion-eq} \[ \wp_{ij}(u),\ \wp_{ijk}(u),\ \wp_{ijk\ell}(u), \cdots \qquad(i,\ j,\ k = 1,\ 2, 3), \] and whose coefficients are polynomials in $x$, $y$ and the $\mu_j$s.
We list the first three of them of lower weights as follows\,{\rm\upshape:} \begin{align} &\wp_{33}(u)y+\wp_{23}(u)x +\wp_{13}(u)=x^2, \lbl{kl1} \\ &\left(\wp_{23}(u)+\tfrac13\mu_1\wp_{33}(u)-\wp_{333}(u)\right)y + \big(\wp_{22}(u)-\wp_{233}(u) \notag\\ &\qquad\qquad + \tfrac13 \mu_1\wp_{23}(u)\big)x +\tfrac13\mu_1\wp_{13}(u) +\wp_{12}(u)-\wp_{133}(u) = 2xy-\tfrac23 \mu_1 x^2, \lbl{kl2}\\ & -3 y^2+\left(\tfrac13 \wp_{33} \mu_2 +\tfrac12 \wp_{3333} -\tfrac12\mu_1 \wp_{333}+\tfrac19 {\mu_1}^2\wp_{33} + 2 \mu_1 x-\tfrac32 \wp_{233}+2 \mu_4 \right) y \notag\\ & \qquad\qquad +\left(\tfrac23 \mu_2 -\tfrac19 {\mu_1}^2\right) x^2+(-\tfrac12 \mu_1\wp_{233} + \mu_5 +\tfrac12 \wp_{2333}+\tfrac13 \wp_{23} \mu_2 +\tfrac19 {\mu_1}^2\wp_{23} - \tfrac32 \wp_{223})x \notag\\ & \qquad\qquad\qquad + \tfrac12 \wp_{1333} + \tfrac13 \mu_2\wp_{13} +\mu_8 +\tfrac19{\mu_1}^2\wp_{13} - \tfrac32 \wp_{123}-\tfrac12 \mu_1 \wp_{133}=0. \lbl{kl3} \end{align} More equations of this type are available in {\rm\cite{Weier34}}. \end{prop} \begin{proof} These relations are derived by expanding (\ref{klein}) with respect to a local parameter $t=x^{-1/3}$ in the vicinity of the point at infinity, and comparing the principal parts of the poles on both sides of (\ref{klein}); in this way one also finds the solution of the Jacobi inversion problem. \end{proof} \begin{remark} (1) In the case of trigonal curves, a formula of this type was first given explicitly for a particular case of the curve (\ref{eq1.1}) in \cite{eel00}. \newline\noindent (2) We use in the proof of Lemma \ref{L4index} below the first seven relations in Proposition \ref{inversion-eq}, namely those of weight from $-6$ to $-12$. \end{remark} The first two relations in Proposition \ref{inversion-eq} give the solution of the Jacobi inversion problem (see also \cite{bel00}): \begin{cor} Suppose the $(x_i,y_i)$s and $u$ are related as in Proposition {\rm \ref{Tklein}}.
The solution of the Jacobi inversion problem is given by $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$, where these points are the set of common zeros of the equations {\rm\upshape(\ref{kl1})}, {\rm\upshape(\ref{kl2})} for $(x,y)$. \lbl{jacobi-inversion} \end{cor} We remark that the right hand sides of equations (\ref{kl1}) and (\ref{kl2}) are related to the polynomials $h_3(z,w)$ and $h_2(z,w)$ defining the canonical meromorphic differentials $\eta_3(z,w)$ and $\eta_2(z,w)$. {\em Further, the first equation in {\rm\upshape (4.8)} is directly related to the determinant of the matrices constructed in {\rm\cite{on05}}, using the algebraic approach developed in \,{\rm\cite{mp05}}.} If we take the resultant of (\ref{kl1}), (\ref{kl2}) with respect to $y$, we find a cubic equation in $x$ which can be used to substitute for $x^3$ in terms of lower powers of $x$: \begin{equation}\begin{split} x^3 &= \tfrac12 \left(3 \wp_{23}+ \mu_1 \wp_{33}-\wp_{333}\right) x^2+\tfrac12 \left(\wp_{33} \wp_{22}+2 \wp_{13}+\wp_{23} \wp_{333}-\wp_{33} \wp_{233}-{\wp_{23}}^2\right) x\\ & \qquad +\tfrac12 \wp_{33} \wp_{12}-\tfrac12 \wp_{33} \wp_{133}-\tfrac12 \wp_{13} \wp_{23}+\tfrac12 \wp_{13} \wp_{333}. \lbl{z3} \end{split} \end{equation} If we now take the resultant of (\ref{kl1}), (\ref{kl3}) with respect to $y$, we get a quartic in $x$ which can be reduced to a quadratic by repeated use of (\ref{z3}). This quadratic in $x$ is not further reducible. A quadratic equation in $x$ has at most two solutions, whereas $u$ has three free variables. Hence the coefficients of $1$, $x$, and $x^2$ in the quadratic must all be identically zero. Furthermore, each coefficient can be split into two parts which are even and odd under the reflection (\ref{eq1.6}), and each of these parts must vanish. So each term of order higher than two in the expansion of (\ref{klein}) can give up to six separate equations involving the $\wp$ functions.
The simplest two arising from the resultant of (\ref{kl1}), (\ref{kl3}) are \begin{align} \wp_{222} - 2\wp_{33}\wp_{233} + 2\wp_{23}\wp_{333} - \mu_2\wp_{233} + \mu_3\wp_{333} + \mu_1\wp_{223}&=0, \lbl{3index1}\\ \wp_{23}\wp_{233} - 2 \wp_{33}\wp_{223} + \wp_{333}\wp_{22} + 2\wp_{133} + \mu_1(\wp_{23}\wp_{333}-\wp_{33} \wp_{233})&=0, \lbl{3index2} \end{align} where $\wp_{ij}=\wp_{ij}(u)$ and $\wp_{ijk}=\wp_{ijk}(u)$. \section{Equations satisfied by the Abelian functions for the general trigonal case} \label{PDEs} We can use the expansion of (\ref{klein}) as described in the discussion following Theorem \ref{Tklein} to derive various equations which the Abelian functions defined by (\ref{eq3.1}) and (\ref{eq3.2a}) must satisfy. We consider first the 4-index equations, the generalizations of $\wp''=6\wp^2-\tfrac12g_2$ in the cubic (genus 1) case. \begin{lemma} The $4$-index functions $\wp_{ijk\ell}$ associated with {\rm (\ref{eq5.1})} satisfy the following relations\,{\rm :}\lbl{L4index} \begin{align*} \wp_{3333}& =6{\wp_{33}}^2 +{\mu_1}^2\wp_{33} -3 \wp_{22} + 2 \mu_1\wp_{23} - 4\mu_2 \wp_{33} -2 \mu_4,\\ \wp_{2333}& =6 \wp_{23}\wp_{33} + {\mu_1}^2\wp_{23} + 3\mu_3\wp_{33} - \mu_2\wp_{23} -\mu_5 -\mu_1\wp_{22},\\ \wp_{2233}& = 4{\wp_{23}}^{2}+ 2 \wp_{33}\wp_{22} +\mu_1\mu_3\wp_{33} - \mu_2\wp_{22} + 2 \mu_6 + 3\mu_3\wp_{23} + \mu_1\mu_2\wp_{23} + 4 \wp_{13},\\ \wp_{2223}& =6 \wp_{22}\wp_{23} + 4 \mu_1\wp_{13} + \mu_1\mu_3\wp_{23} + \mu_2\mu_3\wp_{33} + 2 \mu_3\mu_4 + {\mu_2}^2\wp_{23} + 4 \mu_4\wp_{23} + 3 \mu_3\wp_{22}\\ &\qquad + 2\mu_1\mu_6 + \mu_2\mu_5 - 2 \mu_5\wp_{33},\\ \wp_{2222}& = 6{\wp_{22}}^2 -2 \mu_2\mu_3\wp_{23} + \mu_1\mu_2\mu_5 + 2 \mu_1\mu_3\mu_4 + 24\wp_{13}\wp_{33} + 4{\mu_1}^2\wp_{13} - 4\mu_2\wp_{13} - 4 \wp_{1333} \\ &\qquad + 4\mu_5\wp_{23} +2 {\mu_1}^2\mu_6 - 2 \mu_2\mu_6 + \mu_3\mu_ 5 - 3{\mu_3}^2\wp_{33} + 12\mu_6\wp_{33} + 4 \mu_4\wp_{22} \\ &\qquad + {\mu_2}^2\wp_{22} + 4\mu_1\mu_3\wp_{22},\\ \wp_{1233}& = 4 \wp_{13}\wp_{23} +2 
\wp_{33}\wp_{12} - 2\mu_1\wp_{33}\wp_{13} - \tfrac13{\mu_1}^{3}\wp_{13} + \tfrac13\mu_1\wp_{1333} +\tfrac13{\mu_1}^2\wp_{12} + 3 \mu_3\wp_{13} \\ &\qquad +\tfrac13 \mu_1\mu_8 +\tfrac43\mu_1\mu_2\wp_{13} - \mu_2\wp_{12} + \mu_9,\end{align*}\begin{align*} \wp_{1223}& = 4 \wp_{23}\wp_{12} + 2 \wp_{13}\wp_{22} - 2\mu_2\wp_{33}\wp_{13} - 2\mu_8\wp_{33} - \tfrac23\mu_8\mu_2 +\tfrac13\mu_2\wp_{1333} + 3 \mu_3 \wp_{12} + 4\mu_4\wp_{13} \\ &\qquad + \tfrac43 {\mu_2}^2\wp_{13} - 2 \wp_{11} - \tfrac13{\mu_1}^2\mu_2\wp_{13} +\tfrac13 \mu_1\mu_2\wp_{12} + \mu_1\mu_3\wp_{13},\\ \wp_{1222}& =6 \wp_{22}\wp_{12} + 6\mu_9\wp_{33} - \mu_3\wp_{1333} + 4 \mu_5\wp_{13} + {\mu_2}^2\wp_{12} - \mu_2\mu_9 + 4 \mu_ 4\wp_{12} - 2 \mu_1\wp_{11} \\ &\qquad + 6\mu_3\wp_{33}\wp_{13} -3\mu_2\mu_3\wp_{13} + {\mu_1}^2\mu_3\wp_{13} + 3 \mu_1\mu_3\wp_{12} - \mu_1\mu_2\mu_8,\\ \wp_{1133}& =4{\wp_{13}}^2 +2 \wp_{33}\wp_{11} - \mu_9\wp_{23} + 2\mu_6\wp_{13} +\mu_8\wp_{22} - \mu_5\wp_{12} + \tfrac23 \mu_4\wp_{1333} + \tfrac23 \mu_4\mu_8 \\ &\qquad +2\mu_2\mu_8\wp_{33} - 4\mu_4\wp_{13}\wp_{33} + \tfrac23\mu_2\mu_4\wp_{13} + \mu_1\mu_9\wp_{33} - \mu_1\mu_8\wp_{23} + \mu_1\mu_5\wp_{13} \\ &\qquad -\tfrac23{\mu_1}^2\mu_4\wp_{13} + \tfrac23 \mu_1\mu_4\wp_{12},\\ \wp_{1123}& =4 \wp_{12}\wp_{13} + 2 \wp_{23}\wp_{11} + 2\mu_3\mu_4\wp_{13} - \mu_3\mu_8\wp_{33} - 2\mu_5\wp_{13}\wp_{33} + \mu_2\mu_8\wp_{23} + \tfrac43\mu_2\mu_5\wp_{13} \\ &\qquad -\mu_9\wp_{22} + 2\mu_6\wp_{12} + \tfrac13\mu_5\wp_{1333} + \tfrac13\mu_5\mu_8 +\mu_1\mu_9\wp_{23} -\tfrac13{\mu_1}^2\mu_5\wp_{13} +\tfrac13 \mu_1\mu_5\wp_{12},\\ \wp_{1122}& = 4{\wp_{12}}^2 +2 \wp_{11}\wp_{22} +\tfrac23 {\mu_1}^2\mu_6\wp_{13} + \tfrac43 \mu_1\mu_6\wp_{12} + \mu_3\mu_9\wp_{33} + \mu_2\mu_9\wp_{23} + 8\mu_{12}\wp_{33} \\ &\qquad + 2 \mu_3\mu_4\wp_{12}-\tfrac23\mu_6\wp_{1333} + 4\mu_8\wp_{13} -\tfrac23\mu_6\mu_8 + 4\mu_6\wp_{33}\wp_{13} - \mu_3\mu_8\wp_{23} + \mu_3\mu_5\wp_{13} \\ &\qquad - \tfrac83 \mu_2\mu_6\wp_{13} + \mu_2\mu_8\wp_{22} + 
\mu_2\mu_5\wp_{12},\\ \wp_{1113}& = 6 \wp_{13}\wp_{11} + 6 \mu_2\mu_8\wp_{13} - 2\mu_2\mu_{12}\wp_{33} - {\mu_1}^2\mu_8\wp_{13} + 4\mu_1\mu_{12}\wp_{23} + \mu_1\mu_8\wp_{12} + \mu_5\mu_9\wp_{33} \\ &\qquad + {\mu_5}^2\wp_{13} - 2 \mu_4\mu_9\wp_{23} + \mu_1\mu_9\wp_{13} - 6\mu_8\wp_{33}\wp_{13} - 2\mu_6\mu_8\wp_{33} + \mu_8\wp_{1333} - 4\mu_4\mu_{12} \\ &\qquad + 3 \mu_9\wp_{12} - 6 \mu_{12}\wp_{22} - \mu_5\mu_8\wp_{23} + 4 \mu_4\mu_6\wp_{13},\\ \wp_{1112}& = 6 \wp_{12}\wp_{11} + 6\mu_3\mu_{12}\wp_{33} + 3 \mu_3\mu_8\wp_{13} - 2\mu_6\mu_8\wp_{23} - \mu_1{\mu_8}^2 + 5\mu_2\mu_8\wp_{12} + 4\mu_2\mu_{12}\wp_{23} \\ &\qquad - 2 \mu_1\mu_{12}\wp_{22} + 4 \mu_4\mu_6\wp_{12} - \mu_5\mu_8\wp_{22} + {\mu_5}^2\wp_{12} + 4\mu_5\mu_{12} - \mu_9\wp_{1333} - 4 \mu_1\mu_{12}\mu_4 \\ &\qquad + {\mu_ 1}^2\mu_9\wp_{13} + 3 \mu_1\mu_9\wp_{12} - 2 \mu_4\mu_9\wp_{22} +\mu_5\mu_9\wp_{23} -4 \mu_2\mu_9\wp_{13} + 6 \mu_9\wp_{13}\wp_{33} - 3 \mu_8\mu_9,\\ \wp_{1111}& =6{\wp_{11}}^2 + 4 \mu_4\mu_9\wp_{12} - 8{\mu_4}^2\mu_{12} - 2{\mu_2}^2\mu_4\mu_{12} -3 {\mu_8}^2\wp_{22} - 2 \mu_4{\mu_8}^2 + {\mu_5}^2\wp_{11} -3{\mu_9}^2\wp_{33} \\ &\qquad - 4\mu_{12}\wp_{1333} + 24 \mu_{12}\wp_{33}\wp_{13} + 12 \mu_5\mu_{12}\wp_{23} + \mu_2 \mu_4\mu_5\mu_9 - 6 \mu_1\mu_3\mu_4\mu_{12} \\ &\qquad +\mu_1\mu_2\mu_5\mu_{12} +2 {\mu_6}^2\mu_8 +2{\mu_2}^2{\mu_8}^2 - \mu_5\mu_6\mu_9 - 2 \mu_5\mu_9\wp_{13} + 4\mu_4\mu_6\wp_{11} + 4\mu_6\mu_8\wp_{13} \\ &\qquad + 8\mu_2\mu_8\wp_{11} - 6 \mu_2\mu_6\mu_{12} - 12 \mu_2\mu_{12}\wp_{13} + 4{\mu_1}^2\mu_{12}\wp_{13} + 2 {\mu_1}^2\mu_6\mu_{12} + 2\mu _8\mu_5\wp_{12} \\ &\qquad - 6 \mu_8\mu_9\wp_{23} - 12\mu_4\mu_{12}\wp_{22} + \mu_2 {\mu_5}^2\mu_8 + 2\mu_1\mu_4\mu_6\mu_9 +\mu_1\mu_5\mu_6\mu_8 + 12\mu_6\mu_{12}\wp_{33} \\ &\qquad + 4\mu_1\mu_9\wp_{11} + 2\mu_3 {\mu_4}^2\mu_9 + 9\mu_3\mu_5\mu_{12} - 2\mu_1\mu_3{\mu_8}^2 - 6\mu_3\mu_8\mu_9 + 2\mu_1\mu_2\mu_8\mu_9 \\ &\qquad + \mu_3\mu_4\mu_5\mu_8 + 2\mu_2\mu_4\mu_6 \mu_8 + 2\mu_2{\mu_9}^2. 
\end{align*} \end{lemma} \begin{proof} Many of these relations follow from the sets of equations generated from the first seven terms of the expansion of (\ref{klein}), as indicated in Proposition \ref{inversion-eq}, by an argument similar to that explained at the end of the previous Section. Others can be derived by making use of derivatives of the equations in Lemma \ref{L3index}, or products of these equations with three-index expressions $\wp_{ijk}$, working in a self-consistent way from higher to lower weights. The calculations are somewhat long and tedious, and much facilitated by heavy use of Maple. Full Maple worksheets are available on request from the authors. \end{proof} \begin{remark} The complete set of the four-index relations for $\wp$-functions for genus three was derived by Baker \cite{ba02} in the hyperelliptic case only. As far as we know, the above relations are new, and a comparison with Baker's relations is of interest. \end{remark} \begin{remark} With the use of (\ref{eq3.2a}), these equations can be written in a slightly more compact form involving the $Q_{ijk\ell}$ functions. For example, the sixth equation (for $\wp_{2222}$) becomes \begin{align*} Q_{2222}& = -2\mu_2\mu_3\wp_{23} + \mu_1\mu_2\mu_5 + 2\mu_1\mu_3\mu_4 + 4{\mu_1}^2\wp_{13} - 4\mu_2\wp_{13} - 4 Q_{1333} + 4\mu_5\wp_{23} \\ &\qquad +2 {\mu_1}^2\mu_6 - 2\mu_2\mu_6 + \mu_3\mu_5 - 3{\mu_3}^2\wp_{33} + 12\mu_6\wp_{33} + 4\mu_4\wp_{22} + {\mu_2}^2\wp_{22} + 4\mu_1\mu_3\wp_{22}. \end{align*} The importance of this switch to the $Q$ variables is that the equations become {\em linear} in the $Q_{ijk\ell}$ and the 2-index $\wp_{ij}$. An alternative way of looking at this is that the equations in Lemma \ref{L4index} have only second-order poles in $\sigma$. \end{remark} \begin{remark} The first relation in Lemma \ref{L4index}, after differentiating twice with respect to $u_3$, becomes the Boussinesq equation for the function $\wp_{33}$ (see \cite{bel00,eel00}).
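Indeed, applying $\tfrac{\partial^2}{\partial {u_3}^2}$ to the first relation of Lemma \ref{L4index} and noting that $\wp_{2233}=\tfrac{\partial^2}{\partial {u_2}^2}\wp_{33}$ and $\wp_{2333}=\tfrac{\partial^2}{\partial u_2\,\partial u_3}\wp_{33}$, one finds, on writing $p=\wp_{33}$, $x=u_3$, $t=u_2$,
\begin{equation*}
3\,p_{tt}=6\,(p^2)_{xx}-p_{xxxx}+\left({\mu_1}^2-4\,\mu_2\right)p_{xx}+2\,\mu_1\,p_{xt},
\end{equation*}
an equation of Boussinesq type.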
\end{remark} \begin{lemma} The $3$-index functions $\wp_{ijk}$ associated with {\rm (\ref{eq5.1})} satisfy a number of bi-linear relations {\rm (}linear in both $3$-index and $2$-index functions{\rm )}. These have no analogue in the genus $1$ case. For example, in decreasing weight, starting at $-6$ we have \lbl{L3index} \begin{alignat*}{2} & {-}2\,\wp_{33}\wp_{233}+2\,\wp_{23}\wp_{333}+\mu_3\wp_{333} +\mu_2\wp_{233}-\mu_1\wp_{223}+\wp_{222}=0,\quad &[-6]\\ & {-}2\,\wp_{33}\wp_{223}+\mu_1\wp_{33}\wp_{233}+\wp_{23}\wp_{233} -\mu_1\wp_{23}\wp_{333}+\wp_{333}\wp_{22}+2\,\wp_{133}=0, \quad &[-7]\\ & {-}2\,\wp_{23}\wp_{223}+4\,\wp_{22}\wp_{233}+4\,\mu_1\wp_{133} +\mu_3\mu_2\wp_{333}+{\mu_2}^2\wp_{233} &\\ & \qquad +4\,\mu_4\wp_{233}-2\,\mu_5\wp_{333} -2\,\wp_{33}\wp_{222}+\mu_2\wp_{222} -\mu_1\mu_2\wp_{223}-4\,\wp_{123}=0, \quad &[-8]\\ & 3\,\mu_1\mu_3\wp_{223}-3\,\mu_2\mu_3\wp_{233} -24\,\wp_{33}\wp_{133}+24\,\wp_{13}\wp_{333}+12\,\wp_{122} -12\,\mu_1\wp_{123} &\\ & \qquad +12\,\mu_2\wp_{133}-3\,\mu_3\wp_{222} +6\,\mu_5\wp_{233}-3\,{\mu_3}^2\wp_{333} +12\,\mu_6\wp_{333}-6\,\wp_{23}\wp_{222}+6\,\wp_{22}\wp_{223}=0, \quad &[-9]\\ & 2\,\wp_{33}\wp_{123}-\mu_1\wp_{33}\wp_{133}+\mu_1\wp_{13}\wp_{333} +\wp_{23}\wp_{133}-\wp_{12}\wp_{333}-2\,\wp_{13}\wp_{233}=0, \quad &[-10]\\ & \wp_{113}+\wp_{13}\wp_{223}-2\,\mu_4\wp_{133}+\wp_{33}\wp_{122} -\wp_{22}\wp_{133}-\wp_{12}\wp_{233}+\mu_8\wp_{333} -\mu_2\wp_{133}\wp_{33} &\\ & \qquad -\mu_1\wp_{13}\wp_{233}+\mu_2\wp_{13}\wp_{333} +\mu_1\wp_{133}\wp_{23}=0, \quad &[-11]\\ & {-}\wp_{112}-3\,\mu_9\wp_{333}+\wp_{13}\wp_{222}-\wp_{12}\wp_{223} -2\,\wp_{22}\wp_{123}-2\,\mu_5\wp_{133} +\mu_1\wp_{113}+2\,\wp_{23}\wp_{122} &\\ & \qquad -\mu_8\wp_{233}-\mu_2\wp_{13}\wp_{233} +3\,\mu_3\wp_{33}\wp_{133}-3\,\mu_3\wp_{13}\wp_{333} +\mu_2\wp_{23}\wp_{133}=0, \quad &[-12]\\ & 8\,\mu_4\wp_{133}\wp_{33}-8\,\mu_4\wp_{13}\wp_{333} -4\,\mu_2\mu_4\wp_{133}+2\,\mu_1\mu_9\wp_{333} -2\,\mu_1\mu_8\wp_{233}+2\,\mu_1\mu_5\wp_{133} &\\ & \qquad 
+4\,\mu_1\mu_4\wp_{123}+4\,\mu_8\mu_2\wp_{333} +3\,\mu_3\wp_{13}\wp_{233}-3\,\mu_3\wp_{23}\wp_{133} -\mu_1\wp_{112}+3\,\wp_{12}\wp_{222} &\\ & \qquad +4\,\wp_{11}\wp_{333}-2\,\mu_6\wp_{133}-3\,\wp_{122}\wp_{22} -4\,\mu_4\wp_{122}+\mu_9\wp_{233} +2\,\mu_8\wp_{223}-8\,\wp_{33}\wp_{113} &\\ & \qquad +4\,\wp_{13}\wp_{133}-2\,{\mu_1}^2\wp_{113} +2\,\mu_2\wp_{113}-2\,\mu_5\wp_{123}=0, \quad &[-13] \end{alignat*} \begin{alignat*}{2} & 4\,\wp_{123}\wp_{13}+4\,\mu_4\wp_{23}\wp_{133} +\mu_3\mu_8\wp_{333}-2\,\mu_5\wp_{33}\wp_{133} +2\,\mu_5\wp_{13}\wp_{333}+\mu_2\mu_8\wp_{233} +\mu_8\wp_{222} &\\ & \qquad -4\,\wp_{12}\wp_{133}-2\,\wp_{23}\wp_{113} + 2\,\wp_{33}\wp_{112} - 4\,\mu_4\wp_{13}\wp_{233}-\mu_1\mu_8\wp_{223}=0, \quad &[-14]\\ & {-}\mu_9\wp_{222}+\mu_1\mu_9\wp_{223}+4\,\wp_{13}\wp_{122} +2\,\wp_{23}\wp_{112}-2\,\wp_{113}\wp_{22} -\mu_3\mu_9\wp_{333}-\mu_2\mu_9\wp_{233} &\\ & \qquad +2\,\mu_5\wp_{23}\wp_{133}-8\,\mu_{12}\wp_{333} -4\,\mu_8\wp_{133}-4\,\mu_6\wp_{13}\wp_{333} +4\,\mu_6\wp_{33}\wp_{133}-4\,\wp_{12}\wp_{123} &\\ & \qquad -2\,\mu_5\wp_{13}\wp_{233}=0. \quad &[-15] \end{alignat*} Here the number in brackets $[\quad ]$ indicates the weight. \end{lemma} \begin{proof} We have already given the first two of these equations in the discussion following Proposition \ref{Tklein}. Some of the others follow in the same way from the expansion of (\ref{klein}). Alternatively, some can be calculated directly by expressing the equations in Lemma \ref{L4index} in terms of $\wp_{ijk\ell}$ and $\wp_{mn}$ functions, then using cross differentiation on suitably chosen pairs of equations. For example, the first relation above for $\wp_{222}$ can be derived from \[ \frac{\partial}{\partial u_2}\wp_{3333} -\frac{\partial}{\partial u_3}\wp_{2333}=0.
\] \end{proof} \begin{remark} For a fixed weight, these relations are not always unique, for example at weight $-11$ we also have the relation \lbl{non-unique} \begin{equation*} \wp_{33}\wp_{122}+2\wp_{23}\wp_{123}+3\wp_{113} +\mu_2\wp_{13}\wp_{333}-\mu_2\wp_{33}\wp_{133} +\mu_8\wp_{333}-2\wp_{12}\wp_{233}-2\mu_4\wp_{133} -\wp_{13}\wp_{223}=0 \end{equation*} These dual relations arise because in some cases the cross differentiation can be done in two different ways. In deriving the results in this section, it is sometimes required to make use of both bilinear relations at a given weight to provide enough equations to solve for the unknowns. A full list of the known bilinear relations is given at \cite{Weier34}. \end{remark} \begin{lemma} The quadratic expressions in the $3$-index functions $\wp_{ijk}$ associated with {\rm (\ref{eq5.1})} down to weight $-23$ can be expressed in terms of {\rm (}at most cubic{\rm )} relations in the $\wp_{mn}$ and $\wp_{1333}$. For example we have the following five relations down to weight $-8$\,{\rm:}\lbl{L33index} \begin{align*} {\wp_{333}}^2 & = {\wp_{33}}^2{\mu_1}^2 + 2\mu_1\wp_{23}\wp_{33} + {\wp_{23}}^2 + 4 \wp_{13} - 4\wp_{33}\wp_{22} + 4{\wp_{33}}^3 - 4\mu_2{\wp_{33}}^2 - 4 \mu_4\wp_{33},\\ \wp_{233}\wp_{333} & = 2 \mu_3{\wp_{33}}^2 + 4 {\wp_{33}}^2\wp_{23} - \mu_1\wp_{33}\wp_{22} - 2 \mu_5\wp_{33} - 2\mu_2\wp_{33}\wp_{23} + {\mu_1}^2\wp_{33}\wp_{23} - 2\wp_{12}\\ & \quad -\wp_{22}\wp_{23}+\mu_1{\wp_{23}}^2 + 2 \mu_1\wp_{13},\\ \wp_{133}\wp_{333} & =-\tfrac13 \mu_1\wp_{33}\wp_{12} + \tfrac13{\mu_1}^2\wp_{33}\wp_{13}-\tfrac43\mu_2\wp_{33}\wp_{13} + \tfrac23\wp_{33}\wp_{1333}-\tfrac43\mu_8\wp_{33}+\wp_{23}\wp_{12}\\ & \quad + \mu_1\wp_{13}\wp_{23} - 2 \wp_{13}\wp_{22},\\ \wp_{223}\wp_{333} & = 2\mu_1\wp_{23}\wp_{22} - 2 \mu_2\wp_{33}\wp_{22} + 2\mu_1\mu_4\wp_{23} - \mu_1\mu_5\wp_{33} + 2{\wp_{33}}^2\wp_{22} - 2 \mu_4\wp_{22} + 2 \wp_{33}{\wp_{23}}^2\\ & \quad + \tfrac43{\mu_1}^2 \wp_{13} - \tfrac43\mu_2\wp_{13} - \tfrac43 
\mu_1\wp_{12} - \tfrac43 \mu_8 - 2{\wp_{22}}^2 + \mu_1\mu_2\wp_{33}\wp_{23} + \tfrac23 \wp_{1333}\\ & \quad +\wp_{23}\wp_{33}\mu_3 + \mu_1\mu_3{\wp_{33}}^2 - \mu_2{\wp_{23}}^2 -\mu_5 \wp_{23},\\ {\wp_{233}}^2 & = 4 \wp_{33}{\wp_{23}}^2 + 8 \wp_{13}\wp_{33} + 4 \mu_3\wp_{33}\wp_{23} - 2\mu_1\wp_{23}\wp_{22} + \tfrac43{\mu_1}^2\wp_{13} - \tfrac43\mu_2\wp_{13} + 4 \mu_6\wp_{33}\\ & \quad +{\mu_1}^2{\wp_{23}}^2 - \tfrac43 \mu_8 + {\wp_{22}}^{2} - \tfrac43 \wp_{1333} - \tfrac43\mu_1\wp_{12}. \end{align*} The expressions at lower weight quickly become very lengthy. For the purely trigonal case we give a list of the known quadratic expressions in the $3$-index functions up to weight $-15$ in {\rm Appendix B}. The full list for the general $(3,4)$-curve down to weight $-23$ is available at {\rm\cite{Weier34}}. \end{lemma} \begin{proof} The relations can be found using a combination of three types of intermediate relations. One type is from terms in the expansion of (\ref{klein}). Another is to multiply one of the linear three-index $\wp_{ijk}$ relations above by another $\wp_{ijk}$ and substitute for previously calculated $\wp_{ijk}\wp_{\ell mn}$ relations of higher weight. Yet another is to take a derivative of one of the bilinear three-index $\wp_{ijk}$ relations above and to substitute the known linear four-index $\wp_{ijk\ell}$ and previously calculated $\wp_{ijk}\wp_{\ell mn}$ relations. Again, we work in a self-consistent way from higher to lower weights. The strategy for all the results in this section is to proceed down one weight at a time and to derive {\em all} the three types of relations (4-index $\wp_{ijk\ell}$, bilinear 2- and 3-index, and quadratic 3-index) at a given weight before moving down to the next. An extra complication is that at certain weights some of the intermediate calculations can involve {\em quartic} terms in the $\wp_{mn}$ and $\wp_{1333}$. It is always possible to find enough relations to eliminate the quartic term up to weight $-23$. 
\end{proof} \begin{remark} (1) These relations are the generalizations of the familiar relation $(\wp')^2 = 4 \wp^3-g_2\wp-g_3$ in the genus 1 theory. \\ (2) For equations of weight below $-23$, we have not been able to find cubic expressions for the $\wp_{ijk}\wp_{\ell m n}$ terms. We believe it should be possible to explain this using the results of Cho and Nakayashiki \cite{cn06}, and we are currently investigating this possibility. \\ (3) The calculations in this section make no use of the expansion of the $\sigma$ function, which is given in the next section. \end{remark} \section{Expansion of the $\sigma$-function} \label{sigma_expan} This section is devoted to showing that the coefficients of the power series expansion of $\sigma(u)$ are polynomials in the $\mu_j$s. In the Weierstrass formulation of the theory of elliptic functions, the $\sigma$-function is defined as a power series expansion in the Abelian variable $u$ with coefficients depending on the Weierstrass parameters $g_2$, $g_3$, these coefficients being related by certain recursion relations. The extension of Weierstrass theory to arbitrary algebraic curves was intensively developed in the 19th century and later, its development being associated with names such as Baker, Bolza, Brioschi, Burkhardt, Klein, and Wiltheiss. Some important modern developments of this theory are due to Buchstaber and Leykin \cite{bl02,bl05}, who give a construction of linear differential (heat-like) operators that annihilate the $\sigma$-function for any $(m,n)$-curve. In the hyperelliptic case the operators are sufficient to find the recursion defining the whole series expansion. The exact analogue of the Weierstrass recursive series formula is known only for genus two, see \cite{bl05}, p.68. In other cases the detailed results have not yet been developed, although the general method is provided in the publications mentioned above.
Here we shall give the first few terms of the power series expansion, obtained by finding the coefficients of the Taylor series using the PDEs given in Lemma \ref{L4index}. \begin{theorem} The function $\sigma(u)$ associated with the general trigonal curve {\rm (\ref{eq1.1})} of genus three has an expansion of the following form\,{\rm :}\lbl{L6.1} \begin{equation} \begin{aligned} \sigma(u_1,u_2,u_3) &=\varepsilon\cdot\big(C_5(u_1,u_2,u_3)+C_6(u_1,u_2,u_3) +C_{7}(u_1,u_2,u_3) +\cdots\big), \end{aligned} \end{equation} where $\varepsilon$ is a non-zero constant and each $C_{j}$ is a polynomial composed of sums of monomials in the $u_i$ of odd total degree and of total weight $j$, with polynomial coefficients in the $\mu_i$s of total weight $(5-j)$. In particular, $\sigma(u)$ is an odd function {\rm\upshape(}see {\rm\upshape\ref{parity}}{\rm\upshape)}. The first few $C_j$s are \begin{align*} C_5 & = u_1-u_3\,{u_2}^2+\tfrac1{20}\,{u_3}^5, \qquad\qquad\quad\quad\;\; C_6 = \tfrac1{12}\,\mu_1{u_3}^4u_2 - \tfrac13\,\mu_1{u_2}^3,\\ C_7 & = \tfrac {1}{504}\,\big({\mu_1}^{2}-3\,\mu_2 \big){u_3}^7 + \tfrac16\,\mu_2{u_3}^3{u_2}^2,\quad C_8 = \tfrac {1}{360}\,\big({\mu_1}^3+9\,\mu_3-2\,\mu_1\mu_2\big){u_3}^6u_2 - \tfrac12\,\mu_3{u_3}^2{u_2}^3,\\ C_9 & = \tfrac {1}{25920}\,\big({\mu_1}^2-3\,\mu_2\big)^2{u_3}^9 + \tfrac {1}{120}\,\big(2\,\mu_4-{\mu_2}^2 +{\mu_1}^2\mu_2+6\,\mu_1\mu_3 \big){u_3}^5{u_2}^2\\ & \quad -\tfrac1{12}\,\big( 4\,\mu_1\mu_3+4\,\mu_4+ \mu_2^2 \big) u_3{u_2}^4 + \tfrac1{12}\,\mu_4{u_3}^4 u_1,\\ C_{10} & = \tfrac{1}{20160}\,\big(8\,\mu_1\mu_4 - 54\,\mu_2\mu_3 + 3\,\mu_1{\mu_2}^2 + 18\,{\mu_1}^2\mu_3+{\mu_1}^5 - 12\,\mu_5 - 4\,{\mu_1}^3\mu_2\big){u_3}^8u_2\\ & \quad + \tfrac {1}{72}\,\big(6\,\mu_2\mu_3 + 2\,\mu_1\mu_4 + \mu_1{\mu_2}^2 + {\mu_1}^2\mu_3 \big){u_3}^4{u_2}^3 \\ & \quad -\tfrac {1}{60}\,\big( 4\,{\mu_1}^2\mu_3 + \mu_1{\mu_2}^2 + 4\,\mu_5 + 4\,\mu_1\mu_4 - 2\,\mu_2\mu_3 \big){u_2}^5 + \tfrac16\,\mu_5{u_3}^3u_2u_1,\\ C_{11} & = -{\tfrac {1}{6652800}}\,
\big(18\,\mu_1\mu_2\mu_3+27\,{\mu_1}^4 \mu_2 - 72\,\mu_6 -3\,{\mu_1}^6 - 24\,\mu_2 \mu_4 + 16\,{\mu_1}^2\mu_4 - 24\,\mu_1\mu_5 \\ & \qquad + 27\,{\mu_3}^2 + 85\,{\mu_2}^3 - 4\,{\mu_1}^3\mu_3 - 82\,{\mu_1}^2{\mu_2}^2 \big){u_3}^{11} + \tfrac{1}{5040} \, \big(27\,{\mu_3}^2+{\mu_2}^3-6\,\mu_2\mu_4 \\ & \qquad -18\,\mu_1\mu_2\mu_3 + 8\,{\mu_1}^{3}\mu_3 - 4\,\mu_1\mu_5 + 6\,{\mu_1}^2\mu_4 + 12\,\mu_6 + {\mu_1}^4\mu_2 - 3\,{\mu_1}^2{\mu_2}^2\big) {u_3}^7{u_2}^2\\ & \quad -\tfrac {1}{72}\, \big(9\,{\mu_3}^2-{\mu_2}^3-4\,\mu_2\mu_4 - 2\,\mu_1\mu_2\mu_3 \big){u_3}^3{u_2}^4 \\ & \quad + \tfrac {1}{360}\,\big(\mu_1\mu_5 - 4\,\mu_2\mu_4 + {\mu_1}^2\mu_4+3\,\mu_6 \big) {u_3}^6u_1 - \tfrac12\,\mu_6{u_3}^2{u_2}^2u_1,\\ C_{12}& = -\tfrac {1}{1814400}\, \big( 27\,\mu_1{\mu_3}^2 - 243\,{\mu_2}^2\mu_3-{\mu_1}^7 + 72\,\mu_1\mu_2\mu_4 - 31\,{\mu_1}^4\mu_3 - 144\,\mu_2\mu_5 - 16\,{\mu_1}^3\mu_4\\ &\quad + 6\,{\mu_1}^5\mu_2 - 10\,{\mu_1}^3{\mu_2}^2 + 24\,{\mu_1}^2\mu_5 + 4\,\mu_1{\mu_2}^3 - 72\,\mu_1\mu_6 + 180\,{\mu_1}^2\mu_2\mu_3 \big){u_3}^{10}u_2\\ & \quad +{\tfrac {1}{2160}}\, \big( 18\,\mu_3\mu_4 - 2\,\mu_1{\mu_2}^3 + 27\,\mu_1{\mu_3}^2 - 9\,{\mu_2}^2\mu_3 + {\mu_1}^3{\mu_2}^2 + {\mu_1}^4\mu_3 + 6\,{\mu_1}^2\mu_2\mu_3 \\ & \quad + 2\,{\mu_1}^3\mu_4 + 12\,\mu_1\mu_6 \big){u_3}^6{u_2}^3 - \tfrac1{24}\,\mu_3 \big(3\,\mu_1\mu_3+4\,\mu_4 +{\mu_2}^2\big) {u_3}^2{u_2}^5 \\ & \quad + \tfrac {1}{120}\, \big(6\,\mu_3\mu_4+2\,\mu_1\mu_6-\mu_2\mu_5+{\mu_1}^2\mu_5\big) {u_3}^5u_2u_1 - \tfrac16\,\big( 2\,\mu_1\mu_6 + 2\,\mu_3\mu_4+\mu_2\mu_5\big) u_3{u_2}^3u_1. \end{align*} \end{theorem} \begin{proof} We divide the proof into four parts. \vskip 5pt \noindent{\bf Step 1.} We have already shown in \ref{parity}, that all the terms are of total odd degree or even degree. We first show that the expansion contains a term linear in $u_1$, so the expansion must be odd. Let $B(D)$ be the Brill-Noether matrix for an effective divisor $D$ of $C$. 
Then it is well known that (see for example \cite{on98} or \cite{onPEMS}) \[ \dim\varGamma(C,\mathcal{O}(D))=\deg D + 1 -\mathrm{rank}\, B(D), \] where $\varGamma(C,\mathcal{O}(D))$ is the space of functions on $C$ whose divisor is larger than or equal to $-D$. Moreover, for two points $P_1$, $P_2$ on $C$, $\dim\varGamma(C,\mathcal{O}(P_1+P_2))>1$ if and only if the point $\iota(P_1,P_2)\in \Theta^{[2]}$ is a singular point of $\Theta^{[2]}$ (note that $C$ is of genus $3$). By checking the Brill-Noether matrix $B(P_1+P_2)$, we see that $\Theta^{[2]}$ is non-singular everywhere. In particular, $\kappa^{-1}(\Theta^{[2]})$ is non-singular at the origin $(0,0,0)$. On the other hand, let $u$ and $v$ be two variables on $\kappa^{-1}(\Theta^{[1]})$. Then we have an expansion with respect to $v_3$: \[ 0=\sigma(u+v)=\sigma_3(u)v_3+\tfrac12\,\left(\sigma_2(u) + \sigma_{33}(u)\right){v_3}^2+\cdots, \] where $\sigma_i= \partial \sigma/\partial u_i$, etc. Hence \[ \sigma_3(u)=0, \qquad \sigma_2(u)+\sigma_{33}(u)=0. \] Again by expansion \[ 0=\sigma_3(u)=\sigma_{33}(0)u_3+\cdots, \] we see that \[ \sigma_{33}(0)=0. \] In summary, \[ \sigma_3(0)=\sigma_2(0)=0, \] so from the above arguments and (\ref{L2.14.3}), we must have \[ \sigma_1(0)\neq 0. \] Hence the $\sigma$-expansion must be odd. \vskip 5pt \noindent{\bf Step 2.} Next we show that the terms of weight less than 5 vanish and $C_5(u)$ is non-trivial. We write down all possible odd terms up to and including terms of weight $5$. Using the first two equations in Lemma \ref{L4index}, we can show that the coefficients of the terms of weight four and less are zero, and that the coefficients of weight $5$ are given by those in $C_5$ up to multiplication by a constant. We know from Step 1 that this constant is non-zero, and we absorb it into $\varepsilon$. \vskip 5pt \noindent{\bf Step 3.} We now calculate the coefficients $C_i$, $i>5$.
The proof of this step is by construction (with heavy use of Maple) using the PDEs given in Lemma \ref{L4index}. We expand $\sigma(u_1,u_2,u_3)$ in a Taylor series with undetermined coefficients, keeping only odd terms. We do {\em not} assume that the coefficients of the expansion are polynomial in the $\mu_i$, only that they are independent of the $u_i$. We then insert the expansion into the 4-index PDEs for the $\wp$, and truncate to successive orders in the weights of the $u_i$. These give a series of linear equations for the coefficients, and by using a sufficient number of the PDEs we can always find unique solutions, as listed above. We have carried out this calculation down to $C_{18}$. We have omitted the details of the expressions for $C_{13}, \dots,C_{18}$, as they are rather lengthy, but these are available at \cite{Weier34}. \vskip 5pt \noindent{\bf Step 4.} Now consider the general term in the expansion. Let \[ A\,{u_1}^p{u_2}^q{u_3}^r, \quad A\in\mathbf{Q}(\mu_i) \] be the unknown term of lowest weight. Since we have already shown by construction that the coefficients for all weights down to $-29$ with respect to the $u_j$s are polynomials, we may assume that $p+q+r\geqq 4$. We consider the set $ (\sharp)$ of quadratic equations in $\sigma(u)$ and its (higher) derivatives obtained from the above, by multiplying the equations in Lemma \ref{L4index} by $\sigma(u)^2$. We take an equation \begin{equation} \sigma(u)^2\,Q_{ijk\ell}(u)=\cdots\lbl{choosen} \end{equation} from $(\sharp)$ such that $ {u_1}^p{u_2}^q{u_3}^r$ is divisible by $u_i u_j u_ku_{\ell}$. We have at least one such equation.
Differentiating (\ref{choosen}), we have an equation of the form \begin{equation} \sigma(u)\, \left(\frac{\partial^{p+q+r}\,\sigma}{\partial {u_1}^p\, \partial {u_2}^q\,\partial {u_3}^r}\right)(u) +\cdots =0 \lbl{induction-key} \end{equation} in which all terms are polynomials in $\sigma(u)$ and its higher derivatives, and such that \[ \left(\frac{\partial^{p+q+r}\,\sigma}{\partial {u_1}^p\, \partial {u_2}^q\,\partial {u_3}^r}\right)(u) \] is the highest derivative in (\ref{induction-key}). By looking at the coefficient of the term $u_1$, we have a linear equation of the form \[ A+\cdots=0 \] over $\mathbf{Q}[\mu_1,\cdots,\mu_{12}]$. Since the terms other than $A$ in the above equation come from terms of $\sigma(u)$ whose weight is less than the weight of ${u_1}^p{u_2}^q{u_3}^r$, we see that $A$ is a polynomial in the $\mu_j$s by the induction hypothesis. \end{proof} \begin{remark} (1) In Theorem \ref{L6.1}, the constant $\varepsilon$ might be unity, another 8th root of 1, or some other constant. We have not been able to determine it further. If indeed $\varepsilon=1$, then the determination of $\varepsilon$ reduces to the choice of roots in (\ref{discriminant}) and (\ref{sigma-const}). The remaining results in this paper do not depend on this choice, or on the possibility that $\varepsilon \ne 1$.\lbl{varepsilon} \newline (2) The weight of $\sigma(u)$ is inferred from (\ref{sigma-const}), since the weight of $|\omega'|$ is $5+2+1$ and the conjectured weight of $D$ is $72$. The weights of the terms in the exponentials are all $0$, and the weight of $c$ is $72/8-(5+2+1)/2=5$, which coincides with the weight of the terms in the expansion of Theorem \ref{L6.1} if the weight of $\varepsilon$ is $0$.
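These weight assignments can be checked directly on the expansion: if $u_1$, $u_2$, $u_3$ are assigned the weights $5$, $2$, $1$ and each $\mu_k$ the weight $-k$ (the convention consistent with all the formulae above), then every monomial appearing in the $C_j$ of Theorem \ref{L6.1} is homogeneous of total weight $5$; for instance, the term $\tfrac1{12}\,\mu_4{u_3}^4u_1$ of $C_9$ has weight
\begin{equation*}
-4+4\cdot 1+5=5,
\end{equation*}
in accordance with the coefficients of $C_j$ having weight $(5-j)$.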
\end{remark} We shall need later on the following special property of the $\sigma$-function in the purely trigonal case: \begin{lemma} The $\sigma$-function associated with the purely trigonal curve {\rm (\ref{eq5.1})} satisfies $ \sigma([-\zeta]u)=-\zeta\sigma(u)$ for $u \in \mathbb{C}^3$ under the notation {\rm(\ref{eq5.1})}. \lbl{sigsym} \end{lemma} \begin{proof} Since $\Lambda$ is stable under the action of $[\zeta]$ and $[-1]$, we can check the statement by Lemma~\ref{L2.14} and Remark~\ref{R2.15}. \end{proof} \section{Basis of the space $\varGamma(J, \mathcal{O}(n\Theta^{[2]}))$} \label{BasisG} For notational simplicity, we denote \begin{equation} \partial_j=\tfrac{\partial}{\partial u_j}. \lbl{eq4.1} \end{equation} We also define \begin{equation} \wp^{[ij]}=\text{the determinant of the $(i,j)$-(complementary) minor of $[\wp_{ij}]_{3\times 3}$}. \lbl{eq3.2b} \end{equation} We have explicit bases of the vector spaces $\varGamma(J,\mathcal{O} (2\Theta^{[2]}))$ and $\varGamma(J,\mathcal{O}(3\Theta^{[2]}))$ as follows (see also \cite{cn06}, Example in Section 9): \begin{lemma} We have the following\,{\rm :}\lbl{L4.2} \begin{equation*} \begin{aligned} \varGamma(J, \mathcal{O}(2\Theta^{[2]}))& = \mathbb{C} 1 \oplus\mathbb{C} \wp_{11} \oplus\mathbb{C} \wp_{12} \oplus\mathbb{C} \wp_{13} \oplus\mathbb{C} \wp_{22} \oplus\mathbb{C} \wp_{23} \oplus\mathbb{C} \wp_{33} \oplus\mathbb{C} Q_{1333}, \\ \varGamma(J,\mathcal{O}(3\Theta^{[2]}))&= \varGamma(J,\mathcal{O}(2\Theta^{[2]})) \oplus\mathbb{C} \wp_{111} \oplus\mathbb{C} \wp_{112} \oplus\mathbb{C} \wp_{113} \oplus\mathbb{C} \wp_{122} \oplus\mathbb{C} \wp_{123} \\ & \quad \oplus\mathbb{C} \wp_{133} \oplus\mathbb{C} \wp_{222} \oplus\mathbb{C} \wp_{223} \oplus\mathbb{C} \wp_{233} \oplus\mathbb{C} \wp_{333} \oplus\mathbb{C} \partial_1Q_{1333} \oplus\mathbb{C} \partial_2Q_{1333}\\ & \quad \oplus\mathbb{C} \partial_3Q_{1333} \oplus\mathbb{C} \wp^{[11]} \oplus\mathbb{C} \wp^{[12]} \oplus\mathbb{C} \wp^{[13]} \oplus\mathbb{C}
\wp^{[22]} \oplus\mathbb{C} \wp^{[23]} \oplus\mathbb{C} \wp^{[33]}. \end{aligned} \end{equation*} \end{lemma} \begin{proof} We know that the dimensions of the spaces above are $2^3=8$ and $3^3=27$, respectively, by the Riemann--Roch theorem for Abelian varieties (see for example \cite{mu85}, pp.\ 150--155, or \cite{la82}, p.\ 99, Th.\ 4.1). Moreover, (\ref{L2.14.3}) shows that the functions on the right hand sides belong to the spaces on the left hand sides, respectively. For the space $\varGamma(J,\mathcal{O}(2\Theta^{[2]}))$, the space is spanned by $1$, the $\wp_{ij}$, and the $Q_{ijk\ell}$, by Definition \ref{D4.1}, Lemma \ref{L2.14}, and the arguments in the previous section. However, these are not all linearly independent, since there are connecting relations, such as those given in Lemma \ref{L4index}, and the number of these relations is greater than the dimension of the space. Thus the problem is reduced to picking out a linearly independent basis of the function space. Such independence clearly does not depend on the coefficients of the curve, as can be seen by considering the expansions around the origin of $\mathbb C^3$. Hence, by multiplying the functions on the right hand side by $\sigma(u)^2$, expanding with respect to $u_1$, $u_2$, $u_3$, and then putting all the $\mu_j$ equal to zero, we see that the functions on the right hand side are linearly independent. The authors used a computer to check this. Similarly, for the space $\varGamma(J,\mathcal{O}(3\Theta^{[2]}))$, the $27$ functions obtained by multiplying the right hand side by $\sigma(u)^3$ are checked to be linearly independent by computer, expanding the given functions in the Abelian variables (cf.\ Theorem \ref{L6.1}) to sufficiently high order to establish independence. Both decompositions in Lemma \ref{L4.2} can also be seen in the Example in Section 9 of \cite{cn06}.
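As a quick check on the counts, the generators listed in the statement of the lemma match these dimensions exactly:
\begin{equation*}
\underbrace{1+6+1}_{1,\ \wp_{ij},\ Q_{1333}}=8=2^3, \qquad
\underbrace{8+10+3+6}_{\text{above},\ \wp_{ijk},\ \partial_jQ_{1333},\ \wp^{[ij]}}=27=3^3.
\end{equation*}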
\end{proof} \section{The first main addition theorem} \label{Add1} \begin{theorem} The $\sigma$-function associated with {\rm (\ref{eq5.1})} satisfies the following addition formula on $J\times J$\,{\rm :}\lbl{T7.1} \begin{equation} \begin{split} -\frac{\sigma(u+v)\sigma(u-v)}{\sigma(u)^2\sigma(v)^2}&= \wp_{11}(u) -\wp_{11}(v) +\wp_{12}(u)\wp_{23}(v) -\wp_{12}(v)\wp_{23}(u)\\ & +\wp_{13}(u)\wp_{22}(v) -\wp_{13}(v) \wp_{22}(u) +\tfrac13\left(\wp_{33}(u)Q_{1333}(v) -\wp_{33}(v) Q_{1333}(u) \right) \\ & -\tfrac13\mu_1 \left( \wp_{12}(u) \wp_{33}(v) -\wp_{12}(v) \wp_{33}(u) \right) -\mu_1 \left( \wp_{13}(u)\wp_{23}(v) -\wp_{13}(v) \wp_{23}(u) \right)\\ & +\tfrac13 \left({\mu_1}^2 -\mu_2 \right) \left( \wp_{13}(u) \wp_{33}(v) -\wp_{13}(v)\wp_{33}(u) \right) +\tfrac13 \mu_8 \left(\wp_{33}(u)-\wp_{33}(v)\right) \end{split} \end{equation} \end{theorem} \begin{proof} Firstly, we notice that the left hand side is an odd function with respect to $(u,v)\mapsto ([-1]u,[-1]v)$, and that it has poles of order 2 along $(\Theta^{[2]}\times J)\cup(J\times\Theta^{[2]})$ but nowhere else. Moreover it is of weight $-10$. Therefore, by Lemma \ref{L4.2}, the left hand side is expressed by a finite sum of the form \begin{equation} \sum_jA_j\,\big(X_j(u)Y_j(v)-X_j(v)Y_j(u)\big), \lbl{rhs} \end{equation} where the $A_j$ are rational functions of the $\mu_i$s with homogeneous weight, and the $X_j$ and $Y_j$ are functions chosen from the right hand side of the first equality in Lemma \ref{L4.2}. We claim that all the $A_j$ are polynomial in the $\mu_i$s. Suppose all the $A_j$s are reduced fractional expressions, and at least one of the $A_j$s is not a polynomial. Take the least common multiple $B$ of all the denominators of the $A_j$s. Note that there is a set of special values of the $\mu_i$s such that $B$ vanishes and the numerator of at least one $A_j$ does not vanish. 
After multiplying the equation $\mbox{\lq\lq lhs"}{=}$\,(\ref{rhs}) by $B\,\sigma(u)^2\sigma(v)^2$, and taking the $\mu_i$s to be such a zero of $B$, we have a contradiction, by using the linear independency of Lemma \ref{L4.2} twice with respect to the variables $u$ and $v$ for the corresponding curve of (\ref{eq1.1}). Hence, all the $A_j$ must be polynomials. Hence, we see that the desired right hand side must be expressed by using constants $a,b,c,d,e,f,g_1,g_2,h_1,h_2,i_1,i_2,j,k_1,k_2,k_3$ which are polynomials in $\mu_i$s and independent of the $u_i$ and $v_i$, as follows: \begin{equation} \begin{aligned} & a\,[\wp_{11}(u) - \wp_{11}(v)] +b\,[\wp_{12}(u)\wp_{23}(v) - \wp_{12}(v)\wp_{23}(u)] +c\,[\wp_{13}(u)\wp_{22}(v) - \wp_{13}(v)\wp_{22}(u)] \lbl{eq7.2}\\ &\quad+d\,[Q_{1333}(u)\wp_{33}(v) - Q_{1333}(v)\wp_{33}(u)] +e\mu_1[\wp_{12}(u)\wp_{33}(v)-\wp_{12}(v)\wp_{33}(u)] \\ &\quad+f[\wp_{13}(u)\wp_{23}(v)-\wp_{13}(v)\wp_{23}(u)] +g_1[\wp_{13}(u)\wp_{33}(v)-\wp_{13}(v)\wp_{33}(u)]\\ & \quad +g_2[Q_{1333}(u) - Q_{1333}(v)] +h_1[\wp_{23}(u)\wp_{22}(v)-\wp_{23}(v)\wp_{22}(u)] +h_2[\wp_{12}(u)-\wp_{12}(v)]\\ & \quad +i_1[\wp_{22}(u)\wp_{33}(v)-\wp_{22}(v)\wp_{33}(u)] +i_2[\wp_{13}(u)-\wp_{13}(v)]+j[\wp_{23}(u)\wp_{33}(v)-\wp_{23}(v)\wp_{33}(u)]\\ & \quad +k_1[\wp_{22}(u)-\wp_{22}(v)] +k_2[\wp_{23}(u)-\wp_{23}(v)] +k_3[\wp_{33}(u)-\wp_{33}(v)]. \end{aligned} \end{equation} We find by computer using Maple, on substituting the expansion (\ref{L6.1}) up to $C_{13}$ terms of $\sigma(u)$ into (\ref{eq7.2}), and truncating up to weight 18 in the $u_i$ and $v_i$, that \begin{equation} \begin{split} & a=b=c=-1, \quad d=\tfrac13, \quad e=-\tfrac13\mu_1,\quad f=-\mu_1,\quad g_1=\tfrac13 (\mu_1^2-\mu_2),\\ & \quad g_2=h_1=h_2=i_1=i_2=j=k_1=k_2=0, \quad k_3=\tfrac13 \mu_8. \lbl{eq7.3} \end{split} \end{equation} as asserted. In the Maple calculation, it is not necessary to assume the polynomial nature of the coefficients as functions of the $\mu_j$. 
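The coefficient-matching step above can be illustrated in miniature with the genus-one Weierstrass functions, where the analogous unknown constants are fixed by comparing truncated Laurent expansions. The following sympy sketch is only an analogue of, not a substitute for, the Maple calculation; all variable names and the truncation order `K` are ours:

```python
import sympy as sp

z, g2, g3, A, B, C = sp.symbols('z g2 g3 A B C')

# Laurent expansion of the Weierstrass wp-function,
#   wp(z) = z**(-2) + sum_{k>=2} c_k z**(2k-2),
# with the classical recursion c_2 = g2/20, c_3 = g3/28,
# c_k = 3/((2k+1)(k-3)) * sum_{m=2}^{k-2} c_m c_{k-m}.
K = 5
c = {2: g2/20, 3: g3/28}
for k in range(4, K + 1):
    c[k] = sp.Rational(3, (2*k + 1)*(k - 3)) * sum(c[m]*c[k - m] for m in range(2, k - 1))
wp = z**-2 + sum(c[k]*z**(2*k - 2) for k in range(2, K + 1))
wp_d = sp.diff(wp, z)

# Ansatz (wp')**2 = A*wp**3 + B*wp + C with undetermined constants, as in
# the ansatz (eq7.2); match the low-order Laurent coefficients, which are
# exact despite the truncation, and solve the resulting linear system.
expr = sp.expand(wp_d**2 - A*wp**3 - B*wp - C)
eqs = [expr.coeff(z, n) for n in (-6, -2, 0)]
sol = sp.solve(eqs, (A, B, C))
print(sol)  # recovers (wp')^2 = 4 wp^3 - g2 wp - g3
```

The same logic scales to the trigonal case: substitute a sufficiently deep expansion of $\sigma$, clear denominators, and solve the linear system for the undetermined coefficients.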
\end{proof} \begin{remark} By applying \lbl{R7.4} \begin{equation} \frac12\frac{\partial}{\partial u_i} \left( \frac{\partial}{\partial u_j} +\frac{\partial}{\partial v_j} \right)\log \end{equation} to the formula in Theorem \ref{T7.1}, we obtain $-\wp_{ij}(u+v)+\wp_{ij}(u)$ from the left hand side, and a rational expression in several $\wp_{ij\cdots\ell}(u)$s and $\wp_{ij\cdots\ell}(v)$s on the right hand side. Hence, we obtain algebraic addition formulae for the $\wp_{ij}(u)$s. \end{remark} \begin{remark} By putting $v=u-(\delta,0,0)$ and letting $\delta \rightarrow 0$, we can get a ``double-angle'' $\sigma$-formula \lbl{R9.3} \begin{equation} \begin{split} \frac{\sigma(2u)}{\sigma(u)^4}& = -\wp_{111}(u)-\wp_{112}(u)\wp_{23}(u)+ \wp_{12}(u)\wp_{123}(u)-\wp_{113}(u)\wp_{22}(u)+\wp_{13}(u)\wp_{122}(u)\\ &\quad -\tfrac13\wp_{133}(u) Q_{1333}(u) +\tfrac13\wp_{33}(u)\tfrac{\partial}{\partial u_1}Q_{1333}(u) +\tfrac13\mu_1\big(\wp_{112}(u)\wp_{33}(u)-\wp_{12}(u)\wp_{133}(u)\big)\\ &\quad +\mu_1\left(\wp_{113}(u)\wp_{23}(u)-\wp_{13}(u)\wp_{123}(u)\right) -\tfrac13\big({\mu_1}^2 -\mu_2\big) \big( \wp_{113}(u) \wp_{33}(u)-\wp_{13}(u)\wp_{133}(u)\big)\\ &\quad -\tfrac13 \mu_8 \wp_{133}(u). \end{split} \end{equation} In the case of the elliptic curve, the corresponding relation is $\sigma(2u)=-\wp'(u)\sigma^4(u)$, whilst the corresponding formula for the hyperelliptic genus two curve is given in \cite{ba07}, p.~129. \end{remark} \section{The second main addition theorem} \label{Add2} The second main addition result applies only in the purely trigonal case (\ref{eq5.1}), using the results of Lemma \ref{sigsym}.
The formula is as follows: \begin{theorem} The $\sigma$-function associated with {\rm (\ref{eq5.1})} satisfies the following addition formula on $J\times J$\,{\rm :}\lbl{T8.1} \begin{equation} \frac{\sigma(u+v)\sigma(u+[\zeta]v)\sigma(u+[\zeta^2]v)} {\sigma(u)^3\sigma(v)^3} = R(u,v)+R(v,u)\lbl{zeta-add}, \end{equation} where \begin{equation*}\begin{split} R(u,v) &= \tfrac12\wp_{13}(u)\wp_{122}(v) -\tfrac13\wp_{13}(u)\partial_3Q_{1333}(v) -\tfrac34\wp_{23}(u)\wp_{112}(v) -\tfrac12 \wp_{111}(u) +\tfrac14\wp_{122}(u)\wp^{[11]}(v) \\ & \quad -\tfrac14\wp_{222}(u)\wp^{[12]}(v) +\tfrac1{12}\partial_3Q_{1333}(u)\wp^{[11]}(v) +\tfrac12\wp_{333}(u)\wp^{[22]}(v) -\tfrac14 \mu_3 \wp_{333}(u)\wp^{[12]}(v) \\ & \quad+\tfrac12 \mu_6 \wp_{13}(u)\wp_{333}(v) -\tfrac14\mu_9\wp_{23}(u)\wp_{333}(v) -\tfrac12 \mu_{12} \wp_{333}(u). \end{split}\end{equation*} \end{theorem} \begin{proof} Our goal is to express \begin{equation} \frac{\sigma(u+v)\sigma(u+[\zeta]v)\sigma(u+[\zeta^2]v)} {\sigma(u)^3\sigma(v)^3} \lbl{eq8.3} \end{equation} in terms of $\wp$ functions. Because (\ref{eq8.3}) belongs to $\varGamma(J\times J, \mathcal{O}(3((\Theta^{[2]}\times J)\cup(J\times \Theta^{[2]}))))$, a similar argument to that at the beginning of the proof of Th.\ \ref{T7.1} shows that it must be a finite sum of multi-linear forms of the 27 functions in Lemma \ref{L4.2}, namely, of the form \begin{equation} \sum_{j}^{\text{finite sum}}\hskip -3pt C_j\,X_j(u)Y_j(v), \lbl{eq8.4} \end{equation} where $X_j$ and $Y_j$ are any of the functions appearing in the right hand side of the description of $\varGamma(J,\mathcal{O}(3\Theta^{[2]}))$ in Lemma \ref{L4.2}, and the $C_j$ are polynomials in the $\mu_i$s. Moreover, (\ref{eq8.3}) has the following properties: \begin{enumerate} \item[L1.] As a function on $J\times J$, its weight is $(-5)\times 3=-15$; \item[L2.] It is invariant under $u\mapsto [\zeta]u$ (resp. $v\mapsto [\zeta]v$); \item[L3.] It has a pole of order $3$ on $(\Theta^{[2]}\times J)\cup(J\times \Theta^{[2]})$; \item[L4.]
It is invariant under the exchange $u\leftrightarrow v$ (by Lemma \ref{sigsym}). \end{enumerate} Hence, (\ref{eq8.4}) has the same properties. Thus, we may consider only the functions in our basis of $\varGamma(J, \mathcal{O} (3\Theta^{[2]}))$ that have the following corresponding properties: \begin{itemize} \item[R1.] The weight is greater than or equal to $(-5)\times 3=-15$; \item[R2.] They are invariant under $u\mapsto [\zeta]u$; \item[R3.] They have poles of order at most $3$ on $\Theta^{[2]}$. \end{itemize} There are 12 such functions and they are listed as follows: \begin{align*} & 1, && \wp_{13} \quad (\text{weight}=-6), && \wp_{23} \quad (\text{weight}=-3), \\ & \wp_{111} \quad (\text{weight}=-15), && \wp_{112} \quad (\text{weight}=-12), && \wp_{122} \quad (\text{weight}=-9), \\ & \wp_{222} \quad (\text{weight}=-6), && \wp_{333} \quad (\text{weight}=-3), && \wp^{[22]} \quad (\text{weight}=-12), \\ & \wp^{[12]} \quad (\text{weight}=-9), && \wp^{[11]} \quad (\text{weight}=-6), && \end{align*} \[ \partial_3 Q_{1333}= -6(\wp_{13}\wp_{333}-\wp_{133}\wp_{33})-3\wp_{122} \quad (\text{weight}=-9), \] and the $\wp^{[ij]}$ are defined in (\ref{eq3.2b}). Here the last equality is given by cross-differentiation from $\partial_1 Q_{3333}$ using the first of the relations in Lemma \ref{L4index} with $\mu_1=\mu_2=\mu_4=0$. 
Since (\ref{eq8.3}) is an even function, it must be of the form \begin{equation} \frac{\sigma(u+v)\sigma(u+[\zeta]v)\sigma(u+[\zeta^2]v)} {\sigma(u)^3\sigma(v)^3} = \tilde R(u,v)+ \tilde R(v,u),\lbl{eq8.6} \end{equation} where \begin{align*} \tilde R(u,v) &= a_1\wp_{13}(u)\wp_{122}(v) +a_2\wp_{13}(u)\,\partial_3Q_{1333}(v) +a_3\wp_{23}(u)\wp_{112}(v) +a_4\wp_{111}(u) \\ & \quad +a_5\wp_{122}(u)\wp^{[11]}(v) +a_6\wp_{222}(u)\wp^{[12]}(v) +a_7\,\partial_3Q_{1333}(u)\wp^{[11]}(v) +a_8\wp_{333}(u)\wp^{[22]}(v) \\ & \quad +b_1\wp_{13}(u)\wp_{222}(v) +b_2\wp_{23}(u)\wp_{122}(v) +b_3\wp_{23}(u)\,\partial_3Q_{1333}(v) +b_4\wp_{112}(u)\\ & \quad +b_5\wp_{222}(u)\wp^{[11]}(v) +b_6\wp_{333}(u)\wp^{[12]}(v) +c_1\wp_{13}(u)\wp_{333}(v) +c_2\wp_{23}(u)\wp_{222}(v)\\ & \quad +c_3\wp_{122}(u) +c_4\wp_{333}(u)\wp^{[11]}(v) +c_5\,\partial_3Q_{1333}(u) +d_1\wp_{23}(u)\wp_{333}(v) +d_2\wp_{222}(u) \\ & \quad +e_1\wp_{333}(u). \end{align*} By substituting (\ref{L6.1}) into (\ref{eq8.6}), and comparing coefficients of different monomials in $u_i,v_j$, we can find the constants $a_1$, $\cdots$, $e_1$ depending on the $\mu_k$s. Again, in this lengthy Maple calculation, it is not necessary to assume the coefficients are polynomials in the $\mu_i$. \end{proof} \begin{remark} By applying \begin{equation} \frac13\left( \frac{\partial^2}{\partial u_i\partial u_j} +\frac{\partial^2}{\partial u_i\partial v_j} +\frac{\partial^2}{\partial v_i\partial v_j} \right)\log \end{equation} to (\ref{zeta-add}), we obtain algebraic addition formulae for standard Abelian functions, which would be interesting to compare with those of Remark \ref{R7.4}.
\end{remark} \begin{remark} By putting $v=-u+(\delta,0,0)$ into (\ref{zeta-add}), dividing through by $\delta$ and letting $\delta \rightarrow 0$, we can get an unusual ``shifted'' $\sigma$-formula of the form \lbl{R10.2} \begin{align} &-\frac{\sigma(u-[\zeta]u)\sigma(u-[\zeta^2]u)}{\sigma(u)^6} =\sum_{i=1}^{12} c_i \left[g_i(u) \partial_1f_i(u)-f_i(u)\partial_1 g_i(u)\right],\lbl{shifted} \end{align} where the $f_i$ and the $g_i$ are the even and odd derivative components respectively of the formula in (\ref{zeta-add}), i.e.\ as given in the following table \begin{equation*} \setlength{\extrarowheight}{2pt} \begin{array}{c|c|c||c|c|c}c_i & f_i & g_i& c_i & f_i & g_i \\\hline \tfrac12 & \wp_{13}(u) & \wp_{122}(u) & -\tfrac13 & \wp_{13}(u) & \partial_3Q_{1333}(u) \\ -\tfrac34 & \wp_{23}(u) & \wp_{112}(u) & -\tfrac12 & 1 & \wp_{111}(u) \\ \tfrac14 &\wp^{[11]}(u)& \wp_{122}(u) & -\tfrac14 & \wp^{[12]}(u)& \wp_{222}(u) \\ \tfrac1{12} &\wp^{[11]}(u)& \partial_3Q_{1333}(u) & \tfrac12 & \wp^{[22]}(u)& \wp_{333}(u)\\ -\tfrac14\mu_3 &\wp^{[12]}(u)& \wp_{333}(u) &\tfrac12\mu_6& \wp_{13}(u) & \wp_{333}(u) \\ -\tfrac14\mu_9 & \wp_{23}(u) & \wp_{333}(u) &-\tfrac12 \mu_{12} & 1 & \wp_{333}(u) \setlength{\extrarowheight}{0pt} \end{array}\end{equation*} \end{remark} \begin{remark} In the general elliptic case, there appear to be no formulae corresponding to (\ref{zeta-add}) and (\ref{shifted}). However, for the specialized {\em equianharmonic case}, where $\wp$ satisfies \[ (\wp')^2 = 4 \wp^3 - g_3, \] it is straightforward to show that \[ \frac{\sigma(u+v)\sigma(u+\zeta v)\sigma(u+\zeta^2 v)} {\sigma^3(u)\sigma^3(v)} = -\tfrac12 (\wp'(u) + \wp'(v)), \] and \[ \frac{\sigma\left((1-\zeta)u\right)\sigma\left((1-\zeta^2)u\right)} {\sigma^6(u)} = 3 \wp^2(u). \] These seem to be just the first of a family of multi-term addition formulae on special curves with automorphisms, which will be discussed in more detail elsewhere.
\end{remark} \section*{Acknowledgements} \addcontentsline{toc}{section}{Acknowledgements} This paper was started during a visit by the authors to Tokyo Metropolitan University in 2005, supported by JSPS grant 16540002. We would like to express our thanks to Prof.\ M.\ Guest of TMU who helped organize this visit. The work continued during a visit by VZE to Heriot-Watt University under the support of the Royal Society. Further work was done whilst three of the authors (JCE, VZE, and EP) were attending the programme in Nonlinear Waves at the Mittag-Leffler Institute in Stockholm in 2005, and we would like to thank Professor H.\ Holden of Trondheim and the Royal Swedish Academy of Sciences for making this possible [EP, being then on leave from Boston University, is grateful for NSA grant MDA904-03-1-0119 which supported her doctoral students who were performing related research]. The authors are also grateful for a number of useful discussions with Prof.~A.~Nakayashiki, and Drs.~John Gibbons and Sadie Baldwin. In particular we are grateful to John Gibbons for pointing out the possibility of the relations described in Remarks \ref{R9.3} and \ref{R10.2}. We are grateful to Mr Matthew England for pointing out a number of typos in various versions of this manuscript. Some of the calculations described in this paper were carried out using Distributed Maple \cite{smb03}, and we are grateful to the author of this package, Professor Wolfgang Schreiner of RISC-Linz, for help and advice. Finally, we would like to express special thanks to the referees for constructive suggestions to improve the paper, in particular for pointing out some crucial gaps in the main theorems and for giving hints how to fill them. \parindent=0pt
\section{Introduction} \label{sec1} In recent years, there has been a great deal of activity on the Ir-based transition metal oxides. Due to the strong spin-orbit coupling (SOC) of their $5d$ electrons, many novel phases, theoretical models and experiments have been proposed and discovered in these Ir-based materials~\cite{2014ARCMP...5...57W,2016ARCMP...7..195R,2019arXiv190308081T,Schaffer_2016}. Among them, for example, a quantum spin liquid phase was proposed for the Ir-based hyperkagom\'{e} lattice in Na$_4$Ir$_3$O$_8$~\cite{PhysRevLett.99.137207}, and a ferromagnetic ground state with a large ferromagnetic moment was discovered in Sr$_2$IrO$_4$, where the Ir$^{4+}$ ions form a square lattice~\cite{PhysRevB.57.R11039}. In these Mott insulating systems, the presence of strong SOC drastically changes the local spin physics. The local moment of the magnetic ion Ir$^{4+}$ is an effective ${J = 1/2}$ moment~\cite{PhysRevLett.99.137207,hyperk_chen,PhysRevLett.102.017205} describing a local spin-orbital doublet, rather than the usual electron spin ${S = 1/2}$ of systems with a weak SOC. The existence of local spin-orbital doublets has been detected by a resonant X-ray scattering experiment in Sr$_2$IrO$_4$~\cite{xray}. As a consequence, non-trivial exchange interactions can arise from the mixing of spin and orbitals~\cite{PhysRevLett.99.137207,PhysRevLett.105.027204}. Even though a superexchange description of the Ir local moments is often used for iridates, most well-known Mott insulating iridates are actually weak Mott insulators with quasi-itinerant $5d$ electrons and small charge gaps. This weak Mott insulating nature has not been much emphasized in the literature, and we think it may be important in understanding some of the physical properties of iridates and related materials. What is electron quasi-itinerancy?
Quasi-itinerancy is the key property of the electrons in the weak Mott regime, where the Mott gap is not large enough to fully localize the electron on a single lattice site, so that the electron remains spatially delocalized to a finite extent due to the small charge gap. Electron quasi-itinerancy is believed to be the driving force for the possible spin liquid phase in the weak Mott regime of $\kappa$-(ET)$_2$Cu$_2$(CN)$_3$ and EtMe$_3$Sb[Pd(dmit)$_2$]$_2$~\cite{PhysRevB.72.045105,PhysRevLett.95.036403}. There, the electron quasi-itinerancy generates frustrated ring exchange interactions that suppress the magnetic orders. A similar kind of electron quasi-itinerancy~\cite{PhysRevLett.108.247215,PhysRevLett.107.186403,PhysRevLett.94.156402}, emphasizing different outcomes of the charge fluctuations, has been discussed in various spinels and osmate pyrochlores. Thus, besides the prevailing strong coupling perspectives, the weak to intermediate coupling perspective is both complementary and exciting. Ref.~\onlinecite{PesinBalents} applied a slave-rotor mean field theory to study the Mott transition in a series of rare-earth based pyrochlore iridates R$_2$Ir$_2$O$_7$. They found a topological band insulator in the non-interacting limit and a novel topological Mott insulator in the intermediate coupling regime. Several other groups re-examined the problem with more realistic Hamiltonians and discovered various magnetically ordered phases, as well as an interesting Weyl semi-metal phase located in the narrow regime separating the topological band insulator or metal phase from the strong coupling Mott insulating phase~\cite{PhysRevLett.109.066401,PhysRevB.85.045124,PhysRevB.83.205101}.
Aligned with the above theoretical efforts, the experiments discovered that a metal-insulator transition in R$_2$Ir$_2$O$_7$ (R = Nd, Sm, Y and Eu) involves a magnetic ordering produced by the $5d$ electrons in Ir~\cite{PhysRevB.85.245109,PhysRevLett.109.136402,PhysRevB.85.214434,PhysRevB.86.014428,PhysRevB.85.205104,PhysRevB.87.100403,PhysRevLett.117.037201,PhysRevLett.115.056402,PhysRevB.89.140413,PhysRevB.89.115111,PhysRevLett.117.056403,PhysRevB.88.060411,PhysRevB.89.075127,PhysRevLett.114.247202,PhysRevB.87.060403,PhysRevB.90.235110,PhysRevB.83.180402,PhysRevB.94.161102,PhysRevB.92.094405,PhysRevB.96.094437,PhysRevMaterials.2.011402,2020arXiv200512768W,PhysRevB.101.121101,PhysRevB.96.144415}. Moreover, an exotic spin liquid metallic phase was also proposed experimentally for Pr$_2$Ir$_2$O$_7$~\cite{PhysRevLett.96.087204,Machida}. Now, the Ir electrons are proposed as Luttinger semimetal~\cite{2015NatCo...610042K,PhysRevX.4.041027,PhysRevLett.111.206401}, while the Pr spin is proposed to be proximate to a transition between a U(1) spin liquid and an ordered spin ice~\cite{PhysRevX.8.041039,PhysRevB.94.205107,2017arXiv171107813O,PhysRevB.92.054432}. Based on the existing theoretical~\cite{PesinBalents,kim_distortion,PhysRevB.85.045124,PhysRevB.86.235129,PhysRevB.96.195158,PhysRevB.91.115124,PhysRevX.8.041039,PhysRevLett.112.246402,PhysRevLett.109.066401,PhysRevB.87.214416,PhysRevLett.111.206401,PhysRevB.95.045133,PhysRevX.4.041027} and experimental works, the true magnetic state of these Ir-based pyrochlore systems remains open. In the present paper, we address this problem and provide some understanding. We primarily focus on the magnetic properties and avoid touching the band structure topology that has been invoked in early works. We first explore the magnetic properties of the Ir-based pyrochlore lattice in the strong coupling regime. 
Physically, the $5d$ electron orbitals of Ir$^{4+}$ are spatially extended, which enhances the electron bandwidth, so these Ir-based systems are usually considered to be in the intermediate coupling regime. Nevertheless, the SOC could enhance the correlation by suppressing the bandwidth~\cite{PesinBalents}. Moreover, certain magnetic properties established in the strong coupling limit could persist into the intermediate coupling regime. In the strong coupling limit, the effective ${J = 1/2}$ moments of the Ir$^{4+}$ ions are coupled by the superexchange interaction. We analyze the symmetry-allowed exchange Hamiltonian, which includes three types of pairwise terms: the Heisenberg exchange, the antisymmetric Dzyaloshinskii-Moriya (DM) interaction, and the symmetric pseudo-dipolar (PD) interaction. This model is equivalent to the one that was used for interacting Kramers doublets in the rare-earth pyrochlores. In the mean-field phase diagram, we find five different ordered phases (see Sec.~\ref{sec2}): the 4-in-4-out state, a continuously degenerate state spanned by two basis vectors ${({\bf v}_1, {\bf v}_2)}$, a weakly ferromagnetic (FM) state, and two coplanar states with spins oriented along particular $[110]$ directions. Almost all these ordered states have the magnetic wavevector ${{\bf q} = 0}$. For the realistic exchange model obtained from an extended Hubbard model relevant for R$_2$Ir$_2$O$_7$, there are only two ordered phases: the 4-in-4-out state and the continuously degenerate manifold spanned by the two basis vectors ${({\bf v}_1, {\bf v}_2)}$. For the latter, we find within a linear spin-wave expansion that quantum fluctuations select a non-coplanar spin configuration. This is the mechanism of quantum order by disorder.
For the intermediate coupling regime, we apply a self-consistent mean-field theory to the microscopic Hubbard model and allow a general magnetic configuration, restricted only to have the same magnetic cell as the crystallographic cell (i.e.\ ${{\bf q} = 0}$ order). Again, we find the system is ``fluctuating'' within the continuously degenerate manifold spanned by ${({\bf v}_1, {\bf v}_2)}$, and the electron kinetic energy selects the magnetic orders. The electron kinetic energy is important here due to the quasi-itinerancy in the weak Mott regime. We find that the magnetic orders of the strong coupling regime persist into the intermediate coupling regime. Since it is unclear which regime the actual system is proximate to, it is reasonable to think that the electron quasi-itinerancy is intertwined with the quantum order by disorder here. In the following, we outline the main content of this paper. In Sec.~\ref{sec2}, we study a generic symmetry-allowed exchange Hamiltonian on the pyrochlore lattice with effective spin-1/2 moments originating from Kramers' degeneracy, relevant for R$_2$Ir$_2$O$_7$ in the strong coupling regime. The exchange Hamiltonian contains four symmetry-allowed coupling parameters: the Heisenberg exchange $J_0$, the DM interaction $D$, and $\Gamma_1$, $\Gamma_2$ for the PD interaction. We analyze this Hamiltonian with the mean-field method in different parameter regimes. In many parts of the phase diagram, the ground state can be understood as simultaneously optimizing different terms of the Hamiltonian. In Sec.~\ref{sec3}, we derive a realistic exchange model from the extended Hubbard model. Two limits, with dominant direct or indirect electron tunneling via the intermediate oxygens, are considered. In both cases, we find there is only one mean-field phase, which is the continuously degenerate manifold ${({\bf v}_1, {\bf v}_2)}$.
We then implement the linear spin-wave theory, and a non-coplanar ground state is favored by this quantum order by disorder mechanism. For a certain intermediate regime with comparable direct and indirect electron tunnelings, the 4-in-4-out state is favored. We further explore the magnetic properties of the Hubbard model in the intermediate coupling regime. By assuming a ${\bf q} = 0$ magnetic structure, we implement a Hartree-Fock type of self-consistent mean-field theory for the interaction. Finally, in Sec.~\ref{sec4}, we discuss the relevant experiments and other related works. \section{The generic exchange Hamiltonian} \label{sec2} In this section, we analyze the Ir-based pyrochlore lattice in the strong coupling regime. In the strong coupling limit, the local effective spin moments are coupled by an exchange Hamiltonian. For the effective spin-1/2 moment describing the local Kramers doublets, the exchange interaction is guaranteed to be pairwise. The generic exchange Hamiltonian has the following form \begin{eqnarray} {\mathcal H}_{\text{ex}} &=& \sum_{\langle ij \rangle} J_0 ( {\bf J}_i \cdot {\bf J}_j ) + {\bf D}_{ij} \cdot ( {\bf J}_i \times {\bf J}_j ) + { {\Gamma}}^{\mu\nu}_{ij} J^{\mu}_i J^{\nu}_j, \label{eq:exchange} \end{eqnarray} where the nearest-neighbor interaction is assumed, $J_0$ is the isotropic Heisenberg exchange, ${\bf D}_{ij}$ describes the antisymmetric Dzyaloshinskii-Moriya (DM) interaction, and ${\Gamma}_{ij}^{\mu\nu}$ is the symmetric pseudo-dipolar (PD) interaction. This form of decomposition is well known in the older literature on magnetism~\cite{tmo}, but is less common in more recent works. The Kitaev interaction, or any other anisotropic exchange interaction, can be cast into this form, as long as it is a {\sl pairwise} interaction. As a general rule of thumb, for systems with a weak SOC, the DM interaction is weaker than the Heisenberg part, and the PD interaction is weaker still.
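The three terms of ${\mathcal H}_{\text{ex}}$ are just the trace, antisymmetric, and symmetric-traceless parts of a general $3\times 3$ bond coupling matrix, so the decomposition is unique. A minimal numpy sketch of this splitting (function and variable names are ours, not from the paper):

```python
import numpy as np

def decompose_exchange(K):
    """Split a 3x3 coupling matrix K (bond energy S_i^T K S_j) into the
    Heisenberg scalar J0, the DM vector D, and the symmetric traceless
    pseudo-dipolar matrix Gamma."""
    J0 = np.trace(K) / 3.0                       # isotropic (Heisenberg) part
    A = (K - K.T) / 2.0                          # antisymmetric part
    D = np.array([A[1, 2], A[2, 0], A[0, 1]])    # reproduces D . (S_i x S_j)
    Gamma = (K + K.T) / 2.0 - J0 * np.eye(3)     # symmetric traceless part
    return J0, D, Gamma

rng = np.random.default_rng(0)
K = rng.normal(size=(3, 3))
J0, D, Gamma = decompose_exchange(K)
Si, Sj = rng.normal(size=3), rng.normal(size=3)
# the three pieces reproduce the full bond energy for arbitrary spins
assert np.isclose(Si @ K @ Sj,
                  J0 * Si @ Sj + D @ np.cross(Si, Sj) + Si @ Gamma @ Sj)
assert np.isclose(np.trace(Gamma), 0) and np.allclose(Gamma, Gamma.T)
```

Since the split is an orthogonal decomposition of the nine matrix entries, any pairwise anisotropic exchange (Kitaev included) lands uniquely in these three channels.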
Thus, for most magnetic systems composed of $3d$ transition metal ions, the DM interaction and PD interaction are expected to be much weaker than the Heisenberg exchange and hence can be neglected at the lowest order of approximation. For systems with strong SOC, such as the iridates here, there is no general rule of thumb, and all the interactions could be of similar magnitude: for Ir-based magnets, or other magnetic systems formed by $4d$/$5d$ transition metal ions, SOC is quite strong and the local moment is a mixture of spin and orbitals. As a result, the exchange interaction is usually very non-Heisenberg-like, and the anisotropic exchanges (such as the DM and PD interactions) can be quite significant. \begin{figure}[t] \includegraphics[width=8cm]{fig1.pdf} \caption{(Color online) The pyrochlore lattice in the global cubic coordinate system. ``0,1,2,3'' label the four sublattices.} \label{fig1} \end{figure} Throughout this section, we assume an antiferromagnetic Heisenberg part with ${J_0 >0}$. Since most R$_2$Ir$_2$O$_7$ (and also spinel AB$_2$X$_4$) compounds have the space group Fd$\bar{3}$m, this space group symmetry restricts the allowed forms of the DM interaction and PD interaction. Therefore, for the bond connecting the sublattice 0 with the sublattice 1 (see Fig.~\ref{fig1}), we have, \begin{eqnarray} {\bf D}_{01} &=& D (0,\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}), \\ \Gamma_{01} &=& \left[ \begin{array}{ccc} -2\Gamma_1 & 0 & 0 \\ 0 & \Gamma_1 & -\Gamma_2 \\ 0 & -\Gamma_2 & \Gamma_1 \end{array} \right], \end{eqnarray} where the matrix $\Gamma_{01}$ is required to be symmetric and traceless, as the trace part of the full interaction is taken care of by the Heisenberg term and the antisymmetric part by the DM interaction. Exchange interactions on the other bonds can be generated by cubic permutations. \begin{figure*}[t] \includegraphics[width=18cm]{fig2.pdf} \caption{(Color online.)
The spin configuration on each sublattice for different phases.} \label{fig2} \end{figure*} Although the exchange Hamiltonian in Eq.~\eqref{eq:exchange} is introduced for the Ir-based pyrochlore lattice, it is widely applicable to other pyrochlore systems with the same symmetry properties, as long as the local moment is a Kramers spin-1/2 doublet. Our results apply to these contexts as well. In fact, this model is equivalent to the one that was used for the rare-earth pyrochlore material Yb$_2$Ti$_2$O$_7$, where some detailed analyses were given in Refs.~\onlinecite{PhysRevX.1.021002,PhysRevB.95.094422}. There, a local coordinate system was used for each pyrochlore sublattice and the local moment is the Kramers doublet of the Yb$^{3+}$ ion, while here we use a global cubic coordinate system for the Ir$^{4+}$ effective spin-1/2 moments. In the next subsection, we analyze the mean-field ground states of this general Hamiltonian and clarify the role of the different anisotropic interactions. \subsection{Role of Dzyaloshinskii-Moriya interaction} Here we consider the role of the Dzyaloshinskii-Moriya interaction on top of the Heisenberg interaction and set ${ \Gamma_1=\Gamma_2 = 0}$. Classically, it is well known that the pyrochlore lattice is highly frustrated, having a macroscopic ground state degeneracy for the nearest-neighbor Heisenberg model. The presence of the anisotropic exchange lifts this classical ground state degeneracy. Ref.~\onlinecite{pyrochlore_DM} has already studied the role of the DM interaction by mean-field theory and classical Monte Carlo simulation. Our mean-field analysis below, which treats the effective spin ${\boldsymbol J}_i$ as a classical vector, is consistent with their results. With a direct DM interaction, which corresponds to ${D<0}$ in the present work, the ground state is 2-fold degenerate (the two states being related by time reversal) with the magnetic ordering wavevector ${\bf q} = {\bf 0}$.
The magnetic unit cell coincides with the crystallographic one and the four spins on the unit cell are \begin{eqnarray} \Psi &\equiv& ({\bf J}_0, {\bf J}_1, {\bf J}_2, {\bf J}_3 ) \nonumber \\ &=& \frac{1}{\sqrt{3}} (111,1\bar{1}\bar{1},\bar{1}1\bar{1}, \bar{1}\bar{1}1). \end{eqnarray} Here we define a vector $\Psi$ for the four spin vectors on the elementary tetrahedron. This is the simple 4-in-4-out state. For the indirect DM interaction with ${D>0}$, DM interaction only partially lifts the ground state degeneracy. There are two sets of ground states, coplanar and non-coplanar states, both of which have a magnetic wavevector ${{\bf q} = {\bf 0} }$. The 4-spin vector $\Psi$ of the coplanar ground states can be constructed as linear superpositions of the following two basis vectors ${\bf u}_1$ and ${\bf u}_2$ (or their equivalence under discrete symmetry operations) \begin{eqnarray} {\bf u}_1&=& (100,010,0\bar{1}0,\bar{1}00) , \label{eq:basiscp1} \\ {\bf u}_2 &=& (010,\bar{1}00,100,0\bar{1}0). \label{eq:basiscp2} \end{eqnarray} The non-coplanar states are constructed from the following two basis vectors ${\bf v}_1$ and ${\bf v}_2$ (or their symmetry equivalence) \begin{eqnarray} {\bf v}_1 &=& \frac{1}{ \sqrt{2} } (\bar{1}10,\bar{1}\bar{1}0,110,1\bar{1}0) , \label{eq:basis1_dm} \\ {\bf v}_2 &=& \frac{1}{\sqrt{6} } (\bar{1}\bar{1}2,\bar{1}1\bar{2},1\bar{1}\bar{2},112). \label{eq:basis2_dm} \end{eqnarray} Here, when only the first basis vector ${\bf v}_1$ is chosen, the ground state is a special coplanar state with spin orient along different $[110]$ lattice directions. Both the coplanar and non-coplanar degenerate ground state manifolds have an accidental U$(1)$ degeneracy with one continuous degree of freedom. This degenerate spin manifold is actually identical to the one that was proposed for the rare-earth pyrochlore Er$_2$Ti$_2$O$_7$, and are selected via the quantum order by disorder mechanism~\cite{PhysRevLett.109.167201}. 
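Two of the statements above are easy to verify numerically: the 4-in-4-out state has zero net moment and uniform pairwise dot products $-1/3$ (hence Heisenberg energy $-2J_0$ per tetrahedron), and because ${\bf v}_1$ and ${\bf v}_2$ are orthogonal unit vectors on every sublattice, any combination $\cos\theta\,{\bf v}_1+\sin\theta\,{\bf v}_2$ is again a unit-spin configuration, which is exactly the accidental U$(1)$ degeneracy. A short numpy check (array layout is ours):

```python
import numpy as np

# 4-in-4-out state Psi = (111, 1-1-1, -11-1, -1-11)/sqrt(3) on one tetrahedron
psi = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
assert np.allclose(psi.sum(axis=0), 0)                      # zero net moment
pair_dots = [psi[i] @ psi[j] for i in range(4) for j in range(i + 1, 4)]
assert np.allclose(pair_dots, -1/3)
# Heisenberg energy per tetrahedron: J0 * sum_{i<j} S_i.S_j = -2 J0,
# the minimum of (J0/2)(|sum_i S_i|^2 - 4) for antiferromagnetic J0 > 0.

# basis vectors v1, v2 of the degenerate non-coplanar manifold
v1 = np.array([[-1, 1, 0], [-1, -1, 0], [1, 1, 0], [1, -1, 0]]) / np.sqrt(2)
v2 = np.array([[-1, -1, 2], [-1, 1, -2], [1, -1, -2], [1, 1, 2]]) / np.sqrt(6)
assert np.allclose((v1 * v2).sum(axis=1), 0)                # sitewise orthogonal
for th in np.linspace(0, 2 * np.pi, 13):
    config = np.cos(th) * v1 + np.sin(th) * v2
    assert np.allclose(np.linalg.norm(config, axis=1), 1.0)  # unit spins for all theta
```

The same sitewise-orthonormality argument applies to the coplanar basis $({\bf u}_1,{\bf u}_2)$, so that manifold also carries one continuous degree of freedom.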
\subsection{Role of pseudo-dipolar interaction: case 1} In this and next subsections, we study the role of the PD interaction. We first consider the regime with ${D=0}$, ${\Gamma_1 \neq 0}, {\Gamma_2 =0}$. For ${\Gamma_1>0}$, we find that optimal spin configurations have magnetic wavevector ${{\bf q}={\bf 0}}$. Even though the Hamiltonian breaks the spin rotation symmetry completely, the ground state manifold has an accidental O$(3)$ degeneracy. The 4-spin vector $\Psi$ of the ground states is an arbitrary linear superposition of the following three basis vectors ${\bf w}_1$, ${\bf w}_2$ and ${\bf w}_3$, \begin{eqnarray} {\bf w}_1 &=&(100,100,\bar{1}00,\bar{1}00), \\ {\bf w}_2 &=&(010,0\bar{1}0,010,0\bar{1}0), \\ {\bf w}_3 &=& (001, 00\bar{1}, 00\bar{1},001). \end{eqnarray} For ${\Gamma_1 <0}$, to simultaneously optimize the energy and satisfy the hard spin constraint, there only exist two sets of ground states. One has the magnetic wavevector ${\bf q} = {\bf 0}$. Similar to the case with ${\Gamma_1 >0}$, the ground state spin configuration has O$(3)$ degeneracy and the 4-spin vector $\Psi$ is constructed from the following three basis vectors ${\bf z}_1, {\bf z}_2$ and ${\bf z}_3$ (or their symmetry equivalence), \begin{eqnarray} {\bf z}_1 &=& ( 100, \bar{1}00, \bar{1}00, 100 ), \\ {\bf z}_2 &=& ( 0{1}0, 0\bar{1}0, 0\bar{1}0, 0{1}0 ), \\ {\bf z}_3 &=& ( 001, 001, 00\bar{1}, 00\bar{1} ). \end{eqnarray} The other set of ground states has the magnetic wavevector ${{\bf q} = 2\pi(100)}$ or its cubic equivalences. Although the magnetic unit cell doubles the size of crystallographic cell, the spin configuration can still be fully described within one tetrahedron and the 4-spin vector $\Psi$ is given as \begin{equation} \Psi = (\bar{1}00,100,\bar{1}00,100), \end{equation} and the spin configuration of other sites is generated from this and the ordering wavevector. 
\subsection{Role of pseudo-dipolar interaction: case 2} Here we consider the parameter regime with ${D=0}$, $ {\Gamma_1 = 0}$, $ {\Gamma_2 \neq 0}$. For ${\Gamma_2<0}$, the ground state is the same as the one for ${D<0}$, which is the 4-in-4-out state. For $\Gamma_2>0$, the anisotropy does not lift the classical degeneracy of the nearest-neighbor Heisenberg model on the pyrochlore lattice. \subsection{With both Dzyaloshinskii-Moriya and pseudo-dipolar interactions} \label{sec:sec2D} In this subsection, we study the classical phase diagram when two of the anisotropic exchanges are present simultaneously. We start from the $D$-$\Gamma_1$ plane with ${\Gamma_2 = 0}$. The phase diagram is depicted in Fig.~\ref{fig3}. In all parts of the phase diagram, the magnetic wavevector is ${{\bf q} = {\bf 0}}$. Most parts of the phase diagram can be understood as the intersection of the two ground state manifolds separately favored by $D$ and $\Gamma_1$, which have already been discussed in detail in the previous subsections. \begin{figure}[b] \includegraphics[width=7.5cm]{fig3.pdf} \caption{The mean-field phase diagram in the $D$-$\Gamma_1$ plane with ${\Gamma_2= 0}$. The corresponding spin configurations are found in Fig.~\ref{fig2}.} \label{fig3} \end{figure} For ${D<0, \Gamma_1 >0}$, the 4-in-4-out state is favored. For ${D>0, \Gamma_1 >0}$, the classical ground states are constructed as linear superpositions of the same two basis vectors ${\bf v}_1$ and ${\bf v}_2$ that were introduced in Eqs.~\eqref{eq:basis1_dm} and \eqref{eq:basis2_dm} for the case of ${D>0}$. For ${D>0, \Gamma_1<0}$, the ground state is a coplanar state with the spins pointing along different [110] directions (denoted as ``coplanar-[110]'' in Fig.~\ref{fig3}), whose 4-spin vector $\Psi$ can be constructed from the basis vectors ${\bf u}_1$ and ${\bf u}_2$ in Eqs.~\eqref{eq:basiscp1} and \eqref{eq:basiscp2}, \begin{equation} \Psi = \frac{1}{\sqrt{2}} (110,\bar{1}10,1\bar{1}0,\bar{1}\bar{1}0).
\end{equation} For ${D<0}, {\Gamma_1<0}$, the $D$-favored and $\Gamma_1$-favored ground state manifolds have no overlap. We find that, when ${D<3\sqrt{2}\Gamma_1}$, the DM interaction carries more weight in the Hamiltonian and the ground state is the 4-in-4-out state; in the opposite case, the ground state is a coplanar state (denoted as ``coplanar$^{\ast}$-[110]'' in Fig.~\ref{fig3}) whose 4-spin vector is given by \begin{equation} \Psi = \frac{1}{\sqrt{2}} (1\bar{1}0,\bar{1}\bar{1}0,110,\bar{1}10). \label{eq:noncoplanar2} \end{equation} Note that this coplanar state is distinct from the ``coplanar-[110]'' state found for $D>0, \Gamma_1<0$. Now we discuss the ground states in the $D$-$\Gamma_2$ plane with ${\Gamma_1 = 0}$. The phase diagram is depicted in Fig.~\ref{fig4}. The magnetic wavevector is ${{\bf q} = {\bf 0}}$ everywhere in the phase diagram. \begin{figure}[t] \includegraphics[width=7.5cm]{fig4.pdf} \caption{ The mean-field phase diagram in the $D$-$\Gamma_2$ plane with ${\Gamma_1= 0}$. The corresponding spin configurations are found in Fig.~\ref{fig2}. } \label{fig4} \end{figure} For ${D<0, \Gamma_2 <0}$, the ground state is simply the 4-in-4-out state. For ${D>0, \Gamma_2 >0}$, the ground state is an arbitrary linear superposition of the basis vectors ${\bf v}_1$ and ${\bf v}_2$ in Eqs.~\eqref{eq:basis1_dm} and \eqref{eq:basis2_dm}. In the regime of ${D>0, \Gamma_2<0}$, there exist two phases. When ${D>D_{c1}(\Gamma_2)}$ with \begin{equation} D_{c1}(\Gamma_2) = \frac{\sqrt{2}}{6} (3J_0-2\Gamma_2 - \sqrt{9J_0^2 - 6J_0 \Gamma_2 +4 \Gamma_2^2}) , \end{equation} the ground state turns out to be weakly ferromagnetic and is denoted as ``weak FM'' in Fig.~\ref{fig4}.
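The ``weak FM'' state is parameterized below by a canting angle $\theta$ between an in-plane antiferromagnetic component $y_1$ and a uniform $[001]$ component $y_2$. As a standalone check (ours), the two components are site-wise orthogonal unit vectors, so the canting preserves the spin length for any $\theta$, and the quoted $\cos 2\theta$ and $\sin 2\theta$ expressions are mutually consistent; the helper function is purely illustrative:

```python
import numpy as np

# Weak-FM basis vectors quoted in the text (rows = sublattices 0..3).
y1 = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0]]) / np.sqrt(2)
y2 = np.array([[0, 0, 1]] * 4, dtype=float)

# y1 is in-plane and y2 along [001]: site-wise orthogonal unit vectors, so the
# canted state cos(theta) y1 + sin(theta) y2 keeps unit-length spins.
assert np.allclose(np.linalg.norm(y1, axis=1), 1)
assert np.allclose(np.sum(y1 * y2, axis=1), 0)
for theta in np.linspace(0, np.pi, 13):
    psi = np.cos(theta) * y1 + np.sin(theta) * y2
    assert np.allclose(np.linalg.norm(psi, axis=1), 1)

def canting_angle(J0, D, G2):
    """Canting angle from the quoted cos(2 theta) and sin(2 theta) expressions.
    Illustrative helper (ours), intended for the weak-FM region."""
    denom = np.sqrt((4 * J0 + np.sqrt(2) * D - G2) ** 2 + 8 * G2 ** 2)
    c2 = (4 * J0 + np.sqrt(2) * D - G2) / denom
    s2 = -2 * np.sqrt(2) * G2 / denom
    assert np.isclose(c2 ** 2 + s2 ** 2, 1)  # the two expressions are consistent
    return 0.5 * np.arctan2(s2, c2)

# With Gamma_2 -> 0 the canting vanishes and the state reduces to pure y1.
assert np.isclose(canting_angle(1.0, 0.5, 0.0), 0)
```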
The 4-spin vector of the magnetic unit cell is parameterized as \begin{equation} \Psi = \cos \theta \, y_1 + \sin \theta \, y_2 \end{equation} with \begin{eqnarray} y_1 & = & \frac{1}{\sqrt{2}} (\bar{1}\bar{1}0,1\bar{1}0,\bar{1}10,110) , \\ y_2 & = & (001,001,001,001), \end{eqnarray} and the angular variable $\theta$ satisfies \begin{eqnarray} \cos 2 \theta &=& \frac{4J_0 +\sqrt{2}D -\Gamma_2 }{ \sqrt{(4J_0+\sqrt{2}D -\Gamma_2)^2 + 8\Gamma_2^2 }} \\ \sin 2\theta &=& \frac{-2\sqrt{2} \Gamma_2 }{ \sqrt{(4J_0+\sqrt{2}D -\Gamma_2)^2 + 8 \Gamma_2^2 }}. \end{eqnarray} When ${D< D_{c1} (\Gamma_2)}$, the ground state is the 4-in-4-out state. In the region of ${D<0, \Gamma_2>0}$, there also exist two phases. When ${D < D_{c2} (\Gamma_2)}$, with $D_{c2} (\Gamma_2)$ given by \begin{equation} D_{c2} = -\frac{3\sqrt{2} \Gamma_2}{2}, \end{equation} the DM interaction is dominant and negative, and the ground state is the 4-in-4-out state. When ${D > D_{c2} (\Gamma_2)}$, a coplanar state with spins pointing along various [110] directions is favored and the 4-spin vector $\Psi$ is the same as the one introduced in Eq.~\eqref{eq:noncoplanar2} and its symmetry equivalents. Hence, we also denote this coplanar state as ``coplanar$^{\ast}$-[110]'' in Fig.~\ref{fig4}. \section{Magnetism from electron quasi-itinerancy} \label{sec3} Having understood the role of each anisotropic exchange in the generic exchange Hamiltonian in the previous section, in this section we discuss the physical exchange Hamiltonian derived perturbatively from the microscopic parent Hubbard model and from there approach the magnetic states in the intermediate coupling regime. We analyze the possible magnetic ground states for the compounds R$_2$Ir$_2$O$_7$. \begin{figure}[b] \includegraphics[width=8.6cm]{orbitals.pdf} \caption{(Color online.) The direct electron tunneling between the Ir atoms. Left: the $\sigma$-bonding with tunneling amplitude $t_1$.
Right: the $\pi$-bonding with tunneling amplitude $t_2$.} \label{fig:bonds} \end{figure} \subsection{Hubbard model and exchange} We assume the on-site SOC is strong enough that the lower ${J=3/2}$ bands are completely filled and the upper ${J=1/2}$ bands are half filled. This approximation misses the hybridization between the ${J=1/2}$ and the ${J=3/2}$ bands; this process may lead to some interesting properties and will be addressed in future work. The electrons can tunnel from one Ir$^{4+}$ ion to neighboring Ir$^{4+}$ ions either directly or indirectly via the $p$ orbitals of the intermediate oxygen ions~\cite{PesinBalents,PhysRevB.85.045124}. Since the $5d$ electron orbitals are spatially extended, the direct tunneling of electrons might be as important as the indirect tunneling. With electrons locally projected onto the ${J=1/2}$ basis, one can write down a minimal Hubbard model~\cite{PhysRevB.85.045124} \begin{equation} {\mathcal H} = \sum_{\langle ij \rangle} \big[( {\mathcal T}^d_{ij,\alpha \beta} + {\mathcal T}^{id}_{ij,\alpha \beta} ) d^{\dagger}_{i \alpha} d^{}_{j \beta} +h.c.\big] + \sum_i U n_{i, \uparrow} n_{i,\downarrow} , \label{eq:hubbard} \end{equation} in which only the nearest-neighbor tunneling term is included, $d^{\dagger}_{i \alpha}$ ($d_{i\alpha}^{}$) is the creation (annihilation) operator for an electron in the effective spin state $|{J=1/2, J^z =\alpha} \rangle$ at site $i$, and ${n_{i\sigma} \equiv d^\dagger_{i\sigma} d^{}_{i\sigma}}$ measures the electron number with spin $\sigma$ at site $i$. In Eq.~\eqref{eq:hubbard}, $ {\mathcal T}^d$ and ${\mathcal T}^{id}$ are the tunneling matrices for the direct and indirect processes, respectively. \begin{figure}[t] \includegraphics[width=7.5cm]{fig6.pdf} \caption{(Color online.) The dependence of the anisotropic couplings on the ratio between the $\pi$-bonding amplitude $t_2$ and the $\sigma$-bonding amplitude $t_1$.
In the plot, from top to bottom, the curves are $D/J_0$, $\Gamma_2/J_0$ and $\Gamma_1/J_0$. } \label{fig6} \end{figure} For the direct tunneling processes, there exist two types of tunneling amplitudes: the $\sigma$-bonding $t_1$ and the $\pi$-bonding $t_2$ (see Fig.~\ref{fig:bonds})~\cite{PhysRevB.85.045124}. Moreover, it is expected from the orbital overlaps that $t_2$ has a different sign from $t_1$. In the limit of dominant direct tunneling, standard second-order perturbation theory yields the exchange couplings introduced in Eq.~\eqref{eq:exchange}, \begin{eqnarray} &&J_0 = \frac{603 t_1^2 -58296 t_1 t_2 + 248369 t_2^2}{2834352 U} \\ && D = \frac{ 5 \sqrt{2} (153 t_1^2 - 1356 t_1 t_2 + 2528 t_2^2) }{118098 U} \\ &&\Gamma_1 = \frac{ 50 (9 t_1^2- 48 t_1t_2 + 64 t_2^2 ) }{ 177147U } \\ && \Gamma_2 = 3 \Gamma_1. \label{eq:direct_ex} \end{eqnarray} It turns out that the DM interaction carries the most weight in the exchange Hamiltonian. As $J_0$ is assured to be positive in Eq.~\eqref{eq:direct_ex}, we depict the ratios $D/J_0, \Gamma_1/J_0$ and $\Gamma_2/J_0$ in Fig.~\ref{fig6}. In contrast, the indirect tunneling process is described by a single tunneling amplitude $t$~\cite{PesinBalents}. When it is dominant, the exchange couplings are given by \begin{eqnarray} &&J_0 = \frac{49132 t^2 }{ U } , \\ && D = \frac{ 7280 \sqrt{2} t^2 }{59049 U} , \\ && \Gamma_1 = \frac{1568 t^2 }{ 177147 U} , \\ && \Gamma_2 = 3 \Gamma_1 . \end{eqnarray} It is important to note that although we find ${\Gamma_2 =3 \Gamma_1}$ in both limits studied above, this relation is not protected by symmetry and will break down if a more realistic model is assumed. Although we find that $J_0, D, \Gamma_1, \Gamma_2$ are all positive in the two limits studied above, this result actually breaks down when both direct and indirect tunnelings are included.
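The quoted direct-exchange formulas are easy to explore numerically. Note that $\Gamma_1 \propto (3t_1 - 8t_2)^2$ is a perfect square and hence non-negative, and for $t_2/t_1 < 0$ (opposite signs of the $\sigma$- and $\pi$-bonding amplitudes, as expected) the DM term indeed carries the largest relative weight; since ${\Gamma_2 = 3\Gamma_1}$ in this limit, the $\Gamma_2/J_0$ curve lies above $\Gamma_1/J_0$. A short sketch (ours) of these checks:

```python
import numpy as np

def direct_exchange(t1, t2, U=1.0):
    """Exchange couplings quoted in the text for the dominant direct-tunneling limit."""
    J0 = (603 * t1**2 - 58296 * t1 * t2 + 248369 * t2**2) / (2834352 * U)
    D = 5 * np.sqrt(2) * (153 * t1**2 - 1356 * t1 * t2 + 2528 * t2**2) / (118098 * U)
    G1 = 50 * (9 * t1**2 - 48 * t1 * t2 + 64 * t2**2) / (177147 * U)
    return J0, D, G1, 3 * G1  # Gamma_2 = 3 Gamma_1 in this limit

# Gamma_1 = 50 (3 t1 - 8 t2)^2 / (177147 U) is a perfect square, hence >= 0.
assert np.isclose(direct_exchange(8.0, 3.0)[2], 0)

# For opposite-sign t1, t2 all couplings are positive and D/J0 is the largest ratio.
for r in np.linspace(-1.0, -0.1, 10):
    J0, D, G1, G2 = direct_exchange(1.0, r)
    assert J0 > 0 and G1 > 0
    assert D / J0 > G2 / J0 > G1 / J0 > 0
```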
As plotted in Fig.~\ref{fig7} for the case of ${t_2=-2t_1/3}$, the Heisenberg exchange $J_0$ and the DM interaction $D$ both change sign in certain intermediate ranges of $t_1/t$. This indicates that different magnetic orders may emerge in the intermediate regimes of $t_1/t$. \begin{figure}[t] \includegraphics[width=8cm]{fig7.pdf} \caption{(Color online.) The dependence of the couplings on the ratio of the direct tunneling to the indirect tunneling. In the figure, we have set ${t_2 = -2t_1/3}$. The ground state of region II is the ``4-in-4-out'' state. For regions I and III, the ground state is degenerate and spanned by the basis vectors ${\bf v}_1$ and ${\bf v}_2$. The two dashed vertical lines are the phase boundaries separating the ``4-in-4-out'' state in region II from the $({\bf v}_1, {\bf v}_2)$ manifold in regions I and III. The unit of the vertical axis is set to be $t^2/U$.} \label{fig7} \end{figure} \subsection{Ground states of the exchange Hamiltonian} \label{sec3B} In the previous subsection, we explicitly derived the exchange Hamiltonian from the Hubbard model. In both limits of dominant direct or dominant indirect tunneling, the coupling parameters $J_0, D, \Gamma_1, \Gamma_2$ are found to be positive. In this parameter regime, it is straightforward to show by mean-field theory, or to observe directly from the phase diagrams depicted in Figs.~\ref{fig3} and \ref{fig4}, that the mean-field classical ground state manifold is continuously degenerate and is spanned by the two basis vectors ${\bf v}_1$ and ${\bf v}_2$ (see Eq.~\eqref{eq:basis1_dm} and Eq.~\eqref{eq:basis2_dm}). As shown in Fig.~\ref{fig7}, there is a region where the DM interaction $D$ changes sign, which may favor the ``4-in-4-out'' state as the classical ground state in that region. After a complete calculation, we find the phase diagram depicted in Fig.~\ref{fig7}. Region II develops the ``4-in-4-out'' ground state.
Regions I and III have the degenerate ground state manifold $({\bf v}_1, {\bf v}_2)$. Remarkably, the phase boundaries between region II and regions I and III are exactly the same as those obtained from the self-consistent mean-field calculation for the intermediate coupling regime below and from the one in Ref.~\onlinecite{PhysRevB.85.045124}. The continuous degeneracy of the $({\bf v}_1,{\bf v}_2)$ ground state manifold is lifted when quantum fluctuations are included. We study this quantum order-by-disorder effect by linear spin-wave theory. We express the classical 4-spin vectors as \begin{equation} \Psi = \cos \phi \, {\bf v}_1 + \sin \phi \, {\bf v}_2, \end{equation} where $\phi$ parameterizes the orientation of the spin vectors. Then we introduce the Holstein-Primakoff bosons, \begin{eqnarray} && {\bf J}_i \cdot \hat{m}_i = J - a_i^{\dagger} a_i , \\ &&{\bf J}_i \cdot \hat{n}_i = \frac{\sqrt{2J}}{2} ( a_i + a_i^{\dagger} ) , \\ &&{\bf J}_i \cdot (\hat{m}_i \times \hat{n}_i) = \frac{\sqrt{2J}}{ 2 i} (a_i - a^{\dagger}_i), \end{eqnarray} where $\hat{m}_i$ is the unit vector describing the spin orientation of the classical spin order at site $i$, and $\hat{n}_i$ is a unit vector that is normal to $\hat{m}_i$ but within the plane spanned by ${\bf v}_1$ and ${\bf v}_2$.
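Before quoting the resulting quadratic Hamiltonian, it may help to recall on a single bosonic mode how anomalous ($aa$ and $a^\dagger a^\dagger$) terms produce both the mode energy and a state-dependent zero-point energy, the quantity whose minimization over $\phi$ implements the order-by-disorder selection. The following minimal sketch (ours, not the actual four-band pyrochlore calculation) para-diagonalizes ${\mathcal H} = A a^\dagger a + \tfrac{B}{2}(aa + a^\dagger a^\dagger)$:

```python
import numpy as np

def magnon_energy(A, B):
    """Single-mode Bogoliubov problem H = A a^dag a + (B/2)(a a + a^dag a^dag),
    with |B| < A. Para-diagonalize eta M, where eta = diag(1, -1)."""
    M = np.array([[A, B], [B, A]], dtype=float)
    eta = np.diag([1.0, -1.0])
    lam = np.linalg.eigvals(eta @ M).real
    omega = lam.max()               # positive branch = bosonic mode energy
    zero_point = 0.5 * (omega - A)  # state-dependent zero-point shift
    return omega, zero_point

omega, zp = magnon_energy(2.0, 1.0)
assert np.isclose(omega, np.sqrt(2.0**2 - 1.0**2))  # analytic: omega = sqrt(A^2 - B^2)
assert zp < 0  # the anomalous terms lower the zero-point energy
```

In the full problem the same construction applies to the four-sublattice matrices $A_{ij}({\bf k})$ and $B_{ij}({\bf k})$, and the $\phi$-dependence of the summed zero-point energies is what selects the ordered state.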
Plugging the above relations into the exchange Hamiltonian, one readily obtains the quadratic spin-wave Hamiltonian, \begin{eqnarray} {\mathcal H}_{\text{sw}} &=& \sum_{\bf k} \big[A_{ij} ({\bf k}) a_i^{\dagger} ({\bf k})a_j ({\bf k}) + B_{ij} ({\bf k} ) a_i (-{\bf k}) a_j ({\bf k}) \nonumber \\ && \quad\quad + B^{\ast}_{ij} ({\bf k} ) a_i^{\dagger} ({\bf k}) a_j^{\dagger} (-{\bf k}) \big] + E_{\text{cl}} , \label{spinwave} \end{eqnarray} in which $E_{\text{cl}}$ is the classical ground state energy, and $A_{ij}$ and $B_{ij}$ satisfy \begin{eqnarray} A_{ij} ({\bf k}) &=& A_{ij}^{\ast} ({\bf k}), \\ B_{ij} ({\bf k}) &=& B_{ji} (- {\bf k}), \label{relation} \end{eqnarray} and are given in Appendix~\ref{appendix}. From the quadratic spin-wave Hamiltonian, we obtain the quantum zero-point energy, which is found to be minimized by the non-coplanar spin configuration ${\bf v}_2$ (see Eq.~\eqref{eq:basis2_dm}) with ${\phi = \pi/2}$ (and its symmetry equivalents). We also find that the magnon spectrum (see Fig.~\ref{fig8}) is gapless at the $\Gamma$ point, which originates from the continuous degeneracy of the classical ground states. This gapless mode is not expected to survive once the anharmonic effects beyond the linear spin-wave theory are included, as the gapless feature is not protected by any continuous symmetry of the Hamiltonian. A mini-gap would appear if a full calculation were performed. \begin{figure}[t] \includegraphics[width=8.4cm]{fig8.pdf} \caption{The magnon dispersion along the high-symmetry momentum direction $\Gamma$-X-W-L-$\Gamma$. The parameters in this figure are set to be ${D = 0.5 J_0}, {\Gamma_1 = 0.2 J_0}, {\Gamma_2 = 0.3 J_0}$. The gapless mode at the $\Gamma$ point is an artifact of the linear spin-wave theory.
} \label{fig8} \end{figure} \subsection{Hubbard model and electron quasi-itinerancy in the intermediate coupling regime} \label{sec3C} In the previous subsections, we analyzed the magnetic ground states of the Ir-based pyrochlore lattice of R$_2$Ir$_2$O$_7$ in the strong coupling regime. We found that, even though the classical mean-field ground states are continuously degenerate for the exchange derived from the Hubbard model, all the ground states have a magnetic wavevector ${{\bf q} = {\bf 0}}$. It is known that the SOC twists the electron motion and narrows the electron bands. Although the large spatial extension of the $5d$ electrons reduces the electron correlation, the bandwidth is also reduced, so it is not obvious where the actual physical system is located. Thus, it is legitimate to approach the system from the strong-correlation regime toward the intermediate-correlation regime by reducing the correlation strength. The knowledge gained from the strong coupling regime may then be extended to the intermediate regime. Moreover, the existing experiments on Eu$_2$Ir$_2$O$_7$, Nd$_2$Ir$_2$O$_7$, Tb$_2$Ir$_2$O$_7$ and Sm$_2$Ir$_2$O$_7$ suggest a ${{\bf q} ={\bf 0}}$ magnetic order~\cite{PhysRevLett.117.037201,PhysRevB.89.140413,PhysRevMaterials.2.011402,PhysRevB.87.100403}. In this subsection, we study the magnetic properties of the Hubbard model in the intermediate coupling regime by a self-consistent mean-field theory. Based on the results from the strong correlation regime, we assume that the magnetic order in this regime also has a magnetic wavevector ${{\bf q} = {\bf 0}}$.
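The crossover between the strong and intermediate coupling regimes can be illustrated on the smallest possible example: exact diagonalization of a two-site, single-orbital Hubbard model at half filling (a generic sketch of ours, not the projected $J=1/2$ model of Eq.~\eqref{eq:hubbard}). The exact singlet-triplet splitting $(\sqrt{U^2+16t^2}-U)/2$ reduces to the second-order superexchange value $4t^2/U$ only at strong coupling and deviates appreciably at intermediate $U/t$:

```python
import numpy as np

def singlet_triplet_gap(t, U):
    """Exact singlet-triplet gap of the two-site Hubbard model at half filling.
    Sz = 0 basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>."""
    H = np.array([[0, 0, -t, -t],
                  [0, 0,  t,  t],
                  [-t, t,  U,  0],
                  [-t, t,  0,  U]], dtype=float)
    E = np.linalg.eigvalsh(H)           # ascending eigenvalues
    return E[1] - E[0]                  # triplet (E = 0) minus singlet ground state

t = 1.0
# Strong coupling: the exact gap matches superexchange 4 t^2 / U to ~0.2%.
assert abs(singlet_triplet_gap(t, 50.0) / (4 * t**2 / 50.0) - 1) < 0.01
# Intermediate coupling (U = 3t): the perturbative value overshoots by ~25%.
assert abs(singlet_triplet_gap(t, 3.0) / (4 * t**2 / 3.0) - 1) > 0.2
```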
To implement the mean-field theory, we decouple the Hubbard-$U$ interaction as \begin{eqnarray} U n_{i, \uparrow} n_{i,\downarrow} & = & -\frac{2U}{3} {\bf J}_i^2 + \frac{U}{2} n_i \nonumber \\ &\rightarrow & - \frac{2U}{3} ( 2 \langle {\bf J}_i \rangle \cdot {\bf J}_i - \langle {\bf J}_i \rangle^2 ) + \frac{U}{2} n_i, \end{eqnarray} in which $n_i$ is the electron number at site $i$ and ${{\bf J}_i = \sum_{\alpha\beta}d^{\dagger}_{i\alpha} \boldsymbol{\sigma}^{}_{\alpha\beta} d^{}_{i\beta}/2}$ is the operator for the effective spin moment. With this decoupling, the mean-field Hamiltonian is quadratic, with \begin{eqnarray} H_{\text{MF}} & \equiv & \sum_{\langle ij \rangle} \big[( {\mathcal T}^d_{ij,\alpha \beta} + {\mathcal T}^{id}_{ij,\alpha \beta} ) d^{\dagger}_{i \alpha} d^{}_{j \beta} +h.c.\big] \nonumber \\ && -\sum_i \frac{4U}{3} \langle {\bf J}_i \rangle \cdot {\bf J}_i + \cdots, \end{eqnarray} where ``$\cdots$'' refers to the unessential terms that do not involve the electron operators. We then diagonalize the mean-field Hamiltonian and solve for the magnetic order of each sublattice self-consistently. Our results for the magnetic orders can be found in Fig.~\ref{fig2}. In region II, the calculation quickly converges to the 4-in-4-out magnetic order. For regions I and III, the calculation does not quickly converge. After a few steps, the magnetic order from the self-consistent calculation drops into the continuous manifold that is spanned by the 4-spin vectors ${\bf v}_1$ and ${\bf v}_2$ and then fluctuates within this manifold without converging. To resolve the magnetic orders in these two regions, we perform a different calculation below that may be illuminating.
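The decoupling above rests on the single-site operator identity $n_{\uparrow} n_{\downarrow} = \tfrac{1}{2} n - \tfrac{2}{3} {\bf J}^2$ for the effective moment ${\bf J} = \tfrac{1}{2} d^\dagger \boldsymbol{\sigma} d$ (the density term only shifts the chemical potential). A standalone check (ours) on the four-state local Fock space:

```python
import numpy as np

# Local Fock basis |0>, |dn>, |up>, |up dn> built from 2x2 blocks, with a
# Jordan-Wigner sign so that d_up and d_dn anticommute.
c = np.array([[0, 1], [0, 0]], dtype=complex)   # single-flavor annihilator
I2, Zs = np.eye(2), np.diag([1.0, -1.0])
d_up = np.kron(c, I2)
d_dn = np.kron(Zs, c)

n_up = d_up.conj().T @ d_up
n_dn = d_dn.conj().T @ d_dn
n = n_up + n_dn

# Effective spin J = (1/2) sum_ab d_a^dag sigma_ab d_b.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
ops = [d_up, d_dn]
J2 = np.zeros((4, 4), dtype=complex)
for sigma in (sx, sy, sz):
    Jc = 0.5 * sum(ops[a].conj().T @ (sigma[a, b] * ops[b])
                   for a in range(2) for b in range(2))
    J2 += Jc @ Jc

# Identity behind the decoupling: n_up n_dn = n/2 - (2/3) J^2.
assert np.allclose(n_up @ n_dn, n / 2 - 2 * J2 / 3)
assert np.allclose(np.diag(J2).real, [0, 0.75, 0.75, 0])  # J^2 = 3/4 if singly occupied
```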
The self-consistent calculation tells us that the magnetic orders can be parameterized as \begin{eqnarray} \Psi (\phi)&=& ( {\bf J}_0, {\bf J}_1, {\bf J}_2, {\bf J}_3) \nonumber \\ & = & M ( \cos \phi \,{\bf v}_1 + \sin \phi \, {\bf v}_2 ), \label{eq:mag} \end{eqnarray} where the order parameter $M$ depends on the dimensionless parameter $U/t$ that measures the strength of the interaction. For a given $U/t$, the magnetic order parameter $M$ is fixed. The self-consistent calculation does not converge the angular parameter $\phi$; determining $\phi$ is the remaining task. It is easy to see that the task boils down to optimizing the kinetic energy in the mean-field Hamiltonian $H_{\text{MF}}$, {\sl i.e.} \begin{eqnarray} \langle {\Psi ( \phi )} | \sum_{\langle ij \rangle} \big[( {\mathcal T}^d_{ij,\alpha \beta} + {\mathcal T}^{id}_{ij,\alpha \beta} ) d^{\dagger}_{i \alpha} d_{j \beta}^{} +h.c.\big] |{\Psi ( \phi )} \rangle . \end{eqnarray} The spirit of this calculation scheme is somewhat similar to that of double exchange, where the itinerant electrons are coupled to the local moments by a ferromagnetic Kondo/Hund's coupling, and the magnetic order is established by optimizing the kinetic energy of the itinerant electrons together with the exchange energy of the local moments~\cite{PhysRev.82.403}. In the doped manganites, to gain kinetic energy, the local moments twist themselves away from the spin configuration favored by the exchange energy. Another possible example of electron-kinetic-energy-driven magnetism was proposed for the doped van der Waals antiferromagnet CeTe$_3$~\cite{okuma2020fermionic}, and was referred to as fermionic order by disorder. In our case, the electron kinetic energy is optimized within the background magnetism that operates on the continuously degenerate manifold. Our calculation selects the angle ${\phi = \pi/2}$ for all ${U>0}$: the kinetic energy stabilizes the non-coplanar state with ${\phi =\pi/2}$.
Although this mechanism of breaking the continuous degeneracy by optimizing the kinetic energy is qualitatively different from the quantum order by disorder discussed in the previous subsection, the magnetic order from both mechanisms turns out to be identical; the phase boundaries separating the different ordered phases are also remarkably identical for both mechanisms. These results suggest that the magnetic orders in the intermediate and the strong coupling regimes may be continuously connected. \section{Discussion} \label{sec4} To summarize, we have studied the magnetic ground states for the Ir-based pyrochlore lattice in both the intermediate and strong coupling regimes. Various classical ground states are identified for the generic exchange Hamiltonian in the strong coupling limit. These results can be further applied to other magnetic systems on the pyrochlore lattice. We find that the magnetic orders in the intermediate and strong coupling regimes for the pyrochlore iridates turn out to be identical. The experiments on the pyrochlore iridates have rapidly evolved~\cite{PhysRevB.85.245109,PhysRevLett.109.136402,PhysRevB.85.214434,PhysRevB.86.014428,PhysRevB.85.205104,PhysRevB.87.100403,PhysRevLett.117.037201,PhysRevLett.115.056402,PhysRevB.89.140413,PhysRevB.89.115111,PhysRevLett.117.056403,PhysRevB.88.060411,PhysRevB.89.075127,PhysRevLett.114.247202,PhysRevB.87.060403,PhysRevB.90.235110,PhysRevB.83.180402,PhysRevB.94.161102,PhysRevB.92.094405,PhysRevB.96.094437,PhysRevMaterials.2.011402,2020arXiv200512768W,PhysRevB.101.121101,PhysRevB.96.144415}. There exists a large body of experimental work, and the review papers on this topic may be more useful to the interested readers~\cite{2014ARCMP...5...57W,2016ARCMP...7..195R,2019arXiv190308081T,Schaffer_2016}. Instead of dwelling on a few specific experimental results and details, we here make some experimental suggestions based on the theoretical calculations in our work.
In the strong coupling analysis, there exists a broad parameter regime in which the magnetic order is realized via the quantum order by disorder mechanism. Once the particular magnetic order with the ordering wavevector ${{\bf q} ={\bf 0}}$ and the spins orienting along the vector ${\bf v}_2$ in Eq.~\eqref{eq:basis2_dm} is realized, one can check whether the excitation spectrum and thermodynamic properties are consistent with the theoretical results. A qualitative feature in the magnetic excitation spectrum is the almost gapless mode at the $\Gamma$ point (see Fig.~\ref{fig8} and the explanation in Sec.~\ref{sec3B}). A consequence for the thermodynamics is a nearly $T^3$ temperature dependence of the specific heat at temperatures above the mini-gap energy. In the intermediate coupling scenario, the interaction and the charge gap are not very large compared to the bandwidth. Although the same magnetic order persists into the intermediate coupling regime, the quantum order by disorder mechanism is expected to break down. If one uses the local moment language and relies on the exchange interaction, one necessarily needs to invoke further-neighbor exchanges and even ring exchange interactions. These extra interactions modify the original pairwise nearest-neighbor exchange model and invalidate the applicability of the original quantum order by disorder analysis here. A surprising result of our self-consistent calculation in Sec.~\ref{sec3C} is that the magnetic order quickly falls into the degenerate manifold spanned by ${\bf v}_1$ and ${\bf v}_2$, and then we use the electron kinetic energy to break the degeneracy and select the magnetic order. This indicates that the degenerate manifold could be readily accessible if the system is activated by a small energy. A pump-probe measurement of the magnetic properties of the system would be helpful in this regard. Finally, the weak Mottness with quasi-itinerant electrons might be relevant for many other $4d$/$5d$ materials.
The effect should be considered if the charge gap is not very large. It is very likely that many $4d$/$5d$ magnets are located in this regime. Even the square lattice material Sr$_2$IrO$_4$ was believed to be proximate to a Mott transition~\cite{PhysRevB.57.R11039}. The well-known $\alpha$-RuCl$_3$ has a relatively weak charge gap~\cite{PhysRevB.94.161106,PhysRevB.90.041112,PhysRevB.93.075144,PhysRevLett.117.126403}, even though the existing theoretical analysis mostly starts from a pairwise superexchange interaction between the effective spin-1/2 moments. The interlayer ring exchange, due to the weak Mott gap and the electron quasi-itinerancy, could be responsible for the anomalous thermal Hall effect in $\alpha$-RuCl$_3$ for a magnetic field in the honeycomb plane and parallel to the zig-zag ordering axis~\cite{Ong2020,2020arXiv200101899Y,PhysRevLett.120.217205}, where the interlayer magnetic flux could be experienced by the material. \acknowledgments We thank Xu Ping Yao and Dr. Fei Ye Li for the help with Fig.~1 and Fig.~2. We acknowledge the hospitality of the Aspen Center for Theoretical Physics for the ultracold atom program in the summer of 2011, when and where this work was carried out. We especially thank Michael Hermele for discussions around that time period. GC is supported by the Ministry of Science and Technology of China with Grant No.~2018YFGH000095, 2016YFA0301001, 2016YFA0300500, by the Shanghai Municipal Science and Technology Major Project with Grant No.~2019SHZDZX04, and by the Research Grants Council of Hong Kong with General Research Fund Grant No.~17303819. The part of the work done in Boulder was supported by DOE Award No.~DE-SC0003910. XQW is supported by MOST 2016YFA0300501 and NSFC 11974244 and additionally by a Shanghai talent program.
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \begin{document} \vskip 2mm \begin{center} {\large\bf COMPLEX RANDOM MATRIX MODELS WITH POSSIBLE APPLICATIONS TO SPIN-IMPURITY SCATTERING IN QUANTUM HALL FLUIDS } \end{center} \vskip 3mm \begin{center} S. Hikami$^{1}$ and A. Zee$^{2}$ \end{center} \vskip 2mm \begin{center} {Department of Pure and Applied Sciences$^{1}$ }\\ {University of Tokyo, Meguro-ku, Komaba, Tokyo 153, Japan}\\ \vskip 2mm Institute for Theoretical Physics{$^{2}$ }\\ {University of California Santa Barbara, CA 93106, USA}\\ \end{center} \vskip 5mm \begin{abstract} We study the one-point and two-point Green's functions in a complex random matrix model to sub-leading orders in the large $N$ limit. We take this complex matrix model as a model for the two-state scattering problem, as applied to spin-dependent scattering on impurities in quantum Hall fluids. The density of states shows a singularity at the band center due to reflection symmetry. We also compute the one-point Green's function for a generalized situation by putting random matrices on a lattice of arbitrary dimensions. \end{abstract} \newpage \sect{Introduction} Recently, matrix models have been studied in various contexts. In particular, in a series of papers [1-4], the universality of the connected two-point correlation function was discussed. These studies focused on $N$ by $N$ Hermitian matrices with $N$ approaching infinity. In this paper, we investigate the behavior of the Green's functions of a complex matrix model. To leading order in the large $N$ limit, the complex matrix model and the Hermitian matrix model behave similarly. For instance, the density of states, for Gaussian randomness, obeys Wigner's semi-circle law in both cases. However, at subleading orders, complex random matrices and Hermitian random matrices behave differently.
In the language of random surfaces, a random complex matrix model represents, due to the complex conjugate pairs, a surface made of plaquettes with two colors, like a red and black checkerboard, with the rule that a given plaquette can only be glued to a plaquette of a different color. Our motivations stem from possible applications of the complex random matrix model to physical problems involving scattering on impurities. In particular, Hikami, Shirai, and Wegner [5] have recently proposed a model for impurity scattering in quantum Hall fluids in the spin degenerate case. For certain quantum Hall samples, disorder broadening can be much larger than the Zeeman splitting between spin up and spin down electrons, in which case the spin up and spin down electrons have the same energy. In [5] the further simplification is made that a spin up electron scattering on an impurity always becomes a spin down electron, and vice versa. This is known as the ``strong spin-orbit'' case, in which it is known that an extended state appears at the band center of the lowest Landau level with white noise Gaussian random scattering and that the density of states shows a singularity at the band center [5-9]. Although this problem has been simulated numerically [9], the nature of the singularity of the density of states remains unclear. Another example involving scattering between two states occurs in high temperature superconductors, in which the conducting plane contains two different sites, copper and oxygen. The density of states also shows a singular behavior at the band center. More generally, the problem of scattering between $C$ sectors (with $C=2$ in the examples mentioned above) is of interest. It turns out that a class of matrix models called ``lattices of matrices'' and studied by Br\'ezin and Zee [3] is relevant for this class of problems. In these models, random matrices are placed on a lattice of arbitrary dimension [10].
It was shown that in the large $N$ limit, various correlation functions can be determined. The two-state scattering problem considered here corresponds to the simple case of a lattice consisting of two points. Following the analysis of [3] we can readily generalize the two-state scattering problem to an arbitrary lattice. In this paper, we show that the one-point Green's function is singular at next-to-leading order in the large $N$ expansion. We first evaluate the one-point Green's function and the density of states to order $1/N^2$ by the diagrammatic method for the one-matrix model. The singular behavior of the one-point Green's function at order $1/N^2$ has been noticed in the literature [11], but our discussion of this singularity in the context of the two-state scattering problem in condensed matter physics may be new. Next we discuss the origin of this spurious singularity by the orthogonal polynomial method. On the other hand, if we fix $N$ to be large but finite, and let the energy $E$ go to zero, the density of states oscillates and eventually goes to zero. This phenomenon is due to energy-level repulsion. The discussion here is somewhat reminiscent of the double pole encountered in the connected correlation function in the Hermitian matrix model when evaluated in the diagrammatic approach [2]. In the diagrammatic approach, to calculate the Green's functions we select diagrams by letting $N$ go to infinity first. We then obtain the correlation function by taking the imaginary part of the two-point Green's function. The connected correlation function has a double pole as its two arguments approach each other. In contrast, in the orthogonal polynomial approach, we in effect take the imaginary part of the two-point Green's function first, and then let $N$ go to infinity. The connected two-point correlation calculated in this way does not have a double pole when its two arguments approach each other.
However, when we smooth out the short distance oscillations of the correlation function by averaging it appropriately, we recover the double pole obtained in the diagrammatic approach [1]. This paper is organized as follows. In section two, the two-state scattering problem is formulated as a complex matrix model. We evaluate the one-point Green's function diagrammatically and discuss the singularity of the density of states in $d=0$. In section three, we study the complex matrix model further using the orthogonal polynomial method. In section four, we develop the analysis for general dimensions, using the ``lattices of matrices'' formulation given in [3]. Using the diagrammatic expansion [12,13], we obtain the expression for the one-point Green's function. We compare our results with those obtained in previous studies [14-16]. \sect{ Matrix model formulation of two-state scattering } In the standard Hermitian one-matrix model, the one-point Green's function $G(z)$ is defined by \begin{equation} G(z) = {1\over N} < {\rm Tr} {1\over{ z - \varphi }} > \end{equation} where the average is taken with the probability distribution $P(\varphi)$ \begin{equation} P(\varphi) = {\exp}[ - N{\rm Tr} V(\varphi) ] \end{equation} The $N\times N$ matrix $\varphi$ is Hermitian. In the Gaussian case, we have \begin{equation} V(\varphi) = {1\over {2}} \varphi^{2} \end{equation} More generally, we may have, for example, \begin{equation} V(\varphi) = {1\over{2}} \varphi^{2} + {g\over{N}}{\varphi^{4}}\end{equation} For applications to disordered systems, the random matrix $\varphi$ is interpreted as the Hamiltonian. We start with the simplest case of two-state scattering.
The model is described by the random Hamiltonian [3], \begin{equation} H = \left(\matrix{H_{1}&\varphi^{\dagger}\cr \varphi&H_{2}\cr}\right) \end{equation} taken from the Gaussian distribution $P(H)$, \begin{equation} P(H) = {1\over{Z}} {e}^{-N{\rm Tr}[{1\over{2}}(m_{1}^2 H_{1}^{2} + m_{2}^2 H_{2}^{2}) + m^2 \varphi^{\dagger}\varphi]} \end{equation} The matrices $H_1$ and $H_2$ are Hermitian while $\varphi$ is complex. In [3] this Hamiltonian was taken to describe a system with two sectors (C=2). Here we can think of the two sectors as representing the spin up and spin down sectors in a spin-dependent quantum Hall system. We may also think of possible applications to the double-layered quantum Hall system [17,18]. We now go to the model of (2.6) by letting $m^2_{1}=m^2_{2}=\infty$, so that $H_1$ and $H_2$ are suppressed. This model describes purely off-diagonal disorder. This same complex matrix model is also considered in [19]. (We will treat the more general case with $H_1$ and $H_2$ non-zero later in this paper.) We have \begin{equation} H = \left(\matrix{0&\varphi^{\dagger}\cr \varphi&0\cr}\right) \end{equation} Notice that there exists a matrix \begin{equation} \Gamma = \left(\matrix{I&0\cr 0&-I\cr}\right) \end{equation} such that \begin{equation} \{\Gamma, H\}= 0 \end{equation} This implies that if $\psi$ is an eigenstate of $H$ with eigenvalue $E$, then $\Gamma \psi$ is an eigenstate with eigenvalue $-E$. Thus, eigenvalues of $H$ come in pairs. Due to level repulsion, around $E=0$ there should be a ``hole" of width of order $1/N$ in the density of state $\rho(E)$. As $N$ goes to infinity, this hole disappears and $\rho(E)$ should become smooth. To proceed, we calculate the one-point Green's function, with $P = Z^{-1} \exp(-{1\over 2} {\rm Tr}\, H^2)$ where $H$ is given by (2.7).
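The pairing of eigenvalues can be checked directly in a small numerical experiment (an illustrative sketch of ours; the normalization of $\varphi$ is arbitrary here):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# complex Gaussian off-diagonal block (illustrative normalization)
phi = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
H = np.block([[np.zeros((N, N)), phi.conj().T],
              [phi, np.zeros((N, N))]])
Gamma = np.diag([1.0] * N + [-1.0] * N)
assert np.allclose(Gamma @ H + H @ Gamma, 0)  # {Gamma, H} = 0
E = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(E, -E[::-1])  # the spectrum comes in +/- pairs
```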
\begin{equation} G(z) = < {1\over {2N}}{\rm Tr} ({1\over{z - H}}) > = < {1\over N}{\rm Tr} ({z\over{z^{2} - \varphi^{\dagger}\varphi}}) > \end{equation} To leading order in the large $N$ limit, we obtain easily \begin{equation} G(z) = {z - \sqrt{z^{2} - 4}\over{2}} \end{equation} using for example the diagrammatic approach of [2]. The density of state $\rho(E)=-{1\over{\pi}}{\rm Im }G(E)$ is given by the semi-circle law \begin{equation} \rho(E) = {1\over{2\pi}}\sqrt{4 - E^{2}} \end{equation} We denote (2.11) by $G_{0}(z)$ hereafter. To leading order, as expected, the density of state $\rho (E)$ is smooth, without any singularity at $E = 0$. However, if we go to order $1/N^{2}$, we will find a divergent term in the one-point Green's function. Henceforth, we will work with the Gaussian distribution. (We expect that the singularity at $E=0$ will occur also for a non-Gaussian distribution.) Using the diagrammatic approach, we can readily evaluate the Green's function to order $1/N^{2}$ [2,12,13]. We decompose the self-energy $\Sigma (z)$ into two parts $\Sigma_{a}$ and $\Sigma_{b}$, obtained by breaking the solid line (quark line in the terminology of [2]) in the diagrams $D_{a}$ and $D_{b}$, respectively. In the simplest two-state scattering case considered here, in which scattering from 1 to 2 and from 2 to 1 occurs, but not from 1 to 1 or from 2 to 2, we see that the numbers $n$ and $m$ appearing in the diagram $D_{a}$, describing the number of rungs in the gluon ladders, must be even. In the diagram $D_{b}$, $n_{1},n_{2} $ and $n_{3}$ must be all even, or all odd. We have from diagram $D_{a}$ two terms as follows. Denoting the even number of rungs in the ladder by $2n$ and $2m$, we have \begin{eqnarray} \Sigma_{a} &=& \sum_{n,m=1}^{\infty} G_{0}^{4n+4m +1} + \sum_{n,m=1}^{\infty} G_{0}^{4n + 4m +1} (2n -1)\nonumber\\ &=& {2G_{0}^{9}\over{( 1 - G_{0}^{4})^{3}}} \end{eqnarray} The factor of $(2n -1)$ is due to the number of ways of inserting a cut inside the ladder.
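The double geometric resummation in the expression for $\Sigma_a$ converges for any $|G_0|<1$ and can be checked numerically by truncating the sums (an illustrative check of ours, with an arbitrary sample value of $G_0$):

```python
import numpy as np

g = 0.3  # a sample value of G0 with |G0| < 1 so the ladder sums converge
# the two sums in Sigma_a, truncated at a large cutoff
s1 = sum(g**(4*n + 4*m + 1) for n in range(1, 60) for m in range(1, 60))
s2 = sum(g**(4*n + 4*m + 1) * (2*n - 1) for n in range(1, 60) for m in range(1, 60))
sigma_a = s1 + s2
closed = 2 * g**9 / (1 - g**4)**3  # the closed form quoted above
assert abs(sigma_a - closed) < 1e-12
```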
Similarly, for $\Sigma_{b1}$, we have \begin{eqnarray} \Sigma_{b1} &=& \sum_{n,m,l=1}^{\infty} G_{0}^{4n+4m+4l+1} + \sum_{n,m,l=1}^{\infty} G_{0}^{2(2n-1)+2(2m-1)+2(2l-1)+1}\nonumber\\ &=& {G_{0}^{7}( 1 + G_{0}^{6})\over{( 1 - G_{0}^{4})^{3}}} \end{eqnarray} For $\Sigma_{b2}$, in which the cut appears inside the ladder, we have \begin{eqnarray} \Sigma_{b2} &=& \sum_{n,m,l=1}^{\infty} G_{0}^{4n + 4m + 4l +1} (2n -1)\nonumber\\ &+& \sum_{n,m,l=1}^{\infty} G_{0}^{2(2n-1)+2(2m-1)+2(2l-1)+1} ( 2n - 2)\nonumber\\ &=& {2G_{0}^{7}( 1 + G_{0}^{6})\over{( 1 - G_{0}^{4})^{4}}} - { 2 G_{0}^{7} + G_{0}^{13} \over{( 1 - G_{0}^{4})^{3}}} \end{eqnarray} Thus we get \begin{eqnarray} \Sigma_{b} &=& \Sigma_{b1} + \Sigma_{b2}\nonumber\\ &=& {2G_{0}^{7}( 1 + G_{0}^{6})\over{(1 - G_{0}^{4})^{4}}} - {G_{0}^{7}\over{(1 - G_{0}^{4})^{3}}} \end{eqnarray} Adding the two terms $\Sigma_{a}$ and $\Sigma_{b}$, we have \begin{equation} \Sigma = {G_{0}^{7}\over{(1 + G_{0}^{2})^{2}( 1 - G_{0}^{2})^{4}}} \end{equation} where $G_{0}$ is the one-point Green's function (2.9) evaluated to leading order in the large $N$ limit. After including an extra factor $1/(1 - G_{0}^{2})$ for the external legs, we obtain the one-point Green's function to order $1/N^2$ \begin{equation} G(z) = G_{0} + {1\over {N^{2}}}{G_{0}^{7}\over{(1 + G_{0}^{2})^{2} (1 - G_{0}^{2})^{5}}} + O({1\over{N^{4}}}). \end{equation} Since $G_{0}(z)$ goes to $(-i)$ as $z\rightarrow 0$, the factor $1/(1 + G_{0}^{2})$ diverges. Using $ 1 + G_{0}^{2} = z G_{0}$, we have \begin{equation} G(z) = G_{0}(z) + {1 \over{ N^{2} z^{2} ( z^{2} - 4 )^{5/2}}} + O({1\over N^4}) \end{equation} Thus, to order $1/N^{2}$, the Green's function $G(z)$ has a singular imaginary part $ i/(32 N^{2} z^{2})$ as $z\rightarrow 0$. This singularity is related to the reflection symmetry (parity symmetry) as mentioned earlier. The density of state diverges like $\rho(E) \rightarrow 1/(32\pi N^{2}E^{2})$. 
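The simplification of the $1/N^2$ term uses the identities $1+G_0^2 = zG_0$ and $1-G_0^2 = \sqrt{z^2-4}\,G_0$, both of which follow from the explicit leading-order form of $G_0$. A quick numerical check (ours, at an arbitrary real $z>2$ where everything is real):

```python
import numpy as np

z = 3.7  # any real z > 2 keeps all quantities real
G0 = (z - np.sqrt(z**2 - 4)) / 2
# the two algebraic identities used in the reduction
assert abs((1 + G0**2) - z * G0) < 1e-12
assert abs((1 - G0**2) - np.sqrt(z**2 - 4) * G0) < 1e-12
# hence the 1/N^2 term collapses to 1/(z^2 (z^2-4)^{5/2})
lhs = G0**7 / ((1 + G0**2)**2 * (1 - G0**2)**5)
rhs = 1 / (z**2 * (z**2 - 4)**2.5)
assert abs(lhs - rhs) < 1e-12 * abs(rhs)
```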
Apparently, this double pole is too singular, since the integral of the density of state should be one. We consider next the connected two-point correlation function $\rho_{2c}(z,w)$, which may be obtained from the connected two-point Green's function $G_{2c}(z,w)$ as shown in [1,2]. We have from a diagrammatic analysis [2-4], \begin{eqnarray} &&N^{2}G_{2c}(z,w) = - {1\over 4}{\partial \over{\partial w}}{\partial \over{\partial z}} {\rm Log} [ 1 - G^{2}(z) G^{2}(w) ]{\nonumber}\\ &=& ({1\over{G(z)G(w)}} - G(z)G(w))^{-2}{1\over{G(z)G(w)}} ({\partial G(z) \over{\partial z}})({\partial G(w)\over{\partial w}}) \end{eqnarray} This leads to \begin{equation} N^{2}G_{2c}(z,w) = {1\over{4(z^{2} - w^{2})^{2}}} [{2 z^{2}w^{2} - 4 z^{2} - 4 w^{2} \over{\sqrt{z^{2} - 4} \sqrt{w^{2} - 4}}} - 2 z w ] \end{equation} The two-point correlation function $\rho_{2c}(z,w)$ agrees with the universal behavior in the short distance $z \rightarrow w$ limit: \begin{equation} \rho_{2c}(z,w) = - {1\over{2 \pi^{2} N^{2} (z - w)^{2} }} \end{equation} By the equation of motion method, the complex matrix model can be studied using a recursion approach order by order in $1/N^{2}$ [11]. Pole terms appear in each order of $1/N^{2}$. The one-point Green's function diverges in the $k$-th order as \begin{equation} \delta G(z) \sim {c_{k}\over{N^{2k}z^{2k}}} \end{equation} Since the connected two-point correlation function in the Hermitian matrix model does not have a double pole in the orthogonal polynomial approach, let us use the orthogonal polynomial approach to investigate whether we have the same situation for the one-point Green's function $G(z)$ in the limit $z \rightarrow 0$.
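The passage from the logarithmic derivative form of $N^2 G_{2c}$ to the explicit closed form can be verified numerically, using $G_0'(z) = -G_0(z)/\sqrt{z^2-4}$, which follows from the explicit leading-order $G_0$ (an illustrative check of ours at arbitrary real points outside the cut):

```python
import numpy as np

def G0(z):
    return (z - np.sqrt(z**2 - 4)) / 2

def dG0(z):  # derivative of G0, equal to -G0(z)/sqrt(z^2 - 4)
    return -G0(z) / np.sqrt(z**2 - 4)

z, w = 3.0, 4.0  # real points outside the cut [-2, 2]
# the second line of the eqnarray above, with G = G0
lhs = G0(z) * dG0(z) * G0(w) * dG0(w) / (1 - G0(z)**2 * G0(w)**2)**2
# the explicit closed form that follows
sz, sw = np.sqrt(z**2 - 4), np.sqrt(w**2 - 4)
rhs = ((2*z**2*w**2 - 4*z**2 - 4*w**2) / (sz * sw) - 2*z*w) / (4 * (z**2 - w**2)**2)
assert abs(lhs - rhs) < 1e-10 * abs(rhs)
```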
\vskip 5mm \sect{ Orthogonal polynomial analysis } Since $\varphi^{\dagger}\varphi$ can be regarded as a Hermitian matrix with positive eigenvalues, we can write the joint probability distribution in terms of its positive eigenvalues $\varepsilon_{i}$ [19,20] \begin{equation} P_{N}(\varepsilon_{1},...,\varepsilon_{N})d \varepsilon_{1} \cdots d \varepsilon_{N} = C e^{- N \sum \varepsilon_{i}}\Pi_{i<j} (\varepsilon_{i} - \varepsilon_{j})^{2}d \varepsilon_{1}\cdots d \varepsilon_{N} \end{equation} with $C$ a normalization constant. The relevant orthogonal polynomials are the Laguerre polynomials $L_{n}(x)$, where $L_{0}(x) = 1$, $L_{1}(x) = 1 - x$, and $L_{2}(x) = 1 - 2 x + x^{2}/2$. The density of state $\rho(\varepsilon)$ and the two-point connected correlation function $\rho_{2c}(\varepsilon,\varepsilon')$ are given in terms of the kernel $K(\varepsilon,\varepsilon')$ [1] as \begin{equation} \rho(\varepsilon) = K(\varepsilon,\varepsilon) \end{equation} \begin{equation} \rho_{2c}(\varepsilon,\varepsilon') = - [K(\varepsilon,\varepsilon')]^{2} \end{equation} where \begin{equation} K(\varepsilon,\varepsilon') = {1\over {N}} \sum_{0}^{N-1}\psi_{n}(\varepsilon)\psi_{n}(\varepsilon') \end{equation} and \begin{equation} \psi_{n}(\varepsilon) = e^{-N\varepsilon/2}L_{n}(N\varepsilon) \end{equation} Noting that $\varepsilon = E^{2}$ in our terminology, we have an extra factor of $E$ for the density of state since $d\varepsilon = 2 E dE$. At $E=0$, all Laguerre polynomials are equal to one, so the density of state should vanish at $E = 0$ due to this extra factor $E$, in accordance with the reflection symmetry argument given earlier. This result for the density of state at $E = 0$ seems to contradict the result we obtained in the previous section. But there is no contradiction. In the previous section, we took the large $N$ limit first, so that the density is effectively smoothed over a certain width. Here we take $N$ large but fixed and let $E$ go to zero, and obtain $\rho(E) = 0$.
We see the hole described earlier. This non-commutativity of the two limits discussed here is similar to the discussion of the two-point connected correlation function of a Hermitian matrix model [1, 2]. Using the Christoffel-Darboux identity, we have a compact expression for the density of state, \begin{eqnarray} \rho(\varepsilon) &=& e^{-N\varepsilon} \sum_{k=0}^{N-1} L_{k}^{2}(N\varepsilon)\cr &=& N e^{-N\varepsilon} [ L_{N}(N\varepsilon) L_{N-1}^{'}(N\varepsilon) - L_{N-1}(N\varepsilon) L_{N}^{'}(N\varepsilon) ]. \end{eqnarray} Since $ \varepsilon = E^{2}$, the density of state $\rho(E)$ is \begin{equation} \rho(E) = 2 E e^{- NE^{2}} \sum_{k=0}^{N-1} L_{k}^{2}(NE^{2}) \end{equation} In Fig. 2, this density of state is shown for $N = 5$ and $N = 10$. There appear $N$ oscillations in the density of state, and the first peak near $E = 0$ remains finite as $N \rightarrow \infty$. The ratio of the value of the first peak to that of the second one is about 1.2. In Fig.2, the dotted line represents the semi-circle behavior in the large N limit given by $\sqrt{ 4 - E^2} / \pi$. When the oscillating part is averaged smoothly, we obtain the correction to the semi-circle law of order $1/N^2$ in the density of state, \begin{equation} \Delta \rho = < \rho(E) > - {1\over \pi} \sqrt{ 4 - E^2 } \end{equation} It is useful to write the asymptotic expression in terms of Bessel functions. Knowing the large $N$ behavior (for small $E$) of the Laguerre polynomials, we can do so. Remarkably, the oscillating part near $E = 0$ in Fig.2 is approximated by \begin{equation} \rho(E) \simeq 2NE [J_{0}^{2}(2NE) + J_{1}^{2}(2NE)] \end{equation} By plotting the oscillating curve of this equation, we find that the first peak near $E = 0$ is almost the same as the first peak value of (3.7). The ratio of the first peak to the second one is also about 1.2, as mentioned before.
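Both the normalization of the finite $N$ density and the quality of the Bessel approximation near $E=0$ can be checked numerically (an illustrative sketch of ours; the values of $N$ and $E$ are arbitrary sample choices):

```python
import numpy as np
from scipy.special import eval_laguerre, j0, j1
from scipy.integrate import quad

N = 20

def rho(E):  # exact finite-N density of state, one-sided in E
    x = N * E**2
    return 2 * E * np.exp(-x) * sum(eval_laguerre(k, x)**2 for k in range(N))

# the one-sided density integrates to one
total, _ = quad(rho, 0, np.inf, limit=300)
assert abs(total - 1.0) < 1e-6

# near E = 0 it is well approximated by the Bessel form
E = 0.1
approx = 2 * N * E * (j0(2 * N * E)**2 + j1(2 * N * E)**2)
assert abs(rho(E) - approx) < 0.2 * approx
```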
Now we use Hankel's asymptotic expansion of the Bessel functions $J_{0}(t)$, $J_{1}(t)$ for large $t = 2NE$: \begin{eqnarray} J_{0}(t) &=& \sqrt{{2\over{\pi t}}}[P(0,t){\rm cos} (t - {\pi\over{4}}) - Q(0,t){\rm sin}(t - {\pi\over{4}})],\nonumber\\ J_{1}(t) &=& \sqrt{2\over{\pi t}} [ P(1,t){\rm cos}(t - {3\pi\over{4}}) - Q(1,t){\rm sin}(t - {3\pi\over{4}})] \end{eqnarray} where \begin{eqnarray} P(l,t) &=& \sum_{k=0}^{\infty} (- 1)^{k}{(l,2k)\over{(2t)^{2k}}}\nonumber\\ &\sim& 1 - {(4l^2 - 1) (4l^2 - 9)\over{128 t^2}} + \cdots\nonumber\\ Q(l,t) &=& \sum_{k=0}^{\infty} (-1)^{k} {(l,2k+1)\over{ (2t)^{2k+1}}}\nonumber\\ &\sim& {4l^2-1 \over{8t}} + \cdots\nonumber\\ (l,m) &=& {\Gamma({1\over{2}} + l + m)\over{m! \Gamma({1\over{2}}+ l -m)}} \end{eqnarray} We take the smooth average of the oscillating part by setting $< {\rm sin}^2 (x) > = < {\rm cos}^2 (x) > = {1\over {2}}$. Then it is easy to find that the density of state becomes \begin{equation} \rho^{\rm smooth}(E) = {2\over{\pi}}( 1 + {1\over{32N^{2}E^{2}}} - {9\over{2048N^{4}E^{4}}} + \cdots ). \end{equation} The term of order $1/N^{2}$ agrees with the result obtained in the previous section by the diagrammatic method. In this approximation, the leading term is simply ${2\over{\pi}}$ instead of ${\sqrt{4 - E^2}/{\pi}}$, since we work near $E=0$. Thus we find that the singular double pole at order $1/N^{2}$ is in a sense spurious: it appears because we took the large $N$ limit first, and it is recovered only after the smooth average [1]. The density of state does not diverge for $E \rightarrow 0$. Recently the density of state of this complex matrix model has been studied in various contexts, and related expressions in terms of Bessel functions have been discussed [22]. We note that in the case of the Hermitian matrix model, the Bessel function is of half-integer order, and consequently we have no poles in the $1/N$ expansion.
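The smoothing step above is purely algebraic and can be reproduced symbolically: inserting the truncated $P$ and $Q$ series for $l=0,1$, setting $\langle\cos^2\rangle=\langle\sin^2\rangle=1/2$, and dropping the cross terms gives $(2/\pi)(1+1/(8t^2))$ for the smoothed $t(J_0^2+J_1^2)$, which becomes the $1/(32N^2E^2)$ correction at $t=2NE$ (a consistency check of ours, using sympy):

```python
import sympy as sp

t, N, E = sp.symbols('t N E', positive=True)
# Hankel coefficients for l = 0, 1, to the order displayed above
P0 = 1 - sp.Rational(9, 128) / t**2
Q0 = -sp.Rational(1, 8) / t
P1 = 1 + sp.Rational(15, 128) / t**2
Q1 = sp.Rational(3, 8) / t
# smooth average: <cos^2> = <sin^2> = 1/2, cross terms average to zero
smooth = (P0**2 + Q0**2 + P1**2 + Q1**2) / sp.pi
target = (2 / sp.pi) * (1 + 1 / (8 * t**2))
# the two agree through order 1/t^2
assert sp.limit((smooth - target) * t**2, t, sp.oo) == 0
# with t = 2NE this is exactly the 1/(32 N^2 E^2) term of rho^smooth
diff = target.subs(t, 2 * N * E) - (2 / sp.pi) * (1 + 1 / (32 * N**2 * E**2))
assert sp.simplify(diff) == 0
```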
The double pole $1/x^{2}$ of the connected two-point correlation function is cancelled by a factor ${\rm sin^{2}} (x)$ as $x \rightarrow 0$ [1]. Here we have a different situation, since ${\rm sin^{2}} (x - {\pi\over{4}})$ does not vanish for $x \rightarrow 0$. We must sum the leading pole terms of the expansion above, and this eventually gives a finite result for $E \rightarrow 0$ at fixed large $N$. \sect {Lattice of matrices} We now extend our analysis to the general $d$-dimensional lattice of matrices [3]. We place $N\times N$ matrices on the lattice. The gluon propagator is given by $\sigma_{\alpha \beta}$ defined by \begin{equation} \sigma_{\alpha \beta} = {1\over{M_{\alpha \beta}^{2}}} \end{equation} Here $M_{\alpha \beta}$, as defined in [3], is a real symmetric matrix whose entries are the analogs of $m_1^2$, $m_2^2$, and $m^2$ in (2.6). The indices $\alpha$ and $\beta$, which label the lattice sites, run from $1$ to $C$, where $C$ denotes the number of sites on the lattice. In leading order, the one-point Green's function is given by $G(z) = {1\over N} \sum_{\alpha} g_{\alpha}$ where $g_{\alpha}$ is determined by \begin{equation} g_{\alpha} = {1\over{ E - \sum_{\beta} \sigma_{\alpha \beta} g_{\beta}}} \end{equation} Let us restrict ourselves to a $d$-dimensional hypercubic lattice, on which a quantum particle hops with hopping matrix given by $\sigma$. We introduce $\varepsilon(k)$ by [3] \begin{equation} \sigma_{\alpha \gamma} = \sum_{k}< \alpha\vert k > \varepsilon (k) < k \vert \gamma > \end{equation} For the case of nearest neighbor hopping, we have \begin{equation} \varepsilon (k) = {1\over{m^{2}}} + {2\over{M^{2}}} \sum_{a} {\rm cos} k_{a} \end{equation} Below we will calculate the one-point Green's function for the most general case with arbitrary $ \varepsilon (k)$. For specific examples, we often take for simplicity the case $m^2 = \infty$ and $M^2 = 2$. We will now calculate the one-point Green's function to order $1/N^{2}$.
We consider here the general situation defined by some $\sigma$ matrix. In particular, for the simple example given in (2.5) we include the diagonal part of the Hamiltonian. The self-energy part to order $1/N^{2}$ is obtained from the diagrams $D_{a}$ and $D_{b}$, where the numbers of rungs on the ladders are no longer restricted as they were before. We obtain \begin{equation} \Sigma_{a} = G [({\sigma G^{2}\over{1 - \sigma G^{2}}})_{\alpha \alpha}]^{2} + G [{\sigma^{2}G^{4}\over{(1 - \sigma G^{2})^{2}}}]_{\alpha \alpha} ({\sigma G^{2}\over{1 - \sigma G^{2}}})_{\beta \beta} \end{equation} Note that repeated indices are not summed unless indicated otherwise. By translation invariance this expression is actually independent of $\alpha$ and $\beta$. For the self-energy $\Sigma_{b}$, we have two parts $\Sigma_{b1}$ and $\Sigma_{b2}$. We obtain for $\Sigma_{b1}$ \begin{equation} G \sum_{\gamma} (\sigma^{n})_{\gamma \alpha} (\sigma^{m})_{ \alpha \gamma} (\sigma^{k})_{\gamma \alpha} G^{2 n} G^{2 m} G^{2 k} = G \sum_{\gamma} [ ({\sigma G^{2}\over{1 - \sigma G^{2}}})_{ \alpha \gamma}]^{3} \end{equation} where $n$, $m$ and $k$ are the numbers of gluon propagators. The other part $\Sigma_{b2}$, obtained by cutting one quark propagator inside the ladder, becomes \begin{eqnarray} &&G \sum_{\beta \gamma} \sum_{n,m,j,k}^{\infty} (\sigma^{m})_{\beta \gamma} (\sigma^{n})_{\gamma \alpha} (\sigma^{k})_{\gamma \beta} (\sigma^{j})_{\beta \alpha} G^{2n} G^{2m} G^{2j} G^{2k}\nonumber\\ &=& G \sum_{\beta \gamma} [ ( {\sigma G^{2}\over{1 - \sigma G^{2}}} )_{\beta \gamma}]^{2} ({\sigma G^{2} \over{1 - \sigma G^{2}}} )_{\alpha \gamma} ({\sigma G^{2}\over{1 - \sigma G^{2}}} )_{\alpha \beta} \end{eqnarray} Note the rather unusual way in which the indices or site labels are arranged. Let us check these expressions for the case $C = 2$ discussed earlier.
We set \begin{equation} \sigma = \left (\matrix{ 0 & 1\cr 1& 0}\right ) \end{equation} The first term in (4.5) becomes \begin{equation} ( {\sigma G^{2}\over{1 - \sigma G^{2}}} )_{\alpha \alpha } = {1\over{2}}( {G^{2}\over{ 1 - G^{2}}} - {G^{2} \over { 1 + G^{2}}} ) = {G^{4}\over{ 1 - G^{4}}} \end{equation} and \begin{equation} ( ( {\sigma G^{2}\over{1 - \sigma G^{2}}} )^{2} )_{\alpha \alpha} = {1\over {2}} ( {G^{4}\over{( 1 - G^{2}) ^{2}}}+ {G^{4}\over{( 1+ G^{2})^{2}}} ) = {G^{4}( 1+ G^{4})\over {( 1 - G^{4} )^{2}}} \end{equation} The factors $1/2$ in (4.7) and (4.8) are necessary for a fixed $\alpha$ (which we do not sum over). Thus we get the same result for the self-energy $\Sigma_{a} = 2 G^{9}/(1 - G^{4})^{3}$ as was given earlier (2.11). As for $\Sigma_b$, we note \begin{eqnarray} &&(\sigma^{n})_{\alpha \gamma} = \delta_{\alpha \gamma} {\rm (n = even)} \nonumber\\ && (\sigma^{n})_{\alpha \gamma} = \sigma_{\alpha \gamma} {\rm (n=odd)} \end{eqnarray} Using this property, we see that the numbers of gluon propagators $n,m,k$ or $n+j,m,k$ should be either all even or all odd. Thus we obtain the previous result for $\Sigma_{b}$ as given in (4.9) and (4.10). \vskip 5mm We can now immediately go to a $d$-dimensional lattice on which matrices are placed by inserting the $\sigma$ given in (4.3). Thus, we obtain \begin{eqnarray} ({\sigma G^{2}\over {1 - \sigma G^{2}}})_{\alpha \beta} &=& <\alpha \vert {\sigma G^{2}\over{1 - \sigma G^{2}}}\vert \beta >\nonumber\\ &=& \sum_{k} <\alpha \vert k><k \vert {\sigma G^{2}\over{ 1 - \sigma G^{2}}}\vert k><k\vert \beta>\nonumber\\ &=& {1\over C}\sum_{k} e^{i k(\alpha - \beta)} ({\varepsilon_{k} G^{2}\over{1 - \varepsilon_{k} G^{2}}}) \end{eqnarray} where we have used $<\alpha \vert k><k\vert \beta> = {1\over C}e^{i k(\alpha - \beta)}$.
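The two-site identities (4.7) and (4.8) amount to a small matrix computation and can be reproduced directly (an illustrative check of ours; $G^2$ is given an arbitrary numerical value):

```python
import numpy as np

G2 = 0.3  # stands for G^2; any value with |G^2| < 1
sigma = np.array([[0.0, 1.0], [1.0, 0.0]])
M = sigma * G2 @ np.linalg.inv(np.eye(2) - sigma * G2)
# diagonal entry reproduces G^4/(1 - G^4), as in (4.7)
assert abs(M[0, 0] - G2**2 / (1 - G2**2)) < 1e-12
# and the diagonal of the square reproduces G^4(1 + G^4)/(1 - G^4)^2, as in (4.8)
M2 = M @ M
assert abs(M2[0, 0] - G2**2 * (1 + G2**2) / (1 - G2**2)**2) < 1e-12
```

Here $G^4/(1-G^4)$ and $G^4(1+G^4)/(1-G^4)^2$ appear as $G2^2/(1-G2^2)$ and $G2^2(1+G2^2)/(1-G2^2)^2$ since the variable $G2$ plays the role of $G^2$.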
Using this diagonalized representation, we obtain for the different parts of the self-energy the following expressions: \begin{equation} \Sigma_{a1} = G [ {1\over {C}}\sum_{k} {\varepsilon_{k} G^{2}\over { 1 - \varepsilon_{k}G^{2}}}]^{2} \end{equation} \begin{equation} \Sigma_{a2} = G ({1\over{C}}\sum_{k} {\varepsilon_{k}^{2}G^{4}\over { (1 - \varepsilon_{k}G^{2})^{2}}})({1\over{C}}\sum_{p}{\varepsilon_{p}G ^ {2} \over{1 - \varepsilon_{p}G^{2}}}) \end{equation} \begin{equation} \Sigma_{b1} = G \sum_{\gamma}[{1\over{C}} \sum_{k}e^{ik(\alpha - \gamma)} ({\varepsilon_{k}G^{2}\over{1 - \varepsilon_{k}G^{2}}})]^{3} \end{equation} \begin{eqnarray} \Sigma_{b2} &=& G \sum_{\beta,\gamma}[ ({1\over{C}}\sum_{k}{ \varepsilon_{k}G^{2}\over{1 - \varepsilon_{k}G^{2}}}e^{ik(\beta - \gamma)})^{2} ({1\over{C}}\sum_{q}{\varepsilon_{q}G^{2}\over{1 - \varepsilon_{q}G^{2}}} e^{iq(\alpha - \gamma)})\nonumber\\ &\times&({1\over{C}}\sum_{p}{\varepsilon_{p}G^{2}\over{1 - \varepsilon_{p}G^{2}}}e^{ip(\alpha - \beta)})] \end{eqnarray} where $C$ is the number of lattice points, $C=L_{1}\cdots L_{d}$. We note that by translation invariance $\Sigma_{b1, b2}$ are in fact independent of $\alpha$. \vskip 5mm We can easily recover our previous results of course. For the two-site case we have $ k=0$ and $k=\pi$, and $\beta, \gamma = 1,2$ with a fixed $\alpha=1$. For instance, noting that $\varepsilon_{k=0}=1$, and $ \varepsilon_{k=\pi}= -1$, we get immediately \begin{eqnarray} \Sigma_{b1} &=& {G\over{8}} \sum_{\gamma=1}^{2} ( {G^{2}\over{ 1 - G^{2}}} - e^{i\pi(1 - \gamma)}{G^{2}\over{1 + G^{2}}})^{3}\nonumber\\ &=& {G^{7}( 1 + G^{6})\over{( 1 - G^{4})^{3}}} \end{eqnarray} in agreement with (2.12). Similarly, we find easily that $\Sigma_{b2}$ calculated here agrees with (2.13). We have no divergence for the 3-site lattice, where $k = 0, {2\pi\over {3}},{4\pi\over{3}}$ and $\varepsilon_{k}= 1, - {1\over{2}}, - {1\over{2}}$, respectively. Similarly for lattices with an odd number of sites.
In contrast, for a one-dimensional lattice with the number of sites $C=L=$ an even integer, we have $k = \pi$ and $\varepsilon_{k=\pi} = -1$, and thus we get a divergence when $G^{2}=- 1$. For $L\rightarrow \infty$, we find \begin{eqnarray} \Sigma_{a1} &=& G [ {1\over{C}}\sum_{k} {\cos (k) G^{2}\over { 1 - \cos (k) G^{2}}}]^{2}\nonumber\\ &=& G{(1 - \sqrt{1 - G^{4}})^{2}\over{1 - G^{4}}} \end{eqnarray} \begin{equation} \Sigma_{a2} = G {(2G^{4} - 1 + ( 1 - G^{4})^{3/2})(1 - \sqrt{1 - G^{4}}) \over{ ( 1 - G^{4})^{2}}} \end{equation} where $k=2\pi (j-1)/L$, $j=1,\cdots, L$. For the self-energy parts $\Sigma_{b1},\Sigma_{b2}$, the sums over $\beta$ and $\gamma$ give complications. But the leading singularity of $\Sigma_{b2}$ cancels exactly the singularity of $\Sigma_{a2}$ in (4.20). The singularity of $\Sigma_{b1}$ at $G^{2}=-1$ is the same as the singularity of $\Sigma_{a1}$. Thus we find that the singularity for the one-dimensional case at the band center $G^{2}=-1$ is a single pole divergence in order $1/N^{2}$. The cancellation of the leading singularity of $\Sigma_{b2}$ with $\Sigma_{a2}$ holds for any dimension and coincides with the cancellation found in ref.[5] for the lowest Landau level. We have also verified this cancellation by the numerical evaluation of the self-energy for large $L$ near $G^{2}= -1$. The singularity $1/ N^{2}\vert E \vert$ in the density of state as $E \rightarrow 0$ is too strong, since the integral of the density of state should be equal to one. In one dimension, the off-diagonal disorder case of the tight binding model, in which the hopping matrix is real and $N = 1$, is known to have a singularity of $1/\vert E \vert ( \ln \vert E \vert )^{3}$ in the density of state near $E=0$ [14]. In Fig.3, the density of state of a finite chain with real hopping matrix for $N=1$ is evaluated for a box distribution.
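The closed form quoted for $\Sigma_{a1}$ rests on the $k$-average $\frac{1}{2\pi}\int dk\, \frac{x\cos k}{1-x\cos k} = \frac{1-\sqrt{1-x^2}}{\sqrt{1-x^2}}$ with $x=G^2$, which can be checked numerically against the discrete sum for large $L$ (an illustrative check of ours, with an arbitrary real sample value of $G^2$):

```python
import numpy as np

G2 = 0.5  # a sample real value of G^2 with |G^2| < 1
L = 4096
k = 2 * np.pi * np.arange(L) / L
s = np.mean(np.cos(k) * G2 / (1 - np.cos(k) * G2))
closed = (1 - np.sqrt(1 - G2**2)) / np.sqrt(1 - G2**2)
assert abs(s - closed) < 1e-10
# Sigma_a1 = G * s**2 then reproduces G (1 - sqrt(1 - G^4))^2 / (1 - G^4)
```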
We consider the nearest neighbour random hopping, which is represented by the tridiagonal matrix $M$, \begin{equation} M = \left(\matrix{0&b_{1}^{*}& 0 & \dots\cr b_{1}& 0& b_{2}^{*}& 0& \dots\cr 0& b_{2}& 0 & b_{3}^{*} & \dots \cr 0 & 0& b_{3} & 0 & b_{4}^{*} \cr & & \dots \cr} \right). \end{equation} By calculating the eigenvalue density of this matrix, we obtain the density of state curve. The random variable $b_{i}$ is generated 5000 times, and the histogram of the eigenvalues is evaluated. From these calculations, we see the divergent singularity near $E = 0$. The singularity is consistent with $1/E$ behavior, although it is difficult to see the existence of the logarithmic correction from this calculation. For the complex hopping case, in which the coupling $b_{i}$ in (4.20) is a complex random number, the density of state shows oscillations similar to Fig.2. We evaluate this complex case in Fig. 4. The first peak near $E = 0$ is finite for $L \rightarrow \infty$, where $L$ is the length of the chain. It may also be interesting to note that similar behavior appears in different examples. For the sparse random matrix, the density of state shows a singularity $1/\vert E \vert (\ln \vert E \vert )^{2}$ instead of $1/\vert E \vert (\ln \vert E \vert )^{3}$ [15]. Another example was studied by Br\'ezin, Gross and Itzykson, who obtained the same singularity $1/\vert E \vert (\ln (\vert E \vert ))^{2}$ in the density of state for the lowest Landau level with a random Poisson distribution [16]. For the two-dimensional case, the self-energy $\Sigma_{a1}$ behaves like $[\ln ( 1 + G^{2})]^{2}$ and $\Sigma_{a2}$ like $\ln (1 +G^{2})/(1 + G^{2})$. This singularity is cancelled by a singularity in $\Sigma_{b2}$. We find that the singularity of the self-energy is proportional to $[\ln (1 + G^{2})]^{2}$ near the band center. This coincides with the calculation of the two-site lowest Landau level result in ref. [5].
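The finite-chain computation behind Fig. 3 can be sketched as follows (our illustrative version with fewer realizations than the paper's 5000; the spectrum of each realization is symmetric in $E \rightarrow -E$ because the chain is bipartite):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 10  # chain length, as in Fig. 3
samples = []
for _ in range(2000):
    b = rng.uniform(-1, 1, size=L - 1)  # real box-distributed hoppings
    M = np.diag(b, 1) + np.diag(b, -1)
    samples.append(np.linalg.eigvalsh(M))
E = np.sort(np.concatenate(samples))
# chiral symmetry of the chain: the pooled spectrum is symmetric in E -> -E
assert np.allclose(E, -E[::-1])
# a histogram of E would then reproduce the density-of-state curve of Fig. 3
```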
It is remarkable that we have the same singularity as in the two-state (spin up and down) degenerate quantum Hall case studied in [5]. We consider this a manifestation of the possibility that these two models may belong to the same universality class. The integral of the density of state with this logarithmic square singularity is finite. The exact singularity of the off-diagonal tight binding model is not known. Numerical work [9] for the two-state lowest Landau level model also shows a singularity at the band center. Moreover, the logarithmic square singularity seems to be applicable in a region near $E = 0$ according to a recent numerical study for the lowest Landau level, although the density of state does not diverge for $E \rightarrow 0$ [23]. The state of $E=0$ in the two-dimensional case is related to the zero energy wave function, for which the Atiyah-Singer index theorem can be applied. In this respect, one can perhaps relate the present problem to other interesting problems [24,25]. For the two-dimensional case, we have also evaluated the density of state of the off-diagonal disorder (N = 1). We examined the real and complex cases, similarly to the one-dimensional case. For the real case, the density of state seems divergent at $E = 0$ and consistent with [26]. However, the density of state of the complex case, in which the nearest neighbour hopping matrix element is complex, shows behavior similar to that of the complex matrix model (Fig.2). The first peak near $E=0$ is finite. \sect{Discussion} In this paper, we have discussed the singularity of the density of state in the large $N$ limit of a complex matrix model as well as its $d$-dimensional lattice generalization. We have found the singularity in order $1/N^{2}$. We have compared our result with known results in $d=0$ and $d=1$. For $d=0$, the double pole singularity we obtained is reminiscent of the spurious double pole of the two-point correlation function in the Hermitian matrix model.
For $d=1$, we compared our result with the off-diagonal tight binding model. Our result differs by the logarithmic factor. For $d=2$, we have obtained the logarithmic square singularity, which coincides with the result of [5] for the $N$-orbital two-state lowest Landau level quantum Hall model. We interpret this coincidence of the singularity as a manifestation of the possibility that these models belong to the same universality class. Although our calculation is restricted to order $1/N^2$, the result of the singularity of the density of state gives a clue to what the true behavior might be. It may be interesting to evaluate the Green's functions to order $1/N^4$. For the spin degenerate two-state quantum Hall system, the numerical simulation suggests that there are three extended states [9]. In the $1/N$ expansion, the singularity $( {\rm ln} E )^{2}/N^{2}$ has been obtained for the density of state [5]. However, as we discussed in this paper, this is interpreted as the result after the smooth average. The density of state is considered to be finite for $E \rightarrow 0$. It is also important to note that the density of state has a nonvanishing value for $E \rightarrow 0$. Then, as discussed in [5], the conductivity is exactly given by $\sigma_{xx} = e^2/\pi h$ since the parameter $\theta$ in [5], defined by $\theta = - {\rm tan}^{-1}({\rm Im}G(z)/ {\rm Re}G(z))$, becomes $- \pi/2$. \vskip 5mm \begin{center} {\bf Acknowledgement} \end{center} We thank Edouard Br\'ezin for discussions about the formulation of lattices of matrices. SH thanks K. Minakuchi for discussion of his numerical results. AZ would like to thank D. Arovas for mentioning that the lattices of matrices studied in [3] may be relevant for impurity scattering in spin-dependent quantum Hall systems. He would also like to thank M. Kohmoto for hospitality at the Institute for Solid State Physics, University of Tokyo, where this work was initiated.
This work is supported in part by the National Science Foundation under Grant No. PHY89-04035, and by CNRS-JSPS cooperative research project. \newpage \begin{center} {\bf References } \end{center} \vskip 3mm \begin{description} \item[{[1]}] E. Br\'ezin and A. Zee, Nucl. Phys. {\bf B402} (1993), 613. \item[{[2]}] E. Br\'ezin and A. Zee, Phys. Rev. {\bf E49} (1994) 2588. \item[{[3]}] E. Br\'ezin and A. Zee, "Lattice of matrices," Santa Barbara preprint (1994) NSF-ITP-94-75. \item[{[4]}] E. Br\'ezin, S. Hikami and A. Zee, Phys. Rev. E (1995), in press (hep-th/9412230). \item[{[5]}] S. Hikami, M. Shirai and F. Wegner, Nucl. Phys. {\bf B408} (1993) 415. \item[{[6]}] R. M. Gade, Nucl. Phys. {\bf B398} (1993), 499. \item[{[7]}] D. K. K. Lee, Phys. Rev. {\bf B50} (1994), 7743. \item[{[8]}] A. W. W. Ludwig, M. P. A. Fisher, R. Shankar and G. Grinstein, Phys. Rev. {\bf B50} (1994), 7526. \item[{[9]}] C. B. Hanna, D. P. Arovas, K. Mullen and S. M. Girvin, preprint (1994), cond-mat/9412102. \item[{[10]}] F. J. Wegner, Phys. Rev. {\bf B19} (1979), 783. \item[{[11]}] Yu. M. Makeenko, JETP-Lett. {\bf 52} (1990), 259.\\ J. Ambj\o rn, C. F. Kristjansen and Yu. M. Makeenko, Mod. Phys. Lett. {\bf A7} (1992), 3187. \item[{[12]}] R. Oppermann and F. Wegner, Z. Phys. {\bf B34} (1979), 327. \item[{[13]}] S. Hikami, Prog. Theor. Phys. {\bf 72} (1984), 722; Prog. Theor. Phys. {\bf 76} (1986), 1210. \item[{[14]}] G. Theodorou and M. H. Cohen, Phys. Rev. {\bf B13} (1976), 4597.\\ T. P. Eggarter and R. Riedinger, Phys. Rev. {\bf B18} (1978), 569. \item[{[15]}] J. Rodgers and C. De Dominicis, J. Phys. {\bf A23} (1990) 1567. \item[{[16]}] E. Br\'ezin, D. J. Gross and C. Itzykson, Nucl. Phys. {\bf B235} (1984), 24. \item[{[17]}] X. G. Wen and A. Zee, Phys. Rev. Lett. {\bf 69} (1992) 953, 3600(E), Phys. Rev. {\bf B47} ( 1993) 2265.\\ X. G. Wen and A. Zee, " A phenomenological study of interlayer tunneling in double-layered quantum Hall systems", MIT-SBITP preprint. \item[{[18]}] K. Yang, K.
Moon, L. Zheng, A. H. MacDonald, S. M. Girvin, D. Yoshioka and S-C. Zhang, Phys. Rev. Lett. {\bf 72} (1994) 732. \item[{[19]}] K. Slevin and T. Nagao, Phys. Rev. Lett. {\bf 70} (1993) 635. \item[{[20]}] B. V. Bronk, J. Math. Phys. {\bf 6} (1965), 228. \item[{[21]}] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions, National Bureau of Standards, (1972) Washington, P.364. \item[{[22]}] T. Nagao and K. Slevin, J. Math. Phys. {\bf 34} (1993) 2075.\\ A. V. Andreev, B. D. Simons and N. Taniguchi, Nucl. Phys. {\bf B432[FS]} (1994) 487. \item[{[23]}] K. Minakuchi, Master thesis, University of Tokyo (1995). \item[{[24]}] X. G. Wen and A. Zee, Nucl. Phys. {\bf B316} (1989), 641. \item[{[25]}] M. Inui, S. A. Trugman and E. Abrahams, Phys. Rev. {\bf B 49} (1994) 3190. \item[{[26]}] S. N. Evangelou, J. Phys. {\bf C 19} (1986), 4291. \end{description} \newpage \begin{center} {\bf Figure caption} \end{center} \vspace{5mm} \begin{description} \item[Fig.1a] The diagrams $D_{a}$ and $D_{b}$ of order $1/N^2$. \item[Fig.1b] The self-energy $\Sigma_{a1},\Sigma_{a2},\Sigma_{b1}$ and $\Sigma_{b2}$, which are obtained from $D_{a}$ and $D_{b}$ by cutting the solid lines. \item[Fig.2] The density of state of the complex matrix model for $N=5$ (broken line) and $N=10$ (solid line). The semi-circle law $\sqrt{4 - E^2}/\pi$ is also shown by a dotted line. \item[Fig.3] The density of state of a finite chain model (L = 10) with off-diagonal disorder. The hopping random variable $b_{i}$ is real and obeys the box distribution with $-1 < b_{i} < 1$. There is a divergence at $E = 0$. \item[Fig.4] The density of state of a finite chain model (L = 12) with off-diagonal disorder. The hopping random variable $b_{i}$ is complex. The real part and the imaginary part both obey the box distribution, which takes values between $-1$ and $1$ uniformly. \end{description} \end{document}
\section{Introduction} \label{S:Intro} An \emph{H-trivial link} of type $(m,n)$ is a link in $\Rr^3$ which is ambiently isotopic to the split union of $m$ Hopf links and $n$ trivial knots. When~$m=0$, it is a trivial link with $n$ components. H-trivial links are a generalization of trivial links, and play an important role in normal forms of immersed surface-links in~$\Rr^4$~\cite{Kamada-Kawamura:2017, Kamada-Kawauchi-Kim-Lee:2017}. A \emph{ring} in $\Rr^3$ is a circle in the strict Euclidean sense, \ie, a round circle on a plane in~$\Rr^3$. We call a link in $\Rr^3$ a \emph{ring link} if each component is a ring. The \emph{ring group} $R_n$ (of a trivial ring link with $n$ components) was introduced by Brendle and Hatcher~\cite{BrendleHatcher:2013} as the fundamental group of the space of all configurations of ring links which are equivalent, as ring links\footnote{ The original definition of $R_n$ in~\cite{BrendleHatcher:2013} is the fundamental group of the space of all configurations of ring links which are equivalent \emph{as links} to a trivial link with $n$ components. If a ring link is equivalent as a link to a trivial ring link, then it is equivalent as a ring link. This fact is asserted in~\cite{BrendleHatcher:2013}. }, to a trivial ring link with $n$ components. We generalize this notion to the ring group $R_{m,n}$ as the fundamental group of the space of all configurations of ring links which are equivalent, as ring links, to an H-trivial ring link of type~$(m,n)$. We give presentations of the ring groups $R_{m,n}$ for~$(m,n)= (0,1)$,~$(1,0)$, and~$(1,1)$. Some basic properties of the group $R_{m,n}$ are also given. The paper is structured as follows: in Section~\ref{S:generalities_motions} we give the basic definitions concerning ring motions, and we discuss some tools and properties.
In Section~\ref{S:Motion} we review known results about the ring group $\R\nn$ of a trivial link, discussing its relation with the motion group of a trivial link studied in~\cite{Dahm:Thesis} and~\cite{Goldsmith:MotionGroupsTrivial}, and recalling a complete presentation given in~\cite{BrendleHatcher:2013} (Proposition~\ref{P:BrendleHatcher}). In Section~\ref{S:Ring} we introduce an exact sequence for groups of ring motions of ring links (Proposition~\ref{P:ExactSequenceRing}) on which we rely to find presentations for many of the considered groups. In Section~\ref{S:Circle} we focus on the particular case of the ring group $\R1$ of just one ring. Here we give an alternative argument for the proof of its presentation (Lemma~\ref{L:RingCircle}). This serves as a strategy model for the case of the ring group $R_{1,0}$ of a Hopf link, treated in Section~\ref{S:Hopf} (Lemma~\ref{L:PresentationHopf}). Finally, in Section~\ref{S:HopfCircle} we combine all the preliminary results and, using standard techniques for writing presentations of group extensions, we give a presentation for the group of ring motions $R_{1,1}$ of an H-trivial ring link of type~$(1, 1)$ in the main result of this paper~(Theorem~\ref{T:HopfCirclepresentation}). \section{Ring motions and motions of links} \label{S:generalities_motions} Let $M$ be a $3$-manifold in~$\Rr^3$. A link in $M$ is called a \emph{ring link} if each component is a ring. Two ring links $L$ and $L'$ in $M$ are \emph{equivalent} (as ring links in $M$) if there exists an isotopy $\{ L_t \}_{t \in [0,1]}$ through ring links $L_t$ in $M$ with $L_0=L$ and~$L_1=L'$. For a ring link $L$ in~$M$, let $\Ring(M, L)$ be the space of all configurations of ring links which are equivalent, as ring links in $M$, to $L$. This space has $L$ as base point. The \emph{ring group} of $L$ in~$M$, denoted by~$R(M,L)$, is the fundamental group~$\pi_1(\Ring(M, L))$.
Let $L_{m,n}$ be a ring link in $\Rr^3$ which is a \emph{split} union of $m$ Hopf links and $n$ trivial knots, namely, each Hopf link (and each trivial knot component) can be separated from the others by a convex region in~$\Rr^3$. The \emph{ring group} $R_{m,n}$ is the ring group $R(\Rr^3, L_{m,n})$ of~$L_{m,n}$, \ie, the fundamental group of the space of all configurations of ring links which are equivalent, as ring links, to~$L_{m,n}$. This group does not depend on the choice of a base point~$L_{m,n}$. A \emph{ring motion} of a ring link $L$ in $M$ is a loop in the based space~$\Ring(M, L)$, which is presented by a $1$-parameter family $\{L_t\}_{t \in [0,1]}$ of ring links in $M$ with~$L =L_0 = L_1$. The \emph{stationary motion} or the \emph{trivial motion} of $L$ is a ring motion $\{L_t\}_{t \in [0,1]}$ with $L=L_t$ for all $t \in [0,1]$. Two ring motions are said to be \emph{equivalent} (as ring motions) or \emph{homotopic} if they are homotopic through ring motions of $L$ in~$M$. The product of two ring motions is defined by concatenation. The set of equivalence classes of ring motions of $L$ in $M$ forms a group. This is, by definition, the ring group~$R(M, L)$. Ring groups are related to motion groups as introduced by Dahm~\cite{Dahm:Thesis} and Goldsmith~\cite{Goldsmith:MotionGroupsTrivial, Goldsmith:MotionGroupsTorus}. Let $M$ be a $3$-manifold and $L$ a link in~$M$. Roughly speaking, a \emph{motion} of $L$ in $M$ is a $1$-parameter family $\{L_t\}_{t \in [0,1]}$ of links in $M$ with $L =L_0 = L_1$ such that there exists an ambient isotopy $\{f_t\}_{t \in [0,1]}$ of $M$ with compact support and such that $L_t = f_t(L)$ for~$t \in [0,1]$. Two motions are said to be \emph{equivalent} (as motions) or \emph{homotopic} if they are homotopic through motions of $L$ in~$M$. The product of two motions is defined by concatenation.
The set of equivalence classes of motions of $L$ in $M$ forms a group, which is the \emph{motion group} of $L$ in $M$ and is denoted by~$\M(M, L)$. For a detailed treatment of motions and motion groups, we refer to Dahm~\cite{Dahm:Thesis} and Goldsmith~\cite{Goldsmith:MotionGroupsTrivial, Goldsmith:MotionGroupsTorus}. For a ring link $L$ in a $3$-manifold~$M \subset \Rr^3$, there is a natural homomorphism \[ R(M, L) \to \M(M, L). \] This map is an isomorphism when $M= \Rr^3$ and $L$ is a trivial ring link~{\cite[Theorem 1]{BrendleHatcher:2013}}, {\cite[Theorem~3.10]{Damiani:Journey}}. \section{The ring group and the motion group of a trivial link} \label{S:Motion} In this section we recall some known results about the group $R_{0, n}= R_n = R(\Rr^3, L) \cong \M(\Rr^3, L)$ of a trivial link $L$ with $n$ components. Let $L$ be a link in~$\Rr^3$. The \emph{Dahm homomorphism} is a well-defined homomorphism \[ D \colon \M( \Rr^3 , L) \longrightarrow \Aut(\pi_1(\Rr^3 \setminus L)), \] defined as follows. Let $\{ L_t \}_{t \in [0,1]}$ be a motion of $L$ in~$\Rr^3$, and $p$ a base point far from the motion. Let $A \subset \Rr^3 \times [0,1]$ be the annulus with $A \cap \Rr^3 \times \{t\} = L_t \times \{t\}$ for~$t \in [0,1]$. Consider the automorphism $(i_1)^{-1}_\ast \circ (i_0)_\ast \colon \pi_1(\Rr^3 \setminus L; p) \to \pi_1(\Rr^3 \setminus L; p)$, where~$i_k$, for $k=0,1$, is the inclusion map of $\Rr^3 \setminus L = (\Rr^3 \setminus L) \times \{k\}$ to~$\Rr^3 \times [0,1] \setminus A$. Then $D(\{ L_t \}_{t \in [0,1]})$ is defined to be this automorphism. The Dahm homomorphism is also defined on the ring group $R(\Rr^3, L)$, \[ D \colon R( \Rr^3 , L) \longrightarrow \Aut(\pi_1(\Rr^3 \setminus L)). \] Let~$\nn \geq 1$, and $C = C_1 \sqcup \cdots \sqcup C_\nn$ be a trivial (ring) link with $\nn$ components in~$\Rr^3$, with $C_i =\{ (x,y,0) \in \Rr^3 \mid (x - i)^2 + y^2 = (1/4)^2 \}$ for~$i=1, \dots, \nn$.
The fundamental group $\pi_1(\Rr^3 \setminus C)$ is the free group $\F\nn$ of rank~$\nn$ generated by $x_1, \dots, x_{\nn}$, where $x_i$ is the element represented by a positively oriented meridian loop of $C_i$ with respect to the counterclockwise orientation of~$C_i$. The following two results display some basic properties of the motion group $\M(\Rr^3, C)$ and the ring group $R_n$. These will clarify the relation between the two, and lead to a presentation for these groups. \begin{thm}[{\cite[Theorems~5.3 and 5.4]{Goldsmith:MotionGroupsTrivial}}] \label{T:Gold} \mbox{} \begin{enumerate}[label={(\arabic*)}] \item The Dahm homomorphism \[ D \colon \M(\Rr^3, C) \longrightarrow \Aut(\F\nn) \] is injective. \item The motion group $\M(\Rr^3, C)$ is generated by the following types of motions: \begin{itemize} \item Permute the $i$th and the $(i+1)$st rings by pulling the $i$th ring through the $(i+1)$st ring. \item Permute the $i$th and the $(i+1)$st rings by passing the $i$th ring around the $(i+1)$st ring. \item Reverse the orientation of the $i$th ring by rotating it by 180 degrees around the $x$-axis. \end{itemize} \item The above generators correspond to the following automorphisms of~$F_\nn$: \begin{align} \label{E:sigma} \sig\ii &: \begin{cases} x_\ii \mapsto x_{\ii+1}; &\\ x_{\ii+1} \mapsto x_{\ii+1}\inv x_\ii x_{\ii+1}; &\\ x_\jj \mapsto x_\jj, \ &\text{for} \ \jj \neq \ii, \ii+1. \end{cases} \\ \label{E:rho} \rr\ii &: \begin{cases} x_\ii \mapsto x_{\ii+1}; & \\ x_{\ii+1} \mapsto x_\ii; &\\ x_\jj \mapsto x_\jj, \ &\text{for} \ \jj \neq \ii, \ii+1. \end{cases} \\ \label{E:tau} \tau_\ii &: \begin{cases} x_\ii \mapsto x\inv_\ii; &\\ x_\jj \mapsto x_\jj, \ &\text{for} \ \jj \neq \ii.
\end{cases} \end{align} \item The image of the Dahm homomorphism, \ie, the subgroup of $\Aut(F_\nn)$ generated by the above automorphisms, is the group of automorphisms of $F_\nn$ of the form $\alpha \colon x_\ii \mapsto w_\ii\inv x^{\pm 1}_{\pi({\ii})} w_\ii$, where $\pi$ is a permutation of the indices and $w_\ii$ is a word in~$F_\nn$ (compare with the \emph{group of conjugating automorphisms}~\cite{Savushkina:1996}, also known as \emph{group of permutation-conjugacy automorphisms}~\cite{Suciu-Wang:2017}). \end{enumerate} \end{thm} \begin{thm}[{\cite[Theorem~1]{BrendleHatcher:2013}}] \label{T:RelaxingCircles} Let $\Ring_n$ be the configuration space of ring links which are equivalent to $C$ and let $\Link_n$ be the space of all smooth links equivalent to $C$. The inclusion of $\Ring_n$ into $\Link_n$ is a homotopy equivalence. \end{thm} Leaning on Theorem~\ref{T:RelaxingCircles} it is possible to show that there is a natural isomorphism between $R_n = R( \Rr^3, C) $ and $\M( \Rr^3 , C)$~\cite[Theorem~3.10]{Damiani:Journey}. Thus the statement of Theorem~\ref{T:Gold} holds for the ring group~$R_n$ too. \begin{rmk} Our notations $\sig\ii, \rr\ii, \tau_\ii$ for the motions and the automorphisms in Theorem~\ref{T:Gold} are different from those used in \cite{Goldsmith:MotionGroupsTrivial} or~\cite{BrendleHatcher:2013}. However, they coincide with the ones used in~\cite{Damiani:Journey}, where this group is called \emph{extended loop braid group~$\LBE\nn$}.
\end{rmk} \begin{prop}[{\cite[Theorem~3.7]{BrendleHatcher:2013}}] \label{P:BrendleHatcher} The group $\R\nn$ admits a presentation given by the sets of generators $\{\sig\ii, \rr\ii \mid \ii=1, \dots , \nno \}$ and $\{\tau_\ii \mid \ii=1, \dots , \nn \}$ subject to relations: \begin{equation} \label{E:presentation} \begin{cases} \sig{i} \sig j = \sig j \sig{i} \, &\text{for } \vert i-j\vert > 1\\ \sig{i} \sig {i+1} \sig{i} = \sig{i+1} \sig{i} \sig{i+1} \, &\text{for } i=1, \dots, \nn-2 \\ \rr{i} \rr j = \rr j \rr{i} \, &\text{for } \vert i-j\vert > 1\\ \rr\ii\rr{i+1}\rr\ii = \rr{i+1}\rr\ii\rr{i+1} \, &\text{for } i=1, \dots, \nn-2 \\ \rrq{i}2 =1 \, &\text{for } i=1, \dots, \nno \\ \rr{i} \sig{j} = \sig{j} \rr{i} \, &\text{for } \vert i-j\vert > 1\\ \rr{i+1} \rr{i} \sig{i+1} = \sig{i} \rr{i+1} \rr{i} \, &\text{for } i=1, \dots, \nn-2 \\ \sig{i+1} \sig{i} \rr{i+1} = \rr{i} \sig{i+1} \sig{i} \, &\text{for } i=1, \dots, \nn-2 \\ \tau_{i} \tau_j = \tau_j \tau_{i} \, &\text{for } i \neq j \\ \tau_\ii^2=1 \, &\text{for } i=1, \dots, \nn \\ \sig{i} \tau_j = \tau_j \sig{i} \, &\text{for } \vert i-j\vert > 1 \\ \rr{i} \tau_j = \tau_j \rr{i} \, &\text{for } \vert i-j\vert > 1 \\ \tau_i \rr\ii = \rr\ii \tau_{i+1} \, &\text{for } i=1, \dots, \nno \\ \tau_\ii \sig\ii = \sig\ii \tau_{i+1} \, &\text{for } i=1, \dots, \nno \\ \tau_{i+1} \sig\ii = \rr\ii \siginv\ii \rr\ii \tau_\ii \, &\text{for } i=1, \dots, \nno. \\ \end{cases} \end{equation} \end{prop} \section{Extensions and projections} \label{S:Ring} Let $L_1$ and $L_2$ be ring links in a $3$-manifold $M \subset \Rr^3$ with~$L_1 \cap L_2 = \emptyset$. We say that a ring motion $ \{ L_{1(t)} \}_{t \in [0,1]}$ of $L_1$ in $M$ and a ring motion $ \{ L_{2(t)} \}_{t \in [0,1]}$ of $L_2$ in $M$ are \emph{disjoint} if $L_{1(t)} \cap L_{2(t)} = \emptyset$ for all~$t \in [0,1]$. In this case, $\{ L_{1(t)} \sqcup L_{2(t)} \}_{t \in [0,1]}$ is a ring motion of $L_1 \sqcup L_2$ in~$M$. 
We denote this ring motion by $\{ L_{1(t)} \}_{t \in [0,1]} \sqcup \{ L_{2(t)} \}_{t \in [0,1]}$ and call it the \emph{union} of the motions $ \{ L_{1(t)} \}_{t \in [0,1]}$ and~$ \{ L_{2(t)} \}_{t \in [0,1]}$. We denote by $R(M, L_1, L_2)$ the subgroup of the ring group $R(M, L_1 \sqcup L_2)$ consisting of equivalence classes of ring motions which can be written as the union of a motion of $L_1$ and a motion of~$L_2$. It is a subgroup of index two if and only if there exists a ring motion of $L_1 \sqcup L_2$ in $M$ which interchanges $L_1$ and~$L_2$. Otherwise, $R(M, L_1, L_2) = R(M, L_1 \sqcup L_2)$. For a ring motion $\{ L_{2(t)}\}_{t \in [0,1]}$ of $L_2$ in~$M \setminus L_1$, we have a ring motion $\{ L_1 \}_{t \in [0,1]} \sqcup \{ L_{2(t)}\}_{t \in [0,1]}$ of $L_1 \sqcup L_2$ in~$M$, where $\{ L_1 \}_{t \in [0,1]}$ is the stationary motion of~$L_1$. We call it the \emph{extension} of $\{ L_{2(t)}\}_{t \in [0,1]}$ with~$L_1$, and we denote it by~$e(\{ L_{2(t)}\}_{t \in [0,1]})$. We have a well-defined homomorphism \[ e \colon R(M \setminus L_1, L_2) \longrightarrow R(M, L_1, L_2) \] with $e([ \{ L_{2(t)}\}_{t \in [0,1]} ] ) = [e(\{ L_{2(t)}\}_{t \in [0,1]}) ]$. Let \[ p_1 \colon R(M, L_1, L_2) \longrightarrow R(M, L_1) \] be the homomorphism sending $[ \{ L_{1(t)} \}_{t \in [0,1]} \sqcup \{ L_{2(t)} \}_{t \in [0,1]} ]$ to $ [ \{ L_{1(t)} \}_{t \in [0,1]} ]$. \begin{prop} \label{P:ExactSequenceRing} Let $L_1$ and $L_2$ be disjoint ring links in~$M \subset \Rr^3$. Consider the composition of $e$ and~$p_1$: \begin{equation} \label{E:epRing} \begin{CD} R(M \setminus L_1, L_2) @>e>> R(M, L_1, L_2) @>p_1>> R(M, L_1). \end{CD} \end{equation} Then ${\mathrm Im}~{e} \subset {\mathrm Ker}~{p_1}$. \end{prop} \proof This follows directly from the definitions of the maps $e$ and~$p_1$. \endproof Although sequence~\eqref{E:epRing} appears to be exact in many cases, only a few examples are known to the authors at present.
For example, sequence~\eqref{E:epRing} is exact in the case of trivial links, by \cite{BrendleHatcher:2013}, and in the cases which we discuss in this paper. \begin{rmk} \label{P:ExactSequence} Let $L_1$ and $L_2$ be disjoint links in a $3$-manifold~$M$. It is known~{\cite[Proposition~3.10]{Goldsmith:MotionGroupsTrivial}} that the following sequence on motion groups is exact: \[ \begin{CD} \M(M \setminus L_1, L_2) @>e>> \M(M, L_1, L_2) @>p_1>> \M(M, L_1). \end{CD} \] \end{rmk} \section{The ring group of a ring} \label{S:Circle} First we consider the ring group of a single ring $C$ in~$\Rr^3$. Let $C$ be the unit ring~$\{ (x, y, 0) \in \Rr^3 \mid x^2 + y^2 =1 \}$. In \cite{Goldsmith:MotionGroupsTrivial} and \cite{BrendleHatcher:2013} it is shown that the ring group $R(\Rr^3, C)$ and the motion group $\M(\Rr^3, C)$ are cyclic groups of order $2$ generated by the class of a ring motion of $C$ rotating it $180$ degrees about the $y$-axis. Let $R_x(\varphi), R_y(\varphi), R_z(\varphi)$ denote (counterclockwise) rotations of $\Rr^3$ about the $x$-axis, the $y$-axis and the $z$-axis by angle~$\varphi$, respectively. These are identified with special orthogonal matrices as follows: \[ R_x(\varphi)= \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \varphi & -\sin \varphi \\ 0 & \sin \varphi & \cos \varphi \end{array} \right), \quad R_y(\varphi)= \left( \begin{array}{ccc} \cos \varphi & 0 & \sin \varphi \\ 0 & 1 & 0 \\ -\sin \varphi & 0 & \cos \varphi \end{array} \right) \] \[ R_z(\varphi)= \left( \begin{array}{ccc} \cos \varphi & -\sin \varphi & 0 \\ \sin \varphi & \cos \varphi & 0 \\ 0 & 0 & 1 \end{array} \right). \] Let $\tau_C $ be the element of $R(\Rr^3, C)$ represented by a ring motion $\{ R_y(\pi t)(C) \}_{t \in [0,1]}$, \ie, the $180$ degrees rotation about the $y$-axis.
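At the level of the matrices above, one can check numerically that $R_y(\pi)$ maps the unit ring $C$ to itself and that $R_y(\pi)^2$ is the identity, so the motion representing $\tau_C$ does return $C$ to its initial position. This is only a sanity check on the representing rotations (the nontriviality of $\tau_C$ is a statement about fundamental groups, which the matrices alone cannot detect); a minimal Python sketch:

```python
import math

def R_y(phi):
    """Rotation matrix about the y-axis by angle phi."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_vec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(3)) for i in range(3))

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Sample points on the unit ring C = {(x, y, 0) : x^2 + y^2 = 1}.
ring = [(math.cos(t), math.sin(t), 0.0)
        for t in [2 * math.pi * k / 12 for k in range(12)]]

# R_y(pi) sends (x, y, 0) to (-x, y, 0): each image lies on C again.
for p in ring:
    x, y, z = mat_vec(R_y(math.pi), p)
    assert abs(x * x + y * y - 1.0) < 1e-12 and abs(z) < 1e-12

# R_y(pi)^2 is the identity matrix, so the rotation itself is an
# involution; the class of tau_C in pi_1 is still nontrivial.
sq = mat_mul(R_y(math.pi), R_y(math.pi))
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
assert all(abs(sq[i][j] - identity[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

That $\tau_C^2 = 1$ while $\tau_C \neq 1$ in the ring group is established by the Grassmann manifold argument that follows.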
\begin{lem}[{\cite[Theorem~3.7]{BrendleHatcher:2013}}, {\cite[Theorem~5.3]{Goldsmith:MotionGroupsTrivial}}] \label{L:RingCircle} The ring group $R(\Rr^3, C)$, which is isomorphic to the motion group $\M(\Rr^3, C)$, admits the presentation \[ \langle \tau_C \mid \tau_C^2 = 1 \rangle. \] \end{lem} \proof We only show the case of~$R(\Rr^3, C)$. Any ring $L$ in $\Rr^3$ is determined uniquely by the position of the center~$c(L) \in \Rr^3$, the radius~$r(L) \in \Rr_{>0}$, and an element $g(L)$ of the Grassmann manifold $G(2,3)$ of unoriented $2$-planes through the origin $O$ in~$\Rr^3$, which is obtained from the plane $H(L)$ containing $L$ by sliding it along the vector~$\overrightarrow{c(L)O}$. Thus the space of rings in $\Rr^3$ is identified with $\Rr^3 \times \Rr_{>0} \times G(2,3)$. There is a deformation retraction onto the subspace~$\{O\} \times \{1\} \times G(2,3) \cong G(2,3)$. The fundamental group of $G(2,3)$ is a cyclic group of order $2$ generated by a loop which rotates the $xy$-plane by $180$ degrees about the $y$-axis. This loop corresponds to~$\tau_C \in R(\Rr^3, C)$. \endproof The proof above suggests a strategy to deform a ring motion to a \lq\lq standard\rq\rq~ ring motion, which is used later for the case of a Hopf link. \section{The ring group of a Hopf link} \label{S:Hopf} Let $H_1$ and $H_2$ be unit rings in $\Rr^3$ with $H_1 = \{ (x, y, 0) \in \Rr^3 \mid x^2 + y^2 =1 \}$ and~$H_2 = \{ (0, y, z) \in \Rr^3 \mid (y-1)^2 + z^2 =1 \}$. The \emph{positive standard rotation} of $H_2$ along $H_1$ is a ring motion $\{ R_z(2\pi t)(H_2) \}_{t \in [0,1]}$ of $H_2$ in $\Rr^3 \setminus H_1$ or in~$\Rr^3$, and the \emph{negative standard rotation} of $H_2$ along $H_1$ is a ring motion~$\{ R_z(-2\pi t)(H_2) \}_{t \in [0,1]}$. \begin{lem} \label{L:MH2} The ring group $R(\Rr^3 \setminus H_1, H_2)$ admits the presentation \[ \langle \ell \mid ~~~ \rangle, \] where $\ell$ is represented by the positive standard rotation of $H_2$ along~$H_1$.
\end{lem} First we introduce the \emph{rotation number} of a ring motion of $H_2$ in $\Rr^3 \setminus H_1$, so that we obtain a homomorphism ${\mathrm rot}: R(\Rr^3 \setminus H_1, H_2) \to \Zz$ with~${\mathrm rot}(\ell)=1$. Given an orientation on~$H_2$, note that $H_2$ always comes back to itself with the same orientation after any ring motion of $H_2$ in~$\Rr^3 \setminus H_1$. Thus, a ring motion of $H_2$ in $\Rr^3 \setminus H_1$ induces a continuous map $H_2 \times S^1 \to \Rr^3 \setminus H_1$, and hence a homomorphism $H_2(H_2 \times S^1; \Zz) \to H_2(\Rr^3 \setminus H_1; \Zz)$ on the $2$nd homology groups. We call it the homomorphism on the $2$nd homology groups induced from the motion of~$H_2$. Note that if two ring motions are homotopic as ring motions then their induced homomorphisms are the same. Note that $H_2(H_2 \times S^1; \Zz) \cong \Zz$ and $H_2(\Rr^3 \setminus H_1; \Zz) \cong \Zz$ and the homomorphism induced from the positive standard rotation of $H_2$ along $H_1$ sends a generator to a generator. Choose generators of $H_2(H_2 \times S^1; \Zz) \cong \Zz$ and $H_2(\Rr^3 \setminus H_1; \Zz) \cong \Zz$ so that the homomorphism induced from the positive standard rotation of $H_2$ along $H_1$ sends $1 \in \Zz$ to~$1 \in \Zz$. The \emph{rotation number} of the motion is the integer which is the image of $1$ under the induced homomorphism on the $2$nd homology groups. It yields the desired homomorphism ${\mathrm rot}: R(\Rr^3 \setminus H_1, H_2) \to \Zz$ with~${\mathrm rot}(\ell)=1$. We call a ring motion $\{ L_t \}_{t \in [0,1]}$ of $H_2$ in $\Rr^3 \setminus H_1$ a \emph{normal} ring motion if there is a continuous map $\phi: [0,1] \to \Rr$ such that $L_t= R_z(2 \pi \phi(t))(H_2)$ for all~$t \in [0,1]$. For a normal ring motion, $\phi(1) - \phi(0) \in \Zz$ is the rotation number. We have that the equivalence class of a normal ring motion is~$\ell^{\phi(1) - \phi(0)} \in R(\Rr^3 \setminus H_1, H_2)$.
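For a normal ring motion the rotation number can also be recovered numerically by tracking the center of $H_2$: under $L_t = R_z(2\pi\phi(t))(H_2)$ the center $(0,1,0)$ traces the planar path $(-\sin 2\pi\phi(t), \cos 2\pi\phi(t))$, whose winding number around the origin is $\phi(1)-\phi(0)$. The sketch below is illustrative only; the test motion $\phi(t)=2t$ and the sampling resolution are our own choices, not taken from the text.

```python
import math

def winding_number(points):
    """Winding number of a closed planar path around the origin,
    computed by summing unwrapped angle increments."""
    angles = [math.atan2(y, x) for x, y in points]
    total = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        while d <= -math.pi:   # unwrap the jump into (-pi, pi]
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

# Normal motion L_t = R_z(2*pi*phi(t))(H_2) with phi(t) = 2t, i.e. the
# square of the positive standard rotation: the center (0, 1, 0) of H_2
# moves to (-sin(2*pi*phi(t)), cos(2*pi*phi(t)), 0).
N = 400
centers = [(-math.sin(4 * math.pi * k / N), math.cos(4 * math.pi * k / N))
           for k in range(N + 1)]
assert winding_number(centers) == 2   # rot(ell^2) = 2
```

The sampling must be fine enough that consecutive angle increments stay below $\pi$, so that the unwrapping step is unambiguous.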
\proof[Proof of Lemma~\ref{L:MH2}] It is sufficient to show that $R(\Rr^3 \setminus H_1, H_2)$ is generated by~$\ell$, by using the rotation number. Let $\{ L_t \}_{t \in [0,1]}$ be a ring motion of $H_2$ in~$\Rr^3 \setminus H_1$. We give $H_2$ the orientation induced from the $yz$-plane. We can give an orientation to $L_t$ which is induced from the orientation of~$H_2$. Let $c(L_t) \in \Rr^3$ be the center of~$L_t$, $r(L_t) \in \Rr_{>0}$ the radius, and $g^+(L_t)$ an element of the Grassmann manifold $G^+(2,3)$ of oriented $2$-planes through the origin $O$ in~$\Rr^3$, which is obtained from the oriented plane $H(L_t)$ containing $L_t$ by sliding it along the vector~$\overrightarrow{c(L_t)O}$. Let $D(L_t)$ be the oriented disk in $\Rr^3$ bounded by $L_t$ in the plane $H(L_t)$ and let $d(L_t)$ be the intersection point~$D(L_t) \cap H_1$. Give $H_1$ an orientation induced from the $xy$-plane. Since each disk $D(L_t)$ intersects $H_1$ at $d(L_t)$ in the positive direction, we can deform, up to homotopy as ring motions, the ring motion so that the normal vector of $D(L_t)$ at $d(L_t)$ is the positive unit tangent vector of~$H_1$. Then each $H(L_t)$ becomes a $2$-plane in $\Rr^3$ containing the $z$-axis. Next, we deform the ring motion so that the radius $r(L_t)$ is $1$ for all~$t \in [0,1]$. Finally, we deform the ring motion so that the center $c(L_t)$ is the intersection point~$d(L_t)$. Now we see that any ring motion is homotopic as ring motions to a normal ring motion. This implies that $R(\Rr^3 \setminus H_1, H_2)$ is generated by~$\ell$. \endproof Now we discuss the ring group~$R(\Rr^3, H_1, H_2)$. Let $H$ denote the Hopf link~$H_1 \sqcup H_2$. Let $\tau_H \in R(\Rr^3, H_1, H_2)$ be represented by~$\{ R_y(\pi t)(H) \}_{t \in [0,1]}$, \ie, the rotation of $180$ degrees about the $y$-axis and let $\ell \in R(\Rr^3, H_1, H_2)$ be represented by~$\{ R_z(2 \pi t)(H_2) \}_{t \in [0,1]}$, \ie, the positive standard rotation of $H_2$ along~$H_1$.
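The elements $\tau_H$ and $\ell$ just defined are represented by loops of rotations, so their interplay is governed by $\pi_1(SO(3)) \cong \Zz_2$. One concrete way to probe this is to lift the loops to paths of unit quaternions via the standard double cover $S^3 \to SO(3)$, $R_u(\theta) \mapsto \cos(\theta/2) + u\sin(\theta/2)$: the lift of $\{R_y(\pi t)\}$ ends at $j$, the lift of $\{R_z(2\pi t)\}$ ends at $-1$, and endpoints of concatenated lifts multiply as quaternions. The following Python sketch is a consistency check of this lifting picture, not part of the proofs:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as tuples (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

# Endpoint of the lift of t -> R_y(pi*t): cos(pi/2) + j*sin(pi/2) = j.
tau_lift = (0.0, 0.0, 1.0, 0.0)
# Endpoint of the lift of t -> R_z(2*pi*t): cos(pi) + k*sin(pi) = -1.
ell_lift = (-1.0, 0.0, 0.0, 0.0)

# Squaring the loop of tau_H multiplies endpoints: j * j = -1, the
# endpoint of the lift of ell.
assert qmul(tau_lift, tau_lift) == ell_lift
# The fourth power closes up in the double cover: (-1) * (-1) = 1.
assert qmul(ell_lift, ell_lift) == (1.0, 0.0, 0.0, 0.0)
```

This is consistent with the relations $\tau_H^2 = \ell$ and $\tau_H^4 = \ell^2 = 1$ established in Lemma~\ref{L:Order} below.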
\begin{lem} \label{L:Order} In the ring group~$R(\Rr^3, H_1, H_2)$, we have the following. \begin{enumerate}[label={(\arabic*)}] \item \label{I:first} $\tau_H^2 = \ell$ and~$\tau_H^4 = \ell^2 = 1$. \item \label{I:second} $\tau_H \ell \tau_H^{-1} = \ell^{-1}$. \item \label{I:third} The order of $\tau_H$ is $4$ and the order of $\ell$ is~$2$. \end{enumerate} \end{lem} \proof \ref{I:first} Let $f_{\tau_H}\colon [0,1] \to SO(3)$ and $f_{\ell}\colon [0,1] \to SO(3)$ be maps defined by \[ f_{\tau_H}(t) = R_y(\pi t) \in SO(3) \quad \mbox{and} \quad f_{\ell}(t) = R_z( 2 \pi t) \in SO(3). \] Then $[ f_{\tau_H} \ast f_{\tau_H} ] = [ f_{\ell}] = -1 $ in~$\pi_1(SO(3)) = \{ 1, -1\}$. This implies that $\tau_H^2 = \ell$ and $\tau_H^4 = \ell^2 = 1$ in~$R(\Rr^3, H_1, H_2)$. \ref{I:second} $[ f_{\tau_H} \ast f_{\ell} \ast f_{\tau_H}^{-1} ] = [ f_{\ell}^{-1}] = -1 $ in~$\pi_1(SO(3)) = \{ 1, -1\}$. \ref{I:third} Consider the image of $\tau_H$ in the motion group $\M(\Rr^3, H)$ under the homomorphisms $R(\Rr^3, H_1, H_2) \to R(\Rr^3, H) \to \M(\Rr^3, H)$. By using the double linking number defined in~\cite{Carter-Kamada-Saito-Satoh:2002}, it can be seen that the order of $\tau_H$ in $\M(\Rr^3, H)$ is~$4$. Thus the order of $\tau_H$ in $R(\Rr^3, H_1, H_2)$ is~$4$. By \ref{I:first}, the order of $\ell$ is $2$. \endproof \begin{lem} \label{L:ExactSequenceOrderedHopf} Let $H_1$ and $H_2$ be the unit rings as above. Consider the composition of $e$ and~$p_1$: \begin{equation} \label{E:epOrderedHopf} \begin{CD} R(\Rr^3 \setminus H_1, H_2) @>e>> R(\Rr^3, H_1, H_2) @>p_1>> R(\Rr^3, H_1). \end{CD} \end{equation} \begin{enumerate}[label={(\arabic*)}] \item \label{I:ExactFirst} The sequence \eqref{E:epOrderedHopf} is exact. \item \label{I:ExactSecond} The map $p_1$ is surjective. \end{enumerate} \end{lem} \proof \ref{I:ExactFirst} By Proposition~\ref{P:ExactSequenceRing}, we have that ${\mathrm Im}~{e} \subset {\mathrm Ker}~{p_1}$. We show that ${\mathrm Ker}~{p_1} \subset {\mathrm Im}~{e}$.
Let $[ \{ L_{1(t)} \}_{t \in [0,1]} \sqcup \{ L_{2(t)} \}_{t \in [0,1]} ]$ belong to~${\mathrm Ker}~{p_1}$. Then $[ \{ L_{1(t)} \}_{t \in [0,1]}] =1$ in~$R(\Rr^3, H_1)$. Thus the ring motion $\{ L_{1(t)} \}_{t \in [0,1]}$ is homotopic to the stationary motion $\{ H_1 \}_{t \in [0,1]}$ of~$H_1$. To obtain such a homotopy, we use the strategy in the proof of Lemma~\ref{L:RingCircle}. Namely, first we change the ring motion $\{ L_{1(t)} \}_{t \in [0,1]}$ so that the center $c(L_{1(t)})$ of the ring $L_{1(t)}$ is the origin $O$ for every~$t \in [0,1]$, then change the radius~$r(L_{1(t)})$, and change the element $g(L_{1(t)})$ of the Grassmann manifold~$G(2,3)$. This procedure may change $\{ L_{2(t)} \}_{t \in [0,1]}$ by a homotopy keeping $L_{2(t)}$ a ring for every~$t$. Thus the ring motion $ \{ L_{1(t)} \}_{t \in [0,1]} \sqcup \{ L_{2(t)} \}_{t \in [0,1]} $ is equivalent to a motion which is the union of the stationary motion of $H_1$ and a ring motion of~$H_2$. Therefore,~${\mathrm Ker}~{p_1} \subset {\mathrm Im}~{e}$. \ref{I:ExactSecond} By Lemma~\ref{L:RingCircle}, the ring group $R(\Rr^3, H_1)$ is generated by~$\tau_{H_1}$. The map $p_1$ sends $\tau_H \in R(\Rr^3, H_1, H_2)$ to~$\tau_{H_1} \in R(\Rr^3, H_1)$. Thus $p_1$ is surjective. \endproof \begin{thm} \label{T:PresentationOrderedHopf} The ring group $R(\Rr^3, H_1, H_2)$ of the ordered Hopf link $H= H_1 \sqcup H_2$ admits the presentation \begin{equation} \label{E:OrderHopf} \langle \tau_H \mid \tau_H^4 =1 \rangle. \end{equation} \end{thm} \proof By Lemma~\ref{L:ExactSequenceOrderedHopf}, we have a short exact sequence: \begin{equation} \label{E:OrderHopfA} \begin{CD} 1 \longrightarrow e(R(\Rr^3 \setminus H_1, H_2)) @> \iota >> R(\Rr^3, H_1, H_2) @>p_1>> R(\Rr^3, H_1) \longrightarrow 1.
\end{CD} \end{equation} Since $R(\Rr^3 \setminus H_1, H_2)$ is generated by $\ell \in R(\Rr^3 \setminus H_1, H_2)$ (Lemma~\ref{L:MH2}), the image $e(R(\Rr^3 \setminus H_1, H_2))$ is generated by~$\ell \in R(\Rr^3, H_1, H_2)$. By Lemma~\ref{L:Order} the order of $\ell \in R(\Rr^3, H_1, H_2)$ is~$2$. Thus, we have \begin{equation} \label{E:OrderHopfB} e(R(\Rr^3 \setminus H_1, H_2)) = \langle \ell \mid \ell^2 =1 \rangle. \end{equation} By Lemma~\ref{L:RingCircle}, we have \begin{equation} \label{E:OrderHopfC} R(\Rr^3, H_1) = \langle \tau_{H_1} \mid \tau_{H_1}^2 = 1 \rangle. \end{equation} We choose $\tau_H$ as a set-theoretical lift of $\tau_{H_1}$ under~$p_1$. By Lemma~\ref{L:Order}, we have \begin{equation} \label{E:OrderHopfD} \tau_H^2 = \ell \quad \mbox{and} \quad \tau_H \ell \tau_H^{-1} = \ell^{-1}. \end{equation} Using the short exact sequence~\eqref{E:OrderHopfA}, presentations \eqref{E:OrderHopfB} and~\eqref{E:OrderHopfC}, and relations~\eqref{E:OrderHopfD}, and applying a standard method to give presentations for group extensions~\cite[Chapter~10]{Johnson:1997}, we have that \begin{equation} \label{E:OrderHopfE} R(\Rr^3, H_1, H_2) = \langle \ell, \tau_H \mid \ell^2 =1, \tau_H^2 = \ell, \tau_H \ell \tau_H^{-1} = \ell^{-1} \rangle, \end{equation} which reduces to the desired presentation~\eqref{E:OrderHopf}. \endproof Now we discuss the ring group~$R(\Rr^3, H)$. Let $e_2$ be the unit vector $(0,1,0)$ in~$\Rr^3$. We consider an element $s \in R(\Rr^3, H)$ which interchanges $H_1$ and~$H_2$, represented by the ring motion realized by a sequence of isometries of $\Rr^3$ as follows: first slide $H$ along~$(-1/2)e_2$, apply the rotation by $45$ degrees about the $y$-axis, apply the rotation by $180$ degrees about the $x$-axis, apply the rotation by $-45$ degrees about the $y$-axis, and slide along $(1/2)e_2$.
(This ring motion is equivalent to the following ring motion: first slide $H$ along $-e_2$, apply the rotation by $180$ degrees about the $x$-axis, and then apply the rotation by $-90$ degrees about the $y$-axis.) \begin{lem} \label{L:PresentationHopf} In the group $R(\Rr^3, H)$, the following relations are satisfied. \begin{equation} \label{E:HopfA} s^2 = \tau_H^2 \quad \mbox{and} \quad s \tau_H s^{-1} = \tau_H^{-1} \quad \in R(\Rr^3, H). \end{equation} \end{lem} \proof We have $s^2 = \ell$ in~$R(\Rr^3, H)$ (this is easily seen by drawing link diagrams in the $yz$-plane). Thus, by Lemma~\ref{L:Order}, we have~$s^2 = \tau_H^2$. By the sequence of isometries of $\Rr^3$ in the definition of~$s$, the $y$-axis is sent to itself with reversed orientation. Since $\tau_H$ is a motion of $H$ realized by the rotation of $180$ degrees about the $y$-axis, we have~$s \tau_H s^{-1} = \tau_H^{-1}$. \endproof \begin{thm} \label{T:Hopfpresentation} The ring group $R(\Rr^3, H)$ of the Hopf link admits the presentation \begin{equation} \label{E:HopfB} \langle \tau_H, s \mid \tau_H^4=1, ~ s^2 =\tau_H^2, ~ s \tau_H s^{-1} = \tau_H^{-1} \rangle. \end{equation} \end{thm} Note that presentation \eqref{E:HopfB} can be rewritten as \begin{equation} \label{E:HopfBQuat} \langle \tau_H, s \mid \tau_H^2 = s^2 = (\tau_H s)^2 \rangle, \end{equation} which is a well-known presentation of the \emph{quaternion group}. \proof The ring group $R(\Rr^3, H_1, H_2)$ is a subgroup of $R(\Rr^3, H)$ of index~$2$; consider the short exact sequence \begin{equation} \label{E:Hopfexact} 1 \longrightarrow R(\Rr^3, H_1, H_2) \longrightarrow R(\Rr^3, H) \longrightarrow \Zz_2 \longrightarrow 1. \end{equation} As a set-theoretical lift of the generator of $\Zz_2$ under $R(\Rr^3, H) \to \Zz_2$, we choose $s \in R(\Rr^3, H)$.
Using the short exact sequence \eqref{E:Hopfexact}, presentation \eqref{E:OrderHopf} of~$R(\Rr^3, H_1, H_2)$, and relations \eqref{E:HopfA}, we have that $R(\Rr^3, H)$ admits the desired presentation~\eqref{E:HopfB}. \endproof \section{The ring group of a Hopf link with a ring} \label{S:HopfCircle} In this section we discuss the ring group of an H-trivial link of type~$(1,1)$, \ie, the split union of a Hopf link and a ring. Let $H = H_1 \sqcup H_2$ be a Hopf link and $C$ a ring with~$H_1 = \{ (x, y, 0) \in \Rr^3 \mid x^2 + y^2 =1 \}$, $H_2 = \{ (0, y, z) \in \Rr^3 \mid (y-1)^2 + z^2 =1 \}$ and $C = \{ (x, y, 0) \in \Rr^3 \mid x^2 + (y-5)^2 =1\}$. \subsection{An exact sequence for \texorpdfstring{$R(\Rr^3, H, C)$}{}} \begin{lem} \label{L:ExactSequenceHopfCircle} Let $H$ and $C$ be as above. Consider the composition of $e$ and~$p_1$: \begin{equation} \label{E:epHopfCircle} \begin{CD} R(\Rr^3 \setminus H, C) @>e>> R(\Rr^3, H, C) @>p_1>> R(\Rr^3, H). \end{CD} \end{equation} \begin{enumerate}[label={(\arabic*)}] \item \label{I:ExactHopfCircleFirst} The sequence \eqref{E:epHopfCircle} is exact. \item \label{I:ExactHopfCircleSecond} The map $p_1$ is surjective. \end{enumerate} \end{lem} \proof \ref{I:ExactHopfCircleFirst} By Proposition~\ref{P:ExactSequenceRing}, we have that ${\mathrm Im}~{e} \subset {\mathrm Ker}~{p_1}$. We show that ${\mathrm Ker}~{p_1} \subset {\mathrm Im}~{e}$. Let $[ \{ L_{1(t)} \}_{t \in [0,1]} \sqcup \{ L_{2(t)} \}_{t \in [0,1]} ]$ belong to ${\mathrm Ker}~{p_1}$. Then $[ \{ L_{1(t)} \}_{t \in [0,1]}] =1$ in $R(\Rr^3, H)$. Thus the ring motion $\{ L_{1(t)} \}_{t \in [0,1]}$ is homotopic to the stationary motion $\{ H \}_{t \in [0,1]}$ of $H$. We show that $\{ L_{1(t)} \}_{t \in [0,1]} \sqcup \{ L_{2(t)} \}_{t \in [0,1]}$ is homotopic as ring motions to the union of the stationary motion of $H$ and a ring motion of $C$.
\begin{step} First deform the ring motion $\{ L_{1(t)} \}_{t \in [0,1]}$ of $H$ and the motion $\{ L_{2(t)} \}_{t \in [0,1]}$ of $C$ in such a way that the restriction to $H_1$ becomes a stationary motion of $H_1$ keeping the condition that the new $\{ L_{1(t)} \}_{t \in [0,1]}$ and $\{ L_{2(t)} \}_{t \in [0,1]}$ are disjoint ring motions. This is done by the strategy used in the proof of Lemma~\ref{L:RingCircle} to deform the motion of $H_1$ to the stationary motion. (Recall the proof of Lemma~\ref{L:ExactSequenceOrderedHopf}.) Now we may assume that the restriction of $\{ L_{1(t)} \}_{t \in [0,1]}$ to $H_1$ is the stationary motion. The restriction of $\{ L_{1(t)} \}_{t \in [0,1]}$ to $H_2$ is a ring motion of $H_2$ in~$\Rr^3 \setminus H_1$. \end{step} \begin{step} Secondly, deform the ring motion $\{ L_{1(t)} \}_{t \in [0,1]}$ of $H$ and the motion $\{ L_{2(t)} \}_{t \in [0,1]}$ of $C$ so that the restriction to $H$ becomes the stationary motion of $H$ keeping the condition that the new $\{ L_{1(t)} \}_{t \in [0,1]}$ and $\{ L_{2(t)} \}_{t \in [0,1]}$ are disjoint ring motions. This is done as follows: Since the restriction of $\{ L_{1(t)} \}_{t \in [0,1]}$ to $H_2$ is a ring motion of $H_2$ in~$\Rr^3 \setminus H_1$, it is homotopic to a power of the positive or negative standard rotation of $H_2$ along $H_1$ by the argument in the proof of Lemma~\ref{L:MH2}. During the homotopy for~$\{ L_{1(t)} \}_{t \in [0,1]}$, we may deform $\{ L_{2(t)} \}_{t \in [0,1]}$ keeping the condition that it is a ring motion. Now, $\{ L_{1(t)} \}_{t \in [0,1]} \sqcup \{ L_{2(t)} \}_{t \in [0,1]}$ satisfies that $\{ L_{1(t)} \}_{t \in [0,1]}$ is the stationary motion of $H$ and $\{ L_{2(t)} \}_{t \in [0,1]}$ is a ring motion of $C$ in $\Rr^3 \setminus H$. Thus it represents an element of the image of $e: R(\Rr^3 \setminus H, C) \to R(\Rr^3, H, C)$. \end{step} \ref{I:ExactHopfCircleSecond} By Theorem~\ref{T:Hopfpresentation}, the ring group $R(\Rr^3, H)$ is generated by $\tau_H$ and~$s$. Let $\tilde\tau_H$ (resp.~$\tilde s$) be the element of $R(\Rr^3, H, C)$ which is the union of $\tau_H$ (resp.~$s$) and the stationary motion of~$C$. Then $p_1 (\tilde\tau_H) = \tau_H$ and~$p_1(\tilde s) =s$. Thus $p_1$ is surjective. \endproof Later, in Lemma~\ref{L:ExactSequenceHopfCircleB}, we will see that sequence \eqref{E:epHopfCircle} induces a short exact sequence that will allow us to use once more the standard method to write presentations of group extensions. \subsection{On the ring group \texorpdfstring{$R(\Rr^3 \setminus H, C)$}{}} Let $H= H_1 \sqcup H_2$ be the Hopf link and $C$ the ring disjoint from~$H$ as before. Let us choose a base point for the fundamental group~$\pi_1(\Rr^3 \setminus (H \sqcup C))$ in such a way that the $z$-coordinate is sufficiently large. Let~$a$, $b$ and $c$ be elements of $\pi_1(\Rr^3 \setminus (H \sqcup C))$ represented by meridian loops of~$H_1$,~$H_2$, and~$C$, respectively. We assume that these meridian loops are oriented such that the linking number is $+1$ when we give~$H_1$,~$H_2$, and $C$ orientations induced from the $xy$-plane and the $yz$-plane, see Figure~\ref{F:HunionC}. The fundamental group is the free product of the free abelian group of rank $2$ generated by $a$ and $b$ and the infinite cyclic group generated by~$c$: \[ \pi_1(\Rr^3 \setminus (H \sqcup C)) = \langle a, b, c \mid [a, b]=1 \rangle \cong (\Zz \oplus \Zz) \ast \Zz. \] \begin{figure}[ht] \centering \includegraphics[scale=.8]{Generators_Fig1.pdf} \caption{Generators of~$\pi_1(\Rr^3 \setminus (H \sqcup C))$.
} \label{F:HunionC} \end{figure} Let us introduce some ring motions: \begin{itemize} \item $g_a$: $C$ pulls through~$H_1$, see Figure~\ref{F:g_a}; \item $g_b$: $C$ pulls through~$H_2$, see Figure~\ref{F:g_b}; \item $\tau_C$: $C$ rotates by~$180$ degrees around the $y$-axis, as in Section~\ref{S:Circle}; \item $\varepsilon_C$: $C$ translates above~$H$, slides downwards encircling~$H$, and then translates back to its original position, see Figure~\ref{F:epsilon_C}. \end{itemize} \begin{figure}[hbtp] \centering \includegraphics[scale=.5]{g_a.pdf} \caption{The ring motion~$g_a$. } \label{F:g_a} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[scale=.5]{g_b.pdf} \caption{The ring motion~$g_b$. } \label{F:g_b} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[scale=.4]{EpsilonC.pdf} \caption{The ring motion~$\varepsilon_C$. } \label{F:epsilon_C} \end{figure} \begin{lem} \label{L:RingGroupFirstHCgenerator} The ring group $R(\Rr^3 \setminus H, C)$ is generated by $g_a$, $g_b$, $\varepsilon_{C}$, $\tau_C$, and these generators satisfy the following relations: \begin{equation} [g_a, g_b] = 1, \quad \tau_C^2=1, \quad [g_a, \tau_C] = [g_b, \tau_C] = 1, \quad \tau_C \varepsilon_{C} \tau_C = \varepsilon_{C}^{-1}. \end{equation} \end{lem} \proof First of all, remark that the motion group $R(\Rr^3 \setminus H, \ast)$, where $\ast$ is a point, is the fundamental group $\pi_1(\Rr^3 \setminus H) = \langle a, b \mid [a, b]=1 \rangle \cong \Zz \oplus \Zz$, and recall that $R(\Rr^3, C) = \langle \tau_C \mid \tau_C^2=1 \rangle \cong \Zz_2$ (Section~\ref{S:Circle}). Let~$D_a$,~$D_b$ and $D_c$ be disks bounded by~$H_1$, $H_2$ and~$C$, flatly embedded in the planes where $H_1$, $H_2$ and~$C$ lie, as in Figure~\ref{F:HunionC}. Let $\{ C_t \}_{t \in [0,1]}$ be a ring motion of $C$ in~$\Rr^3 \setminus H$, and $D_{C_t}$ be the flat disk bounded by~$C_t$, for~$t \in [0, 1]$. Let us distinguish two cases.
\begin{enumerate}[label={\arabic*.}] \item Suppose that for all~$t \in [0, 1]$, $D_{C_t} \cap (D_a \cup D_b) = \emptyset$. After a deformation of~$\{ C_t \}_{t \in [0,1]}$ by a homotopy, we may assume that there exists a convex $3$-ball~$B_C$, disjoint from $(D_a \cup D_b)$, such that $C_t$ lies in $B_C$ for all~$t \in [0, 1]$. Then $\{ C_t \}_{t \in [0,1]}$ represents an element of $R(B_C, C) \cong R(\Rr^3, C) = \langle \tau_C \mid \tau_C^2=1 \rangle$. \item \label{I:intersection} Suppose that $D_{C_t} \cap (D_a \cup D_b) \neq \emptyset$ for some value of~$t$. Then let us consider these two subcases. \begin{enumerate}[label={\ref{I:intersection}\arabic*.}] \item The disk $D_{C_t}$ intersects the interior of $D_a$ and/or $D_b$ for $t$ in a finite number of intervals~$[\tilde{t}-\varepsilon, \tilde{t}+\varepsilon]$, and $H_1 \cap \mathrm{int}(D_{C_t}) = H_2 \cap \mathrm{int}(D_{C_t}) = \emptyset$ for all~$t \in [0, 1]$. Then $\{ C_t \}_{t \in [0,1]}$, modulo $\tau_C$, represents an element of~$R(\Rr^3 \setminus H, \ast)=\langle a, b \mid [a, b]=1 \rangle$. \item The interior $\mathrm{int}(D_{C_t})$ intersects $H_1$ and/or $H_2$ for $t$ in a finite number of intervals~$[\tilde{t}-\varepsilon, \tilde{t}+\varepsilon]$, and $C_t \cap (\mathrm{int}(D_a) \cup \mathrm{int}(D_b) )= \emptyset$ for all~$t \in [0, 1]$. Then $\{ C_t \}_{t \in [0,1]}$, modulo~$\tau_C$, represents an element of the subgroup of $R(\Rr^3 \setminus H, C)$ generated by the motion~$\varepsilon_C$ (Figure~\ref{F:epsilon_C}). \end{enumerate} \end{enumerate} Every generic ring motion of $C$ in $\Rr^3 \setminus H$ can be decomposed into a combination of motions that fall in the cases above, so $\tau_C$, $g_a$, $g_b$ and $\varepsilon_C$ form a generating set for~$R(\Rr^3 \setminus H, C)$. The relations in the statement descend from relations of $R(\Rr^3 \setminus H, \ast)$ and~$R(\Rr^3, C)$, with the exception of~$\tau_C \varepsilon_{C} \tau_C = \varepsilon_{C}^{-1}$.
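Before turning to the figures, one can sanity-check this last relation at the level of the fundamental group, using the images of the generators under the Dahm homomorphism computed in the proofs of Lemmas~\ref{L:RingGroupFirstHCpresentation} and~\ref{L:RingGroupHCpresentation}; this only verifies the induced automorphisms, not the relation between the motions themselves, which is what the figures establish:

```latex
% With D(\varepsilon_C): a \mapsto cac^{-1}, b \mapsto cbc^{-1}, c \mapsto c,
% and D(\tau_C): a \mapsto a, b \mapsto b, c \mapsto c^{-1}, we get
\[
  \big( D(\tau_C)\, D(\varepsilon_C)\, D(\tau_C) \big)(a)
    = D(\tau_C)\big( c\, a\, c^{-1} \big)
    = c^{-1} a\, c
    = D(\varepsilon_C^{-1})(a),
\]
% and similarly for b, while both sides fix c. So the motions
% \tau_C \varepsilon_C \tau_C and \varepsilon_C^{-1} induce the same
% automorphism of \pi_1(\Rr^3 \setminus (H \sqcup C)).
```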
This last relation can be seen from the sequence of Figures~\ref{F:Tau_Epsilon_Tau_1}, \ref{F:Tau_Epsilon_Tau_2}, \ref{F:Tau_Epsilon_Tau_3}, and~\ref{F:Tau_Epsilon_Tau_4}. \endproof \begin{figure}[hbtp] \centering \includegraphics[scale=.5]{Tau_Epsilon_Tau_1.pdf} \caption{The ring motion $\varepsilon_{C}^{-1}$. } \label{F:Tau_Epsilon_Tau_1} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[scale=.5]{Tau_Epsilon_Tau_2.pdf} \caption{A deformation of $\varepsilon_{C}^{-1}$, where the plane in which $C$ lies tilts slightly before encircling~$H$. } \label{F:Tau_Epsilon_Tau_2} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[scale=.5]{Tau_Epsilon_Tau_3.pdf} \caption{The plane in which $C$ lies tilts further before encircling~$H$. } \label{F:Tau_Epsilon_Tau_3} \end{figure} \begin{figure}[hbtp] \centering \includegraphics[scale=.5]{Tau_Epsilon_Tau_4.pdf} \caption{The plane in which $C$ lies tilts by $180$ degrees before encircling~$H$, and the ring motion $\varepsilon_{C}^{-1}$ has been continuously deformed into the motion~$\tau_C \varepsilon_{C} \tau_C$.} \label{F:Tau_Epsilon_Tau_4} \end{figure} Let $R^+(\Rr^3 \setminus H, C)$ be the index $2$ subgroup of $R(\Rr^3 \setminus H, C)$ consisting of equivalence classes of ring motions of $C$ that preserve an orientation of~$C$. This is the subgroup generated by~$g_a$,~$g_b$, and~$\varepsilon_{C}$. \begin{lem} \label{L:RingGroupFirstHCpresentation} The ring group $R^+(\Rr^3 \setminus H, C)$ admits the presentation \begin{equation} \label{E:Presentation-Hplus} \langle \, g_a, g_b, \varepsilon_{C} \mid [g_a, g_b] = 1 \, \rangle, \end{equation} and the Dahm homomorphism $D \colon R^+(\Rr^3 \setminus H, C) \to {\mathrm Aut}(\pi_1(\Rr^3 \setminus (H \sqcup C)))$ is injective.
\end{lem} \proof The images of the elements $g_a$, $g_b$ and $\varepsilon_{C}$ under the Dahm homomorphism $D: R^+(\Rr^3 \setminus H, C) \to {\mathrm Aut}(\pi_1(\Rr^3 \setminus (H \sqcup C)))= {\mathrm Aut}\big(\langle a, b, c \mid [a, b]=1 \rangle\big)$ are the following automorphisms: \begin{equation} D(g_a) \colon \left\{ \begin{array}{lll} a & \mapsto & a \\ b & \mapsto & b \\ c & \mapsto & a c a^{-1} \end{array} \right. \\ D(g_b) \colon \left\{ \begin{array}{lll} a & \mapsto & a \\ b & \mapsto & b \\ c & \mapsto & b c b^{-1} \end{array} \right. \\ D(\varepsilon_{C}) \colon \left\{ \begin{array}{lll} a & \mapsto & c a c^{-1}\\ b & \mapsto & c b c^{-1}\\ c & \mapsto & c. \end{array} \right. \end{equation} Let $G_1$ be the free abelian group generated by $g_a$ and~$g_b$, let $G_2$ be the infinite cyclic group generated by~$\varepsilon_{C}$, and let $G$ be the free product of $G_1$ and~$G_2$, \ie, $G= \langle g_a, g_b, \varepsilon_{C} \mid [g_a, g_b]=1 \rangle$. We show that the natural epimorphism $\mu: G \to R^+(\Rr^3 \setminus H, C)$ is injective by showing that the homomorphism $D' = D \circ \mu : G \to {\mathrm Aut}(\pi_1(\Rr^3 \setminus (H \sqcup C)))$ is injective. Let $W\colon G \to \langle a, b, c \mid [a, b]=1 \rangle$ be the isomorphism with $g_a \mapsto a, g_b \mapsto b, \varepsilon_{C} \mapsto c$. Note that for any $g \in G$, $D'(g)$ is the inner automorphism of $\langle a, b, c \mid [a, b]=1 \rangle$ by $W(g)$, \ie, $D'(g)(x) = W(g) x W(g)^{-1}$. Since the free product $(\Zz \oplus \Zz) \ast \Zz$ has trivial center, this implies that $D'(g)=1$ if and only if $W(g)=1$. Thus $D'$ is injective; hence $\mu$ is an isomorphism, which gives the presentation~\eqref{E:Presentation-Hplus}, and $D$ is injective on~$R^+(\Rr^3 \setminus H, C)$. \endproof \begin{rmk} Remark that $\pi_1(\Rr^3 \setminus (H \sqcup C))$ is a right-angled Artin group, and that $\{D(g_a), D(g_b), D(\varepsilon_C)\}$ is the set of (partial) conjugations in~${\mathrm Aut}(\pi_1(\Rr^3 \setminus (H \sqcup C)))$.
Then $\{D(g_a), D(g_b), D(\varepsilon_C)\}$ is a generating set for a particular case of \emph{group of vertex-conjugating automorphisms of a right-angled Artin group}, for which Toinet gives a complete presentation in~\cite{Toinet:2012}. In that paper he generalises a method used by McCool~\cite{McCool:1986} to study \emph{groups of basis-conjugating automorphisms of free groups}. We recall that the latter are isomorphic to \emph{pure untwisted} ring groups, and to \emph{pure} loop braid groups~\cite{BrendleHatcher:2013, Damiani:Journey}. \end{rmk} \begin{lem} \label{L:RingGroupHCpresentation} The ring group $R(\Rr^3 \setminus H, C)$ admits the presentation \begin{equation} \label{E:Presentation-H} \langle \, g_a, g_b, \varepsilon_{C}, \tau_{C} \mid [g_a, g_b] = 1, \tau_{C}^2=1, \, [g_a, \tau_{C}] = [g_b, \tau_{C}] = 1, \tau_{C} \varepsilon_{C} \tau_{C} = \varepsilon_{C}^{-1} \, \rangle, \end{equation} and the Dahm homomorphism $D\colon R(\Rr^3 \setminus H, C) \to {\mathrm Aut}(\pi_1(\Rr^3 \setminus (H \sqcup C)))$ is injective. \end{lem} \proof Presentation \eqref{E:Presentation-H} is obtained from presentation \eqref{E:Presentation-Hplus} and Lemma~\ref{L:RingGroupFirstHCgenerator} by using the short exact sequence \begin{equation} \label{E:seHC} \begin{CD} 1 \longrightarrow R^+(\Rr^3 \setminus H, C) @> \iota >> R(\Rr^3 \setminus H, C) @>>> \Zz_2 \longrightarrow 1. \end{CD} \end{equation} Let $g \in R(\Rr^3 \setminus H, C)$ be an element of the kernel of~$D$. In Lemma~\ref{L:RingGroupFirstHCpresentation} we have seen that $D$ is injective on the subgroup~$R^+(\Rr^3 \setminus H, C)$. Suppose $g \in R(\Rr^3 \setminus H, C) \setminus R^+(\Rr^3 \setminus H, C)$. Then $g = g_0 \tau_{C}$ for some~$g_0 \in R^+(\Rr^3 \setminus H, C)$. Since \begin{equation} D(\tau_{C}) \colon \left\{ \begin{array}{lll} a & \mapsto & a \\ b & \mapsto & b \\ c & \mapsto & c^{-1}, \end{array} \right.
\end{equation} $D(\tau_{C})$ is not an inner automorphism of~$\pi_1(\Rr^3 \setminus (H \sqcup C))$. On the other hand, $D(g)=1$ would give $D(\tau_{C}) = D(g_0)^{-1}$, which is inner because $D(g_0)$ is (see the proof of Lemma~\ref{L:RingGroupFirstHCpresentation}); this is a contradiction. Thus, $D$ is injective on~$R(\Rr^3 \setminus H, C)$. \endproof \begin{lem} \label{L:ExactSequenceHopfCircleB} The sequence involving $e$ and $p_1$ in Lemma~\ref{L:ExactSequenceHopfCircle} induces the short exact sequence \begin{equation} \label{E:epHopfCircleB} \begin{CD} 1 \longrightarrow R(\Rr^3 \setminus H, C) @>e>> R(\Rr^3, H, C) @>p_1>> R(\Rr^3, H) \longrightarrow 1. \end{CD} \end{equation} \end{lem} \proof By Lemma~\ref{L:ExactSequenceHopfCircle}, it is sufficient to show that $e$ is injective. This follows from the injectivity of the Dahm homomorphism $D\colon R(\Rr^3 \setminus H, C) \to {\mathrm Aut}(\pi_1(\Rr^3 \setminus (H \sqcup C)))$ established in Lemma~\ref{L:RingGroupHCpresentation}. \endproof \subsection{The ring group \texorpdfstring{$R(\Rr^3, H \sqcup C)$}{}} \begin{thm} \label{T:HopfCirclepresentation} The ring group $R(\Rr^3, H \sqcup C)$ $(= R(\Rr^3, H, C))$ admits the following presentation: Generators: \begin{equation} \label{E:HCgenerator} g_a, g_b, \varepsilon_C, \tau_C, \tau_H, s. \end{equation} Relations: \begin{equation} \label{E:HCrelationA} [g_a, g_b] = 1, ~ \tau_{C}^2=1, ~ [g_a, \tau_{C}] = [g_b, \tau_{C}] = 1, ~ \tau_{C} \varepsilon_{C} \tau_{C} = \varepsilon_{C}^{-1}, \end{equation} \begin{equation} \label{E:HCrelationB} \tau_H^4 =1, ~ s^2 =\tau_H^2, ~ s \tau_H s^{-1} = \tau_H^{-1}, \end{equation} \begin{equation} \label{E:HCrelationC} \tau_H g_a \tau_H^{-1} = g_a^{-1}, ~ \tau_H g_b \tau_H^{-1} = g_b^{-1}, ~ \tau_H \varepsilon_C \tau_H^{-1} = \varepsilon_C, ~ \tau_H \tau_C \tau_H^{-1} = \tau_C, \end{equation} \begin{equation} \label{E:HCrelationD} s g_a s^{-1} = g_a, ~ s g_b s^{-1} = g_b, ~ s \varepsilon_C s^{-1} = \varepsilon_C, ~ s \tau_C s^{-1} = \tau_C. \end{equation} \end{thm} \proof Consider the short exact sequence \eqref{E:epHopfCircleB}.
Let $\tilde\tau_H$ (resp.~$\tilde s$) be the element of $R(\Rr^3, H \sqcup C)$ obtained as the union of $\tau_H$ (resp.~$s$) and the stationary motion of~$C$. Then $p_1 (\tilde\tau_H) = \tau_H$ and~$p_1(\tilde s) =s$. We have a section $R(\Rr^3, H) \to R(\Rr^3, H \sqcup C)$ sending $\tau_H$ to $\tilde\tau_H$ and $s$ to~$\tilde s$. Thus, the short exact sequence \eqref{E:epHopfCircleB} is split. We denote the elements $\tilde\tau_H$ and $\tilde s$ by $\tau_H$ and $s$ for simplicity. Using the presentation \eqref{E:Presentation-H} of~$R(\Rr^3 \setminus H, C)$ and the presentation \eqref{E:HopfB} of~$R(\Rr^3, H)$, we obtain the generators~\eqref{E:HCgenerator} and relations \eqref{E:HCrelationA} and~\eqref{E:HCrelationB}. The actions of $\tau_H$ and $s$ yield relations \eqref{E:HCrelationC} and~\eqref{E:HCrelationD}. \endproof \section*{Acknowledgements} During the writing of this paper both authors were supported by JSPS KAKENHI Grant Number JP16F1679. The first author was also supported by a JSPS Postdoctoral Fellowship for Foreign Researchers, and the second author was also supported by JSPS KAKENHI Grant Number JP26287013. We thank Riccardo Piergallini and John Guaschi for helpful discussions, Eric Rowell for giving us access to a precious reference, and Arnaud Mortier for the interest he expressed in this work.
\section{Introduction} \label{Section: Introduction} Weakly integral modular categories, \textit{i.e.}, modular categories of integral FP-dimension, arise in a number of settings such as quantum groups at certain roots of unity \cite{NR1}, equivariantizations of Tambara-Yamagami categories \cite{GNN1}, and by gauging pointed modular categories \cite{ACRW1,CGPW1,BBCW1}. Several key conjectures characterizing weakly integral categories are found in the literature, \textit{e.g.} the Property-F Conjecture, \cite[Conjecture 2.3]{NR1}, the Weakly Group-Theoretical Conjecture \cite[Question 2]{ENO2}, and the Gauging Conjecture \cite[Abstract]{ACRW1}. In particular, it is known \cite{RWe1} that the braid group representations associated with the weakly integral modular categories $SO(M)_2$ factor over finite groups for all $M$; that is, these categories have property $F$. We will assume throughout that all objects in our categories have positive dimensions. This is not a significant loss of generality: see Remark \ref{rmkunitary}. In the spirit of \cite{HNW}, we call any modular category with the same fusion rules as the $4M$-dimensional modular category $SO(M)_2$ with $M$ even an \textit{even metaplectic} modular category.\footnote{In fact they require unitarity. 
It is not known if this is a strictly stronger assumption than positivity of dimensions.} Our main results are summarized as follows: \begin{thm} If $\mcC$ is a non-pointed modular category of dimension $p^{3}m$ where $p$ is a prime and $m$ is a square-free integer, then $p=2$ and one of the following is true: \begin{itemize} \item[(i)] $\mcC$ is a Deligne product of an even metaplectic modular category of dimension $8\ell$ and a pointed $\mbbZ_k$-cyclic modular category with $k$ odd, or \item[(ii)] $\mcC$ is the Deligne product of a Semion modular category with a modular category of dimension $4m$ (see \cite{BGNPRW1}). \end{itemize} \end{thm} Furthermore, we extend the work of \cite[Theorem 3.1]{BGNPRW1} characterizing modular categories of dimension $p^2m$ to include the case $p$ odd: in this case such categories are pointed (see \thmref{Theorem: p2m main theorem}). In particular, all modular categories of dimension $p^{2}m$ and $p^{3}m$ satisfy the Weakly Group-Theoretical Conjecture \cite[Question 2]{ENO2} and the Gauging Conjecture \cite{ACRW1}. Moreover, we obtain the following result similar to that of \cite[Theorem 3.1]{ACRW1}: \begin{thm}[\thmref{Theorem: Metaplectic main result}] If $\mcC$ is an even metaplectic modular category of dimension $8N$, with $N$ an odd integer, then $\mcC$ is a gauging of the particle-hole symmetry of a $\mbbZ_{2N}$-cyclic modular category. Moreover, there are exactly $2^{r+2}$ inequivalent even metaplectic modular categories of dimension $8N$ for $N = p_1^{k_1} \cdots p_r^{k_r}$, with $p_i$ distinct odd primes. \end{thm} \section{Preliminaries} \label{Section: Preliminaries} In this section we recall results and notation that are presumably well-known to the experts. Further details can be found in \cite{ENO1,DGNO1,BKi}. A \textit{premodular} category, $\mcC$, is a braided, balanced fusion category.
Throughout we will denote the isomorphism classes of simple objects of $\mcC$ by $X_{a}$, ordered such that $X_{0}=\1$ is the unit object. The set of isomorphism classes of simple objects will be denoted by $\Irr\paren{\mcC}$. Duality in $\mcC$ introduces an involution on the label set of the simple objects by $X_{a}^{*}\cong X_{a^{*}}$. The fusion matrices $N_{a,b}^{c}=\paren{N_{a}}_{b,c}$ provide the multiplicity of $X_{c}$ in $X_{a}\otimes X_{b}$. These matrices are non-negative integer matrices and thus are subject to the Frobenius-Perron Theorem. The Frobenius-Perron eigenvalue of $N_{a}$, $d_{a}$, is called the \textit{Frobenius-Perron dimension} of $X_{a}$, or FP-dimension for short. An object is said to be \textit{invertible} if its FP-dimension is $1$, and \textit{integral} if its FP-dimension is an integer. The invertible and integral simple objects generate full fusion subcategories called the \textit{pointed subcategory}, $\mcC_{\pt}$, and the \textit{integral subcategory}, $\mcC_{\mathrm{int}}$. If $\mcC=\mcC_{\mathrm{int}}$, then $\mcC$ is said to be \textit{integral}. The categorical FP-dimension is given by $\FPdim\mcC=\sum_{a}d_{a}^{2}$. If $\FPdim\mcC\in\mbbZ$, then $\mcC$ is said to be a \textit{weakly integral category}. A weakly integral category that is not integral is called \textit{strictly weakly integral}. \begin{rmk}\label{rmkunitary} Let $\mcC$ be any weakly integral modular category. By \cite[Prop. 5.4]{ENO1} the underlying braided fusion category $\mcC$ has a unique pivotal (in fact, spherical) structure so that each object in $\mcC$ has positive dimension. Moreover, the spherical structures on the braided fusion category underlying any modular category are in 1-1 correspondence with simple objects $a$ with $a^{\otimes 2}\cong\1$ and each spherical structure again yields a modular category \cite[Lemma 2.4]{BNRW1}.
Thus a classification of some class of weakly integral modular categories with positive dimensions can be easily extended to a classification of all modular categories in that class. This motivates our assumption that the categorical and FP-dimensions coincide. \end{rmk} Important data for a premodular category are the $S$-matrix and $T$-matrix, $S=\paren{S_{a,b}}$ and $T=\paren{\th_{a}\d_{a,b}}$. These matrices are indexed by the simple objects in $\mcC$, and the diagonal entries of the $T$-matrix are referred to as \textit{twists}. These data obey (see \cite{BKi}) $S_{a,b}=S_{b,a}$, $S_{0,a}=d_{a}$, $\bar{S_{a,b}}=S_{a,b^{*}}$ and the \textit{balancing} relation $\th_{a}\th_{b}S_{a,b}=\sum_{c}N_{a^{*},b}^{c}d_{c}\th_{c}$. For premodular categories $\mcD\subset \mcC$, the \textit{centralizer} of $\mcD$ in $\mcC$ is denoted by $C_{\mcC}\paren{\mcD}$ and is generated by the objects $\lcb X\in\mcC\mid S_{X,Y}=d_{X}d_{Y}\;\forall Y\in\mcD\rcb$ (see \cite{Brug1}). The category $C_{\mcC}\paren{\mcC}$ is called the \textit{M\"{u}ger center} and is often denoted by $\mcC'$. Note that a useful characterization of a simple object being outside of the M\"{u}ger center is that its column in the $S$-matrix is orthogonal to the first column of $S$. If $\mcC'=\Vec$, then $\mcC$ is said to be a \textit{modular category}, whereas if $\mcC'=\mcC$, then $\mcC$ is called \textit{symmetric}. Symmetric categories are classified in terms of group data: \begin{theorem}[\cite{D1}] If $\mcC$ is a symmetric fusion category, then there exists a finite group $G$ such that $\mcC$ is equivalent to the super-Tannakian category $\Rep(G,z)$ of super-representations (i.e. $\mbbZ_2$ graded) of $G$, where $z\in Z(G)$ is a distinguished central element with $z^2=1$ acting as the parity operator. \end{theorem} Observe that $\mcC\cong\Rep(G,z)$ with $z=e\in G$ if and only if the $\mbbZ_2$-grading is trivial so that $\Rep(G,z)=\Rep(G)$.
In this case we say $\mcC$ is Tannakian, and otherwise we say it is non-Tannakian. The smallest non-Tannakian symmetric category is $\Rep(\mbbZ_2,1)$, which we will denote by $\sVec$ when equipped with the (unique) structure of a ribbon category with $\dim(\chi)=1$ for a non-trivial object $\chi$. An invertible object in $\mcC$ that generates a Tannakian category (\textit{i.e.} $\Rep\paren{\mbbZ_{2}}$) is referred to as a \textit{boson}, while an invertible object in $\mcC$ generating $\sVec$ is referred to as a \textit{fermion}. By \cite[Lemma 5.4]{M5} we find that if $\sVec\subset\mcC'$ for some premodular category $\mcC$, and $\chi$ is the generator of $\sVec$, then $\chi\otimes Y\ncong Y$ for all simples $Y$ in $\mcC$. Bosons are useful through a process known as de-equivariantization, which we will discuss shortly. The Semion and Ising categories will appear frequently in the sequel in factorizations of modular categories. The Semion categories, denoted by $\Sem$, are modular categories with the same fusion rules as $\Rep\paren{\mbbZ_{2}}$, see \cite{RSW} for details. The Ising categories are rank 3 modular categories with simple objects: $\1$, $\psi$ (a fermion), and $\s$ (the Ising anyon) of dimension $\sqrt{2}$. The key fusion rules are $\ps^{2}=\1$ and $\s^{2}=\1+\ps$; the modular data can be found in \cite{RSW}. There are exactly $8$ inequivalent Ising categories. They are the smallest examples of \textit{generalized Tambara-Yamagami categories}, \textit{i.e.} non-pointed fusion categories with the property that the tensor product of any two non-invertible simple objects is a direct sum of invertible objects. In fact, by \cite{Nat} any \textit{modular} generalized Tambara-Yamagami category is a (Deligne) product of an Ising category and a pointed modular category. Characterizations of the Ising categories can be found in \cite{DGNO1,BGNPRW1}.
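As a concrete illustration of the balancing relation recalled above, one can run the following routine check on the Ising data (the values $\th_{\1}=1$, $\th_{\ps}=-1$ and $S_{\s,\s}=0$ are the standard ones from \cite{RSW}); note that the computation is insensitive to which of the eight Ising categories is chosen:

```latex
% Balancing check for the Ising fusion rule \s \otimes \s = \1 \oplus \ps,
% with \s self-dual, d_{\1} = d_{\ps} = 1, \th_{\1} = 1, \th_{\ps} = -1:
\[
  \th_{\s}\th_{\s} S_{\s,\s}
    = \sum_{c} N_{\s,\s}^{c}\, d_{c}\,\th_{c}
    = d_{\1}\th_{\1} + d_{\ps}\th_{\ps}
    = 1 \cdot 1 + 1 \cdot (-1)
    = 0,
\]
% which forces S_{\s,\s} = 0, independently of the value of \th_{\s}.
```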
In Lemmas \ref{Lemma: Equivariantizations contain Ising}, \ref{Lemma: Dimension 16 SWI MTC Contains Ising}, and \corref{Cor: Dimension 8 SWI Fusion Contains Ising} we find new conditions implying a category must contain an Ising category. A \textit{grading} of a fusion category $\mcC$ by a finite group $G$ is a decomposition of the category as direct sum $\mcC = \oplus_{g\in G} \mcC_g$, where the components are full abelian subcategories of $\mcC$ indexed by the elements of $G$, such that the tensor product maps $\mcC_g\times \mcC_h$ into $\mcC_{gh}$. The \textit{trivial component} $\mcC_e$ (corresponding to the unit of the group $G$) is a fusion subcategory of $\mcC$. The grading is called \textit{faithful} if $\mcC_g \neq 0$, for all $g\in G$, and in this case all the components are equidimensional with $\abs{G}\dim \mcC_g = \dim\mcC$. Every fusion category $\mcC$ is faithfully graded by the universal grading group $\mcU\paren{\mcC}$ and every faithful grading of $\mcC$ is a quotient of $\mcU\paren{\mcC}$. Furthermore, the trivial component under the universal grading is the \textit{adjoint subcategory} $\mcC_{\ad}$, the full fusion subcategory generated by the objects $X\otimes X^*$, for $X$ simple. The universal grading was first studied in \cite{GN2}, and it was shown that if $\mcC$ is modular, then $\mcU\paren{\mcC}$ is canonically isomorphic to the character group of the group $G(\mcC)$ of isomorphism classes of invertible objects in $\mcC$. When $\mcC$ is a weakly integral fusion category there is another useful grading called the \textit{GN-grading} first studied in \cite{GN2}: \begin{theorem}\cite[Theorem 3.10]{GN2} Let $\mcC$ be a weakly integral fusion category. Then there is an elementary abelian $2$-group $E$, a set of distinct square-free positive integers $n_x$, $x \in E$, with $n_0 = 1$, and a faithful grading $\mcC = \oplus_{x\in E} \mcC(n_x)$ such that $\dim(X) \in \mathbb Z \sqrt{n_x}$ for each $X \in \mcC(n_x)$. 
\end{theorem} Notice the trivial component of the $GN$-grading is the integral subcategory $\mcC_{\mathrm{int}}$. \iffalse The grading and centralizer structures can be used to produce relationships between subcategories of $\mcC$. For instance: \begin{prop} \label{Prop: Cint center in Cpt} If $\mcC$ is a modular category, then $C_{\mcC_{\mathrm{int}}}\(\mcC_{\mathrm{int}}\)\subset\mcC_{\pt}$ with equality if and only if $\mcC_{\ad}=\mcC_{\mathrm{int}}$. In particular, if $\mcC$ is strictly weakly integral and $\abs{\mcC_{\pt}}>2$, then $C_{\mcC_{\mathrm{int}}}\(\mcC_{\mathrm{int}}\)\neq\mcC_{\pt}$. \end{prop} \begin{proof} First note that $\mcC_{\ad}\subset\mcC_{\mathrm{int}}\subset\mcC$ and so we have $\Vec=C_{\mcC}\(\mcC\)\subset C_{\mcC}\(\mcC_{\mathrm{int}}\)\subset C_{\mcC}\(\mcC_{\ad}\)=\mcC_{\pt}$. Of course $C_{\mcC_{\mathrm{int}}}\(\mcC_{\mathrm{int}}\)\subset C_{\mcC}\(\mcC_{\mathrm{int}}\)$. So we have $C_{\mcC_{\mathrm{int}}}\(\mcC_{\mathrm{int}}\)\subset \mcC_{\pt}$. The final part of the statement follows from the structure of the GN-grading and the fact that $\mcC_{\pt}'=\mcC_{\ad}$. \end{proof} \fi Our approach to classification relies upon \textit{equivariantization} and its inverse functor \textit{de}-\textit{equivariantization} (see for example \cite{DGNO1}) which we now briefly describe. An \textit{action} of a finite group $G$ on a fusion category $\mcC$ is a strong tensor functor $\rho: \underline{G}\to \End_{\otimes}(\mcC)$. The \textit{$G$-equivariantization} of the category $\mcC$ is the category $\mcC^G$ of $G$-equivariant objects and morphisms of $\mcC$. When $\mcC$ is a fusion category over an algebraically closed field $\mbbK$ of characteristic $0$, the $G$-equivariantization, $\mcC^G$, is a fusion category with $\dim \mcC^G = |G| \dim \mcC$. The fusion rules of $\mcC^G$ can be determined in terms of the fusion rules of the original category $\mcC$ and group-theoretical data associated to the group action \cite{BuN1}. 
De-equivariantization is the inverse to equivariantization. Given a fusion category $\mcC$ and a Tannakian subcategory $\Rep G\subset\mcC$, consider the algebra $A = \Fun\paren{G}$ of functions on $G$. Then $A$ is a commutative algebra in $\mcC$. The category of $A$-modules on $\mcC$, $\mcC_{G}$, is a fusion category called a \textit{$G$-de-equivariantization} of $\mcC$. We have $\abs{G}\dim \mcC_G = \dim \mcC$, and there are canonical equivalences $\paren{\mcC_G}^G \cong \mcC$ and $\paren{\mcC^G}_G \cong \mcC$. An important property of the de-equivariantization is that if the Tannakian category in question is $\mcC'$, then $\mcC_{G}$ is modular and is called the \textit{modularization} of $\mcC$ \cite{Brug1,M5}. One may also construct modular categories from a given modular category $\mcC$ with an action of a finite group $G$, by the \textit{gauging} introduced and studied in \cite{BBCW1} and \cite{CGPW1}. Gauging is a $2$-step process. The first step is to extend the modular category $\mcC$ to a $G$-crossed braided fusion category $\mcD\cong\bigoplus_{g\in G}\mcD_g$ such that $\mcD_e=\mcC$ with a $G$-braiding, see \cite{DGNO1}. The second step is to equivariantize $\mcD$ by $G$. One important remark is that there are certain obstructions to the first step \cite{ENO3}, while the second is always possible. \section{Even metaplectic categories of dimension $8N$, with $N$ an odd integer} \label{Section: Metaplectic Classification} In this section we consider metaplectic modular categories. A general understanding of these categories will facilitate a classification of modular categories of dimension $8N$ for any odd square-free integer $N$. Metaplectic categories were defined and studied in \cite{ACRW1}. For convenience we recall the definition here. \begin{defn} An \textbf{even metaplectic modular category} is a modular category $\mcC$ with positive dimensions that is Grothendieck equivalent to $SO(2N)_2$, for some integer $N \geq 1$. 
\end{defn} \begin{rmk} There are significant differences between $N$ odd and even, see \cite{NR1}. For our results only the case $N$ odd plays a role. Notice the case $N=1$ is degenerate, corresponding to $SO(2)_2$ which has fusion rules like $\mbbZ_8$. Our results carry over to this (pointed) case with little effort, so we include it. \end{rmk} Throughout the remainder of this section, $\mcC$ will denote an even metaplectic category with $N$ an odd integer. In particular, $\mcC$ has dimension $8N$. In this case it follows from the definition that $\mcC$ has rank $N+7$ \cite{NR1}. While the dimension and rank of $\mcC$ are straightforward, the fusion rules strongly depend on the parity of $N$. While the full fusion rules and $S$-matrix can be found in \cite[Subsection 3.2]{NR1}, we review here some of the fusion rules that will be relevant in later proofs. The group of isomorphism classes of invertible objects of $\mcC$ is isomorphic to $\mbbZ_4$. We will denote by $g$ a generator of this group, and will abuse notation referring to the invertible objects as $g^{k}$. The only non-trivial self-dual invertible object is $g^2$. In $\mcC$, there are $N-1$ self-dual simple objects, $X_i$ and $Y_{i}$, of dimension $2$. Furthermore, one may order the $2$-dimensional objects such that $X_{1}$ generates $\mcC_{\ad}$ and $Y_{1}$ generates $\mcC_{\mathrm{int}}$. The remaining four simples in $\mcC$, $V_{i}$, have dimension $\sqrt{N}$. $\mcC$ has the following fusion rules: \begin{itemize} \item $g\otimes X_a\simeq Y_{\frac{N+1}{2}-a}$, and $g^2\otimes X_a\simeq X_a,$ and $g^{2}\otimes Y_{a}\simeq Y_{a}$ for $1\leq a\leq \paren{N-1}/2$. \item $X_a\otimes X_a = \1\oplus g^2\oplus X_{\mathrm{min}\{2a, N-2a\}}$; $X_{a}\otimes X_{b}=X_{\mathrm{min}\{a+b,N-a-b\}}\oplus X_{\abs{a-b}}$ ($a\neq b$) \item $V_1\otimes V_1 = g\oplus \bigoplus\limits_{a= 1}^{\frac{N-1}{2}}Y_{a}$. 
\item $gV_{1}=V_{3}$, $gV_{3}=V_{4}$, $gV_{2}=V_{1}$, $gV_{4}=V_{2}$ and $g^{3}V_{a}=V_{a}^{*}$, $V_{2}=V_{1}^{*}$, $V_{4}=V_{3}^{*}$. \end{itemize} We remark that it is immediately clear that an even metaplectic modular category of dimension $8N$ with $N$ odd is prime (not the Deligne product of two modular subcategories): $V_1$ is a tensor generator, hence cannot reside in any proper fusion subcategory. For $\SO\paren{2N}_{2}$ we order the simple objects by their highest weights as follows:\\ $\mathbf{0},2\l_{1},2\l_{N-1},2\l_{N},\l_{1},\ldots,\l_{N-2},\l_{N-1}+\l_{N}, \l_{N-1},\l_{N},\l_{N-1}+\l_{1},\l_{N}+\l_{1}$. The correspondence with the notation used in this paper is $\1,g^{2},g,g^{3},Y_{1},X_{1},\ldots,Y_{\frac{N-1}{2}},X_{\frac{N-1}{2}},V_{1},V_{2},V_{3},V_{4}$. The $S$- and $T$-matrices have the following forms: \begin{align*} S&=\paren{\begin{smallmatrix} A & B & C \\ B^t & D & 0 \\ C^t & 0 & E \end{smallmatrix}}, \qquad T=\diag\paren{1,1,i^{N},i^{N},\ldots,(-1)^{a}e^{-a^2\pi i/2N},\ldots,\theta_{\epsilon},\theta_{\epsilon},-\theta_{\epsilon},-\theta_{\epsilon}} \end{align*} where $\theta_{\epsilon}=e^{\pi i\frac{2N-1}{8}}$, and $A$, $E$, and $C$ are $4\times 4$ matrices, $B$ is a $4\times\paren{N-1}$ matrix and $D$ is $\paren{N-1}\times\paren{N-1}$. To give their explicit forms we first define a $\mbbC$-valued function: \begin{equation*} G(N) = \begin{cases} (\frac{-1}{N})\sqrt{N} & \text{ if } N\equiv 1 \mod{4}\\ i(\frac{-1}{N})\sqrt{N} & \text{ if } N\equiv 3 \mod{4}\\ \end{cases} \end{equation*} where $\paren{\frac{-1}{N}}$ is the Jacobi symbol, and set $\alpha = \overline{G\paren{N}}\theta_{\epsilon}^{2}$ and $\beta = i^{2N-1}\theta_{\epsilon}^{2} G\paren{N}$.
Then: \begin{align*} A &= \paren{\begin{smallmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & 1 & -1 & -1 \end{smallmatrix}}, \quad B = 2\paren{\begin{smallmatrix} 1&\cdots&1&\cdots&1\\ 1&\cdots&1&\cdots&1\\ -1&\cdots&(-1)^{a}&\cdots&1\\ -1&\cdots&(-1)^{a}&\cdots&1 \end{smallmatrix}}, \quad C = \sqrt{N}\paren{\begin{smallmatrix} 1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 \\ -i^N & i^N & -i^N & i^N \\ i^N & -i^N & i^N & -i^N \end{smallmatrix}},\\ E&=\paren{\begin{smallmatrix} \bar{\alpha} & \alpha & \beta & \bar{\beta}\\ \alpha & \bar{\alpha} & \bar{\beta} & \beta\\ \beta & \bar{\beta} & \bar{\alpha} & \alpha\\ \bar{\beta} & \beta & \alpha & \bar{\alpha} \end{smallmatrix}}, \quad D_{a,b} = 4 \cos\paren{\frac{\pi ab}{N}}, \quad 1\leq a,b\leq N-1. \end{align*} Finally, we note that, in terms of simple objects, the basis in which $S$ and $T$ are expressed is ordered such that the first two columns of $A$ correspond to the invertibles in $\mcC_{\ad}$, and the even indexed columns of $B$ correspond to the $2$-dimensional objects in $\mcC_{\ad}$. \begin{lemma} \label{boson} If $\mcC$ is an even metaplectic modular category, then the unique non-trivial self-dual invertible object is a boson. \end{lemma} \begin{proof} For $N=1$, we have the degenerate case where $\mcC$ is Grothendieck equivalent to $\Rep\paren{\mbbZ_{8}}$ with twists $\th_{j}=e^{j^2\pi i /8}$. There is a unique non-trivial self-dual object, $j=4$, and the corresponding twist is $\th_{4} = e^{16\pi i /8}=1$; thus it is a boson. If $N \geq 3$, consider $X_i$, one of the $2$-dimensional simple objects in $\mcC_{\ad}$, and recall that $g^2\otimes X_i = X_i$. Of course, $g^{2}\in\mcC_{\ad}$. Since $\mcC$ is modular we know that $C_{\mcC}\(\mcC_{\ad}\)=\mcC_{\pt}$ \cite{GN2}. In particular, $g^{2}$ is in $C_{\mcC_{\ad}}\(\mcC_{\ad}\)$. Thus $g^{2}$ is not a fermion by \cite[Lemma 5.4]{M5}. \end{proof} It follows that such a category $\mcC$ contains a Tannakian subcategory equivalent to $\Rep(\mbbZ_{2})$.
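Note that \lemmaref{boson} can also be read off directly from the $T$-matrix recorded above: in the ordering $\1,g^{2},g,g^{3},\ldots$ the twists of the invertible objects are \begin{equation*} \th_{\1}=1,\qquad \th_{g^{2}}=1,\qquad \th_{g}=\th_{g^{3}}=i^{N}, \end{equation*} so the unique non-trivial self-dual invertible object $g^{2}$ indeed has trivial twist.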
\begin{defn} A \textit{cyclic modular category} is a modular category that is Grothendieck equivalent to $\Rep(\mbbZ_n)$ for some integer $n$. When the specific value of $n$ is important we will refer to such a category as a $\mbbZ_n$-cyclic modular category. \end{defn} For more details regarding cyclic modular categories see \cite[Section 2]{ACRW1}. \begin{lemma} If $\mcC$ is an even metaplectic modular category then $\mcC_{\mbbZ_2}$ is a generalized Tambara-Yamagami category of dimension $4N$. In particular, the trivial component of $\mcC_{\mbbZ_2}$ is a cyclic modular category of dimension $2N$. \end{lemma} \begin{proof} As we noted previously, by \lemmaref{boson}, $\langle g^{2}\rangle\cong\Rep\(\mbbZ_2\)$ is a Tannakian subcategory of $\mcC$. In particular, we can form the de-equivariantization $\mcC_{\mbbZ_2}$, which is a braided $\mbbZ_2$-crossed fusion category of dimension $4N$. Moreover, the trivial component of $\mcC_{\mbbZ_2}$ under the $\mbbZ_2$-crossed braiding is modular of dimension $2N$ \cite[Proposition 4.56(ii)]{DGNO1}. Since $g^{2}$ fixes the 2-dimensional objects of $\mcC$, these 2-dimensional objects give rise to pairs of distinct invertibles in $\mcC_{\mbbZ_2}$. This produces $2(N-1)$ invertibles in $\mcC_{\mbbZ_2}$. On the other hand, $g^2\otimes 1=g^{2}$ and $g^{2}\otimes g=g^{3}$. This leads to two more invertibles in $\mcC_{\mbbZ_2}$. Since the non-integral objects are moved by $g^{2}$, we can conclude that $\mcC_{\mbbZ_2}$ has only two non-integral objects, each of dimension $\sqrt{N}$. In particular, $\mcC_{\mbbZ_2}$ is a generalized Tambara-Yamagami category and the trivial component of $\mcC_{\mbbZ_2}$ is a pointed modular category with $2N$ invertible objects \cite{Lip1}. To conclude that this pointed category is cyclic we apply the proof of \cite[Lemma 3.4]{ACRW1} \textit{mutatis mutandis}.
\end{proof} \begin{remark}\label{gauging} By the previous lemmas, every even metaplectic category with $N$ odd can be obtained as a $\mbbZ_2$-gauging of a cyclic modular category of dimension $2N$. We refer the reader to \cite{BBCW1} and \cite{CGPW1} for a precise definition of gauging and its properties. \end{remark} So to classify even metaplectic categories with $N$ odd, we must understand $\mbbZ_2$ actions by braided tensor autoequivalences of a cyclic modular category of dimension $2N$. Note that a $\mbbZ_{n}$-cyclic modular category with $n$ odd has a fixed-point-free $\mbbZ_2$ action by braided tensor autoequivalences associated to the $\mbbZ_{n}$ group automorphism $j\mapsto n-j$. We refer to this automorphism as the \textit{particle-hole symmetry}. In fact, it is shown in \cite{ACRW1} that for $n=p^a$ an odd prime power, this is the only non-trivial $\mbbZ_2$ action by braided tensor autoequivalences. By decomposing a $\mbbZ_n$-cyclic modular category into its prime power Deligne tensor factors, we see that every $\mbbZ_2$ action by braided tensor autoequivalences on a $\mbbZ_{n}$-cyclic modular category (for $n$ odd) is obtained by choosing either particle-hole symmetry or the identity on each $\mbbZ_{p^a}$-cyclic modular Deligne factor. Of course, particle-hole symmetry for $\mbbZ_n$ corresponds to choosing particle-hole symmetry on each Deligne factor. On the other hand, the Semion (or its conjugate) has exactly one non-trivial $\mbbZ_2$ action by braided tensor autoequivalences, corresponding to the non-trivial element of $H^2(\mbbZ_2,\mbbZ_2)=\mbbZ_2$ \cite{BBCW1}. For a $\mbbZ_{2N}$-cyclic modular category ($N$ odd) let us define the particle-hole symmetry to be the $\mbbZ_2$ action by braided tensor autoequivalences corresponding to the non-trivial tensor autoequivalence on the Semion factor and ordinary particle-hole symmetry on the odd $\mbbZ_N$ factor. \begin{remark} \label{nontriv} Let $\mathcal{C}$ be a $\mbbZ_{2N}$-cyclic modular category.
It follows from \cite{BBCW1,CGPW1} that if we choose a $\mbbZ_2$ action by braided tensor autoequivalences on $\mathcal{C}$ that restricts to the identity on some $\mbbZ_{p^a}$-cyclic modular subcategory $\mathcal{P}_{p^a}$ of $\mathcal{C}$ (including $p=2$, $a=1$) then the corresponding gauging $\mathcal{C}^{\times, G}_G$ will have a Deligne factor of $\mathcal{P}_{p^a}$. Indeed, in the extreme case of $N=1$, if we gauge $Sem$ by the trivial $\mbbZ_2$ action we obtain $Sem\boxtimes \Rep(D^\omega \mbbZ_2)$ for some cocycle $\omega$, since gauging $\Vec$ by the trivial $\mbbZ_2$ action produces the factor on the right. \end{remark} \begin{theorem} \label{Theorem: Metaplectic main result} If $\mcC$ is an even metaplectic category of dimension $8N$, with $N$ an odd integer, then $\mcC$ is a gauging of the particle-hole symmetry of a $\mbbZ_{2N}$-cyclic modular category. Moreover, for $2N = 2p_1^{k_1} \cdots p_r^{k_{r}}$, with $p_i$ distinct odd primes, there are exactly $2^{r+2}$ inequivalent even metaplectic modular categories. \end{theorem} \begin{proof} By \rmkref{gauging}, each even metaplectic modular category with $N$ odd is obtained as a $\mathbb Z_2$-gauging of a $\mbbZ_{2N}$-cyclic modular category. However, since an even metaplectic modular category is prime, \rmkref{nontriv} implies that it can only be obtained by gauging particle-hole symmetry. This proves the first statement. To count the inequivalent even metaplectic categories of dimension $8N$ with $N$ odd, we note that for each of the $r+1$ prime divisors of $2N=2p_1^{k_1} \cdots p_r^{k_r}$ there are exactly two cyclic modular categories (see \cite[Section 2]{ACRW1}). Gauging the particle-hole symmetry leads to an additional choice of $H^3(\mathbb Z_2, U(1))=\mathbb Z_2$, yielding a total of $2^{r+2}$ choices.
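For instance, for $2N=30=2\cdot 3\cdot 5$ we have $r=2$: each of the three prime divisors contributes a binary choice of cyclic modular factor, and the cocycle in $H^3(\mathbb Z_2, U(1))$ doubles the count once more, giving \begin{equation*} 2^{r+1}\cdot 2=2^{4}=16 \end{equation*} inequivalent even metaplectic modular categories of dimension $8N=120$.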
\end{proof} \section{Modular categories of dimension $8m$, with $m$ an odd square-free integer} Metaplectic categories form a large class of weakly integral modular categories of dimension $8m$ with $m$ odd and square-free. In this section we show that the only non-metaplectic modular categories of dimension $8m$ are pointed or products of smaller categories. In order to accomplish this we will first determine the structure of the pointed subcategory. This will enable us to show that the integral subcategory is Grothendieck equivalent to $\Rep\paren{D_{4m}}$. This will be sufficient to establish that $\mcC$ is metaplectic if it is prime. Along the way we will resolve the case that $\mcC$ has dimension $p^{3}m$ for $p$ an odd prime, and $m$ a square-free integer. We begin by considering integral categories of dimension $p^{2}m$ and $p^{3}m$. This will allow us to quickly reduce to the case of $8m$. \begin{lemma} \label{Lemma: pkm} Let $p$ be a prime and $m$ a square-free integer such that $\gcd(m,p)=1$. If $\mcC$ is a modular category of dimension $p^{k}m$ with $p^{k}\mid\dim \mcC_{\pt}$, then $\mcC$ is pointed. \end{lemma} \begin{proof} First note that $\dim\mcC_{\ad}=\frac{p^{k}m}{\dim\mcC_{\pt}}$. Since $p^{k}\mid\dim\mcC_{\pt}$ we know that $\dim\mcC_{\ad}\mid m$. On the other hand, $\mcC_{\ad}\subset\mcC_{\mathrm{int}}$ and all simples in $\mcC_{\mathrm{int}}$ have dimension $p^j$, $j\geq 0$. Since $\gcd(p,m)=1$ and $p\nmid\dim\mcC_{\ad}$, the simples in $\mcC_{\ad}$ all have dimension $1$; that is, $\mcC_{\ad}\subset\mcC_{\pt}$. In particular, $\mcC$ is nilpotent and is given by $\mcC_{p^{k}}\boxtimes\mcC_{q_{1}}\boxtimes\cdots\boxtimes\mcC_{q_{\ell}}$ where the $q_{j}$ are primes such that $m=q_{1}q_{2}\cdots q_{\ell}$ and $\mcC_{k}$ denotes a modular category of dimension $k$, see \cite[Theorem 1.1]{DGNO2}.
By the classification of prime dimension modular categories we know $\mcC_{q_{1}}\boxtimes\cdots\boxtimes\mcC_{q_{\ell}}$ is pointed, but $p^{k}$ does not divide its dimension. Thus $\mcC_{p^{k}}$ must be pointed. \end{proof} For an integral modular category of dimension $p^{2}m$ with $p$ and $m$ as in the above lemma, we have that $p^{2}m= \dim \mcC = \dim\mcC_{\pt}+ap^{2}$, where $a$ is the number of simple objects of dimension $p$. Thus we have the following corollary. \begin{cor} \label{Cor: p2m integral} If $p$ is prime and $m$ is a square-free integer with $\gcd\(m,p\)=1$, and $\mcC$ is an integral modular category with dimension $p^{2}m$, then $\mcC$ is pointed. \end{cor} \begin{lemma} \label{Lemma: integral p3m} Any integral modular category $\mcC$ of dimension $p^{3}m$ where $p$ is a prime and $m$ is a square-free integer is pointed. \end{lemma} \begin{proof} The case $m=1$ (i.e., dimension $p^3$) is well-known and easily verified using the grading and $|\mcU(\mcC)|=\rank\mcC_{\pt}$. Assume that $\mcC$ is not pointed. First we consider $\dim\mcC=p^4$ (i.e., $m=p$). By \cite[Lemma 4.9]{DN} $\mcC$ is a Drinfeld center $\mcZ(\Vec_G^\omega)\cong \Rep(D^\omega G)$ for some group $G$ of order $p^2$. By \cite{Ng03} such a $D^\omega G$ is commutative, i.e., $\mcC$ is pointed, contrary to our assumption. Since $\mcC$ is integral and modular we know that the simples have dimension $1$ or $p$. In particular, there is an integer $b$ such that $p^{3}m=\dim\mcC_{\pt}+p^{2}b$ and thus $p^{2}\mid \dim\mcC_{\pt}$. By \lemmaref{Lemma: pkm} we may assume $p^{3}$ does not divide $\dim\mcC_{\pt}$. In particular, there exists an integer $k$ such that $k\mid m$ and $\dim\mcC_{\pt}=p^{2}k$. Next let $\mcC_{i}$ denote the components of the universal grading of $\mcC$. Then $pm/k=\dim\mcC_{i}=a_{i}+b_{i}p^{2}$ where $a_{i}$ is the number of invertibles in component $\mcC_{i}$ and $b_{i}$ is the number of dimension $p$ objects in $\mcC_{i}$. So we can conclude that $p\mid a_{i}$ and $a_{i}\neq 0$.
Thus $\dim\mcC_{\pt}=\sum_{i}a_{i}\geq \sum_{i}p=p\dim\mcC_{\pt}$, an impossibility. \end{proof} Next we note that the condition that $\mcC$ is integral in the previous statements is vacuous if $p$ is odd. This is made explicit by the following lemma. \begin{lemma} \label{Lemma: 4 divides} If $\mcC$ is a strictly weakly integral modular category, then $4\mid\dim\mcC$. \end{lemma} \begin{proof} Coupling the weakly integral grading of \cite{GN2} and the fact that $\mcC$ is strictly weakly integral, we have $2\mid \dim\mcC_{\pt}\mid\dim\mcC_{\mathrm{int}}\mid\dim\mcC/2$. \end{proof} \lemmaref{Lemma: 4 divides}, \corref{Cor: p2m integral}, and \cite[Theorem 3.1]{BGNPRW1} resolve the case of $\mcC$ a weakly integral modular category of dimension $p^{2}m$ with $m$ square-free and coprime to $p$. We have: \begin{theorem} \label{Theorem: p2m main theorem} If $p$ is a prime, $m$ is a square-free integer coprime to $p$, and $\mcC$ is a non-pointed modular category of dimension $p^{2}m$, then $p=2$ and one of the following is true: \begin{itemize} \item[(i)] $\mcC$ contains an object of dimension $\sqrt{2}$ and is equivalent to a Deligne product of an Ising modular category with a cyclic modular category, or \item[(ii)] $\mcC$ contains no objects of dimension $\sqrt{2}$ and is equivalent to a Deligne product of a $\mbbZ_{2}$-equivariantization of a Tambara-Yamagami category over $\mbbZ_{k}$ and a cyclic modular category of dimension $n$ where $1\leq n=m/k\in\mbbZ$. \end{itemize} \end{theorem} Applying Lemmas \ref{Lemma: pkm} and \ref{Lemma: integral p3m} we have: \begin{cor} If $\mcC$ is a modular category of dimension $8m$ with $m$ an odd square-free integer, then either $\mcC$ is pointed or $\mcC$ is strictly weakly integral and $8\nmid\abs{\mcU\(\mcC\)}$. \end{cor} So to study weakly integral modular categories of dimension $p^{3}m$ it suffices to study strictly weakly integral modular categories of dimension $8m$ by Lemmas \ref{Lemma: 4 divides} and \ref{Lemma: integral p3m}.
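For example, an Ising category $\mcI$ (of dimension $4$) saturates the chain of divisibilities in \lemmaref{Lemma: 4 divides}: \begin{equation*} \dim\mcI_{\pt}=\dim\mcI_{\mathrm{int}}=\dim\mcI/2=2, \end{equation*} so $2\mid\dim\mcI_{\pt}\mid\dim\mcI_{\mathrm{int}}\mid\dim\mcI/2$ holds with equalities throughout and the conclusion $4\mid\dim\mcI$ is sharp.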
\begin{lemma} \label{Lemma: Equivariantizations contain Ising} Let $G$ be a finite group and $\mcC$ be the $G$-equivariantization of a fusion category $\mcD$ which contains an Ising category. If $\mcC$ has the property that every $X\in\Irr\(\mcC\)$ such that $\dim X\in\mbbZ\[\sqrt{2}\]$ actually has dimension $\sqrt{2}$, then $\mcC$ contains an Ising category. \end{lemma} \begin{proof} By \cite{BuN1}, the objects in $\mcC$ are pairs $S_{X,\p}$ where $X$ is the $G$-orbit of a simple in $\mcD$ and $\p$ is a projective representation of $\mathrm{Stab}_{G}\(X\)$. Now let $\s$ be the Ising object in $\mcD$. Then there exists a projective representation of $\mathrm{Stab}_{G}\(\s\)$ that is self-dual; denote it by $\r$. Then $S_{\s,\r}$ is a simple in $\mcC$ and is self-dual by \cite{BuN1}. Moreover, $\dim S_{\s,\r}=\sqrt{2}\dim\r\[G:\mathrm{Stab}_{G}\(\s\)\]$. However, our hypotheses regarding $\mcC$ ensure that $\dim S_{\s,\r}=\sqrt{2}$. \end{proof} The next two lemmas deal with the cases $\dim\mcC=8$ or $16$, i.e., $m=1,2$. \begin{lemma} \label{Cor: Dimension 8 SWI Fusion Contains Ising} If $\mcC$ is a dimension $8$ strictly weakly integral premodular category, then $\mcC\cong\mcI\boxtimes\mcD$ where $\mcD$ is a $\mathbb Z_2$-cyclic premodular category. \end{lemma} \begin{proof} Under the GN-grading (either $(\mbbZ/2\mbbZ)^2$ or $\mbbZ/2\mbbZ$), $\mcC_0=\mcC_{\mathrm{int}}$ so $\dim\mcC_{\mathrm{int}}=2$ or $4$ (respectively). The first case would yield 3 objects of dimension $\sqrt{2}$, which is inconsistent with the GN-grading. Thus $\dim\mcC_{\mathrm{int}}=4$ and we conclude that $\mcC_{\mathrm{int}}=\mcC_{\pt}$ and there are two distinct objects $X$ and $Y$ of dimension $\sqrt{2}$. Observe that if \textit{either} the universal grading group $\mcU\(\mcC\)\cong(\mbbZ/2\mbbZ)^2$ or $X\cong X^*$ then $X^{\otimes 2}=\1\oplus z\in\mcC_{\ad}$ with $z\not\cong\1$ and $z^*\cong z$. Thus $X,\1,z$ form a braided Ising category $\mcI$ by \cite{DGNO1}.
Now it follows from a dimension count and \cite[Cor. 7.8]{M2} that $\mcC\cong\mcI\boxtimes\mcD$ as claimed. Thus it is enough to consider the case that $X^*\cong Y$. Then we have $X\otimes Y\cong \1\oplus z$ for some self-dual $z\in\mcC_{\ad}$ with $\theta_z^4=1$ (i.e., $z$ is a boson, a fermion or a semion). If $\theta_z=1$ then $\langle z\rangle$ is Tannakian so that we may de-equivariantize $\mcC$. However, since $(\1\oplus z)\otimes X\cong 2X$, under the de-equivariantization functor $F$ we have $F(X)=X_1\oplus X_2$ with $\dim(X_i)=\frac{\sqrt{2}}{2}$ (see \cite[Prop. 2.15]{M3}), which is not an algebraic integer. So $\theta_z\neq 1$. Now if the remaining two invertible objects satisfy $\theta_a=\theta_b$, the balancing equation implies $S_{z,b}=S_{z,a}=\frac{1}{\theta_z}\neq 1$, hence the M\"uger centers of both $\mcC$ and $\mcC_{\mathrm{int}}$ are trivial. But by \cite{M2} we would then have $\rank{\mcC_{\mathrm{int}}}=4\mid \rank{\mcC}=6$, a contradiction. Thus we must have $\theta_a\neq\theta_b$ (so $a\ncong b^*$) and $\theta_z\neq 1$. Now if either $a$ or $b$ is a boson we may de-equivariantize and then apply \lemmaref{Lemma: Equivariantizations contain Ising} to conclude that $X$ generates an Ising category, contradicting $X\ncong X^*$. So each of $z$, $a$, and $b$ is a fermion or a semion. Now applying balancing and the fact that $a\otimes X\cong X^*\cong b\otimes X$ we have $S_{z,X}=\frac{d_X}{\theta_z}\neq d_Xd_z$, $S_{a,X}=\frac{d_X}{\theta_a}\neq d_Xd_a$ and $S_{b,X}=\frac{d_X}{\theta_b}\neq d_Xd_b$, so that $\mcC$ is modular. Now if any of $a$, $b$, or $z$ is a semion we may factor $\mcC$ as a Deligne product $\Sem\boxtimes \mcE$, again contradicting $X\ncong X^*$. The only remaining possibility is that each of $a,b$ and $z$ is a fermion.
From the balancing equation we compute the $S$-matrix of $\mcC_{\mathrm{int}}$ and find that $S_{i,j}=-1$ for $\1\neq i\neq j\neq \1$, which implies $\mcC_{\mathrm{int}}$ is modular and we obtain the contradiction $4\mid 6$ as above. \end{proof} \begin{lemma} \label{Lemma: Dimension 16 SWI MTC Contains Ising} If $\mcC$ is a strictly weakly integral modular category of dimension $16$, then $\mcC$ contains an Ising subcategory. In particular, $\mcC\cong\mcI\boxtimes\mcD$ where $\mcD$ is Ising or a pointed modular category of dimension $4$. \end{lemma} \begin{proof} The squares of the dimensions of the simple objects must divide $16$ and so the possible simple object dimensions are $1$, $2$, $\sqrt{2}$, and $2\sqrt{2}$. In particular, the GN grading is $\mbbZ/2\mbbZ$ and the dimension of the integral component is $8$. There are now five possibilities for the universal grading group: $\mbbZ/2\mbbZ$, $\mbbZ/4\mbbZ$, $\(\mbbZ/2\mbbZ\)^{2}$, $\(\mbbZ/2\mbbZ\)^{3}$, or $\mbbZ/8\mbbZ$. The first case is not possible by dimension count in the integral component. In the case that the universal grading has order $8$, each component has dimension $2$.
Then the integral components are rank $2$ pointed categories and the non-integral components each have a single simple object of dimension $\sqrt{2}$. Thus the category is a modular generalized Tambara-Yamagami category and $\mcC\cong\mcI\boxtimes\mcD$, where $\mcD$ is a pointed modular category of dimension $4$, by \cite[Theorem 5.4]{Nat}. In the case that the universal grading has order $4$, a dimension count reveals that there are four invertible objects, one object of dimension $2$, and four objects of dimension $\sqrt{2}$. Moreover, $\mcC_{\ad} = \mcC_{\pt}$. Since there is only one simple object of dimension $2$, it is self-dual. Moreover, $\mcC_{\mathrm{int}}$ is a braided Tambara-Yamagami category, and therefore $G(\mcC)\cong U(\mcC) \cong \mathbb Z_2\times \mathbb Z_2$, by \cite[Theorem 1.2]{Si1}. Now the fusion subcategory generated by the adjoint component and one non-integral component has dimension $8$, so by \corref{Cor: Dimension 8 SWI Fusion Contains Ising} it contains an Ising subcategory, and the result follows. \end{proof} \begin{prop} \label{prop: prime or classified} If $\mcC$ is a strictly weakly integral modular category of dimension $8m$ where $m$ is an odd square-free integer, then one of the following is true: \begin{itemize} \item[(i)] $\mcC$ is a Deligne product of a prime modular category of dimension $8\ell$ and a pointed $\mbbZ_k$-cyclic modular category with $k$ odd, or \item[(ii)] $\mcC=\Sem\boxtimes\mcD$, where $\Sem$ is a Semion category and $\mcD$ is a strictly weakly integral modular category of dimension $4m$ (see \cite{BGNPRW1}). \end{itemize} \end{prop} \begin{proof} If $\mcC$ is prime then we are in case (i) with $k=1$. If $\mcC$ is not prime, then it must factor into a Deligne product of modular categories \cite[Theorem 4.2]{M2}. By \lemmaref{Lemma: 4 divides} we can conclude that $\mcC=\mcC_{1}\boxtimes\mcC_{2}$, where $\mcC_{1}$ is strictly weakly integral, and $\mcC_{2}$ is cyclic, pointed and non-trivial.
The result now follows from earlier work \cite{RSW,BGNPRW1}: if $\dim\mcC_2$ is odd then we are in case (i); otherwise $2\mid \dim\mcC_2$, and hence $\mcC$ contains a Semion category \cite{ACRW1}. \end{proof} We have now reduced to the case that $\mcC$ is a prime strictly weakly integral modular category of dimension $8m$ with $m$ an odd square-free integer, and we assume $\mcC$ has this form in what follows. \begin{prop} \label{semion} Suppose $\mcD$ is a non-symmetric premodular category that is Grothendieck equivalent to $\Rep\(\mbbZ_{2}\times\mbbZ_{2}\)$. If $\mcD$ contains the symmetric category $\Rep \(\mbbZ_2\)$, then $\mcD$ contains a Semion. \end{prop} \begin{proof} Denote the simple objects in $\mcD$ by $1,g,h,gh$, where $g$ generates the symmetric category $\Rep \(\mbbZ_2\)$. Then examining the balancing relations for $S_{h,h},S_{g,h},S_{gh,gh}$, and $S_{h,gh}$ we see that $\th_{h}=\th_{gh}$ and $S_{h,h}=S_{gh,gh}=S_{h,gh}=\th_{h}^{2}$. Since $\mcD$ is not symmetric, but $\langle g \rangle$ is, we can conclude that $S_{h,h}\neq 1$. However, $\mcD$ is self-dual and hence $S_{h,h}$ is a real root of unity. In particular, $S_{h,h}=-1$ and $\th_{h}=\pm i$. Consequently, $h$ generates a Semion subcategory. \end{proof} \begin{corollary}\label{Z_4} $\mcU\(\mcC\)\cong\mbbZ/4\mbbZ$. \end{corollary} \begin{proof} By \lemmaref{Lemma: pkm} we know that $8$ does not divide $\dim\mcC_{\pt}$. Moreover, by invoking the universal grading and examining the dimension equation for $\mcC_{\ad}$ modulo 4 we see that there exists an odd integer $k$ such that $\dim\mcC_{\pt}=4k$. If $k>1$, there exists an odd prime $p$ dividing $k$ and a $p$-dimensional subcategory $\mcD\subset\mcC_{\pt}$. Since $\mcC$ is prime we know that $\mcD$ is symmetric and Tannakian. De-equivariantizing $\mcC$ by $\mbbZ_{p}$, the trivial component of the resulting $\mbbZ_{p}$-graded category has dimension $8m/p^{2}$ \cite[Proposition 4.56(i)]{DGNO1}. This is not possible as $p$ is odd and $m$ is square-free.
Thus $\dim\mcC_{\pt}=4$. By dimension count we can deduce that $\dim\(\mcC_{\ad}\)_{\pt}=2$. Since $C_{\mcC}\(\mcC_{\pt}\)=\mcC_{\ad}$ we can deduce that $\mcC_{\pt}$ is not symmetric. However, $\mcC$ is prime, so $\Vec\neq C_{\mcC_{\ad}}\(\mcC_{\ad}\)\subset\(\mcC_{\ad}\)_{\pt}$ and hence $C_{\mcC_{\ad}}\(\mcC_{\ad}\)=\(\mcC_{\ad}\)_{\pt}$. Let $g$ be a generator of this M\"{u}ger center and $X$ a 2-dimensional object in $\mcC_{\ad}$; then by dimension count $g$ is a subobject of $X\otimes X^{*}$. In particular, $g$ fixes $X$ and thus $\th_{g}=1$ by \cite[Lemma 5.4]{M5}. By applying Proposition \ref{semion} to $\mcD = \mcC_{\pt}$ and invoking the primality of $\mcC$ we have that $\mcC_{\pt}\ncong \Rep\(\mathbb Z_2\times\mathbb Z_2\)$, so that $G\(\mcC\)\cong\mbbZ/4\mbbZ$ and hence, by modularity, $\mcU\(\mcC\)\cong\mbbZ/4\mbbZ$. \end{proof} Henceforth, let $g$ be a generator of $\mcC_{\pt}$, and $\mcC_{g^{k}}$ the component of the universal grading of $\mcC$ corresponding to the simple $g^{k}\in\mcC_{\pt}$. Furthermore, note that $\mcC_{\1}$ and $\mcC_{g^{2}}$ each contain exactly two invertible objects. Finally, we will denote by $X_{1},X_{2},\ldots, X_{n}$ the simple 2-dimensional objects in $\mcC_{\ad}$ and by $Y_{i}=g\otimes X_{i}$ the simple 2-dimensional objects in $\mcC_{g^{2}}$. \begin{rmk} \label{rmk: Tannakian} Under this notation $\(\mcC_{\ad}\)_{\pt}=\langle g^{2}\rangle$ is a 2-dimensional Tannakian category. \end{rmk} \begin{lemma} \label{Lemma: self-dual} All 2-dimensional simple objects in $\mcC$ are self-dual. \end{lemma} \begin{proof} Recall that under our notation, $X_{i}$ are the simple 2-dimensional objects in $\mcC_{\ad}$, and $Y_{i}=g X_{i}$ are the 2-dimensional simple objects in $\mcC_{g^{2}}$. Then $Y_{i}^{*}=g^{3}X_{i}^{*}=g\(g^{2}X_{i}^{*}\)=gX_{i}^{*}$. So it suffices to show that the $X_{i}$ are self-dual. To this end, observe that $X_{i}\otimes X_{i}^{*}=\1\oplus g^{2}\oplus \tilde{X_{i}}$ for some 2-dimensional simple $\tilde{X_{i}}$.
In particular, $\1$ and $g^{2}$ are simples in $\(\mcC_{\ad}\)_{\ad}$ and all 2-dimensionals in $\(\mcC_{\ad}\)_{\ad}$ are self-dual. Now suppose $\(\mcC_{\ad}\)_{\ad}\neq\mcC_{\ad}$. Then $\mcC_{\ad}$ has a nontrivial universal grading whose trivial component is $\(\mcC_{\ad}\)_{\ad}$ and has dimension $2+4k$ for some integer $k$. The remaining components must also have this dimension, but consist entirely of 2-dimensional objects. This is not possible and so $\(\mcC_{\ad}\)_{\ad}=\mcC_{\ad}$. \end{proof} \begin{lemma} \label{Lemma: Cyclically Generated} $\mcC_{\mathrm{int}}$ is Grothendieck equivalent to $\Rep\(\mbbZ_{m}\rtimes\mbbZ_{4}\)$ and $\mcC_{\ad}$ is Grothendieck equivalent to $\Rep\(D_{2m}\)$. \end{lemma} \begin{proof} Since $\mcC_{\ad}$ is a subcategory of $\mcC_{\mathrm{int}}$, we have $C_{\mcC_{\mathrm{int}}}\paren{\mcC_{\mathrm{int}}}\subset C_{\mcC_{\mathrm{int}}}\paren{\mcC_{\ad}}=\Rep\mbbZ_{2}$. Given that $\mcC$ is prime, $\mcC_{\mathrm{int}}$ is not modular and so its M\"{u}ger center is $C_{\mcC_{\mathrm{int}}}\paren{\mcC_{\mathrm{int}}}\cong\Rep\mbbZ_{2}$, which is Tannakian. Thus $\paren{\mcC_{\mathrm{int}}}_{\mbbZ_{2}}$ is pointed, modular, and of dimension $2m$. In particular, $\paren{\mcC_{\mathrm{int}}}_{\mbbZ_{2}}$ is cyclically generated, \textit{i.e.}, the category is tensor generated by a single object. So by \cite[Proposition 4.30(i)]{DGNO1} $\mcC_{\mathrm{int}}$ must be cyclically generated. The remainder of the statement follows immediately from \cite[Theorem 4.2 and Remark 4.4]{NR1} and \lemmaref{Lemma: self-dual}. \end{proof} To completely determine the Grothendieck class of $\mcC$ we must determine $X_{i}\otimes V_{k}$ and $g\otimes V_{k}$, where the $V_{k}$ are the four non-integral objects. \begin{cor} \label{Cor: non-integral objects} $\mcC$ has four non-integral objects: $V$, $gV$, $g^{2}V$, $g^{3}V$. Moreover, $V^{*}=g^{3}V$. In particular, the GN-grading of $\mcC$ is $\mbbZ_{2}$ and all non-integral objects have dimension $\sqrt{m}$.
\end{cor} \begin{proof} We know that the GN-grading is given by a nontrivial abelian 2-group and hence is $\mbbZ/2\mbbZ$, by \lemmaref{Lemma: pkm} and \corref{Z_4}. In particular, since $m$ is square-free, there is a square-free integer $x$ such that the non-integral objects have dimensions $\sqrt{x}$ and $2\sqrt{x}$. A straightforward application of the fusion symmetries and a dimension calculation reveals that $g^{2}$ fixes all simples of dimension $2\sqrt{x}$. Using this fact, we can appeal to the balancing equation and find that $S_{g^{2},g^{2}X} = 2\sqrt{x}$ and $S_{g^{2},g^{2}Y}=\frac{\th_{Y}}{\th_{g^{2}Y}}\sqrt{x}$ for any simples $X$ and $Y$ of dimensions $2\sqrt{x}$ and $\sqrt{x}$, respectively. However, $g^{2}$ is self-dual and so $\th_{Y}/\th_{g^{2}Y}=\pm1$. Now observe that the orthogonality of the $g^{2}$ and $\1$ columns of the $S$-matrix can only be satisfied if $\th_{Y}=-\th_{g^{2}Y}$ and there are no objects of dimension $2\sqrt{x}$. In particular, $g^2$ moves all simples of dimension $\sqrt{x}$. Next, let $V,W\in\mcC_{g}$ be simples of dimension $\sqrt{x}$. Moreover, without loss of generality we may assume that $g$ is a subobject of $V\otimes V$. Then $\1$ is a subobject of $\(g^{3}V\)\otimes V$ and hence $V^{*} = g^{3}V$. Next note that by the universal grading and the parity of $x$ we can deduce that either $g$ or $g^{3}$ is a subobject of $V\otimes W$ (but not both). In the former case $\1$ is a subobject of $V\otimes g^{3}W$ and in the latter case $\1$ is a subobject of $V\otimes gW$. Thus $g^{3}W=V^{*}=g^{3}V$ or $gW=V^{*}=g^{3}V$, according to the invertible subobject appearing in $V\otimes W$. Since $W$ was arbitrary we can conclude that there are exactly four non-integral objects: $V,gV,g^{2}V$, and $g^{3}V$. Moreover, since $g^{2}\in\mcC_{\ad}$, the objects $V$ and $g^{2}V$ lie in the same component $\mcC_{g}$ of the universal grading, which has dimension $8m/4=2m$; hence $2x=2m$ and each of these objects has dimension $\sqrt{m}$.
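As a consistency check on the dimension count: the four non-integral objects contribute $4\paren{\sqrt{m}}^{2}=4m$ to $\dim\mcC=8m$, so that \begin{equation*} \dim\mcC_{\mathrm{int}}=8m-4m=4m=\abs{\mbbZ_{m}\rtimes\mbbZ_{4}}, \end{equation*} in agreement with \lemmaref{Lemma: Cyclically Generated}.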
\end{proof} \begin{lemma} \label{Lemma: 2 with sqrtm} $X_{i}\otimes V\cong V\oplus g^{2}V$. \end{lemma} \begin{proof} By dimension count and the grading we have $X_{i}\otimes V=V\oplus g^{2}V$, $2V$, or $2g^{2}V$. The latter two are not possible as $g^{2}$ fixes $X_{i}$ and hence must fix $X_{i}\otimes V$. \end{proof} \begin{theorem} \label{Theorem: p3m main result} If $\mcC$ is a non-pointed modular category of dimension $p^{3}m$ where $p$ is a prime and $m$ is a square-free integer that is coprime to $p$, then $p=2$ and one of the following is true: \begin{itemize} \item[(i)] $\mcC$ is a Deligne product of an even metaplectic modular category of dimension $8\ell$ and a pointed $\mbbZ_k$-cyclic modular category with $k$ odd, or \item[(ii)] $\mcC$ is the Deligne product of a Semion modular category with a modular category of dimension $4m$ (see \cite{BGNPRW1}). \end{itemize} \end{theorem} \begin{proof} By Lemmas \ref{Lemma: 4 divides} and \ref{Lemma: integral p3m}, \propref{prop: prime or classified}, and \corref{Cor: Dimension 8 SWI Fusion Contains Ising}, it suffices to consider $p=2$, $m$ odd and square-free, and $\mcC$ prime. In this case we may apply Lemmas \ref{Lemma: 2 with sqrtm} and \ref{Lemma: Cyclically Generated} together with \corref{Cor: non-integral objects} to conclude that $\mcC$ is metaplectic. \end{proof} \printbibliography \end{document}
\section{Introduction} The X-ray emission of the order of $10^{38}$ erg s$^{-1}$ from the Galactic ridge (GR) in the energy range above 1 keV was detected almost forty years ago \citep{bleach}, but its origin has remained unknown ever since. Two possibilities are debated: either this emission is really diffuse \citep{ebi1, ebi2}, or it is due to the cumulative effect of faint X-ray sources \citep{rev1}. In the case of a diffuse origin, serious energetic problems arise if this emission is produced by non-thermal bremsstrahlung of high energy electrons, since the required power of the sources of charged particles in the disk should exceed $10^{42}$ erg s$^{-1}$, i.e., higher than the power of supernovae in the Galaxy \citep{ski1,val,dog1}. However, this energy problem can be solved if the particles are accelerated in situ from the background plasma \citep{dog2,dog3}. An alternative explanation of the emission from the GR was developed by \citet{rev1}, who recently presented essential arguments in favor of the idea that the Galactic ridge X-ray emission (GRXE) is due to the cumulative emission of faint discrete X-ray sources. Though this interpretation is not completely proven at present, it appears to be plausible. \citet{rev1} showed that the $3-20$~keV map of the GRXE, as well as the 6.7 keV iron line distribution in the disk, closely follows the near-infrared (3.5 $\mu$m) brightness distribution which traces the Galactic stellar mass distribution. This proportionality is the same in the disk and in the bulge. From recent Suzaku observations it was concluded that the soft X-ray disk emission in the range 0.4 to 1 keV from latitudes $<\timeform{2D}$ also originates from faint dM stars \citep{masui}. A very decisive analysis of the origin of the Galactic ridge emission was provided recently by \citet{rev09}. From Chandra data they showed that most ($\sim 88$\%) of the ridge emission is clearly explained by dim and numerous point sources.
Therefore, at least for the ridge emission, accreting white dwarfs and active coronal binaries are considered to be the main emitters. The situation in the Galactic center (hereafter GC) may be quite different. The GC region has been observed by X-ray experiments flown over almost 30 years (see \cite{watson, kawai}). Later, {\it Ginga} made remarkable measurements of the spectra in this region \citep{koya89,yama90}. Though, as in the case of the GRXE, observations show a significant contribution of discrete sources with luminosity $L_{2-10~\rm{keV}}>10^{31}$ erg s$^{-1}$, which contribute from 20\% to 40\% of the total flux \citep{muno,rev2}, the origin of the rest of the flux is still unknown, and essential distinctions have been found between the flux characteristics of the GRXE and the GC. First, the GC emission is seen as a clearly separated spherical region around Sgr-A$^\ast$ whose radius is about $100 - 200$~pc \citep{muno,koya2}. The plasma temperature there is higher than in other parts of the Galactic disk. Second, the ratios of the 6.9 to 6.7 keV lines (which trace the plasma temperature) and of the 6.4 to 6.7 keV lines are higher in the GC than in the GR, while in the GR these ratios are almost constant along the plane \citep{yama}. Third, unlike the above-mentioned correlation of \citet{rev1}, the X-ray source distribution derived from the {\it Chandra} deep exposure of the $<0.3^\circ$-radius central region does not show any correlation with the distribution of the 6.7 keV line \citep{koya2}. Therefore, they concluded that the integrated flux of point sources contributes a rather small fraction of the total flux of GC X-rays and that the major part of the emission from there is diffuse. This leaves open the possibility that the nuclear region of the Galaxy (within $\sim 10^\prime - 1^\circ$ around Sgr A$^\ast$) may be somewhat different from the rest of the Galaxy, and, therefore, that the radiation processes there have an origin which differs from that in other parts of the Galactic disk.
This region is known, indeed, to be peculiar in many respects: \begin{itemize} \item The plasma temperature in the GC region is higher ($\sim 10$ keV) than in other parts of the Galactic disk. Such a high plasma temperature is surprising, since the depth of the gravitational potential in the GC region is no more than several hundred eV, which is too small to bind the gas. The plasma cannot be gravitationally confined, and a very large amount of energy is required to maintain the plasma outflow. This energy supply cannot be provided by SN explosions, and other, more powerful, sources of energy are required to support the energy balance there \citep{sun1,koya1,muno};\\ \item The GC region is a source of annihilation emission whose origin is still enigmatic. It may be explained either by the emission of point sources like novae and supernovae (see e.g. \cite{derm}) or low-mass X-ray binaries \citep{weid}, or by dark matter annihilation \citep{sizun};\\ \item Intense emission in X-ray iron lines is observed from the Galactic center, which is often explained by the gas there having been exposed in the past to sources of intense X-ray emission, e.g., from a supernova or from the Galactic nucleus \citep{sun1, koya1};\\ \item The temperature of molecular hydrogen ($H_2$) in the GC is unusually high, $T= 100- 200$~K. With the exception of Sgr B2, no embedded sources are observed inside the clouds. Therefore, a global heating mechanism is needed to explain the high gas temperature, e.g., heating by cosmic rays (see e.g. \citet{yuz});\\ \item A flux of VHE gamma-rays of unknown origin was discovered by HESS in the direction of the GC, which is supposed to be due to processes near the central black hole \citep{aha}. \end{itemize} As one can see, each of these observational phenomena can be explained separately by completely different physical processes, unrelated to each other.
We assume, however, that these phenomena have a common origin, namely, that they are consequences of star accretion onto the central supermassive black hole. As our estimates show, the energy produced by accretion in the form of relativistic and subrelativistic particles is so high that it is unattainable by any other source of energy in the Galaxy. Another important characteristic of this process is that the energy erupted in the GC is retained there for a rather long time because of the relatively slow dissipation of the primary and secondary charged particles generated by the accretion processes. We developed this model in a series of papers \citep{cheng1, cheng2,dog08,dog09}, in which we interpreted the origin of the annihilation emission and estimated the flux of gamma-ray de-excitation lines from the GC. This publication is a continuation of these investigations. Below we present a model of thermal and non-thermal hard X-ray emission from the GC which is supposed to be due to the specific processes of accretion onto the central black hole. We restrict our analysis to the integral characteristics of this emission; its spatial variations are beyond the scope of this paper and will be presented in a following publication. The goal of this paper is to demonstrate the possibility of X-ray production in the GC (thermal and non-thermal) by subrelativistic protons at the level observed by Suzaku. \section{Proton Injection and Plasma Heating. The Origin of Thermal X-ray Emission from GC} The processes of injection of subrelativistic protons by star accretion were described in detail in \citet{dog09}. Here we recall only the main parameters of the process. Every star captured by a supermassive black hole releases a huge energy, several orders of magnitude higher than that produced by a supernova explosion. The average frequency of capture of one-solar-mass stars by supermassive black holes is about $(1-10)\times 10^{-5}$year$^{-1}$ (see \cite{don} and \cite{syer}).
As was shown by \citet{ayal}, once the star passes the pericenter it is tidally disrupted into a very long and dilute gas stream. Approximately $50-75$\% of the stellar matter is not accreted but instead becomes unbound. This unbound mass receives additional angular momentum and escapes with velocities higher than the orbital speed, which corresponds to an energy per baryon higher than \begin{equation} E_{esc} \sim \frac{2GM_{bh}m_{\rm p}}{R_T} \sim 5\times 10^7 M_6^{2/3}m_*^{1/3}r_*^{-1} \mbox{~eV}\,. \label{esc} \end{equation} Since the mass of the black hole located in the center of our Galaxy is about $(4.31 \pm 0.06)\times10^6$~M$_{\odot}$ (see \cite{guss}), it follows that the average energy of the escaping particles may be more than $50-100$~MeV~nucleon$^{-1}$ when a one-solar-mass star is captured (see \cite{dog09} for details). Here $R_T$ is the capture radius of the black hole, given by \begin{equation} R_T \approx 1.4\times 10^{13} M_6^{1/3}m_*^{-1/3}r_*\,\mbox{~cm}\,, \label{rt} \end{equation} where $m_*=M_*/M_\odot$, $M_6=M_{bh}/10^6M_\odot$, $r_*=R_*/R_\odot$, $M_*$ and $R_*$ are the star mass and radius, $M_{bh}$ is the mass of the black hole, and $M_\odot$ and $R_\odot$ are the solar mass and radius. For these parameters, after every capture event about $10^{57}$ protons with energies $\sim 100$ MeV escape into the surrounding medium, whose temperature is $\sim 6.5$ keV; a uniform target gas distribution is assumed \citep{koya1,muno,koya2}. The rate of their energy losses is \begin{equation} \left(\frac{dE}{dt}\right)_i=-\frac{ 4\pi ne^4\ln\Lambda_1}{ m\mathrm{v}_{\rm{p}}}\,, \end{equation} where $\mathrm{v}_{\rm{p}}$ is the proton velocity, and $\ln\Lambda_1$ is the Coulomb logarithm. The lifetime of the subrelativistic protons, \begin{equation} \tau_i=\int\limits_E\frac{dE}{(dE/dt)_i}\,, \end{equation} is about $10^{14}$ s.
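As an illustrative numerical check (not part of the original analysis), the scaling relations (\ref{esc}) and (\ref{rt}) can be evaluated for the quoted black-hole mass; the script below is a minimal sketch using the scaling formulas exactly as written in the text.

```python
# Sketch: evaluate the tidal-capture radius R_T (eq. rt) and the escape
# energy per baryon E_esc (eq. esc) for a one-solar-mass star captured
# by the Galactic-center black hole of (4.31 +- 0.06)e6 Msun.
# Scaling prefactors are taken verbatim from the text.

def tidal_radius_cm(M6, m_star=1.0, r_star=1.0):
    """R_T ~ 1.4e13 * M6^(1/3) * m_star^(-1/3) * r_star  [cm]."""
    return 1.4e13 * M6**(1.0 / 3.0) * m_star**(-1.0 / 3.0) * r_star

def escape_energy_eV(M6, m_star=1.0, r_star=1.0):
    """E_esc ~ 5e7 * M6^(2/3) * m_star^(1/3) / r_star  [eV per baryon]."""
    return 5e7 * M6**(2.0 / 3.0) * m_star**(1.0 / 3.0) / r_star

M6 = 4.31                      # black-hole mass in units of 1e6 Msun
R_T = tidal_radius_cm(M6)      # ~2.3e13 cm
E_esc = escape_energy_eV(M6)   # ~1.3e8 eV, i.e. above 100 MeV/nucleon
print(f"R_T = {R_T:.2e} cm, E_esc = {E_esc:.2e} eV")
```

For these values the escape energy indeed exceeds $100$~MeV~nucleon$^{-1}$, consistent with the range quoted above.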
Since the proton lifetime is much longer than the characteristic time between star captures, the process of proton injection can be considered quasi-stationary, with a proton injection rate between $Q=10^{45}$ and $10^{46}$~protons s$^{-1}$ (or an energy input $\dot{W}\sim 10^{42}$ erg s$^{-1}$). The time-dependent spectrum of subrelativistic protons, $N({\bf r},E,t)$, can be calculated from the equation \begin{equation}\label{pr_state} \frac{\partial N}{\partial t} - \nabla D\nabla N + \frac{\partial}{\partial E}\left( b(E) N\right) = Q(E,{\bf r},t)\,, \end{equation} where $D$ is the spatial diffusion coefficient of the cosmic ray protons, whose average value in the GC was taken to be $D\simeq 10^{26}$ cm$^2$s$^{-1}$ in order to reproduce the Suzaku data, $dE/dt \equiv b(E)$ is the rate of proton energy losses, and $Q(E,{\bf r},t)$ is the rate of proton production by accretion, which can be presented in the form \begin{equation} Q(E, {\bf r}, t) = \sum \limits_{k=0}Q_k(E)\delta(t - t_k)\delta({\bf r})\,, \end{equation} where $t_k$ is the injection time. The average time between star captures in the Galaxy was taken to be $T\simeq 10^4$ years, so that $t_k=k\times T$. The energy distribution $Q_k(E)$ of the erupted nuclei is taken as a simple Gaussian (in order to avoid a simple delta-function injection): \begin{equation}\label{Qesc} Q_k(E)=\frac{N}{\sigma\sqrt{2\pi}} \exp\left[-\,\frac{(E-E_{esc})^2}{2\sigma^2}\right], \end{equation} where we take the width $\sigma=0.03E_{esc}$ with $E_{esc}\simeq 100$ MeV, and $N$ is the total number of particles ejected by one stellar capture.
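The quoted quasi-stationary injection rate can be cross-checked against the per-capture yield; the following sketch (our own consistency check, not from the paper) combines $\sim 10^{57}$ protons of $\sim 100$ MeV per capture with one capture every $T\simeq 10^4$ years.

```python
# Sketch: time-averaged injection rate and power implied by ~1e57
# protons of ~100 MeV released per capture, one capture per ~1e4 yr.
# The result should fall in the quoted ranges Q ~ 1e45-1e46 s^-1 and
# W ~ 1e42 erg s^-1.

YEAR_S = 3.156e7          # seconds per year
MEV_ERG = 1.602e-6        # erg per MeV

N_per_capture = 1e57      # protons per disrupted star
T_capture = 1e4 * YEAR_S  # mean interval between captures, s
E_proton = 100.0          # MeV

Q = N_per_capture / T_capture   # ~3e45 protons/s
W = Q * E_proton * MEV_ERG      # ~5e41 erg/s, of order 1e42
print(f"Q ~ {Q:.1e} protons/s, W ~ {W:.1e} erg/s")
```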
In the nonrelativistic case, when the rate of energy losses can be approximated as $(dE/dt)_i\simeq a/\sqrt{E}$, the solution of (\ref{pr_state}) can be presented as \begin{equation} f({\bf r},E,t)=\sum\limits_{k=0}\frac{N_k\sqrt{E}}{\sigma\sqrt{2\pi}Y_k^{1/3}} \frac{\exp\left[-\frac{\left(E_{esc}-Y_k^{2/3}\right)^2}{2\sigma^2}-\frac{{\bf r}^2}{4D(t-t_k)}\right]}{ \left(4\pi D(t-t_k)\right)^{3/2}}\,, \label{sol1} \end{equation} where \begin{equation} Y_k(t,E)=\left[\frac{3a}{2}(t-t_k)+E^{3/2}\right]\,. \end{equation} Below we use the following model parameters for the calculations: $Q=2\times 10^{45}$~protons s$^{-1}$ and $E_{esc}=100$ MeV. The plasma temperature in the GC derived from Suzaku data equals about 6.5 keV, and the average plasma density there is about 0.2 cm$^{-3}$. As an example, we show in figure \ref{two_dist} the spatial and energy distributions of the subrelativistic protons near the GC. \begin{figure}[h] \begin{center} \FigureFile(80mm,80mm){np_2dstr_r.eps} \FigureFile(80mm,80mm){np_2dstr_E.eps} \end{center} \caption{ (a) Spatial distribution of protons. (b) Energy spectrum of protons. }\label{two_dist} \end{figure} The energy released in the Galactic center in the form of subrelativistic protons may effectively heat the plasma there. Recent Suzaku observations of \citet{koya2} found clear evidence for a hot plasma in the GC with a diameter of about 20 arcminutes (i.e., $\sim 50-60$ pc). The total X-ray flux from this region in the range 2 to 10 keV is $F_{\rm{X}}\sim 2\times 10^{36}$ erg s$^{-1}$, and the total energy of the plasma in this region is about $3\times 10^{52}$ erg. The temperature derived from the continuum spectrum is about 15 keV, but such a high value may be the result of a contribution from a non-thermal component of X-ray emission. Indeed, from the 6.9/6.7 keV line ratio \citet{koya2} concluded that the spectrum in the $5-11.5$~keV range is naturally explained by a 6.5~keV-temperature plasma in collisional ionization equilibrium.
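The loss coefficient $a$ entering $Y_k$ follows from the Coulomb loss rate above with $\mathrm{v}_{\rm p}=\sqrt{2E/m_{\rm p}}$; the sketch below (our own order-of-magnitude check, with an assumed Coulomb logarithm $\ln\Lambda_1=20$) evaluates it in cgs units and recovers an ionization lifetime of a few times $10^{14}$ s, of the same order as the $\sim 10^{14}$ s quoted above.

```python
import math

# Sketch: ionization lifetime of a 100 MeV proton in the GC plasma.
# With (dE/dt)_i = a / sqrt(E), the lifetime is tau_i = (2/3) E^(3/2)/a.
# The Coulomb logarithm ln(Lambda_1) = 20 is our assumption.

E_CHARGE = 4.803e-10   # esu
M_E = 9.109e-28        # electron mass, g
M_P = 1.673e-24        # proton mass, g
MEV_ERG = 1.602e-6     # erg per MeV

n = 0.2                # plasma density, cm^-3 (quoted in the text)
lnL = 20.0             # assumed Coulomb logarithm

# dE/dt = 4 pi n e^4 lnL / (m_e v_p) with v_p = sqrt(2E/m_p)
# gives a = 4 pi n e^4 lnL sqrt(m_p/2) / m_e
a = 4.0 * math.pi * n * E_CHARGE**4 * lnL / M_E * math.sqrt(M_P / 2.0)

E = 100.0 * MEV_ERG
tau_i = (2.0 / 3.0) * E**1.5 / a   # a few times 1e14 s
print(f"tau_i ~ {tau_i:.1e} s")
```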
The origin of this high-temperature plasma is unclear. Powerful sources of heating are required; however, there are no evident sources of such energy in the Galactic central region. \citet{sun1}, \citet{koya1} and \citet{muno} concluded that the plasma in the central region of $1^\circ- 2^\circ$ radius can be heated up to the observed temperature $T\sim 6- 10$~keV if the energy release there is about $\dot{W}\sim10^{41}- 10^{42}$erg s$^{-1}$, which cannot be provided by supernova explosions. Just this power can be supplied by subrelativistic protons, which lose their energy mainly through ionization and thus heat the background plasma. Undoubtedly, this very crude estimate of the temperature is severely limited. In order to describe the temperature and gas distribution in the GC when the energy is sporadically released by star captures, more sophisticated hydrodynamic and MHD calculations are required. In this respect we mention the recent results of \citet{breit}, who simulated with 3D numerical calculations the dynamical structure of the interstellar medium in a star-formation region with respect to the volume and mass fractions of the different ISM "phases". Due to the energy release from stellar explosions the medium is strongly nonuniform and turbulent: compressed regions of cold and dense filaments co-exist with a hot and low-density plasma. This is reminiscent of the filamentary and non-uniform structure around the GC. \section{6.7 and 6.9 keV Iron Line Emission from GC} As follows from the conclusions of the previous section, subrelativistic protons effectively heat the background gas. We examine below whether they also generate an excess of iron line emission in the X-ray flux from the GC, as was expected from in-situ accelerated non-relativistic electrons in the GRXE spectrum \citep{dog2,masai}. As follows from \citet{dog4}, subrelativistic protons effectively produce shell vacancies that may be observed in the line X-ray spectrum of the Galactic center.
Recent Suzaku observations with high energy resolution clearly resolved the iron lines in the spectrum of the hot plasma into the individual peaks of FeI K$\alpha$ (6.4 keV), FeXXV K$\alpha$ (6.7~keV), FeXXVI Ly$\alpha$ (6.9~keV), FeXXV K$\beta$ (7.8~keV), FeXXVI Ly$\beta$ + FeXXV K$\gamma$ (around $8.2-8.3$~keV), and FeXXVI Ly$\gamma$ (8.7 keV) \citep{koya2,ebi2}. The FeI K$\alpha$ emission is associated with molecular clouds and therefore requires a special analysis, which will be presented in another paper. The 6.7 keV and 6.9 keV lines provide important information about the plasma parameters, since their intensities are proportional to the numbers of the corresponding iron ions. The He-like K$\alpha$ emission consists of emission lines of different transitions, and at CCD energy resolution it might even contain emission from Li-like ions. The photon-weighted centroid energy of the emission depends on the ionization process (e.g., collisional or photoionization) and on the emission process (e.g., thermal or non-thermal emission like charge exchange). The centroid energy derived from observation is consistent with the GC plasma being in collisional ionization equilibrium, as was claimed by \citet{koya2}. The 6.9/6.7 line ratio is proportional to the relative abundances of the FeXXVI and FeXXV iron ions, which are functions of the plasma temperature. From the best-determined flux ratio of the FeXXVI Ly$\alpha$ and FeXXV K$\alpha$ lines, equal to $0.3-0.38$ in the GC region, \citet{koya2} concluded that the electron temperature is $\mathrm{kT}_{\mathrm{e}}=6.4\pm 0.2$ keV. \citet{yama} showed that this ratio is almost constant in the Galactic disk but increases by almost a factor of two in the direction of the GC, which indicates higher temperatures in the Galactic center than in the disk. In principle, a flux of subrelativistic protons may provide additional vacancies in the iron ions, which would distort the temperature estimate obtained from the line ratio.
In order to estimate the effect of the nonthermal protons, one should accurately calculate the 6.7 keV and 6.9 keV line intensities provided by the thermal plasma and by the nonthermal particles. A correct analysis of the 6.7/6.9 line ratio was provided by \citet{prokh} for the case of galaxy clusters, who estimated the mimicked temperature excess due to nonthermal particles. Below we present the results of a similar analysis for the GC. Taking into account both electron impact excitation and radiative recombination, the line flux ratio FeXXVI Ly$\alpha$/FeXXV K$\alpha$ is given by \begin{equation} R=\frac {\xi_{\mathrm{FeXXVI}}Q^{1-2}_{\mathrm{FeXXVI}} + \xi_{\mathrm{FeXXVII}}\alpha^{1-2}_{\mathrm{FeXXVI}}} {\xi_{\mathrm{FeXXV}}Q^{1-2}_{\mathrm{FeXXV}} + \xi_{\mathrm{FeXXVI}}\alpha^{1-2}_{\mathrm{FeXXV}}}\,, \end{equation} where the coefficients $Q^{1-2}_{\mathrm{FeXXV}}$ and $Q^{1-2}_{\mathrm{FeXXVI}}$ describe the processes of impact excitation by thermal electrons and subrelativistic protons for FeXXV and FeXXVI respectively, and $\alpha^{1-2}_{ \mathrm{FeXXV}}$ and $\alpha^{1-2}_{ \mathrm{FeXXVI}}$ are the rate coefficients for the contribution from radiative recombination to the spectral lines of FeXXV (He-like triplet) and FeXXVI (H-like doublet) respectively. The rate coefficients are obtained by averaging the product of the cross section and the particle velocity over the particle distribution function. The ionic fractions $\xi_{\mathrm{FeXXV}}$, $\xi_{\mathrm{FeXXVI}}$ and $\xi_{\mathrm{FeXXVII}}$ are calculated for the case of a thermal plasma with a nonthermal particle population. For the corresponding references on the cross sections of ionization, recombination, impact excitation, etc., see \citet{prokh}. In figure~\ref{T(n)} we present the temperature of the plasma derived from the observed 6.7/6.9 ratio, which was fixed at the value 0.33 observed for the central region. The contribution from subrelativistic protons was calculated for the spectrum derived from equation (\ref{pr_state}).
We calculated the real plasma temperature for plasma densities in the range from 0.1 to 0.4 cm$^{-3}$. \begin{figure}[h] \begin{center} \FigureFile(110mm,80mm){T(n)_NEW.eps} \end{center} \caption{Real temperature of the plasma derived from the 6.7/6.9 ratio as a function of the plasma density, when ionization is also provided by subrelativistic protons.}\label{T(n)} \end{figure} We see that the contribution from the subrelativistic protons is negligible ($\leq 10$\%) and the calculated temperature is close to the one derived by \citet{koya2} for the pure thermal case. The reason is evident: the energies of these lines are close to the plasma temperature, and therefore the lines are mainly excited by thermal particles. We found that the FeXXV K$\alpha$ line intensity in the case with the nonthermal component differs from the pure thermal case by less than 1\%, while the intensity variations of the FeXXVI Ly$\alpha$ line are more significant, at the level of several percent. Just these variations of the FeXXVI Ly$\alpha$ line lead to a higher "effective temperature" ($\sim 6.5$ keV) when the contribution of nonthermal particles is included. Our calculations show that an "X-ray signal" from subrelativistic protons can be found at energies higher than 10 keV, where the influence of the thermal emission is insignificant. \section{Nonthermal Emission of Subrelativistic Protons from the GC} The Suzaku data of \citet{koya2} suggested that the continuum flux from the GC contains an additional hard component. Until recently, any direct confirmation of nonthermal emission at energies above 10 keV was unavailable. \citet{yuasa} performed an analysis of Suzaku data and found prominent hard X-ray emission in the range from 14 to 40~keV, whose spectrum is a power law with a spectral index ranging from 1.8 to 2.5.
The total luminosity of the power-law component from the central region ($\mid l \mid<2^\circ,~\mid b \mid<0.5^\circ$) is $(4\pm 0.4)\times 10^{36}$ erg s$^{-1}$. The spatial distribution of the hard X-rays correlates with the distribution of the hot plasma. This spectrum can be represented by an exponentially cutoff power-law model, \begin{equation} f(E)=K(E/1~\rm{keV})^{-\Gamma}\exp(-E/E_c)\,, \end{equation} with $\Gamma$ and $E_c$ varying from region to region over $1.2 - 2.2$ and $19 - 50$~keV, respectively. Since the Hard X-ray Detector (HXD) onboard Suzaku is not an imaging detector, \citet{yuasa} obtained the GC diffuse hard X-ray spectra by subtracting the contaminating fluxes of known bright X-ray point sources in its field of view ($34'\times34'$ FWHM). They considered contributions from those point sources with fluxes higher than $1.5\times10^{-11}~{\rm ergs}\ {\rm cm}^{-2}\ {\rm s}^{-1}$ (or $10^{-3}$ of the flux from the Crab Nebula) in the $14-40$~keV band, and subtracted them from the observed spectra under the assumption that their spectral shapes and fluxes were invariable during the observations. The resulting energy spectrum of the GC hard X-ray emission observed by the HXD after subtracting the bright-point-source contamination is shown in figure \ref{X-ray} by crosses. The residual spectrum still contains contributions from dimmer point sources ($<1.5\times10^{-11}~{\rm ergs}\ {\rm cm}^{-2}\ {\rm s}^{-1}$ in the $14-40$~keV band). Therefore, before comparing the spectral shape and the luminosity of our model with the observed ones, we estimated the remaining point-source fluxes in the HXD spectra. By integrating a luminosity function of X-ray point sources in the GC region \footnote{The "field" curve shown in figure 13 of \cite{muno08} was used. In a small region around Sgr A*, a rather long exposure enabled high sensitivity, and the luminosity function was measured down to dimmer fluxes.
However we should use "field" curve instead that of around Sgr A* because the HXD observes wider region, i.e. $\sim$deg$\times$deg scale around the Sgr A*.} over the luminosity range from $2\times10^{32}~{\rm ergs}\ {\rm s}^{-1}$ to $1\times10^{34}~{\rm ergs}\ {\rm s}^{-1}$ (in the $0.5-8$~keV band; a distance of 8~kpc was assumed), we obtain a dim-point-source contaminating flux of $\sim1.1\times10^{-15}~{\rm ergs}\ {\rm cm}^{-2}\ {\rm s}^{-1}$~arcmin$^{-2}$. In the estimation, we took into account the effective solid angle of the HXD/PIN of 1220 arcmin$^2$, and assumed that a spectral photon index of the faint point sources is 1.5. The value is on the order of 10\% of the HXD/PIN residual fluxes. If we further integrate the luminosity function, extrapolating the measured one with the same slope index down to $2\times10^{31}~{\rm ergs}\ {\rm s}^{-1}$ and $2\times10^{30}~{\rm ergs}\ {\rm s}^{-1}$, the contaminating flux increase by 2.5 and 5.5 times ($\sim25\%$ and $\sim50\%$ of the HXD flux), respectively. Since a precise qualitative treatment of the contaminating point source flux is not a trivial procedure, we do not deal them further, and compare our model spectra directly with that of the HXD in the present analysis. Interactions of subrelativistic protons with plasma result in production of bremsstrahlung photons (inverse bremsstrahlung radiation). Though the rate of these energy losses is negligible in comparison with the above-mentioned Coulomb energy losses, nevertheless, these losses generate emission in the energy range higher than the thermal emission of background plasma and hence can be observed. Subrelativistic protons generate bremsstrahlung photons with characteristic energies about $E_{\rm{X}}< (m/M)E_{\rm{p}}$ where $E_{\rm{p}}$ is the kinetic energy of protons and $m$ and $M$ are the masses of an electron and a proton. For the proton energies $E\leq 100$ MeV the bremsstrahlung radiation is in the range $E_{\rm{X}}< 55$ keV . 
The cross-section of inverse bremsstrahlung radiation is \citep{haya} \begin{equation} {{d\sigma_{\rm{br}}\over{dE_{\rm{X}}}}}={8\over 3}{Z^2}{{e^2}\over{\hbar c}}\left({{e^2} \over{m{c^2}}}\right)^2{{m{c^2}}\over{E^\prime}}{1\over{E_{\rm{X}}}} \ln{{\left(\sqrt{E^\prime}+\sqrt{{E^\prime}-{E_{\rm{X}}}}\right)^2}\over{E_{\rm{X}}}}\,. \label{sbr} \end{equation} Here $E^\prime = (m/M)E_{\rm{p}}$. Then the total flux of inverse bremsstrahlung emission from the GC can be calculated from \begin{equation} F_{\rm{X}}^{\rm{ib}}(E_{\rm{X}})=4\pi\int\limits_{E}dE\int\limits_{V_{\rm{GC}}}N_{\rm{p}}(E,{\bf r},t){{d\sigma_{\rm{br}}\over{dE_{\rm{X}}}}}\mathrm{v}_{\rm{p}}n({\bf r})~d^3r\,, \label{int_ib} \end{equation} where $V_{\rm{GC}}$ is the volume of the emitting region. \begin{figure}[h] \begin{center} \FigureFile(110mm,80mm){XR_tst6.eps} \end{center} \caption{The spectrum of inverse bremsstrahlung emission generated by subrelativistic protons ($E_{\rm{esc}}=100$~MeV was assumed, see text) in the GC region (solid line with points) and the de-convolved X-ray flux observed by the Suzaku HXD in the GC region (crosses; Suzaku Observation ID 100027010). The thermal X-ray emission for a temperature of 6.4 keV is shown by the dashed line, and the total emission (thermal + inverse bremsstrahlung) is shown by the solid line. Normalization factors are adjusted so that the total model emission reproduces the observed HXD flux.} \label{X-ray} \end{figure} The calculated X-ray spectrum of inverse bremsstrahlung radiation is shown in figure \ref{X-ray} by the solid line with points. The GC hard X-ray spectrum observed with the Suzaku HXD, shown by crosses \citep{yuasa}, was deconvolved to a photon spectrum to perform a direct comparison with our model spectrum of inverse bremsstrahlung.
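For reference, equation (\ref{sbr}) can be implemented directly; the sketch below is our own minimal transcription (energies in keV, hydrogen target $Z=1$), not the code used for the figure.

```python
import math

# Sketch implementation of the inverse-bremsstrahlung cross section of
# eq. (sbr), differential in photon energy E_X, for a proton of kinetic
# energy E_p. Energies in keV; result in cm^2 / keV.

ALPHA = 1.0 / 137.036       # fine-structure constant, e^2/(hbar c)
R_E = 2.818e-13             # classical electron radius e^2/(m c^2), cm
MC2 = 511.0                 # electron rest energy m c^2, keV
M_RATIO = 0.511 / 938.27    # mass ratio m/M

def dsigma_dEx(E_x, E_p, Z=1):
    """d(sigma_br)/dE_X per eq. (sbr); E_x and E_p in keV."""
    E_prime = M_RATIO * E_p          # E' = (m/M) E_p
    if not 0.0 < E_x < E_prime:
        return 0.0                   # spectrum ends at E_X = E'
    log_arg = (math.sqrt(E_prime) + math.sqrt(E_prime - E_x))**2 / E_x
    return (8.0 / 3.0) * Z**2 * ALPHA * R_E**2 \
        * (MC2 / E_prime) * (1.0 / E_x) * math.log(log_arg)

# Harder photons are rarer; the spectrum vanishes at E_X -> E' ~ 54 keV
E_p = 100e3  # 100 MeV proton
print(dsigma_dEx(10.0, E_p), dsigma_dEx(50.0, E_p))
```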
In the deconvolution, as a rough approximation, we assumed a $0.55^\circ\times 0.55^\circ$ spatially uniform emission (although in reality the surface brightness of the emission has a spatial gradient centered on the GC). From the spectral characteristics of the Suzaku flux it follows that the energy $E_{esc}$ estimated by equation (\ref{esc}) is no smaller than $85-100$~MeV; otherwise it is impossible to reproduce the Suzaku data. The total spectrum including the contribution of thermal emission is shown by the solid line. The total flux of the thermal component (without absorption) in the energy range above 10 keV is $F_{\rm{X}}^{\rm{th}}\simeq 2\times10^{36}$ erg s$^{-1}$; the inverse bremsstrahlung flux in the same energy range is $F_{\rm{X}}^{\rm{ib}}\simeq 3\times 10^{36}$ erg s$^{-1}$. We notice, however, that this estimate of the nonthermal emission from the hot plasma in the GC is an upper limit of the model, because continuum emission generated by these protons in the molecular gas may contribute a significant part of the total flux (see \cite{dog_p3}). Besides, if a part of the line emission comes from individual sources, the effect of the non-thermal protons becomes smaller. Thus, the spectrum presented in figure \ref{X-ray} is the worst case for the model. The calculated flux of inverse bremsstrahlung radiation from the $0.55^\circ\times 0.55^\circ$ central region is weakly sensitive to the average density of the background plasma in the GC if the spatial diffusion coefficient of the protons is small enough. The reason is that (as follows from Eqs. (\ref{pr_state}) and (\ref{int_ib})) the total flux of inverse bremsstrahlung radiation can be estimated as \begin{equation} F_{\rm{ib}}\sim \bar{Q}\frac{\tau_i}{\tau_{\rm{ib}}}\,, \end{equation} where $\tau_i$ and $\tau_{\rm{ib}}$ are the characteristic times of ionization and bremsstrahlung losses of the protons, and $\bar{Q}$ is the integrated power of the proton sources.
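Since both loss times scale as $1/n$, the density dependence cancels in this estimate; the toy sketch below (with placeholder normalizations, not values from the paper) makes the cancellation explicit over the density range considered above.

```python
# Sketch: the estimate F_ib ~ Qbar * tau_i / tau_ib is insensitive to
# the plasma density because both loss times scale as 1/n. The
# normalizations C_I and C_IB below are placeholders (arbitrary units),
# not values from the paper; Qbar is the integrated source power.

C_I, C_IB = 1.0, 2.0e5   # placeholder normalizations, tau = C / n
Q_BAR = 1e42             # integrated source power, erg/s (from the text)

def flux_estimate(n):
    tau_i = C_I / n      # ionization loss time, ~ 1/n
    tau_ib = C_IB / n    # bremsstrahlung loss time, ~ 1/n
    return Q_BAR * tau_i / tau_ib

# Same answer across the density range 0.1-0.4 cm^-3 considered above
fluxes = [flux_estimate(n) for n in (0.1, 0.2, 0.4)]
print(fluxes)
```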
Since both times are proportional to the plasma density, the inverse bremsstrahlung flux is almost independent of it. We notice a significant difference between the GC hard X-ray emission and that of the GRXE. The latter can hardly be due to inverse bremsstrahlung of subrelativistic nuclei, since in that case the flux of carbon and oxygen de-excitation gamma-ray lines in the range from 3 to 7 MeV would be higher than the upper limit measured by OSSE \citep{val}. However, if the hard X-ray flux from the Galactic center is due to inverse bremsstrahlung of protons with the derived spectrum, the flux of de-excitation lines is still below the OSSE limit \citep{dog09}. In this respect a joint analysis of X-ray and gamma-ray data is crucial for determining the origin of the hard X-rays from the GC. \section{Conclusion} We analysed the origin of the X-ray emission from the GC assuming that it is produced by subrelativistic protons generated by star accretion onto the central black hole. The average power of the energy release from accretion is about $10^{42}$ erg s$^{-1}$, and the average energy of the emitted protons is about 100~MeV. The energy of these protons is transformed into plasma heating through ionization losses. As derived by \citet{sun1}, \citet{koya1} and \citet{muno}, exactly this energy release is necessary to heat the plasma up to the observed temperatures of about $6-10$~keV. Additional ionization of iron ions by nonrelativistic protons can, in principle, violate the ionization balance in the GC, providing an excess of FeXXVI ions that increases the intensity of the 6.9 keV iron line. However, as the numerical calculations show, the excess due to ionization by subrelativistic protons is negligible for a 6.5 keV plasma temperature. A more significant effect of the subrelativistic protons is expected in the X-ray range above 10 keV, where the influence of the thermal emission is insignificant. We show that the inverse bremsstrahlung emission of the protons in this energy range may produce a non-thermal X-ray flux.
For the adopted accretion parameters the inverse bremsstrahlung flux of the protons is about $3\times 10^{36}$ erg s$^{-1}$, i.e., about the flux observed by Suzaku from the GC in the $14-40$~keV band. \vspace{5 mm} The authors are grateful to K. Ebisawa, M. Ishida, M. Revnivtsev, and S. Yamauchi for discussions and to the anonymous referee for useful comments. VAD and DOC were partly supported by the RFBR grant 08-02-00170-a, the NSC-RFBR Joint Research Project No 95WFA0700088 and by the grant of a President of the Russian Federation "Scientific School of Academician V.L.Ginzburg". KSC is supported by a RGC grant of Hong Kong Government under HKU 7014/07P. CMK is supported in part by the National Science Council, Taiwan under the grant NSC-96-2112-M-008-014-MY3. A.~Bamba is supported by a JSPS Research Fellowship for Young Scientists (19-1804). \vspace{8 mm}
\section{Derivation of the long-range potential} \label{appendix:potential_derivation} Due to the $L_e-L_\beta$ ($\beta = \mu, \tau$) symmetry, an electron sources a Yukawa potential \begin{equation}\label{equ:potential_one_electron} V_{e\beta} = - \frac{ g_{e\beta}^{\prime 2} } { 4\pi d } e^{-m_{e\beta}^\prime d} \end{equation} at a distance $d$ from it, where $g_{e\beta}^\prime$ is the new coupling between electrons and neutrinos, and $m_{e\beta}^\prime$ is the mass of the $Z_{e\beta}^\prime$ that acts as mediator. For a given value of the mass, the range of the interaction is $1/m_{e\beta}^\prime$; beyond that, the potential is exponentially suppressed. Because we focus on tiny mediator masses, the interaction range is between meters and thousands of Gpc. Below, we compute the most important contributions to the potential, coming from electrons in the Earth, Moon, Sun, Milky Way, and cosmological electrons. When calculating the number of electrons $N_e$ in a concentration of matter, we assume that the matter is isoscalar --- it has roughly equal numbers of protons $N_p$ and neutrons $N_n$ --- and electrically neutral, so that the electron fraction in it is $Y_e \equiv N_e / (N_p + N_n) = 0.5$. With this, we convert from baryon density to electron density. \subsection{Electrons in the Earth} To calculate the potential due to the $N_{e,\oplus} \sim 4 \cdot 10^{51}$ electrons inside the Earth, we compute the electron column densities traversed by neutrinos inside the Earth prior to arriving at IceCube. To do this, we use the profile of electron number density $n_{e,\oplus}$ built from the matter density profile of the Preliminary Reference Earth Model (PREM)\ \cite{Dziewonski:1981xy}. The profile, constructed from seismic data, consists of concentric layers of increasing density towards the center of the Earth.
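The stated interaction range can be made concrete with a unit conversion; the sketch below (ours, not from the paper) converts $1/m_{e\beta}^\prime$ to meters via $\hbar c \approx 1.973\times10^{-7}$~eV~m.

```python
# Sketch: range of the L_e - L_beta Yukawa interaction, 1/m', converted
# to meters. Tiny mediator masses give ranges from meters up to
# cosmological scales, as stated in the text.

HBARC_EV_M = 1.973e-7  # hbar*c in eV * m

def interaction_range_m(m_prime_eV):
    """Range 1/m' in meters for a mediator mass m' in eV."""
    return HBARC_EV_M / m_prime_eV

print(interaction_range_m(1.973e-7))  # ~1 m
print(interaction_range_m(1e-18))     # ~2e11 m, of order the Earth-Sun distance
```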
At the position of IceCube, the net potential acting on neutrinos arriving from all directions is \begin{eqnarray} V_{e\beta}^\oplus &=& 2 \pi \frac{g_{e\beta}^{\prime 2}}{4\pi} \int_0^\pi d\theta \int_0^{r_{\max}(\theta)} dr ~r ~\langle n_{e,\oplus}(r,\theta) \rangle_\theta \nonumber \\ && \qquad\qquad\qquad\qquad\qquad \times \sin \theta ~e^{-m_{e\beta}^\prime r} \;, \end{eqnarray} where $R_\oplus = 6371$~km is the radius of the Earth, $\langle n_{e,\oplus} \rangle_\theta$ is the average electron density along the direction given by $\theta$, and $r_{\max}(\theta) = (R_\oplus-d_{\rm IC}) \cos \theta + \left[ (R_\oplus-d_{\rm IC})^2 \cos^2 \theta + (2R_\oplus-d_{\rm IC}) d_{\rm IC} \right]^{1/2}$ is the length of the chord traversed by the neutrino inside the Earth, with $d_{\rm IC} = 1.5$~km the approximate depth of IceCube. To compute the potential due to standard matter effects inside the Earth, we adopt a simpler prescription: $V_{\rm mat}^\oplus = \sqrt{2} G_F \langle n_e^\oplus \rangle$, where $\langle n_e^\oplus \rangle \equiv Y_e \langle n_N \rangle / (2 m_p)$ is the average electron density and $\langle n_N \rangle \approx 5.5$~g~cm$^{-3}$ is the average matter density of the Earth according to the PREM. We do this because, in the regime where standard matter effects become important --- when the interaction range is smaller than $R_\oplus$ --- other limits on $g_{e\beta}^\prime$ are stronger, as shown in \figu{limits_emu}, avoiding the need for a more sophisticated calculation. \subsection{Electrons in the Moon and the Sun} We treat the Moon and the Sun as point sources of electrons. The potential $V_{e\beta}^{\leftmoon}$ due to electrons in the Moon is obtained by evaluating \equ{potential_one_electron} at $d = d_{\leftmoon} \approx 4 \cdot 10^5$~km --- the distance between the Earth and the Moon --- and multiplying it by $N_{e,\leftmoon} \sim 5 \cdot 10^{49}$ --- the number of electrons in the Moon.
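The chord length $r_{\max}(\theta)$ entering the Earth integral above has two simple limits that serve as a sanity check: the chord equals $2R_\oplus - d_{\rm IC}$ for a neutrino crossing the full diameter and $d_{\rm IC}$ for a vertically down-going one. A minimal sketch (ours; the angular convention, with $\theta = 0$ for the diameter-crossing direction and $\theta = \pi$ for the down-going one, is read off from the formula):

```python
import math

R_EARTH_KM = 6371.0   # Earth radius
D_IC_KM = 1.5         # approximate depth of IceCube

def chord_length_km(theta):
    """Chord r_max(theta) traversed inside the Earth by a neutrino
    reaching a detector at depth D_IC_KM, as in the expression above."""
    a = (R_EARTH_KM - D_IC_KM) * math.cos(theta)
    return a + math.sqrt(a * a + (2.0 * R_EARTH_KM - D_IC_KM) * D_IC_KM)
```

At $\theta = \pi/2$ the chord is $\sqrt{(2R_\oplus - d_{\rm IC}) d_{\rm IC}} \approx 138$~km, i.e. the horizon distance seen from the detector depth.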
Similarly, the potential $V_{e\beta}^{\astrosun}$ due to electrons in the Sun is obtained by evaluating \equ{potential_one_electron} at $d = d_{\astrosun} =1$~A.U. --- the distance between the Earth and the Sun --- and multiplying it by $N_{e,\astrosun} \sim 10^{57}$ --- the number of electrons in the Sun. \subsection{Electrons in the Milky Way} \setcounter{figure}{0} \renewcommand{\thefigure}{A\arabic{figure}} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{mw_electron_number_density_total.png} \caption{\label{fig:mw_electron_number_density}Density of electrons in the Milky Way, in Galactocentric coordinates. Electrons are distributed in the central bulge, thin disc, and thick disc of stars and cold gas\ \cite{McMillan:2011wd}, and in the diffuse halo of hot gas\ \cite{Miller:2013nza}.} \end{figure} The baryonic content of the Milky Way consists of stars and cold gas --- distributed in a central bulge, a thick disc, and a thin disc --- and hot gas --- distributed in a diffuse halo. We compute the potential due to the total $N_{e,{\rm MW}} \sim 10^{67}$ electrons, assuming, as before, $Y_e = 0.5$. Figure \ref{fig:mw_electron_number_density} shows the density of electrons in the Milky Way. For the central bulge, thick disc, and thin disc, we assume the simplified profiles of matter density from Ref.\ \cite{McMillan:2011wd}. These were obtained via a Bayesian fit to photometric and kinematic data. Each of the three components is modeled as a flat cylinder centered on the Galactic Center, with the matter density exponentially falling away from the axis and from the Galactic Plane. We adopt the parameter values from the ``convenient model'' of Ref.\ \cite{McMillan:2011wd}. For the diffuse halo of hot gas, we assume the spherical saturated matter density profile from Ref.\ \cite{Miller:2013nza}, obtained from measurements of O VII K$\alpha$ x-ray absorption lines using XMM-Newton. 
The density is highest at the Galactic Center and falls exponentially outwards. We calculate the potential due to Milky Way electrons by integrating the electron column density along all incoming neutrino directions, {\it i.e.}, \begin{eqnarray}\label{equ:potential_mw} V_{e\beta}^{\rm MW} &=& \frac{g_{e\beta}^{\prime 2}}{4\pi} \int_0^\infty dr \int_0^\pi d\theta \int_0^{2\pi} d\phi ~r ~n_{e, {\rm MW}}(r,\theta,\phi) \nonumber \\ && \qquad\qquad\qquad\qquad\qquad \times \sin\theta ~e^{-m_{e\beta}^\prime r} \;, \end{eqnarray} with the coordinate system centered at the position of the Earth, which is located 8.33~kpc away from the Galactic Center\ \cite{McMillan:2011wd}. The potential is dominated by electrons in stars and cold gas. Though the halo of hot gas accounts for a significant fraction of the baryonic content of the Milky Way, its density is low, so halo electrons are only a tiny contribution to the total potential in \equ{potential_mw}. \subsection{Cosmological electrons} \renewcommand{\thefigure}{A\arabic{figure}} \begin{figure}[t!] \centering \includegraphics[width=0.49\textwidth]{suppression_vs_mass.pdf} \caption{\label{fig:yukawa_suppression}Yukawa suppression $\mathcal{Y}_{e\beta}$ of the potential due to cosmological electrons, as a function of mediator mass $m_{e\beta}^\prime$, for two fixed values of redshift: $z=0$ and $z=6$. For comparison, we show the causal horizon for the two choices.} \end{figure} In addition to the electron repositories in the local Universe, there is, at all redshifts, a cosmological distribution of electrons. The huge number of cosmological electrons --- $N_{e,{\rm cos}} \sim 10^{79}$ --- is what allows us to set the best bounds on the coupling $g_{e\beta}^\prime$ at the lowest values of mediator mass, where the interaction range is of the order of the size of the Universe, or larger. Below, we calculate the potential due to cosmological electrons.
Consider a neutrino that sits at the center of a sphere of radius $R$ that is homogeneously filled with a constant number density $n_e$ of electrons. The integrated long-range potential at the position of the neutrino is then \begin{equation}\label{equ:potential_def_general} V_{e\beta} = g_{e\beta}^{\prime 2} n_e \left[ \frac { 1-e^{-m_{e\beta}^\prime R}(1+m_{e\beta}^\prime R) } { m_{e\beta}^{\prime 2} } \right] \;. \end{equation} IceCube neutrinos are predominantly extragalactic, and presumably generated in sources at different redshifts. Because of the cosmological expansion, the density of cosmological electrons and the potential that they source vary with redshift. We take these effects into account as follows. The causal horizon defines the largest possible region within which events can be causally connected to each other\ \cite{Weinberg:2008zzc}. At redshift $z$, the comoving size of the causal horizon centered around the neutrino is \begin{equation} d_{\rm H} \left( z \right) = H_0^{-1} \int_0^{\left(1+z\right)^{-1}} \frac{dx}{h\left(x\right)} \;, \end{equation} where $H_0 = 100 h$ km s$^{-1}$ Mpc$^{-1}$ is the Hubble constant, with $h = 0.673$~\cite{Agashe:2014kda}, $x \equiv \left( 1+z \right)^{-1}$, and $h\left(x\right) \equiv H\left(x\right) / H_0$, with the Hubble parameter $H\left(x\right) = H_0 x \sqrt{ \Omega_\Lambda^0 x^2 + \Omega_\text{M}^0 x^{-1} }$. We adopt a $\Lambda$CDM cosmology with vacuum energy density $\Omega_\Lambda^0 = 0.692$ and matter density $\Omega_{\rm M}^0 = 0.308$\ \cite{Ade:2015xua}. The causal horizon changes from about 14.5 Gpc at $z=0$ to about 0.9 Gpc at $z=6$. \setcounter{figure}{0} \renewcommand{\thefigure}{B\arabic{figure}} \begin{figure*}[t!]
\centering \includegraphics[width=0.49\textwidth]{eff_mix_angles_vs_v_em_enu_0100_tev_no_z_000_nu_nubar.pdf} \includegraphics[width=0.49\textwidth]{prob_vs_v_em_enu_0100_tev_no_z_000_init_e_nu_nubar.pdf} \caption{\label{fig:mixing_prob_lrp}Effective neutrino mixing parameters ({\it left}) and modified probabilities $P_{ee}$, $P_{e\mu}$, and $P_{e\tau}$ ({\it right}), in the presence of the new long-range interaction from the $L_e-L_\mu$ symmetry, at neutrino energy $E_\nu = 100$~TeV, as a function of the potential $V_{e\mu}$. For comparison, we show the value of the $ee$ element of $\matr{H}_{\rm vac}$ at this energy. Standard mixing parameters are fixed to their best-fit values under normal mass ordering from Ref.\ \cite{deSalas:2017kay}. Dashed lines show the standard values of the quantities, {\it i.e.}, for $V_{e\mu} = 0$. Top panels are for neutrinos; bottom panels are for anti-neutrinos. When computing limits, we consider equal fluxes of $\nu$ and $\bar{\nu}$.} \end{figure*} The content of baryonic matter inside the causal horizon (see Eq.~(16.105) in Ref.~\cite{Giunti:2007ry}) is \begin{equation} M_{\rm H} \left( z \right) = \frac{H_0^2}{16 G_N} d_{\rm H}^3 \left( z \right) \Omega_b^0 \;, \end{equation} where $\Omega_b^0 \approx 0.02207 h^{-2} \approx 0.05$~\cite{Agashe:2014kda} is the density of baryons in the local Universe. The total mass is predominantly made up of protons, neutrons, and electrons, {\it i.e.}, $M_{\rm H} \left( z \right) \simeq N_p \left( z \right) m_p + N_n \left( z \right) m_n + N_e \left( z \right) m_e$, where $m_p$, $m_n$, and $m_e$ are the masses of one proton, neutron, and electron. We estimate the number of electrons by assuming that the number of protons and neutrons is roughly equal ($N_p \approx N_n$) and the net electric charge is zero ($N_p \approx N_e$). Taking $m_n \approx m_p$, this results in \begin{equation} N_e \left( z \right) \simeq M_{\rm H} \left( z \right) / \left( 2 m_p + m_e \right) \;. 
\end{equation} By evaluating \equ{potential_def_general} with $R = d_{\rm H}(z)$ and $n_e = N_e(z) / V_{\rm H}(z)$, with $V_{\rm H}(z) \equiv (4/3) \pi d_{\rm H}^3(z)$ the causal volume, the potential acting on a neutrino at redshift $z$ is \begin{equation}\label{equ:potential_def} V_{e\beta}^{\rm cos}(z) = \mathcal{C}_{e\beta}(z) \cdot \mathcal{Y}_{e\beta}(z) \;. \end{equation} The term due to the Coulomb part of the potential, \begin{equation}\label{equ:potential_def_coulomb} \mathcal{C}_{e\beta}(z) = \frac{3}{2} \frac{g_{e\beta}^{\prime 2}}{4\pi} \frac{N_e(z)}{d_{\rm H}(z)} \;, \end{equation} describes a potential with infinite range, mediated by a massless mediator. The Yukawa suppression, \begin{equation}\label{equ:potential_def_yukawa} \mathcal{Y}_{e\beta}(z) = \frac{2}{[m_{e\beta}^\prime d_{\rm H}(z)]^2} \left\{ 1 - e^{-m_{e\beta}^\prime d_{\rm H}(z)} [1+m_{e\beta}^\prime d_{\rm H}(z)] \right\} \;, \end{equation} reflects the reduced interaction range due to the mediator being massive and the finite size of the causal horizon. Smaller values of $\mathcal{Y}_{e\beta}$ represent stronger suppression. Figure \ref{fig:yukawa_suppression} illustrates the behavior of the Yukawa suppression. For a fixed redshift, the suppression is important --- {\it i.e.}, $\mathcal{Y}_{e\beta} \ll 1$ --- as long as the interaction range $1/m_{e\beta}^\prime$ is small compared to the causal horizon. This means that the contribution of electrons located far from the neutrino is exponentially suppressed. This occurs for $m_{e\beta}^\prime \gtrsim 10^{-31}$~eV at $z=6$ and $m_{e\beta}^\prime \gtrsim 10^{-33}$~eV at $z=0$. On the other hand, if the range is comparable to or larger than the causal horizon, there is no Yukawa suppression, {\it i.e.}, $\mathcal{Y}_{e\beta} \approx 1$. In this case, the interaction range is effectively infinite, that is, larger than the size of the causally connected Universe.
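The numbers quoted above can be reproduced with a short numerical sketch (ours; it uses the cosmological parameters given in the text, and note that recovering the 0.9~Gpc figure at $z=6$ requires multiplying the comoving integral by the scale factor $(1+z)^{-1}$, i.e. reading the quoted horizon sizes as proper sizes):

```python
import math

H0_KM_S_MPC = 100.0 * 0.673                              # Hubble constant, h = 0.673
HUBBLE_RADIUS_GPC = 299792.458 / H0_KM_S_MPC / 1000.0    # c/H0 in Gpc
OMEGA_L0, OMEGA_M0 = 0.692, 0.308

def h_of_x(x):
    """Dimensionless expansion rate h(x) = H(x)/H0 as defined in the text."""
    return x * math.sqrt(OMEGA_L0 * x**2 + OMEGA_M0 / x)

def causal_horizon_gpc(z, n=200000):
    """Proper horizon size in Gpc: the comoving integral times 1/(1+z).
    Midpoint rule handles the integrable x^(-1/2) behavior near x = 0."""
    x_max = 1.0 / (1.0 + z)
    dx = x_max / n
    integral = sum(dx / h_of_x((i + 0.5) * dx) for i in range(n))
    return HUBBLE_RADIUS_GPC * integral * x_max

def yukawa_suppression(x):
    """Y as a function of x = m' * d_H; -> 1 for x << 1, -> 2/x^2 for x >> 1."""
    return 2.0 / x**2 * (1.0 - math.exp(-x) * (1.0 + x))
```

The sketch gives roughly 14.6~Gpc at $z=0$ and 0.87~Gpc at $z=6$, matching the values above, and confirms the two limiting behaviors of $\mathcal{Y}_{e\beta}$.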
\section{Flavor mixing in a long-range potential} \label{appendix:flavor_mixing_lrp} In the presence of the long-range potential, the average flavor-transition probability is $P_{\alpha\beta}(E_\nu) = \sum_{i=1}^3 \lvert U^\prime_{\alpha i}(E_\nu) \rvert^2 \lvert U^\prime_{\beta i}(E_\nu) \rvert^2$, where $\matr{U}^\prime$ is the matrix that diagonalizes the total Hamiltonian $\matr{H}_{e\beta}(E_\nu,g_{e\beta}^\prime, m_{e\beta}^\prime) \equiv \matr{H}_{\rm vac}(E_\nu) + \matr{V}_{e\beta}(g_{e\beta}^\prime, m_{e\beta}^\prime) + \Theta(R_\oplus-m_{e\beta}^{\prime -1}) \matr{V}_{\rm mat}^\oplus$. The new interaction between neutrinos and electrons modifies the effective mixing angles $\theta_{12,{\rm eff}}$, $\theta_{13,{\rm eff}}$, and $\theta_{23,{\rm eff}}$, and the effective squared-mass differences $\Delta m_{21,{\rm eff}}^2$ and $\Delta m_{31,{\rm eff}}^2$. The effective mixing angles are identified by writing $\matr{U}^\prime$ as a PMNS-like matrix, while the effective squared-mass differences are obtained from the eigenvalues of $\matr{H}_{e\beta}$. Standard flavor mixing occurs because the neutrino flavor and mass bases are different, {\it i.e.}, because $\matr{H}_{\rm vac}$ is non-diagonal. Indeed, if $\matr{V}_{e\beta} \ll \matr{H}_{\rm vac} + \Theta(R_\oplus-m_{e\beta}^{\prime -1}) \matr{V}_{\rm mat}^\oplus$, we recover standard mixing. In \figu{potential}, this happens below the iso-contours of $V_{e\beta} = [\matr{H}_{\rm vac}(E_\nu)]_{ee} + V_{\rm mat}^\oplus$. If, on the other hand, $\matr{V}_{e\beta} \gg \matr{H}_{\rm vac} + \Theta(R_\oplus-m_{e\beta}^{\prime -1}) \matr{V}_{\rm mat}^\oplus$, the total Hamiltonian becomes effectively diagonal and mixing turns off, {\it i.e.}, $P_{\alpha\alpha} \approx 1$. In \figu{potential}, this happens above the iso-contours. In-between, when $\matr{V}_{e\beta} \approx \matr{H}_{\rm vac} + \Theta(R_\oplus-m_{e\beta}^{\prime -1}) \matr{V}_{\rm mat}^\oplus$, flavor mixing occurs with modified probabilities.
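The averaged probability $P_{\alpha\beta} = \sum_i \lvert U^\prime_{\alpha i}\rvert^2 \lvert U^\prime_{\beta i}\rvert^2$ is straightforward to evaluate for any unitary mixing matrix. A minimal sketch (ours; the angle values are illustrative round numbers in the range of current global fits, not the exact best fits of Ref.\ \cite{deSalas:2017kay}):

```python
import cmath
import math

def pmns_like(th12, th23, th13, dcp):
    """3x3 unitary mixing matrix in the standard PMNS parametrization."""
    s12, c12 = math.sin(th12), math.cos(th12)
    s23, c23 = math.sin(th23), math.cos(th23)
    s13, c13 = math.sin(th13), math.cos(th13)
    eid = cmath.exp(1j * dcp)
    return [
        [c12 * c13, s12 * c13, s13 / eid],
        [-s12 * c23 - c12 * s23 * s13 * eid,
         c12 * c23 - s12 * s23 * s13 * eid, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * eid,
         -c12 * s23 - s12 * c23 * s13 * eid, c23 * c13],
    ]

def averaged_probs(U):
    """P[a][b] = sum_i |U_ai|^2 |U_bi|^2: oscillation-averaged
    flavor-transition probabilities, as used in the text."""
    return [[sum(abs(U[a][i]) ** 2 * abs(U[b][i]) ** 2 for i in range(3))
             for b in range(3)] for a in range(3)]

# illustrative angles (radians): th12 ~ 33.6 deg, th23 ~ 47.9 deg, th13 ~ 8.5 deg
U = pmns_like(0.5865, 0.8360, 0.1484, 3.88)
P = averaged_probs(U)
```

Unitarity of $\matr{U}^\prime$ guarantees that each row of $P$ sums to one and that $P$ is symmetric, which the sketch verifies; for these angles $P_{ee} \approx 0.55$.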
Figure \ref{fig:mixing_prob_lrp} shows how the effective mixing angles and probabilities $P_{e\beta}$ ($\beta = e, \mu, \tau$), calculated assuming the $L_e-L_\mu$ symmetry, vary with $V_{e\mu}$. The long-range interaction induces a new resonance in the mixing of neutrinos, at $V_{e\mu} \sim 10^{-17}$~eV, on account of the potential term and the vacuum term having opposite signs. For anti-neutrinos, this does not occur and hence the resonance is not present. The resonance accounts for the wiggles seen in the flavor ratios in \figu{f_E_vary_V}. Because, in obtaining our limits, we averaged over equal fluxes of $\nu$ and $\bar{\nu}$, the wiggles are damped in \figu{f_E_vary_V}. The resonance is softer and broader in the $P_{\mu\beta}$ channels (not shown). At higher values of the potential, mixing turns off, {\it i.e.}, $P_{ee} \approx 1$. Figure \ref{fig:mixing_prob_lrp} uses the best-fit values of the mixing angles under the normal mass ordering. Under the inverted mass ordering (not shown), results are similar, but the curves for $P_{e\mu}$ and $P_{e\tau}$ are swapped below the resonance, though $P_{e\tau}$ remains larger than $P_{e\mu}$ at the resonance. For the $L_e-L_\tau$ symmetry (not shown), results are similar, but $P_{e\mu}$ and $P_{e\tau}$ are swapped near the resonance. \setcounter{figure}{0} \renewcommand{\thefigure}{C\arabic{figure}} \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{limits_lrp_v_et_src_sel_010_no_enu_0025_2800_tev_alpha_nu_250_1s.pdf} \caption{\label{fig:limits_etau}Same as \figu{limits_emu}, but for the $L_e-L_\tau$ symmetry. Like in \figu{limits_emu}, we assume flavor ratios at the source $\left(\frac{1}{3}:\frac{2}{3}:0\right)_{\rm S}$ and normal mass hierarchy.} \end{figure} \vspace*{0.19cm} \section{Constraints for $L_e-L_\tau$} Figure \ref{fig:limits_etau} shows present and future constraints on $g_{e\tau}^\prime$, in analogy to \figu{limits_emu} for $g_{e\mu}^\prime$ in the main text.
The only difference compared to \figu{limits_emu} is that slightly larger values of $g_{e\tau}^\prime$ are allowed than for $g_{e\mu}^\prime$. The similarity between the limits on $g_{e\mu}^\prime$ and $g_{e\tau}^\prime$ is evident from inspecting the behavior of $f_{\alpha,\oplus}$ as a function of the long-range potential, shown in the top row of \figu{ternary_ordering_compare}. For both $L_e-L_\mu$ and $L_e-L_\tau$, the standard-mixing region and the point $\left(\frac{1}{3}:\frac{2}{3}:0\right)_{\oplus}$ lie outside the $1\sigma$ IceCube contour. The main difference between the two cases is the direction of the wiggle in $f_{\alpha,\oplus}$ due to the new resonance; see Appendix~\ref{appendix:flavor_mixing_lrp}. The similarity in the limits holds also when the inverted mass hierarchy is assumed; see Appendix \ref{appendix:mass_ordering}. \section{The effect of mass ordering} \label{appendix:mass_ordering} To derive the limits on $g_{e\mu}^\prime$ in \figu{limits_emu} and on $g_{e\tau}^\prime$ in \figu{limits_etau}, we varied the standard neutrino mixing parameters $\theta_{12}$, $\theta_{23}$, $\theta_{13}$, $\delta_{\rm CP}$, $\Delta m_{21}^2$, and $\Delta m_{31}^2$ within their allowed $1\sigma$~C.L. ranges obtained from the global oscillation analysis of Ref.\ \cite{deSalas:2017kay}, assuming a normal mass ordering. Here we explore how the limits on $g_{e\mu}^\prime$ and $g_{e\tau}^\prime$ change when we assume instead an inverted ordering. Figure \ref{fig:ternary_ordering_compare} shows the flavor ratios at Earth $f_{\alpha,\oplus}$, evaluated at $E_\nu = 100$~TeV, for the lepton-number symmetries $L_e-L_\mu$ and $L_e-L_\tau$, and the normal and inverted mass orderings. The top left panel is the same as \figu{f_E_vary_V}, and is reproduced here to facilitate the comparison. 
Figure \ref{fig:limits_ordering_compare} shows that, using the present IceCube flavor results, switching to inverted mass ordering --- though it is disfavored --- significantly worsens the limits derived following our procedure. This is because the standard-mixing region centered around $(\frac{1}{3}:\frac{1}{3}:\frac{1}{3})_\oplus$ lies very close to the present $1\sigma$ IceCube contour. Thus, while under normal ordering the standard region lies outside the contour, under inverted ordering it is almost fully contained by it. As a result, due to the hard $1\sigma$ cut implemented in our limit-setting procedure, changing the mass ordering has a large effect on the limits. In contrast, limits derived using future flavor results, centered on $(\frac{1}{3}:\frac{1}{3}:\frac{1}{3})_\oplus$, would be marginally affected by the choice of mass ordering. \setcounter{figure}{0} \renewcommand{\thefigure}{D\arabic{figure}} \begin{figure*}[t!] \centering \includegraphics[width=\columnwidth]{flavor_ratios_earth_var_v_em_enu_0100_tev_src_120_010_100_no_z_000_coded_by_v_1s_avg_nu_nubar.png} \includegraphics[width=\columnwidth]{flavor_ratios_earth_var_v_et_enu_0100_tev_src_120_010_100_no_z_000_coded_by_v_1s_avg_nu_nubar.png} \includegraphics[width=\columnwidth]{flavor_ratios_earth_var_v_em_enu_0100_tev_src_120_010_100_io_z_000_coded_by_v_1s_avg_nu_nubar.png} \includegraphics[width=\columnwidth]{flavor_ratios_earth_var_v_et_enu_0100_tev_src_120_010_100_io_z_000_coded_by_v_1s_avg_nu_nubar.png} \caption{\label{fig:ternary_ordering_compare}Same as \figu{f_E_vary_V}, but for all possibilities of lepton-number symmetry and neutrino mass ordering: $L_e-L_\mu$ with normal ordering (NO, top left; same as \figu{f_E_vary_V}), $L_e-L_\tau$ with NO (top right), $L_e-L_\mu$ with inverted ordering (IO, bottom left), and $L_e-L_\tau$ with IO (bottom right). 
Like in \figu{f_E_vary_V}, in these plots we fixed $E_\nu = 100$~TeV for illustration, but our limits are obtained using energy-averaged flavor ratios $\langle f_{\alpha,\oplus} \rangle$ (see main text), which behave similarly with $V_{e\beta}$.} \end{figure*} \begin{figure*}[b!] \centering \includegraphics[width=\columnwidth]{limits_lrp_v_src_sel_010_enu_0025_2800_tev_alpha_nu_250_1s_compare.pdf} \includegraphics[width=\columnwidth]{limits_lrp_v_src_sel_000_enu_0025_2800_tev_alpha_nu_250_1s_compare.pdf} \caption{\label{fig:limits_ordering_compare}Constraints at $1\sigma$ on the mass and coupling of the $Z_{e\mu}^\prime$ and $Z_{e\tau}^\prime$ bosons, derived from current IceCube flavor measurements. Like in \figu{limits_emu}, we assumed an astrophysical neutrino spectrum $\propto E_\nu^{-2.5}$. {\it Left:} Assuming the nominal expectation of flavor ratios $\left( \frac{1}{3}:\frac{2}{3}:0 \right)_{\rm S}$ at the source. Here, upper-limit curves of $L_e-L_\mu$ and $L_e-L_\tau$ at NO are on top of each other. {\it Right:} Assuming the alternative muon-damped ratios $(0:1:0)_{\rm S}$.} \end{figure*} \end{document}
\section{Introduction} \label{s:introduction} In this paper we address the classical water wave problem for two-dimensional steady waves with vorticity on water of finite depth, formulated in terms of the Euler equations with a free boundary. Allowing for arbitrary exact solutions, which represent nonlinear waves, we focus on two questions: fundamental bounds for the surface profile and the possible range of values of the flow force constant. The latter problem, known as the Benjamin and Lighthill conjecture, was introduced in \cite{Benjamin1954a}. The property conjectured by Benjamin and Lighthill (see also Keady \& Norbury \cite{Keady1975}, Conjecture 2) can be expressed as the inequalities \begin{equation} \label{BLconj} \FF_-(r) \leq \FF \leq \FF_+(r), \end{equation} where $\FF$ is the flow force constant of a solution, $r$ is the corresponding Bernoulli constant, while $\FF_-(r)$ and $\FF_+(r)$ are the flow force constants of conjugate laminar flows (supercritical and subcritical, respectively) determined by the same Bernoulli constant. According to the conjecture, inequality \eqref{BLconj} is valid for arbitrary smooth solutions. It was verified by Benjamin \cite{Benjamin95} for all irrotational Stokes waves and their small perturbations, while the lower bound in \eqref{BLconj} was obtained earlier by Keady \& Norbury \cite{Keady1975} (also for periodic wavetrains). Kozlov and Kuznetsov \cite{Kozlov2009a,Kozlov2011a} proved \eqref{BLconj} for arbitrary solutions under weak regularity assumptions, provided the Bernoulli constant $r$ is close to its critical value $R_c$; this was extended to the rotational setting in \cite{Kozlov2017a}, again for $r \approx R_c$; the latter condition guarantees that solutions are of small amplitude. The left inequality in \eqref{BLconj} for periodic waves with a favorable vorticity was obtained by Keady \& Norbury \cite{Keady1978}.
Whereas their result is valid under essential restrictions on the vorticity, they point out that in general the statement of the conjecture is probably false: ``There is no reason to suppose that the conjectures of Benjamin and Lighthill will hold for all flows with vorticity''. Thus, it is especially surprising that \eqref{BLconj} turns out to be true for arbitrary vorticity distributions and arbitrary solutions, which is one of the main results of the present paper. Let us outline some difficulties and gaps associated with \eqref{BLconj}. Even though Benjamin \cite{Benjamin95} verified \eqref{BLconj} for all irrotational Stokes waves by estimating certain contour integrals using the divergence structure of the problem, his approach can hardly be extended further, and it does not explain the nature of the inequalities in \eqref{BLconj}. Furthermore, nonlinear waves on water of finite depth are not limited to Stokes and solitary waves; see \cite{Vanden_Broeck_1983, Baesens1992, Zufiria1987, Craig2002}. In Section 5.3 of \cite{Benjamin95} Benjamin discusses the possibility of extending his method to arbitrary solutions; however, the suggested argument depends heavily on the geometry of the flow (symmetry, monotonicity) and is not applicable in general. While for irrotational Stokes waves the inequalities in \eqref{BLconj} are strict, the case of equalities becomes a significant problem in the context of arbitrary solutions. It is known that $\FF = \FF_-$ for all solitary waves, while it is unclear if the equality $\FF = \FF_-$ holds true only for solitary waves, symmetric and monotone on each side around the crest. The Benjamin and Lighthill conjecture is closely related to another problem about bounds for the surface profile.
If $y = \eta(x)$ determines the surface of the fluid in a moving frame of reference, then the following inequalities are well known: \begin{equation} \label{profile_bounds} d_-(r) \leq \inf_{x \in \R} \eta(x) \leq d_+(r) \leq \sup_{x \in \R} \eta(x), \end{equation} where $d_-(r)$ and $d_+(r)$ are the depths of the supercritical and subcritical flows, respectively. First obtained by Keady \& Norbury \cite{KeadyNorbury78} for irrotational Stokes waves, these bounds were extended to arbitrary solutions by Kozlov and Kuznetsov \cite{Kozlov2007, Kozlov2009}; see also \cite{Kozlov2012}. We emphasize that Kozlov and Kuznetsov \cite{Kozlov2009} obtained the strict inequality $d_+(r) < \sup_{x\in\R} \eta(x)$ for arbitrary irrotational solutions, provided $\eta$ is not identically constant. While for Stokes waves it can be obtained by using the Hopf lemma, the general case is much more subtle. The argument in \cite{Kozlov2009} required a careful analysis of the Fourier symbol associated with an integro-differential operator, and the irrotational nature of the problem was essential. For waves with vorticity only a weak form of \eqref{profile_bounds} is known; see \cite{Kozlov2012, Kozlov2015}. In this paper we consider both problems: the flow force inequalities \eqref{BLconj} and the bounds \eqref{profile_bounds}. For an arbitrary wave with vorticity we prove \eqref{BLconj} and \eqref{profile_bounds} and provide a complete description of all cases in which equalities can occur. The case of equalities in \eqref{BLconj} is new even in the irrotational setting. In fact, for all solutions other than streams and classical solitary waves, all inequalities in \eqref{BLconj} and \eqref{profile_bounds} are shown to be strict. In particular, if a given solution satisfies $\FF = \FF_-(r)$ (without any assumptions on the surface profile), then it is necessarily a classical solitary wave of elevation, whose profile decays monotonically on each side of the crest.
On the other hand, the relation $\FF = \FF_+(r)$ is only valid for subcritical laminar flows. This is a strong statement: in particular, it shows that subcritical solitary waves (with the Froude number less than one) do not exist. The latter was an open problem for a long time even in the irrotational setting; it was resolved recently in \cite{KozLokhWheeler2020} by using an asymptotic analysis. In addition we prove that any steady wave is subject to strict inequalities in \eqref{profile_bounds}, provided it is not a parallel flow or a solitary wave, for which the left inequality in \eqref{profile_bounds} turns into an equality. This generalizes a series of previous results. It is remarkable that our argument is based essentially on the classical maximum principle applied to a version of the flow force flux function, introduced recently in \cite{KozLokhWheeler2020}. Thus, the Benjamin and Lighthill conjecture \eqref{BLconj} is basically a consequence of the elliptic maximum principle, which explains the nature of \eqref{BLconj}. \section{Statement of the problem} We consider the classical water wave problem for two-dimensional steady waves with vorticity on water of finite depth. We neglect the effects of surface tension and consider a fluid of constant (unit) density. Thus, in an appropriate coordinate system moving along with the wave, the stationary Euler equations are given by \begin{subequations}\label{eqn:trav} \begin{align} \label{eqn:u} (u-c)u_x + vu_y & = -P_x, \\ \label{eqn:v} (u-c)v_x + vv_y & = -P_y-g, \\ \label{eqn:incomp} u_x + v_y &= 0, \end{align} which hold in a two-dimensional fluid domain $D$, defined by the inequality \[ 0 < y < \eta(x). \] Here $(u,v)$ are the components of the velocity field, $y = \eta(x)$ is the surface profile, $c$ is the wave speed, $P$ is the pressure and $g$ is the gravitational constant.
The corresponding boundary conditions are \begin{alignat}{2} \label{eqn:kinbot} v &= 0&\qquad& \text{on } y=0,\\ \label{eqn:kintop} v &= (u-c)\eta_x && \text{on } y=\eta,\\ \label{eqn:dyn} P &= P_{\mathrm{atm}} && \text{on } y=\eta. \end{alignat} \end{subequations} It is often assumed in the literature that the flow is irrotational, that is, $v_x - u_y$ is zero everywhere in the fluid domain. Under this assumption the components of the velocity field are harmonic functions, which allows one to apply methods of complex analysis. While a convenient simplification, it forbids the modeling of non-uniform currents, which commonly occur in nature. In the present paper we will consider rotational flows, where the vorticity function is defined by \begin{equation} \label{vort} \omega = -v_x + u_y. \end{equation} Throughout the paper we assume that the flow is free from stagnation points and the horizontal component of the relative velocity field does not change sign, that is \begin{equation} \label{uni} u-c < 0 \end{equation} everywhere in the fluid. We call such flows unidirectional. In the two-dimensional setup relation \eqref{eqn:incomp} allows us to reformulate the problem in terms of a stream function $\psi$, defined implicitly by the relations \[ \psi_y = c-u, \ \ \psi_x = v. \] This determines $\psi$ up to an additive constant, while relations \eqref{eqn:kinbot}, \eqref{eqn:kintop} force $\psi$ to be constant along the boundaries. Thus, by subtracting a suitable constant, we can always assume that \[ \psi = m \ \ \text{on } y = \eta; \ \ \psi = 0 \ \ \text{on } y = 0. \] Here $m > 0$ is the mass flux, defined by \[ m = \int_0^\eta (c-u) \, dy. \] In what follows we will use the non-dimensional variables proposed by Keady \& Norbury \cite{KeadyNorbury78}, where lengths and velocities are scaled by $(m^2/g)^{1/3}$ and $(mg)^{1/3}$ respectively; in the new units $m=1$ and $g=1$. For simplicity we keep the same notations for $\eta$ and $\psi$.
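This scaling can be sanity-checked directly (a sketch of ours; the numerical values are arbitrary): dividing lengths by $(m^2/g)^{1/3}$ and velocities by $(mg)^{1/3}$ indeed normalizes both the mass flux and gravity to one.

```python
def keady_norbury_scales(m, g):
    """Length and velocity scales L = (m^2/g)^(1/3), V = (m g)^(1/3)."""
    L = (m * m / g) ** (1.0 / 3.0)
    V = (m * g) ** (1.0 / 3.0)
    return L, V

m, g = 2.7, 9.81                 # arbitrary dimensional mass flux and gravity
L, V = keady_norbury_scales(m, g)
flux_nd = m / (L * V)            # nondimensional mass flux: L*V = m, so this is 1
g_nd = g * L / V**2              # nondimensional gravity: g*L/V^2 = 1
```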
Taking the curl of the Euler equations \eqref{eqn:u}-\eqref{eqn:incomp}, one checks that the vorticity function $\omega$ defined by \eqref{vort} is constant along paths tangent everywhere to the relative velocity field $(u-c,v)$; see \cite{Constantin11b} for more details. The stream function $\psi$ has the same property by definition and, in view of \eqref{uni}, is strictly monotone on every vertical interval inside the fluid region. These observations together show that $\omega$ depends only on values of the stream function, that is \[ \omega = \omega(\psi). \] This property and Bernoulli's law allow us to express the pressure $P$ as \begin{align} \label{eqn:bernoulli} P-P_\mathrm{atm} + \frac 12\abs{\nabla\psi}^2 + y + \Omega(\psi) - \Omega(1) = const, \end{align} where \begin{align*} \Omega(\psi) = \int_0^\psi \omega(p)\,dp \end{align*} is a primitive of the vorticity function $\omega(\psi)$. Thus, we can eliminate the pressure from the equations and obtain the following problem: \begin{subequations}\label{eqn:stream} \begin{alignat}{2} \label{eqn:stream:semilinear} \Delta\psi+\omega(\psi)&=0 &\qquad& \text{for } 0 < y < \eta,\\ \label{eqn:stream:dyn} \tfrac 12\abs{\nabla\psi}^2 + y &= r &\quad& \text{on }y=\eta,\\ \label{eqn:stream:kintop} \psi &= 1 &\quad& \text{on }y=\eta,\\ \label{eqn:stream:kinbot} \psi &= 0 &\quad& \text{on }y=0. \end{alignat} \end{subequations} Here $r>0$ is referred to as Bernoulli's constant. Let us define the flow force constant, another motion invariant. Following Benjamin \cite{BENJAMIN1984}, we put \begin{equation} \label{FFbyP} \FF = \int_0^\eta (P - P_{\mathrm{atm}} + (u-c)^2) \, dy. \end{equation} Taking the $x$-derivative in \eqref{FFbyP} and using \eqref{eqn:u} together with the formula for the pressure \eqref{eqn:bernoulli}, one verifies that $\FF$ is a constant of motion independent of $x$. In terms of the stream function one obtains \begin{equation} \label{flowforce} \FF = \int_0^{\eta}(\tfrac12(\psi_y^2 - \psi_x^2) - y + \Omega(1) - \Omega(\psi) + r )\, dy.
\end{equation} This constant is important in several ways; for instance, it plays the role of the Hamiltonian in spatial dynamics; see \cite{Baesens1992}. \subsection{Stream solutions} Laminar flows or shear currents, for which the vertical component $v$ of the velocity field is zero, play an important role in the theory of steady waves. Let us recall some basic facts about stream solutions $\psi = U(y)$ and $\eta = d$, describing shear currents. It is convenient to parameterize the latter solutions by the relative speed at the bottom. Thus, we put $U_y(0) = s$ and find that $U = U(y;s)$ is subject to \begin{equation} \label{eqn:laminar} U'' + \omega(U) = 0, \ \ \ 0 < y < d; \ \ U(0) = 0, \ \ U(d) = 1. \end{equation} Our assumption \eqref{uni} implies $U' > 0$ on $[0; d]$, which puts a natural constraint on $s$. Indeed, multiplying the first equation in \eqref{eqn:laminar} by $U'$ and integrating over $[0; y]$, we find \[ U'^2 = s^2 - 2\Omega(U). \] This shows that the expression $s^2 - 2 \Omega(p)$ is positive for all $p \in [0; 1]$, which requires \[ s > s_0 = \sqrt{\max_{p \in [0,1]}2\Omega(p)}. \] On the other hand, every $s > s_0$ gives rise to a monotonically increasing function $U(y; s)$ solving \eqref{eqn:laminar} for some unique $d = d(s)$, given explicitly by \[ d(s) = \int_0^1 \frac{dp}{\sqrt{s^2 - 2\Omega(p)}}. \] This formula shows that $d(s)$ monotonically decreases to zero with respect to $s$ and takes values between zero and \[ d_0 = \lim_{s \to s_0+} d(s). \] The latter limit may be finite or infinite. For instance, when $\omega = 0$ we find $s_0 = 0$ and $d_0 = +\infty$. On the other hand, when $\omega = -b$ for some positive constant $b$, then $s_0 = 0$ but $d_0 < + \infty$. We note that our main theorem is concerned with the case $d_0 < + \infty$. Every stream solution $U(y;s)$ determines the Bernoulli constant $R(s)$, which can be found from the relation \eqref{eqn:stream:dyn}.
This constant can be computed explicitly as \[ R(s) = \tfrac12 s^2 - \Omega(1) + d(s). \] As a function of $s$ it decreases from $R_0$ to $R_c$ when $s$ changes from $s_0$ to $s_c$ and increases to infinity for $s>s_c$. Here the critical value $s_c$ is determined by the relation \[ \int_0^1 \frac{dp}{(s_c^2 - 2 \Omega(p))^{3/2}} = 1. \] The latter monotonicity property of $R(s)$ shows (see Figure 1) that for any $r \in (R_c,R_0)$ the equation $R(s) = r$ has exactly two solutions $s = s_-(r)$ and $s = s_+(r)$, such that $s_-(r) < s_c < s_+(r)$. The corresponding depths \[ d_-(r) = d(s_+(r)), \ \ d_+(r) = d(s_-(r)) \] satisfy $d_-(r) < d_+(r)$ and are called the supercritical and subcritical depths respectively. The flow force constants corresponding to the depths $d_\pm(r)$ are denoted by $S_\pm(r)$. The stream solutions $U(y; s_-(r))$ and $U(y; s_+(r))$ are said to be conjugate and are defined only under the condition $r < R_0$. This assumption is automatically fulfilled for all irrotational waves, since then $R_0 = +\infty$. Under certain assumptions on the vorticity it is shown in \cite{Kozlov2015} that no unidirectional waves other than laminar flows exist for $r > R_0$. Furthermore, it was verified recently in \cite{Lokharu2020} for all vorticity distributions that solitary waves are absent for $r > R_0$. Thus, the assumption $r<R_0$ appears to be natural in what follows. The lower bound $r > R_c$ is well known for arbitrary unidirectional waves with vorticity; see \cite{Kozlov2015}, \cite{Kozlov2017a} and \cite{Wheeler15b}. \subsection{Formulations of main results.} Following the notation of the previous section, our main theorem is \begin{theorem} \label{thm:BLC} Let $\psi \in C^{2,\gamma}(\overline{D})$ and $\eta \in C^{2,\gamma}(\R)$ be a solution to \eqref{eqn:stream} with $\inf_{D}\psi_y > 0$ and $r \in (R_c,R_0)$.
Then the flow force constant $\FF$ given by \eqref{flowforce} enjoys the following properties: \begin{itemize} \item[(i)] the inequalities $\FF_-(r) \leq \FF \leq \FF_+(r)$ are always true; \item[(ii)] the equality $\FF = \FF_-(r)$ holds true only for supercritical laminar flows and symmetric solitary waves of positive elevation supported by supercritical streams; \item[(iii)] the equality $\FF = \FF_+(r)$ is true only for subcritical laminar flows. \end{itemize} \end{theorem} The claim (i) of Theorem \ref{thm:BLC} is known as the classical Benjamin and Lighthill conjecture. We emphasise that it is new even in the irrotational case, since it covers all possible smooth solutions; the original proof by Benjamin \cite{Benjamin95} deals only with Stokes waves (and small-amplitude perturbations of those), and Benjamin's argument relies heavily on that assumption. The parts (ii) and (iii) of Theorem \ref{thm:BLC} are of separate interest and had never been considered in the literature before. For instance, claim (iii) forbids the existence of subcritical solitary waves; this was an open problem for a long time and was only recently resolved in \cite{KozLokhWheeler2020} using an asymptotic analysis. Both statements (ii) and (iii) are new even for irrotational waves. \begin{corollary} Under the assumptions of the theorem, the water wave profile $\eta$ is subject to the following properties: \begin{itemize} \item[(i')] for all $x\in\R$ we have $d_-(r) < \eta(x)$; \item[(ii')] denoting $\hat{\eta} = \sup_{\R} \eta$ and $\check{\eta} = \inf_{\R} \eta$, we have $\check{\eta} < d_+(r) < \hat{\eta}$, while the equalities $\check{\eta} = d_+(r)$ or $d_+(r) = \hat{\eta}$ are only possible if $\check{\eta} = \hat{\eta} = d_+(r)$; \item[(iii')] the equality $\check{\eta} = d_-(r)$ is valid only for supercritical laminar flows and supercritical solitary waves. \end{itemize} \end{corollary} These statements follow from Proposition \ref{p:bounds} and Theorem \ref{thm:BLC}.
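As a purely numerical illustration of the quantities introduced above (it plays no role in the proofs), the following Python sketch tabulates $d(s)$, $R(s)$, $s_c$, $s_\pm(r)$ and $d_\pm(r)$ in the irrotational case $\omega = 0$, where everything is available in closed form: $\Omega \equiv 0$, $d(s) = 1/s$, $R(s) = s^2/2 + 1/s$, $s_c = 1$ and $R_c = 3/2$, $R_0 = +\infty$. The concrete value $r = 2$ is an arbitrary choice for the illustration.

```python
# Illustration (not part of the proofs): d(s), R(s), s_c, s_±(r), d_±(r)
# for the irrotational case ω = 0, where Ω ≡ 0, so d(s) = 1/s and
# R(s) = s²/2 + 1/s in closed form, with s_c = 1 and R_c = 3/2.
def d(s):                      # depth of the stream solution U(y; s)
    return 1.0 / s

def R(s):                      # Bernoulli constant R(s) = s²/2 - Ω(1) + d(s)
    return 0.5 * s**2 + 1.0 / s

def bisect(f, a, b, tol=1e-13):
    # simple bisection; assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

s_c, R_c = 1.0, R(1.0)         # critical point of R; R_c = 3/2, R_0 = +∞

r = 2.0                        # any value of r in (R_c, R_0)
# R decreases on (0, s_c) and increases on (s_c, ∞), so R(s) = r has
# exactly two roots s_-(r) < s_c < s_+(r).
s_minus = bisect(lambda s: R(s) - r, 1e-3, s_c)
s_plus = bisect(lambda s: R(s) - r, s_c, 10.0)

# supercritical depth d_-(r) = d(s_+(r)), subcritical depth d_+(r) = d(s_-(r))
d_minus, d_plus = d(s_plus), d(s_minus)
print(s_minus, s_plus, d_minus, d_plus)
```

For $r = 2$ one obtains $s_-(r) \approx 0.54$ and $s_+(r) \approx 1.68$, so that $d_-(r) \approx 0.60 < d_+(r) \approx 1.85$, in agreement with the ordering of the supercritical and subcritical depths.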
Inequalities $d_-(r) < \eta(x)$ and $\check{\eta} < d_+(r) < \hat{\eta}$ for irrotational Stokes waves were first obtained by Keady \& Norbury \cite{Keady1975}. An extension to arbitrary irrotational solutions was given by Kozlov \& Kuznetsov \cite{Kozlov2007,Kozlov2009}. For waves with vorticity only non-strict versions of the inequalities were known; see \cite{Kozlov2015} and references therein. The last claim (iii') is new even in the irrotational setting. Our proofs are based on properties of the flow force flux functions introduced recently in \cite{KozLokhWheeler2020} and used in \cite{Lokharu2020} for proving the nonexistence of steady waves with $r \geq R_0$. \section{Preliminaries} \subsection{Reformulation of the problem} Under assumption \eqref{uni} we can apply the partial hodograph transform introduced by Dubreil-Jacotin \cite{DubreilJacotin34}. Thus, we introduce the new independent variables \[ q = x, \ \ p = \psi(x,y), \] while the new unknown function $h(q,p)$ (the height function) is defined by the identity \[ h(q,p) = y. \] Note that it is related to the stream function $\psi$ through the formulas \begin{equation} \label{height:stream} \psi_x = - \frac{h_q}{h_p}, \ \ \psi_y = \frac{1}{h_p}, \end{equation} where \begin{subequations}\label{height} \begin{equation} \label{unih} h_p > 0 \end{equation} throughout the fluid domain by \eqref{uni}. An advantage of the new variables is that instead of two unknown functions $\eta(x)$ and $\psi(x,y)$ with an unknown domain of definition, we have one function $h(q,p)$ defined in the fixed strip $S = \R \times (0,1)$. An equivalent problem for $h(q,p)$ is given by \begin{alignat}{2} \label{height:main} \left( \frac{1+h_q^2}{2h_p^2} + \Omega \right)_p - \left(\frac{h_q}{h_p}\right)_q &=0 &\qquad& \text{in } S,\\ \label{height:top} \frac{1+h_q^2}{2h_p^2} + h &= r &\quad& \text{on }p=1,\\ \label{height:bot} h &= 0 &\quad& \text{on }p=0.
\end{alignat} \end{subequations} The wave profile $\eta$ becomes the boundary value of $h$ on $p = 1$: \[ h(q,1) = \eta(q), \ \ q \in \R. \] Using \eqref{height:stream} and Bernoulli's law \eqref{eqn:bernoulli} we recalculate the flow force constant $\FF$ defined in \eqref{flowforce} as \begin{equation}\label{height:ff} \FF = \int_0^1 \left( \frac{1-h_q^2}{2h_p^2} - h - \Omega + \Omega(1) + r \right) h_p \, dp. \end{equation} Laminar flows defined by the stream functions $U(y; s)$ correspond to height functions $h = H(p; s)$ that are independent of the horizontal variable $q$. The corresponding equations are \[ \left(\frac{1}{2H_p^2} + \Omega\right)_p = 0, \ \ H(0) = 0, \ \ H(1) = d(s), \ \ \frac{1}{2 H_p^2(1)} + H(1) = R(s). \] Solving these equations for $H(p; s)$ explicitly, we find \[ H(p;s) = \int_0^p \frac{1}{\sqrt{s^2 -2\Omega(\tau)}} \, d\tau. \] Given a height function $h(q,p)$ and a stream solution $H(p;s)$, we define \begin{equation}\label{ws} \ws(q,p) = h(q,p) - H(p;s). \end{equation} This notation will be frequently used in what follows. In order to derive an equation for $\ws$ we first write \eqref{height:main} in non-divergence form as \[ \frac{1+h_q^2}{h_p^2} h_{pp} - 2\frac{h_q}{h_p} h_{qp} + h_{qq} - \omega(p) h_p = 0. \] Now using our ansatz \eqref{ws}, we find \begin{equation}\label{ws:main} \frac{1+h_q^2}{h_p^2} \ws_{pp} - 2\frac{h_q}{h_p} \ws_{qp} + \ws_{qq} - \omega(p) \ws_p + \frac{(\ws_q)^2 H_{pp}}{h_p^2} - \frac{\ws_p (h_p + H_p) H_{pp}}{h_p^2 H_p^2} = 0. \end{equation} Thus, $\ws$ solves a homogeneous elliptic equation in $S$ and is subject to a maximum principle; see \cite{Vitolo2007} for an elliptic maximum principle in unbounded domains. The boundary conditions for $\ws$ can be obtained directly from \eqref{height:top} and \eqref{height:bot} by inserting \eqref{ws} and using the corresponding equations for $H$.
This gives \begin{subequations}\label{ws:boundary} \begin{alignat}{2} \frac{(\ws_q)^2}{2h_p^2} - \frac{\ws_p (h_p + H_p)}{2h_p^2 H_p^2} + \ws &=r- R(s) &\qquad& \text{for } p=1,\label{ws:top} \\ \ws &= 0&\qquad& \text{for } p=0. \label{ws:bot} \end{alignat} \end{subequations} For $s = s_\pm(r)$, we have $r - R(s_\pm(r)) = 0$ and \eqref{ws:top} turns into \begin{equation} \label{wspm} \frac{\wspm_p}{H_p^3} - \wspm = \frac{(\wspm_q)^2}{2h_p^2} + \frac{(\wspm_p)^2(2h_p + H_p)}{2H_p^3 h_p^2}, \ \ p = 1. \end{equation} This shows that $\wspm_p(q,1)$ is positive whenever $\wspm(q,1)$ is positive. This property will be used in what follows. In many formulas, such as \eqref{ws:top}, it is convenient to omit the dependence on $s$ in the notation of $H$. The right choice of $s$ will always be clear from the context and is the same as for $\ws$. Furthermore, since the Bernoulli constant $r$ will remain unchanged, we will often omit it from notations such as $s_\pm$ or $\FF_\pm$. \subsection{Subsolutions} Let $h \in C^{2,\gamma}(\overline{S})$ be a solution to \eqref{height} for some $r > 0$ and let $\FF$ be the corresponding flow force constant. For an arbitrary sequence $\{q_j\}_{j=1}^\infty \subset \R$, possibly unbounded, we consider the horizontal shifts \[ h_j(q,p) = h(q+q_j,p), \ \ j \geq 1. \] Thus, every function $h_j$ solves the same problem \eqref{height} with the same Bernoulli constant. Now let $\gamma' \in (0,\gamma)$ be given. Then the embedding $C^{2,\gamma}(K) \hookrightarrow C^{2,\gamma'}(K)$ is compact for any compact subset $K \subset \overline{S}$. Because the norms $\|h_j\|_{C^{2,\gamma}(\overline{S})}$ are uniformly bounded in $j$, we can find a subsequence $\{h_{j_k}\}_{k=1}^\infty$ and a function $\tilde{h} \in C^{2,\gamma'}(\overline{S})$ with the following property: for any compact $K \subset \overline{S}$ the restrictions of the functions $h_{j_k}$ to $K$ converge to $\tilde{h}|_K$ in $C^{2,\gamma'}(K)$.
Then it is straightforward to show that $\tilde{h}$ has the same regularity as $h$, that is $\tilde{h} \in C^{2,\gamma}(\overline{S})$. It is also clear that $\tilde{h}$ solves the same elliptic problem \eqref{height} with the same Bernoulli constant $r$ and has the same flow force constant $\FF$. Note that if the convergence takes place for some $\gamma'$, then it is true for all $\gamma' \in (0,\gamma)$ by interpolation. Such a function $\tilde{h}$ will be referred to as a subsolution of $h$. This terminology will be useful in order to avoid multiple repetitions of the subsequence argument given above. Let us give an explicit definition. \begin{definition} \label{defsub} Given two functions $h,\tilde{h} \in C^{2,\gamma}(\overline{S})$, we say that $\tilde{h}$ is a subsolution of $h$ if there exists a sequence $\{q_j\}_{j=1}^\infty \subset \R$ such that the functions $h_j(q, p) = h(q +q_j , p)$ converge to $\tilde{h}$ in $C^{2,\gamma'}(K)$ for all compact $K \subset \overline{S}$ and all $\gamma' \in (0,\gamma)$. \end{definition} Note that the definition is symmetric: $\tilde{h}$ is a subsolution of $h$ if and only if $h$ is a subsolution of $\tilde{h}$. The following property of subsolutions will be useful in what follows. \begin{proposition} \label{subsub} Let $\tilde{h}\in C^{2,\gamma}(\overline{S})$ be a subsolution of $h\in C^{2,\gamma}(\overline{S})$ and let $\hat{h}\in C^{2,\gamma}(\overline{S})$ be a subsolution of $\tilde{h}$. Then $\hat{h}$ is a subsolution of $h$. \end{proposition} \begin{proof} Let us fix $\epsilon>0$ and a bounded closed interval $I$. Because $\hat{h}$ is a subsolution of $\tilde{h}$, one finds $\hat{q} \in \R$ such that the function $\tilde{h}(q+\hat{q})$ is close to $\hat{h}(q)$ on $I$ in $C^{2,\gamma'}(I \times [0,1])$, that is \[ \|\tilde{h}(\cdot+\hat{q}) - \hat{h}(\cdot)\|_{C^{2,\gamma'}(I \times [0,1])} < \epsilon/2.
\] Now we consider the functions $\tilde{h}(\cdot+\hat{q})$ and $h$. It is clear that $h$ is a subsolution of $\tilde{h}(\cdot+\hat{q})$, and a similar argument gives $\tilde{q} \in \R$ such that \[ \|h(\cdot+\tilde{q}) - \tilde{h}(\cdot+\hat{q})\|_{C^{2,\gamma'}(I \times [0,1])} < \epsilon/2. \] Combining the two inequalities, we conclude that for any $\epsilon > 0$ and any interval $I$ there exists $\tilde{q} \in \R$ such that \[ \|h(\cdot+\tilde{q}) - \hat{h}(\cdot)\|_{C^{2,\gamma'}(I \times [0,1])} < \epsilon. \] This shows that $\hat{h}$ is a subsolution of $h$. \end{proof} \subsection{General bounds for solutions} As noted by Keady and Norbury \cite{KeadyNorbury78}, the bounds \[ d_-(r) \leq \check{\eta} \leq d_+(r) \leq \hat{\eta} \] are closely related to the Benjamin and Lighthill conjecture about the flow force. Here \[ \check{\eta} = \inf_\R \eta, \ \ \hat{\eta} = \sup_\R \eta. \] For periodic waves all inequalities are strict, and the case of equalities is quite delicate; it is indirectly contained in claims (ii) and (iii) of Theorem \ref{thm:BLC}. Let us recall a precise statement about bounds for the surface profile, mainly borrowed from \cite{Kozlov2015}, that will be used in our proofs. \begin{proposition} \label{p:bounds} Let $(\psi,\eta)$ be as in Theorem \ref{thm:BLC} with $r \in (R_c,R_0)$. Then the following is true: \begin{itemize} \item[(i)] for all $x\in\R$ we have $\eta(x) > d_-(r)$; furthermore, if $\check{\eta} = d_-(r)$, then $\FF = \FF_-(r)$; \item[(ii)] $\hat{\eta} \geq d_+(r)$ and if the equality holds true, then $\FF = \FF_+(r)$; \item[(iii)] $\check{\eta} \leq d_+(r)$ and $\check{\eta} = d_+(r)$ implies $\FF = \FF_+(r)$. \end{itemize} \end{proposition} \begin{proof} The part of the statement about bounds for the surface profile was proved in \cite{Kozlov2015}, and we only need to verify the remaining part about the flow force constant. Let $h$ be the height function corresponding to $\psi$, defined in Section 3.1.
Assume that $\check{\eta} = d_-(r)$. Then there exists an unbounded sequence $\{q_j\}_{j=1}^\infty$ such that $w^{(s_+)}(q_j,1) \to 0$, where $w^{(s_+)}$ is defined by \eqref{ws} with $s = s_+(r)$. Let $\tilde{w}$ be the corresponding subsolution of $w^{(s_+)}$. Then $\wtilde(0,1) = 0$ by construction, while $\wtilde \geq 0$ in $S$, which follows from a similar property of $w^{(s_+)}$. Note that $w^{(s_+)} \geq 0$ by the maximum principle, since the inequality $\check{\eta} \geq d_-(r)$ and the boundary condition \eqref{ws:bot} show that $w^{(s_+)} \geq 0$ along the boundary of $S$. Therefore, $\wtilde \geq 0$ in $S$ as a limit of nonnegative functions. We claim that $\wtilde = 0$ identically in $S$. Indeed, if it is not the case, then the Hopf lemma would give $\wtilde_p(0,1) < 0$; note that $\wtilde$ solves the same elliptic problem and the maximum principle is applicable. But this is in contradiction with the relation \eqref{wspm}, which holds with $\wtilde$ in place of $\wspm$. The latter identity computed at $q=0$ gives $\wtilde_p(0,1) \geq 0$, since $\wtilde(0,1) = \wtilde_q(0,1) = 0$. Thus, we have proved that $\wtilde = 0$ in $S$, and then $\FF = \FF_-(r)$. A similar argument with subsolutions works for the cases $\hat{\eta} = d_+(r)$ and $\check{\eta} = d_+(r)$. \end{proof} \begin{proposition} \label{p:wp} Let $h \in C^{2,\gamma}(\overline{S})$ be a solution to \eqref{height} with $r \in (R_c,R_0)$. Then there exists a stream solution $H(p;s)$ with $s \in (s_0,s_-(r))$ such that $\sup_{q \in \R} h_p(q,0) < H_p(0;s)$. \end{proposition} \begin{proof} If $s_0 = 0$, then the statement is trivial, since then $H_p(0;s) = 1/s \to +\infty$ as $s \to s_0$. If $d_0 = +\infty$, then we choose $s \in (s_0,s_-(r))$ sufficiently small so that $\ws < 0$ on $p=1$. Then $\ws < 0$ everywhere in $S$ by the maximum principle, and so $\ws_p < 0$ on $p=0$ by the Hopf lemma.
The remaining case, $s_0 > 0$ and $d_0 < +\infty$, uses the fact that $H_p(1;s) \to 0$ as $s \to +\infty$ (which follows from the classification of vorticity functions in \cite{Kozlov2015}). Then we choose $s \in (s_0,s_-(r))$ for which $\ws_p < 0$ on $p=1$; this yields $\ws < 0$ in $S$, and then $\ws_p < 0$ on $p=0$ as before, by the Hopf lemma. \end{proof} \subsection{Asymptotics for solitary waves} A solitary wave solution to \eqref{eqn:stream} is defined by the asymptotic relation \begin{equation} \label{sol} \lim_{x \to \pm\infty} \eta(x) = d. \end{equation} For unidirectional waves it guarantees that the corresponding height function $h$ has a subsolution which is a laminar flow with the depth $d$. In particular, this requires \[ d = d_-(r) \ \ \text{or} \ \ d = d_+(r), \] where $r$ is the Bernoulli constant. It was recently proved in \cite{KozLokhWheeler2020} that assumption \eqref{sol}, even imposed one-sided, requires $d = d_-(r)$; that is, all solitary waves are supercritical (supported by supercritical laminar flows $H(p;s)$ with $s>s_c$). Furthermore, one verifies that \begin{equation} \label{solder} h(q,p) \to H(p;s_+(r)), \ \ h_q(q,p) \to 0, \ \ q\to \pm \infty \end{equation} in $C^{2,\gamma'}([0,1])$ and $C^{1,\gamma'}([0,1])$ respectively for all $\gamma' \in (0,\gamma)$, provided $h \in C^{2,\gamma}(\overline{S})$. The asymptotics \eqref{solder} show that $\FF = \FF_-(r)$, which follows from \eqref{height:ff} by passing to the limit $q \to +\infty$. All these considerations are valid even if we assume that \eqref{sol} holds true only at positive infinity. In order to obtain higher-order asymptotics for $h$, we need to introduce the following eigenvalue problem: \[ - \left( \frac{\varphi_p}{H_p^3} \right)_p = \mu \frac{\varphi}{H_p}, \ \ p \in [0,1], \] where $H = H(p;s_+(r))$ and the eigenfunction $\varphi(p)$ is subject to the boundary conditions \[ \varphi(0) = 0, \ \ \varphi_p(1) = H_p^3(1)\varphi(1).
\] This Sturm-Liouville problem arises as a form of the dispersion relation; see \cite{ConstantinStrauss04}. The first eigenvalue $\mu_1 = \lambda_1^2 > 0$ is always positive, since $H = H(p;s_+(r))$ is a supercritical stream solution. This suggests that the difference $h-H$ must decay as $e^{-\lambda_1 q}$ as $q \to +\infty$. A precise statement is given below. \begin{proposition} \label{solasymp} Let $h \in C^{2,\gamma}(\overline{S})$ be a solution to \eqref{height} satisfying $\lim_{q \to +\infty} h(q,1) = d_-(r)$. Then \[ h(q,p) = H(p;s_+(r)) + a \varphi_1(p) e^{-\lambda_1 q} + f(q,p) e^{-\lambda_1'q}, \ \ (q,p) \in S, \] where $a \neq 0$, $\lambda_1' > \lambda_1$ and $f \in C^{2,\gamma}(\overline{S})$. \end{proposition} These asymptotics were proved in \cite{Hur07} under the additional assumption $\lim_{q \to -\infty} h(q,1) = d_-(r)$. However, this assumption is not essential, and the proofs from \cite{Hur07} apply with minor modifications, so we omit the details. \subsection{Auxiliary functions $\sigma$ and $\kappa$} For given $r > R_c$ and $s > s_0$ we define \begin{equation} \label{sigma} \sigma(s;r) = \int_0^1 \left( \frac{1}{2H_p^2(p;s)} - H(p;s) - \Omega(p) + \Omega(1) + r \right) H_p(p;s) \,dp. \end{equation} This expression coincides with the flow force constant for $H(p; s)$, but with the Bernoulli constant $R(s)$ replaced by $r$. We also note that \[ \sigma(s_\mp(r);r) = \FF_\pm(r). \] The key property of $\sigma(s; r)$ is stated below. \begin{lemma} \label{lemma:sigma} For a given $r \in (R_c,R_0)$ the function $s \mapsto \sigma(s;r)$ increases for $s \in (s_0,s_-(r))$, decreases for $s \in (s_-(r),s_+(r))$ and increases to infinity for $s \in (s_+(r),+\infty)$.
\end{lemma} \begin{proof} Because \[ H_p(p;s) = \frac{1}{\sqrt{s^2 - 2 \Omega(p)}}, \ \ \partial_s H_p(p;s) = -s H_p^3(p;s), \] we can compute the derivative \[ \begin{split} \sigma_s(s;r) & = \int_0^1 \left( \frac{1}{2H_p^2(p;s)} - H(p;s) - \Omega(p) + \Omega(1) + r \right) \partial_s H_p(p;s) \,dp \\ & + \int_0^1 \left( -\frac{\partial_s H_p(p;s)}{H_p^3(p;s)} - \partial_s H(p;s) \right) H_p(p;s) \,dp \\ & = \int_0^1 \left( -\frac{1}{2H_p^2(p;s)} - \Omega(p) + \Omega(1) + r \right) \partial_s H_p(p;s) \,dp - d(s)d'(s) \\ & = \int_0^1 \left( -\tfrac12 s^2 + \Omega(1) + r \right) \partial_s H_p(p;s) \,dp - d(s) \int_0^1 \partial_s H_p(p;s) \,dp \\ & = -s (r-R(s)) \int_0^1 H_p^3(p;s) \, dp. \end{split} \] Finally, because $R(s) < r$ for $s_-(r) < s < s_+(r)$, and $R(s) > r$ for $s> s_+(r)$ or $s < s_-(r)$, we obtain the statement of the lemma. \end{proof} Our function $\sigma(s;r)$ and its role are similar to the function $\sigma(h)$ introduced by Keady and Norbury in \cite{Keady1975}, whose main purpose is comparison with the flow force constant $\FF$. The following function will also be involved in our analysis: \begin{equation} \label{kappa} \kappa(s;r) = 2 (\FF - \sigma(s;r)) - (r- R(s))^2. \end{equation} A direct computation gives \[ \begin{split} \partial_s \kappa(s;r) & = - 2 \partial_s \sigma(s;r) + 2 (r - R(s)) R'(s) \\ & = 2 s (r-R(s)) \int_0^1 H_p^3(p;s) \, dp + 2 (r - R(s)) (s + d'(s)) \\ & = 2 s (r - R(s)). \end{split} \] Thus, we obtain \begin{lemma} \label{lemma:kappa} For a given $r \in (R_c,R_0)$ the function $s \mapsto \kappa(s;r)$ decreases for $s \in (s_0,s_-(r))$, increases for $s \in (s_-(r),s_+(r))$ and decreases to minus infinity for $s \in (s_+(r),+\infty)$. \end{lemma} These monotonicity properties of the functions $\sigma$ and $\kappa$ will be used in what follows.
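The derivative formula obtained in the proof of Lemma \ref{lemma:sigma} can also be checked numerically. The Python sketch below (an illustration only; the constant-vorticity example $\omega = -b$ with $b = 1$ and the values of $s$ and $r$ are arbitrary choices for the test) compares a central finite difference of $\sigma(s;r)$, computed by quadrature from the definition \eqref{sigma}, with the closed expression $-s(r - R(s))\int_0^1 H_p^3\,dp$.

```python
# Numerical sanity check (not part of the proofs) of the identity
# σ_s(s; r) = -s (r - R(s)) ∫₀¹ H_p³ dp for the example ω = -b, where
# Ω(p) = -bp and H_p(p; s) = (s² + 2bp)^{-1/2} in closed form.
import math

b = 1.0                            # vorticity strength (example value)

def Hp(p, s):                      # H_p(p; s) = (s² - 2Ω(p))^{-1/2}
    return (s * s + 2.0 * b * p) ** -0.5

def H(p, s):                       # H(p; s) = ∫_0^p H_p dτ, closed form
    return (math.sqrt(s * s + 2.0 * b * p) - s) / b

def simpson(f, n=2000):            # composite Simpson rule on [0, 1]
    h = 1.0 / n
    acc = f(0.0) + f(1.0)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(k * h)
    return acc * h / 3.0

def R(s):                          # R(s) = s²/2 - Ω(1) + d(s)
    return 0.5 * s * s + b + H(1.0, s)

def sigma(s, r):                   # the definition of σ(s; r)
    def integrand(p):
        Om = -b * p                # Ω(p), with Ω(1) = -b
        return (0.5 / Hp(p, s) ** 2 - H(p, s) - Om + (-b) + r) * Hp(p, s)
    return simpson(integrand)

s, r, eps = 1.3, 2.5, 1e-5
numeric = (sigma(s + eps, r) - sigma(s - eps, r)) / (2 * eps)
analytic = -s * (r - R(s)) * simpson(lambda p: Hp(p, s) ** 3)
print(numeric, analytic)           # the two values agree to high accuracy
```

The same check with other values of $s$, $r$ and $b$ confirms the sign pattern of $\sigma_s$ that underlies Lemmas \ref{lemma:sigma} and \ref{lemma:kappa}.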
\begin{figure}[!t] \centering \includegraphics[scale=0.9]{sigmakappa} \caption{Graphs of the functions $\sigma(s;r)$ and $\kappa(s;r)$.} \label{fig:stokessolitary} \end{figure} \subsection{Flow force flux functions} Our aim is to extract information by comparing the flow force constant $\FF$ with $\sigma(s;r)$ for different values of $s > s_0$. For this purpose we introduce the (relative) flow force flux function $\Phi^{(s)}$ by setting \begin{equation} \label{fff} \phis(q,p) = \int_0^p \left( \frac{(\ws_p(q,p'))^2}{h_p(q,p') (H_p(p';s))^2} - \frac{(\ws_q(q,p'))^2}{h_p(q,p')} \right) \, dp'. \end{equation} These functions were recently introduced in \cite{Lokharu2020} and \cite{KozLokhWheeler2020}. The same computation as in Section 3 of \cite{KozLokhWheeler2020} gives \begin{equation} \label{fff:der} \phis_q = - \ws_q \left( \frac{1 + (\ws_q)^2}{h_p^2} - \frac{1}{H_p^2}\right), \ \ \phis_p = \frac{(\ws_p)^2}{h_p H_p^2} - \frac{(\ws_q)^2}{h_p}. \end{equation} This shows that $\phis \in C^{2,\gamma}(\overline{S})$, provided $h \in C^{2,\gamma}(\overline{S})$ and $\omega \in C^{\gamma}([0,1])$. A key property of $\phis$ is that it solves a homogeneous elliptic equation, as stated in the next proposition, while satisfying certain boundary conditions involving the flow force constant. \begin{proposition} There exist functions $b_1, b_2 \in L^{\infty}(S)$ such that \begin{equation} \label{fff:main} \frac{1+h_q^2}{h_p^2} \phis_{pp} - 2\frac{h_q}{h_p} \phis_{qp} + \phis_{qq} + b_1 \phis_q + b_2 \phis_p = 0 \ \ \text{in} \ \ S. \end{equation} Furthermore, $\phis$ satisfies the boundary conditions \begin{subequations}\label{fff:boundary} \begin{alignat}{2} \phis &= 2(\FF - \sigma(s;r)) - 2 (r - R(s)) \ws(q,1) + (\ws(q,1))^2 &\qquad& \text{for } p=1,\label{fff:top} \\ \phis &= 0&\qquad& \text{for } p=0. \label{fff:bot} \end{alignat} \end{subequations} In the irrotational case $b_1,b_2 = 0$ and \eqref{fff:main} is equivalent to the Laplace equation.
\end{proposition} \begin{proof} For the proof we refer to \cite{KozLokhWheeler2020}. Let us outline the derivation of \eqref{fff:top}. For this purpose we compute the difference \[ \begin{split} \FF - \sigma(s;r) & = \int_0^1 \left( \frac{1-(\ws_q)^2}{2h_p^2} - \ws - \frac{1}{2 H_p^2} \right) H_p \, dp \\ & + \int_0^1 \left( \frac{1-(\ws_q)^2}{2h_p^2} - h - \Omega + \Omega(1) + r \right) \ws_p \, dp \\ & = \int_0^1 \left( \frac{(\ws_p)^2}{2h_p H_p^2} - \frac{(\ws_q)^2}{2h_p} - \ws H_p \right) \, dp \\ & + \int_0^1 \left( -\frac{1}{2H_p^2} - \ws - H - \Omega + \Omega(1) + r \right) \ws_p \, dp. \end{split} \] Now using the identity \[ - \Omega(p) + \Omega(1) + R(s) = \frac{1}{2H_p^2} + H(1) \] and integrating the first-order terms, we conclude that \[ 2(\FF - \sigma(s;r)) = 2 (r - R(s)) \ws(q,1) - (\ws(q,1))^2 + \int_0^1 \left( \frac{(\ws_p)^2}{h_p H_p^2} - \frac{(\ws_q)^2}{h_p} \right) \, dp. \] \end{proof} The next proposition, borrowed from \cite{Lokharu2020}, explains the meaning of the auxiliary function $\kappa(s; r)$. \begin{proposition} \label{p:fff} Let $h \in C^{2,\gamma}(\overline{S})$ be a solution to \eqref{height} with $r>R_c$. Assume that the flow force flux function $\phis$ for some $s > s_0$ satisfies $\inf_{q \in \R} \phis(q, 1) \leq 0$. Then \[ \inf_{q \in \R} \phis(q, 1) = \kappa(s; r), \] where $\kappa(s; r)$ is defined by \eqref{kappa}. Furthermore, the infimum is attained along a sequence $\{q_j\}_{j=1}^\infty$ for which $\lim_{j \to +\infty} \ws(q_j,1) = r- R(s)$. \end{proposition} For a proof of this statement we refer to Proposition 2.4 in \cite{Lokharu2020}. \section{Proof of Theorem \ref{thm:BLC}} First we prove the left inequality in the first statement of Theorem \ref{thm:BLC}. For the rest of the paper we assume that $h$ is the height function corresponding to $\psi$ and $\eta$; see Section 3.1 for details and notation.
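To make the boundary identity \eqref{fff:top} concrete, the following Python sketch (illustrative only; the particular values of $s'$, $s$ and $r$ are arbitrary choices) evaluates both sides of \eqref{fff:top} in the irrotational case $\omega = 0$ for the laminar test function $h(q,p) = H(p;s')$, for which $\ws$ is linear in $p$, every integral is elementary, and the flow force computed from \eqref{height:ff} for the chosen $r$ equals $\sigma(s';r)$.

```python
# Numerical check (not part of the proofs) of the boundary identity for
# Φ^{(s)} on p = 1 in the irrotational case ω = 0, using the laminar test
# function h(q, p) = H(p; s') = p/s', so that H_p(p; s) = 1/s, d(s) = 1/s
# and R(s) = s²/2 + 1/s in closed form.
def sigma(s, r):                  # σ(s; r) = s/2 - 1/(2s²) + r/s for ω = 0
    return 0.5 * s - 0.5 / s**2 + r / s

def R(s):                         # Bernoulli constant of the stream H(p; s)
    return 0.5 * s**2 + 1.0 / s

s_prime, s, r = 0.8, 1.5, 2.0     # example values

w1 = 1.0 / s_prime - 1.0 / s      # w^{(s)}(q, 1) = d(s') - d(s)
FF = sigma(s_prime, r)            # flow force of h = H(p; s') for this r

# Left-hand side: Φ^{(s)}(q, 1) = ∫ (w_p)²/(h_p H_p²) dp, with w_q = 0 and
# w_p = 1/s' - 1/s, h_p = 1/s', H_p² = 1/s² all constant in p.
lhs = w1**2 * s_prime * s**2

# Right-hand side of the boundary identity on p = 1:
rhs = 2 * (FF - sigma(s, r)) - 2 * (r - R(s)) * w1 + w1**2

print(lhs, rhs)                   # both equal 0.6125 for these parameters
```

The agreement of the two sides, term by term elementary here, is exactly what the integration-by-parts derivation in the proof above establishes in general.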
\subsection{Proof of the lower bound $\FF \geq \FF_-(r)$ (part (i) of Theorem \ref{thm:BLC})} Note that the first claim of Proposition \ref{p:bounds} allows us to assume $\etamin> \dmin$, because otherwise $\FF = \FFmin$ and there is nothing to prove. In fact, we are going to prove a stronger statement, which will be used in the proof of part (ii). \begin{proposition} \label{p:lowerbound} Under the assumptions of Theorem \ref{thm:BLC} and assuming that $\etamin> \dmin$, we have $\FF > \FFmin$. \end{proposition} \begin{proof} Assume the contrary, that $\FF \leq \FFmin$. First we show that the function $\Phi^{(s_+(r))}$ is nonnegative in $S$ and that \begin{equation} \label{FFinf} \inf_{\R} \Phi^{(s_+(r))}(q,1) > 0. \end{equation} If the latter is not true, then the infimum above is nonpositive and Proposition \ref{p:fff} gives a sequence of points $\{q_j\}_{j=1}^\infty$ for which \[ \lim_{j \to +\infty} w^{(s_+(r))}(q_j,1) = r - R(s_+(r)) = 0, \] which contradicts the assumption $\check{\eta} > d_-(r)$. Thus, we have proved \eqref{FFinf}. Now we show that for some $s \in (s_-(r),s_+(r))$ the function $\phis$ attains negative values somewhere along the upper boundary $p=1$. Indeed, by Proposition \ref{p:bounds} we can find $s_\star \in (s_-(r),s_+(r))$ such that \[ H(1;s_\star) = \inf_\R h(q,1), \] unless $\check{\eta} = d_+(r)$, in which case $\FF = \FFplus$ by Proposition \ref{p:bounds} (iii) and there is nothing to prove. The choice of $s_\star$ and the boundary relation \eqref{fff:top} show that $\inf_{\R} \phistar (q,1) \leq 2(\FF - \sigma(s_\star;r))$. But according to Lemma \ref{lemma:sigma} (see Figure \ref{fig:stokessolitary}) we have $\sigma(s_\star;r) > \FFmin$, while $\FFmin \geq \FF$ by assumption. Thus, $\inf_{\R} \phistar(q,1) < 0$. Finally, by continuity we find $s_\dagger \in (s_-(r),s_+(r))$ such that \[ \inf_{\R}\Phi^{(s_\dagger)}(q,1) = 0.
\] Then $\kappa(s_\dagger;r) = 0$ by Proposition \ref{p:fff}, and the definition of $\kappa$ gives $\FF - \sigma(s_\dagger;r) \geq 0$. But $\sigma(s_\dagger;r) > \FFmin$ by Lemma \ref{lemma:sigma}, and we arrive at a contradiction with the assumption $\FFmin \geq \FF$. Therefore, $\FF > \FFmin$. \end{proof} \subsection{Proof of the upper bound $\FF \leq \FFplus$ and part (iii) of Theorem \ref{thm:BLC}.} We will complete the proof of the first claim (i) and prove part (iii) of Theorem \ref{thm:BLC} at the same time. More precisely, we will show below that any nontrivial solution (other than a parallel flow) satisfies the strict inequality $\FF < \FFplus$. Assume the contrary, that $\FF \geq \FFplus$. Then we can prove \begin{lemma} \label{lemma:up} Let $\FF \geq \FFplus$. Then for any $s \in (s_0,s_-(r))$ the function $\phis$ is nonnegative in $S$. \end{lemma} \begin{proof} The proof is by contradiction. For a given $s \in (s_0, s_-(r))$ we assume that \[ \inf_\R \phis(q,1) \leq 0. \] Then $\kappa(s;r) \leq 0$ by Proposition \ref{p:fff}. Thus, Lemma \ref{lemma:kappa} (see Figure \ref{fig:stokessolitary}) gives $\kappa(s_-(r);r) = 2(\FF - \FF_+(r)) < 0$, contradicting the assumption $\FF \geq \FFplus$. Therefore, the function $\phis$ is positive along the upper boundary $p=1$ and vanishes on the bottom, and we finish the proof by using the maximum principle. \end{proof} Now we can easily obtain a contradiction. It follows from Lemma \ref{lemma:up} that the function $\Phi^{(s_-(r))}$ is nonnegative in $S$, so that \[ \Phi^{(s_-(r))}_p(q,0) = \frac{\left(w_p^{(s_-)}(q,0) \right)^2}{h_p(q,0)\, H_p^2(0;s_-(r))} > 0 \] for all $q\in\R$, provided $\Phi^{(s_-(r))}$ is not identically zero. It is not identically zero, because otherwise $h = H(p;s_-(r))$ and $\FF = \FF_+(r)$. Thus, by Proposition \ref{p:wp} we can find a stream solution $H(p; s_\star)$ with $s_0 < s_\star < s_-(r)$ such that $w^{(s_\star)}_p(q_\star,0) = \phistar_p(q_\star,0) = 0$ for some $q_\star \in \R$.
But this contradicts the conclusion of Lemma \ref{lemma:up}, which together with the Hopf lemma requires that $\phis_p(q,0) > 0$ for all $q \in \R$ and all $s \in (s_0,s_-(r))$. This finishes the proof of (iii) and of the upper bound in (i). \subsection{Proof of part (ii) of Theorem \ref{thm:BLC}.} Let $h$ be an arbitrary solution satisfying the assumptions of the theorem and such that $\FF = \FF_-(r)$. On the basis of Proposition \ref{p:lowerbound} we can additionally assume that $\check{\eta}= d_-(r)$. Our aim is to prove that the function \[ w(q,p) = h(q,p) - H(p;s_+(r)) \] converges uniformly to zero as $q \to \pm \infty$. Once this is established, it remains to apply Theorem 3.1 in \cite{Hur07}, which shows that $h$ determines a supercritical and symmetric solitary wave, monotonically decaying to its asymptotic level $d_-(r)$ on each side of the crest. Without loss of generality, we only need to show that \begin{equation}\label{wdecay} w(q,1) \to 0 \ \ \text{as} \ \ q \to +\infty. \end{equation} Let us prove \begin{lemma}\label{lemma:decay} If \eqref{wdecay} is not true, then there exists a non-zero subsolution $\hat{w}$ of $w$, corresponding to a sequence $\{\hat{q}_j\}_{j=1}^\infty$ accumulating at positive infinity, such that $\hat{w}(q, 1) \to 0$ as $q \to +\infty$. \end{lemma} \begin{proof} First we note that even though \eqref{wdecay} may be violated, there exists a sequence $q_j \to +\infty$ such that $w(q_j,1) \to 0$ as $j \to +\infty$. Indeed, otherwise we could find a subsolution $\tilde{w}$ for which $\inf_\R \tilde{w}(q,1) > 0$, and then $\FF > \FF_-(r)$ by Proposition \ref{p:lowerbound}, leading to a contradiction. On the other hand, if \eqref{wdecay} is not true, then for any sufficiently small $\epsilon > 0$ we have \[ \limsup_{q \to +\infty} w(q,1) > \epsilon. \] Let $I_j = (a_j, b_j)$ be the largest interval containing $q_j$ such that $w(q, 1) < \epsilon$ for all $q \in I_j$. Assuming $0 < \epsilon < w(0,1)$, we see that all $I_j$ are bounded, while $|I_j| \to +\infty$ as $j \to +\infty$.
Indeed, if the lengths $|I_j|$ were bounded along some subsequence, then we could find a subsolution $\tilde{w}$, corresponding to a subsequence of $\{q_j\}_{j=1}^\infty$, which is not identically zero and such that $\tilde{w}(0,1) = 0$. The latter is forbidden by Proposition \ref{p:bounds} (i). Now, let us consider a subsolution $\tilde{w}$ corresponding to the sequence $\{a_j\}_{j=1}^\infty$ of left endpoints of the intervals $I_j$. Then $\tilde{w}$ must be nonnegative in $S$ with $\tilde{w}(0, 1) = \epsilon$, so it is not identically zero. Moreover, because $|I_j| \to + \infty$ as $j \to +\infty$ and $w(q, 1) \leq \epsilon$ on each $I_j$, we conclude that \[ \tilde{w}(q,1) \leq \epsilon \ \ \text{for all} \ \ q > 0. \] If $\tilde{w}(q,1)$ converges to zero as $q \to +\infty$, then we are done. If not, then there exist $\hat{\epsilon} > 0$ and a sequence $\hat{q}_j \to +\infty$ such that $\tilde{w}(\hat{q}_j,1) \geq \hat{\epsilon}$, $j \geq 1$. Let $\hat{w}$ be a subsolution of $\tilde{w}$ corresponding to $\{\hat{q}_j\}_{j=1}^\infty$. Then $\hat{w}$ is not identically zero, since $\hat{w}(0,1) \geq \hat{\epsilon}$ by construction. Furthermore, \[ \hat{w}(q,1) \leq \epsilon \ \ \text{for all} \ \ q \in \R. \] Thus, if $\epsilon$ is sufficiently small from the beginning (depending on $r$), then $\hat{w}$ is a small-amplitude solution supported by a supercritical flow. Therefore, $\hat{w}$ describes a supercritical solitary wave, as follows from \cite{GrovesWahlen08} (the only small-amplitude solutions of this type are solitary waves). As a solitary wave, $\hat{w}$ decays to zero at infinity and provides the desired subsolution; see Proposition \ref{subsub}. \end{proof} Let us now prove \eqref{wdecay}. If it is not the case, then Lemma \ref{lemma:decay} provides a non-zero subsolution $\hat{w}$, corresponding to a sequence $\hat{q}_j \to +\infty$, that decays to zero at infinity. Let $\hat{\Phi}$ be the flow force flux function corresponding to $\hat{w}$.
It is defined as a limit over compact subsets of the flow force flux functions $\Phi(q+\hat{q}_j,p)$ as $j \to +\infty$, where $\Phi$ is the flow force flux function for $w$. At the same time, $\hat{\Phi}$ can be defined explicitly by formula \eqref{fff}, where $\ws$ is replaced by $\hat{w}$ and $h$ is replaced by $\hat{w} + H$. Let us obtain asymptotics for $\hat{\Phi}$. By Proposition \ref{solasymp} the function $\hat{w}$ satisfies \[ \hat{w}(q,p) = a \varphi_1(p) e^{-\lambda_1 q} + f(q,p) e^{-\lambda_1'q}, \ \ (q,p) \in S, \] where $a \neq 0$ and $f \in C^{2,\gamma}(\overline{S})$. Using this formula in \eqref{fff}, one obtains \[ \hat{\Phi}(q,p) = a^2 \frac{\varphi_1'(p)\varphi_1(p)}{H_p^3(p;s_-(r))} e^{-2\lambda_1 q} + g(q,p) e^{-(\lambda_1 + \lambda_1')q}, \ \ (q,p) \in S, \] where $g \in C^{2,\gamma}(\overline{S})$. In particular, there exist $\hat{q}_\star > 0$ and $A>0$ such that \[ \hat{\Phi}(q,p) \geq A p e^{-2\lambda_1 q} \] for all $q \geq \hat{q}_\star$ and $p \in [0,1]$. Now we consider the interval $\hat{I} = [0,2\hat{q}_\star]$ containing $\hat{q}_\star$. We know that the functions $\Phi(q+\hat{q}_j,p)$ converge to $\hat{\Phi}$ in $C^{2,\gamma'}(\hat{I} \times [0,1])$ as $j \to +\infty$. Therefore, we can find an integer $j_\star > 0$ such that \[ \Phi(\hat{q}_\star + \hat{q}_j,p) \geq \tfrac12 A p e^{-2\lambda_1 \hat{q}_\star} \] for all $p \in [0,1]$ and all $j \geq j_\star$. Note that the right-hand side is independent of $j$. Let us put $Q^{(j)} = [\hat{q}_\star + \hat{q}_j, \hat{q}_\star + \hat{q}_{j+1}] \times [0,1]$ and denote by $Q^{(j)}_l$, $Q^{(j)}_r$ and $Q^{(j)}_t$ the left, right and top boundaries of the rectangle, excluding corner points. Furthermore, we put \[ \check{\eta}_j = d_-(r)+\min_{ [\hat{q}_\star + \hat{q}_j, \hat{q}_\star + \hat{q}_{j+1}]} w(q,1). \] By the assumption $\check{\eta} = d_-(r)$ we have that $\check{\eta}_j \to d_-(r)$ as $j \to +\infty$.
Moreover, since $\Phi = w^2$ along the top boundary $p=1$, we also have \[ \lim_{j \to \infty} \inf_{Q^{(j)}} \Phi = 0. \] \begin{figure}[!t] \centering \includegraphics[scale=0.7]{subsolitary} \caption{A sketch of the fluid domain in $x,y$-coordinates, corresponding to the regions $Q^{(j)}$.} \label{fig:stokessolitary} \end{figure} Let us consider a stream solution $H(p;s_j)$ for which $d(s_j) = \check{\eta}_j$. Because $\check{\eta}_j > d_-(r)$, we have $s_-(r)<s_j < s_+(r)$, while $s_j \to s_+(r)$ as $j \to +\infty$. In particular, the functions $\Phi^{(s_j)}$ converge to $\Phi$ in $C^{2,\gamma}(\overline{S})$ as $j \to +\infty$. This allows us to find an integer $j_\dagger \geq j_\star$ such that \begin{equation} \label{phipasymp} \Phi^{(s)}(\hat{q}_\star + \hat{q}_j,p) \geq \tfrac14 A p e^{-2\lambda_1 \hat{q}_\star} \end{equation} for all $p \in [0,1]$, all $s \in [s_j,s_+(r)]$ and all $j \geq j_\dagger$. This shows that the function $\Phi^{(s)}$ is positive on $Q^{(j)}_l$ and $Q^{(j)}_r$ for all $j \geq j_\dagger$. On the other hand, by the choice of $s_j$, we have \[ \min_{Q^{(j)}_t} w^{(s_j)} = 0, \ \ j \geq j_\dagger. \] We recall that \eqref{fff:top} gives \[ \Phi^{(s_j)}(q,1) = 2 (\FF - \sigma(s_j;r)) - 2 (r - R(s_j))w^{(s_j)}(q,1) + [w^{(s_j)}(q,1)]^2, \ \ q\in\R. \] This shows that \[ \min_{Q^{(j)}_t} \Phi^{(s_j)} \leq 2 (\FF - \sigma(s_j;r)) = 2 (\FF_-(r) - \sigma(s_j;r)) < 0, \ \ j \geq j_\dagger \] by Lemma \ref{lemma:sigma}. On the other hand, $\min_{Q^{(j)}_t} \Phi > 0$, and by continuity there exists $s_j^\star \in (s_j,s_+(r))$ such that \[ \min_{Q^{(j)}_t} \Phi^{(s_j^\star)} = 0. \] Because of \eqref{phipasymp}, the function $\Phi^{(s_j^\star)}$ is positive on the vertical sides $Q^{(j)}_l$ and $Q^{(j)}_r$, while it vanishes on the bottom of the rectangle $Q^{(j)}$, provided $j \geq j_\dagger$.
Thus the minimum of $\Phi^{(s_j^\star)}$ on $Q^{(j)}$ is attained at some point $(q_j^\star,1) \in Q^{(j)}_t$ on the upper boundary, where $\Phi^{(s_j^\star)}(q_j^\star,1) = 0$. In particular, we have $\Phi^{(s_j^\star)}_p(q_j^\star,1) > 0$ by the Hopf lemma, so that $w^{(s_j^\star)}_q(q_j^\star,1) \neq 0$. Now, since $\Phi^{(s_j^\star)}_q(q_j^\star,1) = 0$, differentiating the boundary relation \eqref{fff:top} with respect to $q$ gives $w^{(s_j^\star)}(q_j^\star,1) = r - R(s_j^\star)$. Using this we find \[ \Phi^{(s_j^\star)}(q_j^\star,1) = \kappa(s_j^\star;r) = 0. \] But $\kappa(s_j^\star;r) = 0$ requires $\kappa(s_+(r);r) > 0$ by Lemma \ref{lemma:kappa}, which leads to a contradiction, since $\kappa(s_+(r);r) = \FF - \FF_-(r) = 0$. Thus, we have proved \eqref{wdecay}, and statement (ii) of the theorem now follows from Theorem 3.1 in \cite{Hur07}. \bibliographystyle{siam}
\section{Introduction}\label{sec-Introduction} The design of fluid-saturated poroelastic media (FSPM) represents a topic of steadily growing research interest due to its mathematical complexity and great application potential. Although the theory of FSPM has been developed in the context of geomechanics and civil engineering, nowadays these types of materials are abundant in many engineering applications. A convenient design of microstructures can provide a metamaterial property related to controllable fluid transport or elasticity. In particular, soft robots can be designed as inflatable porous structures generating motion and force due to variable fluid content, see e.g.,~ \cite{Andreasen-Sigmund-2013}. To this aim, the behaviour of the fluid-saturated porous materials is described by the Biot model \cite{Biot1957}, within the small strain theory, which was postulated using a phenomenological approach. The homogenization method enabled the derivation of the quasistatic Biot equations \cite{Burridge-Keller-1982}. Since then, a number of works extended the results to the dynamic case, which is important for treating wave propagation, see e.g.,~ \cite{Rohan-Naili-ZAMP2020}. As an extension beyond the linear theory, a modified Biot model with strain-dependent poroelastic and permeability coefficients was proposed in \cite{Rohan-Lukes-2015}. Topology optimization of microstructures constituting the FSPM was treated in \cite{Andreasen_2012} and \cite{Andreasen-Sigmund-2013}. Therein, the fluid-structure interaction problem was handled in the homogenization framework and an approximation towards computational simplification was proposed. In this paper, we aim at a two-scale optimization approach allowing for a spatial grading of the microstructure design. Two-scale optimization problems have already been discussed extensively in the literature.
The whole idea started with the seminal paper of Bends{\o}e and Kikuchi \cite{BendsoeKikuchi}, in which the following concept was suggested: for a given parametrization of the unit cell, carry out the homogenization procedure on a fixed parameter grid in a preprocessing step. Then, in every step of the optimization, first retrieve, for each design element, (approximate) effective material coefficients by interpolation. Next, plug these coefficients into the state equation, solve the latter and evaluate the cost. The other way round, sensitivities are computed by the chain rule, i.e. first differentiate the quantity of interest \text{ with respect to~} the material coefficients and then differentiate the material coefficients \text{ with respect to~} the design parametrization. This procedure opens the way for the application of any suitable gradient-based optimization solver, like, e.g.,~ OCM \cite{sigmund99}, MMA \cite{Svanberg-MMA-1987} or SnOpt \cite{Gill-Snopt-2002}, to name only those most prominently used in structural topology and material optimization. While this concept essentially carries over to other classes of problems, as is done by \cite{das2020,zhou2021,chen2023} for thermomechanical settings, we opted to follow a slightly different avenue in this paper. There are several reasons: First, the concept depends, by its nature, to a large extent on the chosen parametrization. If the parameters enter the homogenized properties in a substantially non-convex way (as is the case if, e.g.,~ rotations of the base cells are allowed), many local minima might be introduced and additional measures must be taken to avoid getting trapped in one of them. Second, it is not easy to extend the original concept with respect to the use of completely independent types of unit cells, characterized either by different geometries or by different material configurations. In this case, specifying a smooth parametrization is non-trivial.
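As an aside, the interpolation-plus-chain-rule workflow of \cite{BendsoeKikuchi} described above can be sketched in a few lines. The 1D parametrization, the tabulated values and the scalar "state problem" below are purely hypothetical stand-ins; a real offline phase would tabulate the homogenized tensors from cell-problem solves:

```python
import numpy as np

# Hypothetical 1D design parametrization: theta in [0, 1] controls, say, a
# pore radius; E_eff(theta) stands in for one effective (homogenized)
# coefficient. Offline phase: tabulate on a fixed parameter grid (here mocked
# by a smooth closed-form expression instead of true cell-problem solves).
theta_grid = np.linspace(0.0, 1.0, 21)
E_grid = 1.0 - 0.9 * theta_grid**2   # placeholder for offline homogenization

def E_eff(theta):
    """Online phase: interpolate the effective coefficient from the table."""
    return np.interp(theta, theta_grid, E_grid)

def dE_dtheta(theta, h=1e-6):
    """Sensitivity of the interpolated coefficient (finite differences)."""
    return (E_eff(theta + h) - E_eff(theta - h)) / (2.0 * h)

# Toy scalar state problem: spring u = f / E, 'compliance' J = f * u = f^2 / E.
f = 2.0

def J(E):
    return f**2 / E

def dJ_dE(E):
    return -f**2 / E**2

# Chain rule dJ/dtheta = dJ/dE * dE/dtheta, as fed to a gradient-based solver.
theta = 0.4
grad = dJ_dE(E_eff(theta)) * dE_dtheta(theta)
print(grad)
```

Softening the material (larger `theta`) increases the toy compliance, so the computed gradient is positive; the same bookkeeping applies componentwise when the effective coefficient is a tensor.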
The typical idea would be to first introduce an independent parametrization for either cell type (for example using sizing variables) and then add on top a smooth interpolation scheme for the effective tensors as used, for instance, in multi-material optimization (see \cite{hvejsel2011}). The problem with that is, however, that the second level of interpolation introduces material coefficients for which typically \emph{no} interpretation in terms of a microstructure exists. Thus, an additional penalization strategy is required, which ensures that those unphysical choices do not remain in the optimal solution. Such an approach was successfully demonstrated in the recent work \cite{YPSILANTIS2022106859}. In another recent article, \cite{LIU2023116485} chose two unit cell types, described via level-set functions, such that the mixture of their geometric parameters can be directly interpreted as a third unit cell type. \cite{PIZZOLATO2019112552} also opted for level-set functions to describe the geometry of the microstructures. But, with respect to the handling of multiple material classes, the authors defined floating patches, where each patch is a subdomain of the design domain occupied by only one microstructure type. Then, the layout of these patches is optimized on the macroscopic level and their overlaps are combined via a differentiable maximum operator. In our paper, we describe how these disadvantages can be circumvented using the SGP concept. The basic idea has already been introduced in \cite{Semmler-SIAM-2018} and is now generalized to a multiphysics, two-scale setting. This involves an extension of an MMA-type block-separable model function (see \cite{stingl-siam-2009}) to the poroelastic setting, a split of the computations into an \emph{offline} and an \emph{online} phase, which is particularly suited for homogenization based problems, and a numerical solution scheme for the nearly global optimization of block-separable subproblems.
We would like to note here that the term \emph{block-separable} implies that the minimization can be carried out separately for each design element; a design element itself, however, can be described by multiple design degrees of freedom. For a further motivation of the SGP method, we refer to the first paragraph in \cref{sec:sgp}. Here, we would just like to add that, in the whole optimization process, two different types of sensitivities are relevant. First, there are the sensitivities of constraint or cost functions \text{ with respect to~} the effective material coefficients. These constitute a substantial ingredient of the block-separable model used at the heart of the SGP method. Second, there are the sensitivities of the material coefficients \text{ with respect to~} the chosen parametrization. In the context of the suggested two-scale SGP framework, the latter ones are not strictly required, but can help to come up with an improved interpolation model used in the offline phase. In any case, the derivation of sensitivities presented in this paper, for the particular context of fluid-saturated porous media, relies on derivations in \cite{Huebner-Solid-2019}, where the sensitivities of the homogenized coefficients were also reported, see also \cite{Rohan-Lukes-2015}. Finally, we would like to comment on the generality of the presented approach. Although the SGP concept outlined in our paper can be applied to a large range of multiphysics two-scale material optimization problems, the Biot model of fluid saturated porous media provides an ideal test bed for the method. This is for several reasons: first, the physical coupling is non-trivial. Second, it is very natural to set up competing objective functions, such as the structural compliance on the one hand and the enhanced fluid flow through an outflow boundary on the other hand. And third, configurable types of microstructures supporting either the first or the second goal can be deduced in a straightforward manner.
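The first type of sensitivity mentioned above, of a cost function with respect to the material coefficients, can be illustrated on the classical self-adjoint case: for a compliance-type functional $J = f\cdot u$ with the state equation $K(a)\,u = f$, one has $\partial J/\partial a = -u^\top (\partial K/\partial a)\, u$, i.e. the adjoint state coincides with $u$ and no extra adjoint solve is needed. The two-spring toy stiffness below is hypothetical and only serves to verify this identity against finite differences:

```python
import numpy as np

# Minimal illustration (not the paper's FE model): a 2-spring chain whose
# stiffness matrix depends linearly on a material coefficient 'a'.
K0 = np.array([[2.0, -1.0],
               [-1.0, 1.0]])

def K(a):
    return a * K0          # so dK/da = K0

f = np.array([0.0, 1.0])
a = 3.0
u = np.linalg.solve(K(a), f)

# Compliance J = f.u is self-adjoint: the adjoint state equals u, hence
# dJ/da = -u^T (dK/da) u.
grad_adjoint = -u @ K0 @ u

# Finite-difference check of the adjoint formula.
h = 1e-6
u_p = np.linalg.solve(K(a + h), f)
u_m = np.linalg.solve(K(a - h), f)
grad_fd = (f @ u_p - f @ u_m) / (2.0 * h)
print(grad_adjoint, grad_fd)
```

Here $J(a) = f\cdot K_0^{-1}f/a = 2/a$, so both gradients equal $-2/a^2 = -2/9$ at $a=3$; the same mechanism generalizes to the poroelastic coefficients, with a genuine adjoint problem in the non-self-adjoint cases.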
The structure of the remainder of this paper is as follows: In \cref{sec-twoscale} all ingredients of the two-scale problem are described. These comprise a brief review of the constitutive laws for the Biot model (\cref{sec-biot}), the poroelastic state problem in variational form (\cref{sec:state-problem}), a generic sketch of the two-scale problem constrained by the poroelasticity equations (\cref{sec:two-scale-opt-problem}) and an adjoint analysis providing sensitivities with respect to effective material coefficients, as used later by the SGP method (\cref{sec-sensitivity}). Finally, two types of microstructures are suggested in the form of configurable unit cells (\cref{sec:design_params}). In \cref{sec:sgp} the SGP concept for the solution of two-scale optimization problems is introduced in greater detail. For this, the two-scale problem is discretized and extended for the use of multiple types of unit cells (\cref{sec-twoscale-discr}). Then, a separable sequential approximation concept is suggested (\cref{sec:subproblems}) and finally the SGP method is presented in an algorithmic form (\cref{sec:SGPalg}). In \cref{sec:numerical-results}, the advantages of the SGP algorithm are discussed using various types of two-scale problems. \section{Formulation of the two-scale optimization problem} \label{sec-twoscale} In this section, we explain our optimization strategy. Although it can be applied to similar problems involving several physical fields or multiphysics couplings, in this paper we consider fluid-saturated porous media represented by the Biot model, which can be derived using the homogenization of the fluid-structure interaction problem restricted to small deformation kinematics, see e.g.,~ \cite{Burridge-Keller-1982,Brown2011,Rohan-Naili-Lemaire-CMAT2015}. The homogenization result is recalled in the next section. \paragraph{Notation} We employ the following notation.
Since we deal with a two-scale problem, we distinguish the ``macroscopic'' and ``microscopic'' coordinates, $x$ and $y$, respectively. We use ${\nabla_x = (\partial_i^x)}$ and ${\nabla_y = (\partial_i^y)}$ for differentiation \text{ with respect to~} the coordinates $x$ and $y$, respectively, whereby $\nabla \equiv \nabla_x$. By $\eeb{{\boldsymbol{u}}} = 1/2[(\nabla{\boldsymbol{u}})^T + \nabla{\boldsymbol{u}}]$, we denote the strain of a vectorial function ${\boldsymbol{u}}$, where the transpose operator is indicated by the superscript ${}^T$. The Lebesgue space of square-integrable functions on an open bounded domain $D\subset \mathbb{R}^3$ is denoted by $L^2(D)$; the Sobolev space ${\boldsymbol{W}}^{1,2}(D)$ of square-integrable vector-valued functions on $D$ with square-integrable first-order generalized derivatives is abbreviated by ${\bf{H}}^1(D)$. Further, ${\bf{H}}_\#^1(Y_m)$ is the Sobolev space of vector-valued $Y$-periodic functions (indicated by the subscript $\#$). \subsection{The homogenized Biot -- Darcy model}\label{sec-biot} We report the homogenization result presented e.g.,~ in \cite{Rohan-Lukes-2015}, {cf.~} \cite{Huebner-Solid-2019}, where the problem of locally optimized microstructures has been described. The homogenized model of the porous elastic medium incorporates local problems for characteristic responses which are employed to compute the effective material coefficients of the Biot model. The local problems specified below, related to the homogenized model, are defined at the microscopic representative unit cell ${Y = \Pi_{i=1}^3]0,\ell_i[ \subset \mathbb{R}^3}$, which splits into the solid part occupying domain $Y_m$ and the complementary channel part $Y_c$. Thus, \begin{align}\label{eq-6} Y &= Y_m \cup Y_c \cup \Gamma_Y \;,\quad \nonumber\\ Y_c &= Y \setminus Y_m \;,\quad \nonumber\\ \Gamma_Y &= \ol{Y_m } \cap \ol{Y_c }\;, \end{align} where by $\ol{Y_d }$ for $d=m,c$, we denote the closure of the open bounded domain $Y_d$.
By $\ensuremath \raisebox{.15em}{{$\scriptstyle\sim$}} \kern-.75em \int_{Y_d} = \vert Y \vert^{-1}\int_{Y_d}$, with $Y_d\subset \ol{Y}$ for $d=m,c$, we denote the local average ($\vert Y\vert$ is the volume of domain $Y$). Obviously, the unit volume $\vert Y\vert=1$ can always be chosen. We employ the usual elasticity bilinear form, involving two vector fields ${\boldsymbol{w}}$ and ${\boldsymbol{v}}$, which reads \begin{equation} \aYm{{\boldsymbol{w}}}{{\boldsymbol{v}}} = \sim \kern-1.2em \int_{Y_m} ({{\rm I} \kern-0.2em{\rm D}} \eeby{{\boldsymbol{w}}}) : \eeby{{\boldsymbol{v}}}\;, \end{equation} where ${{\rm I} \kern-0.2em{\rm D}} = (D_{ijkl})$ is the elasticity tensor satisfying the usual symmetries, $D_{ijkl} = D_{klij} = D_{jikl}$, and $\eeby{{\boldsymbol{v}}} = \frac{1}{2}(\nabla_y {\boldsymbol{v}} + (\nabla_y {\boldsymbol{v}})^T)$ is the linear strain tensor associated with the displacement field ${\boldsymbol{v}}$. In what follows, by the microstructure $\mathcal{Y}(x)$, we mean the decomposition \cref{eq-6} of the representative cell $Y$ together with the material properties, represented in our case by the elasticity ${{\rm I} \kern-0.2em{\rm D}}$ only. If the structure is perfectly periodic, the microstructures $\mathcal{Y} \equiv \mathcal{Y}(x)$ are independent of the macroscopic position $x \in \Omega$. Otherwise, the local problems must be considered at any macroscopic position, i.e. for almost any $x \in \Omega$, see e.g.,~ \cite{Brown2011} in the context of slowly varying ``quasi-periodic'' microstructures. It should be pointed out that this issue is of special importance when dealing with homogenization-based material design optimization; as will be explained below, a regularization is required to control the design variation within $\Omega$.
The local microstructural response is obtained by solving the following decoupled problems: \begin{itemize} \item Find ${{\mbox{\boldmath$\omega$\unboldmath}}}^{ij}\in {\bf{H}}_\#^1(Y_m)$ for any $i,j = 1,2,3$ satisfying \begin{equation}\label{eq-h5a} \begin{split} \aYm{{{\mbox{\boldmath$\omega$\unboldmath}}}^{ij} + {\mbox{\boldmath$\Pi$\unboldmath}}^{ij}}{{\boldsymbol{v}}} & = 0\;,\; \forall {\boldsymbol{v}} \in {\bf{H}}_\#^1(Y_m)\;, \end{split} \end{equation} where ${\mbox{\boldmath$\Pi$\unboldmath}}^{ij} = (\Pi_k^{ij})$, $i,j,k = 1,2,3$ with components $\Pi_k^{ij} = y_j\delta_{ik}$. \item Find ${{\mbox{\boldmath$\omega$\unboldmath}}}^P \in {\bf{H}}_\#^1(Y_m)$ satisfying \begin{equation}\label{eq-h5b} \begin{split} \aYm{{{\mbox{\boldmath$\omega$\unboldmath}}}^P}{{\boldsymbol{v}}} & = \sim \kern-1.2em \int_{\Gamma_Y} {\boldsymbol{v}}\cdot {\boldsymbol{n}}^{[m]} \,\mathrm{dS}_{y}, \; \forall {\boldsymbol{v}} \in {\bf{H}}_\#^1(Y_m) \;. \end{split} \end{equation} \item Find $({\mbox{\boldmath$\psi$\unboldmath}}^k,\pi^k) \in {\bf{H}}_\#^1(Y_c) \times L^2(Y_c)$ for $k = 1,2,3$ such that \begin{equation}\label{eq-S3} \begin{split} \int_{Y_c} \nabla_y {\mbox{\boldmath$\psi$\unboldmath}}^k: \nabla_y {\boldsymbol{v}} - \int_{Y_c} \pi^k \nabla_y\cdot {\boldsymbol{v}} &= \int_{Y_c} v_k\;,\\ \int_{Y_c} q \nabla_y \cdot {\mbox{\boldmath$\psi$\unboldmath}}^k & = 0\;, \\ \end{split} \end{equation} \end{itemize} $\forall {\boldsymbol{v}} \in {\bf{H}}_\#^1(Y_c)$ and $\forall q \in L^2(Y_c)$. Effective material properties of the homogenized deformable fluid-saturated porous medium are described in terms of homogenized poroelastic coefficients: the drained elasticity ${{\rm A} \kern-0.6em{\rm A}}$, the stress coupling ${\boldsymbol{C}}$ and the compressibility $N$, all being related to the solid skeleton.
All these coefficients, including the intrinsic hydraulic permeability ${\boldsymbol{K}}$, are computed by substituting the characteristic microscopic responses \cref{eq-h5a,eq-h5b,eq-S3} into the following expressions: \begin{equation}\label{eq-h8} \begin{split} A_{ijkl} = \aYm{{{\mbox{\boldmath$\omega$\unboldmath}}}^{ij} + {\mbox{\boldmath$\Pi$\unboldmath}}^{ij}}{{{\mbox{\boldmath$\omega$\unboldmath}}}^{kl} + {\mbox{\boldmath$\Pi$\unboldmath}}^{kl}}\;,\quad \\ C_{ij} = -\sim \kern-1.2em \int_{Y_m}\mbox{\rm div}_y {{\mbox{\boldmath$\omega$\unboldmath}}}^{ij} = \aYm{{{\mbox{\boldmath$\omega$\unboldmath}}}^P}{{\mbox{\boldmath$\Pi$\unboldmath}}^{ij}}\;,\\ N = \aYm{{{\mbox{\boldmath$\omega$\unboldmath}}}^P}{{{\mbox{\boldmath$\omega$\unboldmath}}}^P} = \sim \kern-1.2em \int_{\Gamma_Y} {{\mbox{\boldmath$\omega$\unboldmath}}}^P\cdot {\boldsymbol{n}} \,\mathrm{dS}_{y}\;,\\ K_{ij} = \sim \kern-1.2em \int_{Y_c} \psi_i^j = \sim \kern-1.2em \int_{Y_c} \nabla_y {\mbox{\boldmath$\psi$\unboldmath}}^i : \nabla_y {\mbox{\boldmath$\psi$\unboldmath}}^j \;. \end{split} \end{equation} Obviously, the tensors ${{\rm A} \kern-0.6em{\rm A}} = (A_{ijkl} )$, ${\boldsymbol{C}} = (C_{ij} )$ and ${\boldsymbol{K}} = (K_{ij} )$ are symmetric, and ${{\rm A} \kern-0.6em{\rm A}}$ inherits all the symmetries of ${{\rm I} \kern-0.2em{\rm D}}$; moreover ${{\rm A} \kern-0.6em{\rm A}}$ is positive definite and $N > 0$. The hydraulic permeability ${\boldsymbol{K}}$ is, in general, positive semi-definite. It is positive definite whenever the channels constitute a connected domain generated as the periodic lattice by $Y_c$; for this, denoting by $\Gamma_Y^k\subset \partial Y$, $k=1,\dots,6$ the faces of $Y$, it must hold that $\Gamma_Y^k\cap \partial Y_c \not = \emptyset$ for all $k=1,\dots,6$.
\subsection*{Coupled flow deformation problem} The Biot--Darcy model of poroelastic media for quasi-static, evolutionary problems imposed in $\Omega$ is constituted by the following equations involving the stress ${{\mbox{\boldmath$\sigma$\unboldmath}}}$, displacement ${\boldsymbol{u}}$, strain ${\boldsymbol{e}}({\boldsymbol{u}})$, fluid pressure $p$ and seepage velocity ${\boldsymbol{w}}$: \begin{equation}\label{eq-B1} \begin{split} -\nabla \cdot {{\mbox{\boldmath$\sigma$\unboldmath}}} & = {\boldsymbol{f}}^s,\; \quad {{\mbox{\boldmath$\sigma$\unboldmath}}} = {{\rm A} \kern-0.6em{\rm A}} \eeb{{\boldsymbol{u}}} - {\boldsymbol{B}} p,\\ -\nabla\cdot {\boldsymbol{w}} & = {\boldsymbol{B}}:\eeb{\dot{\boldsymbol{u}}} + M \dot p, \\ {\boldsymbol{w}} &= -\frac{{\boldsymbol{K}}}{\bar\eta} \left (\nabla p - {\boldsymbol{f}}^f \right), \end{split} \end{equation} where the homogenized coefficients are given by \cref{eq-h8} and \begin{equation}\label{eq-B1a} \begin{split} {\boldsymbol{B}} & := {\boldsymbol{C}} + \phi {\boldsymbol{I}}\;,\\ M & := N + \phi \gamma\;. \end{split} \end{equation} Above, $\bar\eta$ is the relative fluid viscosity, $\gamma$ is the fluid compressibility and $\phi = \vert Y_c\vert / \vert Y \vert$ is the porosity (volume fraction of the fluid-filled channels). The effective volume forces in \cref{eq-B1}, acting in the solid and fluid phases, are denoted by ${\boldsymbol{f}}^s$ and ${\boldsymbol{f}}^f$, respectively. It is important to note that $\bar\eta = \eta^{\text{phys}}/\varepsilon_0^2$ is defined for a given fluid ($\eta^{\text{phys}}$) and a given microstructure scale: $\varepsilon_0 = \ell_0/L$, where $L$ is a characteristic macroscopic length and $\ell_0$ is the characteristic microstructure size, typically given by the ``pore diameter''. Thus, for a given fluid, the effective permeability ${\boldsymbol{K}}/\bar\eta$ is proportional to $\varepsilon_0^2$, \ie it reflects the microstructure size.
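The quadratic scaling of the effective permeability with the microstructure size can be illustrated by a back-of-the-envelope computation (all numbers below are hypothetical, with a water-like viscosity):

```python
# Illustrative scaling check for K / eta_bar with eta_bar = eta_phys / eps0^2.
eta_phys = 1e-3          # physical dynamic viscosity [Pa s] (water-like)
L = 1.0                  # characteristic macroscopic length [m]
K_dimless = 0.02         # hypothetical dimensionless cell permeability

def effective_mobility(ell0):
    eps0 = ell0 / L                  # scale parameter eps0 = ell0 / L
    eta_bar = eta_phys / eps0**2     # rescaled viscosity
    return K_dimless / eta_bar       # effective permeability K / eta_bar

# Halving the pore size reduces the effective mobility by a factor of 4,
# while the other homogenized coefficients stay unchanged.
m1 = effective_mobility(1e-4)
m2 = effective_mobility(5e-5)
print(m1 / m2)
```

The ratio printed is exactly $(\ell_0^{(1)}/\ell_0^{(2)})^2 = 4$, consistent with $\boldsymbol{K}/\bar\eta \propto \varepsilon_0^2$.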
In contrast, all other coefficients are scale-independent (when the scale separation holds, \ie $\varepsilon_0$ is small enough). \begin{remark}\label{rem1} In this paper, we only consider steady state problems for the Biot medium, such that all time derivatives in \cref{eq-B1} vanish. Consequently, the Biot compressibility $M$ is not involved, as long as the porous phase, generated as a periodic lattice by the channels $Y_c$, is connected. For any microstructure with disconnected pores, such that $\ol{Y_c} \subset Y$, \ie $Y_c$ constitutes one or more inclusions within the cell $Y$, see \cite{Rohan-Naili-Lemaire-CMAT2015}, the permeability vanishes. Then, the time integration in \cref{eq-B1} leads to the mass conservation equation in the form ${\boldsymbol{B}}:\eeb{{\boldsymbol{u}}} + M p = 0$, assuming an undeformed initial configuration with zero pressure in the inclusions. In the optimization problem, besides microstructures with nondegenerate permeabilities, we shall also consider microstructures with spherical, thus disconnected, pores, constituting an impermeable material. For this case, one can choose either fluid-filled pores or empty pores; the only difference is the use of the so-called undrained material elasticity, ${{\rm A} \kern-0.6em{\rm A}}_U = {{\rm A} \kern-0.6em{\rm A}} + M^{-1}{\boldsymbol{B}}\otimes{\boldsymbol{B}}$, or the elasticity ${{\rm A} \kern-0.6em{\rm A}}$ describing the effective elasticity of the ``drained'' skeleton with empty pores. \end{remark} \subsection{State problem formulation}\label{sec:state-problem} Let $\Omega \subset \mathbb{R}^3$ be an open bounded domain. Its boundary $\partial\Omega$ splits as follows: $\partial\Omega = \Gamma_D \cup \Gamma_N$ and also $\partial\Omega = \Gamma_p \cup \Gamma_w$, where $\Gamma_D \cap \Gamma_N = \emptyset$ and $\Gamma_p \cap \Gamma_w = \emptyset$.
Assume $\Gamma_p$ consists of two disconnected, non-overlapping parts $\Gamma_p^k$, $k = 1,2$, \ie $\Gamma_p = \Gamma_p^1\cup \Gamma_p^2$ and $\Gamma_p^1\cap \Gamma_p^2 = \emptyset$. We consider the steady state problems for the linear Biot continuum occupying the domain $\Omega$. The poroelastic material parameters and the hydraulic permeability, referred to as the homogenized coefficients, are in general given by the locally defined microstructures $\mathcal{Y}(x)$, which can vary with ${x \in \Omega}$. The two-scale optimization approach proposed in this paper makes it possible to combine microstructures characterized by connected and disconnected pores, the latter characterized by a vanishing permeability. To this aim, the domain $\Omega = \Omega_0 \cup \Omega_+$ is decomposed into two parts: the permeable part $\Omega_+$ and the impermeable part $\Omega_0$, which need not constitute connected domains, each possibly being split into several disconnected subparts. Consequently, the interface $\Gamma_+ = \partial\Omega_+ \cap \partial\Omega_0$ is impermeable. Regarding the boundary decomposition, we assume that $\Gamma_{p+}^k := \Gamma_p^k\cap\partial\Omega_+ \not = \emptyset$, for $k=1,2$, so that the porous structure permits fluid transport through the domain $\Omega_+$, provided the latter connects $\Gamma_{p+}^1$ and $\Gamma_{p+}^2$. We consider the following macroscopic problem: Given the traction surface forces ${\boldsymbol{g}}$, and pressures $\bar p^k$ on the boundaries $\Gamma_p^k$, find the displacements ${\boldsymbol{u}}$ and the hydraulic pressure $P$ which satisfy \begin{equation}\label{eq-opg1} \begin{split} -\nabla\cdot\left({{\rm A} \kern-0.6em{\rm A}} \eeb{{\boldsymbol{u}}} - P {\boldsymbol{B}}\right) & = 0\quad \mbox{ in } \Omega\;,\\ {\boldsymbol{u}} & = 0 \quad \mbox{ on }\Gamma_D\;,\\ \left({{\rm A} \kern-0.6em{\rm A}} \eeb{{\boldsymbol{u}}} - P {\boldsymbol{B}}\right) \cdot {\boldsymbol{n}} & = {\boldsymbol{g}} \quad \mbox{ on }\Gamma_N\;, \end{split} \end{equation} where $P = 0$ in $\Omega_0$.
Whereas, in $\Omega_+$, $P$ satisfies \begin{equation}\label{eq-opg2} \begin{split} -\nabla\cdot{\boldsymbol{K}}\nabla P & = 0\quad \mbox{ in } \Omega_+\;,\\ P & = \bar p^k \quad \mbox{ on } \Gamma_{p+}^k\;,\quad k = 1,2\;,\\ {\boldsymbol{n}}\cdot{\boldsymbol{K}}\nabla P & = 0 \quad \mbox{ on } \Gamma_w \cup \Gamma_+\;. \end{split} \end{equation} For the steady state problem, the set of equations \cref{eq-B1} yields the two problems \cref{eq-opg1} and \cref{eq-opg2} as a decoupled system: first, \cref{eq-opg2} can be solved for $P$, then \cref{eq-opg1} is solved for ${\boldsymbol{u}}$. Moreover, for the considered type of boundary conditions, and since volume forces are not involved, the solutions are independent of the viscosity $\bar\eta$, see \cref{eq-B1}. Further, we consider an extension of $\bar p^k$ from the boundary $\Gamma_p^k$ to the whole domain $\Omega$, such that $\bar p^k = 0$ on $\Gamma_p^l$ (in the sense of traces) for $l\not = k$. Then $P = p + \sum_k\bar p^k$ in $\Omega_+$, such that $p = 0$ on $\Gamma_{p+}$. Note that $p$ can simply be extended by 0 in $\Omega_0$. For the sake of notational simplicity, we introduce $\bar p=\sum_k\bar p^k$. By virtue of the Dirichlet boundary conditions for ${\boldsymbol{u}}$ and $p$, we introduce the following spaces: \begin{equation}\label{eq-opg3} \begin{split} V_0 & = \{{\boldsymbol{v}} \in {\bf{H}}^1(\Omega)\, \vert \; {\boldsymbol{v}} = 0 \mbox{ on } \Gamma_D\}\;,\\ Q_0 & = \{q \in L^2(\Omega)\cap H^1(\Omega_+)\, \vert \; q = 0 \mbox{ on } \Gamma_{p+}\}\;.
\end{split} \end{equation} We employ the bilinear forms and the linear functional $g$, \begin{equation}\label{eq-opg4} \begin{split} \aOm{{\boldsymbol{u}}}{{\boldsymbol{v}}} & = \int_\Omega ({{\rm A} \kern-0.6em{\rm A}}\eeb{{\boldsymbol{u}}}):\eeb{{\boldsymbol{v}}}\;,\\ \bOmp{p}{{\boldsymbol{v}}} & = \int_{\Omega_+} p {\boldsymbol{B}}:\eeb{{\boldsymbol{v}}}\;,\\ \cOmp{p}{q} & = \int_{\Omega_+} \nabla q\cdot {\boldsymbol{K}}\nabla p\;,\\ g({\boldsymbol{v}}) & = \int_{\Gamma_N} {\boldsymbol{g}} \cdot {\boldsymbol{v}}\;. \end{split} \end{equation} In order to define the state problem in the context of two-scale optimization, we employ the weak formulation, which reads as follows: Find ${\boldsymbol{u}}\in V_0$ and $p \in Q_0$, such that, for all ${\boldsymbol{v}} \in V_0$ and $q \in Q_0$, \begin{equation}\label{eq-opg5} \begin{split} \aOm{{\boldsymbol{u}}}{{\boldsymbol{v}}} - \bOmp{p}{{\boldsymbol{v}}} & = g({\boldsymbol{v}}) + \bOmp{\bar p}{{\boldsymbol{v}}}, \\ \cOmp{p}{q} & = -\cOmp{\bar p}{q}. \end{split} \end{equation} To define $p$ uniquely in $\Omega$, we set $p\equiv 0$ in $\Omega_0 = \Omega\setminus \Omega_+$. Since the two fields are decoupled, first $p$ is solved from \cref{eq-opg5}$_2$, then ${\boldsymbol{u}}$ is solved from \cref{eq-opg5}$_1$, where $p$ is already known. \begin{remark}\label{rem2} In the context of the undrained porosity defined by fluid-filled closed pores $Y_c \subset Y$, see Remark~\ref{rem1}, formulation \cref{eq-opg5} is consistent also with the microstructure class $\mathcal{Y}_0^\square$, with ${{\rm A} \kern-0.6em{\rm A}}_U$ replacing ${{\rm A} \kern-0.6em{\rm A}}$ in the elasticity bilinear form \cref{eq-opg4}$_1$. The pressure is then defined pointwise in $\Omega_0$ by $P:= - {\boldsymbol{B}}:\eeb{{\boldsymbol{u}}}/M$. \end{remark} By $\alpha(x)$ we denote an abstract optimization variable which determines the homogenized coefficients for any position $x\in\Omega$.
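The decoupled solution strategy noted above (first solve the Darcy problem for the pressure, then the elasticity problem with the pressure known) can be sketched on a 1D analogue with constant scalar stand-ins for the homogenized coefficients; this is only an illustration with hypothetical data, not the paper's discretization:

```python
import numpy as np

# 1D sketch of the decoupled solve: constant scalar coefficients A
# (elasticity), B (Biot coupling), K (permeability) replace the tensors.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
A, B, K = 1.0, 0.8, 0.05
p1, p2 = 1.0, 0.0          # prescribed pressures at the two ends

# Step 1: Darcy problem -(K P')' = 0, P(0) = p1, P(1) = p2 (finite differences).
M = np.zeros((n, n)); rhs = np.zeros(n)
M[0, 0] = M[-1, -1] = 1.0
rhs[0], rhs[-1] = p1, p2
for i in range(1, n - 1):
    M[i, i - 1] = M[i, i + 1] = -K / h**2
    M[i, i] = 2.0 * K / h**2
P = np.linalg.solve(M, rhs)

# Step 2: elasticity -(A u' - B P)' = 0 with u(0) = 0 and a stress-free end
# A u'(1) - B P(1) = 0, hence u' = (B/A) P; integrate from the clamped end
# by the trapezoidal rule.
u = np.concatenate(([0.0], np.cumsum((B / A) * 0.5 * (P[1:] + P[:-1]) * h)))
print(P[n // 2], u[-1])
```

With constant $K$ the pressure is linear between the prescribed values, and the end displacement equals $(B/A)\int_0^1 P\,dx$; the same two-stage structure carries over to the 3D weak formulation.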
Below we consider $\alpha$ representing several geometrical parameters characterizing microstructures $\mathcal{Y}(x)$ of a given type. Although, in this section, we disregard some particular details related to the treatment of multiple types of $\mathcal{Y}$, we bear in mind the existence of two microstructure classes, $\mathcal{Y}_+^\square$ and $\mathcal{Y}_0^\square$, associated with the pore connectivity type, as discussed above. The ``permeable'' domain $\Omega_+$ is occupied by the material given pointwise by $\mathcal{Y}(x) \in \mathcal{Y}_+^\square$ for all $x\in \Omega_+$. Hence, both subdomains of $\Omega$ are defined implicitly by the microstructure type: $\Omega_i$ is the set of $x\in \Omega$ such that $\mathcal{Y}(x) \in \mathcal{Y}_i^\square$, where $i = +,0$. In the next section, we shall consider a two-scale optimization problem which is characterized by the following features: \begin{itemize} \item Geometrical restrictions are stated in the respective definitions of the admissible design sets for a chosen type of microstructure. For the sake of brevity, let $A$ be the set of admissible designs; further, we consider $\alpha(x) \in A$ for any $x \in \Omega$. \item We consider multiple optimization criteria which serve as objective functions or equality constraints. Without loss of generality, we confine ourselves to the two criteria $\Phi_\alpha({\boldsymbol{u}})$ and $\Psi_\alpha(p)$, defined as follows: \begin{equation}\label{eq-opg6} \begin{split} \Phi_\alpha({\boldsymbol{u}}) & = g({\boldsymbol{u}})\;,\\ \Psi_\alpha(p) & = - \int_{\Gamma_p^2}{\boldsymbol{K}}\nabla (p+ \bar p) \cdot {\boldsymbol{n}}\;. \end{split} \end{equation} While $\Phi_\alpha({\boldsymbol{u}})$ expresses the structural compliance, the criterion function $\Psi_\alpha(p)$ expresses the amount of fluid flow through the surface $\Gamma_p^2$ due to the pressure difference $\bar p^1 - \bar p^2$, see the boundary condition \cref{eq-opg2}$_2$.
These two criteria are antagonistic: reducing the pore volume naturally stiffens the structure, but also reduces the permeability. Hence, for the objective function $\Phi_\alpha$, function $\Psi_\alpha$ serves as a constraint and vice versa. \end{itemize} \subsection{Two-scale optimization problem} \label{sec:two-scale-opt-problem} Here, for ease of notation, we restrict ourselves to one microstructure type only, namely $\mathcal{Y}(x) \in \mathcal{Y}_+^\square$, so that we may consider $\Omega\equiv\Omega_+$. Hence, all the bilinear forms in \cref{eq-opg4} are defined by integration in $\Omega$. Later, in \cref{sec:sgp}, we will consider microstructures characterized by different unit cell types of the classes $\mathcal{Y}_+^\square$ and $\mathcal{Y}_0^\square$; however, the formulations introduced below can be adapted easily. We first define the direct optimization problem to find a design $\alpha(\Omega)$ that minimizes a cost functional based on the criteria defined in \cref{eq-opg6}. Further, we introduce the set $ \mathcal{T} = \mathbb{S}^6 \times \mathbb{S}^3 \times \mathbb{S}^3 \times \mathbb{R} \times \mathbb{R} $ and denote by ${{\rm I} \kern-0.2em{\rm H}} = ({{\rm A} \kern-0.6em{\rm A}},{\boldsymbol{B}},{\boldsymbol{K}},\rho_m,R) \in \mathcal{T}$ the (local) material parameters involving the effective (homogenized) material coefficients, the solid part volume $\rho_m = 1-\phi = \vert Y_m \vert/ \vert Y\vert$, and a regularization parameter $R$, which typically depends only on the design. We note that the dimension of the regularization label $R$ is, for ease of notation, chosen as 1 for now, although later in \cref{sec:two-cells-reg} more general regularization labels are used.
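The record structure of $\mathcal{T}$ can be mirrored directly in code; a hypothetical sketch (the field names and the Voigt-notation shapes are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MaterialRecord:
    """One point IH = (A, B, K, rho_m, R) of the set T: homogenized
    elasticity (assumed 6x6 in Voigt notation), Biot coupling (3x3),
    permeability (3x3), solid volume fraction, regularization label."""
    A: np.ndarray      # symmetric 6x6, element of S^6
    B: np.ndarray      # symmetric 3x3, element of S^3
    K: np.ndarray      # symmetric 3x3, element of S^3
    rho_m: float       # solid volume fraction 1 - phi
    R: float           # regularization label

    def is_symmetric(self, tol=1e-12):
        """Check the symmetry required of all three tensors."""
        return all(np.allclose(M, M.T, atol=tol) for M in (self.A, self.B, self.K))
```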
Obviously, ${{\rm I} \kern-0.2em{\rm H}}$ is given uniquely by the local admissible design $\alpha(x) \in A$, $x \in \Omega$, whereby, for a suitably chosen parametrization, the admissibility set is given simply by \begin{equation*} A = [\underline{{\boldsymbol{a}}},\overline{{\boldsymbol{a}}}] \subset \mathbb{R}^n. \end{equation*} Examples of such parametrizations along with a description of the lower and upper bounds $\underline{{\boldsymbol{a}}},\overline{{\boldsymbol{a}}} \in \mathbb{R}^n$ are presented in \cref{sec:design_params}. For a given admissible design $\alpha(\Omega)$, the state ${\boldsymbol{z}} = ({\boldsymbol{u}},p)$ is the solution of \cref{eq-opg5}, where the homogenized coefficients ${{\rm I} \kern-0.2em{\rm H}}(\alpha)$ are given in \cref{eq-h8} using the characteristic responses ${\boldsymbol{W}}(\alpha):=({{\mbox{\boldmath$\omega$\unboldmath}}}^{ij},{{\mbox{\boldmath$\omega$\unboldmath}}}^P,{\mbox{\boldmath$\psi$\unboldmath}}^k,\pi^k)$. ${\boldsymbol{W}}(\alpha)$ are the solutions of \cref{eq-h5a,eq-h5b,eq-S3}, which depend on $\alpha(x)$ in terms of the microconfigurations $\mathcal{Y}(x)$. In this way, the mapping $\mathcal{S}:\alpha(\Omega) \mapsto {\boldsymbol{z}}(\Omega)$ assigns to an admissible design its corresponding state.
It can be defined by a composition map, $\mathcal{S} = \mathcal{Z}\circ\mathcal{E}\circ\mathcal{W}$, where $\mathcal{W}$ represents the resolvents of the characteristic problems imposed on the local microconfigurations, $\mathcal{E}$ provides the homogenized material, and $\mathcal{Z}$ is the resolvent of the macroscopic state problem, so that \begin{equation}\label{eq-maps} \begin{split} \mathcal{W}:\alpha & \mapsto W\;,\\ \mathcal{E}:(\alpha, W) & \mapsto {{\rm I} \kern-0.2em{\rm H}}\;,\\ \mathcal{Z}:{{\rm I} \kern-0.2em{\rm H}}(\Omega)& \mapsto{\boldsymbol{z}}(\Omega)\; \end{split} \end{equation} Further, we employ the mapping $$\mathcal{H}:\alpha \mapsto {{\rm I} \kern-0.2em{\rm H}} ,$$ such that $ \mathcal{H} = \mathcal{E}\circ\mathcal{W} $ is the composition map defined for any admissible design $\alpha(x) \in A$, for a.a. $x \in \Omega$. The macroscopic state problem is the implicit form of the mapping $\mathcal{Z}: {{\rm I} \kern-0.2em{\rm H}} \mapsto {\boldsymbol{z}}$, such that ${{\boldsymbol{z}} \in S_0 = V_0\times Q_0}$ satisfies \begin{equation}\label{eq-SP0} \begin{split} \varphi_{{{\rm I} \kern-0.2em{\rm H}}}({\boldsymbol{z}},{\boldsymbol{v}}) = 0\quad \forall {\boldsymbol{v}} \in S_0\;, \end{split} \end{equation} where $S_0$ is the space of admissible state problem solutions. For the Biot medium problem, \cref{eq-SP0} is identified with \cref{eq-opg5}. \subsubsection{Direct two-scale optimization problem} For the given two functions of interest $\Phi$ and $\Psi$, both depending on the material distribution ${{\rm I} \kern-0.2em{\rm H}}(x)$ and the state ${\boldsymbol{z}}(x)$, the two-scale abstract optimization problem reads: \begin{equation}\label{eq-opA1} \begin{split} \min_{\alpha \in A}\ & \Phi({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}}) + \Lambda_\Xi \Xi({{\rm I} \kern-0.2em{\rm H}}) \\ \mbox{ s.t. 
} & \Psi({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}}) = \Psi_0\;,\\ & {\boldsymbol{z}} = \mathcal{S}(\alpha),\\ & {{\rm I} \kern-0.2em{\rm H}} = \mathcal{H}(\alpha),\\ & \int_\Omega \rho_m \leq \bar \rho_m \vert\Omega\vert\;, \end{split} \end{equation} where the term $\Xi({{\rm I} \kern-0.2em{\rm H}})$ in the objective is related to the design regularization, namely to the parameter $R$, and $\Lambda_\Xi \in \mathbb{R}^+$ is a penalty parameter. Recall the chain mapping ${\mathcal{H}:\alpha(x) \mapsto {{\rm I} \kern-0.2em{\rm H}}(x)}$ for any $x\in \Omega$; then ${\boldsymbol{z}} = \mathcal{Z}({{\rm I} \kern-0.2em{\rm H}}(\Omega))$. Below, we abbreviate ${\Phi_\alpha({\boldsymbol{z}}) =: \Phi(\mathcal{H}(\alpha),{\boldsymbol{z}})}$ and also ${\Psi_\alpha({\boldsymbol{z}}) =: \Psi(\mathcal{H}(\alpha),{\boldsymbol{z}})}$. In \cref{eq-opg6}, specific examples relevant for the Biot medium optimization were given. Optimization problem \cref{eq-opA1} is associated with the following inf-sup problem, \begin{equation}\label{eq-opA2} \begin{split} \min_{\alpha \in A} \inf_{{\boldsymbol{z}} \in S_0} \sup_{ \mbox{\boldmath$\Lambda$\unboldmath} \in \mathbb{R}^2, \tilde{\boldsymbol{z}}\in S_0}& \mathcal{L}(\alpha, {\boldsymbol{z}},\mbox{\boldmath$\Lambda$\unboldmath}, \tilde{\boldsymbol{z}})\;, \end{split} \end{equation} with the Lagrangian function, \begin{equation}\label{eq-opA3} \begin{split} \mathcal{L}(\alpha, {\boldsymbol{z}},\mbox{\boldmath$\Lambda$\unboldmath}, \tilde{\boldsymbol{z}}) & = \Lambda_\Phi\Phi_\alpha({\boldsymbol{z}}) \\ & + \Lambda_\Xi \Xi(\mathcal{H}(\alpha)) \\ & + \Lambda_\Psi (\Psi_\alpha({\boldsymbol{z}})-\ol{\Psi_0}) \\ & +\varphi_{{{\rm I} \kern-0.2em{\rm H}}(\alpha)}({\boldsymbol{z}},\tilde{\boldsymbol{z}})\;, \end{split} \end{equation} where $\mbox{\boldmath$\Lambda$\unboldmath} = ( \Lambda_\Phi,\Lambda_\Psi) \in \mathbb{R}^2$ are the Lagrange multipliers associated with the objective and constraint functionals $\Phi$ and $\Psi$, and $\tilde{\boldsymbol{z}}\in S_0$ are Lagrange
multipliers (the adjoint variables) associated with the constraints of the problem \cref{eq-opA1}. For the moment, we may consider the material coefficients ${{\rm I} \kern-0.2em{\rm H}}$ as the optimization variables (although they are parameterized by $\alpha \in A$). Further, let us assume a given value $\mbox{\boldmath$\Lambda$\unboldmath} \in \mathbb{R}^2$; note that the entries of $\mbox{\boldmath$\Lambda$\unboldmath}$ can be positive or negative depending on whether flow augmentation or reduction is desired. In the numerical examples, we chose $\Lambda_\Phi > 0$, whereas $\Lambda_\Psi < 0$ indicates the constraint effect of $ \Psi$ relative to $\Phi$. Upon denoting by $\rm Im(\mathcal{H}) =\mathcal{H}(A)$ the image of all admissible designs, and defining \begin{align*} U_\text{ad} = & \left\{{{\rm I} \kern-0.2em{\rm H}} \in L^\infty(\Omega;\mathcal{T}) \,\vert\, {{\rm I} \kern-0.2em{\rm H}}(x) \in \rm Im(\mathcal{H}) \, \right.\\ & \left. \text{ for a.a. } x\in \Omega \right\}, \end{align*} the optimization problem \cref{eq-opA1} can be rephrased as the two-criteria minimization problem, \begin{equation}\label{eq-opH1} \begin{split} \min_{ \begin{array}{c} {{\rm I} \kern-0.2em{\rm H}} \in U_\text{ad} \end{array} } \ & \mathcal{F}({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}})\;, \\ \mbox{ s.t. } & {\boldsymbol{z}} = \mathcal{Z}({{\rm I} \kern-0.2em{\rm H}})\;,\\ & \int_\Omega \rho_m \leq \bar \rho_m \vert\Omega\vert\;, \end{split} \end{equation} where \begin{align*} \mathcal{F}({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}}) &= \Lambda_{\Phi} \Phi({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}}) + \Lambda_{\Psi}\Psi({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}}) + \Lambda_\Xi \Xi({{\rm I} \kern-0.2em{\rm H}}) \;.
\end{align*} For the Biot medium optimization, where the two criterion functions $\Phi_\alpha$ and $\Psi_\alpha$ are given in \cref{eq-opg6}, the Lagrangian function attains the form \begin{equation}\label{eq-opg8} \begin{split} &\mathcal{L}(\alpha, ({\boldsymbol{u}},p),\mbox{\boldmath$\Lambda$\unboldmath}, (\tilde{\boldsymbol{v}},\tilde q)) \\ & = \Lambda_\Phi\Phi_\alpha({\boldsymbol{u}}) + \Lambda_\Psi (\Psi_\alpha(p)-\ol{\Psi_0}) + \Lambda_\Xi \Xi_\alpha({{\rm I} \kern-0.2em{\rm H}})\\ & \quad + \aOm{{\boldsymbol{u}}}{\tilde{\boldsymbol{v}}} - \bOm{p+\bar p}{\tilde{\boldsymbol{v}}} \\ & \quad - g(\tilde{\boldsymbol{v}}) + \cOm{p+\bar p}{\tilde q}\;. \end{split} \end{equation} \subsection{Adjoint responses and the sensitivity analysis} \label{sec-sensitivity} In this section, we provide details concerning the sensitivity analysis employed in the preceding section. We consider $\alpha$ to represent a general optimization variable which is related to the effective medium parameters ${{\rm I} \kern-0.2em{\rm H}}$. It is worth noting that one may also consider $\alpha \equiv {{\rm I} \kern-0.2em{\rm H}}$ in the context of free material optimization (FMO). To obtain the adjoint equation, we consider the optimality condition for $({\boldsymbol{u}},p)$.
Thus, from \cref{eq-opg8} it follows that \begin{equation}\label{eq-opg10} \begin{split} &\delta_{({\boldsymbol{u}},p)}\mathcal{L}(\alpha, ({\boldsymbol{u}},p),\mbox{\boldmath$\Lambda$\unboldmath}, (\tilde{\boldsymbol{v}},\tilde q))\circ ({\boldsymbol{v}},q) \\ & = \Lambda_\Phi\delta_{\boldsymbol{u}} \Phi_\alpha({\boldsymbol{u}};{\boldsymbol{v}}) + \Lambda_\Psi \delta_p\Psi_\alpha(p; q) \\ & \quad + \aOm{{\boldsymbol{v}}}{\tilde{\boldsymbol{v}}} - \bOm{q}{\tilde{\boldsymbol{v}}} + \cOm{q}{\tilde q} \;, \end{split} \end{equation} where \begin{equation}\label{eq-opg11a} \begin{split} \delta_{\boldsymbol{u}}\Phi_\alpha({\boldsymbol{u}};{\boldsymbol{v}}) &= g({\boldsymbol{v}}), \\ \delta_p\Psi_\alpha(p; q) &= -\int_{\Gamma_p^2} {\boldsymbol{K}}\nabla q \cdot{\boldsymbol{n}}. \end{split} \end{equation} To avoid computation of the gradient $\nabla q$ on ${\Gamma_p^2\subset \partial \Omega}$, we consider $\tilde p \in H^1(\Omega)$ such that $\tilde p = 0$ on $\Gamma\setminus\Gamma_p^2$, while $\tilde p = 1$ on $\Gamma_p^2$, then it is easy to see that \begin{equation}\label{eq-opg12} \begin{split} -\Psi_\alpha(p) & = r(p) := \cOm{p+\bar p}{\tilde p}\;,\\ -\delta_p\Psi_\alpha(p; q) & = \delta_p r(p;q) = \cOm{q}{\tilde p}\;. \end{split} \end{equation} The optimality conditions \cref{eq-opg10}, related to the state admissibility, yield the adjoint state ${(\tilde{\boldsymbol{v}},\tilde q) \in V_0 \times Q_0}$ which satisfies the following identities: \begin{align}\label{eq-opg11b} \forall {\boldsymbol{v}} \in V_0&:& \aOm{{\boldsymbol{v}}}{\tilde{\boldsymbol{v}}} & = - \Lambda_\Phi\delta_{\boldsymbol{u}}\Phi_\alpha({\boldsymbol{u}};{\boldsymbol{v}})\;, \nonumber\\ \forall q \in Q_0&:& \cOm{q}{\tilde q} & = \bOm{q}{\tilde{\boldsymbol{v}}} - \Lambda_\Psi\delta_p\Psi_\alpha(p; q). 
\end{align} These equations can be rewritten using \cref{eq-opg11a} and \cref{eq-opg12}, as follows, with the adjoint state $(\tilde{\boldsymbol{v}},\tilde q) \in V_0 \times Q_0$: \begin{align}\label{eq-opg11} \forall {\boldsymbol{v}} \in V_0&:& \aOm{{\boldsymbol{v}}}{\tilde{\boldsymbol{v}}} & = - \Lambda_\Phi g({\boldsymbol{v}})\;, \nonumber\\ \forall q \in Q_0&:& \cOm{q}{\tilde q} & = \bOm{q}{\tilde{\boldsymbol{v}}} + \Lambda_\Psi \cOm{q}{\tilde p}. \end{align} To make the adjoint state independent of $\mbox{\boldmath$\Lambda$\unboldmath}$, we define the split \begin{equation}\label{eq-opg11e} \begin{split} \tilde{\boldsymbol{v}} & = \Lambda_\Phi\tilde\mbox{\boldmath$\vartheta$\unboldmath}\;,\\ \tilde q & = \Lambda_\Phi\tilde q_1 + \Lambda_\Psi \tilde q_2\;, \end{split} \end{equation} where $\tilde\mbox{\boldmath$\vartheta$\unboldmath} \in V_0$ and $\tilde q_k \in Q_0$, $k = 1,2$, satisfy \begin{equation}\label{eq-opg11f} \begin{split} \forall {\boldsymbol{v}} \in V_0:\quad \aOm{{\boldsymbol{v}}}{\tilde\mbox{\boldmath$\vartheta$\unboldmath}} & = - g({\boldsymbol{v}}),\\ \forall q \in Q_0:\quad \cOm{q}{\tilde q_1} & = \bOm{q}{\tilde\mbox{\boldmath$\vartheta$\unboldmath}},\\ \forall q \in Q_0:\quad \cOm{q}{\tilde q_2} & = \cOm{q}{\tilde p}.
\end{split} \end{equation} We can compute the total variation of the Lagrangian with \begin{equation}\label{eq-opg13} \begin{split} \delta_\alpha^{\text{tot}} \mathcal{L} & = \Lambda_\Phi\delta_{\boldsymbol{u}} g({\boldsymbol{u}};\delta_\alpha{\boldsymbol{u}}) - \Lambda_\Psi \delta_p r(p;\delta_\alpha p) \\ & \quad + \Lambda_\Phi\delta_\alpha g({\boldsymbol{u}}) - \Lambda_\Psi \delta_\alpha r(p) + \Lambda_\Xi \delta_\alpha\Xi_\alpha({{\rm I} \kern-0.2em{\rm H}}) \\ & \quad + \aOm{\delta_\alpha{\boldsymbol{u}}}{\tilde{\boldsymbol{v}}} - \bOm{\delta_\alpha p}{\tilde{\boldsymbol{v}}} +\cOm{\delta_\alpha p}{\tilde q}\\ & \quad + \delta_\alpha\aOm{{\boldsymbol{u}}}{\tilde{\boldsymbol{v}}}-\delta_\alpha\bOm{p+\bar p}{\tilde{\boldsymbol{v}}} \\ & \quad +\delta_\alpha\cOm{p+\bar p}{\tilde q}\;. \end{split} \end{equation} If the pair $({\boldsymbol{u}},p)$ solves the state problem and $(\tilde{\boldsymbol{v}},\tilde q)$ is its adjoint state, \cref{eq-opg13} is equivalent to the following expression: \begin{align}\label{eq-opg13a} \delta_\alpha^{\text{tot}} \mathcal{L} & = \Lambda_\Phi\delta_\alpha g({\boldsymbol{u}}) - \Lambda_\Psi \delta_\alpha r(p) + \Lambda_\Xi \delta_\alpha\Xi_\alpha({{\rm I} \kern-0.2em{\rm H}}) \nonumber\\ & \quad \quad + \delta_\alpha\aOm{{\boldsymbol{u}}}{\tilde{\boldsymbol{v}}}- \delta_\alpha\bOm{p+\bar p}{\tilde{\boldsymbol{v}}} \nonumber\\ &\quad \quad+\delta_\alpha\cOm{p+\bar p}{\tilde q}\;. \end{align} Above, the shape derivatives $\delta_\alpha$ of the bilinear forms can be rewritten in terms of the sensitivity of the homogenized coefficients. 
Besides the obviously vanishing derivative $\delta_\alpha g({\boldsymbol{u}})=0$, it holds that \begin{equation}\label{eq-opg15} \begin{split} \delta_\alpha\aOm{{\boldsymbol{u}}}{\tilde{\boldsymbol{v}}}\circ\delta_\alpha{{\rm A} \kern-0.6em{\rm A}} & = \int_\Omega \delta_\alpha{{\rm A} \kern-0.6em{\rm A}}\eeb{{\boldsymbol{u}}}:\eeb{\tilde{\boldsymbol{v}}}\;,\\ \delta_\alpha\bOm{p+\bar p}{\tilde{\boldsymbol{v}}}\circ\delta_\alpha{\boldsymbol{B}} & = \int_\Omega (p+\bar p) \delta_\alpha{\boldsymbol{B}}:\eeb{\tilde{\boldsymbol{v}}}\;,\\ \delta_\alpha\cOm{p+\bar p}{\tilde q}\circ\delta_\alpha{\boldsymbol{K}} & = \int_\Omega \nabla \tilde q \cdot \delta_\alpha{\boldsymbol{K}}\nabla( p+\bar p)\;,\\ \delta_\alpha r(p) &= \delta_\alpha \cOm{p+\bar p}{\tilde p}\circ\delta_\alpha{\boldsymbol{K}} \\ & = \int_\Omega \nabla \tilde p \cdot \delta_\alpha{\boldsymbol{K}}\nabla( p+\bar p)\;. \end{split} \end{equation} Using the ``total pressure'' $P:= p+\bar p$, the following tensors are employed to evaluate the expression in \cref{eq-opg15}: \begin{equation}\label{eq-opg16} \begin{split} \eeb{{\boldsymbol{u}}}\otimes\eeb{\tilde\mbox{\boldmath$\vartheta$\unboldmath}}\;&, \quad P \eeb{\tilde\mbox{\boldmath$\vartheta$\unboldmath}} \;,\\ \nabla P\otimes\nabla\tilde q_1 \;&,\quad \nabla P\otimes\nabla\tilde q_2\;, \\ \nabla \tilde p\otimes\nabla P\;. \end{split} \end{equation} Now, using these tensors, \cref{eq-opg13} is computed, as follows: \begin{equation}\label{eq-opg17} \begin{split} & \delta_\alpha^{\text{tot}} \mathcal{L} = - \Lambda_\Psi \delta_\alpha r(p) \\ & + \Lambda_\Phi\left(\delta_\alpha\aOm{{\boldsymbol{u}}}{\tilde\mbox{\boldmath$\vartheta$\unboldmath}}- \delta_\alpha\bOm{P}{\tilde\mbox{\boldmath$\vartheta$\unboldmath}} +\delta_\alpha\cOm{P}{\tilde q_1}\right) \\ & + \Lambda_\Psi\delta_\alpha\cOm{P}{\tilde q_2} + \Lambda_\Xi\partial_{{\rm I} \kern-0.2em{\rm H}} \Xi({{\rm I} \kern-0.2em{\rm H}})\delta_\alpha{{\rm I} \kern-0.2em{\rm H}} \;. 
\end{split} \end{equation} Hence the variations of $\mathcal{L}$ with respect to ${{\rm A} \kern-0.6em{\rm A}}$, ${\boldsymbol{B}}$ and ${\boldsymbol{K}}$ are given by the following formulae: \begin{equation}\label{eq-deriv-tensors} \begin{split} \delta_{{\rm A} \kern-0.6em{\rm A}}^{\text{tot}} \mathcal{L} & = \Lambda_\Phi \int_\Omega \delta{{\rm A} \kern-0.6em{\rm A}}_e:\eeb{{\boldsymbol{u}}}\otimes\eeb{\tilde\mbox{\boldmath$\vartheta$\unboldmath}} \;, \\ \delta_{\boldsymbol{B}}^{\text{tot}} \mathcal{L} & = -\Lambda_\Phi \int_\Omega \delta{\boldsymbol{B}}_e : P\eeb{\tilde\mbox{\boldmath$\vartheta$\unboldmath}}\;,\\ \delta_{\boldsymbol{K}}^{\text{tot}} \mathcal{L} & = \int_\Omega \delta{\boldsymbol{K}}_e : \left( \Lambda_\Phi \nabla P\otimes\nabla\tilde q_1 \right.\\ &\quad\quad\quad\quad\quad\left. +\Lambda_\Psi \left(\nabla P\otimes\nabla\tilde q_2 - \nabla \tilde p\otimes\nabla P\right) \right)\;. \end{split} \end{equation} As $\Xi({{\rm I} \kern-0.2em{\rm H}})$ solely depends on the regularization parameter ${\boldsymbol{R}}$, see \cref{eq:filter_function}, we get \begin{equation*} \partial_{{\rm I} \kern-0.2em{\rm H}} \Xi({{\rm I} \kern-0.2em{\rm H}})\delta_\alpha{{\rm I} \kern-0.2em{\rm H}} = \int_\Omega({\boldsymbol{R}} - \mathbb{F} ({\boldsymbol{R}}))\cdot(\delta {\boldsymbol{R}} - \partial_{\boldsymbol{R}}\mathbb{F} ({\boldsymbol{R}})\circ\delta {\boldsymbol{R}} ) \end{equation*} for the regularization term in \cref{eq-opg17}. In the context of the finite element discretization introduced in \cref{sec:sgp}, the homogenized coefficients are supplied as constants in each element $\Omega_e$ of the partitioned domain $\Omega$. Accordingly, the expressions in \cref{eq-opg16} are evaluated elementwise at the Gauss integration points. \subsection{Design parametrization}\label{sec:design_params} The design of the cell $Y$, that is, the decomposition into the solid skeleton $Y_m$ and the pores $Y_c$, can be parameterized in a number of ways.
In \cite{Huebner-Solid-2019}, we employed a so-called spline-box structure parameterized by design variables defining the positions of the spline control polyhedron. This kind of parametrization is convenient because of its generality in handling quite arbitrary designs, but it leads to complicated formulations of the design constraints needed to preserve essential geometrical requirements (e.g.,~ positivity of channel cross-sections). In this paper, we employ two specific types of microstructures illustrated in \cref{fig:design_params}, where the pores are shaped as a 3D cross of channels (type 1), or a sphere (type 2). Hence, the latter microstructure has zero permeability and we therefore consider dry pores (voids) in the mechanical model. Due to these specific geometries, we can use a rather simple parametrization, which is listed in \cref{tab-alpha}. For a unit cell of type 1, $r_x$ and $r_y$ refer to the radii of the cylinders pointing in $x$- and $y$-direction, respectively. The third parameter $\varphi$ describes the cell rotation about the $z$-axis. For the unit cell type 2, the spherical voids, whose radii are described by $r_s$, provide an orthotropic material with nearly isotropic elastic properties. Therefore, rotations are not enabled for this cell type. Importantly, box constraints can be imposed on $r_x$, $r_y$ and $r_s$ straightforwardly to guarantee geometric feasibility.
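For type-1 cells, the rotation parameter $\varphi$ acts on the homogenized second-order tensors (e.g., the permeability ${\boldsymbol{K}}$) by the standard orthogonal transformation ${\boldsymbol{K}}(\varphi) = {\boldsymbol{Q}}(\varphi){\boldsymbol{K}}_0{\boldsymbol{Q}}(\varphi)^T$; a small sketch (the reference tensor K0 is a placeholder for homogenized data):

```python
import numpy as np

def rotate_about_z(K0, phi):
    """Rotate a second-order tensor (e.g. the permeability K of the
    unrotated reference cell) by angle phi about the z-axis:
    K(phi) = Q K0 Q^T."""
    c, s = np.cos(phi), np.sin(phi)
    Q = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return Q @ K0 @ Q.T
```

The transformation preserves symmetry and the eigenvalues of the tensor, so positivity of the reference permeability is unaffected by the rotation.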
\begin{table}[h] \begin{tabular}{l|lll} microstructure \# & \multicolumn{3}{l}{cell parameters} \\ \hline 1 & $r_x$ & $r_y$ & $\varphi$ \\ 2 & $r_s$ & - & - \\ \hline \end{tabular} \caption{The parametrization of the pore geometry for the two types of the microstructures: 1: the 3D cross, 2: the sphere.}\label{tab-alpha} \end{table} \begin{figure}[htb] \centering \includegraphics[width=0.55\textwidth]{design_param} \caption{Parametrization of unit cells: unit cell type 1 is parameterized by radii $r_x$ and $r_y$, both ranging from 0.08 to 0.22, while $r_z = 0.15$ and $r_s = 0.25$ are kept constant; unit cell type 2 is parameterized by radius $r_s$ ranging from 0.1 to 0.4.} \label{fig:design_params} \end{figure} To illustrate the sensitivity of the material properties determined by the homogenized coefficients ${{\rm I} \kern-0.2em{\rm H}}$, \cref{fig:design_uc2_A} displays, for unit cell type 2, the elasticity, as the only relevant material property, as a function of $r_s$. In \cref{fig:design_uc1_ABK}, for unit cell type 1, selected components of the poroelastic tensors and of the permeability are reported as functions of $r_y$.
\begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{coefA_beta_beta} \caption{Unit cell type 2: dependence of $A_{1111}$ on parameter $r_s$.} \label{fig:design_uc2_A} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{coefA_alpha13_remesh_ry}\hfil \includegraphics[width=0.4\textwidth]{coefB_alpha13_remesh_ry} \includegraphics[width=0.4\textwidth]{coefK_alpha13_remesh_ry}\\ \caption{Unit cell type 1: dependence of homogenized coefficients ${{\rm A} \kern-0.6em{\rm A}}$, ${\boldsymbol{B}}$, and ${\boldsymbol{K}}$ on $r_y$; $r_x = 0.15$ is fixed.} \label{fig:design_uc1_ABK} \end{figure} \section{A Sequential Global Programming formulation} \label{sec:sgp} The basic description of the Sequential Global Programming algorithm, along with convergence aspects, was presented in \cite{Semmler-SIAM-2018}, where SGP was applied to a multi-material optimization based on a two-dimensional time harmonic Helmholtz state equation. The setting and procedure described in this manuscript differ from those in \cite{Semmler-SIAM-2018} in the following major points: first, in \cite{Semmler-SIAM-2018} a selection of finitely many fixed materials was considered as the admissible set. In this paper, each admissible material is computed by homogenizing a unit cell, which is itself configurable by a number of geometric parameters. Thus, the designer can choose at each point of the design domain from $M$ different unit cell types \emph{and} adjust the geometric parameters of the latter. Second, the SGP approach is extended to a multi-physics setting using a slightly different separable approximation; and third, a different solution strategy is employed for the subproblems arising from this. This strategy does not impose any assumption on the parametrization. In particular, parametrizations can be non-analytical and non-differentiable. This leads to greater design flexibility.
Despite these differences, there is also an important feature that the approach presented here has in common with the one outlined in \cite{Semmler-SIAM-2018}: separable models are established in terms of (effective) material tensors ${{\rm I} \kern-0.2em{\rm H}}$ rather than their parameterization $\alpha$. Then, the parametrization is directly treated at the level of the sub-problems without further convexification. Thanks to the separable character of the chosen first order model, the resulting, generally non-convex, sub-problems can, in principle, still be solved to global optimality. The advantages of this approach are twofold: first, because the separable model functions are able to capture also non-convex features of the original cost function, typically a low number of outer iterations, equivalent to the number of state problems to be solved, is required; and second, due to the good fit of the separable models with the cost function, as well as the fact that the non-convex sub-problems are solved to global optimality, the overall algorithm is less dependent on the starting value and less prone to being trapped in poor local minima. This is in contrast to traditional approaches, where a local model is established directly based on the sensitivity of the cost functions with respect to the design parameterization $\alpha$. In the following, we first derive a fully discretized counterpart of a slightly generalized version of problem \cref{eq-opH1}. Then we describe in detail how the separable first order approximations can be constructed, and finally present a practical outline of the full SGP algorithm, including a generic sub-solver allowing one to compute nearly globally optimal solutions of the sub-problems using a brute-force strategy.
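Because the model is separable across elements, the brute-force sub-solver reduces to an independent scan per element over the discrete material catalogue; a generic sketch (the callback local_model, evaluating the separable first order approximation of one element's contribution, is a placeholder, not the paper's implementation):

```python
def solve_subproblem_bruteforce(catalogue, local_model, n_el):
    """Separable sub-problem: for each element e, pick the catalogue
    entry minimizing the local model value.  Global optimality of each
    elementwise choice follows from exhaustive enumeration, even when
    the local model is non-convex in the catalogue entries."""
    design = []
    for e in range(n_el):
        best = min(catalogue, key=lambda H: local_model(e, H))
        design.append(best)
    return design
```

Since the elementwise scans are independent, this loop parallelizes trivially over the mesh.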
\subsection{A fully discretized 2-scale design problem}\label{sec-twoscale-discr} For the sake of simplicity, the definitions of sets and functions were introduced in \cref{sec:state-problem,sec:two-scale-opt-problem} based on the assumption that there is only one type of unit cell, i.e., $M=1$. Here, for a more general setting, we consider $M$ unit cell types, each one with $n_i$ design parameters, and introduce the index set $I := \{1, \dots, M\}$. For each unit cell type $i \in I$, the admissibility set is defined in terms of box constraints and other purely geometrical constraints. By choosing a suitable parameterization, we can identify these with (geometric) parameter sets \begin{equation} A_i = [\underline{{\boldsymbol{a}}}_i,\overline{{\boldsymbol{a}}}_i] \subset \mathbb{R}^{n_i}, \label{eq:parameter-sets} \end{equation} with $\underline{{\boldsymbol{a}}}_i, \overline{{\boldsymbol{a}}}_i \in \mathbb{R}^{n_i}$ being lower and upper bound vectors constraining the corresponding parameter vector $\boldsymbol{\alpha}_i \in \mathbb{R}^{n_i}$. \begin{remark} We note that, while in this manuscript the parameters in \cref{eq:parameter-sets} are always used to vary the geometrical properties of the unit cell, variations in the material parameters could be described in the same way. Thus, SGP can handle both of these situations. \end{remark} We further define, for all $i \in I$, the map \begin{equation} \mathcal{H}_i: \begin{cases} A_i & \to \mathcal{T} \\ \boldsymbol{\alpha}_i &\mapsto ({{\rm A} \kern-0.6em{\rm A}}, {\boldsymbol{B}}, {\boldsymbol{K}}, \rho_m, R), \end{cases} \label{eq:map-H-continuous} \end{equation} where $\mathcal{H}_i(\boldsymbol{\alpha}_i)$ performs the homogenization procedure described in \cref{sec:two-scale-opt-problem}. \cref{fig:material_catalogue} illustrates the components of $\mathcal{H}_i(\boldsymbol{\alpha}_i)$.
\begin{figure*}[ht] \centering \includegraphics[width=0.65\textwidth]{sketch_material_catalogue} \caption{Collection of materials: each material, represented by a unit cell object, comes along with a collection of data such as geometric parameters, physical properties and further labels.} \label{fig:material_catalogue} \end{figure*} We denote the union of the ranges of all $\mathcal{H}_i$ by \begin{equation}\label{eq:def-union-hom-image} H \coloneqq \bigcup_{i=1}^{M} \mathcal{H}_i(A_i) \end{equation} and with that generalize the set of admissible design functions to become \begin{align*} U_\text{ad} = & \left\{{{\rm I} \kern-0.2em{\rm H}} \in L^\infty(\Omega;\mathcal{T}) \,\vert\, {{\rm I} \kern-0.2em{\rm H}}(x) \in H \, \right.\\ & \left. \text{ for a.e. } x \in \Omega \right\}. \end{align*} Now the state problem operator \begin{equation} \mathcal{Z}: \begin{cases} U_\text{ad} & \to V_0 \times Q_0 \\ {{\rm I} \kern-0.2em{\rm H}} &\mapsto {\boldsymbol{z}} = ({\boldsymbol{u}},p), \end{cases} \end{equation} with displacement function ${\boldsymbol{u}}({{\rm I} \kern-0.2em{\rm H}})$ and hydraulic pressure function $p({{\rm I} \kern-0.2em{\rm H}})$, reads exactly as before. We finally use a slightly more general resource function than in \cref{sec:state-problem,sec:two-scale-opt-problem}, as follows: \begin{equation}\label{eq:vol-func} \rho: \begin{cases} U_\text{ad} &\to \mathbb{R} \\ {{\rm I} \kern-0.2em{\rm H}} &\mapsto \rho. \end{cases} \end{equation} A concrete realization is the total volume fraction of a specific material phase (see the description of $\bar{\rho}_m$ in \cref{sec:two-scale-opt-problem}).
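For the volume-fraction realization, the discretized resource function amounts to a volume-weighted average of the elementwise solid fractions; a minimal sketch (the per-element data rho_m_el and vol_el are assumed to come from the design and the mesh):

```python
import numpy as np

def resource_volume_fraction(rho_m_el, vol_el):
    """Discretized resource rho_h(IH): the total solid volume fraction,
    i.e. the volume-weighted average of the elementwise values rho_m."""
    rho_m_el = np.asarray(rho_m_el, dtype=float)
    vol_el = np.asarray(vol_el, dtype=float)
    return float(np.sum(rho_m_el * vol_el) / np.sum(vol_el))
```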
Based on these definitions, we then formulate an FMO-type problem \begin{equation}\label{eq-opHgen} \begin{aligned} \min_{{{\rm I} \kern-0.2em{\rm H}} \in U_\text{ad}} \quad \mathcal{F}({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}}) := & \Lambda_\Phi \Phi({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}}) + \Lambda_\Psi \Psi({{\rm I} \kern-0.2em{\rm H}},{\boldsymbol{z}})\\ & + \Lambda_\Xi \Xi({{\rm I} \kern-0.2em{\rm H}}) \\ \textrm{s.t.} \quad\quad\quad\quad {\boldsymbol{z}} =&\mathcal{Z}({{\rm I} \kern-0.2em{\rm H}}), \\ \rho({{\rm I} \kern-0.2em{\rm H}}) \leq& \bar{\rho}_m, \end{aligned} \end{equation} where $\bar{\rho}_m \in \mathbb{R}$ is the resource constraint value, and the cost functions $\Phi$, $\Psi$, $\Xi$ and their weights $\Lambda_\Phi, \Lambda_\Psi, \Lambda_\Xi$ have already been introduced in \cref{sec:two-scale-opt-problem}. \\ Although problem \cref{eq-opHgen} is formulated directly in the tensor variable ${{\rm I} \kern-0.2em{\rm H}}$, a realization of the feasibility condition ${{\rm I} \kern-0.2em{\rm H}} \in U_\text{ad}$ would force us to evaluate the homogenization maps $\mathcal{H}_i \; (i\in I)$. This has the consequence that for each evaluation of the cost function, a homogenization procedure, which contains a series of cell problems, has to be conducted. To alleviate this situation, we follow \cite{BendsoeKikuchi} and carry out the homogenization procedure only for discrete samples of the design parameter space. For each unit cell type $i$, we introduce a grid with nodes $A_i ^{\text{nodes}} \subseteq A_i$, and the effective material coefficients are computed, via homogenization, only at the nodes of this grid.
In addition, we define a piecewise cubic Hermite interpolator for these samples to realize the continuous mapping \begin{equation}\label{eq:param-to-tensor} \tilde{\mathcal{H}}_i: \begin{cases} A_i \to \mathcal{T} \\ \boldsymbol{\alpha}_i \mapsto ({{\rm A} \kern-0.6em{\rm A}}, {\boldsymbol{B}}, {\boldsymbol{K}}, \rho_m, R), \end{cases} \end{equation} for all $i \in I$. We refer to this procedure as the \emph{offline} phase of the two-scale optimization approach, as it can be performed independently of the \emph{online} optimization procedure, which is subject to constraints that go beyond the box constraints on the parameter sets in \cref{eq:parameter-sets}. For the case $M=1$, the conventional approach would now be to perform the optimization based on the interpolated functions $\tilde{\mathcal{H}}_1$ over the full parameter set $A_1$. This is not directly possible for $M>1$. One way to get around this would be to introduce another interpolation between the different unit cell types, similar to what is done in discrete material optimization (DMO) \cite{hvejsel2011}. Instead, we introduce design grids \begin{equation} A^\text{grid}_i \subset A_i, \; i \in I, \end{equation} for all unit cell types. Only elements of $A^\text{grid}_i$, $i \in I$, will be considered in the optimization process later. This way, in general, only an approximate solution of the design problem can be computed. However, it will turn out that this strategy combines well with the separable non-convex model introduced later in \cref{sec:subproblems}. Moreover, the resulting error can be easily controlled by the distance and number of samples in $A^\text{grid}_i$, $i \in I$.
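The offline interpolation step can be sketched with SciPy's shape-preserving piecewise cubic Hermite interpolator, here for a single scalar coefficient over one design parameter (the sampled values are synthetic placeholders for homogenized data; only the parameter range 0.08 to 0.22 is taken from the cell description):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# offline phase: homogenize only at sampled nodes of the design parameter
r_nodes = np.linspace(0.08, 0.22, 8)             # e.g. radius r_y
A1111_samples = 1.0 / (1.0 + 10.0 * r_nodes**2)  # placeholder for homogenized data
A1111_of_r = PchipInterpolator(r_nodes, A1111_samples)

# online phase: evaluate the interpolated coefficient on the design grid;
# no cell problems need to be solved here
r_grid = np.linspace(0.08, 0.22, 29)
A_grid = A1111_of_r(r_grid)
```

The PCHIP variant is monotonicity-preserving, so monotone sampled coefficient curves (cf. the trends reported for the cell parameters) are not overshot between the nodes.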
The relations between the different grids and mappings for the material coefficients are visualized and elaborated in \cref{fig:sketch-images-H}.\\ \begin{figure*}[ht] \centering \includegraphics[width=0.7\textwidth]{image_H2} \caption{Left: Sketch of the parameter set $A_i$ and samples from its subsets $A_i^{\text{nodes}}$ (blue dots), which serve as the construction basis of the interpolated $\tilde{\mathcal{H}}_i$, and $A^\text{grid}_i$ (red squares), on which the optimization process is performed. In general, $A_i^{\text{nodes}}$ and $A^\text{grid}_i$ can be fully independent from each other. Right: Simplified sketch of the original effective material coefficient spaces $\mathcal{H}_i(A_i)$ (yellow surface) and the images of the interpolated $\tilde{\mathcal{H}}_i(A_i)$ (red surface). The blue dots and red squares represent the images of the parameters from $A_i^{\text{nodes}}$ and $A^\text{grid}_i$, respectively.} \label{fig:sketch-images-H} \end{figure*} As we only optimize on $A^\text{grid}_i$, $i \in I$, \cref{eq:def-union-hom-image} is approximated by \begin{equation}\label{eq-Htilte} \tilde{H} \coloneqq \bigcup_{i=1}^{M} \tilde{\mathcal{H}}_i(A^\text{grid}_i). \end{equation} We note that elements of $\tilde{H}$ can be precomputed already in the \emph{offline} phase. In general, this leads to a higher memory requirement, but additionally reduces the \emph{online} computation time. Finally, we briefly introduce a finite element approximation with ${n_\mathrm{el}}$ finite elements and therefore introduce the element index set $ E \coloneqq \{1, \dots, {n_\mathrm{el}}\} $ so that each finite element is identified distinctly by its index $e \in E$.
We further assume that the design is constant on each element and can thus be represented by $$ {{\bf I} \kern-0.2em{\bf H}} \in \tilde{H}^{{n_\mathrm{el}}}. $$ We remark that, through the definition of $\tilde{H}$ in \cref{eq-Htilte}, this condition already ensures that only those material tensors are eligible for which a unit cell type $i$ and a parameter vector $\boldsymbol{\alpha}_i \in A^\text{grid}_i$ exist. Moreover, we replace the physical functions $\Phi$ and $\Psi$, the regularization function $\Xi$, and the solution operator $\mathcal{Z}$ by their discretized counterparts, e.g.,~ \begin{equation} \mathcal{Z}_h: \begin{cases} \tilde{H}^{{n_\mathrm{el}}} \to \mathbb{R}^{n_{\text{dof}}} \\ {{\bf I} \kern-0.2em{\bf H}} \mapsto (\boldsymbol{\mathrm{u}},\boldsymbol{\mathrm{p}}) \end{cases}, \end{equation} where $n_{\text{dof}}$ is the dimension of the discrete state solution space. The discretized version of the resource function $\rho$ in \cref{eq:vol-func} is \begin{equation}\label{eq:vol-func-h} \rho_h: \begin{cases} \tilde{H}^{{n_\mathrm{el}}} \to \mathbb{R} \\ {{\bf I} \kern-0.2em{\bf H}} \mapsto \rho_h. \end{cases} \end{equation} The optimization problem, fully discretized in design and state space, then reads \begin{equation} \begin{aligned}\label{eq:sgp-opt-problem} \min_{{{\bf I} \kern-0.2em{\bf H}} \in \tilde{H}^{{n_\mathrm{el}}}} \max_{\lambda_\rho \in \mathbb{R}^+}\quad & \mathcal{F}_h({{\bf I} \kern-0.2em{\bf H}},\boldsymbol{\mathrm{z}},\lambda_\rho) \\ \textrm{s.t.} \quad & \boldsymbol{\mathrm{z}} =\mathcal{Z}_h({{\bf I} \kern-0.2em{\bf H}}), \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} \mathcal{F}_h({{\bf I} \kern-0.2em{\bf H}},\boldsymbol{\mathrm{z}}, \lambda_\rho) := & \Lambda_\Phi \Phi_h({{\bf I} \kern-0.2em{\bf H}},\boldsymbol{\mathrm{z}}) + \Lambda_\Psi \Psi_h({{\bf I} \kern-0.2em{\bf H}},\boldsymbol{\mathrm{z}})\\ & +\lambda_\rho \left(\rho_h({{\bf I} \kern-0.2em{\bf H}})-\bar{\rho}_m\right) + \Lambda_\Xi \Xi_h({{\bf I} \kern-0.2em{\bf H}}).
\end{aligned} \end{equation*} We note that we have eliminated the resource constraint by the Lagrange formalism. Later, we will suggest using a bisection strategy, as introduced in \cite{sigmund99} in the framework of the well-known OCM method. We finally specialize the regularization term to become \begin{equation}\label{eq:filter_function} \Xi_h({{\bf I} \kern-0.2em{\bf H}}) = \frac{1}{2}\|{\boldsymbol{R}}- \mathbb{F} ({\boldsymbol{R}}) \|^2, \end{equation} where $\mathbb{F}$ denotes a standard density filter function (see, e.g.,~ \cite{bourdin-filter}) with \begin{equation} \mathbb{F}: \mathbb{R}^{n_\mathrm{el}} \to \mathbb{R}^{n_\mathrm{el}}, \end{equation} and ${\boldsymbol{R}}$ is the vector of regularization labels associated with all finite elements $e \in E$. \subsection{Construction of subproblems} \label{sec:subproblems} For any sequential programming algorithm, a sequence of subproblems first has to be defined. Here, in each iteration $k$, we construct \emph{separable} first order approximations, about an expansion point ${{\bf I} \kern-0.2em{\bf H}}^k \in \tilde{H}^{n_\mathrm{el}}$, of the components of the cost function \begin{equation} \mathcal{J}({{\bf I} \kern-0.2em{\bf H}}, \lambda_\rho) := \mathcal{F}_h({{\bf I} \kern-0.2em{\bf H}},{\boldsymbol{z}}, \lambda_\rho) \end{equation} of the original optimization problem in \cref{eq:sgp-opt-problem}.
The model problem is \begin{align}\label{eq:model-problem} \min_{{{\bf I} \kern-0.2em{\bf H}}} \max_{\lambda_\rho \in \mathbb{R}} \quad&\mathcal{J}_\mathrm{sep}\left({{\bf I} \kern-0.2em{\bf H}}, \lambda_\rho; {{\bf I} \kern-0.2em{\bf H}}^k \right) \end{align} where our model function is defined as \begin{align}\label{eq:model-function} & \mathcal{J}_\mathrm{sep}\left({{\bf I} \kern-0.2em{\bf H}},\lambda_\rho; {{\bf I} \kern-0.2em{\bf H}}^k\right) := \sum_{e\in E} \mathcal{J}_{\mathrm{sep},e}\left({{\bf I} \kern-0.2em{\bf H}}_e, \lambda_\rho;{{\bf I} \kern-0.2em{\bf H}}_e^k\right) \nonumber\\ &= \sum_{e\in E} \widetilde{\mathcal{J}}_\mathrm{phys}\left({{\bf I} \kern-0.2em{\bf D}}_e;{{\bf I} \kern-0.2em{\bf D}}_e^k\right) + \lambda_\rho \widetilde{\mathcal{J}}_\mathrm{vol}((\rho_m)_e) \nonumber\\ & + \Lambda_\Xi \widetilde{\mathcal{J}}_{\mathrm{reg},e}(R_e;R_e^k) + \Lambda_g \widetilde{\mathcal{J}}_\mathrm{glob}({{\rm A} \kern-0.6em{\rm A}}_e;{{\rm A} \kern-0.6em{\rm A}}_e^k) \end{align} with \begin{align*} & {{\bf I} \kern-0.2em{\bf D}}_e := ({{\rm A} \kern-0.6em{\rm A}}_e,{\boldsymbol{B}}_e,{\boldsymbol{K}}_e) \in \mathbb{S}^6 \times \mathbb{S}^3 \times \mathbb{S}^3, \nonumber\\ & {{\bf I} \kern-0.2em{\bf D}}_e^k := ({{\rm A} \kern-0.6em{\rm A}}_e^k,{\boldsymbol{B}}_e^k,{\boldsymbol{K}}_e^k) \in \mathbb{S}^6 \times \mathbb{S}^3 \times \mathbb{S}^3, \nonumber\\ &{{\bf I} \kern-0.2em{\bf H}}, {{\bf I} \kern-0.2em{\bf H}}^k \in\, \tilde{H}^{n_\mathrm{el}}. 
\end{align*} In the following, we describe each component of $\mathcal{J}_\mathrm{sep}$ in more detail.\\ For this, we split $\mathcal{J}({{\bf I} \kern-0.2em{\bf H}}, \lambda_\rho)$ as \begin{equation*} \mathcal{J}({{\bf I} \kern-0.2em{\bf H}}, \lambda_\rho) = \mathcal{J}_\mathrm{phys}({{\bf I} \kern-0.2em{\bf H}}) + \lambda_\rho \mathcal{J}_\mathrm{vol}({{\bf I} \kern-0.2em{\bf H}}) + \Lambda_\Xi \mathcal{J}_\mathrm{reg}({{\bf I} \kern-0.2em{\bf H}}) \end{equation*} with \begin{align} \mathcal{J}_\mathrm{phys}({{\bf I} \kern-0.2em{\bf H}}) & := \Lambda_\Phi \Phi_h({{\bf I} \kern-0.2em{\bf H}}, {\boldsymbol{z}}) + \Lambda_\Psi \Psi_h({{\bf I} \kern-0.2em{\bf H}},{\boldsymbol{z}}), \label{eq:Jphys} \\ \mathcal{J}_\mathrm{vol}({{\bf I} \kern-0.2em{\bf H}}) & := \rho_h({{\bf I} \kern-0.2em{\bf H}}) - \bar{\rho}_m, \label{eq:Jvol}\\ \mathcal{J}_\mathrm{reg}({{\bf I} \kern-0.2em{\bf H}}) & := \Xi_h({{\bf I} \kern-0.2em{\bf H}}).\label{eq:Jreg} \end{align} From the tuple ${{\bf I} \kern-0.2em{\bf H}}$, only the effective material coefficients ${{\rm A} \kern-0.6em{\rm A}}$, ${\boldsymbol{B}}$, and ${\boldsymbol{K}}$ are relevant for $\mathcal{J}_\mathrm{phys}$.
Consequently, for $\mathcal{J}_\mathrm{phys}$, we define a separable approximation of the type \begin{equation} \sum_{e \in E} \widetilde{\mathcal{J}}_\mathrm{phys}\left({{\bf I} \kern-0.2em{\bf D}}_e;{{\bf I} \kern-0.2em{\bf D}}_e^k\right), \end{equation} where $\widetilde{\mathcal{J}}_\mathrm{phys}$ is the following generalization of the first-order MMA-like model suggested in \cite{stingl-siam-2009} to functions defined in tensor variables: \begin{align}\label{eq:Jphys_approx} &\widetilde{\mathcal{J}}_\mathrm{phys}\left({{\bf I} \kern-0.2em{\bf D}}_e;{{\bf I} \kern-0.2em{\bf D}}^k\right) \nonumber\\ &= C_{\text{phys}} - \left\langle {{\rm A} \kern-0.6em{\rm A}}_e^k \left[\frac{\partial \mathcal{J}_\mathrm{phys}({{\bf I} \kern-0.2em{\bf D}}^k)}{\partial {{\rm A} \kern-0.6em{\rm A}}}\right]_e{{\rm A} \kern-0.6em{\rm A}}_e^k, {{\rm A} \kern-0.6em{\rm A}}_e^{-1} \right\rangle_{\mathbb{S}^6} \nonumber \\ &- \left\langle {\boldsymbol{B}}_e^k \left[\frac{\partial \mathcal{J}_\mathrm{phys}({{\bf I} \kern-0.2em{\bf D}}^k)}{\partial {\boldsymbol{B}}}\right]_e {\boldsymbol{B}}_e^k, {\boldsymbol{B}}_e^{-1} \right\rangle_{\mathbb{S}^3} \nonumber \\ &- \left\langle {\boldsymbol{K}}_e^k \left[\frac{\partial \mathcal{J}_\mathrm{phys}({{\bf I} \kern-0.2em{\bf D}}^k)}{\partial {\boldsymbol{K}}}\right]_e {\boldsymbol{K}}_e^k, {\boldsymbol{K}}_e^{-1} \right\rangle_{\mathbb{S}^3}. \end{align} Here, $C_{\text{phys}}$ is a constant that is chosen to establish the zeroth-order correctness of the model, and $\langle\cdot,\cdot\rangle_{\{\mathbb{S}^6,\mathbb{S}^3\}}$ denotes the Frobenius inner products for matrices from $\mathbb{S}^6$ and $\mathbb{S}^3$, respectively. We further mention that, in contrast to the model in \cite{stingl-siam-2009}, we refrain from working with flexible generalized asymptotes $L_e^{{\rm A} \kern-0.6em{\rm A}} \in \mathbb{S}^6$, $L_e^{\boldsymbol{B}}, L_e^{\boldsymbol{K}} \in \mathbb{S}^3$, but simply choose all of them to be zero matrices.
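The structure of \cref{eq:Jphys_approx} can be illustrated for a single symmetric tensor block in a few lines of Python. The sketch below treats only one matrix variable with zero asymptotes; the constant follows from the identity $\langle A^k G A^k, (A^k)^{-1}\rangle = \operatorname{tr}(G A^k)$ for symmetric matrices. Function and variable names are our own simplifications, not part of the paper.

```python
import numpy as np

def mma_tensor_model(A, Ak, Gk, Jk):
    """First-order MMA-like model restricted to one symmetric tensor block
    with zero asymptotes:

        J~(A) = C - <A^k G^k A^k, A^{-1}>,

    where G^k is the derivative of J_phys at the expansion point A^k and
    C enforces zeroth-order correctness, J~(A^k) = Jk.
    """
    M = Ak @ Gk @ Ak
    C = Jk + np.trace(Gk @ Ak)            # <A^k G^k A^k, (A^k)^{-1}> = tr(G^k A^k)
    return C - np.trace(M @ np.linalg.inv(A))
```

By construction, the model matches the function value at the expansion point, and differentiating $-\operatorname{tr}(A^k G^k A^k A^{-1})$ at $A = A^k$ recovers $G^k$, i.e. first-order correctness; both properties can be checked numerically with finite differences.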
The partial derivatives of $\mathcal{J}_\mathrm{phys}$ with respect to the material coefficients ${{\rm A} \kern-0.6em{\rm A}}$, ${\boldsymbol{B}}$, and ${\boldsymbol{K}}$ can be easily extracted from the expressions in \cref{eq-deriv-tensors}. The function $\mathcal{J}_\mathrm{vol}$, which describes the fraction of utilized matrix material, is separable by definition and depends solely on $\rho_m$. We accordingly choose \begin{equation}\label{eq:Jvol-approx} \widetilde{\mathcal{J}}_\mathrm{vol}((\rho_m)_e;\rho_m^k) = (\rho_m)_e. \end{equation} The function $\mathcal{J}_\mathrm{reg}$ given in \cref{eq:Jreg} solely depends on the regularization label ${\boldsymbol{R}} \in \mathbb{R}^{n_\mathrm{el}}$, which is a component of the tuple ${{\bf I} \kern-0.2em{\bf H}} \in \tilde{H}^{n_\mathrm{el}}$. The separable approximation of $\mathcal{J}_\mathrm{reg}$ is thus of the form \begin{equation} \sum_{e \in E} \widetilde{\mathcal{J}}_{\mathrm{reg},e}(R_e;{\boldsymbol{R}}^k), \end{equation} where \begin{align}\label{eq:Jreg-approx} &\widetilde{\mathcal{J}}_{\mathrm{reg},e}(R_e;{\boldsymbol{R}}^k) \\ & = \frac{1}{2}\, \bigg\Vert \widetilde{R}_e\left(R_e;{\boldsymbol{R}}^k\right) - \left[\mathbb{F} \left(\widetilde{R}_e\left(R_e;{\boldsymbol{R}}^k\right)\right)\right]_e \bigg\Vert^2.\nonumber \end{align} In \cref{eq:Jreg-approx}, we further employ the function \begin{equation*} \widetilde{R}_e\left(R;{\boldsymbol{R}}^k\right) \coloneqq \left(R_1^{k}, \dots, R_{e-1}^{k}, R, R_{e+1}^{k}, \dots, R_{{n_\mathrm{el}}}^{k} \right), \end{equation*} in which the regularization label is varied only in the $e$-th entry by the value $R$, while contributions of the expansion point ${\boldsymbol{R}}^k$ are used in the neighboring entries. It is noted that \cref{eq:Jreg-approx} can be reduced to a convex quadratic function of the type \begin{equation*} a_e R_e^2 + b_e R_e + c_e, \end{equation*} by precomputing $a_e,b_e,c_e \in \mathbb{R}$, which are independent of $R_e$.
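Assuming the density filter is linear, $\mathbb{F}({\boldsymbol{R}}) = W{\boldsymbol{R}}$ for some matrix $W$ (which holds for standard filters of the type in \cite{bourdin-filter}), the precomputation of $a_e, b_e, c_e$ can be sketched as follows in Python; the matrix $W$ and all names are illustrative assumptions.

```python
import numpy as np

def quadratic_reg_coeffs(W, Rk, e):
    """Coefficients (a_e, b_e, c_e) such that

        0.5 * || Rt - W @ Rt ||^2  =  a_e * R**2 + b_e * R + c_e,

    where Rt equals the expansion point Rk except that entry e holds the
    free value R. Assumes the filter F is linear, F(R) = W @ R.
    """
    n = len(Rk)
    IW = np.eye(n) - W
    v = IW[:, e]                 # direction in which entry e varies
    u0 = IW @ Rk - Rk[e] * v     # frozen part, shifted so R enters directly
    # 0.5*||u0 + R v||^2 = 0.5 v.v R^2 + (u0.v) R + 0.5 u0.u0
    return 0.5 * v @ v, u0 @ v, 0.5 * u0 @ u0
```

Since $a_e = \tfrac{1}{2}\Vert v\Vert^2 \geq 0$, the resulting one-dimensional function is indeed convex, and the three scalars per element can be stored once per iteration.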
Finally, we implement a step size control for the design from one iteration to the next by adding \begin{equation}\label{eq:sgp-glob} \sum_{e \in E}\widetilde{\mathcal{J}}_\mathrm{glob}\left({{\rm A} \kern-0.6em{\rm A}}_e,{{\rm A} \kern-0.6em{\rm A}}_e^k\right) = \sum_{e \in E}\frac{1}{2}\left\Vert {{\rm A} \kern-0.6em{\rm A}}_e - {{\rm A} \kern-0.6em{\rm A}}_e^k \right\Vert^2 \end{equation} with a positive factor $\Lambda_g$ to the model cost function. Alternatively, a more general globalization strategy, similar to the regularization approach with regularization label $R$ in \cref{eq:Jreg-approx}, could be pursued by introducing particular globalization labels. Here, we assume that evaluating the design step size based on the stiffness tensors ${{\rm A} \kern-0.6em{\rm A}}_e$ and ${{\rm A} \kern-0.6em{\rm A}}_e^k$ is sufficient and, in particular, that the globalization labels are unique in the sense that \begin{equation} {{\rm A} \kern-0.6em{\rm A}}_e = {{\rm A} \kern-0.6em{\rm A}}_e' \Rightarrow \boldsymbol{\alpha}_e = \boldsymbol{\alpha}_e', \end{equation} is satisfied. \subsection{The SGP algorithm with a brute-force sub-solver} \label{sec:SGPalg} Having at hand the separable first-order approximations of the objective function and the penalization terms, we are now able to formulate the iterative scheme described in \cref{alg:sgp-multimat}. We make extensive use of the separable structure of \begin{equation*} \mathcal{J}_\mathrm{sep}\left({{\bf I} \kern-0.2em{\bf H}}, \lambda_\rho; {{\bf I} \kern-0.2em{\bf H}}^k\right)= \sum_{e \in E} \mathcal{J}_{\mathrm{sep},e}\left({{\bf I} \kern-0.2em{\bf H}}_e, \lambda_\rho;{{\bf I} \kern-0.2em{\bf H}}_e^k\right) \end{equation*} and solve the subproblems of each iteration $k$ for each finite element $e \in E$ individually.
This is done by evaluating $\mathcal{J}_{\mathrm{sep},e}$ for all (finitely many) ${{\bf I} \kern-0.2em{\bf H}}_e \in \tilde{H}$ and, based on these evaluations, identifying a global minimizer ${{\bf I} \kern-0.2em{\bf H}}_e^*$. Note that, with each ${{\bf I} \kern-0.2em{\bf H}}_e$, a unique geometric cell label $\boldsymbol{\alpha}_e$ is associated and thus, by determining ${{\bf I} \kern-0.2em{\bf H}}_e^*$, we also determine the respective $\boldsymbol{\alpha}_e^*$ and material class index $i^*$. As already mentioned, a bisection strategy is applied to treat the resource constraint; see \cref{alg:sub-problem} for the details. To keep things simple, it is assumed that the resource constraint is always active at a minimizer. If no resource constraint is applied, the outer loop in \cref{alg:sub-problem} is simply omitted. After each iteration, the original cost function $\mathcal{J}$ is evaluated with the current solution of the subproblems ${{\bf I} \kern-0.2em{\bf H}}_e^*$. If a descent in $\mathcal{J}$ is achieved, we continue the iterative process. If not, we employ the step size control by increasing the multiplier $\Lambda_g$ of the globalization term \cref{eq:sgp-glob} and resolve the subproblems using \cref{alg:sub-problem}.
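The element-wise brute-force search combined with the volume bisection of \cref{alg:sub-problem} can be sketched in Python as follows. The data structures (a precomputed candidate list per element, a callable separable model, a resource function) and the bisection bounds are our own illustrative assumptions, not the paper's implementation.

```python
def solve_subproblems(candidates, model_cost, rho_of, rho_bar,
                      lam_lo=0.0, lam_hi=100.0, tol=1e-3, max_bisect=60):
    """Brute-force SGP subproblem solver with volume bisection (sketch).

    candidates[e]          : precomputed tuples IH_e available to element e
                             (one entry per grid point of every cell type)
    model_cost(e, IH, lam) : separable model J_sep,e for element e at
                             multiplier value lam
    rho_of(design)         : discretized resource function rho_h
    """
    for _ in range(max_bisect):
        lam = 0.5 * (lam_lo + lam_hi)
        # element-wise global minimization over the finite candidate sets
        design = [min(cands, key=lambda IH: model_cost(e, IH, lam))
                  for e, cands in enumerate(candidates)]
        rho = rho_of(design)
        if abs(rho - rho_bar) < tol:
            break
        if rho > rho_bar:
            lam_lo = lam     # too much material used: increase lambda
        else:
            lam_hi = lam     # too little material used: decrease lambda
    return design, lam
```

In a toy setting with two elements whose candidates trade off physical cost against material use, the bisection settles at a multiplier for which the mixed design meets the resource target.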
\begin{algorithm}[ht] \caption{Sequential Global Programming for parametrized multi-material optimization} \label{alg:sgp-multimat} \begin{algorithmic}[1] \State $k \gets 0$\; \State initialize ${{\bf I} \kern-0.2em{\bf H}}^0 \in \tilde{H}^{n_\mathrm{el}}$\; \State $\mathcal{J}_\mathrm{diff} \gets \infty$\; \While{$ \mathcal{J}_\mathrm{diff} > 0 \; \text{and} \; k \leq k_{\rm max} $} \State initialize $\Lambda_g \in \mathbb{R}$\; \State ${{\bf I} \kern-0.2em{\bf H}}^* \gets$ solve \cref{eq:model-problem} to global \newline \hspace*{4.5em} optimality using \cref{alg:sub-problem} \; \State $\mathcal{J}_\mathrm{diff} \gets \mathcal{J}({{\bf I} \kern-0.2em{\bf H}}^k) - \mathcal{J}({{\bf I} \kern-0.2em{\bf H}}^*) $ \While{$\mathcal{J}_\mathrm{diff} < 0$} \State increase $\Lambda_g$\; \State ${{\bf I} \kern-0.2em{\bf H}}^* \gets$ solve \cref{eq:model-problem} to global \newline \hspace*{6.5em} optimality using \cref{alg:sub-problem} \; \State $\mathcal{J}_\mathrm{diff} \gets \mathcal{J}({{\bf I} \kern-0.2em{\bf H}}^k) - \mathcal{J}({{\bf I} \kern-0.2em{\bf H}}^*) $ \EndWhile \State ${{\bf I} \kern-0.2em{\bf H}}^{k+1} \gets {{\bf I} \kern-0.2em{\bf H}}^*$\; \State $k \gets k +1$ \; \EndWhile \end{algorithmic} \end{algorithm} \begin{algorithm}[htb] \caption{Solve subproblems via brute force strategy} \label{alg:sub-problem} \begin{algorithmic}[1] \State initialize $\lambda_\rho \in \mathbb{R}$ for volume bisection \While{volume constraint is not satisfied} \ForAll{finite elements $e \in E$} \ForAll{unit cell types $i \in I$} \State $\boldsymbol{\alpha}_{i}^* \gets $ minimizer of $\mathcal{J}_{\mathrm{sep},e}$ on $A_i^\text{grid}$ \; \EndFor\; \State $\boldsymbol{\alpha}^* \gets$ minimizer among all $\boldsymbol{\alpha}_{i}^* \; (i \in I)$\; \State $i^* \gets$ unit cell type index of $\boldsymbol{\alpha}^*$ \; \State ${{\bf I} \kern-0.2em{\bf H}}_e^* \gets $ evaluate $\tilde{\mathcal{H}}_{i^*}(\boldsymbol{\alpha}^*)$ (see \cref{eq:param-to-tensor}) \EndFor \State $\rho \gets$ evaluate $\rho_h({{\bf I} \kern-0.2em{\bf H}}^*)$ (see \cref{eq:vol-func-h}); \If{$\rho > \bar{\rho}_m$} \State increase $\lambda_\rho$\; \Else \State decrease $\lambda_\rho$\; \EndIf \EndWhile \end{algorithmic} \end{algorithm} \section{Numerical results}\label{sec:numerical-results} In this
section, we demonstrate the abilities of SGP by means of numerical examples. The presentation is built up successively: we first increase the design freedom of the two-scale optimization problem, observing the respective optimized designs, and then study the effect of regularization. In \cref{sec:one-cell}, we start with the unit cell that is constructed from three intersecting fluid channels, visualized in the top row of \cref{fig:design_params}, and study the impact of the micro-structure's local orientation on the performance of the optimized designs. It will be seen that, thanks to the strength of our model, we neither have to use smart initial orientations, as proposed, e.g., in \cite{pedersen1989,norris2006}, by aligning the anisotropic material with respect to principal directions of the stress tensor, nor do we have to artificially enforce a regular design. Then, we present a Pareto front and investigate the influence of different weightings of compliance and fluid flux in the cost function on the resulting designs. When we proceed from one point on the Pareto front to the next one, we intentionally refrain from using the previous design as a warm start. Nevertheless, and despite the non-convex character of our weighted cost function, Pareto curves are obtained in which none of the points is dominated by another one. We trace this observation back to the ability of the SGP method to avoid poor local solutions. In \cref{sec:two-cells}, we proceed to demonstrate the ability of SGP to handle more than one unit cell type. We again compute a Pareto curve for this case. It will be observed that the new Pareto front, due to the increase in design freedom, strictly dominates the previous one. It will also be observed that the more complex parametrization does not, on average, lead to an increase in the number of state problems to be solved per optimization run.
Note that for the settings presented in \cref{sec:one-cell} and \cref{sec:two-cells}, it was not necessary to employ a globalization strategy to control design changes from one iteration to the next. Thus, we set the globalization parameter $\Lambda_g = 0$. \\ Finally, in \cref{sec:two-cells-reg}, we apply a filtering technique to the design parameters to control both the speed of variation of the local orientation and the interface length between the two unit cell types. Here, we also employ the globalization term described in \cref{eq:sgp-glob}. The setting of the poroelastic problem is depicted in \cref{fig:macro-setting}. It is a recapitulation of the macroscopic problem setting from \cite{Huebner-Solid-2019}, where the authors selected a finite element from the macroscopic domain and optimized the shape of the local microstructure via a spline box approach. In the present paper, we provide an extension of this example by solving the two-scale optimization problem with the SGP method described in \cref{sec:sgp}. We note that we work with a rather coarse discretization of the macroscopic domain. The reason is that such a discretization is sufficient to demonstrate the capabilities of SGP as described above. On the other hand, it is readily seen from \cref{alg:sub-problem} that the number of macroscopic elements enters the computational complexity of SGP only linearly. Thus, in principle, there is no obstacle to working with finer discretizations. \begin{figure}[htb] \centering \includegraphics[width=0.31\textwidth]{macro_setting} \caption{Setup of the macroscopic problem: the mechanical traction force $f \!=\! (0,-1,0)^\top$ acts on a part of the body's surface (red), while support is provided on $\Gamma_D$ and pressure values $p_1=1.0$ and $p_2=0.5$ are prescribed on $\Gamma_{p_1}$ and $\Gamma_{p_2}$.
The design domain is discretized by $15 \times 10 \times 2$ hexahedra.} \label{fig:macro-setting} \end{figure} \subsection{Optimization with one unit cell type}\label{sec:one-cell} In this section, we employ unit cell type 1, depicted in \cref{fig:design_params}. The geometry consists of three joined cylindrical fluid channels, filled with Glycerine (Young's modulus \SI{4.35}{\giga\pascal}, dynamic viscosity \SI{0.95}{\pascal\second}), that are perpendicular to each other and intersect a hollow sphere in the middle of the cell domain. These channels are embedded in matrix material made of Polystyrene with a Young's modulus of \SI{3.9}{\giga\pascal} and a dynamic viscosity of \SI{0.34}{\pascal\second}. The feasible range of the geometric design parameters is $A_1 = [0.08,0.22]^2$. Thus, in each finite element $e \in E$, we have the design parameters $\boldsymbol{\alpha}_1 = (r_x,r_y)^\top \in A_1$ to steer the radii of the channels pointing in the $y$- and $x$-direction. The radius of the fluid channel that points in the $z$-direction (out-of-plane) is kept constant. At the boundaries of the design parameter space, the volume fractions of the stiff material phase are $\rho\left(\mathcal{H}_1\left([0.08,0.08]^\top\right)\right) = 0.7154$ and $\rho\left(\mathcal{H}_1\left([0.22,0.22]^\top\right)\right) = 0.879$. The directional stiffness of the softest version of this unit cell is visualized in \cref{fig:matviz_alpha-022} by means of a polar plot. \begin{figure}[h] \centering \includegraphics[width=0.24\textwidth]{matviz_M1_022-022-axes} \caption{Visualization of the directional stiffness of the unit cell with maximally opened fluid channels ($r_x = 0.22, r_y = 0.22$). This spherical plot was generated by drawing the entry $A_{1111}$ of the rotated material tensor ${{\rm A} \kern-0.6em{\rm A}} \in \mathbb{S}^{6}$ for varying rotation angles $(\theta, \phi) \in [0,2\pi]^2$ about the $z$- and $y$-axes.
For instance, the sketched arrow points to $(\pi/2,0)$, and its length of 1.9457 comes from the first entry of the material tensor rotated by $\pi/2$ about the $z$-axis.} \label{fig:matviz_alpha-022} \end{figure} The interpolation of $\mathcal{H}_1$ is based on $A_1^{\text{nodes}}$. Here, $A_1^{\text{nodes}}$ is the parameter grid spanned by the components of $\boldsymbol{\alpha}_1$, where for each component we chose 11 equally spaced samples. The subproblems of the SGP algorithm are solved based on the discrete parameter grid $A_1^\text{grid}$. For this grid, we chose a sample size of 28 for each of the two channel radii; again, the samples are equally spaced.\\ For the following optimization results with the weighted-sum formulation of structural compliance and fluid flux, we employ an initial design guess, visualized in \cref{fig:init-015-015}, that is neither particularly favorable for the mechanical nor for the fluid flow state. \begin{figure}[htb] \centering \includegraphics[width=0.27\textwidth]{poroel_pts10_lvl2_15x10x2_lphi1_lpsi_10_init_015_initDesign} \caption{Homogeneous initial design with ${r_x=r_y=0.15}$, no cell rotation, and physical performance $\Phi_\text{init} = 28.9$ and $\Psi_\text{init} = 0.135$.}\label{fig:init-015-015} \end{figure} For the described setting, we choose $\Lambda_\Psi = -10$ and obtain the optimized design shown in \cref{fig:lamb-10-no-rot-z0}. Note that the design domain is discretized by two finite element layers in the $z$-direction. We found that, for all numerical results presented in this paper, the differences between the optimized designs at layer $z=0$ and layer $z=1$ are so small that they are not visually discernible. For this reason, we will only show optimized designs for layer $z=0$ in the rest of the paper.
\begin{figure}[htbp] \centering \subfloat[Optimized design ($z=0$)\label{fig:lamb-10-no-rot-z0}]{\includegraphics[width=0.22\textwidth]{poroel_pts10_lvl2_15x10x2_lphi1_lpsi_10_init_015_optDesign_z-0}}\quad \subfloat[Optimized design ($z=1$)\label{fig:lamb-10-no-rot-z1}]{\includegraphics[width=0.22\textwidth]{poroel_pts10_lvl2_15x10x2_lphi1_lpsi_10_init_015_optDesign_z-1}} \\ \subfloat[Mechanical state\label{fig:lamb-10-no-rot-mech}]{\includegraphics[width=0.24\textwidth]{poroel_pts10_lvl2_15x10x2_lphi1_lpsi_10_init_015_optDesign_u-warped}} \\ \subfloat[Pressure field]{\includegraphics[width=0.24\textwidth]{poroel_pts10_lvl2_15x10x2_lphi1_lpsi_10_init_015_optDesign_pressure}} \\ \subfloat[Velocity field\label{fig:lamb-10-no-rot-velo}]{\includegraphics[width=0.24\textwidth]{poroel_pts10_lvl2_15x10x2_lphi1_lpsi_10_init_015_optDesign_velocity}} \caption{Optimization result for $\Lambda_\Psi= -10$ and fixed local micro-structure orientation (no rotation) with $\Phi_\text{opt}=27.25$ and $\Psi_\text{opt}=0.275$ for the optimized design in \protect\subref{fig:lamb-10-no-rot-z0},\protect\subref{fig:lamb-10-no-rot-z1}. The initial guess is the design shown in \cref{fig:init-015-015}. In \protect\subref{fig:lamb-10-no-rot-mech}, the mechanical state of the optimized design is visualized by deforming the domain by the physical displacements. The strain energy is shown in colors. In \protect\subref{fig:lamb-10-no-rot-velo}, the flow direction is visualized by equally scaled arrows, and the colors indicate the magnitude of the flow field.} \label{fig:min-poroel_no-rot_lambda-10} \end{figure} SGP stopped after 19 iterations because the difference between the objective values of the old and the new design was found to be 0. We note that this comparatively low number of iterations is related to the fineness of the design discretization. Thus, using more grid points could lead to a slightly larger number of iterations.
On the other hand, in those experiments that we performed in this direction, the visualizations of the obtained results could hardly be distinguished, see \cref{fig:comp-Agrid1}. This is why we do not report results for different choices of $A^\text{grid}_i, \; i \in I$. \begin{figure} \centering \subfloat[\label{fig:comp-Agrid1-10}]{\includegraphics[width=0.22\textwidth]{poroel_pts10_phi_180_15x10x2_init_015_phi_0_lambda_10_optDesign}} \quad \subfloat[\label{fig:comp-Agrid1-28}]{\includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda10_optDesign}} \caption{Two optimized designs for different sample sizes of $A^\text{grid}_1$. \protect\subref{fig:comp-Agrid1-10} 10 samples each for $r_x$ and $r_y$ and 180 samples for $\varphi$. \protect\subref{fig:comp-Agrid1-28} 28 samples each for $r_x$ and $r_y$ and 180 samples for $\varphi$. Here, $\Lambda_\Phi=1$ and $\Lambda_\Psi=-10$. The visual differences are barely perceptible, although \protect\subref{fig:comp-Agrid1-28} has a 1.5\% lower compliance and a 1.7\% higher flux than \protect\subref{fig:comp-Agrid1-10}.} \label{fig:comp-Agrid1} \end{figure} A second observation we can make is that the fluid channels in the resulting designs are fully connected. This is due to the fact that no rotational design degrees of freedom were used. On the other hand, we will see next that the performance improves considerably if local rotations of the micro-structures are also allowed. \subsubsection{Optimized local in-plane rotation of micro-structure} We introduce the angle variable $\varphi \in [0,\pi]$ to allow in-plane rotation, about the $z$-axis, of the micro-structure.
The effective material coefficients are rotated by $\varphi$ with the following analytical expressions: \begin{align} {{\rm A} \kern-0.6em{\rm A}}_\text{rot}(r_x,r_y,\varphi) &= {\boldsymbol{Q}}_6(\varphi) {{\rm A} \kern-0.6em{\rm A}}(r_x,r_y){\boldsymbol{Q}}_6(\varphi)^T, \nonumber\\ {\boldsymbol{B}}_\text{rot}(r_x,r_y,\varphi) &= {\boldsymbol{Q}}_3(\varphi) {\boldsymbol{B}}(r_x,r_y){\boldsymbol{Q}}_3(\varphi)^T, \nonumber\\ {\boldsymbol{K}}_\text{rot}(r_x,r_y,\varphi) &= {\boldsymbol{Q}}_3(\varphi) {\boldsymbol{K}}(r_x,r_y){\boldsymbol{Q}}_3(\varphi)^T, \end{align} where ${\boldsymbol{Q}}_6(\varphi) \in \mathbb{R}^{6 \times 6}$ is the rotation matrix for the stiffness tensor ${{\rm A} \kern-0.6em{\rm A}}$ in Voigt notation and ${\boldsymbol{Q}}_3(\varphi) \in \mathbb{R}^{3 \times 3}$ is the rotation matrix for the Biot coupling and permeability tensors. We note that no additional evaluations of the homogenization operators are required, as the effective material tensors are rotated instead of the micro-structure itself. For the brute-force approach to solve the SGP subproblems with \cref{alg:sub-problem}, $\varphi$ is discretized with $180$ steps.
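For rotations about the $z$-axis, a matrix playing the role of ${\boldsymbol{Q}}_6(\varphi)$ can be assembled as a Bond transformation matrix from the $3 \times 3$ rotation ${\boldsymbol{Q}}_3(\varphi)$, as sketched below in Python. The Voigt ordering $(11,22,33,23,13,12)$ is our assumption, since the paper does not state its convention; all names are illustrative.

```python
import numpy as np

def rotation_z(phi):
    """3x3 rotation about the z-axis (plays the role of Q3)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def bond_matrix(phi):
    """6x6 Bond transformation Q6 with AA_rot = Q6 @ AA @ Q6.T for a
    stiffness tensor in Voigt notation, ordering (11,22,33,23,13,12)."""
    R = rotation_z(phi)
    pairs = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
    Q = np.zeros((6, 6))
    for I, (i, j) in enumerate(pairs):
        for J, (k, l) in enumerate(pairs):
            if k == l:      # normal components
                Q[I, J] = R[i, k] * R[j, k]
            else:           # shear components: symmetrized product
                Q[I, J] = R[i, k] * R[j, l] + R[i, l] * R[j, k]
    return Q
```

A rotation by $\pi/2$ about the $z$-axis swaps the roles of the $x$- and $y$-entries, e.g. the $A_{1111}$ entry of the rotated tensor equals the original $A_{2222}$ entry, which is a convenient sanity check.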
\begin{figure}[ht] \centering \subfloat[Design after one iteration\label{fig:example-poroel-lambdaPsi-10-iter-1}]{ \includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda10_iter-2}} \, \subfloat[Optimized design\label{fig:example-poroel-lambdaPsi-10-optDesign}]{ \includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda10_optDesign}} \\ \subfloat[Pressure field]{\includegraphics[width=0.24\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda10_optDesign_pressure}} \quad \subfloat[Velocity field]{\includegraphics[width=0.24\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda10_optDesign_velocity}} \quad \subfloat[Mechanical strain]{\includegraphics[width=0.24\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda10_optDesign_strain}} \\ \caption{Optimized design with rotational design degrees of freedom and respective physical state for $\Lambda_\Phi = 1$ and $\Lambda_\Psi = -10$, with $\Phi_\text{opt}=27.1$ and $\Psi_\text{opt}=0.413$.} \label{fig:poroel_rot_lambda-10} \end{figure} Let us again set $\Lambda_\Phi = 1$ and $\Lambda_\Psi = -10$, as in \cref{fig:min-poroel_no-rot_lambda-10}, and observe in \cref{fig:example-poroel-lambdaPsi-10-iter-1,fig:example-poroel-lambdaPsi-10-optDesign} how the design evolves as the two physical models counteract each other: the mechanical model strives for as much material as possible to minimize the compliance, while the fluid flux is maximized when there is less material in the design domain. The convergence plot for the merit function $\mathcal{J}$ and the compliance function $\Phi$, displayed in \cref{fig:poroel_rot_lambda-10-convergence}, shows that the compliance drops in the first iteration, then increases a bit, and finally settles around the value of 27.0. In general, we observed in our numerical studies that the largest design changes occur within the first few iterations.
Afterwards, minor changes are made to further tweak the objective. This behavior shows the good quality of the SGP model and its approximations, described in \cref{sec:sgp}. Let us have a closer look at the intermediate design shown in \cref{fig:example-poroel-lambdaPsi-10-iter-1}. Again, the initial guess is neither particularly favorable for the mechanical nor for the fluid flow state. After the first iteration, we see in \cref{fig:example-poroel-lambdaPsi-10-iter-1} that some channels close to the outflow region are opened widely, while cells closer to the mechanical support were adjusted to have narrower fluid channels to improve the mechanical performance of the design. In comparison to the solution in \cref{fig:min-poroel_no-rot_lambda-10}, where the orientation was fixed, this solution has a 1\% smaller compliance and a fluid flux that is about 47\% higher. We would like to emphasize that the local orientation field looks rather smooth, although we have neither applied a stress-based warm start for the rotation variable, as proposed by \cite{pedersen1989,norris2006}, nor employed a regularization technique. We can also observe that the total number of required iterations did not increase after adding the rotational design degrees of freedom. \begin{figure} \centering \includegraphics[width=0.29\textwidth]{poroel_rot_init015_phi0_lambda10_convergence_compliance} \quad \includegraphics[width=0.36\textwidth]{poroel_rot_init015_phi0_lambda10_convergence_flux} \caption{Convergence plots for the design shown in \cref{fig:poroel_rot_lambda-10}.} \label{fig:poroel_rot_lambda-10-convergence} \end{figure} We conclude this subsection by presenting a Pareto front for this type of bicriterial weighted-sum formulation in \cref{fig:pareto-one-cell}. All optimizations were based on the initial guess shown in \cref{fig:init-015-015}. This implies that, again, no warm-starting technique was employed to proceed from one point to the next on the Pareto curve.
Nevertheless, a Pareto curve is obtained in which none of the points is dominated by another one. This again is a hint that the SGP method is able to avoid poor local solutions. The number of outer iterations required to solve the problems corresponding to all points on the Pareto curve varied between $3$ and $31$. The rather low number of $3$ iterations was obtained for the extreme case $\Lambda_\Psi=0$. \pgfplotstableread{pareto_alpha28_phi180.txt}{\tablefine} \begin{figure}[htbp] \centering \begin{tikzpicture} \begin{axis}[ ymin=0, ymax=0.6, xlabel = {compliance $\Phi$}, ylabel = {flux $\Psi$}, xtick distance = 1, ytick distance = 0.25, grid = both, minor tick num = 1, major grid style = {lightgray}, width = 0.46\textwidth, height = 0.36\textwidth, legend cell align = {left}, legend pos = north west ] \addplot[blue, mark = *] table [x = {phi}, y = {psi}] {\tablefine}; \node [right] at (axis cs: 2.54e+01 , 9.31e-02) {\mbox{\footnotesize$\Lambda_{\Psi}=-3$}}; \node [right] at (axis cs: 2.61e+01 , 2.76e-01) {\mbox{\footnotesize$-5$}}; \node [above] at (axis cs: 2.7e+01 , 4.13e-01) {\mbox{\footnotesize$-10$}}; \node [above] at (axis cs: 2.78e+01 , 4.66e-01) {\mbox{\footnotesize$-15$}}; \node [above] at (axis cs: 2.89e+01 , 5.15e-01) {\mbox{\footnotesize$-30$}}; \node [text=red,below] at (axis cs: 2.96e+01 , 5.38e-01) {\mbox{\footnotesize$-60$}}; \addplot[ color=red, mark=*, ] coordinates { (2.96e+01 , 5.38e-01) }; \end{axis} \end{tikzpicture} \caption{Pareto front for varying $\Lambda_\Psi$ in the weighted-sum formulation $\mathcal{F}_\text{phys} = \Phi +\Lambda_\Psi \Psi$. The optimization was based on cells of type $1$, and the initial design was always $[0.15,0.15,0]^{n_\mathrm{el}}$.
As we are minimizing $\Phi$ and maximizing $\Psi$, a point $P=(P_\Phi,P_\Psi)$ in the image space of $\Phi$ and $\Psi$ dominates a point $Q=(Q_\Phi,Q_\Psi)$ if $P_\Phi \leq Q_\Phi$ \emph{and} $P_\Psi \geq Q_\Psi$.}\label{fig:pareto-one-cell} \end{figure} The optimized designs for various choices of $\Lambda_\Psi$ are visualized in \cref{fig:rot_one_cell_var_lamb}. It is observed that, with decreasing $\Lambda_\Psi$, the compliance-minimized design (\cref{fig:rot_one_cell_var_lamb_a}) is almost smoothly transformed into a fully flux-based design (\cref{fig:rot_one_cell_var_lamb_h}). \begin{figure*}[ht!] \centering \subfloat[Compliance minimized design\label{fig:rot_one_cell_var_lamb_a}]{\includegraphics[width=0.23\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda0_optDesign_z-0}\quad} \subfloat[$\Lambda_\Psi=-3$]{\includegraphics[width=0.23\textwidth]{rot_poroel_alpha28_phi180_15x10x2_init015_phi0_lambda3_optDesign}} \quad \subfloat[$\Lambda_\Psi=-5$]{\includegraphics[width=0.23\textwidth]{rot_poroel_alpha28_phi180_15x10x2_init015_phi0_lambda5_optDesign}} \quad \subfloat[$\Lambda_\Psi=-10$]{\includegraphics[width=0.23\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda10_optDesign}} \\ \subfloat[$\Lambda_\Psi=-15$]{\includegraphics[width=0.23\textwidth]{rot_poroel_alpha28_phi180_15x10x2_init015_phi0_lambda15_optDesign}} \quad \subfloat[$\Lambda_\Psi=-30$]{\includegraphics[width=0.23\textwidth]{rot_poroel_alpha28_phi180_15x10x2_init015_phi0_lambda30_optDesign}} \quad \subfloat[$\Lambda_\Psi=-60$]{\includegraphics[width=0.23\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda60_optDesign}} \quad \subfloat[Flux maximized design\label{fig:rot_one_cell_var_lamb_h}]{\includegraphics[width=0.23\textwidth]{min_flux_init_008_phi_0_optDesign_z-0}} \caption{Visualization of optimized designs associated with the labeled points in \cref{fig:pareto-one-cell}.} \label{fig:rot_one_cell_var_lamb} \end{figure*}
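The dominance relation from the caption of \cref{fig:pareto-one-cell} is easy to check programmatically. The following sketch (our own illustration, not code from the paper) verifies that no point on a candidate front is dominated by another; the listed (compliance, flux) pairs are approximate values read off the labeled points in the figure.

```python
def dominates(p, q):
    # P dominates Q when minimizing compliance Phi and maximizing flux Psi:
    # P_Phi <= Q_Phi and P_Psi >= Q_Psi
    return p[0] <= q[0] and p[1] >= q[1]

def is_pareto_front(points):
    # True if none of the points is dominated by another one
    return not any(dominates(p, q)
                   for p in points for q in points if p is not q)

# approximate (compliance, flux) pairs of the labeled points in the figure
front = [(25.4, 0.0931), (26.1, 0.276), (27.0, 0.413),
         (27.8, 0.466), (28.9, 0.515), (29.6, 0.538)]
print(is_pareto_front(front))  # True
```
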
\subsection{Optimization with two unit cell types}\label{sec:two-cells} We want to study the ability of SGP to handle more than one unit cell type. For this purpose, we add unit cell type $2$, which consists of a void sphere surrounded by matrix material (see second row of \cref{fig:design_params}). In this case, the only design parameter is the radius $r_s \in [0.1,0.4]$ of the void sphere. The smaller the void sphere, the higher the volume fraction of the matrix phase and therefore the stiffer the cell. Thus, cells of type $2$ are particularly favorable for the mechanical part of the objective. When only optimizing the compliance, we obtain the trivial solution shown in \cref{fig:cantilever_no-vol_two-materials}. \begin{figure}[htbp] \centering \subfloat[Only cells of type $1$\label{fig:min-compl-M1}]{\includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda0_optDesign_z-0}}\quad \subfloat[Cells of type $1$ and $2$: optimized design with $\Phi_\text{opt}=19.62$\label{fig:min-compl-M2}]{ \includegraphics[width=0.22\textwidth]{cantilever_twoCells_init_022_phi_0_optDesign}} \caption{Compliance minimized designs: \protect\subref{fig:min-compl-M1} only allowing cells of type 1 and \protect\subref{fig:min-compl-M2} allowing cells of type 1 and 2. The red dots visualize the void inclusions of cells of type 2. The optimized compliance of design \protect\subref{fig:min-compl-M2} is 24\% better than the compliance of the optimized design \protect\subref{fig:min-compl-M1}.} \label{fig:cantilever_no-vol_two-materials} \end{figure} For the fluid flow, cells of type $2$ are of no use, as they are not permeable. However, for numerical reasons, we set the permeability of the latter cells to 0.001. Cells of type $1$ have orthotropic mechanical properties and transversely isotropic permeability tensors, whereas cells of type $2$ have isotropic mechanical properties and no permeability.
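The stated relation between void radius and cell stiffness can be made quantitative. Assuming a unit cube cell with a centered spherical void of radius $r_s$ (our own sketch, not code from the paper), the matrix volume fraction is $\rho = 1 - \tfrac{4}{3}\pi r_s^3$, which reproduces the values $\rho(\mathcal{H}_2(0.4))$ and $\rho(\mathcal{H}_2(0.1))$ reported for cell type 2:

```python
from math import pi

def matrix_volume_fraction(r_s: float) -> float:
    """Volume fraction of the stiff matrix in a unit cube cell containing
    a centered spherical void of radius r_s (non-touching, r_s <= 0.5)."""
    assert 0.0 < r_s <= 0.5
    return 1.0 - (4.0 / 3.0) * pi * r_s**3

# bounds of the admissible range r_s in [0.1, 0.4]
print(f"{matrix_volume_fraction(0.4):.4f}")  # 0.7319, i.e. 73.19%
print(f"{matrix_volume_fraction(0.1):.4f}")  # 0.9958, i.e. 99.6%
```
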
Although cell types $1$ and $2$ are disjoint in their parameter spaces, the corresponding ranges of volume fractions of the stiff matrix material overlap. We have $\rho\left(\mathcal{H}_1([0.08,0.08])\right) = 87.9\%,\ \rho\left(\mathcal{H}_1([0.22,0.22])\right) = 71.54\%$ and $\rho\left(\mathcal{H}_2(0.4)\right)=73.19\%,\ \rho\left(\mathcal{H}_2(0.1)\right) = 99.6\%$. $A^{\text{nodes}}_2$, the basis for the interpolation of $\mathcal{H}_2$, consisted of 30 uniformly distributed samples for $r_s \in [0.1,0.4]$, and the optimization procedure was performed on $A_2^\text{grid}$ with 60 samples, again uniformly distributed. Next, we present the updated Pareto front for compliance minimization and fluid flux maximization with both unit cell types in \cref{fig:pareto_two-cells}. \pgfplotstableread{pareto_alpha28_phi180_beta60_twoCells.dat}{\paretotwocellsfine} \pgfplotstableread{pareto_alpha28_phi180.txt}{\tableonecell} \begin{figure}[h!] \centering \begin{tikzpicture} \begin{axis}[ ymin=0, ymax=0.6, xlabel = {compliance $\Phi$}, ylabel = {flux $\Psi$}, xtick distance = 1, ytick distance = 0.25, grid = both, minor tick num = 1, major grid style = {lightgray}, width = 0.46\textwidth, height = 0.36\textwidth, legend cell align = {left}, legend pos = north west ] \addplot[blue, mark = *] table [x = {phi}, y = {psi}] {\paretotwocellsfine}; \addplot[red, mark = o] table [x = {phi}, y = {psi}] {\tableonecell}; \node [right] at (axis cs: 2.12e+01 , 5.40e-02){\mbox{\footnotesize$\Lambda_{\Psi}=-2$}}; \node [right] at (axis cs: 2.43e+01 , 3.17e-01) {\mbox{\footnotesize$-5$}}; \node [text=red,below] at (axis cs: 2.95e+01 , 5.46e-01) {\mbox{\footnotesize$-60$}}; \addplot[ color=red, mark=*, ] coordinates { (2.95e+01 , 5.46e-01) }; \end{axis} \end{tikzpicture} \caption{Comparison of Pareto curves for varying $\Lambda_\Psi$. Blue: optimization with cells of type 1 and 2. Red: optimization with only cells of type 1.
The blue curve clearly dominates the red curve.} \label{fig:pareto_two-cells} \end{figure} We again stress that we did not use enhanced initial designs for the computation of the points on the Pareto curve. The comparison of the new (blue) curve with the old (red) curve shows that consistently better designs are obtained. Points on the blue curve strictly dominate points on the red curve in the Pareto sense. This is not surprising, as the addition of a new unit cell type increases the design freedom. Still, it is worth mentioning that the absence of any outliers in this respect again underlines the stability of our SGP method. The number of required outer iterations varied between 4 and 40, which means that no significant increase in the number of iterations is observed, although a second cell type has been added. In \cref{fig:opt-two-cells-var-lamb}, we can observe how the number of cells of type 2 in the optimized design decreases with decreasing $\Lambda_\Psi$. This is expected, as cell type 2 is completely useless for a flux-favored design. \begin{figure}[ht!] \centering \subfloat[Compliance minimized design\label{fig:two-cells-min-compl}]{\includegraphics[width=0.23\textwidth]{cantilever_twoCells_init_022_phi_0_optDesign}}\quad \subfloat[$\Lambda_\Psi = -2$]{\includegraphics[width=0.23\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda2_optDesign}} \quad \subfloat[$\Lambda_\Psi = -5$]{\includegraphics[width=0.23\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda5_optDesign}}\\ \subfloat[$\Lambda_\Psi = -60$]{\includegraphics[width=0.23\textwidth]{poroel_alpha28_phi180_beta60_15x10x2_init015_phi0_lambda60_optDesign}}\quad \subfloat[Flux maximized design]{\includegraphics[width=0.23\textwidth]{min_flux_init_008_phi_0_optDesign_z-0}} \caption{Results of bicriterial optimization with cells from both type 1 and 2 for varying $\Lambda_\Psi$.
The designs visualized here correspond to the labeled data points of the Pareto curve in \cref{fig:pareto_two-cells}.} \label{fig:opt-two-cells-var-lamb} \end{figure} We note that so far all results presented have been computed without employing a resource constraint. Just to demonstrate that SGP can also easily handle problems where a resource constraint is added, we briefly discuss a selected result in \cref{fig:cantilever_vol-constr_two-materials}. \begin{figure}[ht] \centering \subfloat[Optimized design at $z=0$ ]{\includegraphics[width=0.24\textwidth]{cantilever_15x10x2_init_022_phi_0_vol_08_optDesign_z-0}} \\ \subfloat[Mechanical state with $\Phi_\text{opt}=23.78$]{\includegraphics[width=0.24\textwidth]{cantilever_15x10x2_init_022_phi_0_vol_08_optDesign_u-warped}} \caption{Result of pure compliance minimization when allowing unit cells of type $1$ and $2$ with an active volume fraction constraint setting $\bar{\rho}_m = 0.8$ on the stiff material phase. Compared to \cref{fig:two-cells-min-compl}, it is observed that only now do cells of type 1 also appear in the design. Moreover, the resource constraint leads to a variation of the parameter $r_s$ for cell type 2. } \label{fig:cantilever_vol-constr_two-materials} \end{figure} \subsection{Optimization with both cell types and regularization of design labels and interface}\label{sec:two-cells-reg} We introduce a regularization of the optimization problem by applying a weighted-sum filter $\mathbb{F}$ (e.g.,~ \cite{bruns-filter,bourdin-filter}), which is often used in the context of topology optimization, to regularization labels that are directly related to the unit cells' geometric parameters.
For this we introduce mappings \begin{equation}\label{eq:reg_type_1} l_1: \begin{cases} A_1 &\to \mathbb{R}^3 \\ (r_x,r_y,\varphi) &\mapsto R_1 \end{cases} \end{equation} where \begin{align*} R_1 &= \left(\frac{r_x-0.08}{0.14}, \frac{r_y-0.08}{0.14}, \cos\left(2\frac{\varphi}{\pi}-\frac{\pi}{2}\right)\right)^\top, \end{align*} and \begin{equation}\label{eq:reg_type_2} l_2: \begin{cases} A_2 &\to \mathbb{R}^3 \\ r_s &\mapsto R_2 = (-1,-1,-1)^\top. \end{cases} \end{equation} This choice of labeling has the following effects: Within type 1, the maximal distance from the lower to the upper label bound is 1. This is the same distance required to jump from the stiffest cell of type 1, with $r_x=r_y=0.08$, to any cell of type 2. Therefore, the interface between cells of type 1 and 2 is also penalized. The most expensive change is a jump from a type-1 cell with maximal channel radii, which is most beneficial for the fluid flux, to any cell of type 2, which is preferred by the compliance. The shifted cosine function appearing in the expression for $(R_1)_3$ is employed to circumvent ambiguities for the angular variable. Employing these regularization labels, $\mathcal{J}_\text{reg}$ from \cref{eq:Jreg-approx} changes to \begin{equation} \mathcal{J}_\mathrm{reg}({\boldsymbol{R}}) = \frac{1}{2} \sum_{\ell=1}^3 \|{\boldsymbol{R}}_\ell - \mathbb{F} ({\boldsymbol{R}}_\ell) \|^2, \end{equation} where ${\boldsymbol{R}}_\ell \in \mathbb{R}^{{n_\mathrm{el}}}$ collects the $\ell$-th components of the regularization labels assigned to each finite element $e$, defined by \cref{eq:reg_type_1} or \cref{eq:reg_type_2} if cell type 1 or cell type 2, respectively, is chosen for the corresponding element.\\ Next, we study the influence of the regularization on the optimized result for the particular choice $\Lambda_\Psi = -3$. \cref{fig:opt_two_cells_reg} displays the changes in the design with increasing regularization weight $\Lambda_\Xi$.
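To illustrate, the label maps and the regularization term can be sketched as follows. This is our own illustration: the weighted-sum filter $\mathbb{F}$ is replaced by a simple moving-average stand-in on a one-dimensional row of elements, so only the structure of the computation, not the actual filter, is reproduced.

```python
import numpy as np

def label_type1(r_x, r_y, phi):
    # l_1: geometric parameters of a type-1 cell mapped to a label vector
    return np.array([(r_x - 0.08) / 0.14,
                     (r_y - 0.08) / 0.14,
                     np.cos(2.0 * phi / np.pi - np.pi / 2.0)])

def label_type2(r_s):
    # l_2: every type-2 cell gets the constant label (-1, -1, -1)
    return np.array([-1.0, -1.0, -1.0])

def filter_avg(R):
    # stand-in for the filter F: 3-point moving average with edge padding
    padded = np.pad(R, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def J_reg(labels):
    # labels: (n_el, 3) array of per-element regularization labels
    R = np.asarray(labels)
    return 0.5 * sum(np.sum((R[:, l] - filter_avg(R[:, l]))**2)
                     for l in range(3))

# hypothetical row of elements: a type-2 cell between two type-1 cells
labels = [label_type1(0.22, 0.22, 0.0),
          label_type2(0.3),
          label_type1(0.22, 0.22, 0.0)]
print(J_reg(labels))  # positive: the type-1/type-2 interface is penalized
```

A spatially uniform label field passes through the averaging stand-in unchanged, so its regularization cost is zero, which is the intended behavior of the penalty.
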
\begin{figure}[ht] \centering \subfloat[Initial design]{\includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_incl60_15x10x2_init_008_phi_0_l_3_nofilter_initDesign}} \quad \subfloat[No regularization\label{fig:opt_two_cells_reg_nofilt}]{\includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_incl60_15x10x2_init_008_phi_0_l_3_nofilter_optDesign}} \quad \subfloat[$\Lambda_\Xi = 0.01$\label{fig:opt_two_cells_reg_p001}]{\includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_incl60_15x10x2_init_008_phi_0_l_3_rad_13_penfl_001_optDesign}} \\ \subfloat[$\Lambda_\Xi = 0.02$]{\includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_incl60_15x10x2_init_008_phi_0_l_3_rad_13_penfl_002_optDesign}} \quad \subfloat[$\Lambda_\Xi = 0.025$]{\includegraphics[width=0.22\textwidth]{poroel_alpha28_phi180_incl60_15x10x2_init_008_phi_0_l_3_rad_13_penfl_0025_optDesign}} \quad \caption{Results for varying $\Lambda_\Xi$ with a filter radius of $1.3$ elements and $\Lambda_\Psi = -3$.} \label{fig:opt_two_cells_reg} \end{figure} The respective objective values are listed in \cref{tab:opt_two_cells_reg}. The regularization of the fluid channel radii can be observed well when comparing the designs in the lower right corner of \cref{fig:opt_two_cells_reg_nofilt} and \cref{fig:opt_two_cells_reg_p001}. With increasing $\Lambda_\Xi$, the interface between unit cell types 1 and 2 in the upper right corner of the design domain vanishes and the design is dominated by cells of type 1.
\begin{table}[htb] \centering \begin{tabular}{c||c|c|c|c} $\Lambda_\Xi$ & $\mathcal{J}_\text{mer,opt}$ & $\mathcal{J}_\text{reg,opt}$ & $\Phi_\text{opt}$ & $\Psi_\text{opt}$ \\ \hline 0 &21.34 &11.5 &21.57426 &0.0765\\ 0.01 &21.65 &0.0389 &21.65041 &0.0140\\ 0.011 &21.67 &0.0484 &21.66846 &0.0142\\ 0.015 &21.99 &0.0747 &21.95892 &0.0139\\ 0.02 &22.40 &0.092 &22.34912 &0.0135\\ 0.025 &22.70 &0.0712 &22.66730 &0.0136 \end{tabular} \caption{Performance of designs shown in \cref{fig:opt_two_cells_reg} with $\mathcal{J}_\text{mer,opt}(\Lambda_\Xi) = \mathcal{J}_\text{reg,opt}(\Lambda_\Xi) + \Phi_\text{opt} + \Lambda_\Psi \Psi_\text{opt}$.} \label{tab:opt_two_cells_reg} \end{table} \section{Conclusion and Outlook} We presented a Sequential Global Programming (SGP) approach to homogenization-based structural optimization, which can be viewed as a free material optimization constrained by the set of admissible geometric material parameters. By means of numerical examples, in which we successively added more ingredients to the optimization problem, we demonstrated that the proposed SGP approach, with its first-order approximations, provides good and reasonable optimized designs without the necessity of a particular design initialization or the employment of a regularization strategy for purposes of convergence. Furthermore, SGP is able to handle several material classes with disjoint parameter sets without additional interpolation and penalization strategies. We further observed that optimizing the local orientation of the microstructure brings along a significant improvement, up to 48\%, of the fluid flux. We have not actively addressed the subject of connectivity within the microstructure, that is, ensuring connectivity of the fluid-saturated channels.
However, the regularization approach presented in \cref{sec:two-cells-reg} can be used to control the degree of variation of the local microstructure rotation, and we have seen, by means of the presented numerical examples, that already a mild regularization has a fair impact on the design. Although the resolution of the finite element approximation, and thus the number of design elements, in the examples of \cref{sec:numerical-results} was chosen rather coarse, it served the purpose of demonstrating the presented features of SGP. With regard to finer resolutions, the algorithm can be well parallelized with respect to the design elements due to the block-separability of the first-order approximations. The brute-force approach in the subproblem solver, described in \cref{alg:sub-problem}, can be further sped up by employing a hierarchical scanning of the design grids $A^\text{grid}_i$: Start with a rather coarse number of samples and determine the minimizer among those. In the next level, consider only the current minimizer and its neighbors and perform the same search within this subset of $A^\text{grid}_i$, for all $i \in I$. Repeat this step until the maximum desired number of levels or some accuracy is achieved. Note that, with this strategy, the quality of the design depends on the number of samples on the coarsest grid level. An alternative would be to apply a Lipschitz optimization solver, see \cite{Hansen1995}, to each design element and type in a black-box manner. Further research will focus on extending the SGP approach for homogenization-based optimization to transient problems and, in particular, to dynamic metamaterial design. Another challenge is to extend the proposed optimization approach to an approximate treatment of nonlinear two-scale problems, with the homogenized coefficients depending on the macroscopic response by virtue of the sensitivity analysis, as discussed in \cite{Rohan-Lukes-2015}.
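The hierarchical scanning strategy outlined above can be sketched as follows; this is our own illustration with a hypothetical per-element merit function (the actual subproblem objective would be the first-order SGP approximation), shown for a one-dimensional grid such as $A_2^\text{grid}$ with 60 samples for $r_s$.

```python
def hierarchical_scan(merit, grid, levels=3, coarse_step=8):
    """Coarse-to-fine search for the minimizer of `merit` on a 1D design
    grid: scan a coarse subsample first, then refine around the incumbent."""
    # level 0: scan a coarse subsample of the grid
    candidates = list(range(0, len(grid), coarse_step))
    best = min(candidates, key=lambda i: merit(grid[i]))
    step = coarse_step
    for _ in range(1, levels):
        step = max(step // 2, 1)
        # consider only the incumbent and its neighbors at the finer step
        lo, hi = max(best - step, 0), min(best + step, len(grid) - 1)
        candidates = list(range(lo, hi + 1, max(step // 2, 1)))
        best = min(candidates, key=lambda i: merit(grid[i]))
    return grid[best]

# usage: minimize a toy merit over a fine parameter grid for r_s in [0.1, 0.4]
grid = [0.1 + 0.3 * k / 59 for k in range(60)]
r_opt = hierarchical_scan(lambda r: (r - 0.23)**2, grid)
print(round(r_opt, 3))
```

As noted in the text, the quality of the result depends on the coarsest sampling: a merit function with narrow local minima between coarse samples can be missed at level 0.
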
\section{Acknowledgments} The authors B. N. Vu and M. Stingl gratefully acknowledge the financial support by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) in the course of the FIONA (LuFo VI-1, FKZ: 20W1913F) project. The research conducted by E. Rohan and V. Luke\v{s} was supported by the grant projects GACR 19-04956S and GACR 22-00863K of the Czech Scientific Foundation. \section{Statements and Declarations} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. \section{Replication of results} The algorithm of the proposed optimization approach was described in \cref{alg:sgp-multimat} and \cref{alg:sub-problem}. Its implementation, as well as exemplary problem settings and respective data to reproduce the numerical results presented in \cref{sec:numerical-results}, are publicly available on \href{https://gitlab.com/bnvu/sgp-poroel}{https://gitlab.com/bnvu/sgp-poroel}.
\subsection{Reprojection-based registration} \label{subsec:reprojection} As discussed earlier, there are several sensors present in the sensor suite which capture information in a time-synchronized manner. However, to utilize the information from all channels together, the images captured by the sensors have to be registered accurately. A 3D re-projection method is used for robust registration between the sensors. Re-projection is a process allowing synchronized video streams from different cameras or sensors to be precisely aligned despite the cameras being at different spatial positions. For this algorithm to be functional, the precise relative rotations and positions of the cameras, also known as extrinsic parameters, as well as the lens distortion coefficients and projection matrix, or intrinsic parameters, have to be inferred. This is achieved by capturing a series of images of a chosen pattern, for instance, a flat checkerboard target, with all cameras. Both intrinsic and extrinsic parameters can be calculated by detecting the positions of the target's features in the images and using OpenCV's camera calibration module \cite{opencv_library}. \begin{figure}[h] \centering \includegraphics[width=0.95\linewidth]{figures/reprojection.png} \caption{Diagram of the reprojection-based registration of images from multiple sensors.} \label{fig:reproj_flowchart} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.95\linewidth]{figures/hqwmca_front.jpg} \caption{Sample results from reprojection-based alignment: color channel, stereo depth, thermal channel, NIR composite, and SWIR composite. } \label{fig:reproj_channels} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.99\linewidth]{figures/3dtool.png} \caption{Interface developed for visualizing the alignment procedure. The best calibration and parameter tuning can be selected based on this tool.
On the right side of the interface, any channel (here RGB) can be projected into 3D and then, after rectification, reprojected to obtain the same alignment between all channels.} \label{fig:3dtool} \end{figure} The set of cameras is assumed to include a depth-sensing device; for the present discussion, this is achieved by a pair of NIR-sensing cameras. The diagram of the registration process is shown in Fig. \ref{fig:reproj_flowchart}. An example of the computed stereo image and the registered channels is shown in Fig. \ref{fig:reproj_channels}. The NIR cameras undergo an extra calibration step, adapting their intrinsic and extrinsic parameters such that they are properly aligned for depth reconstruction; these views are called rectified left and right in the following, respectively. The depth reconstruction is performed using a block matching algorithm \cite{opencv_library} that measures the disparity between the pair of images. The disparity map is transformed into a point cloud, yielding a point in 3D space for each pixel in the left image. The re-projection algorithm then proceeds by projecting the point cloud, with points tagged by their coordinates on the left rectified image, onto the virtual image planes of the remaining cameras. This procedure thus creates a set of maps from the left rectified image plane to the other cameras' image planes; by inverting these maps, the video streams can be precisely aligned together, for instance to the reference left stream. A screenshot of the graphical user interface (GUI) developed for viewing the alignment process is shown in Fig. \ref{fig:3dtool}. \begin{figure*}[ht!]
\centering \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/print.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/replay.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/rigidmask.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/papermask.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/flexiblemask.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/mannequin.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/glasses.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/makeup.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/tattoo.png}}% \hfil \subfloat[]{\includegraphics[height=2.3cm]{./figures/attacks/wig.png}}% \caption{(a) Print, (b) Replay, (c) Rigid mask, (d) Paper mask, (e) Flexible mask, (f) Mannequin, (g) Glasses, (h) Makeup, (i) Tattoo and (j) Wig. Image taken from \cite{heusch2020deep}.} \label{fig:attacks} \end{figure*} \subsection{Protocols} The \textit{HQ-WMCA} dataset is distributed with three different protocols, namely \textit{grand-test}, \textit{impersonation} and \textit{obfuscation}. Each of these protocols has three folds, namely the train, validation, and test sets. These folds contain both bonafide samples and attacks (Fig. \ref{fig:attacks}), with a disjoint set of identities across the three sets. The bonafide samples and attacks are roughly equally distributed across the folds, and each video contains 10 frames, sampled uniformly from the captured data. We also noticed that some attacks are questionable and may be considered as ``occluded'' bonafide samples, which could potentially confound the analysis.
Specifically, these borderline cases are: \begin{itemize} \item \textbf{Wigs}: The wigs do not occlude a lot of face regions in most of the cases, and since the PAD and face recognition modules remove the non-face region in the preprocessing stage itself, their effect in spoofing a face recognition system is not clear. Hence, we removed wigs from all the protocols to avoid any bias. \item \textbf{Retro-glasses}: These are similar to normal medical glasses and are nearly identical to a bonafide subject wearing medical glasses; hence, we removed this attack from all our new protocols. \item \textbf{Light makeup}: In the original data collected, for each subject and each makeup session, three samples were collected at different levels of makeup, namely level 0, level 1, and level 2, depending on the amount of makeup applied to the subject. The makeup at level 0 is very insignificant and could be identical to the routine makeup present in bonafide samples. Hence, we have removed the level 0 makeup samples from the newly created protocols to have a consistent set of ground truths. \end{itemize} Three experimental protocols were newly created as curated versions of the protocols provided with the dataset \cite{heusch2020deep}. The newly created curated protocols are hence appended with ``-c'' to emphasize the difference from the original protocols shipped with the dataset, i.e., the adapted and newly created protocols are referred to as the \textit{Grandtest-c}, \textit{Impersonation-c} and \textit{Obfuscation-c} protocols. In addition, we have also created a set of leave-one-out (\textit{LOO}) protocols to emulate unseen attack scenarios. To summarize, the protocols used in the present work are (Table \ref{tab:attacks-distribution}): \begin{itemize} \item \textbf{Grandtest-c}: This is the same grand-test protocol shipped with the \textit{HQ-WMCA} dataset, after removing the borderline cases.
Here, all the attacks appear in the different folds of the dataset: \textit{train}, \textit{validation}, and \textit{test}. \item \textbf{Impersonation-c}: This is the same impersonation protocol shipped with the \textit{HQ-WMCA} dataset, after removing the borderline cases. Only impersonation attacks are present in this protocol, i.e., attacks for which the attacker is trying to authenticate himself as another user. The attacks present in this protocol are shown in Table \ref{tab:attacks-distribution}. \item \textbf{Obfuscation-c}: In a similar manner, this is the same obfuscation protocol shipped with the \textit{HQ-WMCA} dataset, after removing the borderline cases. This protocol consists of obfuscation attacks, which refers to attacks in which the appearance of the attacker is altered so as to prevent being identified by a face recognition system. The attacks present in this protocol are makeups, tattoos, and glasses. \item \textbf{Leave-One-Out protocol}: A set of different sub-protocols is newly created to emulate unseen attacks. The sub-protocols are created by leaving out one attack category from the training set. There are eight sub-protocols in the \textit{LOO} protocol, which emulate unseen attack scenarios. We start with the splits used in the \textit{grandtest-c} protocol and systematically remove one attack in the \textit{train} and \textit{validation} sets; the \textit{test} set consists of \textit{bonafide} samples and the attack which was left out. We created \textit{LOO} protocols for the different attacks present in the dataset, listed below: \textit{LOO\_Flexiblemask}, \textit{LOO\_Glasses}, \textit{LOO\_Makeup}, \textit{LOO\_Mannequin}, \textit{LOO\_Papermask}, \textit{LOO\_Rigidmask}, \textit{LOO\_Tattoo} and \textit{LOO\_Replay}. The distribution of attacks in this protocol can be found in Table \ref{tab:bf_pa-distribution}.
\end{itemize} Overall, there are $360,080$ images considering all the modalities, after excluding the borderline attacks. The statistics of bonafide samples and attacks in each of the folds of each protocol are given in Table \ref{tab:bf_pa-distribution}. \begin{table*}[h] \centering \caption{Distribution of bonafide samples and attacks in the various protocols.} \begin{tabular}{@{}lcccccc@{}} \toprule \multirow{2}{*}{\textbf{Protocol}} & \multicolumn{2}{c}{\textbf{Train}} & \multicolumn{2}{c}{\textbf{Validation}} & \multicolumn{2}{c}{\textbf{Test}} \\ \cmidrule(l){2-7} & Bonafide & Attacks & Bonafide & Attacks & Bonafide & Attacks \\ \midrule Grandtest-c & 228 & 618 & 145 & 767 & 182 & 632 \\ Impersonation-c & 228 & 384 & 145 & 464 & 182 & 440 \\ Obfuscation-c & 228 & 234 & 145 & 303 & 182 & 192 \\ \cmidrule{1-7} LOO\_Flexiblemask & 228 & 528 & 145 & 681 & 182 & 48 \\ LOO\_Glasses & 228 & 582 & 145 & 729 & 182 & 36 \\ LOO\_Makeup & 228 & 444 & 145 & 526 & 182 & 132 \\ LOO\_Mannequin & 228 & 598 & 145 & 729 & 182 & 77 \\ LOO\_Papermask & 228 & 590 & 145 & 743 & 182 & 49 \\ LOO\_Rigidmask & 228 & 456 & 145 & 649 & 182 & 140 \\ LOO\_Tattoo & 228 & 594 & 145 & 743 & 182 & 24 \\ LOO\_Replay & 228 & 582 & 145 & 667 & 182 & 126 \\ \bottomrule \end{tabular} \label{tab:bf_pa-distribution} \end{table*} The distribution of attack types in the three main protocols is shown in Table \ref{tab:attacks-distribution}.
\begin{table} \centering \caption{Distribution of attacks in the different protocols.} \begin{tabular}{llccc} \toprule &\textbf{Attack type} & \textbf{Train} & \textbf{Validation} & \textbf{Test}\\ \midrule \multirow{10}{*}{\textbf{Grandtest-c}} & Flexiblemask & 90 & 86 & 48 \\ & Glasses & 36 & 38 & 36 \\ & Makeup & 174 & 241 & 132 \\ & Mannequin & 20 & 38 & 77 \\ & Papermask & 28 & 24 & 49 \\ & Print & 48 & 98 & 0 \\ & Replay & 36 & 100 & 126 \\ & Rigidmask & 162 & 118 & 140 \\ & Tattoo & 24 & 24 & 24 \\ \cmidrule{2-5} & \textit{Bonafide} & 228 & 145 & 182 \\ \midrule \multirow{7}{*}{\textbf{Impersonation-c}} & Flexiblemask & 90 & 86 & 48 \\ & Mannequin & 20 & 38 & 77 \\ & Papermask & 28 & 24 & 49 \\ & Print & 48 & 98 & 0 \\ & Replay & 36 & 100 & 126 \\ & Rigidmask & 162 & 118 & 140 \\ \cmidrule{2-5} & \textit{Bonafide} & 228 & 145 & 182 \\ \midrule \multirow{4}{*}{\textbf{Obfuscation-c}} & Glasses & 36 & 38 & 36 \\ & Makeup & 174 & 241 & 132 \\ & Tattoo & 24 & 24 & 24 \\ \cmidrule{2-5} & \textit{Bonafide} & 228 & 145 & 182 \\ \bottomrule \end{tabular} \label{tab:attacks-distribution} \end{table} \subsection{Metrics} We have used the standardized ISO metrics for the performance comparison of the various configurations. The Attack Presentation Classification Error Rate (APCER) is defined as the probability of a successful attack: \begin{equation} APCER = \frac{\text{ \# of falsely accepted attacks}}{\text{ \# of attacks}} \end{equation} and the Bonafide Presentation Classification Error Rate (BPCER) is the probability of a bonafide sample being classified as an attack:
\begin{equation} BPCER = \frac{\text{ \# of rejected real attempts}}{\text{ \# of real attempts}} \end{equation} Since we are dealing with a wide range of attacks, for comparison purposes we group all the attacks together when computing the Average Classification Error Rate (ACER). This differs from the ISO/IEC 30107-3 standard \cite{ISO-30107-3}, where each attack needs to be considered separately. However, we report the ACER in an aggregated manner, since it is easier to understand the variation in performance over a large search space of configurations. \begin{equation} \label{eq:ACER} ACER = \frac{APCER + BPCER}{2} \quad \textrm{[\%]} \end{equation} In this work, a BPCER of 1\% in the \textit{dev} set is used to find the decision threshold. This threshold is then applied to the \textit{test} set to compute the ACER, as in \cite{george_mccnn_tifs2019}. \subsection{Channel-wise Experiments} \label{sec:channel_exp} \begin{table*}[ht!] \caption{Performance of different models in the \textit{Grandtest-c}, \textit{Impersonation-c} and \textit{Obfuscation-c} protocols of \textit{HQ-WMCA}, with reprojection and unit spectral normalization. The ACER in the \textit{test} set corresponding to a BPCER 1\% threshold in the \textit{validation} set is shown in the table.
The notation ``D'' indicates ``D-Stereo'' in all the experiments.} \centering \begin{tabular}{lRRRR} \toprule Channels & \multicolumn{1}{c} {Grandtest-c} & \multicolumn{1}{c} {Impersonation-c} & \multicolumn{1}{c} {Obfuscation-c} & \multicolumn{1}{c} {Mean} \\ \midrule RGB & 4.6 & 0.6 & 14.8 & 6.6 \\ D-Stereo & 26.7& 3.8 & 48.7 & 26.4 \\ D-Intel & 29.0& 10.2& 41.8 & 27.0\\ T & 44.9 & 0.9 & 50.0 & 31.9 \\ NIR & 9.7 & \textbf{0.1} & 39.6 & 16.4 \\ \textbf{SWIR} & \textbf{4.1} & 1.8 & \textbf{9.2} & \textbf{5.0} \\ \midrule \textbf{RGB-SWIR} & 0.3 & 2.0 & \textbf{0.0} & \textbf{0.7} \\ RGB-D-T-SWIR &\textbf{0.0} & 2.5 & \textbf{0.0} & 0.8 \\ RGB-D-T-NIR-SWIR &\textbf{0.0} & 0.7 & 2.3 & 1.0 \\ RGB-NIR & 0.7 & \textbf{0.0} & 7.4 & 2.7 \\ NIR-SWIR & 2.7 & 0.4 & 8.4 & 3.8 \\ RGB-D & 6.4 & 1.7 & 12.4 & 6.8 \\ RGB-D-T-NIR & 3.4 & 0.1 & 17.4 & 6.9 \\ RGB-T & 6.9 & 2.5 & 17.6 & 9.0 \\ RGB-D-T & 6.2 & 3.4 & 20.1 & 9.9 \\ D-T & 17.0 & 4.1 & 49.9 & 23.6 \\ \bottomrule \end{tabular} \label{tab:ablation_chanels} \end{table*} While the \textit{HQ-WMCA} dataset contains a lot of channels, many of them could be redundant for the PAD task. Also, considering the cost of the hardware, it is essential to identify the important channels while respecting the cost factor. Given the set of channels, we perform an ablation study in terms of the channels used, to identify the minimal subset of channels from the original set that achieves acceptable PAD performance. The channels present in our study are listed below: \begin{itemize} \item \textbf{RGB}: Color camera at full HD resolution \item \textbf{Thermal}: VGA resolution \item \textbf{Depth} from Intel Realsense/ \textbf{Stereo} from the \textit{NIR} left/right pair. For this study, the depth values used are those computed from stereo.
\item \textbf{NIR}: 4 NIR wavelengths collected with the \textit{NIR} camera \item \textbf{SWIR}: 7 wavelengths collected with the \textit{SWIR} camera \end{itemize} It is to be noted that the multi-spectral modalities \textit{NIR} and \textit{SWIR} might contain redundant channels; however, this is not considered at this stage, since the cost of adding or removing a wavelength in \textit{NIR} or \textit{SWIR} is minimal, i.e., only the cost of the LED illuminators changes. We treat the channels as ``blocks'' which come from a specific device, since the objective is to select an optimal subset of devices. \subsubsection{Additional baselines} In addition to the MC-PixBiS network, we add two further PAD baselines in this section. The objective is to identify how the performance varies with different channels when different models are used. We add one feature-based baseline and one recent CNN-based PAD method for this comparison. These baselines are listed below: \begin{itemize} \item \textit{MC-RDWT-Haralick-SVM}: This is the handcrafted-feature baseline we consider, a multi-channel extension of the RDWT-Haralick-SVM method of \cite{agarwal2017face}. In our setting, each channel of the multi-channel image is first divided into a $4 \times 4$ grid. Haralick \cite{haralick1979statistical} features from RDWT decompositions are extracted from each of these patches. The final feature vector is obtained by concatenating the features from all grids across all channels. An SVM is then trained on these features for the PAD task. \item \textit{MC-CDCN}: This is the multi-channel variant of the method proposed in \cite{yu2020multi}. This approach won first place in the Multi-Modal Track of the ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020 \cite{liu2021cross}. It is essentially an extension of their previous work on central difference convolutional (CDC) networks to multiple channels.
The core idea in CDC is the aggregation of center-oriented gradients in local regions; the CDC layers are further combined with vanilla convolutions. In the multi-modal setting, the authors extend the number of branches with CDC networks, following a late-fusion strategy with non-shared component branches. We adapted the model to accept a varying number of input channels by introducing additional input branches. However, due to the non-shared branches, model complexity and computational requirements increase greatly as the number of channels increases. \end{itemize} We have performed experiments with individual channels as well as with combinations of channels in the \textit{Grandtest-c} protocol. We could not perform MC-CDCN experiments for some channel combinations due to the large increase in parameter count and computational complexity with the number of input channels. The change in the number of parameters and computations is shown in Table \ref{tab:parameters_complexity}. This also shows a practical advantage of fusing channels at the input level, as the increase in computation and parameters is minimal. The experimental results with the three methods are shown in Table \ref{tab:ablation_chanels_baselines}; in all three baselines, the SWIR channel achieves the best individual performance, followed by the RGB channel. This trend is common to both the CNN-based and feature-based baselines. Among the channel combinations, models involving the RGB and SWIR channels obtain the best results for both the MC-PixBiS and Haralick-SVM baselines. In general, the CNN-based methods outperform the feature-based baseline in all combinations. We use the MC-PixBiS model for the subsequent experiments, as it obtains good results with minimal parameter and computational complexity. \begin{table}[ht] \caption{Comparison of number of parameters and compute for the compared CNN models.
For MC-CDCN the number of parameters and compute increases greatly as more channels are added.} \begin{tabular}{c|r|r|r|r} \toprule \multicolumn{1}{c|}{\multirow{2}{*}{Channels}} & \multicolumn{2}{c|}{MC-PixBiS} & \multicolumn{2}{c}{MC-CDCN} \\ \cmidrule(l){2-5} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Compute} & \multicolumn{1}{c|}{Parameters} & \multicolumn{1}{c|}{Compute} & \multicolumn{1}{c}{Parameters} \\ \midrule 1 & 4.52 GMac & 3.19 M & 47.48 GMac & 2.32 M \\ 2 & 4.58 GMac & 3.19 M & 94.96 GMac & 4.64 M \\ 3 & 4.64 GMac & 3.20 M & 142.44 GMac & 6.95 M \\ 4 & 4.70 GMac & 3.20 M & 189.91 GMac & 9.27 M \\ 5 & 4.76 GMac & 3.21 M & 237.39 GMac & 11.59 M \\ 6 & 4.81 GMac & 3.21 M & 284.87 GMac & 13.90 M \\ 7 & 4.87 GMac & 3.22 M & 332.35 GMac & 16.22 M \\ 8 & 4.93 GMac & 3.22 M & 379.83 GMac & 18.54 M \\ 9 & 4.99 GMac & 3.23 M & 427.30 GMac & 20.85 M \\ 10 & 5.05 GMac & 3.23 M & 474.78 GMac & 23.17 M \\ 11 & 5.11 GMac & 3.24 M & 522.26 GMac & 25.49 M \\ 12 & 5.17 GMac & 3.24 M & 569.74 GMac & 27.80 M \\ 13 & 5.23 GMac & 3.25 M & 617.21 GMac & 30.12 M \\ \bottomrule \end{tabular} \label{tab:parameters_complexity} \end{table} \begin{table*}[ht!] \caption{Performance of three different models in the \textit{Grandtest-c} protocol in \textit{HQ-WMCA}. 
ACER in the \textit{test} set corresponding to BPCER 1\% threshold in \textit{validation} set is shown in the table.} \centering \begin{tabular}{lrrr} \toprule Channels & \multicolumn{1}{c} {MC-PixBiS} & \multicolumn{1}{c} {Haralick-SVM} & \multicolumn{1}{c} {MC-CDCN} \\ \midrule RGB & 4.6 & 16.1 & 12.7 \\ D & 26.7 & 35.2 & 36.9 \\ T & 44.9 & 50.0 & 19.1 \\ NIR & 9.7 & 24.3 & 21.3 \\ \textbf{SWIR} & 4.1 & 5.8 & 1.6 \\ \midrule \textbf{RGB-SWIR} & 0.3 & 2.5 & - \\ RGB-D-T-SWIR & 0.0 & 1.8 & - \\ RGB-D-T-NIR-SWIR & 0.0 & 1.5 & - \\ RGB-NIR & 0.7 & 11.6 & 6.0 \\ NIR-SWIR & 2.7 & 4.3 & - \\ RGB-D & 6.4 & 18.8 & 11.2 \\ RGB-D-T-NIR & 3.4 & 11.0 & 5.8 \\ RGB-T & 6.9 & 10.4 & 11.0 \\ RGB-D-T & 6.2 & 13.4 & 9.3 \\ D-T & 17.0 & 26.2 & 16.4 \\ \bottomrule \end{tabular} \label{tab:ablation_chanels_baselines} \end{table*} For each of the channel combinations, we carried out model training and evaluation on all the protocols. The results in each of the protocols are tabulated in Table \ref{tab:ablation_chanels}. \subsubsection{Results in \textit{Grandtest-c} protocol } The \textit{Grandtest-c} protocol emulates the performance of a system under a wide variety of attacks, in a known attack scenario. Out of the individual channels (Table \ref{tab:ablation_chanels}), the \textit{SWIR} channel performs the best, closely followed by the \textit{RGB} channel. Both channels detect the attacks well, even in the presence of a wide variety of attacks. Not surprisingly, the combination \textit{RGB-SWIR} achieves very good results with an ACER of 0.3 \%, which is better than their independent error rates, indicating the complementary nature of the information gained by the combination. The combinations \textit{RGB-D-T-NIR-SWIR} and \textit{RGB-D-T-SWIR} achieve perfect separation; however, these are supersets of the \textit{RGB-SWIR} combination, and the addition of the other channels does not contribute much to the overall performance.
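All ACER values reported in these tables follow the threshold-based evaluation described at the beginning of this section: the operating threshold is fixed on the \textit{dev} set at a BPCER of 1\%, and APCER/BPCER are then measured on the \textit{test} set. A minimal sketch in Python (function names are illustrative and not taken from the released evaluation code; higher scores are assumed to mean more bonafide):

```python
import numpy as np

def bpcer_threshold(bonafide_dev_scores, target_bpcer=0.01):
    """Pick the score threshold at which `target_bpcer` of the bonafide
    dev samples would be rejected (higher score = more bonafide)."""
    return np.quantile(np.asarray(bonafide_dev_scores, dtype=float), target_bpcer)

def acer(bonafide_test, attack_test, threshold):
    """APCER: fraction of attacks accepted; BPCER: fraction of bonafide
    rejected; ACER: their mean, reported in percent."""
    apcer = np.mean(np.asarray(attack_test, dtype=float) >= threshold)
    bpcer = np.mean(np.asarray(bonafide_test, dtype=float) < threshold)
    return 100.0 * (apcer + bpcer) / 2.0
```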
Another combination that fares well is \textit{RGB-NIR}, which achieves a notable ACER of 0.7\%. The t-SNE plots for different combinations of channels are shown in Fig. \ref{fig:tSNE}. From the plots, it can be seen that combining different channels improves the separation between bonafide and attack samples. \begin{figure*}[ht!] \centering \includegraphics[width=0.99\textwidth]{figures/tSNEs.png} \caption{t-SNE plots corresponding to different combinations of channels in the \textit{Grandtest-c} protocol. The first row shows the individual channels; the second and third rows show different combinations of channels (best viewed in color). } \label{fig:tSNE} \end{figure*} \subsubsection{Results in \textit{Impersonation-c} protocol } The \textit{Impersonation-c} protocol mostly consists of attacks aimed at impersonating another subject. From the experimental results, it can be seen that this is by far the easiest of the three protocols considered. The \textit{NIR} channel performs remarkably well, followed by the \textit{RGB} channel. Consequently, the \textit{RGB-NIR} combination achieves perfect performance in this protocol, and most other combinations of channels also perform reasonably well. \subsubsection{Results in \textit{Obfuscation-c} protocol } The \textit{Obfuscation-c} protocol emulates the detection of obfuscation attacks such as makeup, glasses, and tattoos. This is by far the most difficult of the three protocols considered. In most cases, the attacks are partial and appear only in a part of the face, which makes them harder to detect in general. In this protocol, the \textit{SWIR} channel performs better than the other channels, followed by the \textit{RGB} channel. Most of the other channels perform poorly.
The success of the SWIR channel could be due to the specific spectral properties of skin as compared to other PAIs. There is a sharp jump in performance when channels are used together. In fact, the \textit{RGB-SWIR} combination achieves an ACER of 0.0 \% in this challenging protocol (Table \ref{tab:ablation_chanels}), indicating the complementary nature of these channels. Another notable combination is \textit{RGB-NIR}, with an ACER of 7.4 \%. \subsubsection{Summary of channel-wise study} Across the three protocols with their different difficulties and challenges, the \textit{SWIR} channel seems to perform the best out of the individual channels, interestingly followed closely by \textit{RGB}. The usefulness of these channels is visible in the combinations as well, with \textit{RGB-SWIR} achieving the best average results across the three protocols. This also indicates the complementary nature of the discriminative information present in these channels for PAD. Another notable combination is \textit{RGB-NIR}, which achieves an average error rate of 2.7\%. We have experimented with the depth coming from the Intel RealSense (D-Intel) and the computed stereo depth (D-Stereo), and have observed that the depth from stereo performs slightly better. The stereo pair also obviates the need for an additional depth camera, and hence we use the depth computed from stereo for all subsequent experiments.
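In all of these channel-wise experiments, the selected channels are combined by stacking them along the channel dimension to form the multi-channel network input. A minimal sketch of this early-fusion stacking, assuming preprocessed and spatially aligned face crops (names are illustrative, not the actual preprocessing code):

```python
import numpy as np

def stack_channels(channel_crops):
    """Early fusion: concatenate aligned per-channel face crops
    (each H x W or H x W x C) into one H x W x C_total input array."""
    planes = []
    for crop in channel_crops:
        arr = np.asarray(crop, dtype=np.float32)
        if arr.ndim == 2:          # single-plane channel, e.g. depth or thermal
            arr = arr[..., None]
        planes.append(arr)
    return np.concatenate(planes, axis=-1)
```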
\subsection{Score Fusion Experiments} \begin{table}[!b] \caption{ACER in the \textit{test} set for various score fusion and feature fusion methods, compared to early fusion, in the \textit{Grandtest-c} protocol} \centering \resizebox{0.45\textwidth}{!}{ \begin{tabular}{crrrrrr} \toprule \multirow{2}{*}{Channels} & \multicolumn{4}{c}{Score fusion} & \multicolumn{1}{c}{\multirow{2}{*}{Feature fusion}} & \multicolumn{1}{c}{\multirow{2}{*}{Early fusion}} \\ \cmidrule{2-5} & \multicolumn{1}{c} {GMM} & \multicolumn{1}{c} {LLR} & \multicolumn{1}{c} {MLP} & \multicolumn{1}{c} {MEAN} & \multicolumn{1}{c}{} \\ \midrule (RGB,SWIR) & 4.4 & 0.8 & 0.7 & 0.7 &4.1 & \textbf{0.3} \\ (RGB,D,T,SWIR) & 4.3 & 3.3 & 4.0 & 3.6 &6.4 & \textbf{0.0} \\ (RGB,D,T,NIR,SWIR) & 4.0 & 4.1 & 5.1 & 4.1 &6.9 & \textbf{0.0} \\ (RGB,NIR) & 3.9 & 4.9 & 9.0 & 4.9 &4.1 & \textbf{0.7} \\ (NIR,SWIR) & 7.7 & 2.7 & 4.7 & \textbf{2.1} &6.1 & 2.7 \\ (RGB,D) & 4.6 & \textbf{3.4} & 4.2 & 4.2 &4.9 & 6.4 \\ (RGB,D,T,NIR) & 7.1 & 6.7 & 6.6 & 6.6 &7.1 & \textbf{3.4} \\ (RGB,T) & \textbf{4.3} & 7.0 & 6.2 & 7.0 &6.7 & 6.9 \\ (RGB,D,T) & 7.1 & 6.8 & 9.6 & \textbf{6.2} &6.7 & \textbf{6.2} \\ (D,T) & 43.2 & 18.4 & 18.4 & 18.4 &41.0 & \textbf{17.0} \\ \bottomrule \end{tabular} } \label{tab:fusion} \end{table} \begin{figure*}[!t] \centering \includegraphics[width=0.75\textwidth]{figures/ImageScale.png} \caption{Preprocessed images with various scaling factors in RGB, Thermal, Depth, NIR and SWIR; up to a scaling of 0.3, the image quality is not affected, since the original face size is larger than the resized size of $224 \times 224$. The approximate resolution of the face region after scaling is also shown. } \label{fig:images_caling} \end{figure*} So far in the channel selection experiments, the channels have been combined by stacking them at the input stage. This could have disadvantages in case a channel is faulty or missing at test time.
Also, the stacked model could have more parameters and could be more prone to over-fitting when trained with a small dataset. The objective here is to evaluate the performance of various models with score-level fusion and feature-level fusion, and to contrast it with the performance of early (input-level) fusion. Instead of training CNNs with stacked channels, we performed fusion experiments on CNN models trained on individual channels separately. For score-level fusion, the scores from the individual systems are used as features to train a fusion model. For feature-level fusion, the features extracted from each individual model are concatenated and combined with an SVM to obtain the final scores. This allows redundancy if a channel is missing at the deployment stage. We used four different fusion models trained on top of the scalar scores returned by the component CNNs. The score fusion models used are: \begin{itemize} \item \textbf{GMM}: Gaussian Mixture Model \item \textbf{Mean}: Simple mean of different model outputs \item \textbf{LLR}: Linear Logistic Regression \item \textbf{MLP}: Multi-layer Perceptron \end{itemize} The results of the score fusion and feature fusion models on the \textit{Grandtest-c} protocol are summarized in Table \ref{tab:fusion}. The channels column lists the channels corresponding to the individual CNNs used in the model, and the performance with the different fusion models is tabulated. Comparing score fusion, feature fusion, and early fusion for the same combinations of channels (as in Table \ref{tab:ablation_chanels}), it can be seen that in most cases early fusion performs better, except for the \textit{NIR-SWIR}, \textit{RGB-D} and \textit{RGB-T} combinations, where score fusion performs better. This shows that score-level and feature-level fusion fail to capture inter-channel dependencies in many cases, which can be captured with an early fusion approach.
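The two simplest fusion rules above can be sketched as follows; the GMM and MLP variants follow the same interface, and the LLR weights are assumed to have been fit on the \textit{dev} set (names are illustrative, not the actual fusion code):

```python
import numpy as np

def mean_fusion(score_lists):
    """MEAN rule: average the scalar scores of the per-channel CNNs."""
    return np.mean(np.stack([np.asarray(s, dtype=float) for s in score_lists]),
                   axis=0)

def llr_fusion(weights, bias, score_lists):
    """LLR rule: linear logistic regression over the per-channel scores;
    `weights` and `bias` are assumed to be fit on the dev set."""
    s = np.stack([np.asarray(x, dtype=float) for x in score_lists], axis=-1)
    z = s @ np.asarray(weights, dtype=float) + bias
    return 1.0 / (1.0 + np.exp(-z))
```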
Nevertheless, the fusion methods are still interesting from a deployment point of view. \subsection{Evaluation on Varying Image Resolutions} \begin{table*}[h] \caption{ACER in the \textit{test} set for various scaling values in the \textit{Grandtest-c} protocol in \textit{HQ-WMCA}, with reprojection and unit spectral normalization.} \centering \begin{tabular}{lRRRRRRRRR} \toprule Channels & \multicolumn{1}{c} {0.0125} & \multicolumn{1}{c} {0.025} & \multicolumn{1}{c} {0.05} & \multicolumn{1}{c} {0.1} & \multicolumn{1}{c} {0.2} & \multicolumn{1}{c} {0.25} & \multicolumn{1}{c} {0.5} & \multicolumn{1}{c} {1.0} \\ \midrule RGB & 16.9 & 10.4 & 5.5 & 5.3 & \textbf{3.2} & 9.1 & \textbf{3.2} & 4.6 \\ D & 38.8 & 36.4 & 26.0 & 24.8 & 23.6 & 25.4 & \textbf{22.4} & 26.7 \\ T & 32.9 & 24.8 & \textbf{23.2} & 42.7 & 34.6 & 33.6 & 31.5 & 44.9 \\ NIR & 24.5 & 21.1 & 30.2 & 23.7 & 17.6 & 16.6 & 13.2 & \textbf{9.7} \\ SWIR & 4.1 & 1.8 & 1.7 & 1.6 & \textbf{1.5} & 3.0 & 3.2 & 4.1 \\ \midrule RGB-T & 16.7 & 11.7 & 7.2 & 11.6 & 8.9 & \textbf{4.2} & \textbf{4.2} & 6.9 \\ RGB-D-T & 20.4 & 8.1 & 7.5 & \textbf{5.1} & 8.2 & 9.1 & 9.6 & 6.2 \\ D-T & 25.1 & 28.9 & 24.6 & 18.6 & 26.3 & 33.2 & 45.8 & \textbf{17.0} \\ RGB-D & 26.2 & 8.1 & 5.2 & 6.7 & \textbf{3.4} & 10.2 & 4.8 & 6.4 \\ RGB-D-T-NIR & 10.4 & 11.3 & 6.7 & 5.1 & 5.3 & 5.2 & 7.6 & \textbf{3.4} \\ RGB-D-T-NIR-SWIR & 1.2 & 0.3 & \textbf{0.0} & 0.3 & \textbf{0.0} & \textbf{0.0} & \textbf{0.0} & \textbf{0.0} \\ RGB-D-T-SWIR & 1.1 & 0.1 & \textbf{0.0} & 0.1 & \textbf{0.0} & \textbf{0.0} & \textbf{0.0} & \textbf{0.0} \\ NIR-SWIR & 5.9 & 3.2 & 1.9 & 2.2 & \textbf{1.2} & 3.7 & 3.1 & 2.7 \\ RGB-NIR & 11.2 & 5.0 & 2.5 & 0.6 & 2.7 & 1.0 & \textbf{0.5} & 0.7 \\ RGB-SWIR & 1.1 & 0.6 & \textbf{0.0} & 0.5 & 0.1 & \textbf{0.0} & \textbf{0.0} & 0.3 \\ \bottomrule \end{tabular} \label{tab:res_grandtest} \end{table*} \begin{table*}[h] \caption{ACER in the \textit{test} set for various scaling values in the \textit{Impersonation-c} protocol in \textit{HQ-WMCA}, with
reprojection and unit spectral normalization.} \centering \begin{tabular}{lRRRRRRRRR} \toprule Channels & \multicolumn{1}{c} {0.0125} & \multicolumn{1}{c} {0.025} & \multicolumn{1}{c} {0.05} & \multicolumn{1}{c} {0.1} & \multicolumn{1}{c} {0.2} & \multicolumn{1}{c} {0.25} & \multicolumn{1}{c} {0.5} & \multicolumn{1}{c} {1.0} \\ \midrule RGB & 6.7 & 1.8 & 0.8 & 1.4 & \textbf{0.0} & 1.1 & 0.4 & 0.6 \\ D & 23.1 & 9.1 & 5.9 & 5.3 & 4.5 & 4.1 & 4.1 & \textbf{3.8} \\ T & 3.8 & 1.7 & 2.7 & 2.1 & 1.3 & 1.2 & \textbf{0.9} & \textbf{0.9} \\ NIR & 22.7 & 8.2 & 0.5 & 0.4 & 0.3 & 0.3 & \textbf{0.0} & 0.1 \\ SWIR & 0.1 & \textbf{0.0} & 0.6 & 0.2 & 1.1 & 0.2 & 0.4 & 1.8 \\ \midrule RGB-T & 2.1 & 1.7 & 4.3 & 4.3 & \textbf{0.9} & 2.5 & 3.1 & 2.5 \\ D-T & 2.9 & \textbf{1.8} & 3.6 & 6.1 & 2.8 & 1.9 & 2.2 & 4.1 \\ RGB-D & 5.7 & 2.8 & 2.0 & 2.0 & 1.9 & \textbf{0.5} & 1.3 & 1.7 \\ RGB-D-T-NIR & 1.1 & 0.5 & 0.1 & \textbf{0.0} & 0.5 & 0.1 & \textbf{0.0} & 0.1 \\ RGB-D-T & \textbf{1.0} & 4.2 & 3.8 & 4.3 & 5.6 & 4.3 & 2.6 & 3.4 \\ RGB-NIR & 0.8 & 0.5 & \textbf{0.0} & 0.2 & 0.1 & \textbf{0.0} & 0.4 & \textbf{0.0} \\ RGB-D-T-NIR-SWIR & 0.3 & 0.4 & \textbf{0.1} & 2.2 & 0.5 & 0.3 & 0.6 & 0.7 \\ NIR-SWIR & 0.1 & 0.1 & 0.4 & 0.2 & \textbf{0.0} & 1.5 & 0.9 & 0.4 \\ RGB-SWIR & \textbf{0.1} & 0.7 & 1.3 & 2.4 & 1.4 & 2.0 & 0.7 & 2.0 \\ RGB-D-T-SWIR & 0.2 & 0.4 & 0.1 & 0.4 & 0.9 & \textbf{0.0} & 1.9 & 2.5 \\ \bottomrule \end{tabular} \label{tab:res_impersonation} \end{table*} \begin{table*}[h] \caption{ACER in the \textit{test} set for various scaling values in the \textit{Obfuscation-c} protocol in \textit{HQ-WMCA}, with reprojection and unit spectral normalization.} \centering \begin{tabular}{lRRRRRRRRR} \toprule Channels & \multicolumn{1}{c} {0.0125} & \multicolumn{1}{c} {0.025} & \multicolumn{1}{c} {0.05} & \multicolumn{1}{c} {0.1} & \multicolumn{1}{c} {0.2} & \multicolumn{1}{c} {0.25} & \multicolumn{1}{c} {0.5} & \multicolumn{1}{c} {1.0} \\ \midrule RGB & 28.2 & 21.3 & 16.6 & 20.3 & 16.7 &
\textbf{12.2} & 12.5 & 14.8 \\ D & 47.3 & 48.9 & 46.0 & 48.2 & \textbf{45.1} & 50.0 & 47.9 & 48.7 \\ T & 48.6 & 49.7 & 49.9 & 49.6 & \textbf{48.2} & 49.8 & 49.9 & 50.0 \\ NIR & 45.5 & 41.1 & 41.5 & 39.6 & \textbf{34.6} & 42.3 & 40.6 & 39.6 \\ SWIR & 11.6 & 5.6 & 6.1 & 5.6 & \textbf{4.7} & 6.5 & 7.1 & 9.2 \\ \midrule RGB-D & 30.3 & 20.5 & 21.4 & 20.6 & 16.7 & 20.0 & 23.6 & \textbf{12.4} \\ RGB-T & 35.7 & 26.7 & 19.4 & 20.0 & \textbf{16.4} & 22.6 & 23.2 & 17.6 \\ D-T & \textbf{48.1} & 49.5 & 50.0 & 50.0 & 49.9 & 48.8 & 50.0 & 49.9 \\ RGB-D-T & 35.2 & 31.2 & 22.8 & 18.2 & \textbf{18.0} & 25.8 & 25.6 & 20.1 \\ RGB-D-T-SWIR & 9.0 & 1.7 & 4.8 & 0.1 & 2.1 & \textbf{0.0} & \textbf{0.0} & \textbf{0.0} \\ RGB-D-T-NIR & 28.9 & 26.4 & 18.7 & \textbf{13.9} & 19.9 & 17.8 & 19.9 & 17.4 \\ RGB-SWIR & 3.7 & 0.3 & \textbf{0.0} & \textbf{0.0} & \textbf{0.0} & \textbf{0.0} & 0.1 & \textbf{0.0} \\ RGB-NIR & 21.5 & 18.5 & 16.3 & 12.1 & 10.1 & 13.7 & 12.2 & \textbf{7.4} \\ NIR-SWIR & 7.2 & 8.8 & \textbf{6.7} & 7.9 & 10.1 & 12.5 & 7.9 & 8.4 \\ RGB-D-T-NIR-SWIR & 3.0 & 8.4 & 0.5 & 0.5 & 0.2 & \textbf{0.0} & \textbf{0.0} & 2.3 \\ \bottomrule \end{tabular} \label{tab:res_obfuscation} \end{table*} This subsection discusses the variation of performance with respect to the image resolution of the different imaging modalities. To emulate a lower-resolution sensor, we systematically down-sampled the source image to a lower resolution, followed by the scaling required by the framework. The average face size in the \textit{RGB} channel of the \textit{HQ-WMCA} dataset is $819.5 (\pm 94.3) \times 614.1 (\pm 63.9)$ pixels. The MC-PixBiS model requires an input of size $224 \times 224$, and hence the resolution of the original image is much higher than what the framework requires. This also means that we can safely down-sample the images to a scaling factor of about $0.3$ without causing much degradation in the preprocessed files.
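The down-sampling used to emulate a lower-resolution sensor, followed by the face-crop resizing required by the framework, can be sketched as follows; nearest-neighbour interpolation stands in for the actual resampling, and the function and argument names are illustrative:

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize (a stand-in for the interpolation
    used in the actual preprocessing)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def emulate_low_res(frame, face_box, scale, out_size=224):
    """Two-step scaling: (1) down-sample the full frame by `scale` to
    emulate a low-resolution sensor, then (2) crop the (scaled) face box
    and resize it to the network input size."""
    h, w = frame.shape[:2]
    small = resize_nn(frame, max(1, int(h * scale)), max(1, int(w * scale)))
    x0, y0, x1, y1 = [max(0, int(v * scale)) for v in face_box]
    crop = small[y0:max(y0 + 1, y1), x0:max(x0 + 1, x1)]
    return resize_nn(crop, out_size, out_size)
```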
For different scaling factors, the source image (not just the face part) is first down-sampled by the scaling factor to emulate a low-resolution sensor. The rest of the processing is performed on this scaled image, i.e., face detection and resizing of the cropped face region to a $224 \times 224$ image are carried out on the scaled raw image. In summary, the raw image undergoes two scaling steps: first to emulate a low-resolution sensor, and then the resizing of the cropped face region. This introduces some minor artifacts due to the interpolation stages (even with scale factors greater than 0.3). The image quality after preprocessing with different scaling factors is shown in Fig. \ref{fig:images_caling}. All the channels are first aligned spatially to the \textit{RGB} channel, and the scaling is applied uniformly to all the aligned channels. The \textit{SWIR} sensor is by far the most expensive acquisition device in the sensor suite. Typically available sensor resolutions for \textit{SWIR} are $640 \times 512$, $320 \times 256$, $128 \times 128$ and $64 \times 64$ (based on market availability), approximately corresponding to the scale factors of 1, 0.5, 0.2, and 0.1, respectively, in this study. We have performed the experiments in all the protocols with different scaling factors, and the results are tabulated in Tables \ref{tab:res_grandtest}, \ref{tab:res_impersonation} and \ref{tab:res_obfuscation}. From the results, it can be seen that performance degradation occurs only at very low resolutions; the performance starts degrading greatly only at very low scaling factors such as $0.0125$. Surprisingly, in some cases, the performance even improves at lower resolutions. This can be due to the removal of high-frequency signals, which may not be relevant in the specific scenarios.
In some cases, the spectral information is more important than the spatial information; in such scenarios, the performance improves at low resolution, as some amount of over-fitting that could have occurred at the original resolution is avoided. One way to use this observation is to employ blurring as a data augmentation strategy when training future models with these channels. Another interesting point to note in the tables is that, in all the protocols, the \textit{SWIR} channel performance is largely unaffected by the scaling, degrading only at very low resolutions. This is remarkable, since the \textit{SWIR} sensor is the most costly sensor in the hardware suite; if the \textit{SWIR} channel could operate at a lower resolution, this would significantly reduce the cost of the entire sensor suite. One reason for this robustness is the spectral nature of the SWIR channel, as compared to the spatial nature of the other channels: the spectral information could be sufficient for identifying skin pixels. In such a scenario, a very low resolution such as $64 \times 64$ could be enough to achieve the desired performance, which reduces the cost of the sensor suite by a significant amount. Moreover, the combination \textit{RGB-SWIR} performs best overall. \subsection{Unseen Attack Experiments} So far, all the experiments considered known attack scenarios. In this set of experiments, we evaluate the robustness of the PAD systems in unseen attack scenarios. Specifically, different sub-protocols were created which systematically exclude one specific attack from the training and validation sets. The \textit{test} set consists of bonafide samples and the attack which was left out of the training set. This emulates encountering an unseen attack in a real-world environment. The performance of various combinations of channels is shown in Table \ref{tab:unseen}.
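The leave-one-out construction of these sub-protocols can be sketched as follows; the split and attack-type labels are illustrative, and the actual protocol definitions ship with the dataset:

```python
def make_unseen_protocol(samples, left_out_attack):
    """Leave-one-out protocol: the left-out attack type is removed from
    the train/dev sets and appears (together with bonafide samples) only
    in the test set.  Each sample is a (split, label) pair, where label
    is either 'bonafide' or an attack type such as 'tattoo'."""
    train_dev = [s for s in samples
                 if s[0] in ("train", "dev") and s[1] != left_out_attack]
    test = [s for s in samples
            if s[0] == "test" and s[1] in ("bonafide", left_out_attack)]
    return train_dev, test
```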
\begin{table*}[h] \caption{ACER in the \textit{test} set for various unseen attack protocols, with reprojection-based alignment and unit normalization, at the original resolution} \centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{lRRRRRRRRRR} \toprule \multicolumn{1}{l} {Channels} &\multicolumn{1}{l} {FlexMask} &\multicolumn{1}{l} {Glasses} &\multicolumn{1}{l} {Makeup} &\multicolumn{1}{l} {Mannequin} &\multicolumn{1}{l} {Papermask} &\multicolumn{1}{l} {Rigidmask} &\multicolumn{1}{l} {Tattoo} &\multicolumn{1}{l} {Replay} &\multicolumn{1}{l} {Mean} &\multicolumn{1}{l} {Std} \\ \midrule RGB & 11.0 & 49.3 & 32.3 & 0.0 & 0.5 & 28.2 & 46.4 & 2.3 &21.2 & 20.4 \\ D & 41.6 & 49.1 & 50.1 &34.8 & 18.1 & 49.7 & 41.3 & 8.6 &36.6 & 15.5 \\ T & 47.3 & 50.3 & 49.9 &28.2 & 50.0 & 3.1 & 50.0 & 0.4 &34.9 & 21.7 \\ NIR & 26.6 & 48.3 & 46.5 & 0.4 &\textbf{0.0} & 15.7 & 41.0 & 3.0 &22.6 & 20.7 \\ SWIR & \textbf{0.0} & 44.6 & 38.6 &\textbf{0.0}&\textbf{0.0} & 3.0 & 47.3 & \textbf{0.0} &\textbf{16.6} & 22.3 \\ \midrule RGB-D-T-NIR & 4.2 & 50.0 & 41.7 & 0.0 & 0.0 & 33.3 & 38.0 & 0.0 &20.9 & 21.7 \\ RGB-D-T & 28.3 & 50.0 & 33.8 & 3.7 & 0.0 & 40.2 & 41.6 & 6.5 &25.5 & 19.4 \\ RGB-D-T-SWIR & 0.4 & 6.1 & 26.5 & 0.0 & 0.0 &\textbf{0.3} & 41.8 & 0.0 & 9.3 & 15.9 \\ D-T & 26.5 & 50.0 & 50.5 &46.2 & 44.2 & 33.7 & 50.0 & 41.4 &42.8 & 8.6 \\ RGB-D-T-NIR-SWIR & 0.6 & 1.4 & 28.4 & 0.0 & 0.0 & 0.4 & 44.9 & 0.0 & 9.4 & 17.3 \\ RGB-T & 25.5 & 50.0 & 34.8 &15.8 & 0.0 & 1.7 & 32.9 & 0.2 &20.1 & 18.7 \\ NIR-SWIR & 0.0 & 47.2 & 31.6 & 0.0 & 0.0 & 1.7 & 50.0 & 0.0 &16.3 & 22.6 \\ RGB-D & 12.7 & 47.4 & 29.1 & 0.9 & 0.0 & 42.1 & 20.8 & 4.1 &19.6 & 18.4 \\ RGB-NIR & 13.5 & 41.4 & 46.2 & 0.6 & 0.1 & 29.6 &\textbf{2.4} & 0.1 &16.7 & 19.5 \\ \textbf{RGB-SWIR} & 0.3 & \textbf{0.5} & \textbf{23.1} & 0.8 & 0.6 & 1.2 & 27.4 & 1.3 & \textbf{6.9} & 11.3 \\ \bottomrule \end{tabular} } \label{tab:unseen} \end{table*} From Table \ref{tab:unseen}, it can be seen that out of the individual channels, the \textit{SWIR} channel
achieves the best individual average ACER. Among the channel combinations, \textit{RGB-SWIR} achieves the best performance, with an average ACER of 6.9\%. Among the different attacks, makeups, tattoos, and glasses appear to be the most difficult to detect when they are not seen in the training set. The \textit{RGB-NIR} model works best in detecting tattoos. Notably, the \textit{RGB-SWIR} combination achieves reasonable performance in both seen and unseen attack protocols. \begin{figure}[ht!] \centering \includegraphics[width=0.48\textwidth]{figures/Cost.png} \caption{Cost of the hardware vs. average ACER in the unseen attack protocols; cost and performance were calculated based on the original sensor resolutions as available with the dataset.} \label{fig:cost_vs_perf} \end{figure} We have also added a figure (Fig. \ref{fig:cost_vs_perf}) showing the performance, in terms of average ACER in the unseen attack protocols, against the cost of the sensors used. The cost and performance were calculated based on the original sensor resolutions as available with the dataset; the cost would reduce further with lower-resolution sensors. \subsection{Performance with different Wavelengths in NIR and SWIR} The channel-wise study in section \ref{sec:channel_exp} aimed to identify the best channels as groups. However, based on those results, we investigated the importance of the different wavelengths within the SWIR and NIR channels. Specifically, since \textit{RGB-SWIR} and \textit{RGB-NIR} were performing well, we conducted additional experiments with combinations of \textit{RGB} and individual wavelengths from \textit{NIR} and \textit{SWIR}, one at a time. The results for the unseen attack protocols are tabulated in Table \ref{loo:subnir_swir}. Surprisingly, \textit{RGB\_SWIR\_1450nm} alone performs comparably to or better than the \textit{RGB-SWIR} combination.
Indeed, $1450\,nm$ is close to a water absorption band \cite{wilson2015review}, which is characteristic of human skin \cite{nicolo2012long}, and this could explain the robustness of this wavelength. This wavelength was also observed to be important in previous studies \cite{heusch2020deep}. \begin{table*}[h] \caption{ACER in the \textit{test} set for various unseen attack protocols with individual wavelengths, with reprojection-based alignment and unit normalization, at the original resolution} \centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{lRRRRRRRRRR} \toprule \multicolumn{1}{l} {Channels} &\multicolumn{1}{l} {FlexMask} &\multicolumn{1}{l} {Glasses} &\multicolumn{1}{l} {Makeup} &\multicolumn{1}{l} {Mannequin} &\multicolumn{1}{l} {Papermask} &\multicolumn{1}{l} {Rigidmask} &\multicolumn{1}{l} {Tattoo} &\multicolumn{1}{l} {Replay} &\multicolumn{1}{l} {Mean} &\multicolumn{1}{l} {Std} \\ \midrule RGB & 11.0 & 49.3 & 32.3 & 0.0 & 0.5 & 28.2 & 46.4 & 2.3 &21.2 & 20.4 \\ \midrule \textbf{RGB\_NIR\_735nm} & 10.1 & 42.0 & 39.7 & 0.0 & 0.0 & 26.6 & 4.5 & 0.0 & \textbf{15.3} & 18.0\\ RGB\_NIR\_850nm & 13.2 & 45.6 & 33.0 & 0.2 & 0.0 & 4.1 & 46.7 & 7.2 & 18.7 & 19.9\\ RGB\_NIR\_940nm & 0.4 & 49.3 & 43.2 & 0.7 & 0.0 & 28.3 & 13.6 & 4.1 & 17.4 & 20.2\\ RGB\_NIR\_1050nm & 22.3 & 50.0 & 29.2 & 0.9 & 0.0 & 31.8 & 36.1 & 0.3 & 21.3 & 19.0\\ \midrule RGB\_SWIR\_940nm & 8.1 & 48.8 & 32.6 & 0.3 & 0.6 & 21.5 & 32.4 & 7.0 & 18.9 & 17.7\\ RGB\_SWIR\_1050nm & 0.0 & 14.5 & 29.7 & 0.0 & 0.0 & 1.7 & 49.8 & 0.0 & 11.9 & 18.6\\ RGB\_SWIR\_1200nm & 24.0 & 18.0 & 24.1 & 0.3 & 1.0 & 9.8 & 1.8 & 0.0 & 9.8 & 10.6\\ RGB\_SWIR\_1300nm & 0.0 & 42.5 & 29.5 & 0.0 & 0.0 & 0.1 & 4.9 & 0.0 & 9.6 & 16.7\\ \textbf{RGB\_SWIR\_1450nm} & 0.1 & 4.2 & 25.2 & 0.3 & 0.5 & 0.0 & 24.1 & 0.0 & \textbf{6.8} & 11.1\\ RGB\_SWIR\_1550nm & 0.6 & 13.4 & 26.8 & 1.2 & 0.3 & 0.2 & 37.9 & 0.5 & 10.1 & 14.7\\ RGB\_SWIR\_1650nm & 0.3 & 14.7 & 29.5 & 0.0 & 0.4 & 0.0 & 35.2 & 0.4 & 10.01 & 14.7\\ \bottomrule \end{tabular} }
\label{loo:subnir_swir} \end{table*} \begin{table}[h] \caption{ACER in the \textit{test} set for the three main protocols with sub-channel performance} \centering \resizebox{0.49\textwidth}{!}{ \begin{tabular}{lRRRR} \toprule \multicolumn{1}{l} {Channels} &\multicolumn{1}{l} {Grandtest-c} &\multicolumn{1}{l} {Impersonation-c} &\multicolumn{1}{l} {Obfuscation-c} &\multicolumn{1}{l} {Mean} \\ \midrule RGB & 4.6 & 0.6 & 14.8 & 6.6 \\ NIR & 9.7 & 0.1 & 39.6 & 16.4 \\ SWIR & 4.1 & 1.8 & 9.2 & \textbf{5.0} \\ \midrule \textbf{RGB\_NIR\_735nm} & 0.9 & 0.0 & 4.4 & \textbf{1.7} \\ RGB\_NIR\_850nm & 4.7 & 0.2 & 10.6 & 5.1 \\ RGB\_NIR\_940nm & 3.1 & 0.0 & 16.3 & 6.4 \\ RGB\_NIR\_1050nm & 3.8 & 0.0 & 18.1 & 7.3 \\ \midrule RGB\_SWIR\_940nm & 1.2 & 0.7 & 8.4 & 3.4 \\ RGB\_SWIR\_1050nm & 0.9 & 1.9 & 5.2 & 2.6 \\ RGB\_SWIR\_1200nm & 1.2 & 1.1 & 7.5 & 3.2 \\ RGB\_SWIR\_1300nm & 0.2 & 0.9 & 6.9 & 2.6 \\ RGB\_SWIR\_1450nm & 0.2 & 1.1 & 1.0 & 0.7 \\ \textbf{RGB\_SWIR\_1550nm} & 0.3 & 0.6 & 0.0 & \textbf{0.3} \\ RGB\_SWIR\_1650nm & 0.5 & 0.5 & 0.8 & 0.6 \\ \bottomrule \end{tabular} } \label{all:subnir_swir} \end{table} We have also performed the same set of experiments with the known attack protocols, and the results are tabulated in Table \ref{all:subnir_swir}. In this set of experiments, \textit{RGB\_SWIR\_1550nm} performs the best on average, closely followed by \textit{RGB\_SWIR\_1650nm} and \textit{RGB\_SWIR\_1450nm} \footnote{$SWIR_{w}$ and $NIR_{w}$ denote the images captured at a wavelength of $w$ nanometers}. Among the \textit{NIR} channels, \textit{RGB\_NIR\_735nm} performs best, as in the unseen attack protocols. \subsection{Discussions} The findings from the different analyses and the overall observations are summarized in this section. From the channel-wise selection experiments, it was clear that combining different channels improves the performance.
Even when the performance of individual channels is poor, combinations of channels were found to improve the performance, even in the most challenging scenarios. In general, the SWIR channel was found to be very beneficial in a wide variety of scenarios, followed by the \textit{RGB} channel. Combining \textit{RGB} and \textit{SWIR} improved the performance greatly and achieved significant gains in most of the challenging scenarios; the \textit{RGB} and \textit{NIR} combination also works very well in some cases. As for the impersonation attack protocols, most of the channels seem to work well. In contrast, most of the methods fail to correctly identify obfuscation attacks, which are indeed much harder to detect. The fusion experiments show that using channels together in an early fusion strategy generally provides more accurate results than score fusion and feature fusion. In most cases, the performance degraded when the models were fused at score or feature level, as opposed to stacking the channels at the input level. The experiments with different image resolutions reveal important aspects for practical deployment scenarios. The accuracy of the models degraded only at very low resolutions, meaning that PAD systems could achieve competitive performance with a low-resolution \textit{SWIR} sensor, whose cost decreases considerably at lower resolutions. For the reliable use of face recognition, the PAD modules used should be robust against unseen attacks; hence, the evaluation of the various channels against unseen attacks is of particular importance from a security perspective. It was observed that the \textit{RGB-SWIR} model achieves remarkable performance in the unseen attack protocols. Intrigued by the success of \textit{SWIR}, and specifically of the \textit{RGB-SWIR} models, we performed further experiments to understand which wavelength is more informative.
This experiment was performed with both \textit{NIR} and \textit{SWIR} channels (Tables~\ref{all:subnir_swir} and \ref{loo:subnir_swir}). Interestingly, \textit{RGB\_SWIR\_1450nm} achieved performance comparable to \textit{RGB-SWIR} in both the unseen and known attack protocols. Not surprisingly, the \textit{SWIR} wavelength \textit{1450nm} lies close to a characteristic water absorption band, which explains the robustness of this particular wavelength. Indeed, this makes the separation of skin from other attack instruments easier, even when the sensor resolution is low. One could opt for a low-resolution \textit{SWIR} camera together with a high-resolution \textit{RGB} camera, keeping the cost low while retaining good performance. In this study, we have focused on the analysis of channels and image resolutions. This study could further be extended to other image degradations such as quantization of channels, the dynamic range of sensors, and the effect of noise. \section{Introduction}\label{sec:introduction}} \input{introduction} \section{Details of Study} \label{sec:scalability} \input{scalability} \section{The HQ-WMCA Dataset} \label{sec:database} \input{database} \section{Presentation Attack Detection Approach} \label{sec:pad} \input{pad} \section{Experiments \& Results} \label{sec:experiments} \input{experiments} \section{Conclusions} \label{sec:conclusion} \input{conclusion} \section*{Acknowledgments} Part of this research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R\&D Contract No. 2017-17020200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The authors would like to thank Zohreh Mostaani for conducting the data collection campaign and the data curation. \ifCLASSOPTIONcaptionsoff \newpage \fi \subsection{Network Architecture} \label{subsec:arch} The architecture used is the multi-channel extension of \cite{george2019deep}, as in \cite{heusch2020deep}. In our previous work, we observed that, out of the architectures considered, \textit{MC-PixBiS} obtained much better accuracy with a relatively smaller model size. Hence, the rest of the analysis in this work utilizes the \textit{MC-PixBiS} architecture proposed in our previous work \cite{heusch2020deep}. The main idea in \textit{MC-PixBiS} is to use pixel-wise supervision as auxiliary supervision for training a multi-channel PAD model. The pixel-wise supervision forces the network to learn shared representations, and it acts like a patch-wise method (see Figure~\ref{fig:pixbis}), depending on the receptive field of the convolutions and pooling operations used in the network. The method proposed in \cite{wang-eccv-2016} is used to initialize the newly formed first layer, i.e., the filters of the pretrained first layer are averaged and the resulting weights are replicated across the new input channels. This makes the network very flexible for our experiments, since we can arbitrarily change the number of input channels and the number of filters in the first convolutional layer. The number of parameters of the rest of the layers remains identical for all the experiments. Increasing the number of input channels only changes the kernel size of the first convolutional layer, and hence the number of parameters does not increase drastically with newly added channels. The details of the architecture, except for the first layer, can be found in \cite{george2019deep}. The output from the eighth layer is a map of size $14\times14$ with 384 features.
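The first-layer re-initialization mentioned above (averaging the pretrained \textit{RGB} filters and replicating them across the new input channels, following \cite{wang-eccv-2016}) can be sketched as follows. This is an illustrative numpy sketch, not the exact implementation; the filter count and kernel size are assumed values:

```python
import numpy as np

def adapt_first_layer(pretrained_w, n_new_channels):
    """Adapt pretrained conv weights of shape (out, in, kH, kW) to a new
    number of input channels: average over the original input channels,
    then replicate the mean filter across the new channels."""
    mean_filter = pretrained_w.mean(axis=1, keepdims=True)  # (out, 1, kH, kW)
    return np.repeat(mean_filter, n_new_channels, axis=1)   # (out, new, kH, kW)

# Example: adapt a 3-channel pretrained first layer to a 16-channel input.
w_rgb = np.random.randn(64, 3, 7, 7)
w_multi = adapt_first_layer(w_rgb, 16)
print(w_multi.shape)  # (64, 16, 7, 7)
```

Because only this first layer depends on the channel count, the rest of the pretrained weights can be loaded unchanged.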
A $1 \times 1$ convolution layer with sigmoid activation is added to produce the binary feature map, and a fully connected layer is added on top for binary supervision. The loss to be minimized in training is a weighted combination of the pixel-wise binary loss and the binary loss: \begin{equation} \label{eq:pixbis-loss} \mathcal{L} = 0.5 \mathcal{L}_{pix}+ 0.5 \mathcal{L}_{bin} \end{equation} where $\mathcal{L}_{pix}$ is the binary cross-entropy loss applied to each element of the $14 \times 14$ score map and $\mathcal{L}_{bin}$ is the binary cross-entropy loss on the output of the fully connected layer. At test time, the average of the score map is used as the final PAD score. \subsection{Preprocessing} \label{subsec:preprocess} The preprocessing stage assumes the data from the different channels to be registered to each other. This is ensured by the reprojection method described in Section~\ref{subsec:reprojection}. After acquiring the registered data, the face images are aligned using face detection and landmark detection. The MTCNN \cite{zhang2016joint} face detector is used on the \textit{RGB} channel to localize the face and facial landmarks. This is followed by a warping-based alignment so that the eyes map to predefined positions after the transformation. The aligned face images are resized to a fixed resolution of $224 \times 224$. This process is repeated for each channel individually. In addition to spatial alignment, a further normalization is applied to the different channels on a case-by-case basis. The objective of this normalization is to convert all channels to an 8-bit format while preserving the maximum dynamic range. The RGB images do not need this normalization stage since they are already in an 8-bit format. The depth and thermal channels are converted to the 8-bit range using a Median of Absolute Deviation (MAD) based normalization, as described in our earlier work \cite{george_mccnn_tifs2019}.
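A minimal sketch of this MAD-based 8-bit normalization is given below. The clipping window of four MADs around the median is an illustrative choice; the exact constants in \cite{george_mccnn_tifs2019} may differ:

```python
import numpy as np

def mad_normalize(img, n_mad=4.0):
    """Map a raw-range image (e.g. 16-bit depth or thermal) to 8 bits,
    using the median and the median of absolute deviations (MAD)
    to set a robust dynamic range before rescaling."""
    med = np.median(img)
    mad = np.median(np.abs(img - med))
    lo, hi = med - n_mad * mad, med + n_mad * mad
    scaled = (np.clip(img, lo, hi) - lo) / max(hi - lo, 1e-12) * 255.0
    return scaled.astype(np.uint8)

# Example: a synthetic raw-range depth map.
depth = (np.random.rand(224, 224) * 65535).astype(np.float64)
out = mad_normalize(depth)
print(out.dtype, int(out.min()), int(out.max()))
```

The robust statistics prevent a few saturated pixels from compressing the useful dynamic range, which a simple min-max rescaling would do.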
However, the \textit{NIR} and \textit{SWIR} channels require special treatment due to their spectral nature. Specifically, we perform pixel-wise unit normalization of the \textit{NIR} and \textit{SWIR} spectra. Consider the SWIR spectral cube $S$ of size $W \times H \times C$, where $W$ and $H$ are the width and height and $C$ the number of wavelengths ($C=7$). A pixel spectrum is $\vec{X}=S(i,j,1..C)$, a vector of dimension $C$ ($\vec{X} \in R^C$). Each pixel spectrum $\vec{X}$ is divided by its norm to form the normalized spectrum: \begin{equation} \vec{\hat{X}}=\frac{\vec{X}}{\left \| \vec{X} \right \|_{2}} \end{equation} This normalization is performed independently for the \textit{NIR} and \textit{SWIR} channels. We stack all the normalized channels (i.e., \textit{RGB}, depth, thermal, \textit{NIR}, \textit{SWIR}) into a 3D volume of dimension $224 \times 224 \times 16$, which is fed as the input to the network. This is equivalent to an early fusion approach, where channels are stacked at the input level. \subsection{Implementation details} We used data augmentation with random horizontal flips with a probability of 0.5. The Adam optimizer \cite{kingma2014adam} was used with a learning rate of $1\times10^{-4}$ and a weight decay parameter of $1\times10^{-5}$. We trained the network for 30 epochs on a GPU with a batch size of 32. The final model was selected based on the best validation loss in each of the experiments. The framework was implemented using PyTorch \cite{paszke2017automatic}, and all the experimental pipelines were implemented using the Bob toolbox \cite{anjos-icml-2017}. The source code to reproduce the results is available in the following link \footnote{Available upon acceptance.}. \subsection{Channel Selection studies} \label{sec:channel-selection} \begin{figure*}[ht!]
\centering \includegraphics[width=0.99\textwidth]{figures/study_framework_channels_scale.png} \caption{The framework for the channel selection and image resolution selection study. Each set of experiments is repeated in several challenging protocols to evaluate the robustness.} \label{fig:scalability_diagram} \end{figure*} The objective of this study is to identify the channels that matter most for deploying a reliable PAD system. The cost of using all the channels in a PAD system is high, and hence an understanding of the cost-versus-performance trade-off of different channel combinations could be useful in selecting sensors for practical deployment scenarios. The PAD model used in this study can be adapted to any number of input channels. In the channel selection study, we perform experiments with different combinations of channels as the input to the model. As described in Subsection~\ref{subsec:preprocess}, the input after the preprocessing stage is a data cube of size $224 \times 224 \times 16$, which consists of all channels present in the dataset, i.e., \textit{RGB}-Depth-Thermal-\textit{NIR}(4)-\textit{SWIR}(7) stacked together depth-wise. Now, if we want to perform the experiment with \textit{RGB} alone, we keep only the first 3 channels and run the complete experimental pipeline (training the PAD model, evaluation, and scoring). Similarly, if we want to perform the experiment with \textit{RGB} and \textit{SWIR} (\textit{RGB-SWIR}), we stack the first 3 channels and the last 7 channels and run the experimental pipeline. In the same manner, we can perform experiments with any combination of the channels present in the dataset. The results of this experiment indicate the effectiveness of a PAD model with a particular channel or combination of channels for different protocols, emulating practical scenarios.
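Concretely, the channel combinations reduce to simple slices of the stacked $224 \times 224 \times 16$ cube. The sketch below assumes the RGB(3)-Depth(1)-Thermal(1)-\textit{NIR}(4)-\textit{SWIR}(7) stacking order described above; it is illustrative pipeline glue, not the exact experiment code:

```python
import numpy as np

# Index ranges of each channel group in the stacked input cube
# (RGB, depth, thermal, 4 NIR wavelengths, 7 SWIR wavelengths = 16).
CHANNEL_GROUPS = {
    "rgb":     slice(0, 3),
    "depth":   slice(3, 4),
    "thermal": slice(4, 5),
    "nir":     slice(5, 9),
    "swir":    slice(9, 16),
}

def select_channels(cube, groups):
    """Build the model input for a given channel combination, e.g.
    ('rgb', 'swir') keeps the first 3 and the last 7 channels."""
    parts = [cube[..., CHANNEL_GROUPS[g]] for g in groups]
    return np.concatenate(parts, axis=-1)

cube = np.zeros((224, 224, 16))
rgb_swir = select_channels(cube, ("rgb", "swir"))
print(rgb_swir.shape)  # (224, 224, 10)
```

The first convolutional layer is then re-initialized for the resulting channel count, and the rest of the pipeline is unchanged.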
\subsection{Score Fusion Experiments} In the channel selection studies (Subsection~\ref{sec:channel-selection}), when two channels are used together, they are stacked at the input (fusion at the model level). Another way to combine different models is to perform score-level fusion \cite{chingovska2013anti}. For a fusion of \textit{RGB-SWIR}, we train two separate PAD models for \textit{RGB} and \textit{SWIR}, and score-level fusion is performed on the scores returned by each of the models. In addition to score fusion, we perform experiments with feature fusion as well. The objective of this set of experiments is to analyze the performance of score-level and feature-level fusion, as opposed to the early fusion used in the channel selection study. \subsection{Effect of changing quality (image resolution)} From a practical deployment point of view, the cost of the sensors varies significantly with image resolution. This is particularly evident for the SWIR channel, as high-resolution SWIR sensors are rather expensive. Hence, we perform a set of experiments to evaluate the change in performance with respect to image resolution. To emulate a lower-resolution sensor, we down-sampled the original image by various scaling factors. We repeated all the experiments with the scaled images, so as to emulate the PAD performance with a lower-resolution sensor. \subsection{Unseen attack robustness} One important requirement of a PAD system is robustness against unseen attacks. This is particularly important since, at the time of training a PAD system, the designer may not know all the types of novel attacks the system could encounter in a real-world scenario. To understand the robustness of the system under such adverse conditions, we emulate the performance of the system on previously unseen attacks by systematically removing some attacks from the training set.
These systems are then exposed to attacks that were not seen in the training phase. This analysis reveals important aspects of different channels in terms of unseen attack robustness.
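The leave-one-out construction of these unseen-attack protocols can be sketched as follows; the attack-type names below are placeholders, not the exact labels used in the HQ-WMCA protocols:

```python
def leave_one_out_protocols(attack_types):
    """For each attack type, build a protocol where that type is removed
    from training and appears only in the evaluation set, emulating a
    previously unseen attack."""
    protocols = {}
    for unseen in attack_types:
        protocols[f"LOO_{unseen}"] = {
            "train_attacks": [a for a in attack_types if a != unseen],
            "test_attacks": [unseen],
        }
    return protocols

# Hypothetical attack-type labels for illustration only.
attacks = ["print", "replay", "rigid_mask", "flexible_mask", "makeup"]
protos = leave_one_out_protocols(attacks)
print(len(protos), protos["LOO_makeup"]["test_attacks"])
```

Bona fide samples are shared across all splits; only the attack composition of the training set changes between protocols.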
\section{Introduction} There are many purely astrophysical and nuclear physics uncertainties in the core collapse supernova problem. However, the weak interaction in general and neutrino physics in particular play pivotal roles in nearly every aspect of the collapse of the core of a massive star and likely in any subsequent supernova explosion as well. It is sobering to contemplate that collapsing stellar cores will pass through regimes of matter density and neutrino flux which have never been probed in the laboratory and which could be affected significantly by new physics in the weakly interacting sector. Moreover, the existence of neutrino rest masses, unexplained and unpredicted by the Standard Model of particle physics, points directly at the possibility of new neutrino physics. In this paper we explore the effects of plausible extensions of the Standard Model in the weakly interacting sector on models for the explosion mechanism for core collapse supernovae. In particular, we investigate the effects of an electroweak singlet (\lq\lq sterile\rq\rq) neutrino $\nu_s$ on the physics of energy and lepton number transport in the supernova core and on the process of shock re-heating. The ranges of sterile neutrino rest mass and active-sterile vacuum mixing angle investigated here include those parameters of interest for sterile neutrino dark matter \cite{DM1,DM2,XSF,AFP,DH,AF,2006PhRvD..73f3506A,BiermannKusenko,Kev,AbazajianKoushiappas,1994PhLB..323..360S,x-ray,x-ray_1,Kev2,Boyarsky,Viel,2006PhRvD..74c3009W,2006MNRAS.370..213B} and pulsar kicks and related issues \cite{kicks,kicks2,FK}. The LSND experiment \cite{LSND,LSND_1,LSND_2} and recent mini-BooNE experiment \cite{MiniBooNE} do not constrain the sterile neutrino mass and mixing parameters considered in this paper. The general features of core collapse supernova evolution are dictated largely by entropy considerations \cite{BBAL}. 
Stars with initial masses in excess of $\sim 10\,{\rm M}_\odot$ evolve quickly to their evolutionary endpoint: a low entropy core supported by relativistically-degenerate electrons and, therefore, subject to dynamical instability. The collapse of this core is halted at or just beyond the point where nuclear density is reached. The gravitational binding energy released in this prompt collapse and in subsequent quasi-static contraction is more or less efficiently converted into seas of neutrinos of all kinds. The \lq\lq bounce\rq\rq\ of the core generates a shock wave which moves out. However, the energy in this shock is sapped by the photo-dissociation of nuclei passing through it. This process is an inevitable consequence of the substantial entropy jump across the shock front and of basic nuclear physics. The details of the mechanism or mechanisms whereby the deleterious effects of nuclear photo-dissociation are ameliorated, a viable shock is re-born, and an explosion originates remain elusive. However, ever since the work of Bethe and Wilson \cite{BW85} the broad outlines of a solution are plausibly clear. The prodigious energy in the neutrino and antineutrino reservoirs in the collapsed core is radiated from the surface of the proto-neutron star (the neutrino sphere) and is deposited in material behind the stalled bounce-shock, \lq\lq re-heating\rq\rq\ it and thereby driving a Type II, Ib, or Ic supernova explosion. However, one-dimensional simulations of this process, though containing detailed treatments of the nuclear equation of state and neutrino transport, nevertheless are challenged in producing convincing explosions. Much recent attention has focussed on multi-dimensional hydrodynamic, convective, or acoustic enhancement of neutrino energy transport \cite{2006ApJ...640..878B,2006ApJ...642..401B,2002ApJ...574L..65F,2006A&A...453..661K} above the neutrino sphere as a means of augmenting neutrino heating of matter below the shock. 
These schemes succeed in producing explosions. However, as yet they do not include the level of sophistication in, {\it e.g.,} neutrino transport and nuclear equation of state employed in the one-dimensional models for all relevant regimes of time and space. Our previous work \cite{Hidaka-Fuller} on the effects of active-sterile-active ($\nu_e \rightarrow \nu_s \rightarrow \nu_e$) neutrino flavor transformation in the in-fall epoch of supernova core collapse suggested a means by which neutrino energy transport could be augmented. Conceivably, this could be a solution to the shock re-heating problem. However, a key uncertainty not addressed in Ref.~\cite{Hidaka-Fuller} was the effect on this process of the shock wave itself. Here we will tackle this issue. In Section II we summarize the salient features of active-sterile-active neutrino flavor transformation physics and its effects during the in-fall epoch. In Section III we consider the ways in which the shock wave modifies the thermodynamic conditions which help determine how sterile neutrino production and re-conversion proceed. We also discuss sterile neutrino induced \lq\lq pre-heating\rq\rq\ and the possibility of a reduced nuclear photo-dissociation burden on the shock. In Section IV we discuss shock re-heating and the enhanced prospects for a supernova explosion which could be a by-product of active-sterile-active neutrino conversion schemes. We give conclusions in Section V. \section{In-Fall Phase Neutrino Flavor Conversion} In this section we briefly summarize our previous work \cite{Hidaka-Fuller} on the effects of active-sterile neutrino flavor conversion on the in-fall phase of a core collapse supernova. 
The key result of this earlier work was the discovery that electron neutrino conversion into a sterile neutrino species $\nu_e \rightarrow \nu_s$ could feed back on electron capture ($e^-+p\rightarrow n+\nu_e$) during collapse and alter the potential governing flavor transformation so as to produce a double Mikheyev-Smirnov-Wolfenstein (MSW) resonance \cite{MSW,MSW_1}. It is this double resonance structure which can lead to the re-conversion of the sterile neutrinos. With such a double resonance arrangement, at least some electron neutrinos will experience $\nu_e \rightarrow \nu_s \rightarrow \nu_e$ as they move from higher toward lower density in the core. For simplicity, we consider $2\times 2$ neutrino flavor mixing where, in vacuum, we have \begin{eqnarray} \vert\nu_e\rangle& = & \cos\theta \vert\nu_1\rangle + \sin\theta\vert\nu_2\rangle ,\\ \label{mix1} \vert\nu_s\rangle & = & -\sin\theta \vert\nu_1\rangle + \cos\theta\vert\nu_2\rangle . \label{mix12} \end{eqnarray} Here $\theta$ is an effective $2\times 2$ vacuum mixing angle for the $\nu_e\rightleftharpoons\nu_s$ channel, and $\vert\nu_1\rangle$ and $\vert\nu_2\rangle$ are the light and heavy neutrino energy (mass) eigenstates with mass eigenvalues $m_1$ and $m_2$, respectively. The relevant mass-squared difference is $\delta m^2 \equiv m_2^2-m_1^2$. Since we will be concerned with sterile neutrino rest mass scales $\sim {\rm keV}$, we will have $m_2 \gg m_1$, and so $\delta m^2 \approx m_2^2 \equiv m_{\rm s}^2$. An electron neutrino ($\nu_e$) propagating coherently in the medium of the core will experience a potential stemming from forward scattering on all particles (electrons/positrons, nucleons/quarks, and other neutrinos) that carry weak charge.
This potential is \begin{equation} V = {{3\sqrt{2}}\over{2}} G_{\rm F} n_{\rm b} \left(Y_e - {{1}\over{3}}+{{4}\over{3}} Y_{\nu_e} + {{2}\over{3}} Y_{\nu_\mu} + {{2}\over{3}} Y_{\nu_\tau} \right), \label{V} \end{equation} where $n_{\rm b} = \rho N_A$ is the baryon number density, $\rho$ is the density in ${\rm g}\,{\rm cm}^{-3}$ and $N_A$ is Avogadro's number, $G_{\rm F}$ is the Fermi constant, and the net lepton abundances relative to baryons are, {\it e.g.,} $Y_e \equiv {\left( n_{e^-}-n_{e^+}\right)}/n_{\rm b}$ with, {\it e.g.,} $n_{e^-}$ the electron number density. The terms proportional to $Y_{\nu_e}$, $Y_{\nu_\mu}$, and $Y_{\nu_\tau}$ in this potential stem from neutrino-neutrino forward scattering and must be corrected for the non-isotropic nature of the neutrino distribution functions at locations which are above the neutrino sphere \cite{Fuller87}. At any point inside the star or above it, electron antineutrinos, {\it i.e.,} $\bar\nu_e$'s, will experience a potential with the same magnitude as that experienced by $\nu_e$'s, but with opposite sign. Of course, sterile neutrinos $\nu_s$ experience no forward scattering potential. \begin{figure}[htbp] \includegraphics[width=3.4in]{E_resAndDensityProfile_r_min_mu_nu_e.eps} \caption{\label{fig:E_resAndDensityProfile}Right panel shows the core density profile with radius $r$, while the corresponding profiles for MSW resonance energy $E_{\rm res}$ (solid) and $\nu_e$ chemical potential $\mu_{\nu_e}$ (dashed) are shown in the left panel. Here $E_{\rm res}$ takes its minimum value $E_{\rm res}^{\rm min}$ at $r_0$. For a particular neutrino energy, an MSW resonance can occur at two locations (densities), {\it e.g.}, $r_1$ ($\rho_1$) and $r_2$ ($\rho_2$). } \end{figure} At a given location, a neutrino ($\nu_e$ or $\nu_s$) with energy $E_{\rm res}$ will experience an MSW medium-enhanced resonance where \begin{equation} E_{\rm res} = {{\delta m^2 \cos2\theta}\over{ 2 V}}\approx {{m_{\rm s}^2}\over{ 2 V}}. 
\label{rescond} \end{equation} Physically, this is the neutrino energy where the effective in-medium mass associated with the active neutrino matches the rest mass associated with the sterile state, $m_{\rm s}$. The last approximation in Eq.~(\ref{rescond}) follows for the reasons given above and because the vacuum mixing angles we consider here are very small ({\it e.g.,} satisfying $\sin^22\theta \sim {10}^{-9}$). In medium, the forward scattering potential will modify not only the effective masses of the active neutrinos but also the unitary relation between the neutrino flavor states (weak interaction eigenstates) and the (instantaneous) mass eigenstates $\vert \nu_1(t)\rangle$ and $\vert\nu_2(t)\rangle$, where $t$ represents any affine parameter along the neutrino's world line. We can express the in-medium transformation in direct analogy to that in vacuum, \begin{eqnarray} \vert\nu_e\rangle& = & \cos\theta_{\rm M}(t)\vert\nu_1(t)\rangle + \sin\theta_{\rm M}(t)\vert\nu_2(t)\rangle ,\\ \label{medmix1} \vert\nu_s\rangle & = & -\sin\theta_{\rm M}(t) \vert\nu_1(t)\rangle + \cos\theta_{\rm M}(t)\vert\nu_2(t)\rangle . \label{medmix2} \end{eqnarray} A similar unitary transformation applies to the antineutrinos, but with a different mixing angle $\bar\theta_{\rm M}(t)$. In an active-sterile neutrino oscillation scenario where neutrino transformation is enhanced and antineutrino transformation is suppressed, at resonance we will have $\theta_{\rm M}(t_{\rm res}) = \pi/4$, {\it i.e.,} maximal mixing. The region in space where the effective in-medium mixing angles $\theta_{\rm M}$ (or $\bar\theta_{\rm M}$) are large and near maximal is termed the resonance width. This width is $\delta t \approx {\vert d \ln V/dt \vert}^{-1}\,\tan2\theta$ and so is expected to be small for the neutrino parameters and conditions we treat here.
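For orientation, the scales set by Eqs.~(\ref{V}) and (\ref{rescond}) can be evaluated numerically. The composition values below are illustrative choices, not simulation output, and the anisotropy corrections to the neutrino-neutrino terms are ignored:

```python
import math

G_F = 1.166e-11        # Fermi constant in MeV^-2 (natural units)
HBARC3 = 197.327**3    # (hbar c)^3 in MeV^3 fm^3
N_A = 6.022e23         # Avogadro's number

def potential_MeV(rho, Ye, Ynue, Ynumu=0.0, Ynutau=0.0):
    """Forward-scattering potential V in MeV, for density rho in g/cm^3."""
    n_b = rho * N_A * 1e-39  # baryon number density in fm^-3
    lepton = Ye - 1.0/3.0 + (4.0/3.0)*Ynue + (2.0/3.0)*(Ynumu + Ynutau)
    return 1.5 * math.sqrt(2.0) * G_F * HBARC3 * n_b * lepton

def E_res_MeV(m_s_keV, V):
    """MSW resonance energy E_res ~ m_s^2 / (2 V), in MeV."""
    return (m_s_keV * 1e-3)**2 / (2.0 * V)

# Illustrative near-bounce conditions: rho ~ 1e14 g/cm^3, Ye = 0.35, Ynue = 0.05.
V = potential_MeV(1e14, Ye=0.35, Ynue=0.05)
print(E_res_MeV(10.0, V))  # tens of MeV for a 10 keV sterile neutrino
```

For these inputs the resonance energy comes out at tens of MeV, the same order as $\mu_{\nu_e}$ near bounce, consistent with the picture in Fig.~\ref{fig:E_resAndDensityProfile}.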
So long as the neutrino mean free paths are large compared to the MSW resonance width, we can regard neutrino flavor evolution as coherent, at least as far as the application of the MSW formalism is concerned \cite{Fuller87}. This is true even when the active neutrinos are trapped and thermalized in the core. Note, however, that at very high densities, such as those we expect to encounter deep in the core near and after core bounce, this condition will break down. There may be so many scattering targets for the active neutrinos in this case that the neutrino mean free paths are comparable to or shorter than the MSW resonance widths. We term this the incoherent or scattering-dominated case. In this regime, scattering-induced de-coherence of the neutrino fields will dominate the conversion of neutrino flavors. In particular, this can be the case for the $\nu_e \rightarrow \nu_s$ channel of most interest here. Note, however, that since the de-coherent neutrino (antineutrino) flavor conversion rate is proportional to $\sin^22\theta_{\rm M}(t)$ ($\sin^22\bar\theta_{\rm M}(t)$), the potential $V$ and the MSW resonance condition still play a significant role in determining the locations where this conversion is significant. Ref.~\cite{AFP} and references therein discuss this physics in detail, while Refs.~\cite{2007PhRvD..75h5004B,2007arXiv0705.0703B} discuss uncertainties and controversies associated with de-coherence in high density matter. Employing a simple nuclear liquid drop model \cite{BBAL,Fuller82} and a degenerate electron equation of state in a one-zone homologous collapse code \cite{Fuller82}, we found the double resonance structure discussed above. Fig.~\ref{fig:E_resAndDensityProfile} gives a graphic summary of these results. The equation of state and one-zone collapse code employed in obtaining these results are discussed in the Appendix of Ref.~\cite{Hidaka-Fuller}.
These calculations also showed that near the surface of the core, where the density is $\rho\sim 10^{12}\,{\rm g/cm^3}$, the MSW resonance energy $E_{\rm res}$ for $\nu_e\rightleftharpoons\nu_s$ tends to be much larger than the $\nu_e$ chemical potential (Fermi energy) $\mu_{\nu_e}$. Progressing inward from the edge of the collapsing core, $E_{\rm res}$ first decreases while $\mu_{\nu_e}$ increases continuously. Near the density $\rho\sim 10^{13}\,{\rm g/cm^3}$, $E_{\rm res}$ and $\mu_{\nu_e}$ become comparable and large-scale $\nu_e\rightarrow\nu_s$ conversion starts. Once this conversion process begins in earnest, $E_{\rm res}$ increases with further increases in density. In this latter phase of the collapse, $E_{\rm res}$ stays slightly above $\mu_{\nu_e}$, with both quantities increasing with increasing density. As will be discussed in the next section, a feedback process keeps $E_{\rm res}$ hovering just above $\mu_{\nu_e}$. Ultimately, at core bounce, when the collapse is halted, the matter density is near nuclear matter density ($\rho\sim 10^{14}\,{\rm g/cm^3}$) and the relevant neutrino energies are large since $\mu_{\nu_e}\sim 150\,{\rm MeV}$. Fig.~\ref{fig:E_resAndDensityProfile} illustrates these trends. Our earlier work \cite{Hidaka-Fuller} speculated that the double MSW resonance structure could facilitate enhanced neutrino energy, entropy, and lepton number transport from deep in the core to regions nearer the proto-neutron star surface ({\it i.e.,} the neutrino sphere). Essentially this enhancement comes about because a neutrino, initially a $\nu_e$ in our case, will spend part of its time as a sterile neutrino. While it is in the sterile state, this neutrino will move at almost the speed of light. As a result, the effective mean free paths and diffusion coefficients for these neutrinos will be re-normalized upward. 
Interestingly, our estimates suggested that the best prospects for transport enhancement through this mechanism could be obtained with sterile neutrino mass and vacuum flavor mixing parameters which overlap the ranges of these that give viable sterile neutrino dark matter \cite{DM1,DM2,XSF,AFP,DH,AF,2006PhRvD..73f3506A,BiermannKusenko,Kev,AbazajianKoushiappas,1994PhLB..323..360S,x-ray,x-ray_1,Kev2,Boyarsky,Viel,2006PhRvD..74c3009W,2006MNRAS.370..213B}. However, a significant caveat on these conclusions is that the calculations of Ref.~\cite{Hidaka-Fuller} dealt only with the in-fall epoch of core evolution. The bounce shock generated near the edge of the homologous core could be expected to move outward, through the outer core, and modify the thermodynamic variables and composition in this region. These modifications, in turn, could be expected to alter the $\nu_e$ forward scattering potential which governs sterile neutrino production and/or re-conversion. \section{Effect of Shock Wave Passage}\label{sec:shockwave_inner} Assessing the impact of post-shock active-sterile-active neutrino flavor transformation requires adroit attention to a few key issues in supernova shock formation and propagation. As the initial iron core collapses, an inner, homologous core will maintain a roughly self-similar, index 3 polytropic structure \cite{Goldreich-Weber,BBAL}. This makes intuitive sense because the pressure support in the star is dominated by relativistically degenerate electrons with Fermi level (chemical potential) $\mu_e \approx 11.1\,{\rm MeV} {\left( \rho_{10}\, Y_e\right)}^{1/3}$, where $\rho_{10}$ is the density in units of ${10}^{10}\,{\rm g}\,{\rm cm}^{-3}$. However, as electron capture proceeds and the pressure is relatively reduced, only a smaller, \lq\lq inner core\rq\rq\ can continue to collapse in this self similar and homologous manner. 
Homology (in-fall velocity proportional to radius) allows a one-zone calculation to be meaningful, as each location in the inner core will experience a portion of a common temperature, density, and composition history \cite{BBAL}. The remainder of the initial iron core which is above and outside the inner core is termed the \lq\lq outer core.\rq\rq\ The inner core is essentially an instantaneous Chandrasekhar mass $M_{\rm IC} \sim \langle Y_e\rangle^2$. When the central density reaches the point where nucleons touch (nuclear density), this core will bounce as a unit and serve as a piston. The shock will form at the edge of this inner core. The initial shock energy will be of order the gravitational binding energy of the inner core and will scale as $\sim\langle Y_e\rangle^{10/3}$ \cite{Fuller82}. As a result, there is some uncertainty in this initial shock strength depending on nuclear and sub-nuclear density equation of state, composition, and electron capture physics issues. In broad brush, however, we expect the entropy-per-baryon $S$ (in units of Boltzmann's constant $k_{\rm B}$) to jump by a few units at the shock front. This entropy jump can be significant because the core's material during the collapse itself, as well as the un-shocked material in the outer core ahead of the shock, is characterized by low entropy, $S \approx 1$. In the lower density regions of the outer core, an entropy jump $\Delta S \ge 3$, for example, is usually enough to shift the nuclear composition in Nuclear Statistical Equilibrium (NSE) from heavy nuclei to free nucleons and alpha particles. We will refer to this phenomenon as nuclear photo-dissociation or nuclear \lq\lq melting.\rq\rq\ As the shock propagates through the outer core and melts nuclei, it loses energy. This is because each nucleon is bound in a nucleus by $\sim 8\,{\rm MeV}$. This represents ${10}^{51}\,{\rm ergs}$ ($\equiv 1\,{\rm Bethe}$) per $0.1\,{\rm M}_\odot$ of material transiting the shock front.
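This energy bookkeeping is straightforward to check, assuming $\sim 8\,{\rm MeV}$ binding energy per nucleon:

```python
M_SUN_G = 1.989e33        # solar mass in g
M_NUCLEON_G = 1.6605e-24  # nucleon (atomic mass unit) in g
MEV_TO_ERG = 1.602e-6

def dissociation_cost_erg(mass_msun, binding_MeV=8.0):
    """Energy needed to photo-dissociate `mass_msun` solar masses of
    heavy nuclei into free nucleons at ~8 MeV binding per nucleon."""
    n_nucleons = mass_msun * M_SUN_G / M_NUCLEON_G
    return n_nucleons * binding_MeV * MEV_TO_ERG

print(dissociation_cost_erg(0.1))  # ~1.5e51 erg, i.e. about one Bethe
```

So an outer core of a few tenths of a solar mass can absorb several Bethes, which is why a $\sim 1$ Bethe shock stalls.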
Since the shock is born with an energy $\sim 1\,{\rm Bethe}$ and the outer core mass may be $\sim 0.7\,{\rm M}_\odot$, nuclear photo-dissociation quickly degrades the shock into a \lq\lq dead,\rq\rq\ standing accretion shock. Whether the shock can subsequently be re-energized by, {\it e.g.,} direct or convectively- or hydrodynamically-enhanced neutrino heating or electromagnetic or acoustic energy transport remains an open question as discussed in the Introduction \cite{2006ApJ...640..878B,2006ApJ...642..401B,2002ApJ...574L..65F,2006A&A...453..661K}. However, by any objective standard, the energy ($\sim 1\,{\rm Bethe}$) in observed Type II supernova shocks/explosions is small compared to the energy ($\sim 10\,{\rm Bethe}$) in the neutrino seas initially trapped in the core, and minuscule compared to the energy ($\sim 100\,{\rm Bethe}$) in the neutrino seas a few seconds post-core-bounce. Active-sterile neutrino transformation can tap into this reservoir and change the way in which neutrino energy is transported in and around the supernova core. As discussed in the last section, direct active-sterile-active neutrino flavor transformation could re-normalize upward the neutrino energy transport rate, thereby increasing the neutrino luminosity at the neutrino sphere and so boosting the shock re-heating rate. Also, the efficacy of the various re-heating schemes may depend on how far out the shock progresses before it stalls. In turn, this depends, among other variables, on electron capture and the shock energy remaining after nuclear photo-dissociation in the outer core. (See the discussion on this point in Ref.~\cite{Hix}.) Any effect like pre-heating which diminishes the nuclear photo-dissociation burden could translate into a larger stall radius for the shock, in turn, helping to increase the effectiveness of the various shock re-heating processes.
\subsection{Feedback between resonance energy and $\nu_e$ Fermi level} An important finding in the calculations of Ref.~\cite{Hidaka-Fuller} was that the active-sterile MSW resonance energy $E_{\rm res}$ exhibited a minimum which was located well inside the core. The density profile and $E_{\rm res}$ profile at bounce are illustrated in Fig.~\ref{fig:E_resAndDensityProfile}. The location of the minimum in $E_{\rm res}$ at bounce is another way to divide the core. As a consequence of this minimum, the first resonance $\nu_e\rightarrow\nu_s$ may occur in the inner core, while the re-conversion resonance, the second one, $\nu_s\rightarrow\nu_e$, typically occurs in the outer core. Note that at the inner resonance, inside of the location of the minimum in $E_{\rm res}$, the $\nu_e$ Fermi energy $\mu_{\nu_e} \approx 11.1\,{\rm MeV} {\left( 2 \rho_{10}\, Y_{\nu_e}\right)}^{1/3}$ tracks just below $E_{\rm res}$, increasing with increasing density just as does $E_{\rm res}$. Another key finding of Ref.~\cite{Hidaka-Fuller} was that in the region inside of the resonance energy minimum there is a feedback between sterile neutrino production, $Y_{\nu_e}$, and $Y_e$ which keeps $E_{\rm res}$ tracking just above $\mu_{\nu_e}$. This feedback process is a result of the high degeneracy of the electron neutrino distribution function. If the system were perturbed so that $E_{\rm res}$ were lower than $\mu_{\nu_e}$, there would be prodigious sterile neutrino production which would tend to lower the local net electron lepton number and return the system to a state with $E_{\rm res} > \mu_{\nu_e}$. \subsection{Shock wave modification of sterile neutrino production} The passage of the shock through a region can alter the relation between $E_{\rm res}$ and $\mu_{\nu_e}$ and so can influence sterile neutrino production there. As long as $E_{\rm res}$ stays well above the electron neutrino Fermi energy $\mu_{\nu_e}$, the production of sterile neutrinos is negligible.
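The quoted Fermi-energy scaling follows from the relativistic degenerate-gas relation $\mu = \hbar c\,(6\pi^2 n_{\nu_e})^{1/3}$ with $n_{\nu_e} = Y_{\nu_e}\rho/m_N$. A short numerical sketch (ours; the sample $\rho$ and $Y_{\nu_e}$ values are purely illustrative) recovers the $11.1\,{\rm MeV}$ coefficient:

```python
# Sketch (with assumed, rounded constants) verifying the quoted scaling
# mu_nue ~ 11.1 MeV (2 rho_10 Y_nue)^(1/3), where rho_10 is the density in
# units of 1e10 g/cm^3.  For a single degenerate neutrino species,
# mu = hbar*c * (6 pi^2 n)^(1/3), with n = Y_nue * rho / m_N.
import math

HBARC_MEV_FM = 197.327    # hbar*c in MeV fm
M_N_G = 1.66e-24          # baryon mass in grams
CM_TO_FM = 1.0e13         # 1 cm = 1e13 fm

def mu_nue_mev(rho_cgs, y_nue):
    n_fm3 = y_nue * rho_cgs / M_N_G / CM_TO_FM**3   # nu_e number density, fm^-3
    return HBARC_MEV_FM * (6.0 * math.pi**2 * n_fm3)**(1.0 / 3.0)

def mu_nue_fit_mev(rho_cgs, y_nue):
    return 11.1 * (2.0 * (rho_cgs / 1.0e10) * y_nue)**(1.0 / 3.0)

rho, y = 3.0e14, 0.05     # nuclear density and a representative trapped Y_nue
print(mu_nue_mev(rho, y), mu_nue_fit_mev(rho, y))
```

At nuclear density with $Y_{\nu_e} = 0.05$ both expressions give $\mu_{\nu_e}\approx 160\,{\rm MeV}$, consistent with the deep-core Fermi energies quoted later in the text.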
However, the shock wave can supply heat/entropy and can cause a discontinuous change of physical quantities ({\it e.g.}, density and entropy). Immediately behind the shock front, we might expect the density jump to result in a smaller gap between $E_{\rm res}$ and $\mu_{\nu_e}$. This could be accompanied by enhanced $\nu_s$ production. However, as outlined above, we expect this condition to be temporary, as the feedback effect will push $E_{\rm res}$ above $\mu_{\nu_e}$ again. To take this effect into account in our one-zone calculation, we added heat and entropy \lq\lq by hand.\rq\rq\ Specifically, to simulate the conditions in newly shocked regions of the core, we instantaneously increased the density by $\Delta \rho = 10^{13} \,{\rm g/cm^3}$ and the entropy-per-baryon (in units of $k_{\rm B}$) in three different cases by $\Delta S\sim 0.6,\ 2,\ 3$ as measured at density $\rho={10}^{13}\,{\rm g/cm^3}$. We assume that $\beta$-equilibrium and Nuclear Statistical Equilibrium (NSE) are attained instantaneously. This will be a decent approximation in the very high density regions where the first MSW resonance will be located, {\it e.g.,} inside or just outside the inner core. The entropy increments $\Delta S$ that we employ are chosen to be values characteristic of the early stages of shockwave formation. These values are smaller than the $\Delta S\sim 10$ entropy jump across the shock which is expected at later times or larger radius. However, our values are reasonable in a rough, physical sense: For a Chandrasekhar mass initial iron core ($\sim {10}^{57}$ baryons) collapsing to nuclear saturation density, we expect an in-fall kinetic energy at bounce $\sim {10}^{51}\,{\rm erg}$ which, if dissipated as heat at temperature $T\sim 1\,{\rm MeV}$, would give $\Delta S\sim 1$. (See the discussions in Ref.~\cite{BBAL} and Ref.~\cite{Fuller82}.) Going beyond this crude estimate is tricky.
As best we can ascertain, our values of $\Delta S$ at relevant locations and epochs in the core bracket the results of some published large-scale and detailed numerical simulations. Both Ref.~\cite{2003PhRvL..91t1102H} and Ref.~\cite{2006A&A...447.1049B} seem to infer values of $\Delta S$ for relevant locations and epochs which are within the range we consider here. However, as we will see below, within this range of entropy jump there can be significant differences in $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ effects. We calculate $\nu_s$ production and the influence of this process on the core in the following manner. First, we prepare an initial density profile. This is meant to be characteristic of the core just prior to core bounce. We take this profile to be that of a self-similarly contracted (homologous) index $n=3$ polytrope with central density $\rho_{\rm central} = 3\times 10^{14} \,{\rm g/cm^3}$. We then choose the location of the shock front on this profile and take the density there as the initial density when the shock front arrives. We take the other initial physical quantities from the results of our in-fall one-zone calculation at the initial density. We then apply our increments in density and entropy. Following a numerical procedure similar to that used to get the initial model, we use the results of an appropriate one-zone calculation to get the new, post-shock thermodynamic and lepton number quantities for the given increments $\Delta \rho$ and $\Delta S$. We use these altered conditions to estimate the production of sterile neutrinos and the feedback of this process on the potential $V$. \subsection{Heating of the outer core} For a given neutrino energy, we can identify the location of the second, outer resonance by using one-zone collapse calculation results for the run of potential $V$ (or, equivalently, $E_{\rm res}$) and the corresponding density profile. 
In order to assess the effects of neutrino flavor re-conversion $\nu_s\rightarrow\nu_e$ in this outer region, we need to estimate how many $\nu_e$'s are delivered and how much energy is deposited at the second resonance. This can be estimated by assuming adiabatic neutrino flavor evolution through MSW resonances. (Ref.~\cite{Hidaka-Fuller} discusses why adiabatic evolution is a good approximation here.) In the adiabatic limit we can assume that all $\nu_e$'s contained in neutrino energy range $\Delta E_{\rm res}$, corresponding to the MSW resonance potential width $\delta V$, are converted to sterile neutrinos $\nu_s$. The width of the resonance in radial coordinate is $\delta r = \left( dr/dV\right) \delta V= {\cal{H}} \tan2\theta $. Here the potential (\lq\lq density\rq\rq ) scale height is ${\cal{H}}=\vert d\ln{V}/dr\vert^{-1}$. Another expression for the spatial resonance width is $\delta r ={\cal{H}} \Delta E_{\rm res}/E_{\rm res}$. Making use of the resonance condition Eq.~(\ref{rescond}), we can express this as \begin{equation} \delta r \approx {{2 V^2 \Delta E_{\rm res}}\over{m_s^2}} {\Bigg\vert{{dV}\over{dr}}\Bigg\vert}^{-1}. \label{rescond2} \end{equation} Using this, we can show that the re-conversion rate per baryon for $\nu_s\rightarrow\nu_e$ at the second resonance is related to the corresponding rate per baryon for $\nu_e\rightarrow\nu_s$ conversion at the first resonance by \begin{equation} {\dot L}_{\nu_s\rightarrow\nu_e}=\frac{r_{\rm 1st}^2\rho_{\rm 1st}}{r_{\rm 2nd}^2\rho_{\rm 2nd}} \frac{dV/dr|_{\rm 2nd}}{dV/dr|_{\rm 1st}} {\dot L}_{\nu_e\rightarrow\nu_s}. \label{eq:conversion_rate} \end{equation} At any location we can designate $L \equiv Y_e+Y_{\nu_e}$ as the total electron lepton number per baryon. Neutrino flavor conversion $\nu_e\rightarrow\nu_s$ ($\nu_s\rightarrow\nu_e$) produces a negative (positive) time rate of change of this quantity, ${\dot L}= dL/dt$, respectively. 
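The two expressions given above for the spatial resonance width can be checked against each other numerically. In the sketch below (ours), we assume an exponential potential profile $V(r)\propto e^{-r/{\cal H}}$ and the standard MSW resonance condition $E_{\rm res} = m_s^2\cos 2\theta/2V$; all input numbers are illustrative placeholders, not values from the one-zone calculation.

```python
# Consistency check (ours) of the two quoted forms of the resonance width,
#   delta_r = H tan(2 theta)  and  Eq. (rescond2),
# under an assumed exponential potential profile V(r) ~ exp(-r/H).
import math

theta = 1.6e-5                # vacuum mixing angle (sin^2 2theta ~ 1e-9), assumed
m_s2 = (1.0e-3)**2            # (m_s ~ 1 keV)^2 in MeV^2, assumed
H = 1.0e5                     # potential scale height in cm, illustrative
E_res = 150.0                 # resonance energy in MeV, illustrative

V = m_s2 * math.cos(2.0 * theta) / (2.0 * E_res)   # resonance condition
dVdr = V / H                                       # |dV/dr| for V ~ exp(-r/H)

# First form: delta_r = H tan(2 theta)
delta_r_a = H * math.tan(2.0 * theta)
# Second form, Eq. (rescond2): delta_r = 2 V^2 DeltaE / (m_s^2 |dV/dr|),
# with resonance energy width DeltaE = E_res tan(2 theta)
delta_E = E_res * math.tan(2.0 * theta)
delta_r_b = 2.0 * V**2 * delta_E / (m_s2 * dVdr)

print(delta_r_a, delta_r_b)   # agree up to a cos(2 theta) factor
```

For small vacuum mixing angle the two forms agree to ${\cal O}(\theta^2)$, as expected.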
In employing Eq.~(\ref{eq:conversion_rate}), we evaluate $dV/dr$ numerically using the in-fall one-zone calculation profile. In this equation, $\rho_{\rm 1st}$ ($\rho_{\rm 2nd}$) and $r_{\rm 1st}$ ($r_{\rm 2nd}$) are the density and the location of the first (second) resonance, respectively, as illustrated schematically in Fig.~\ref{fig:E_resAndDensityProfile}. The energy transfer rate per baryon from the first to the second resonance obeys a relationship in obvious analogy to that in Eq.~(\ref{eq:conversion_rate}). \begin{figure}[htbp] \includegraphics[width=3.2in]{plot_ShockEffectInnerOuter.eps} \caption{\label{fig:ShockEffectHeating}Effects of the shock in the interior (curves on the right) and shock-modified $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ pre-heating in the outer core (curves on the left). Heavy nucleus mass fraction $X_{\rm H}$, temperature $k_{\rm B} T$ (in MeV), and entropy per baryon $S$ (in units of $k_{\rm B}$) are shown for three different cases. These cases correspond to three different shock strength scenarios with entropy jump (as measured at $\rho=3\times{10}^{13}\,{\rm g}\,{\rm cm}^{-3}$) $\Delta S =0.6$ (triangles), $\Delta S =2$ (squares), and $\Delta S =3$ (circles), respectively. Results of the pre-shock, in-fall one-zone calculation are also included (dotted lines). The minimum of the resonance energy is located around $\rho=7\times 10^{12}\, {\rm g}\,{\rm cm}^{-3}$. This location divides the curves on the left and right, as described in the text. } \end{figure} At the location of the second, outer resonance we take account of the heat and lepton number deposited by $\nu_s\rightarrow \nu_e$ by re-running the one-zone code with these updated quantities but with the density fixed at its original value. 
This gives us estimates of the change in thermodynamic variables that accompany this \lq\lq pre-heating.\rq\rq\ We continue this calculation of $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ energy transfer to locations in the outer core until the shock wave reaches the position $r_{0}$ where $E_{\rm res}$ takes its minimum value (see Fig.~\ref{fig:E_resAndDensityProfile}). Fig.~\ref{fig:ShockEffectHeating} shows the profiles of entropy, temperature, and heavy nucleus mass fraction $X_{\rm H}$ at the completion of this energy transfer process. Three profiles are shown, corresponding to three different shock strength scenarios with entropy jump (as measured at $\rho=3\times{10}^{13}\,{\rm g}\,{\rm cm}^{-3}$) $\Delta S =0.6$ (triangles), $\Delta S =2$ (squares), and $\Delta S =3$ (circles), respectively. We may view these profiles as snapshots of conditions when the shock front is located at $r_{0}$. The figure also includes the results of the original in-fall calculation for comparison. Fig.~\ref{fig:ShockEffectHeating} shows that, depending on shock strength and the $E_{\rm res}$ profile, sterile neutrino-induced pre-heating could result in at least partial ($\sim 50\%$) melting of heavy nuclei in the outer regions of the core ahead of the shock. This could represent a substantial reduction in the nuclear photo-dissociation burden for the shock. Even though our estimates are schematic in nature and crude on a quantitative level, this result is sufficiently dramatic that it is clear that the existence of sterile neutrinos in the mass and mixing ranges discussed here could alter the energetics of core collapse supernova shock propagation. \section{Shock Re-Heating} In their pioneering work on core collapse supernovae, Mayle and Wilson \cite{1988ApJ...334..909M,1993PhR...227...97W} obtained vigorous explosions in the late-time shock re-heating model, even in one dimension.
This result was, and continues to be, at odds with the results of other detailed one-dimensional simulations, some more sophisticated in their treatments of the nuclear equation of state and neutrino transport \cite{2004ApJS..150..263L,2003ApJ...592..434T,2005ApJ...620..840L,2005AAS...207.1701M,2006ApJ...642..401B,2004rpao.conf..224L,2005ApJ...626..317W,2002ApJ...574L..65F,2005PhRvD..72d3007C,2001ApJ...560..326B,2006astro.ph..7281S}. Mayle and Wilson got their result by invoking neutrino convective transport in the core to increase the neutrino luminosity at the neutrino sphere. Though the physical basis for this effect ({\it i.e.,} their \lq\lq neutron fingers\rq\rq) has been repudiated, their result taught us a valuable lesson: The efficacy of neutrino heating in re-energizing the stalled shock is a sensitive function of neutrino and antineutrino transport in the core and the corresponding luminosities at the neutrino sphere. The process of $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ flavor conversion in the core could be just the sort of neutrino energy transport augmentation that could aid the core collapse supernova explosion process \cite{Hidaka-Fuller}. \subsection{De-coherent production of sterile neutrinos inside the proto-neutron star} Neutrino flavor evolution deep in the central region of the post-bounce core will be collisionally-dominated. The characteristic density in the central core at this epoch will be near or above nuclear saturation density, $\rho\sim 3\times{10}^{14}\,{\rm g}\,{\rm cm}^{-3}$, and scattering-induced de-coherence will be the primary channel through which sterile neutrinos are produced from the seas of active neutrinos \cite{2001PhRvD..64b3501A}.
The total (left-handed $\nu_s$ plus right-handed $\bar\nu_s$) sterile neutrino emissivity $\mathcal{E}$ (energy emission per unit mass per unit time) can be estimated by employing average neutrino and antineutrino flavor conversion probabilities $\langle P_m(\nu_{e}\rightarrow\nu_{s};p,t)\rangle$ and $\langle P_m(\bar{\nu}_{e}\rightarrow\bar{\nu}_{s};p,t)\rangle$, respectively, as functions of neutrino or antineutrino momentum $p$ and location parameter $t$, energy-dependent neutrino and antineutrino scattering cross sections (in principle on all weakly interacting targets) $\sigma_{\nu_{e}}(E)$ and $\sigma_{\bar{\nu}_{e}}(E)$, respectively, and integrating over neutrino and antineutrino fluxes and energies $E$ \cite{1991NuPhB.358..435K,1993APh.....1..165R,1996slfp.book.....R,2001PhRvD..64b3501A}, \begin{eqnarray} \mathcal{E}&\approx& \frac{1}{m_N}\int d\Phi_{\nu_{e}} E \sigma_{\nu_{e}}(E)\frac{1}{2}\langle P_m(\nu_{e}\rightarrow\nu_{s};p,t)\rangle \nonumber \\ &+& \frac{1}{m_N}\int d\Phi_{\bar{\nu}_{e}} E \sigma_{\bar{\nu}_{e}}(E)\frac{1}{2}\langle P_m(\bar{\nu}_{e}\rightarrow\bar{\nu}_{s};p,t)\rangle, \label{eq:emissivity} \end{eqnarray} where $m_N$ is an atomic mass unit (essentially, the average free nucleon mass).
In the conditions of near weak and near thermal equilibrium in the post-bounce central core, the differential neutrino and antineutrino fluxes $d\Phi_{\nu_{e}}$ and $d\Phi_{\bar{\nu}_{e}}$ (or number densities $dn_{\nu_e}$ and $dn_{\bar\nu_e}$), respectively, can be expressed as \begin{eqnarray} d\Phi_{\nu_{e}}=c dn_{\nu_{e}} &\approx& \frac{d^3p}{(2\pi)^3}\frac{1}{e^{E/T_{\nu_{e}}-\eta_{\nu_{e}}}+1}\nonumber\\ &\approx& \frac{1}{(2\pi)^3}\frac{E^2dE}{e^{E/T_{\nu_{e}}-\eta_{\nu_{e}}}+1}, \end{eqnarray} \begin{eqnarray} d\Phi_{\bar{\nu}_{e}}=c dn_{\bar{\nu}_{e}} &\approx& \frac{d^3p}{(2\pi)^3}\frac{1}{e^{E/T_{\bar{\nu}_{e}}-\eta_{\bar{\nu}_{e}}}+1}\nonumber\\ &\approx& \frac{1}{(2\pi)^3}\frac{E^2dE}{e^{E/T_{\bar{\nu}_{e}}-\eta_{\bar{\nu}_{e}}}+1}, \end{eqnarray} where the $\nu_e$ ($\bar\nu_e$) degeneracy parameter is $\eta_{\nu_{e}}=\mu_{\nu_{e}}/T_{\nu_{e}}$ ($\eta_{\bar\nu_{e}}=\mu_{\bar\nu_{e}}/T_{\bar\nu_{e}}\approx -\eta_{\nu_{e}}$), respectively. The neutrino and antineutrino temperatures $T_{\nu_e}$ and $T_{\bar\nu_e}$, respectively, are essentially the same as the matter temperature. Here the speed of light is $c$. The average oscillation (transformation) probabilities in Eq.~(\ref{eq:emissivity}) are given by \begin{eqnarray} \label{P1} \lefteqn{\langle P_m(\nu_{e}\rightarrow\nu_{s};p,t)\rangle}\nonumber\\ &\approx& \frac{1}{2}\frac{\Delta(E)^2\sin^2 2\theta}{\Delta(E)^2\sin^2 2\theta+D^2+[\Delta(E)\cos 2\theta-V]^2},\nonumber\\ && \end{eqnarray} \begin{eqnarray} \label{P2} \lefteqn{\langle P_m(\bar{\nu}_{e}\rightarrow\bar{\nu}_{s};p,t)\rangle}\nonumber\\ &\approx& \frac{1}{2}\frac{\Delta(E)^2\sin^2 2\theta}{\Delta(E)^2\sin^2 2\theta+\bar{D}^2+[\Delta(E)\cos 2\theta+V]^2}.\nonumber\\ && \end{eqnarray} Following Ref.~\cite{2001PhRvD..64b3501A}, and for the purpose of simple estimation, here we will take the $\nu_e$ and $\bar\nu_e$ scattering cross sections to be those appropriate for free nucleons. 
These are roughly \begin{equation} \sigma_{\nu_{e}}(E)\approx\sigma_{\bar{\nu}_{e}}(E)\approx 1.66 G_{\rm F}^2 E^2. \end{equation} In Eqs.~(\ref{P1}) and (\ref{P2}), we employ the notation \begin{equation} \Delta(p)\equiv \delta m^2/2p\approx m_s^2/2E\approx \Delta(E). \end{equation} The quantum damping rate for neutrinos is \begin{equation} D=\Gamma_{\nu_{e}}/2=\int d\Phi_{\nu_{e}} \sigma_{\nu_{e}}(E)/2. \end{equation} The corresponding quantum damping rate for antineutrinos, $\bar{D}$, has a directly analogous form. The effect of the de-coherent $\nu_s$ and $\bar\nu_s$ production on the potential $V$ has been studied in the context of a collapsed stellar core in Ref.~\cite{2001PhRvD..64b3501A}. There it was argued that $V$ should evolve toward zero on a time scale short compared to the characteristic proto-neutron star core dynamical time scale. Accordingly, we shall take $V=0$ in Eq.~(\ref{P1}) and Eq.~(\ref{P2}) in the following discussion. This will facilitate a simple estimate of the sterile neutrino emissivity deep in the central region of the proto-neutron star after bounce. \subsection{Enhancement of neutrino luminosity behind the shock} We have estimated the effects of shock passage on thermodynamic and composition variables in the outer parts of the core by employing one-zone simulations of shock propagation through these regions. In doing this, we use the same numerical procedure described in Section III for gauging the effects of shock passage in the inner parts of the core. However, in the case of the outer core, we take account of the $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ pre-heating of the material prior to the arrival of the shock. Therefore, our initial conditions for shock passage in the outer core for this calculation are chosen to be those given by the $\nu_s\rightarrow\nu_e$ energy deposition process described in Section III and shown in Fig.~\ref{fig:ShockEffectHeating}. The results are intriguing.
For the case of a strong initial shock ($\Delta S=3$ as measured at density $\rho = {10}^{13}\,{\rm g}\,{\rm cm}^{-3}$), our calculations show that the double resonance structure characteristic of the in-fall regime is destroyed. In this case, however, the resonance energy $E_{\rm res}$ remains well above the $\nu_e$ Fermi energy $\mu_{\nu_e}$. This, in turn, suggests that any $\nu_e$ which is converted to a sterile neutrino $\nu_s$ by scattering-induced de-coherence deep inside the core, yet possesses an energy above the value of $E_{\rm res}$ at the neutrino sphere, will encounter an MSW resonance further out, nearer the neutrino sphere, and will be coherently and adiabatically re-converted to a $\nu_e$ there. Fig.~\ref{fig:ShockEffectInnerOuterEresProfileStrongShockType} shows the results of the one-zone calculations that suggest this scenario. \begin{figure}[htbp] \includegraphics[width=3.2in]{plot_ShockEffectInnerOuterEresProfileStrongShockType.eps} \caption{\label{fig:ShockEffectInnerOuterEresProfileStrongShockType} One-zone calculation results for resonance energy $E_{\rm res}$ (in MeV) and $\nu_e$ chemical potential $\mu_{\nu_e}$ (Fermi energy, in MeV) are shown as functions of density $\rho$ (in ${\rm g}\,{\rm cm}^{-3}$). Circles and squares represent $E_{\rm res}$ and $\mu_{\nu_e}$, respectively. Filled symbols correspond to the values of these quantities for an assumed strong shock ($\Delta S=3$, as described in Section III). This case includes the effect of $\nu_s\rightarrow\nu_e$ reconversion and associated pre-heating ahead of the shock, as well as the effect of the shock itself. The effect of post-bounce pre-heating alone is shown by the quantities with the open circles and squares.
For comparison, $E_{\rm res}$ (dashed line) and $\mu_{\nu_e}$ (dotted line) are given for the in-fall (pre-shock, no pre-heating) case.} \end{figure} \begin{figure}[htbp] \includegraphics[width=3.2in]{plot_ShockEffectInnerOuterEresProfileWeakShockType.eps} \caption{\label{fig:ShockEffectInnerOuterEresProfileWeakShockType} Same as Fig.~\ref{fig:ShockEffectInnerOuterEresProfileStrongShockType}, but for the case of a weak shock ($\Delta S = 0.6$). } \end{figure} It can be seen in Fig.~\ref{fig:ShockEffectInnerOuterEresProfileStrongShockType} that both the $E_{\rm res}$ and $\mu_{\nu_e}$ curves are monotonic with increasing density and each has positive slope. Therefore, the highest energy neutrinos will tend to deposit their energy ({\it i.e.,} be re-converted to $\nu_e$'s) deepest in the core. This could result in more heating by $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ transport enhancement with increasing depth which could, in turn, promote convective instability and further augmentation of neutrino energy transport. In any case, since our estimates show that the resonance energy $E_{\rm res}$ asymptotes out to about $E_{\rm res}^{\rm edge} \approx 100\,{\rm MeV}$ at the outer edge of the core, we can conclude that the $\nu_e$'s converted to sterile species in the inner regions of the core where $\mu_{\nu_e} \ge E_{\rm res}^{\rm edge}$ will be reconverted to $\nu_e$'s prior to escaping the core. On account of the quadratic energy dependence of the $\nu_e$ absorption cross sections, such re-converted high energy $\nu_e$'s are certain to deposit their energy and be thermalized on time scales short compared to any transport time scale. Our calculations suggest that a weaker initial shock will not eliminate the double resonance structure left at the end of the in-fall epoch. Fig.~\ref{fig:ShockEffectInnerOuterEresProfileWeakShockType} is analogous to Fig.~\ref{fig:ShockEffectInnerOuterEresProfileStrongShockType} but shows the results of a one-zone calculation with initial shock strength $\Delta S=0.6$.
In this case neither the $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ pre-heating of the outer core nor the shock passage event itself can change composition, density, and temperature enough to disrupt the general form of the runs for $E_{\rm res}$ and $\mu_{\nu_e}$. We conclude that there may be a threshold in shock strength beyond which the double resonance structure at the end of in-fall is replaced by the single outer resonance regime in Fig.~\ref{fig:ShockEffectInnerOuterEresProfileStrongShockType}. What is this threshold in shock strength? The answer to this question is hard to get at with our simplistic model. However, a fair guess based on our one-zone scheme with its liquid drop equation of state would be $\Delta S \ge 2$ (as measured at $\rho = {10}^{13}\,{\rm g}\,{\rm cm}^{-3}$). This is significant but, ultimately, unsatisfying because large-scale numerical supernova simulations, depending on the initial model and on in-fall physics, may produce initial bounce shocks with strengths below, near, or above this threshold. For example, the calculations by the Mezzacappa group \cite{Hix} appear to produce shocks with strengths $\Delta S \approx 2$ by our measure. This would be near or above the threshold for erasing the in-fall epoch double resonance structure. However, the simulations by the Janka group \cite{2006A&A...447.1049B} suggest a range of shock strengths which could be near the threshold. This issue has to be resolved before we can be confident of the effects of $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ on core collapse supernovae. Note, however, that in either the weak or strong shock case, sterile neutrinos produced at high energies deep in the core could be converted to $\nu_e$'s further out. This is all we need to enhance energy deposition behind the shock and, therefore, increase the shock re-heating rate. All that remains is an estimate of this heating rate. This requires an estimate of the sterile neutrino emissivity deep in the core.
Following Ref.~\cite{2001PhRvD..64b3501A}, we can get a rough estimate of the energy radiated in sterile neutrinos $\nu_s$ per unit mass and per unit time -- the emissivity -- in the region of the core where the $\nu_e$ Fermi energies are $\mu_{\nu_e} \ge 100\,{\rm MeV}$. As outlined above, we take $V=0$ deep inside the proto-neutron star and approximate the Fermi distribution as a step function, {\it i.e.,} completely degenerate, with degeneracy parameter $\eta_{\nu_{e}} \gg 1$. In this limit, flavor conversion in the channel $\nu_e\rightarrow\nu_s$ gives rise to sterile neutrino $\nu_s$ emissivity \begin{eqnarray} \mathcal{E}&=& \frac{1.66\, G_{\rm F}^2}{8\pi^2 m_N} \sin^22\theta \int_0^{\mu_{\nu_e}} dE \frac{E^5}{1+{{4D^2 E^2}/{m_{\nu_s}^4}}} \nonumber\\ &=&\frac{1.66\, G_{\rm F}^2}{16\pi^2 m_N} \sin^22\theta \left(\frac{m_{\nu_s}^2}{2D}\right)^6\nonumber\\ &&\qquad\times\left[\frac{\xi^4}{2}-\xi^2+\ln (1+\xi^2)\right] {\Bigg\vert}_{\xi={{2D \mu_{\nu_e}}/{m_{\nu_s}^2}}} \label{emissivity1} \end{eqnarray} where we ignore contributions to the emissivity stemming from $\bar{\nu}_e$'s. Noting that the parameter $\xi$ satisfies $\xi\ll 1$ for $100\,{\rm eV}< m_{\nu_s} < 1\,{\rm MeV}$ and that typically $\mu_{\nu_e}\sim 150\,{\rm MeV}$, we can calculate the emissivity to leading order in $\xi$ to find \begin{equation} \mathcal{E}\approx \left( 2\times 10^{28}\, {\rm erg}\ {\rm s}^{-1}\,{\rm g}^{-1}\right) \sin^22\theta . \end{equation} This is then the rate per gram at which energy in sterile neutrinos is flowing out of the inner parts of the core. On account of adiabatic MSW resonant $\nu_s\rightarrow\nu_e$ flavor conversion, the fraction of the deep core's $\nu_s$ energy flux which is carried by neutrinos with energies above the resonance energy at the outer edge of the core, $E_{\rm res}^{\rm edge}$, will be deposited in the regions just below the neutrino sphere.
Using a calculation in obvious analogy to that in Eq.~(\ref{emissivity1}), we can estimate the effective emissivity for this \lq\lq re-captured\rq\rq\ sterile neutrino energy, \begin{eqnarray} \lefteqn{\mathcal{E}(\nu_s\rightarrow\nu_e)}\nonumber\\ &\approx&\frac{1}{m_N}\int_{E_{\rm res}^{\rm edge}}^{\mu_{\nu_e}} d\Phi_{\nu_{s}} E \sigma_{\nu_{e}}(E)\nonumber\\ &&\qquad\times\frac{1}{2}\langle P_m(\nu_{s}\rightarrow\nu_{e};p,t)\rangle \end{eqnarray} where, as argued above, $E_{\rm res}^{\rm edge} \approx 100\,{\rm MeV}$. Using the same approximations made in evaluating Eq.~(\ref{emissivity1}), we find \begin{equation} \mathcal{E}(\nu_s\rightarrow\nu_e)\approx 1.4\times 10^{52}\,{\rm erg}\ {\rm s}^{-1}\,{\rm M}_\odot^{-1}\left( {{\sin^22\theta}\over{{10}^{-9}}}\right). \label{re-emit} \end{equation} Since the inner part of the core which generates the sterile neutrinos has a mass $\sim 1\,{\rm M}_\odot$, the energy deposited per unit time near the edge of the neutron star could be prodigious. Of course, this conclusion depends on a host of active-sterile neutrino mass/mixing matrix issues including, {\it e.g.,} the effective $2\times 2$ angle $\theta$ characterizing $\nu_e\rightleftharpoons\nu_s$ vacuum mixing. If we take $m_s \sim 1\,{\rm keV}$ and $\sin^22\theta = {10}^{-9}$, corresponding to the \lq\lq sweet spot\rq\rq\ for sterile neutrino dark matter and beneficial supernova effects picked out in Ref.~\cite{Hidaka-Fuller}, then the emissivity in Eq.~(\ref{re-emit}) suggests that we could possibly {\it double} the $\nu_e$ energy resident just below the neutrino sphere. Though this energy would be deposited in the form of $\nu_e$'s, rapid re-establishment of beta equilibrium would imply that this energy is shared among all six active neutrino species. This energy sharing will be weighted roughly by the relative numbers of active neutrino species in equilibrium.
However, if the extra $\nu_e$'s are deposited quite close to the neutrino sphere, energy re-distribution becomes a difficult neutrino transport issue. Since there is a preponderance of $\nu_e$'s, we can guess that there will not be equal amounts of energy in the $\nu_e$, $\bar\nu_e$, $\nu_\mu$, $\bar\nu_\mu$, $\nu_\tau$, and $\bar\nu_\tau$ seas. On the other hand, since shock re-heating is mostly effected through the charged current capture processes $\nu_e+n\rightarrow p+e^-$ and $\bar\nu_e + p\rightarrow n +e^+$, it is the $\nu_e$ and $\bar\nu_e$ luminosities at the neutrino sphere which are most important. We could be conservative and assume equal energy sharing so that the $\nu_e$ and $\bar\nu_e$ seas get a third of the extra energy deposited by $\nu_s\rightarrow\nu_e$ near the neutrino sphere. In this case, and for a range of mixing angles relevant for sterile neutrino dark matter, we could expect roughly a $\sim 10\%$ to $\sim 100\%$ increase in the sum of the $\nu_e$ and $\bar\nu_e$ luminosities. This, in turn, could lead to comparable increases in the re-heating rate of the shock. \section{Lepton Number Transport and the Role of $\mu$- and $\tau$- Flavor Neutrinos} In this section we discuss the active-sterile-active neutrino flavor transformation-induced flows of electron, muon, and tau lepton numbers and the effects of these on supernova physics. The $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ process outlined above will transport electron lepton number from deep in the core to the vicinity of the neutrino sphere. In the course of describing this process, we gave no consideration to mu ($\nu_\mu$, $\bar\nu_\mu$) and tau ($\nu_\tau$, $\bar\nu_\tau$) flavor neutrinos. Surely, if electron flavor neutrinos mix in vacuum with a sterile species, mu and tau flavor neutrinos likely will as well.
In broad brush, the lepton number transport rate for $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ should dominate over the rate for $\bar\nu_e\rightarrow\bar\nu_s\rightarrow\bar\nu_e$ and, for that matter, the rates for $\nu_\mu\rightarrow\nu_s\rightarrow\nu_\mu$, $\bar\nu_\mu\rightarrow\bar\nu_s\rightarrow\bar\nu_\mu$, $\nu_\tau\rightarrow\nu_s\rightarrow\nu_\tau$, and $\bar\nu_\tau\rightarrow\bar\nu_s\rightarrow\bar\nu_\tau$ as well. The argument to support this assertion is based on the relative populations of the various active neutrino species. Keep in mind that the inner core, the \lq\lq piston\rq\rq\ for shock generation at bounce, though experiencing an increase in entropy stemming from the dissipation of in-fall kinetic energy, nevertheless remains relatively low in entropy and full of its original electron lepton number excess. Immediately after bounce, the temperature in the core is $T\sim 10\,{\rm MeV}$, while the $\nu_e$ Fermi energy is $\mu_{\nu_e} \sim 100\,{\rm MeV}$. (See the discussion in Ref.~\cite{1999ApJ...513..780P}.) In the standard stellar collapse model, these conditions will persist for of order a neutrino diffusion time scale, {\it i.e.,} seconds. This is a time comparable to or longer than the shock re-heating time of interest here. The $\bar\nu_e$'s will have a negative chemical potential ($-\mu_{\nu_e}$). The mu and tau flavor neutrinos must be pair produced, and as a consequence they will have zero chemical potential. In the conditions of beta equilibrium in the inner core, the number density of $\nu_e$'s will be $\sim {\mu_{\nu_e}^3}$, while the number density of $\bar\nu_e$'s will be $\sim T^3 \exp{\left(-{\mu_{\nu_e}}/T\right)}$, and the number densities of all mu and tau flavor neutrino species will be $\sim T^3$. Clearly, there should be a large excess of $\nu_e$'s over the other neutrino species in the time frame of interest. 
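The hierarchy asserted here is easy to quantify. Up to common ${\cal O}(1)$ factors, a sketch (ours) with the representative values $T\sim 10\,{\rm MeV}$ and $\mu_{\nu_e}\sim 100\,{\rm MeV}$ quoted above gives:

```python
# Rough sketch of the relative neutrino-species abundances in the post-bounce
# inner core.  Up to common O(1) factors the text gives
#   n(nu_e) ~ mu^3,  n(nubar_e) ~ T^3 exp(-mu/T),  n(nu_mu, nu_tau) ~ T^3.
import math

T = 10.0      # matter/neutrino temperature, MeV
mu = 100.0    # nu_e Fermi energy, MeV

n_nue = mu**3
n_nubar_e = T**3 * math.exp(-mu / T)
n_mu_tau = T**3

print(n_nue / n_mu_tau)       # nu_e excess over a pair-produced species
print(n_nubar_e / n_mu_tau)   # Boltzmann suppression of nubar_e
```

The $\nu_e$'s outnumber any pair-produced species by a factor $\sim {10}^{3}$, while the $\bar\nu_e$'s are Boltzmann-suppressed by $e^{-10}\sim{10}^{-5}$ relative to them, confirming the large $\nu_e$ excess.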
As a result, during this time, de-coherence associated with the scattering of active neutrino species will produce far more sterile neutrinos ($\nu_s$'s) than the opposite handedness \lq\lq anti\rq\rq-sterile neutrinos ($\bar\nu_s$'s). The picture we have of the supernova core in this time frame is then as follows. We have an inner core \lq\lq source\rq\rq\ producing a large flux of very high energy $\nu_s$'s and lower fluxes of lower energy $\bar\nu_s$'s. The $\nu_s$'s will be preferentially transformed to $\nu_e$'s via $\nu_s\rightarrow\nu_e$ near the neutrino sphere. This is because in this region the forward scattering potential for mu or tau neutrino conversion to sterile neutrinos will be negative. With a negative potential, only antineutrinos can be matter-enhanced. For example, the forward scattering potential for the flavor conversion channel $\nu_s\rightleftharpoons\nu_{\mu}$ is given by \begin{eqnarray} V_{\nu_\mu} & = & {{\sqrt{2}}\over{2}}\, G_{\rm F}\, n_{\rm b} ( Y_e-1+2Y_{\nu_e}\nonumber\\ &&\qquad\qquad+4Y_{\nu_{\mu}}+2Y_{\nu_{\tau}} ). \label{mupot} \end{eqnarray} An analogous expression holds for the potential, $V_{\nu_\tau}$, relevant for $\nu_s\rightleftharpoons\nu_\tau$, but with the coefficients of $Y_{\nu_\mu}$ and $Y_{\nu_\tau}$ in Eq.~(\ref{mupot}) swapped. Since we expect $Y_e \approx 0.35$, $Y_{\nu_e} \sim 0.05$, and $Y_{\nu_\mu} = Y_{\nu_\tau}=0$ initially, we will have $V_{\nu_\mu} < 0$ and $V_{\nu_\tau} < 0$. This, in turn, implies that only the channels $\bar\nu_s\rightleftharpoons\bar\nu_\mu$ and $\bar\nu_s\rightleftharpoons\bar\nu_\tau$, respectively, can be matter-enhanced and resonant near the neutrino sphere. These processes will be sub-dominant compared to $\nu_s\rightleftharpoons\nu_e$ because the energy and fluxes for the $\bar\nu_s$'s will be lower than those for $\nu_s$'s as argued above. 
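The sign argument can be made concrete by evaluating the bracket in Eq.~(\ref{mupot}) for the representative lepton fractions quoted above. In the sketch below (ours), the overall positive factor $(\sqrt{2}/2)\,G_{\rm F}\,n_{\rm b}$ is dropped since only the sign matters.

```python
# Sign check of the nu_mu- and nu_tau-channel forward-scattering potentials,
# Eq. (mupot) and its Y_numu <-> Y_nutau swapped analog, with the overall
# positive prefactor (sqrt(2)/2) G_F n_b omitted.
def v_mu_sign_factor(Ye, Ynue, Ynumu, Ynutau):
    return Ye - 1.0 + 2.0 * Ynue + 4.0 * Ynumu + 2.0 * Ynutau

def v_tau_sign_factor(Ye, Ynue, Ynumu, Ynutau):
    # same expression with the Y_numu and Y_nutau coefficients swapped
    return Ye - 1.0 + 2.0 * Ynue + 2.0 * Ynumu + 4.0 * Ynutau

f = v_mu_sign_factor(0.35, 0.05, 0.0, 0.0)
print(f)   # negative -> only the antineutrino channels can be resonant
```

With $Y_e \approx 0.35$, $Y_{\nu_e}\sim 0.05$, and $Y_{\nu_\mu}=Y_{\nu_\tau}=0$, both bracket factors evaluate to $-0.55$, so $V_{\nu_\mu}$ and $V_{\nu_\tau}$ are indeed negative as claimed.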
The dominant $\nu_s\rightarrow\nu_e$ conversion process will lead to the region near the neutrino sphere being \lq\lq charged up\rq\rq\ with positive electron lepton number. Given the energy emissivities discussed in the last section, for example, we might expect an additional electron lepton number per baryon $\Delta Y_{\nu_e} \sim {10}^{52}\,{\rm erg}\,{\rm s}^{-1}/\left( 100\,{\rm MeV}\cdot {10}^{57}\ {\rm baryons}\right) \approx 0.1$ to be deposited over a time $\sim 1\,{\rm s}$ after core bounce. (Likewise, there will be a corresponding, though far smaller, increase in negative mu and/or tau lepton number stemming from $\bar\nu_s\rightarrow\bar\nu_{\mu,\tau}$.) The $\nu_e$'s deposited by $\nu_s\rightarrow\nu_e$ could represent a significant increase in electron lepton number. In turn, this will tend to decrease the neutron excess. This is because the additional $\nu_e$'s will tend to shift the equilibrium relation, $\nu_e+n\rightleftharpoons p+e^-$, to the right, producing higher electron fraction $Y_e$ and more protons. In addition, the neutron-to-proton ratio, $n/p= Y_e^{-1}-1$, in the material near the neutrino sphere will be transmitted by the $\nu_e$ and $\bar\nu_e$ fluxes emergent from the neutrino sphere to the material in the region between the neutron star and the shock \cite{Qian}. In the early shock re-heating regime, this increase in electron fraction $Y_e$ in the material ejected by neutrino heating could be beneficial for nucleosynthesis. In the calculations of nucleosynthesis in early, shock re-heating epoch neutrino-heated ejecta performed by Woosley {\it et al.} using the Mayle and Wilson supernova simulation results \cite{1994ApJ...433..229W}, it was found that there was an overproduction of neutron number $N=50$ nuclei. Subsequently it was pointed out in Ref.~\cite{1996ApJ...460..478H} that a modest increase in $Y_e$ could cure this problem.
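The deposition estimate is straightforward unit arithmetic; an illustrative check with the fiducial numbers quoted above:

```python
ERG_PER_MEV = 1.602e-6              # 1 MeV in erg

L_sterile = 1e52                    # erg/s, fiducial sterile-channel luminosity
E_avg = 100.0 * ERG_PER_MEV         # erg, ~100 MeV per re-converted neutrino
N_baryon = 1e57                     # baryons, fiducial
t_dep = 1.0                         # s, deposition time after bounce

# Electron lepton number deposited per baryon
dY_nue = L_sterile * t_dep / (E_avg * N_baryon)

# Of order 0.1, consistent with the estimate in the text
assert 0.01 < dY_nue < 0.2
```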
The $\nu_e\rightarrow\nu_s\rightarrow\nu_e$ lepton number transfer process at least sends $Y_e$ in the right direction at the right epoch to help. Effects of active-sterile-active neutrino flavor transformation at later times, in the post-shock revival hot bubble, may be very interesting, but are beyond the scope of the current work. We note that re-conversion of sterile neutrinos has been considered previously in models for r-process nucleosynthesis \cite{1999PhRvC..59.2873M,Fetter:2002xx}. These calculations, however, concentrated on the late-time regime above the core and considered a much different sterile neutrino mass and vacuum mixing range from the one considered here. Additionally, active-active neutrino flavor transformation in the supernova environment is a very difficult problem \cite{Fuller87,1988NuPhB.307..924N,1992PhRvD..46..510P,1993NuPhB.406..423S,FM,Qian,PastorRaffelt,2005NJPh....7...51B,2006PhRvD..73b3004F,2006PhRvD..74j5010H,2006PhRvD..74l3004D,2006PhRvD..74j5014D,2006PhRvL..97x1101D,duan:125005}. A complete assessment of nucleosynthesis effects would necessitate treating all active-active and active-sterile neutrino flavor conversion processes. \section{Conclusions} Generally, it has been assumed that the emission of sterile neutrinos from the supernova core will tend to decrease the prospects for obtaining a successful core collapse supernova explosion. This may be true if a large enough amount of energy is lost from the core. This is because, after all, most of the gravitational binding energy released in the collapse of the core and subsequent quasi-static contraction of the hot proto-neutron star is \lq\lq stored\rq\rq\ in trapped seas of active neutrinos of all species. Moreover, it is this neutrino energy which, ultimately, will be invoked one way or another to revive the nuclear photo-dissociation-degraded bounce shock.
However, in this paper we point out that the notion that sterile neutrino emission is bad for shock revival is predicated on the assumption that there will be no re-conversion of these sterile neutrinos to active neutrino species. Indeed, our calculations suggest that such a re-conversion process could take place under some circumstances and that this re-conversion could effect an enhancement in energy and electron lepton number transport from deep in the core to the regions just below the neutrino sphere. This could {\it increase} the prospects for a viable explosion through: (1) pre-heating of the material ahead of the shock causing a reduction in the nuclear photo-disintegration burden on the shock; and (2) enhancement of the $\nu_e$ and $\bar\nu_e$ heating rate of the material under the bounce shock. We have found that the sterile neutrino mass and mixing parameters for which these enhancement processes can take place conform to our earlier estimates \cite{Hidaka-Fuller}: a sterile neutrino rest mass range $1\, {\rm keV} \lesssim m_s \lesssim 5\,{\rm keV}$; and a $\nu_e\rightleftharpoons\nu_s$ effective $2\times 2$ vacuum mixing angle in a range satisfying ${10}^{-10} \lesssim \sin^22\theta \lesssim {10}^{-8}$. Most significantly, we find that the neutrino mass and mixing parameter ranges which give supernova explosion enhancement include those ranges which allow for viable sterile neutrino dark matter. What was missing in our earlier work \cite{Hidaka-Fuller} was an assessment of the effects of the shock itself on the neutrino forward scattering potential which governs active-sterile neutrino flavor transformation. In this paper we have done this assessment. However, there are many uncertainties and our one-zone calculations can be regarded only as rough outlines for how active-sterile-active neutrino flavor conversion processes affect supernova core and shock physics. How can our calculations be improved on?
First, in the context of a realistic proto-neutron star model, a self-consistent hydrodynamic treatment of shock propagation coupled with active-sterile and sterile-active neutrino flavor transformation processes is in order. This could resolve tricky issues associated with the effectiveness of pre-heating in relieving the nuclear photo-dissociation burden on the shock. Second, it would be useful to employ a detailed treatment of neutrino transport, coupled with a realistic model for the structure and equation of state of the region of the proto-neutron star near the neutrino sphere, to assess the way in which energy deposited via $\nu_e\rightarrow\nu_s\rightarrow \nu_e$ is divided up among the various active neutrino species. Also, we need to know how this deposited energy affects $Y_e$ and the emergent luminosities of the active neutrino species at and above the neutrino sphere. There is yet a third source of uncertainty, one which may be an issue for all core collapse supernova models. We have pointed out in this paper that the initial core bounce shock strength is an important quantity for characterizing how the shock modifies the \lq\lq fossil\rq\rq\ neutrino forward scattering potential profile which is left at the end of the core in-fall epoch. The initial shock strength depends on many factors in both the pre-collapse hydrostatic evolution epochs of the progenitor star as well as on in-fall physics issues like nuclear weak interaction rates and the sub-nuclear density equation of state. Ultimately, of course, the core collapse supernova problem is a grossly nonlinear one. We will have to grapple with this nonlinearity, as well as a host of fundamental nuclear physics and multi-dimensional hydrodynamic issues, if we ever hope to realize the awesome power of this \lq\lq laboratory\rq\rq\ for revealing/constraining new physics beyond the Standard Model.
\begin{acknowledgments} This work was supported in part by NSF grant PHY-04-00359 at UCSD and the TSI collaboration's DOE SciDAC grant at UCSD. We thank K. Abazajian, P. Amanik, A. Kusenko, A. Mezzacappa, M. Patel, and J.~R. Wilson for valuable discussions. \end{acknowledgments}
\section{INTRODUCTION}\label{intro} Various quantum spin systems with frustration have been extensively studied, motivated by exotic characters such as quantum spin liquid at zero temperature and quantization of magnetization with spontaneously broken translational symmetry. Actually, gapless quantum spin liquid states and gapped quantized states of magnetization have been reported not only theoretically but also experimentally~\cite{balents10,oshikawa97,totsuka98,kageyama99}. These states are often induced by frustration, and are switchable by an applied magnetic field. For example, the zigzag spin chain, where geometrical frustration originates from antiferromagnetic first- and second-neighbor interactions, is known as a typical quantum spin system exhibiting a gapped-to-gapless transition induced by a magnetic field at zero temperature~\cite{okunishi03}. As compared with the ground-state properties, dynamical behaviors in magnetic fields have hardly been clarified so far. In particular, dynamical properties in the quantized state of magnetization, the so-called magnetization plateau (MP) state, mostly remain unclear, despite the possible emergence of novel elementary excitations due to spontaneous symmetry breaking. In fact, recent studies on a weakly-coupled spin-ladder compound have reported a Higgs mode due to spontaneously broken symmetries~\cite{hong17,hong17-2,ying19}. Furthermore, these dynamical behaviors are crucial for understanding spin/heat transport, which is applicable to spintronics devices~\cite{uchida08,hirobe16}. In this paper, we focus on magnetic excitations in a frustrated spin ladder (FSL), where antiferromagnetic interactions are assigned to the first- and second-neighbor bonds in a leg and the first-neighbor bond in a rung. This model exhibits three MPs at normalized finite magnetization $m\equiv M/M_\mrm{sat}= 1/3$, $1/2$, and $2/3$ with saturation magnetization $M_\mrm{sat}$~\cite{sugimoto15,sugimoto18}.
Interestingly, all of these MPs are induced by spontaneous breaking of translational symmetry, so that the MPs exhibit extended magnetic unit cells that are different from the original unit cell of this model. In addition, this model is regarded as an effective spin model to reproduce magnetic behaviors in the real materials \ce{BiCu2PO6}~\cite{abraham94,koteswararao07,mentre09,tsirlin10} and \ce{Li2Cu2O(SO4)2}~\cite{rousse17,sun15,vaccarelli17,vaccarelli19}. Actually, in \ce{BiCu2PO6}, external field dependences~\cite{kohama12,casola13,kohama14,colmont18}, dynamical properties~\cite{plumb13,plumb16}, and thermal conductivity~\cite{nagasawa14,jeon16,kawamata18} observed experimentally have been theoretically explained in the FSL model~\cite{lavarelo11,sugimoto13,shyiko13}, though additional terms such as the Dzyaloshinskii-Moriya interaction are required to obtain a quantitative coincidence~\cite{splinter16,hwang16}. Therefore, the FSL model deserves to be investigated in terms of the relation between low-energy excitations and spontaneously broken symmetries of MP phases. The preceding studies on the MP states~\cite{sugimoto15,sugimoto18} have presented the equivalence of two different models, the FSL in the strong rung limit and an anisotropic frustrated spin chain (AFSC). According to these studies, the $m=1/3$, $1/2$, and $2/3$ MP states in the FSL correspond to the $m^\prime=-1/3$, $0$, and $1/3$ MP states in the AFSC, respectively. Since the $m^\prime=-1/3$ and $m^\prime=1/3$ MP states in the AFSC are connected with each other via a spin-flip pair, the corresponding $m=1/3$ and $m=2/3$ MP states in the FSL should have a common origin. Therefore, the dynamics of the $m=1/3$ MP state is expected to be equivalent to that of the $m=2/3$ MP state, while the $m=1/2$ MP state can show qualitatively different dynamics.
We perform numerical calculations of the dynamical spin structure factor (DSSF) by using the dynamical density-matrix renormalization group (DDMRG) method~\cite{white92,jeckelmann02,schollwock05,sota10} to clarify the difference in dynamics among the $m=1/3$, $1/2$, and $2/3$ MP states for the FSL in the strong-rung limit. The dynamical behaviors of the $m^\prime=0$ and $1/3$ MP states in the AFSC are also examined for comparison with the dynamics of the FSL. The AFSC is useful for an intuitive understanding of dynamical properties because of its simplicity. Moreover, a perturbative clusterization approach imposing spontaneous breaking of translational symmetry is used to obtain intuitive physical pictures of spin dynamics. The contents of this paper are as follows. In Sec.~II, we introduce the model Hamiltonians of the FSL and the AFSC. The equivalence of the two different models is briefly reviewed with a projection operator onto low-lying states in the strong rung limit of the FSL. We also introduce the DSSFs and model parameters for calculation. In Sec.~III, the DSSFs obtained by the DDMRG are shown for three MP states in the FSL and two MP states in the AFSC. Section IV gives a qualitative explanation of characteristic structures in the DSSFs and an intuitive physical picture of spin dynamics. To this end, we introduce a perturbative clusterization approach imposing spontaneous breaking of translational symmetry. Finally, we summarize our results in Sec.~V. \section{MODEL AND METHOD}\label{modelmethod} In this section, we introduce two model Hamiltonians: the FSL and its corresponding model in the strong rung limit, the AFSC. Additionally, the DSSFs that we calculate to investigate dynamical properties are defined.
\subsection{Frustrated spin ladder (FSL)} The Hamiltonian of the FSL is defined as \begin{align} \mcl{H}=\mcl{H}_{\perp}+\mcl{H}_{\parallel}+\mcl{H}_Z, \label{H} \end{align} with \begin{align} \mcl{H}_{\perp}&=J_{\perp}\sum_{i=1}^{N}{\bm S}_{i,1}\cdot{\bm S}_{i,2} \\ \mcl{H}_{\parallel}&=\sum_{\eta=1,2}J_{\eta}\sum_i\sum_{j=1,2}{\bm S}_{i,j}\cdot{\bm S}_{i+\eta,j} \label{Hp}\\ \mcl{H}_{Z}&=-H\sum_i\sum_{j=1,2}S^z_{i,j} \end{align} where ${\bm S}_{i,1}~({\bm S}_{i,2})$ is the $S=1/2$ spin operator on the $i$th rung in the upper (lower) chain. Exchange energies of the first-neighbor bond in a leg, the second-neighbor bond in a leg, and the first-neighbor bond in a rung are denoted by $J_1$, $J_2$, and $J_\perp$, respectively. The magnitude of the magnetic field is represented by $H$. In this paper, we focus on the strong rung region of the FSL, because the three MPs at $m=1/3$, $1/2$, and $2/3$ become robust in this limit. Moreover, this limit enables us to map the FSL to the AFSC, which is used to obtain an intuitive picture of dynamical behaviors. \subsection{The effective model of an FSL: AFSC} The Hamiltonian of the AFSC is given by the bond-operator (quasi-spin) transformation~\cite{sugimoto15,sugimoto18,sachdev90,giamarchi99}. To obtain the AFSC Hamiltonian with quasi-spin operators, we use the basis of singlet and triplet states on the $i$th rung: \begin{align} \ket{s}_i =& \frac{1}{\sqrt{2}}\br{\ket{\uparrow}_{i,1}\ket{\downarrow}_{i,2} - \ket{\downarrow}_{i,1}\ket{\uparrow}_{i,2}} \\ \ket{t^+}_i=&\ket{\uparrow}_{i,1}\ket{\uparrow}_{i,2} \\ \ket{t^0}_i=& \frac{1}{\sqrt{2}}\br{\ket{\uparrow}_{i,1}\ket{\downarrow}_{i,2} + \ket{\downarrow}_{i,1}\ket{\uparrow}_{i,2}} \\ \ket{t^-}_i=&\ket{\downarrow}_{i,1}\ket{\downarrow}_{i,2}. \end{align} For simplicity, we call the state $\ket{t^\alpha}_i$ ($\alpha = +,0,-$) the ``$\alpha$ triplet'' in the following. The Hamiltonian $\mcl{H}$ (\ref{H}) is rewritten in terms of these basis states.
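As a quick consistency check on this basis, the rung term $J_\perp{\bm S}_{i,1}\cdot{\bm S}_{i,2}$ is diagonal in the singlet-triplet basis, with the singlet at $-3J_\perp/4$ and the three triplets at $+J_\perp/4$. A minimal numerical sketch (illustrative, $J_\perp=1$):

```python
import numpy as np

# Spin-1/2 operators for the two spins on a single rung
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
sm = sp.T                                  # S^-
kron = np.kron

# Rung bond J_perp S_{i,1} . S_{i,2} with J_perp = 1
Hrung = 0.5 * (kron(sp, sm) + kron(sm, sp)) + kron(sz, sz)

# Singlet and triplet states in the |s1 s2> product basis
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
s = (kron(up, dn) - kron(dn, up)) / np.sqrt(2)    # |s>
t0 = (kron(up, dn) + kron(dn, up)) / np.sqrt(2)   # |t^0>
tp, tm = kron(up, up), kron(dn, dn)               # |t^+>, |t^->

# Diagonal in this basis: singlet at -3/4, triplets at +1/4
assert np.isclose(s @ Hrung @ s, -0.75)
for t in (tp, t0, tm):
    assert np.isclose(t @ Hrung @ t, 0.25)
assert np.isclose(s @ Hrung @ t0, 0.0)    # no singlet-triplet mixing
```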
In the strong rung limit, leg interactions in $\mcl{H}_\parallel$ (\ref{Hp}) are regarded as perturbative terms. For $\mcl{H}_\parallel=0$, the magnetization $M$ jumps from zero to the saturation magnetization $M_\mrm{sat}$ at the critical magnetic field $H_c=J_\perp$. Finite contributions from $\mcl{H}_\parallel$ change the magnetization jump into a continuous curve including plateaux around the critical field. The range of field, $\Delta H$, for partially-magnetized states is approximately $\Delta H\sim J_\parallel$. In this field region, the $0$ and $-$ triplets are much higher in energy than the $+$ triplet. Therefore, we can ignore the $0$ and $-$ triplets, and thus obtain a low-energy effective Hamiltonian. To discard the high-energy triplets, we introduce the projection operator $\mcl{P}=\prod_i\br{\ket{s}_i\bra{s}_i+\ket{t^+}_i\bra{t^+}_i}$. The effective Hamiltonian is given by \begin{align} \mcl{H}^\prime =& \mathcal{P}\mcl{H}\mathcal{P} \notag \\ =& \sum_{\eta=1,2}\sum_{i} \frac{J_\eta}{2}(T_{i}^{+}T_{i+\eta}^{-}+T_{i}^{-}T_{i+\eta}^{+}+T_{i}^{z}T_{i+\eta}^{z})\notag \\ &-H^\prime\sum_i T_i^z + \mathrm{const.} \label{hamp}, \end{align} where the spin-$1/2$ quasi-spin operators ${\bm T}_i$ at site $i$ are given by $T^+_i=\ket{t^+}_i\bra{s}_i$, $T^-_i=\ket{s}_i\bra{t^+}_i$, and $T^z_i=\ket{t^+}_i\bra{t^+}_i-1/2$. The effective magnetic field $H^\prime$ is defined by $H^\prime=H-J_\perp-(J_1+J_2)/2$. The effective Hamiltonian \eqref{hamp} describes the AFSC. Since the $z$ component of the quasi spin counts the occupation of the $+$ triplet, the normalized magnetizations obey $m^\prime\equiv M^\prime/M^\prime_\mrm{sat}=2m-1$, $M^\prime$ ($M^\prime_\mrm{sat}$) being the magnetization (saturation magnetization) of the quasi-spin system. For example, $m=1/3$, $1/2$, and $2/3$ in the FSL correspond to $m^\prime=-1/3$, $0$, and $1/3$ in the AFSC, respectively.
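Note that the bond term in Eq.~\eqref{hamp} carries transverse coupling $J_\eta$ but Ising coupling $J_\eta/2$, which is the anisotropy that makes the effective chain an AFSC. This identity can be verified with spin-$1/2$ matrices (an illustrative sketch):

```python
import numpy as np

# Spin-1/2 quasi-spin matrices
Tz = np.diag([0.5, -0.5])
Tp = np.array([[0.0, 1.0], [0.0, 0.0]])
Tm = Tp.T
Tx, Ty = 0.5 * (Tp + Tm), -0.5j * (Tp - Tm)
kron = np.kron

J = 1.0  # any J_eta

# Bond term as written in Eq. (hamp): (J/2)(T+T- + T-T+ + TzTz)
bond = 0.5 * J * (kron(Tp, Tm) + kron(Tm, Tp) + kron(Tz, Tz))

# Equivalent XXZ form: transverse coupling J, Ising coupling J/2
xxz = J * (kron(Tx, Tx) + kron(Ty, Ty)) + 0.5 * J * kron(Tz, Tz)

assert np.allclose(bond, xxz)   # Delta = 1/2 easy-plane anisotropy
```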
\subsection{Dynamical spin structure factor (DSSF)} To investigate magnetic excitations of the FSL, we calculate the DSSF defined by \begin{align} S^\pm({\bm q},\omega) = -\frac{1}{\pi}\Im\bra{\psi_0}S_{{\bm q}}^\mp\frac{1}{\omega-\mcl{H}+E_0+\mrm{i}\gamma}S_{{\bm q}}^\pm\ket{\psi_0}, \label{DSSF} \end{align} where $\ket{\psi_0}$ is the ground state, $E_0$ is the ground-state energy, and $\gamma$ is an infinitesimal value. The Fourier component $S^\pm_{\bm q}$ under the open boundary condition is given by \begin{align} S^\pm_{\bm q} = \sqrt{\frac{2}{N+1}} \sum_i \sin(q_x i)S_{i,q_y}^\pm \label{Sq} \end{align} with \begin{align} S^\pm_{i,q_y=0} = \frac{1}{\sqrt{2}}\br{S_{i,1}^\pm+S_{i,2}^\pm}, \hspace{1em} S^\pm_{i,q_y=\pi} = \frac{1}{\sqrt{2}}\br{S_{i,1}^\pm-S_{i,2}^\pm}, \label{Si} \end{align} where $S_{i,j}^\pm=S_{i,j}^x\pm i S_{i,j}^y$ and the wave number of the leg (rung) direction is given by $q_x=\frac{\pi}{N+1}n$ ($q_y=0,\pi$) with $n=1,2,\cdots,N$, $N$ being the total number of rungs along the leg direction. Similarly, the DSSF for the AFSC, denoted by $T^\pm(q,\omega)$, is given by substituting the quasi-spin operator $T_{q}^\pm$ and $\mcl{H}^\prime$ for $S_{{\bm q}}^\pm$ and $\mcl{H}$, respectively, in Eq.~\eqref{DSSF}. Here, $T_{q}^\pm=\sqrt{\frac{2}{N+1}} \sum_i \sin(q i)T_i^\pm$ and $q=\frac{\pi}{N+1}n$ with $n=1,2,\cdots,N$, $N$ being the total number of sites in the chain. To obtain the DSSF numerically, we use the DDMRG method~\cite{jeckelmann02,schollwock05}. This method requires three target states, $\ket{\psi_0}$, $S^{\pm}_{\bf q}\ket{\psi_0}$, and $[\omega-\mcl{H}+E_0+\mrm{i}\gamma]^{-1}S^{\pm}_{\bf q}\ket{\psi_0}$. The correction vector $[\omega-\mcl{H}+E_0+\mrm{i}\gamma]^{-1}S^{\pm}_{\bf q}\ket{\psi_0}$ is obtained by the kernel-polynomial expansion method~\cite{sota10}. In this method, a Gaussian broadening with a width $\sigma$ is introduced instead of the Lorentzian broadening in Eq.~(\ref{DSSF}).
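For very small systems, the DSSF of Eq.~\eqref{DSSF} can alternatively be evaluated from the full spectrum via its Lehmann representation. The sketch below is illustrative only; an $N=8$ isotropic Heisenberg chain stands in for the FSL/AFSC Hamiltonians. It uses the open-boundary sine transform of Eq.~\eqref{Sq} and checks the sum rule $\int S^+(q,\omega)\,d\omega = \bra{\psi_0}S^-_q S^+_q\ket{\psi_0}$:

```python
import numpy as np

N = 8  # sites; small enough for full diagonalization
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
sm = sp.T

def op(o, k):
    """Embed single-site operator o at site k of the N-site chain."""
    mats = [np.eye(2)] * N
    mats[k] = o
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Open-boundary isotropic Heisenberg chain (a stand-in Hamiltonian)
H = sum(0.5 * (op(sp, i) @ op(sm, i + 1) + op(sm, i) @ op(sp, i + 1))
        + op(sz, i) @ op(sz, i + 1) for i in range(N - 1))

E, V = np.linalg.eigh(H)
psi0, E0 = V[:, 0], E[0]

# Open-boundary sine transform of S^+, Eq. (Sq), with q = pi n / (N+1)
q = np.pi * 3 / (N + 1)
Sq = np.sqrt(2.0 / (N + 1)) * sum(np.sin(q * (i + 1)) * op(sp, i)
                                  for i in range(N))

# Lehmann representation of Eq. (DSSF) with Lorentzian broadening gamma
w = np.linspace(0.0, 4.0, 400)
gamma = 0.05
amps = np.abs(V.T @ (Sq @ psi0))**2           # |<m|S_q^+|0>|^2
Spw = sum(a * (gamma / np.pi) / ((w - (Em - E0))**2 + gamma**2)
          for a, Em in zip(amps, E))

# Sum rule: total spectral weight equals <0|S_q^- S_q^+|0>
assert np.isclose(amps.sum(), psi0 @ Sq.T @ Sq @ psi0)
```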
\section{RESULT}\label{result} $S^\pm_{\bm q}$ in Eq.~\eqref{Sq} has two modes, $q_y=0$ and $q_y=\pi$, with respect to rung parity. These two modes in Eq.~\eqref{Si} are rewritten by using the singlet and triplet bases: \begin{align} S^{\pm}_{i,q_y=0} &= \ket{t^\pm}_i\bra{t^0}_i + \ket{t^0}_i\bra{t^\mp}_i, \\ S^{\pm}_{i,q_y=\pi} &= -\br{\ket{t^\pm}_i\bra{s}_i + \ket{s}_i\bra{t^\mp}_i}. \end{align} Since the $0$ triplet $\ket{t^0}_i$ in plateau regions is much higher in energy than the singlet $\ket{s}_i$ in the strong rung limit, the $q_y=\pi$ mode is enough to describe elementary excitations in the low-energy region. We thus discuss only the $q_y=\pi$ mode in this paper, and abbreviate $S^\pm(q_x,q_y=\pi,\omega)$ for the FSL to $S^\pm(q_x,\omega)$ hereafter. In our calculations, we use the following parameters: $J_1/J_\perp=0.2$ and $J_2/J_1=0.65$ in the 48-rung FSL and $\sigma=0.02J_\perp$. These parameters are chosen for the following reasons. Firstly, in the real material \ce{BiCu2PO6}, preceding studies have concluded that two spins on each rung form a singlet at low temperatures, i.e., the ground state without magnetic fields is the so-called rung-singlet state~\cite{plumb13,plumb16,lavarelo11,sugimoto13}. The strong rung condition $J_1/J_\perp=0.2$ also belongs to the rung-singlet phase, and we can apply perturbation analysis based on the strong-rung limit to clarify an intuitive physical picture of elementary excitations. Additionally, to stabilize the $m=1/3$, $1/2$, and $2/3$ MPs, we introduce the frustration $J_2/J_1=0.65$. In fact, we have confirmed that the MPs emerge due to spontaneous symmetry breaking [see \figref{curve}(a) in Appendix A]. The system size, $N=48$ rungs, is sufficient to discuss dynamical behaviors at least qualitatively. With these parameters, the truncation error is less than $10^{-4}$ with 600 states kept in the DDMRG calculations.
\subsection{$S^+$ excitation in the $m=\dfrac{1}{2}$ MP phase} \begin{figure*}[htbp] \begin{center} \includegraphics[height=55mm]{./comp24.eps} \end{center} \caption{ (a) $S^+(q_x,\omega)$ in the $m=1/2$ MP phase of FSL. (b) $T^+(q_x,\omega)$ in the $m^\prime=0$ MP phase of AFSC. A broad excitation with minimum-energy excitations around $q_x=2\pi/3$ and an intense peak at $q_x=\pi/2$ are common to (a) and (b). } \label{24} \end{figure*} Figure~\ref{24}(a) shows $S^+(q_x,\omega)$ in the $m=1/2$ MP phase [for the MP, see \figref{curve}(a) in Appendix A]. A broad but intense peak centered at $\omega/J_\perp=0.08$ is seen at $q_x=\pi/2$. Its peak width is wider than the Gaussian width $\sigma$, indicating intrinsic broadening of the peak. We consider that this peak originates from a dimerized ground state in the $m=1/2$ MP phase, which breaks the translational symmetry spontaneously~\cite{sugimoto15} and causes a doubled period of the lattice. A broad but dispersive structure extends above $q_x=\pi/2$ with minimum-energy excitations around $q_x=2\pi/3$. This structure indicates multi-spinon excitations, and thus we consider it a manifestation of fractionalized excitations with strong quantum fluctuations. In fact, the ground state of the $m=1/2$ MP in the FSL corresponds to the dimer state of the $m^\prime=0$ MP in the AFSC, and thus we can interpret the broad excitation as multi-spinon excitations in the AFSC~\cite{lavarelo14}. To confirm this interpretation, we calculate the DSSF of the AFSC, $T^+(q,\omega)$, in the $m^\prime=0$ MP phase [see \figref{24}(b)]. As compared with \figref{24}(a), we find similar structures in \figref{24}(b): a broad excitation with minimum-energy excitations around $q_x=2\pi/3$ and an intense peak at $q_x=\pi/2$. Therefore, we conclude that the low-energy excitations are due to multi-quasi-spinon excitations, including the intense peak caused by the doubled lattice period.
\begin{figure*}[t] \begin{center} \includegraphics[height=55mm]{./comp32.eps} \end{center} \caption{ (a) $S^-(q_x,\omega)$ in the $m=1/3$ MP phase of FSL. (b) $S^+(q_x,\omega)$ in the $m=2/3$ MP phase of FSL. (c) $T^+(q_x,\omega)$ in the $m^\prime=1/3$ MP phase of AFSC. The white line shows the dispersion relation obtained by an analytical calculation assuming clusterization (see \secref{discussion}). } \label{32} \end{figure*} \subsection{$S^-\, (S^+)$ excitation in the $m=\dfrac{1}{3}\, \br{\dfrac{2}{3}}$ MP phase}\label{dssf2} The $S^-(q_x,\omega)$ in the $m=1/3$ MP phase and $S^+(q_x,\omega)$ in the $m=2/3$ MP phase are shown in Figs.~\ref{32}(a) and \ref{32}(b), respectively. These two spectra show similar behavior: a dispersive feature with a zero-energy excitation at $q_x=2\pi/3$, indicating a period of three times the original unit-cell length in real space. This similarity is actually expected from the fact that both the $m=1/3$ and $m=2/3$ MPs can be associated with the array of quasi-spinons and share a common origin~\cite{sugimoto15}. Since the $m=2/3$ MP corresponds to the $m^\prime=1/3$ MP, the DSSF of the AFSC, $T^+(q,\omega)$, in the $m^\prime=1/3$ MP phase also shows a similar dispersive feature with a zero-energy excitation at $q_x=2\pi/3$, as shown in \figref{32}(c). Based on the similarity, we may construct an intuitive view of spin dynamics via full examination of $T^+(q,\omega)$ in the $m^\prime=1/3$ MP phase. We will discuss this view in \secref{discussion} using a clusterization approach. \subsection{$S^+\, (S^-)$ excitation in the $m=\dfrac{1}{3}\, \br{\dfrac{2}{3}}$ MP phase} Figures~\ref{16}(a) and \ref{16}(b) show $S^+(q_x,\omega)$ in the $m=1/3$ MP phase and $S^-(q_x,\omega)$ in the $m=2/3$ MP phase, respectively.
These two figures share common features: two dispersive low-energy ($\omega/J_\perp<0.3$) excitations with the minimum-energy excitation at $q_x=2\pi/3$ and high-energy broad excitations at $\omega/J_\perp>0.3$. We find that $T^-(q,\omega)$ in the $m^\prime=1/3$ MP phase shown in \figref{16}(c) exhibits spectral distributions similar to Figs.~\ref{16}(a) and \ref{16}(b). To understand the origin of the spectra, we will introduce a clusterization approach for $T^-(q,\omega)$ in \secref{discussion}. \begin{figure*}[t] \begin{center} \includegraphics[height=58mm]{./comp16.eps} \end{center} \caption{ (a) $S^+(q_x,\omega)$ in the $m=1/3$ MP phase of FSL. (b) $S^-(q_x,\omega)$ in the $m=2/3$ MP phase of FSL. (c) $T^-(q_x,\omega)$ in the $m^\prime=1/3$ MP phase of AFSC. Every figure is split into upper and lower panels to make the distribution of spectrum visible. In (c), the pink line shows the dispersion of the $\beta^-$ excitation obtained by an analytical calculation assuming clusterization (see \secref{discussion}). The blue and yellow lines show the dispersion relations of $\gamma^-$ and $\delta^-$, respectively. } \label{16} \end{figure*} \section{DISCUSSION} \label{discussion} Our purpose in this section is to give an intuitive physical view of elementary excitations in the $m=1/3$ and $2/3$ MP phases using an analytical approach. The following discussion is based on spontaneous translational symmetry breaking in the MP phases, where the magnetic unit cell is larger than the original unit cell. In such a case, quantum entanglement between the magnetic unit cells is expected to be suppressed because of the enlargement of the unit cell. Therefore, effective interactions between the enlarged unit cells can be approximated by a semi-classical one [see Appendix B]. If the interactions are totally classical, the ground state is given by the direct product of local quantum states, which are obtained by exact diagonalization of the local Hamiltonian in the enlarged unit cell.
Such a local quantum state contributing to the ground state should be one of the low-lying states of the local Hamiltonian. Otherwise, the inter-cell interactions will become larger than the intra-cell interactions, contradicting the localized nature of spins in the magnetic unit cell. Even in the case of semi-classical interactions, where the ground state becomes a superposition of the direct product of local states, the local states contributing to the ground state should be low-lying states. Based on this reasoning, we restrict the Hilbert space of the enlarged unit cell to several low-lying states obtained by exact diagonalization of the local Hamiltonian. Moreover, the interactions between the enlarged unit cells are projected onto the restricted Hilbert space. We call this approach a clusterization based on spontaneously broken symmetry (CBSBS). In the following, we apply the CBSBS to the $m^\prime=1/3$ MP phase in the AFSC, because magnetic excitations in the $m=1/3$ and $2/3$ MP phases of the FSL are qualitatively similar to the $T^\pm$ excitations in the $m^\prime=1/3$ MP phase of the AFSC (see~\secref{result}). The magnetic unit cell is three times as long as the original one due to spontaneous translational symmetry breaking. Therefore, we use the following Hamiltonian instead of Eq.~\eqref{hamp}: \begin{equation} \mcl{H}^\prime = \mcl{H}_c^\prime + \lambda\mcl{V}_c^\prime \label{Heff} \end{equation} with \begin{align} \mcl{H}_c^\prime&=\frac{J_1}{2}\sum_{i=1,2 (\mathrm{mod}\,3)} D_1(i)+\frac{J_2}{2}\sum_{i=1 (\mathrm{mod}\,3)} D_2(i) -H^\prime\sum_j T_j^z, \label{intra} \\ \mcl{V}_c^\prime&=\frac{J_1}{2}\sum_{i=0 (\mathrm{mod}\,3)} D_1(i)+\frac{J_2}{2}\sum_{i=0,2 (\mathrm{mod}\,3)} D_2(i), \label{inter} \end{align} where the $\eta$th-neighbor two-spin (dimer) interaction is represented by $D_\eta(i)=T_i^+T_{i+\eta}^- + T_i^-T_{i+\eta}^+ + T^z_iT_{i+\eta}^z$.
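The mod-3 bookkeeping in Eqs.~\eqref{intra} and \eqref{inter} simply sorts each bond by whether its two endpoints belong to the same three-site cluster $\{3l+1,3l+2,3l+3\}$; a minimal check of this partition (illustrative, sites labeled from 1):

```python
# Sites are labeled i = 1, 2, ..., L; cluster l contains {3l+1, 3l+2, 3l+3}
L = 12
cluster = lambda i: (i - 1) // 3

for eta in (1, 2):                      # D_1 and D_2 bonds (i, i+eta)
    for i in range(1, L - eta + 1):
        intra = cluster(i) == cluster(i + eta)
        if eta == 1:
            assert intra == (i % 3 in (1, 2))   # J1: intra for i = 1, 2 (mod 3)
        else:
            assert intra == (i % 3 == 1)        # J2: intra for i = 1 (mod 3)
```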
$\mcl{H}_c^\prime$ ($\mcl{V}_c^\prime$) corresponds to the intra- (inter-) cluster Hamiltonian denoted by the red (blue) lines in \figref{3spin}(a). We introduce the coupling strength $\lambda$ to control the inter-cluster interactions. We note that Eq.~\eqref{Heff} with $\lambda=1$ is equivalent to Eq.~\eqref{hamp}. \begin{figure}[t] \begin{center} \includegraphics[width=70mm]{./3spin4.eps} \end{center} \caption{ (a) Clusterization based on spontaneously broken symmetry (CBSBS) of AFSC. The numbers denote the quasi-spin site. The solid (dotted) lines correspond to $J_1$ $(J_2)$ interactions. Red and blue colors of the lines are used to distinguish the intra-cluster Hamiltonian $\mcl{H}_c^\prime$ and the inter-cluster Hamiltonian $\mcl{V}_c^\prime$, respectively. (b) Energy levels of local states $\ket{\chi^\pm}$ in effective magnetic fields $H^\prime$ in the case of $J_2/J_1=0.65$. The lowest-energy state encounters level crossings at $H^\prime=0$ and $H_c^\prime$, denoted by the green circles. } \label{3spin} \end{figure} \subsection{Eigenstates of magnetic unit cell} Since the Hamiltonian \eqref{intra} does not include the interaction between the clusters, we can diagonalize it in each cluster. The resulting eigenstates $\ket{\chi^\pm}$ ($\chi=\alpha,\beta,\gamma$, and $\delta$) are shown in Table~\ref{eigen3} with \begin{equation} C_\pm=\sqrt{\frac{1}{2}\left(1\pm\frac{J_1+J_2}{\sqrt{33J_1^2+2J_1J_2+J_2^2}}\right)}.
\end{equation} \begin{table}[htbp] \centering\caption{Eigenstates $\ket{\chi^\pm}$ ($\chi=\alpha,\beta,\gamma$, and $\delta$) of $\mcl{H}_c^\prime$}\label{eigen3} \begin{tabular}{l|cc} \hline & Configuration \\ \hline $\ket{\alpha^+}$ & $\ket{\uparrow\uar\uparrow}$ \\ $\ket{\beta^+}$ & $\frac{1}{\sqrt{2}}C_- \ket{\downarrow\uparrow\uar} -C_+ \ket{\uparrow\downarrow\uparrow} + \frac{1}{\sqrt{2}}C_-\ket{\uparrow\uar\downarrow}$ \\ $\ket{\gamma^+}$ & $-\frac{1}{\sqrt{2}}(\ket{\downarrow\uparrow\uar} - \ket{\uparrow\uar\downarrow})$ \\ $\ket{\delta^+}$ & $\frac{1}{\sqrt{2}}C_+\ket{\downarrow\uparrow\uar} +C_- \ket{\uparrow\downarrow\uparrow} +\frac{1}{\sqrt{2}}C_+ \ket{\uparrow\uar\downarrow}$ \\ $\ket{\beta^-}$ & $\frac{1}{\sqrt{2}}C_-\ket{\uparrow\downarrow\dar} -C_+ \ket{\downarrow\uparrow\downarrow} + \frac{1}{\sqrt{2}}C_-\ket{\downarrow\dar\uparrow}$ \\ $\ket{\gamma^-}$ & $-\frac{1}{\sqrt{2}}(\ket{\uparrow\downarrow\dar} - \ket{\downarrow\dar\uparrow})$ \\ $\ket{\delta^-}$ & $\frac{1}{\sqrt{2}}C_+\ket{\uparrow\downarrow\dar} +C_- \ket{\downarrow\uparrow\downarrow} +\frac{1}{\sqrt{2}}C_+ \ket{\downarrow\dar\uparrow}$ \\ $\ket{\alpha^-}$ & $\ket{\downarrow\dar\downarrow}$ \\ \hline \end{tabular} \end{table} The eigenenergy $\epsilon_{\chi^\pm}$ is given by \begin{align} \epsilon_{\alpha^\pm} &= \frac{1}{8}(2J_1+J_2)\mp \frac{3}{2}H^\prime, \hspace{1em} \epsilon_{\gamma^\pm} = -\frac{5}{8}J_2\mp \frac{H^\prime}{2}, \\ \epsilon_{\beta^\pm} &= \frac{1}{8}\left(-J_1+2J_2-\sqrt{33J_1^2+2J_1J_2+J_2^2}\right)\mp \frac{H^\prime}{2}, \\ \epsilon_{\delta^\pm} &= \frac{1}{8}\left(-J_1+2J_2+\sqrt{33J_1^2+2J_1J_2+J_2^2}\right)\mp \frac{H^\prime}{2}. \end{align} $\ket{\chi^+}$ and $\ket{\chi^-}$ form a Kramers doublet due to time-reversal symmetry when $H^\prime=0$. With an applied magnetic field, each Kramers doublet splits and the degeneracy is lifted [see \figref{3spin}(b)].
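These closed forms can be verified by direct diagonalization of the three-site cluster Hamiltonian \eqref{intra}; an illustrative check at $H^\prime=0$ with $J_2/J_1=0.65$:

```python
import numpy as np

Tz = np.diag([0.5, -0.5])
Tp = np.array([[0.0, 1.0], [0.0, 0.0]])
Tm = Tp.T
I2 = np.eye(2)

def embed3(ops):
    """Kron together three single-site operators."""
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def bond(a, b):
    """D_eta between cluster sites a, b: T+T- + T-T+ + TzTz."""
    pair = {a: (Tp, Tm, Tz), b: (Tm, Tp, Tz)}
    out = np.zeros((8, 8))
    for k in range(3):
        out += embed3([pair[s][k] if s in pair else I2 for s in range(3)])
    return out

J1, J2, Hp = 1.0, 0.65, 0.0
# Intra-cluster Hamiltonian, Eq. (intra): J1 bonds (1,2),(2,3); J2 bond (1,3)
Hcl = (0.5 * J1 * (bond(0, 1) + bond(1, 2)) + 0.5 * J2 * bond(0, 2)
       - Hp * sum(embed3([Tz if s == k else I2 for s in range(3)])
                  for k in range(3)))

# Closed-form eigenenergies; each appears twice (Kramers doublets at H' = 0)
root = np.sqrt(33 * J1**2 + 2 * J1 * J2 + J2**2)
eps = sorted([(2 * J1 + J2) / 8,            # alpha
              -5 * J2 / 8,                  # gamma
              (-J1 + 2 * J2 - root) / 8,    # beta
              (-J1 + 2 * J2 + root) / 8]    # delta
             * 2)

assert np.allclose(np.sort(np.linalg.eigvalsh(Hcl)), eps)
```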
If the interactions between the magnetic unit cells are completely classical and weaker than $\mcl{H}_c^\prime$, the ground state in the classical limit of inter-cluster interactions is given by the direct product of $\ket{\beta^+}$ for $J_2/J_1=0.65$, as expected from \figref{3spin}(b). We note that the product state describes the $m^\prime=1/3$ MP phase. In the following, we use this ground state as an approximated $m^\prime=1/3$ MP state, i.e., $\ket{1/3}\equiv \bigotimes_l \ket{\beta^+}_l$ where $l$ denotes the cluster number. In order to obtain dispersion relations of $T^\pm$ excitations, we use a semi-classical approximation of inter-cluster interactions in subsequent subsections, where low-lying states in the cluster are taken into account in addition to the ground state. \subsection{$T^+$ excitation} \label{SecIVc} To understand the $T^+$ excitation in the $m^\prime=1/3$ MP phase of AFSC, we consider the fully spin-polarized $\ket{\alpha^+}$ state as a low-lying state in the cluster, because $T^+$ increases the magnetization from the ground state $\ket{1/3}$. There are, of course, higher-order processes including other excited states, e.g., a mixed state of two $\ket{\alpha^+}$ and one $\ket{\beta^-}$ in three magnetic unit cells, but such processes can be ignored because only two states, $\ket{\alpha^+}$ and $\ket{\beta^+}$, contribute to low-energy excitations around the critical field $H_c^\prime=\left(3J_1-J_2+\sqrt{33J_1^2+2J_1J_2+J_2^2}\right)/8$ where $\ket{\alpha^+}$ and $\ket{\beta^+}$ are degenerate [see \figref{3spin}(b)]. Therefore, we use the projection onto these two states, $\mcl{Q}_+=\prod_l \br{\ket{\alpha^+}_l\bra{\alpha^+}_l+\ket{\beta^+}_l\bra{\beta^+}_l}$. In the projected Hamiltonian, there is the constraint that $\ket{\alpha^+}_l\bra{\alpha^+}_l+\ket{\beta^+}_l\bra{\beta^+}_l=1$.
The operators $\tau_l^\dag=\ket{\alpha^+}_l\bra{\beta^+}_l$, $\tau_l=\ket{\beta^+}_l\bra{\alpha^+}_l$, and $n_l=\ket{\alpha^+}_l\bra{\alpha^+}_l$ correspond to hard-core bosonic creation, annihilation, and number operators, respectively. Thus, by using these operators with $\lambda=1$, the projected Hamiltonian $\mcl{H}_+^{\prime\prime}=\mcl{Q}_+\mcl{H}^\prime\mcl{Q}_+$, apart from a constant term, is given by \begin{equation} \mcl{H}_+^{\prime\prime} = t\sum_l (\tau^\dagger_l\tau_{l+1}+\mathrm{H.c.})+ V\sum_l n_l n_{l+1} +\mu \sum_l n_l \label{PM} \end{equation} with \begin{align} t&=\frac{1}{4}C_-(J_1C_--2\sqrt{2}J_2C_+),\\ V&=\frac{1}{8}\BR{J_1 C_-^4+J_2\br{C_-^2-C_+^2}^2},\\ \mu&= \epsilon_{\beta^+}-\epsilon_{\alpha^+}-\frac{1}{4}\br{J_1C_-^2+J_2C_+^2} . \end{align} If we define the Fourier transform of the one-particle excited state as \begin{equation} \ket{K} = \sum_l \mrm{e}^{\mrm{i} Kl} \ket{\alpha^+}_l \bigotimes_{l'(\neq l)}\ket{\beta^+}_{l'}, \end{equation} this state is an eigenstate of the projected Hamiltonian $\mcl{H}_+^{\prime\prime}$ with dispersion energy \begin{equation} \epsilon_K^{\prime\prime} = 2t \cos K + \mathrm{const.} \label{dr} \end{equation} This eigenstate represents a one-particle (hard-core boson) excited state generated by the creation and annihilation operators $\tau^\dag$ and $\tau$. In analyzing the $T^+$ excitation in the $m^\prime=1/3$ MP phase of AFSC, we use the relation $K=3q_x$ because the periodicity of AFSC is three times as long as that of the projected model (\ref{PM}). The white line in Fig.~\ref{32}(c) shows the dispersion relation $\epsilon_K^{\prime\prime}$ including the constant energy in Eq.~\eqref{dr}. The result reproduces well the dispersion of the peak structure in the DSSF of AFSC. This hard-core bosonic excitation is a collective mode of three-spin clusters (trimers), and thus we call it {\it trimeron} in this paper.
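The dispersion \eqref{dr} can be verified with a quick numerical sketch: in the one-particle sector of Eq.~(\ref{PM}) the interaction term $V\sum_l n_l n_{l+1}$ vanishes and $\mu$ is only a constant shift, so diagonalizing the remaining tight-binding matrix on a small periodic chain must reproduce $\epsilon_K^{\prime\prime}=2t\cos K$ up to a constant.

```python
import numpy as np

J1, J2 = 1.0, 0.65
root = np.sqrt(33 * J1**2 + 2 * J1 * J2 + J2**2)
Cm = np.sqrt(0.5 * (1 - (J1 + J2) / root))
Cp = np.sqrt(0.5 * (1 + (J1 + J2) / root))

t = 0.25 * Cm * (J1 * Cm - 2 * np.sqrt(2) * J2 * Cp)  # hopping amplitude

# One-particle sector of Eq. (PM) on an L-site ring: for a single
# hard-core boson the V n_l n_{l+1} term vanishes and mu shifts all
# levels equally, leaving only the tight-binding hopping matrix.
L = 12
hop = np.zeros((L, L))
for l in range(L):
    hop[l, (l + 1) % L] = hop[(l + 1) % L, l] = t
numeric = np.sort(np.linalg.eigvalsh(hop))

# Compare with 2 t cos K at the allowed momenta K = 2 pi n / L
K = 2 * np.pi * np.arange(L) / L
analytic = np.sort(2 * t * np.cos(K))
print(np.allclose(numeric, analytic))
```

The check prints `True`, confirming that the plane-wave states $\ket{K}$ diagonalize the hopping part of the projected Hamiltonian.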
Furthermore, as explained in \secref{dssf2}, $T^+(q,\omega)$ in the $m^\prime=1/3$ MP phase is qualitatively equivalent to $S^-(q_x,\omega)$ in the $m=1/3$ MP phase and $S^+(q_x,\omega)$ in the $m=2/3$ MP phase of the FSL. We thus conclude that the origin of these DSSFs of the FSL is the trimeron, that is, a one-particle excitation of hard-core bosons based on three-spin clusters. \subsection{$T^-$ excitation} The $T^-$ excitation in the $m^\prime=1/3$ MP phase of AFSC is more difficult to understand than the $T^+$ excitation, because we have to take several states into account as low-lying states of the cluster. However, our purpose in this section is to give an intuitive physical picture to explain the DSSF in \figref{16}(c). Hence, quantitative reproduction is not necessary. Here, we discuss the excitation through the CBSBS similarly to the $T^+$ excitation. We consider three excited states $\ket{\beta^-}$, $\ket{\gamma^-}$, and $\ket{\delta^-}$ as low-lying excited states of the cluster. In the absence of inter-cluster interactions, the DSSF shows local excitations corresponding to $\ket{\beta^-}$, $\ket{\gamma^-}$, and $\ket{\delta^-}$, which produce three flat bands. Adding the interactions as a perturbation makes the three bands dispersive, indicating three modes of hard-core bosonic one-particle excitations (three trimerons) [see \figref{evo}(a)]. When the energy scale of the interactions is larger than the energy gaps between the three excited states, these modes hybridize and split. \begin{figure*}[htbp] \begin{center} \includegraphics[height=55mm]{./evo.eps} \end{center} \caption{ $T^-(q,\omega)$ in the effective model (\ref{Heff}) for the $m^\prime=1/3$ MP phase of AFSC with various coupling ratios (a) $\lambda=0.1$, (b) $\lambda=0.5$, and (c) $\lambda=1.0$.
The dispersion relations colored pink, blue, and yellow indicate the $\beta^-$, $\gamma^-$, and $\delta^-$ modes, respectively, in the projected Hamiltonian $\mcl{H}_-^{\prime\prime}$ without hybridization among these modes. } \label{evo} \end{figure*} Figure~\ref{evo} shows the $\lambda$ dependence of $T^-(q,\omega)$ in the effective model (\ref{Heff}) for the $m^\prime=1/3$ MP phase of AFSC. Note that \figref{evo}(c) (the case of $\lambda = 1$) is the same as \figref{16}(c). In Fig.~\ref{evo} we also plot the dispersion relations of the three modes, $\ket{\beta^-}$, $\ket{\gamma^-}$, and $\ket{\delta^-}$, which are obtained from the projected Hamiltonian $\mcl{H}_-^{\prime\prime}=\mcl{Q}_-\mcl{H}^\prime\mcl{Q}_-$ with $\mcl{Q}_-=\prod_l \br{\ket{\beta^+}_l\bra{\beta^+}_l+\sum_{\chi=\beta,\gamma,\delta}\ket{\chi^-}_l\bra{\chi^-}_l}$ but neglect hybridization among the three modes for simplicity. From \figref{evo}(c), we can easily imagine that, if we introduce the hybridization effect, the $\beta^-$ and $\gamma^-$ modes would repel each other more strongly around $q=2\pi/3$, and the $\gamma^-$-originated mode would form the lowest-energy excitations around $q=2\pi/3$ that are seen in $T^-(q,\omega)$. This picture based on hybridized trimerons explains the change of the spectral distribution from $\lambda=1$ to $\lambda=0.5$, where the splitting of spectral weight around $q=2\pi/3$ disappears with decreasing $\lambda$, because reducing the inter-cluster interactions weakens the hybridization of the trimerons. Based on these considerations in this subsection and \secref{SecIVc}, we conclude that $S^+(q_x,\omega)$ in the $m=1/3$ MP phase and $S^-(q_x,\omega)$ in the $m=2/3$ MP phase of the FSL originate from the hybridized trimerons.
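The level repulsion invoked here can be illustrated by a toy two-level sketch (purely illustrative: the coupling $g$ and the degenerate bare energies below are hypothetical placeholders, not the actual matrix elements of $\mcl{V}_c^\prime$). At a crossing of two bare modes, an inter-mode coupling proportional to $\lambda$ splits the hybridized levels by an amount that grows with $\lambda$, consistent with the evolution from \figref{evo}(a) to \figref{evo}(c).

```python
import numpy as np

def splitting(lam, e1, e2, g=0.3):
    """Level splitting of the 2x2 mode-coupling matrix [[e1, lam*g], [lam*g, e2]]."""
    lo, hi = np.linalg.eigvalsh(np.array([[e1, lam * g], [lam * g, e2]]))
    return hi - lo

# At a crossing of two bare modes (e1 = e2) the hybridized levels repel
# by 2*lam*g, so the splitting grows with the inter-cluster coupling lam.
gaps = [splitting(lam, e1=0.0, e2=0.0) for lam in (0.1, 0.5, 1.0)]
print(gaps)
```

The monotonic growth of the gap with $\lambda$ mirrors how the splitting of spectral weight around $q=2\pi/3$ develops as the inter-cluster interactions are turned on.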
\section{SUMMARY}\label{conclusion} In this paper, we have studied magnetic excitations in the MP phases of the FSL, where the three types of antiferromagnetic interactions, $J_1$, $J_2$ and $J_\perp$, are taken into account as the leg nearest-neighbor, leg second-neighbor, and rung nearest-neighbor couplings of a two-leg ladder, respectively. This model exhibits three MPs at fractionalized finite magnetization $m=M/M_\mrm{sat}=1/3$, $1/2$, and $2/3$ with respect to the saturation magnetization $M_\mrm{sat}$. These MPs emerge robustly in the strong-rung limit. Moreover, this condition allows us to map the model Hamiltonian onto another quasi-spin model, the AFSC, by ignoring high-energy states in a rung. To obtain the intuitive physical picture of spin dynamics through the mapping to the AFSC, we have focused on the strong-rung regime. We have first obtained magnetic excitations of the FSL by calculating DSSFs using the DDMRG method. We have found that magnetic excitations in the MP phase are commensurate with the enlarged unit cell of the MP ground state. For the sake of comparison, we have also calculated DSSFs in the AFSC of quasi-spins, and have confirmed that the AFSC reproduces low-energy excitations of the FSL qualitatively. The $m=1/3$, $1/2$, and $2/3$ MP states in the FSL correspond to the $m^\prime=-1/3$, $0$, and $1/3$ MP states in the AFSC, respectively. The zero-magnetization ground state of the AFSC is well known as the dimerized state, so that the elementary excitations are regarded as bound spinons. Therefore, we conclude that low-energy magnetic excitations of the FSL correspond to bound quasi-spinons based on the singlet and $S^z=+1$ triplet states of a rung. To clarify the $S^+$ ($S^-$) excitation in the $m=2/3$ ($1/3$) MP state, we have additionally analyzed spin dynamics through the CBSBS in the AFSC.
In the CBSBS, one cluster corresponds to an enlarged unit cell after the spontaneous breaking of translational symmetry, and inter-cluster interactions are treated as perturbative effects as compared with the intra-cluster interaction. We have found a new quasi-particle mode as a hard-core bosonic excitation in the $m^\prime=1/3$ MP state of the AFSC, which we call trimeron because it is a collective mode of spin trimers. This trimeron picture is common to the $S^+$ ($S^-$) excitation in the $m=2/3$ ($1/3$) MP state of the FSL. On the other hand, the $S^-$ ($S^+$) excitation in the $m=2/3$ ($1/3$) MP state is not well described as a single trimeron mode. We have thus examined the inter-cluster interaction dependence of the DSSF. The obtained result indicates that inter-mode coupling enhanced by the inter-cluster interactions is crucial even for low-energy excitations. Indeed, we have confirmed that two low-lying modes are hybridized in the excitation spectra with increasing inter-cluster interactions, which are regarded as hybridized trimerons. Consequently, we conclude that the $S^-$ ($S^+$) excitation in the $m=2/3$ ($1/3$) MP state corresponds to the hybridized trimerons of quasi-spins. Our results will be useful for understanding the low-energy physics not only in FSL materials such as \ce{BiCu2PO6}~\cite{abraham94,koteswararao07,mentre09,tsirlin10} and \ce{Li2Cu2O(SO4)2}~\cite{rousse17,sun15,vaccarelli17,vaccarelli19} but also in weakly coupled spin-dimer compounds~\cite{kodama02,ruegg03,kimura16}, where magnetic excitations originating from the new quasi-particle can be clarified by inelastic neutron scattering experiments in a magnetic field. In such materials, the spin dynamics we have clarified is also important for understanding spin/heat transport~\cite{nagasawa14,jeon16,kawamata18} including their application to spintronics~\cite{uchida08,hirobe16}. Furthermore, the CBSBS is also useful for the analysis of low-energy excitations in the MPs of various spin systems.
We expect that new quasi-particles, $N$-merons, will be discovered as elementary excitations in MPs where the enlarged unit cell includes $N$ original cells after the spontaneous breaking of translational symmetry. \begin{acknowledgments} We would like to thank H. Onishi, M. Mori, S. Maekawa, and K. Okamoto for fruitful discussions. This work was partly supported by the creation of new functional devices and high-performance materials to support next-generation industries (CDMSI) to be tackled by using a post-K computer, Grants-in-Aid for Young Scientists (B) (Grant No. 16K17753), and the inter-university cooperative research program of IMR, Tohoku University. Numerical computation in this work was carried out at the Supercomputer Center of the Institute for Solid State Physics, University of Tokyo, and on the supercomputers at JAEA. \end{acknowledgments}
\section{Introduction}\label{S:introduction} We consider the 3D quadratic Zakharov-Kuznetsov equation $$ \text{(3D ZK)} \qquad \qquad \partial_t u + \partial_x \Delta u + \partial_x u^2 =0, \qquad \qquad $$ where $u=u(\mathbf{x},t)$, for $\mathbf{x} = (x,y,z) \in \mathbb{R}^3$, $t \in \mathbb{R}$. This equation is a natural multidimensional generalization of the well-known Korteweg-de Vries (KdV) equation, which models weakly nonlinear waves in shallow water. The 3D ZK equation was originally proposed by Zakharov and Kuznetsov to describe weakly magnetized ion-acoustic waves in a low-pressure magnetized plasma, the standard reference being \cite{ZK1974}. In fact, the original announcement and formal derivation from hydrodynamics appeared in 1972 in a preprint of the Soviet Academy of Sciences \cite{ZK1972}, see Figure \ref{fig:ZK-1972}, where the authors write ``until now in hydrodynamics and plasma physics the attention was paid only to the one-dimensional solitons''. In that paper (and its 1974 JETP analog), the stability of the 3D solitons was discussed via the argument that a Lyapunov-type functional ($E+\lambda M$) is minimized on the soliton. \begin{figure}[ht] \includegraphics[height=4in]{ZK-1972-p4c.jpg} \caption{\small{V.E.Zakharov and E.A.Kuznetsov, ``On three-dimensional solitons", Siberian branch of USSR Academy of Sciences, Novosibirsk 1972; the title page and p.4 with the derivation of the equation and conserved quantities.} } \label{fig:ZK-1972} \end{figure} The formal and then rigorous derivation of the 3D Zakharov-Kuznetsov equation as a long-wave small-amplitude limit of the Euler-Poisson system in the cold-plasma approximation was done in \cite{LS} and \cite{LLS}, respectively. Other derivations exist as well -- see, for example, references in \cite{LLS, FHRY, FHR3}. Unlike KdV and other generalizations such as Kadomtsev-Petviashvili or Benjamin-Ono equations, the Zakharov-Kuznetsov equation is not completely integrable.
However, it has a Hamiltonian structure with three conserved quantities: during their lifespan, solutions $u(t)$ (with sufficient decay) conserve the $L^2$-norm (often called mass), the energy (Hamiltonian), and the integral: \begin{equation}\label{MC} M(u(t))=\int_{\mathbb{R}^3} u^2(t)\, d {\bf{x}} = M(u(0)), \end{equation} \begin{equation}\label{EC} E(u(t))=\dfrac{1}{2}\int_{\mathbb{R}^3}|\nabla u(t)|^2\;d{\bf{x}} - \dfrac{1}{3}\int_{\mathbb{R}^3} u^{3}(t)\;d{\bf{x}} = E(u(0)), \end{equation} \begin{equation}\label{L1-inv} \int_{\mathbb{R}} u(t,{\bf{x}}) \, dx = \int_{\mathbb{R}} u(0, {\bf{x}}) \, dx, \end{equation} where the last conservation is obtained by integrating the original equation on $\mathbb{R}$ in the first coordinate $x$. The equation has a family of traveling waves called solitary waves (sometimes called solitons, although the model is not integrable), moving only in the positive $x$-direction: $$ u(t, {\bf{x}}) = Q_\lambda(x - \lambda^2 t, y, z) $$ with $Q_\lambda({\bf{x}}) \to 0$ as $|{\bf x}| \to \infty$, and $Q_\lambda$ is the dilation of the ground state $Q$: $$ Q_\lambda({\bf{x}}) = \lambda^2 \, Q(\lambda \, {\bf{x}}) $$ with $Q$ being the unique radial positive solution in $H^1(\mathbb{R}^3)$ of the well-known nonlinear elliptic equation $-\Delta Q+Q - Q^2 = 0$. It is well-known that $Q \in C^{\infty}(\mathbb{R}^3)$, $\partial_r Q(r) <0$ for any $r = |{\bf{x}}|>0$ and for any multi-index $\alpha$ \begin{equation}\label{prop-Q} |\partial^\alpha Q(x, y, z)| \leq c(\alpha) \, e^{-r} \quad \mbox{for any}\quad {\bf{x}} \in \mathbb{R}^3. \end{equation} The orbital stability of these traveling waves was proved by de Bouard \cite{deB}, where she followed the KdV argument of Grillakis, Shatah \& Strauss \cite{GSS}, while considering solutions in weighted spaces. The more delicate question of asymptotic stability for ZK in dimension $d\geq 2$ was first considered by C\^{o}te, Mu\~{n}oz, Pilod \& Simpson \cite{CPMS}.
They covered the case of the 2D ZK, but their result does not apply to the 3D ZK, since their Liouville theorem fails (e.g., due to their choice of orthogonality conditions and manner of addressing the local virial estimate). The present paper fills this gap by establishing asymptotic stability for the physical case of the 3D ZK. The Cauchy problem for the 3D ZK equation has been studied by several authors. First, local well-posedness can be established via the classical Kato method in $H^s$ for $s>\frac52$. This was improved by Linares \& Saut \cite{LS}, who obtained the local well-posedness in $H^s$ with $s>\frac98$ following the method of Kenig \cite{K2004}, which was then further improved by Ribaud \& Vento \cite{RV} down to $H^s$ with $s>1$. The global well-posedness in $H^s$, $s>1$ was established by Molinet \& Pilod \cite{MP}. At the time we started writing the present paper, this was the best result, and therefore, we arranged our argument to establish the statement of asymptotic stability as formulated below in Theorem \ref{T:main} for certain weak solutions that we termed Class B (as defined in Definition \ref{D:ClassB}) that were \emph{assumed} to be orbitally stable. The best known well-posedness results at the time (Ribaud \& Vento \cite{RV}, Molinet \& Pilod \cite{MP}), combined with the orbital stability argument of de Bouard \cite{deB}, gave a corollary that solutions in $H^s$, $s>1$, with initial data close to a $Q$ with respect to the $H^1$ norm, were $H^1$ orbitally stable, thus, meeting the hypotheses of our Theorem \ref{T:main}, and allowing for the conclusion of $H^1$ asymptotic stability for such solutions. Recently and after we had nearly completed the present paper, Herr \& Kinoshita \cite{HK} announced a proof of local well-posedness for the 3D ZK in $H^s$ for $s>-\frac12$. 
This, when combined with the orbital stability argument of de Bouard \cite{deB}, establishes that $H^1$ solutions, initially close to $Q$ in $H^1$, are orbitally stable, thus, meeting the hypotheses of our Theorem \ref{T:main}. Therefore, we can now state an unconditional version of asymptotic stability as our main result: \begin{theorem}[main theorem] For $\alpha \ll 1$, the following statement holds: if the initial condition $u_0\in H_{\mathbf{x}}^1$ and \begin{equation} \label{E:initial} \|u_0 - Q \|_{H_{\mathbf{x}}^1} \leq \alpha, \end{equation} then the corresponding solution $u(\mathbf{x},t)$ to the 3D ZK is \emph{asymptotically stable} in the following sense \begin{enumerate} \item (orbital stability) there exist trajectories $c(t)>0$ and $\mathbf{a}(t)\in \mathbb{R}^3$ such that $$ \| c(t)^2 u(c(t)\mathbf{x} + \mathbf{a}(t),t) - Q(\mathbf{x}) \|_{H_{\mathbf{x}}^1} \lesssim \alpha, $$ \item (convergence of trajectories) there exists $c_*$ with $|c_*-1| \lesssim \alpha$ such that $$ c(t) \to c_* \,, \quad \text{and} \quad \mathbf{a}'(t) \to c_*^{-2} \mathbf{i} \quad \text{as} \quad t\to +\infty, $$ \item (weak convergence as $t\nearrow \infty$) there holds \begin{equation} \label{E:main-weak} c(t)^2 u(c(t) \mathbf{x} + \mathbf{a}(t),t) \rightharpoonup Q(\mathbf{x}) \quad \text{ (weakly) in } H_{\mathbf{x}}^1 \text{ as }t\to +\infty, \end{equation} \item ($L^2$ strong convergence in conic right-half space) for any $\delta\gtrsim \alpha$, we have strong convergence in $L^2_{\mathbf{x}}$ on the conic right-half space (see Figure \ref{F:cut-cone}) \begin{equation} \label{E:window-conv-b} \| c(t)^2 u(c(t) \mathbf{x} + \mathbf{a}(t),t) - Q(\mathbf{x}) \|_{L^2_{\mathbf{x}}(x> (-1+\delta) t -\sqrt{y^2+z^2}\tan \theta )} \to 0 \text{ as } t\to +\infty \end{equation} for all $\theta$ such that $$ 0\leq \theta \leq \frac{\pi}{3}-\delta.
$$ \end{enumerate} \end{theorem} The $L^2$ convergence is stated in \eqref{E:window-conv-b} in the reference frame of the soliton (being at the origin). In the reference frame of the solution, the rightward shifting external conic region is $x> \delta t -\sqrt{y^2+z^2} \, \tan \theta$. As mentioned, this theorem follows from the orbital stability result of de Bouard \cite{deB}, the recent well-posedness result of Herr \& Kinoshita \cite{HK}, and our key theorem (Theorem \ref{T:main} below). We note that any $u_0(\mathbf{x})$ for which there exists $c_0>0$ and $\mathbf{a}_0 \in \mathbb{R}^3$ so that $$\| c_0^2 u_0(c_0 \mathbf{x} + \mathbf{a}_0) - Q(\mathbf{x}) \|_{H_{\mathbf{x}}^1} \lesssim \alpha$$ can be rescaled and translated to meet the hypothesis \eqref{E:initial}. In \S\ref{S:outline} below, we provide an outline of the paper with definitions, the statement of the main theorem (Theorem \ref{T:main}), supporting propositions and lemmas. These supporting propositions and lemmas are each proved in the sections of the paper (\S \ref{S:RV-estimates}-\ref{S:numerics}) indicated after their statement. The broad outline of the argument is as follows: monotonicity estimates based on calculating $$ \partial_t \int [u(\mathbf{x}+\mathbf{a}(t), t)]^2 \phi(\mathbf{x} ,t) \, d\mathbf{x} $$ for a suitable monotonic-in-$x$ weight $\phi(\mathbf{x},t)$, provide the strong $L^2$ convergence in \eqref{E:window-conv-b} away from the soliton center. Once the weak convergence in \eqref{E:main-weak} is established, the strong $L^2$ convergence on a compact region around the soliton center in \eqref{E:window-conv-b} will follow. Thus, the main remaining task is to establish \eqref{E:main-weak}, which is proved in several steps. Taking a limit of solutions along a time sequence $t_n\nearrow +\infty$ yields a radiation-free solution $\tilde u(\mathbf{x},t)$. 
The monotonicity estimates give exponential spatial decay of this solution, but the functional analytic methods that produce this limiting solution $\tilde u$ only yield that it has $H_{\mathbf{x}}^1$ regularity and is a weak type solution (that we call a Class B solution). One key new element of the paper is showing that the uniform in time strong spatial decay of $\tilde u$ is enough to boost its regularity -- we are, in fact, able to show it is \emph{smooth}, and thus, a strong solution to the 3D ZK. Then we show that $\tilde u$, once renormalized, is $Q$, the soliton, by a rigidity argument based on a virial estimate for the linearized equation. This is achieved by contradiction -- if the rigidity statement failed, then there would be a sequence of solutions $\tilde u$, from which we could extract (after renormalization) a solution to the linearized equation \emph{without nonlinear terms} (we call this the linear linearized equation). In the passage of this limit, we again use our regularity boost techniques. Finally, we can prove a virial estimate for the linear linearized equation by a positive commutator argument after passing to a dual problem and checking a spectral condition with robust numerical analysis. The regularity boost arguments mentioned above are new to this type of problem, and involve Littlewood-Paley analysis, a discrete Gronwall argument, and the local theory estimates of Ribaud \& Vento, even though these estimates lie at regularity slightly above $H_{\mathbf{x}}^1$. \subsection{Acknowledgements} L.G.F. was partially supported by CNPq, CAPES and FAPEMIG (Brazil). From September 2017 till August 2019, J.H. served as Program Director in the Division of Mathematical Sciences at the National Science Foundation (NSF), USA, and as a component of this job, J.H. received support from NSF for research, which included work on this paper. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. S.R. was partially supported by the NSF CAREER grant DMS-1151618/1929029 and NSF grant DMS-1815873/1927258. K.Y.'s research support to work on this project came from grants DMS-1929029 and 1927258 (PI: Roudenko). \section{Outline of the paper} \label{S:outline} \begin{definition}[Class B solutions] \label{D:ClassB} We call $u(\mathbf{x},t)$ a \emph{Class B} global solution of the 3D ZK if \begin{enumerate} \item for each $T>0$ and for each $s<1$, $$ u\in C([-T,T]; H_{\mathbf{x}}^s) \,, \qquad \partial_t u \in C([-T,T]; H_{\mathbf{x}}^{s-3}), $$ \item for each $t\in \mathbb{R}$, $u(t) \in H_{\mathbf{x}}^1$ and $\partial_t u(t) \in H_{\mathbf{x}}^{-2}$ and there exists $C>0$ such that $$ \sup_{t\in \mathbb{R}} \| u(t) \|_{H_{\mathbf{x}}^1} + \sup_{t\in\mathbb{R}} \| \partial_t u(t) \|_{H_{\mathbf{x}}^{-2}} \leq C, $$ \item for each $t\in \mathbb{R}$, the equation $$ \partial_t u(t) + \partial_x \Delta u(t) + \partial_x u(t)^2=0 $$ holds as an equality of the sum of three functions each belonging to $H^{-2}_{\mathbf{x}}$. \end{enumerate} \end{definition} \begin{lemma}[Class B solutions satisfy mass conservation] \label{L:mass-conservation} Suppose that $u$ is a Class B solution to the 3D ZK. Then $u$ satisfies mass conservation, i.e., $\| u(t) \|_{L_{\mathbf{x}}^2}^2$ is constant in time, and is denoted $M(u)$. \end{lemma} This is proved in \S\ref{S:ClassB-mass} by computing $\partial_t \| P_{\leq N}u\|_{L_{\mathbf{x}}^2}^2$, deducing a near conservation law with error bounded by $N^{-1/2}$, and then sending $N\to \infty$. We note that a similar method does not work to prove energy conservation. \begin{definition}[orbital stability] \label{D:orb-stab} Let $\alpha>0$.
We say that $u$ is an \emph{$\alpha$-orbitally stable} solution to the 3D ZK if $u$ is a Class B solution such that $$ \sup_{t\in \mathbb{R}} \inf_{\substack{\mathbf{a}(t) \in \mathbb{R}^3 \\ c(t) \in (0,+\infty)}} \| c(t)^2 u(c(t)\mathbf{x}+\mathbf{a}(t), t) -Q( \mathbf{x}) \|_{H_{\mathbf{x}}^1} \leq \alpha. $$ \end{definition} \begin{lemma}[unique parameters] \label{L:geom-decomp} There exists $\alpha>0$ sufficiently small so that, if $u$ is a Class B $\alpha$-orbitally stable solution to the 3D ZK, then there exist \emph{unique} translation $\mathbf{a}(t)$ and scale parameters $c(t)>0$ so that $\epsilon$ defined by $$\epsilon(\mathbf{x},t) = c(t)^2u(c(t)\mathbf{x}+\mathbf{a}(t),t) - Q(\mathbf{x})$$ satisfies, for all $t$, the orthogonality conditions $$ \langle \epsilon(t), \nabla Q \rangle =0 \quad \mbox{and} \quad \langle \epsilon(t), Q^2 \rangle =0, $$ and $$ \| \epsilon \|_{L_t^\infty H_{\mathbf{x}}^1} \lesssim \alpha. $$ Let $\mathcal L = I-\Delta -2Q$ and define $$ f\stackrel{\rm{def}}{=} \frac{\mathcal{L}\partial_x (Q^2)}{\langle \Lambda Q, Q\rangle} \,, \qquad \mathbf{g} \stackrel{\rm{def}}{=} \left( \frac{ \mathcal{L}Q_{xx} }{ \|Q_x\|_{L_{\mathbf{x}}^2}}, \frac{\mathcal{L}Q_{xy}}{ \|Q_y\|_{L_{\mathbf{x}}^2}}, \frac{\mathcal{L}Q_{xz} }{ \|Q_z\|_{L_{\mathbf{x}}^2}}\right). $$ Denote $b(t) = \|\epsilon(t)\|_{L_{\mathbf{x}}^2}$. Then the parameters $c(t)$ and $\mathbf{a}(t)$ are $C^{1,\frac23}$ and satisfy $$ |\,c^2c' - \langle \epsilon, f\rangle | + |\, c(\mathbf{a}' - c^{-2} \mathbf{i}) - \langle \epsilon, \mathbf{g}\rangle | \lesssim b(t)^2. $$ \end{lemma} This is proved in \S\ref{S:geom-decomp} by an implicit function theorem argument. The equations for the parameters follow by differentiating the orthogonality conditions in time. We mention that parameter estimates can be found in \cite{FHRY, FHR1,FHR2,FHR3, FHR4}. 
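A remark on why the operator $\mathcal L$ and these orthogonality conditions are natural: differentiating the ground state equation $-\Delta Q + Q - Q^2 = 0$ in $x$ gives $\mathcal L\, \partial_x Q = 0$, so $\nabla Q$ lies in the kernel of $\mathcal L$. The 3D ground state has no closed form, but its 1D analogue does, $Q(x)=\frac32\operatorname{sech}^2(x/2)$, and both identities can be checked symbolically there (a sketch for the 1D model, not part of the proof):

```python
import sympy as sp

x = sp.symbols('x', real=True)
Q = sp.Rational(3, 2) * sp.sech(x / 2)**2      # 1D ground state

# Ground state equation: -Q'' + Q - Q^2 = 0
residual = -sp.diff(Q, x, 2) + Q - Q**2

# Kernel of the linearized operator: L Q' = Q' - Q''' - 2 Q Q' = 0
Qx = sp.diff(Q, x)
kernel = Qx - sp.diff(Qx, x, 2) - 2 * Q * Qx

# Both expressions vanish identically; evaluate at sample points
pts = (-1.7, -0.4, 0.3, 2.1)
err = max(abs(float(e.subs(x, p))) for e in (residual, kernel) for p in pts)
print(err < 1e-12)
```

Both residuals vanish to machine precision, reflecting the translation invariance that forces $\langle \epsilon, \nabla Q\rangle = 0$ to be imposed by modulation.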
Our main theorem is the following \begin{theorem}[main theorem for Class B] \label{T:main} For $\alpha \ll 1$, any $\alpha$-orbitally stable Class B solution $u$ to the 3D ZK with $M(u)=M(Q)$ is \emph{asymptotically stable} in the following sense: there exists $c_*$ with $|c_*-1| \lesssim \alpha$ such that, as $t\to +\infty$, $$ c(t) \to c_* \,, \qquad \mathbf{a}'(t) \to c_*^{-2} \mathbf{i} $$ and $$ c(t)^2 u(c(t) \mathbf{x} + \mathbf{a}(t),t) \rightharpoonup Q(\mathbf{x}) \quad \text{ (weakly) in } H_{\mathbf{x}}^1. $$ Moreover, for any $\delta\gtrsim \alpha$, we have strong convergence in $L^2_{\mathbf{x}}$ on the conic right-half space \begin{equation} \label{E:window-conv} \| c(t)^2 u(c(t) \mathbf{x} + \mathbf{a}(t),t) - Q(\mathbf{x}) \|_{L^2_{\mathbf{x}}(x> (-1+\delta) t -\sqrt{y^2+z^2}\tan \theta )} \to 0 \end{equation} for all $\theta$ such that $$0\leq \theta \leq \frac{\pi}{3}-\delta.$$ \end{theorem} The proof of Theorem \ref{T:main} follows from Propositions \ref{P:wk-lim} and \ref{P:rigidity} below, as detailed in \S \ref{S:proof-main-theorem}. It is deduced from these main results plus the monotonicity estimate in \S\ref{S:monotonicity}, in particular, Lemma \ref{L:Ipm-estimates}, which gives an estimate on the mass of the solution in a conic right-half space region $(\cos \theta, \sin\theta) \cdot (x+(1-\delta) t, \sqrt{1+y^2+z^2})>0$, in the reference frame where the soliton is at the origin. Specifically, it estimates this cut-off mass in the future in terms of its value in the past. In \S \ref{S:proof-main-theorem}, this is applied to give a ``decay on the right'' estimate in the conic region depicted in Figure \ref{F:decay}. But it can be applied for two different slopes (for example $x>-\frac{1}{10}t$ and $x>-\frac{19}{20}t$) to show that both regions asymptotically trap the same mass, and thus, the region \emph{between} these lines has asymptotically vanishing mass.
This results in a ``decay on the left'' estimate also depicted in Figure \ref{F:decay}. By the decay on the right and decay on the left estimates, it suffices to prove that the solution in the soliton region $|\mathbf{x}| \lesssim r$ converges weakly to a rescaling of $Q(\mathbf{x})$. This is accomplished in Propositions \ref{P:wk-lim} and \ref{P:rigidity}. \begin{proposition}[construction of a smooth spatially decaying asymptotic solution] \label{P:wk-lim} There exists $\alpha_0>0$ such that for all $0< \alpha \leq \alpha_0$, the following holds. Let $u$ be an $\alpha$-orbitally stable Class B solution to the 3D ZK with $M(u)=M(Q)$, and let $c(t)>0$ and $\mathbf{a}(t) \in \mathbb{R}^3$ be the associated modulation parameters of scale and position given by Lemma \ref{L:geom-decomp}. For each sequence of times $t_m \nearrow +\infty$, there exists a subsequence $t_{m'} \nearrow +\infty$ such that for each $t\in \mathbb{R}$, $$ u(\mathbf{x} + \mathbf{a}(t_{m'}), t+ t_{m'}) \rightharpoonup \tilde u(\mathbf{x}, t) \quad \text{(weakly) in }H_{\mathbf{x}}^1, $$ where $\tilde u$ is a \emph{smooth} $\alpha$-orbitally stable solution to the 3D ZK. Moreover, letting $\tilde c(t)>0$ and $\tilde{\mathbf{a}}(t)\in \mathbb{R}^3$ be the modulation parameters associated to $\tilde u$ given by Lemma \ref{L:geom-decomp}, we have the uniform-in-time spatial decay property: for each $R>0$, $$ \| \tilde u( \mathbf{x} + \tilde{\mathbf{a}}(t), t) \|_{L_{t\in \mathbb{R}}^\infty L^2_{\mathbf{x}}(|\mathbf{x}|>R)} \lesssim e^{- \frac{R}{32}}. $$ \end{proposition} \begin{proposition}[rigidity of orbitally stable smooth solutions with spatial decay] \label{P:rigidity} There exists $\alpha_0>0$ such that for all $0< \alpha \leq \alpha_0$, the following holds. Let $\tilde u$ be a smooth $\alpha$-orbitally stable solution to the 3D ZK with associated modulation parameters $\tilde c(t)>0$ and $\tilde{\mathbf{a}}(t)\in \mathbb{R}^3$ given by Lemma \ref{L:geom-decomp}.
Suppose that $\tilde u$ satisfies the uniform-in-time spatial decay property: for each $k\geq 0$, \begin{equation} \label{E:intro-101} \| \langle \mathbf{x} \rangle^k \tilde u( \mathbf{x} + \tilde{\mathbf{a}}(t), t) \|_{L_{t\in \mathbb{R}}^\infty L^2_{\mathbf{x}}} < \infty. \end{equation} Then there exists $c_+>0$ and $\mathbf{a}_+\in \mathbb{R}^3$ such that $$ \tilde u( \mathbf{x}, t) = c_+^{-2} Q\left( c_+^{-1}(\mathbf{x} - \mathbf{a}_+- t\,c_+^{-2})\right). $$ \end{proposition} \subsection{Outline of proof of Proposition \ref{P:wk-lim}} The proof of Proposition \ref{P:wk-lim} is decomposed into three key lemmas, as follows. \begin{lemma} \label{L:soft-step} There exists $\alpha_0>0$ sufficiently small so that for all $0<\alpha \leq \alpha_0$, the following holds. Suppose that $u$ is a Class B solution to the 3D ZK and is $\alpha$-orbitally stable. Let $t_m \nearrow +\infty$ be an arbitrary sequence of times. Then there exists a subsequence $t_{m'}$ such that the following holds \begin{enumerate} \item For each\footnote{meaning that the weak limit exists and we \emph{define} $\tilde u(t)$ to be the value of the limit.} $t\in \mathbb{R}$, $u(\bullet+\mathbf{a}(t_{m'}), t+t_{m'}) \rightharpoonup \tilde u(t)$ weakly in $H_{\mathbf{x}}^1$. \item For each $R>0$ and each finite time interval $I$, $u(\mathbf{x}+\mathbf{a}(t_{m'}), t+t_{m'}) \mathbf{1}_{<R}(\mathbf{x})$ converges strongly in $C(I; L_{\mathbf{x}}^2)$ to $\tilde u(\mathbf{x},t) \mathbf{1}_{<R}(\mathbf{x})$. \item $\tilde u$ is a Class B solution to the 3D ZK. \item $\tilde u$ is $\alpha$-orbitally stable with associated parameters (as in Lemma \ref{L:geom-decomp}) $\tilde{\mathbf{a}}(t)$ and $\tilde c(t)$. In fact, for every $t\in \mathbb{R}$, we have \begin{equation} \label{E:param-conv} \begin{aligned} &\mathbf{a}(t+t_{m'}) - \mathbf{a}(t_{m'}) \to \tilde{\mathbf{a}}(t)\quad \mbox{and} \quad c(t+t_{m'}) \to \tilde c(t)\quad \text{ as }m'\to \infty. 
\end{aligned} \end{equation} In particular, $\tilde{\mathbf{a}}(0)=0$. \end{enumerate} \end{lemma} This is proved in \S \ref{S:wk-lim}. Lemma \ref{L:soft-step} provides the $\alpha$-orbitally stable limiting solution $\tilde u$, but only as a Class B solution, and it is constructed by weak-* compactness methods. Using that $\mathbb{Q}$ is countable, a subsequence $t_{m'}$ is obtained along which $u(\bullet+\mathbf{a}(t_{m'}), t+t_{m'})$ converges weakly in $H_{\mathbf{x}}^1$ for each $t\in \mathbb{Q}$. Using a frequency-projected uniform-continuity-in-time property of $u$ and the density of $\mathbb{Q}$ in $\mathbb{R}$, this weak convergence is extended to hold for all $t\in \mathbb{R}$ (not just $t\in \mathbb{Q}$). Defining $\tilde u(t)$ to be this weak limit, the fact that it is an $\alpha$-orbitally stable Class B solution to the 3D ZK is inherited from the corresponding properties of $u$ via elementary arguments. The limiting solution $\tilde u$ provided in Lemma \ref{L:soft-step} is obtained merely as a Class B solution -- this is all that is possible using weak-* compactness machinery. The fact that $\tilde u$ is exponentially decaying and smooth is separately obtained in Lemma \ref{L:exp-decay} and Lemma \ref{L:regularity-boost} below, using monotonicity lemmas and a virial-type regularity gain estimate, respectively. \begin{lemma} \label{L:exp-decay} The Class B solution $\tilde u$ constructed in Lemma \ref{L:soft-step} satisfies exponential decay in space, uniformly in time. Specifically, $$ \| \tilde u(\mathbf{x}+\tilde{\mathbf{a}}(t),t) \|_{L_t^\infty L^2_{\mathbf{x}}(|\mathbf{x}|>R)} \lesssim e^{-R/32}. $$ \end{lemma} This is proved in \S \ref{S:app-mon}, by applying the monotonicity estimates \eqref{E:pf-main-1} and \eqref{E:pf-main-5} in Lemma \ref{L:u-decay}, which were obtained from the $I_+$ monotonicity estimate \eqref{E:Ip-right} in Lemma \ref{L:Ipm-estimates} (in \S\ref{S:monotonicity}).
\begin{lemma} \label{L:regularity-boost} Any Class B solution $\tilde u$ of the 3D ZK satisfying the exponential decay as in Lemma \ref{L:exp-decay} is in fact smooth. \end{lemma} This is proved in \S \ref{S:higher-regularity}. The proof hinges on a frequency-projected virial-type identity \eqref{E:HR10} for Class B solutions. When it is integrated in time and the terms are estimated using weighted Sobolev estimates and Bernstein's inequality, we obtain in Lemma \ref{L:L2boost} a bound on $\|u\|_{L_I^2 H_{\mathbf{x}}^{5/4-}}$ in terms of weighted $L^2_{\mathbf{x}}$ bounds and (unweighted) energy bounds in $H_{\mathbf{x}}^1$. Note that $\|u\|_{L_I^2 H_{\mathbf{x}}^{5/4-}}$ reflects a gain in regularity, but \emph{averaged in time}. At this point, we are able to tap into the feature of the Ribaud \& Vento local well-posedness machinery (as outlined in \S\ref{S:RV-estimates}) that the right-side bounds in their argument are slightly above $H_{\mathbf{x}}^1$ but have time integration ``to spare''. We can then use discrete Gronwall-type estimates in the frequency decomposition in Lemmas \ref{L:maximal-compare} and \ref{L:reg-boost-last} to bootstrap the regularity gain to $L_I^\infty H_{\mathbf{x}}^{9/8}$, an honest improvement in regularity (it is $L^\infty$ in time). This argument can, in fact, be applied recursively to achieve any level of regularity. We note that it is possible to gain regularity in this way because the solution is assumed to have exponential spatial decay. It is apparent that the conclusions of Lemmas \ref{L:soft-step}, \ref{L:exp-decay}, and \ref{L:regularity-boost} combined yield the conclusions of Proposition \ref{P:wk-lim}. \subsection{Outline of proof of Proposition \ref{P:rigidity}} The proof of Proposition \ref{P:rigidity} proceeds by contradiction. Suppose that the conclusion of Proposition \ref{P:rigidity} is false.
Then there exists a sequence $\tilde u_n$ of smooth $\alpha_n$-orbitally stable solutions to the 3D ZK, with $\alpha_n \to 0$, such that the following holds. Let $\tilde c_n(t)>0$ and $\tilde{\mathbf{a}}_n(t) \in \mathbb{R}^3$ be the modulation parameters associated to $\tilde u_n$ given by Lemma \ref{L:geom-decomp}, and let \begin{equation} \label{E:intro-100} \tilde \epsilon_n(t) \stackrel{\rm{def}}{=} \tilde c_n(t)^2 \tilde u_n (\tilde c_n(t)\mathbf{x} + \tilde{\mathbf{a}}_n(t), t) - Q(\mathbf{x}). \end{equation} Then for each $n$, for some $t$, $$ b_n(t) \stackrel{\rm{def}}{=} \| \tilde \epsilon_n(t) \|_{L_{\mathbf{x}}^2}>0. $$ It follows that for \emph{all} $t\in \mathbb{R}$, $b_n(t)>0$. (Indeed, if $b_n(t)=0$ for some $t$, then $b_n(t)=0$ for all $t\in \mathbb{R}$.) We can assume, without loss of generality, by replacing $\tilde u_n(t)$ with $\tilde u_n(t+t_{*n})$ for some $t_{*n}$, that $$ b_n(0) \geq \frac12 \sup_{t\in \mathbb{R}} b_n(t) \stackrel{\rm{def}}{=} B_n>0. $$ Moreover, by a shift and slight rescaling of $\tilde u_n$, for each $n$, we can assume that $$ \tilde c_n(0) =1 \quad \text{ and } \quad \tilde{\mathbf{a}}_n(0)=0. $$ Let \begin{equation} \label{E:intro-102} w_n(t) = \frac{\tilde \epsilon_n(t)}{B_n} \end{equation} so that for all $n$, $$ \|w_n(0) \|_{L_{\mathbf{x}}^2} \geq \frac12 \,, \qquad \|w_n \|_{L_t^\infty L_{\mathbf{x}}^2} \leq 1. $$ We will obtain a contradiction from the following five lemmas, which, in particular, imply that $w_n(0) \to 0$ strongly in $L_{\mathbf{x}}^2$. Although we know from \eqref{E:intro-101} that each $\tilde u_n$, and hence, each $\tilde \epsilon_n$, satisfies uniform-in-time spatial decay, we do not know \emph{a priori} that this decay is \emph{uniform in $n$}, and moreover, \emph{normalized} according to the mass of $\tilde \epsilon_n$. Nevertheless, these properties can be proved using the $J_\pm$ monotonicity estimates in \S\ref{S:monotonicity}.
The result is \begin{lemma}[uniform spatial decay] \label{L:ep-decay} Let $\tilde \epsilon_n$ be as defined in \eqref{E:intro-100}. Then $\tilde \epsilon_n$ satisfies uniform-in-$n$, uniform-in-time, exponential spatial decay: $$ \| \tilde \epsilon_n \|_{L_t^\infty L_\mathbf{x}^2({|\mathbf{x}|>R})} \lesssim e^{-R/32} \| \tilde \epsilon_n \|_{L_t^\infty L_{\mathbf{x}}^2}. $$ Consequently, $w_n$ defined by \eqref{E:intro-102} satisfies $$ \| w_n \|_{L_t^\infty L_\mathbf{x}^2({|\mathbf{x}|>R})} \lesssim e^{-R/32} $$ uniformly in $n$. \end{lemma} This is proved in \S \ref{S:uniform-n-decay}. As mentioned, it is rather quickly deduced as a consequence of the $J_\pm$ monotonicity in Lemma \ref{L:Jpm-estimates}. \begin{lemma}[comparability of Sobolev norms] \label{L:ep-comparability} Let $\tilde \epsilon_n$ be as defined in \eqref{E:intro-100}. Then $\tilde \epsilon_n$ satisfies, for all $k$, \begin{equation} \label{E:intro-tilde-comp} \| \tilde \epsilon_n \|_{L_t^\infty H_{\mathbf{x}}^k} \lesssim_k \| \tilde \epsilon_n \|_{L_t^\infty L_{\mathbf{x}}^2} \end{equation} uniformly in $n$. Consequently, $w_n$ defined by \eqref{E:intro-102} satisfies, for each $k\geq 0$, $$ \| w_n \|_{L_t^\infty H_{\mathbf{x}}^k } \lesssim_k 1 $$ uniformly in $n$. \end{lemma} This is proved in \S \ref{S:Sobolev-comparability}. The proof is similar to the proof of Lemma \ref{L:regularity-boost} given in \S\ref{S:higher-regularity}, although additional ingredients are introduced to handle the $H_{\mathbf{x}}^1$ bound ($k=1$ case of Lemma \ref{L:ep-comparability}), which was automatic in the context of Lemma \ref{L:regularity-boost}. At issue here is the need to obtain the small factor $\| \tilde \epsilon_n \|_{L_t^\infty L_{\mathbf{x}}^2}$ on the right side of \eqref{E:intro-tilde-comp}. The idea is to couple a virial-type identity without frequency localization to one with frequency localization. 
The one without frequency localization allows for a reduction of the order of derivatives via integration by parts in the nonlinear term, which gives a bound that can be used in the nonlinear term estimates for the virial-type identity \emph{with} frequency localization. \begin{lemma}[convergence] \label{L:convergence} For each $T>0$, $w_n \to w$ in $C([-T,T]; L_{\mathbf{x}}^2)$, where the limit $w$ satisfies the following: \begin{enumerate} \item $w$ is uniform-in-time smooth: for each $k\geq 0$, $$ \| w\|_{L_t^\infty H_{\mathbf{x}}^k} < \infty, $$ \item $w$ has uniform-in-time spatial decay: $$ \| w \|_{L_t^\infty L_{\mathbf{x}}^2(|\mathbf{x}|>R)} \lesssim e^{-\delta R}, $$ \item $w(0)$ is nontrivial: $$ \| w(0) \|_{L_{\mathbf{x}}^2} = 1, $$ \item $w$ satisfies the equation $$ \partial_t w = \partial_x \mathcal{L}w + \alpha \Lambda Q + \boldsymbol{\beta} \cdot \nabla Q, $$ where $\alpha$ and $\boldsymbol{\beta} = (\beta_1,\beta_2,\beta_3)$ are time-dependent coefficients, \item $w$ satisfies the orthogonality conditions $$ \langle w, \nabla Q \rangle =0 \quad \text{and} \quad \langle w, Q^2 \rangle =0. $$ \end{enumerate} \end{lemma} This is proved in \S \ref{S:convergence}. Working with $\tilde \zeta_n$, a recentered and renormalized version of $\tilde \epsilon_n$ (see \eqref{E:tilzeta}), we first pass to a subsequence via Rellich-Kondrachov compactness so that $\tilde \zeta_n(0) \to \zeta_\infty(0)$, which is smooth and exponentially decaying. Taking $\zeta_\infty(t)$ to solve the expected limiting equation \eqref{E:con1}, we aim to prove that $\tilde \zeta_n(t) \to \zeta_\infty(t)$ for all $t\in \mathbb{R}$. Letting $\hat \zeta_n= \tilde \zeta_n - \zeta_\infty$, we derive the evolution equation for the difference, from which we deduce a Gronwall estimate on $\hat \zeta_n$, which shows the convergence in terms of $\tilde b_n \to 0$. In the original frame of reference, the limit is $w$, as described in the statement of Lemma \ref{L:convergence}.
All the properties of $w$ stated in Lemma \ref{L:convergence} are inherited from the sequence $w_n =\tilde \epsilon_n/B_n$. Now that we have constructed a nontrivial limiting solution $w$ with the properties stated in Lemma \ref{L:convergence}, the next step in the argument by contradiction is to prove that it cannot exist. This is achieved in the following lemma. \begin{lemma}[linear Liouville property] \label{L:linear-Liouville} Suppose that $w$ solves \begin{equation} \label{E:w-eq} \partial_t w =\partial_x \mathcal{L}w + \alpha \Lambda Q + \boldsymbol{\beta} \cdot \nabla Q, \end{equation} where $\alpha$ and $\boldsymbol{\beta}$ are time-dependent, and further suppose that $w$ satisfies the orthogonality conditions \begin{equation} \label{E:extra-orth} \langle w, Q^2 \rangle =0 \quad \text{and} \quad \langle w, \nabla Q \rangle =0. \end{equation} If $w$ satisfies global uniform-in-time spatial decay \begin{equation} \label{E:w-dec} \| \langle x \rangle^{1/2} w \|_{L_t^\infty H_{\mathbf{x}}^2} < \infty, \end{equation} then $w\equiv 0$. \end{lemma} This is proved in \S \ref{S:linear-Liouville} by observing that the quantity, quadratic in $w$, $$ Q(w) \stackrel{\rm{def}}{=} \langle \mathcal{L}w,w \rangle + \frac{2}{\langle \Lambda Q, Q \rangle} \langle w, Q \rangle^2 $$ is constant in time. This follows by computing $\partial_t Q(w)$, plugging in the equation for $w$, and appealing to the orthogonality conditions \eqref{E:extra-orth}. On the other hand, the time integral $\int_{t=-\infty}^{\infty} Q(w) \, dt$ is controlled by the left side of \eqref{E:vir-bd}, and the right side of \eqref{E:vir-bd} is finite by the assumption \eqref{E:w-dec}. This forces $Q(w)\equiv 0$, and by the positive definiteness of $\mathcal{L}$ (subject to \eqref{E:extra-orth}), this forces $w\equiv 0$.
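In more detail, here is a formal sketch of the constancy of $Q(w)$, using the identities $\mathcal{L}(\partial_j Q) = 0$ and $\mathcal{L}(\Lambda Q) = -2Q$, which follow from differentiating the ground state equation $-\Delta Q + Q - Q^2 = 0$ in translation and in scaling (here $\Lambda Q = 2Q + \mathbf{x}\cdot \nabla Q$, consistent with the formula $\partial_c Q_{c,\mathbf{a}} = -c^{-1}(\Lambda Q)_{c,\mathbf{a}}$ in \S\ref{S:geom-decomp}). Since $\mathcal{L}$ is self-adjoint and $\partial_x$ is skew-adjoint, substituting \eqref{E:w-eq} gives
\begin{align*}
\partial_t \langle \mathcal{L}w, w \rangle &= 2 \langle \mathcal{L}w, \partial_x \mathcal{L}w + \alpha \Lambda Q + \boldsymbol{\beta}\cdot \nabla Q \rangle = 2\alpha \langle w, \mathcal{L}\Lambda Q \rangle = -4\alpha \langle w, Q \rangle, \\
\partial_t \langle w, Q \rangle^2 &= 2 \langle w, Q \rangle \langle \partial_x \mathcal{L}w + \alpha \Lambda Q + \boldsymbol{\beta}\cdot \nabla Q, Q \rangle = 2\alpha \langle \Lambda Q, Q \rangle \langle w, Q \rangle,
\end{align*}
where we used $\langle \mathcal{L}w, \partial_x \mathcal{L}w \rangle = 0$, $\langle \mathcal{L}w, \boldsymbol{\beta}\cdot \nabla Q \rangle = \langle w, \mathcal{L}(\boldsymbol{\beta}\cdot \nabla Q) \rangle = 0$, $\langle \partial_x \mathcal{L}w, Q \rangle = -\langle w, \mathcal{L}Q_x \rangle = 0$, and $\langle \partial_j Q, Q \rangle = \frac12 \int \partial_j (Q^2) \, d\mathbf{x} = 0$. Multiplying the second line by $2/\langle \Lambda Q, Q \rangle$ and adding the first yields $\partial_t Q(w) = 0$.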
\begin{lemma}[virial estimate] \label{L:virial} Suppose that $w$ solves $$ \partial_t w =\partial_x \mathcal{L}w + \alpha \Lambda Q + \boldsymbol{\beta} \cdot \nabla Q, $$ where $\alpha$ and $\boldsymbol{\beta}$ are time-dependent, and further suppose that $w$ satisfies the orthogonality conditions $$ \langle w, Q^2 \rangle =0 \quad \text{and} \quad \langle w, \nabla Q \rangle =0. $$ Then $w$ satisfies the global-in-time estimate \begin{equation} \label{E:vir-bd} \| w \|_{L_t^2 H_{\mathbf{x}}^3} \lesssim \| \langle x \rangle^{1/2} w \|_{L_t^\infty H_{\mathbf{x}}^2}. \end{equation} \end{lemma} This is proved in \S \ref{S:virial}. The inequality is proved by passing to a dual problem in $v=\mathcal{L}w$ and showing that $v$ satisfies a virial identity. The desired inequality reduces to the positivity of a certain quadratic form. The positivity of this quadratic form is checked numerically, and details of the numerical method are provided in \S \ref{S:numerics}. \subsection{Notational conventions} We will use $\mathbf{x} = (x,y,z)$ for the spatial variable and $\boldsymbol{\xi}$ for the Fourier variable in $\mathbb{R}^3$. The Littlewood-Paley frequency projection is $\widehat{P_N f}(\boldsymbol{\xi}) = m(\boldsymbol{\xi}/N) \hat f(\boldsymbol{\xi})$, where $m(\boldsymbol{\xi})$ is smooth, supported in $\frac12\leq |\boldsymbol{\xi}| \leq 2$, and satisfies $\sum_{N \in 2^{\mathbb{Z}}} m(\boldsymbol{\xi}/N) = 1$. However, we will use only $N \geq 1$, and in fact redefine $P_1 = \sum_{N\leq 1} P_N$ to cover all low frequencies. We will use the notation $P_{\leq M} = \sum_{N\leq M} P_N$. While weighted estimates use the weight $x$ (not $\mathbf{x}$), all frequency projections are done with respect to all three variables using $P_N$ as defined above in terms of $m(\boldsymbol{\xi})$.
In some arguments in \S\ref{S:RV-estimates}, \S\ref{S:higher-regularity} and \S\ref{S:Sobolev-comparability}, we use the shorthand $\ln^+N = \ln(N+2)$ so that for all $N \geq 1$, we have $\ln^+N \geq 1$ (avoiding $\ln 1=0$). Throughout the paper we refer to Class B solutions, which were defined in Definition \ref{D:ClassB}. For an $\alpha$-orbitally stable solution $u$ to the 3D ZK (as defined in Definition \ref{D:orb-stab}) and modulation parameters $\mathbf{a}(t)$ and $c(t)$ (as given in Lemma \ref{L:geom-decomp}), we use the following notations for the \emph{remainder}: $$ \epsilon(\mathbf{x},t) \stackrel{\rm{def}}{=} c(t)^2u(c(t)\mathbf{x}+\mathbf{a}(t),t) - Q(\mathbf{x}). $$ With $Q_{c,\mathbf{a}}(\mathbf{x}) = c^{-2} Q(c^{-1}(\mathbf{x}-\mathbf{a}))$, we define $$ \eta(\mathbf{x},t) \stackrel{\rm{def}}{=} c^{-2}\epsilon(c^{-1}(\mathbf{x}-\mathbf{a})) = u(\mathbf{x},t) - Q_{c, \mathbf{a}}(\mathbf{x}) $$ (see \eqref{E:eta-def} and \eqref{E:eta-eq}), and $$ \zeta(\mathbf{x},t) \stackrel{\rm{def}}{=} B^{-1} \eta(t) $$ for $B = \|b(t) \|_{L_t^\infty}$, where $b(t) = \|\eta(t) \|_{L_{\mathbf{x}}^2}$ (see \eqref{E:zeta-1}). Integrals related to the monotonicity property of solutions to the 3D ZK are denoted by $I_\pm$ and $J_\pm$ and defined in \eqref{E:Ipm-de} and \eqref{E:J-def}, respectively. \section{Review of local theory estimates} \label{S:RV-estimates} In this section we review Ribaud \& Vento \cite{RV} local estimates as they become an essential tool later in our arguments. We start with the following result. 
\begin{lemma}[Ribaud \& Vento, Lemma 3.3] \label{L:RV1} For $M \geq 1$ and a time interval $I$ of length $|I|\leq 1$, we have \begin{equation} \label{E:HR20} \| P_M U(t) \phi \|_{L_x^2 L_{yz I}^\infty} \lesssim (\ln^+ M)^2 M \| P_M \phi \|_{L_{\mathbf{x}}^2}, \end{equation} \begin{equation} \label{E:HR21} \| P_M \int_0^t\partial_x U(t-s)f(\bullet,s) \,ds \|_{L_x^2 L_{yz I}^\infty} \lesssim (\ln^+ M)^2 M \| P_M f \|_{L_x^1 L_{yzI}^2}, \end{equation} \begin{equation} \label{E:HR21b} \| P_M \int_0^t\partial_x U(t-s)f(\bullet,s) \,ds \|_{L_I^\infty L_{\mathbf{x}}^2} \lesssim \| P_M f \|_{L_x^1 L_{yzI}^2}. \end{equation} \end{lemma} \begin{proof} In all of the estimates, the time variables are restricted to the unit-sized interval $I$. The following boundedness statements are equivalent: \begin{itemize} \item $P_M \Lambda: L_x^2L_{yzI}^1 \to L_{\mathbf{x}}^2 $, with operator norm $(\ln^+ M)^2M$, \item $P_M \Lambda^*: L_{\mathbf{x}}^2 \to L_x^2 L_{yzI}^\infty $, with operator norm $(\ln^+ M)^2M$, \item $P_M^2 \Lambda^* \Lambda: L_x^2L_{yzI}^1 \to L_x^2 L_{yzI}^\infty$, with operator norm $(\ln^+ M)^4M^2$, \end{itemize} where $$\Lambda f(\mathbf{x}) = \int_{s=0}^1 U(-s) f(\mathbf{x}, s) \, ds,$$ $$\Lambda^* \phi(\mathbf{x},t) = U(t) \phi(\mathbf{x}),$$ $$\Lambda^*\Lambda f(\mathbf{x},t) = \int_{s=0}^1 U(t-s) f(\mathbf{x},s) \, ds.$$ The kernel of the operator $P_M^2 \Lambda^*\Lambda$ is $$ K_M(\mathbf{x},t) = \int_{|\boldsymbol{\xi}|\sim M} e^{i(\mathbf{x}\cdot \boldsymbol{\xi} + t\xi|\boldsymbol{\xi}|^2)} \, d\boldsymbol{\xi}. $$ To establish that $P_M^2 \Lambda^*\Lambda: L_x^2L_{yzI}^1 \to L_x^2 L_{yzI}^\infty$ is bounded with operator norm $\lesssim (\ln^+ M)^4M^2$, it suffices to show that $$ \|K_M \|_{L_x^1L_{yzI}^\infty} \lesssim (\ln^+ M)^4M^2. $$ This was proved in Ribaud \& Vento, Lemma 3.3.
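For completeness, the equivalence of the three boundedness statements above is the standard $TT^*$ duality: with $T = P_M \Lambda^*$, we have $T^* = \Lambda P_M$, and since $P_M$ commutes with $U(t)$,
$$ \| P_M \Lambda^* \|^2_{L_{\mathbf{x}}^2 \to L_x^2 L_{yzI}^\infty} = \| (P_M \Lambda^*)(P_M \Lambda^*)^* \|_{L_x^2 L_{yzI}^1 \to L_x^2 L_{yzI}^\infty} = \| P_M^2 \Lambda^* \Lambda \|_{L_x^2 L_{yzI}^1 \to L_x^2 L_{yzI}^\infty}, $$
so the bound $(\ln^+ M)^4M^2$ on the last operator is equivalent to the bound $(\ln^+ M)^2M$ on $P_M \Lambda^*$, and the corresponding bound for $P_M \Lambda$ follows by duality.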
Since this establishes that $P_M^2 \Lambda^*\Lambda: L_x^2L_{yzI}^1 \to L_x^2 L_{yzI}^\infty$ is bounded with operator norm $\lesssim (\ln^+ M)^4M^2$, we have the equivalent fact that $P_M \Lambda^*: L_{\mathbf{x}}^2 \to L_x^2 L_{yz I}^\infty$ is bounded with operator norm $\lesssim (\ln^+ M)^2M$, which is precisely \eqref{E:HR20}. The local smoothing estimate from Ribaud \& Vento (and other references) asserts the boundedness of $$ \partial_x \Lambda^*: L_{\mathbf{x}}^2 \to L_x^\infty L_{yz I}^2. $$ Hence, we also have the boundedness of \begin{equation} \label{E:RV1} \partial_x \Lambda: L_x^1 L_{yz I}^2 \to L_{\mathbf{x}}^2. \end{equation} This, combined with the fact that $P_M \Lambda^*: L_{\mathbf{x}}^2 \to L_x^2 L_{yzI}^\infty $ is bounded with operator norm $(\ln^+ M)^2M$, yields the boundedness of $$ P_M \partial_x \Lambda^* \Lambda: L_x^1 L_{yz I}^2 \to L_x^2 L_{yzI}^\infty $$ with operator norm $(\ln^+ M)^2M$. Combining with the Christ-Kiselev lemma gives \eqref{E:HR21}. The standard unitarity property for $U(t)$ implies the boundedness of the map $\Lambda^*: L_{\mathbf{x}}^2 \to L_I^\infty L_{\mathbf{x}}^2$, which together with \eqref{E:RV1} yields the boundedness of $$ \partial_x \Lambda^* \Lambda : L_x^1 L_{yz I}^2 \to L_I^\infty L_{\mathbf{x}}^2. $$ Again combining with the Christ-Kiselev lemma gives \eqref{E:HR21b}. \end{proof} \section{Class B solutions satisfy mass conservation}\label{S:ClassB-mass} In this section, we prove Lemma \ref{L:mass-conservation}, demonstrating that Class B solutions satisfy mass conservation. Let $P_{<N}$ be the Littlewood-Paley projection onto frequencies $\lesssim N$. We note that $P_{<N}^2 \neq P_{<N}$, since the frequency cutoff is smoothed, but nevertheless $P_{<N}^2-P_{<N}$ is a multiplier operator with symbol supported in $|\boldsymbol{\xi}| \sim N$. We define $P_{>N} = I-P_{<N}$. Then $$ \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 = 2 \int P_{<N}u \, \partial_t P_{<N}u \, d\mathbf{x}.
$$ Substituting ZK, we continue as $$ \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 = - 2\int P_{<N} u \, \partial_x \Delta P_{<N}u \, d \mathbf{x} - 2 \int P_{<N} u \, \partial_x P_{<N} u^2 \, d\mathbf{x}, $$ noting that both integrals are finite (absolutely convergent) due to the frequency cutoff (so we are not manipulating infinities!). By integration by parts, $$ \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 = 2\int \nabla P_{<N} u \cdot \partial_x \nabla P_{<N}u \, d \mathbf{x} + 2 \int \partial_x P_{<N}^2 u \, u^2 \, d\mathbf{x}. $$ The first integral is zero, and for the second integral we insert $I = P_{<N}+P_{>N}$ in front of each copy of $u$ and expand to obtain \begin{align*} \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 &= 2 \int \partial_x P_{<N}^2 u \, P_{<N} u \, P_{<N} u \, d\mathbf{x} + 4 \int \partial_x P_{<N}^2 u \, P_{<N} u \, P_{>N} u \, d\mathbf{x} \\ & \qquad + 2 \int \partial_x P_{<N}^2 u \, P_{>N} u \, P_{>N} u \, d\mathbf{x}\,. \end{align*} The key is to notice that the first integral becomes zero when $P_{<N}^2$ is replaced by $P_{<N}$, so \begin{align*} \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 &= -4\int (P_{<N}^2-P_{<N}) u \, P_{<N} u \, \partial_x P_{<N} u \, d\mathbf{x}+ 4 \int \partial_x P_{<N}^2 u \, P_{<N} u \, P_{>N} u \, d\mathbf{x} \\ &\qquad + 2 \int \partial_x P_{<N}^2 u \, P_{>N} u \, P_{>N} u \, d\mathbf{x}. \end{align*} Now all three integrals involve at least one term at frequency $|\boldsymbol{\xi}| \gtrsim N$. We use H\"older as follows for each of the three terms: \begin{align*} \left| \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 \right| &\lesssim \| (P_{<N}^2-P_{<N}) u \|_{L_{\mathbf{x}}^3} \|P_{<N} u\|_{L_{\mathbf{x}}^6} \| \partial_x P_{<N} u\|_{L_{\mathbf{x}}^2} \\ &\qquad + \| \partial_x P_{<N}^2 u \|_{L_{\mathbf{x}}^2} \| P_{<N} u \|_{L_{\mathbf{x}}^6} \| P_{>N} u\|_{L_{\mathbf{x}}^3} \\ &\qquad + \| \partial_x P_{<N}^2 u \|_{L_{\mathbf{x}}^2} \| P_{>N} u \|_{L_{\mathbf{x}}^6} \| P_{>N} u\|_{L_{\mathbf{x}}^3}.
\end{align*} Following with Sobolev embedding, we get \begin{align*} \left| \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 \right| &\lesssim \| (P_{<N}^2-P_{<N}) u \|_{\dot H_\mathbf{x}^{1/2}} \|P_{<N} u\|_{\dot H_\mathbf{x}^1} \| P_{<N}u\|_{\dot H_\mathbf{x}^1} \\ &\quad + \| P_{<N}^2 u \|_{\dot H_\mathbf{x}^1} \| P_{<N} u \|_{\dot H_\mathbf{x}^1} \| P_{>N} u\|_{\dot H_\mathbf{x}^{1/2}} \\ &\quad + \| P_{<N}^2 u \|_{\dot H_\mathbf{x}^1} \| P_{>N} u \|_{\dot H_\mathbf{x}^1} \| P_{>N} u\|_{\dot H_\mathbf{x}^{1/2}}. \end{align*} Since the $\dot H_\mathbf{x}^{1/2}$ norms fall on terms localized at frequencies $\gtrsim N$, we can boost to $\dot H_\mathbf{x}^1$ and gain $N^{-1/2}$, i.e., use $\| P_{>N} u \|_{\dot H_\mathbf{x}^{1/2}} \lesssim N^{-1/2} \| u \|_{\dot H_\mathbf{x}^1}$. This gives $$ \left| \partial_t \| P_{<N} u\|_{L_{\mathbf{x}}^2}^2 \right| \lesssim N^{-1/2} \| u \|_{\dot H_{\mathbf{x}}^1}^3. $$ Now integrate in time, for fixed $t_1<t_2$, to obtain $$ \left| \|P_{<N} u(t_1) \|_{L_{\mathbf{x}}^2}^2 - \|P_{<N} u(t_2) \|_{L_{\mathbf{x}}^2}^2 \right| \lesssim N^{-1/2} \| u \|_{L_{[t_1,t_2]}^\infty \dot H_{\mathbf{x}}^1}^3 |t_2-t_1|. $$ Send $N\to \infty$ to obtain $$ \|u(t_1) \|_{L_{\mathbf{x}}^2}^2 = \|u(t_2) \|_{L_{\mathbf{x}}^2}^2, $$ so the mass at any two times $t_1$ and $t_2$ is the same, completing the proof of Lemma \ref{L:mass-conservation}. \section{Decomposition of orbitally stable solutions} \label{S:geom-decomp} In this section, we introduce three versions of the remainder function, $\epsilon$, $\eta$, and $\zeta$, derive the equation that each of these functions satisfies, and derive the parameter dynamics. Some of these lemmas will be proved only under the assumption that the solution is of Class B. In particular, we will cover the proof of Lemma \ref{L:geom-decomp}.
Note that in Lemma \ref{L:implicit1}, it is possible to use $s,k \ll -1$, since $Q_{c,\mathbf{a}}$, $\partial_c Q_{c,\mathbf{a}}$, $\nabla_{\mathbf{a}} Q_{c,\mathbf{a}}$, etc., are smooth and exponentially decaying in space, and $u$ appears as a dual object in the proof. This will be exploited in Lemma \ref{L:implicit1b}. \begin{lemma} \label{L:implicit1} Suppose $\alpha \ll 1$ and $s, k \in \mathbb{R}$. Suppose $u(\mathbf{x})\in H_{\mathbf{x}}^{s,k}$ (suppressing time dependence) and there are given $\hat c>0$ and $\hat{\mathbf{a}}\in \mathbb{R}^3$ such that $$ \|\hat c^2 u(\hat c \mathbf{x}+\hat{\mathbf{a}}) - Q(\mathbf{x}) \|_{H_{\mathbf{x}}^{s,k}} \leq \alpha. $$ Then there exist $c>0$ and $\mathbf{a}\in \mathbb{R}^3$ with $$ |c-\hat c | \lesssim \alpha \quad \mbox{and} \quad |\mathbf{a} - \hat{\mathbf{a}}| \lesssim \alpha $$ such that, if we define $$ \epsilon(\mathbf{x}) = c^2 u(c\mathbf{x}+\mathbf{a}) - Q(\mathbf{x}), $$ then $\epsilon$ satisfies the orthogonality conditions $$ \langle \epsilon, \nabla Q \rangle =0 \quad \mbox{and} \quad \langle \epsilon, Q^2 \rangle =0. $$ Moreover, this defines an infinitely differentiable mapping $$ H_{\mathbf{x}}^{s,k} \to \mathbb{R}^4 \quad \text{given by} \quad u \mapsto (c, \mathbf{a}). $$ Specifically, each of the derivative maps $c'$, $a_j'$, for $j=1,2,3$, is a Lipschitz continuous map $H_{\mathbf{x}}^{s,k} \to H_{\mathbf{x}}^{-s,-k}$. \end{lemma} \begin{proof} By scaling and translation, we can assume that $\hat c=1$ and $\hat{\mathbf{a}}=0$. Let $Q_{c,\mathbf{a}}(\mathbf{x}) = c^{-2}Q(c^{-1}(\mathbf{x}-\mathbf{a}))$. Then $$ F(u,c,\mathbf{a}) = \begin{bmatrix} \langle u - Q_{c,\mathbf{a}}, \partial_c Q_{c,\mathbf{a}} \rangle \\ \langle u - Q_{c,\mathbf{a}}, \nabla_{\mathbf{a}} Q_{c,\mathbf{a}} \rangle \end{bmatrix} $$ defines a mapping $$ F: H_{\mathbf{x}}^{s,k} \times \mathbb{R}^4 \to \mathbb{R}^4, $$ for which we know that $F(Q,1,0)=0$.
The mapping $F$ is infinitely differentiable in each component ($u$, $c$, $\mathbf{a}$), and each derivative has uniform norms for $\frac12\leq c \leq 2$ and $\mathbf{a}\in \mathbb{R}^3$. We compute the $4$-vector valued first derivative functions as $$ \langle d_u F(u,c,\mathbf{a}), v\rangle = \begin{bmatrix} \langle v , \partial_c Q_{c,\mathbf{a}} \rangle \\ \langle v, \nabla_{\mathbf{a}} Q_{c,\mathbf{a}} \rangle \end{bmatrix}, $$ $$ \partial_c F(u,c,\mathbf{a}) = -\begin{bmatrix} \langle \partial_c Q_{c,\mathbf{a}} , \partial_c Q_{c,\mathbf{a}} \rangle \\ \langle \partial_c Q_{c,\mathbf{a}}, \nabla_{\mathbf{a}} Q_{c,\mathbf{a}} \rangle \end{bmatrix}+ \begin{bmatrix} \langle u - Q_{c,\mathbf{a}}, \partial_c^2 Q_{c,\mathbf{a}} \rangle \\ \langle u - Q_{c,\mathbf{a}}, \partial_c \nabla_{\mathbf{a}} Q_{c,\mathbf{a}} \rangle \end{bmatrix}, $$ $$ \partial_{a_j} F(u,c,\mathbf{a}) = -\begin{bmatrix} \langle \partial_{a_j} Q_{c,\mathbf{a}} , \partial_c Q_{c,\mathbf{a}} \rangle \\ \langle \partial_{a_j} Q_{c,\mathbf{a}}, \nabla_{\mathbf{a}} Q_{c,\mathbf{a}} \rangle \end{bmatrix}+ \begin{bmatrix} \langle u - Q_{c,\mathbf{a}}, \partial_{a_j}\partial_c Q_{c,\mathbf{a}} \rangle \\ \langle u - Q_{c,\mathbf{a}}, \partial_{a_j} \nabla_{\mathbf{a}} Q_{c,\mathbf{a}} \rangle \end{bmatrix}. $$ It is straightforward to check that the $4\times 4$ matrix-valued map $\partial_{c,\mathbf{a}}F(u,c,\mathbf{a})$ is invertible at $(u,c, \mathbf{a}) = (Q,1,0)$, and thus, by the implicit function theorem, the mappings $u \mapsto c(u)$ and $u \mapsto \mathbf{a}(u)$ that satisfy the $4$-vector equation $$ F(u, c(u), \mathbf{a}(u))=0 $$ exist and are unique. 
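To spell out the invertibility at the base point: when $(u, c, \mathbf{a}) = (Q, 1, 0)$, the terms containing $u - Q_{c,\mathbf{a}}$ vanish, and $\partial_c Q_{c,\mathbf{a}}|_{(1,0)} = -\Lambda Q$, $\partial_{a_j} Q_{c,\mathbf{a}}|_{(1,0)} = -\partial_j Q$, so that
$$ \partial_{c,\mathbf{a}} F(Q,1,0) = - \begin{bmatrix} \| \Lambda Q \|_{L^2}^2 & & & \\ & \|Q_x\|_{L^2}^2 & & \\ & & \|Q_y\|_{L^2}^2 & \\ & & & \|Q_z\|_{L^2}^2 \end{bmatrix}, $$
since the off-diagonal pairings $\langle \Lambda Q, \partial_j Q \rangle$ and $\langle \partial_j Q, \partial_k Q \rangle$ (for $j \neq k$) vanish by parity. This diagonal matrix is invertible, and invertibility persists for $(u,c,\mathbf{a})$ near $(Q,1,0)$ by continuity of the derivative maps.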
By implicit differentiation, the following $4$-vector valued identity holds \begin{align*} 0 &= \langle d_u [F(u,c(u),\mathbf{a}(u))], v\rangle \\ &= \langle (d_uF)(u,c(u),\mathbf{a}(u)), v\rangle + (\partial_c F)(u,c(u),\mathbf{a}(u)) \langle c'(u), v\rangle \\ & \qquad + \sum_{j=1}^3 (\partial_{a_j} F)(u,c(u),\mathbf{a}(u)) \langle a_j'(u), v\rangle. \end{align*} This is actually four equations in the four unknowns $\langle c'(u), v\rangle$ and $\langle a_j'(u), v\rangle$, for $j=1, 2, 3$. Due to the invertibility of $\partial_{c,\mathbf{a}}F(u,c,\mathbf{a})$, we can solve for $\langle c'(u), v\rangle$ and $\langle a_j'(u), v\rangle$, for $j=1, 2, 3$. We obtain that $c'(u)$ is a bounded linear map $H_{\mathbf{x}}^{s,k} \to \mathbb{R}$, and hence is identified with an element of $H_{\mathbf{x}}^{-s,-k}$. Thus, $c'$ is itself a Lipschitz continuous map $c':H_{\mathbf{x}}^{s,k} \to H_{\mathbf{x}}^{-s,-k}$. \end{proof} \begin{lemma} \label{L:implicit1b} There exists $\alpha>0$ sufficiently small so that, if $u$ is a Class B $\alpha$-orbitally stable solution to the 3D ZK, then there exist \emph{unique} translation $\mathbf{a}(t)$ and scale parameters $c(t)>0$ so that $\epsilon$ defined by $$\epsilon(\mathbf{x},t) = c(t)^2u(c(t)\mathbf{x}+\mathbf{a}(t),t) - Q(\mathbf{x})$$ satisfies, for all $t$, the orthogonality conditions $$\langle \epsilon(t), \nabla Q \rangle =0 \quad \mbox{and} \quad \langle \epsilon(t), Q^2 \rangle =0.$$ The translation and scale parameters $\mathbf{a}(t)=(a_x(t),a_y(t),a_z(t))$ and $c(t)$ are $C^{1,\frac23}$ functions. \end{lemma} We remark that even though the function space mappings $c: H^{s,k} \to \mathbb{R}$ and $a_j:H^{s,k} \to \mathbb{R}$ in Lemma \ref{L:implicit1} are infinitely differentiable, the compositions $c(t) = c(u(t))$ and $a_j(t) = a_j(u(t))$ are not more than once differentiable, since we do not have a meaning for $u''(t)$ when $u(t)$ is a Class B solution.
Lemma \ref{L:implicit1b} asserts that these parameters have H\"older continuous first derivatives of order $\frac23$, and this seems to be the best we can do. To see that $u''(t)$ is not defined, formally compute, by substitution of ZK, $$ \partial_t^2 u = -\partial_t (\partial_x \Delta u + \partial_x (u^2))= - \partial_x \Delta \partial_t u - 2\partial_x( u \, \partial_t u). $$ All that we know is $u\in H_{\mathbf{x}}^1$ and $\partial_t u \in H_{\mathbf{x}}^{-2}$, and there is no way to define the product of two such functions in 3D. \begin{proof}[Proof of Lemma \ref{L:implicit1b}] We apply Lemma \ref{L:implicit1} at each time $t$ with $s=-4$ and $k=0$. Since in Lemma \ref{L:implicit1}, $c$ and $\mathbf{a}$ are functions of $u$, we have $c: H_{\mathbf{x}}^s \to \mathbb{R}$, $$ c': H_{\mathbf{x}}^s \to (H_{\mathbf{x}}^s)^* \simeq H_{\mathbf{x}}^{-s}, $$ and for $u_1, u_2 \in H_{\mathbf{x}}^s$, $$ \| c'(u_2)-c'(u_1) \|_{H_{\mathbf{x}}^{-s}} \lesssim \| u_2 - u_1 \|_{H_{\mathbf{x}}^s}. $$ Similar statements hold for $a_j'$. Taking $c(t) = c(u(t))$ and $\mathbf{a}(t) = \mathbf{a}(u(t))$, we obtain $$ c'(t) = \langle c'(u(t)),u'(t)\rangle \,, \qquad a_j'(t) = \langle a_j'(u(t)), u'(t) \rangle. $$ With our choice of $s=-4$, we have $c'(u(t)) \in H_{\mathbf{x}}^4$ and $a_j'(u(t)) \in H_{\mathbf{x}}^4$, and thus, we need to estimate $u'(t) \in H_{\mathbf{x}}^{-4}$. Since the argument for $a_j'(t)$ is similar, we only write the argument for $c'(t)$.
Note that for $t_1<t_2$, \begin{align*} c'(t_2) - c'(t_1) &= \langle c'(u(t_2)),u'(t_2) \rangle - \langle c'(u(t_1)),u'(t_1)\rangle\\ &= \langle c'(u(t_2))-c'(u(t_1)), u'(t_2) \rangle + \langle c'(u(t_1)), u'(t_2)-u'(t_1) \rangle, \end{align*} and thus, \begin{align*} |c'(t_2)-c'(t_1)| &\lesssim \| c'(u(t_2)) - c'(u(t_1))\|_{H_{\mathbf{x}}^4} \| u'(t_2) \|_{H_{\mathbf{x}}^{-4}} + \| c'(u(t_1)) \|_{H_{\mathbf{x}}^4} \|u'(t_2)-u'(t_1) \|_{H_{\mathbf{x}}^{-4}} \\ &\lesssim \|u(t_2) - u(t_1) \|_{H_{\mathbf{x}}^{-4}} \| u'(t_2) \|_{H_{\mathbf{x}}^{-4}} + \|u'(t_2)-u'(t_1) \|_{H_{\mathbf{x}}^{-4}} \, . \end{align*} By \eqref{E:wk-106b} and \eqref{E:wk-107b}, $$ |c'(t_2)-c'(t_1)| \lesssim |t_2-t_1|^{2/3}. $$ \end{proof} Now we prove the remaining properties of the parameters $c(t)=c(u(t))$ and $\mathbf{a}(t)=\mathbf{a}(u(t))$ asserted in Lemma \ref{L:geom-decomp}. Let $u$ be a solution to the 3D ZK that is $\alpha$-orbitally stable and let $c(t)$ and $\mathbf{a}(t)$ be the unique parameters so that the orthogonality conditions $$ \langle \epsilon, Q^2 \rangle =0 \quad \mbox{and} \quad \langle \epsilon, \nabla Q \rangle=0 $$ hold. Let \begin{equation} \label{E:ep-def} \epsilon(\mathbf{x},t) = c(t)^2 \,u\left(c(t) \mathbf{x} + \mathbf{a}(t),t\right) - Q(\mathbf{x}) \end{equation} and $$ Q_{c, \mathbf{a}}(\mathbf{x}) = c^{-2}Q(c^{-1}(\mathbf{x}-\mathbf{a})). $$ We further extend this notational convention to an arbitrary function $f(\mathbf{x})$, denoting \begin{equation} \label{E:shift-notation} f_{c, \mathbf{a}}(\mathbf{x}) \stackrel{\rm{def}}{=} c^{-2}f(c^{-1}(\mathbf{x}-\mathbf{a})). \end{equation} In particular, $$ \nabla Q_{c,\mathbf{a}} = c^{-1} (\nabla Q)_{c,\mathbf{a}} \quad \mbox{and} \quad \partial_c Q_{c,\mathbf{a}} = -c^{-1} (\Lambda Q)_{c,\mathbf{a}}.
$$ Rewriting \eqref{E:ep-def} as $$ u(\mathbf{x},t) = Q_{c,\mathbf{a}}(\mathbf{x}) + c^{-2}\epsilon(c^{-1}(\mathbf{x}-\mathbf{a})), $$ and substituting into the 3D ZK equation, using the equation for $Q$, we obtain the equation for $\epsilon$: \begin{align} \label{ep-eq} c^3 \partial_t \epsilon = \partial_x \mathcal{L}\epsilon + c^2c' \Lambda Q + c^2(\mathbf{a}' - c^{-2} \mathbf{i}) \cdot \nabla Q + c^2c' \Lambda \epsilon + c^2(\mathbf{a}' - c^{-2}\mathbf{i}) \cdot \nabla \epsilon - \partial_x \epsilon^2, \end{align} where $$ \mathcal{L} = I - \Delta - 2Q. $$ Now let \begin{equation} \label{E:eta-def} \eta(\mathbf{x},t) = c^{-2}\epsilon(c^{-1}(\mathbf{x}-\mathbf{a})) \end{equation} so that $$ u(\mathbf{x},t) = Q_{c, \mathbf{a}}(\mathbf{x}) + \eta(\mathbf{x},t). $$ The equation for $\eta$ is \begin{equation} \label{E:eta-eq} \partial_t \eta = - \partial_x \Delta \eta - 2 \partial_x ( Q_{c,\mathbf{a}} \eta) - \partial_x \eta^2 + c'c^{-1} (\Lambda Q)_{c,\mathbf{a}} + c^{-1}(\mathbf{a}' - c^{-2} \mathbf{i}) \cdot (\nabla Q)_{c,\mathbf{a}}. \end{equation} \begin{lemma} \label{L:ODE-bounds} Suppose that $u$ is a Class B, $\alpha$-orbitally stable solution to the 3D ZK with associated parameters $\mathbf{a}(t)$ and $c(t)$ as in Lemma \ref{L:geom-decomp}. Let $b(t) \stackrel{\rm{def}}{=} \|\epsilon(t) \|_{L_{\mathbf{x}}^2}$. Then $\mathbf{a}(t)=(a_x(t),a_y(t),a_z(t))$ and $c(t)$ are $C^{1,\frac23}$ functions, and moreover, \begin{equation} \label{E:param-ODEs} \begin{aligned} &\left|c^2c' - \frac{\langle \epsilon, \mathcal{L} \partial_x (Q^2)\rangle}{ \langle \Lambda Q, Q^2 \rangle} \right| \lesssim b^2, && \left| c^2(a_x' -c^{-2}) - \frac{ \langle \epsilon, \mathcal{L} Q_{xx}\rangle }{ \|Q_x\|_{L^2}^2 } \right| \lesssim b^2, \\ &\left| c^2a_y' - \frac{ \langle \epsilon, \mathcal{L} Q_{xy}\rangle }{ \|Q_y\|_{L^2}^2 } \right| \lesssim b^2, && \left| c^2a_z' - \frac{ \langle \epsilon, \mathcal{L} Q_{xz}\rangle }{ \|Q_z\|_{L^2}^2 } \right| \lesssim b^2. 
\end{aligned} \end{equation} \end{lemma} \begin{proof} Multiplying equation \eqref{ep-eq} by $Q^2$ and $Q_{x}$, $Q_{y}$, $Q_{z}$, respectively, and integrating by parts, we formally obtain the following equations (with regularization arguments to make computations rigorous) \begin{align*} 0 &= -\langle \epsilon, \mathcal{L} \partial_{x} (Q^2)\rangle + c^2c' \langle \epsilon, Q^2-\Lambda Q^2\rangle +c^2c' \langle \Lambda Q, Q^2\rangle\\ &+c^2(a'_{x} -c^{-2}) \left[\langle Q_{x},Q^2\rangle-\langle \epsilon, (Q^2)_{x}\rangle\right]\\ &+c^2a'_{y} \left[\langle Q_{y},Q^2\rangle-\langle \epsilon, (Q^2)_{y}\rangle\right]+c^2a'_{z} \left[\langle Q_{z},Q^2\rangle-\langle \epsilon, (Q^2)_{z}\rangle\right] +\langle \epsilon^2, \partial_{x}(Q^2)\rangle\\ \end{align*} and \begin{align*} 0 &= -\langle \epsilon, \mathcal{L} Q_{xx}\rangle + c^2c' \langle \epsilon, Q_{x}-\Lambda Q_{x}\rangle +c^2c' \langle \Lambda Q, Q_{x}\rangle\\ &+c^2(a'_{x} -c^{-2}) \left[\langle Q_{x},Q_{x}\rangle-\langle \epsilon, Q_{xx}\rangle\right]\\ &+c^2a'_{y} \left[\langle Q_{y},Q_{x}\rangle-\langle \epsilon, Q_{xy}\rangle\right]+c^2a'_{z} \left[\langle Q_{z},Q_{x}\rangle-\langle \epsilon, Q_{xz}\rangle\right] +\langle \epsilon^2, Q_{xx}\rangle, \end{align*} similarly, \begin{align*} 0 &= -\langle \epsilon, \mathcal{L} Q_{yx}\rangle + c^2c' \langle \epsilon, Q_{y}-\Lambda Q_{y}\rangle +c^2c' \langle \Lambda Q, Q_{y}\rangle\\ &+c^2(a'_{x} -c^{-2}) \left[\langle Q_{x},Q_{y}\rangle-\langle \epsilon, Q_{yx}\rangle\right]\\ &+c^2a'_{y} \left[\langle Q_{y},Q_{y}\rangle-\langle \epsilon, Q_{yy}\rangle\right]+c^2a'_{z} \left[\langle Q_{z},Q_{y}\rangle-\langle \epsilon, Q_{yz}\rangle\right] +\langle \epsilon^2, Q_{yx}\rangle, \end{align*} and \begin{align*} 0 &= -\langle \epsilon, \mathcal{L} Q_{zx}\rangle + c^2c' \langle \epsilon, Q_{z}-\Lambda Q_{z}\rangle +c^2c' \langle \Lambda Q, Q_{z}\rangle\\ &+c^2(a'_{x} -c^{-2}) \left[\langle Q_{x},Q_{z}\rangle-\langle \epsilon, Q_{zx}\rangle\right]\\ &+c^2a'_{y} 
\left[\langle Q_{y},Q_{z}\rangle-\langle \epsilon, Q_{zy}\rangle\right]+c^2a'_{z} \left[\langle Q_{z},Q_{z}\rangle-\langle \epsilon, Q_{zz}\rangle\right]+\langle \epsilon^2, Q_{zx}\rangle. \end{align*} Noting that $\langle \Lambda Q, Q^2\rangle=\|Q\|_{L^3}^3$ and $\langle \Lambda Q, \nabla Q \rangle=0$ ($L^2$-critical case), we deduce the following linear system \begin{equation}\label{System} (A - B(\epsilon)) \begin{bmatrix} c^2c' \\ c^2(a'_{x} -c^{-2}) \\ c^2a'_{y}\\c^2a'_{z} \end{bmatrix} = \begin{bmatrix} \langle \epsilon, \mathcal{L} \partial_{x} (Q^2)\rangle \\ \langle \epsilon, \mathcal{L} Q_{xx}\rangle \\ \langle \epsilon, \mathcal{L} Q_{xy}\rangle \\ \langle \epsilon, \mathcal{L} Q_{xz}\rangle \end{bmatrix} - \begin{bmatrix} \langle \epsilon^2, \partial_{x}(Q^2)\rangle \\ \langle \epsilon^2, Q_{xx}\rangle \\ \langle \epsilon^2, Q_{xy}\rangle \\ \langle \epsilon^2, Q_{xz}\rangle \end{bmatrix}, \end{equation} where $$ A = \begin{bmatrix} \|Q\|_{L^3}^3 & & & \\ & \|Q_{x}\|_{L^2}^2 && \\ & & \|Q_{y}\|_{L^2}^2& \\ &&& \|Q_{z}\|_{L^2}^2 \end{bmatrix} $$ and $$ B(\epsilon) = \begin{bmatrix} \langle \epsilon, \Lambda Q^2 - Q^2\rangle & \langle \epsilon, (Q^2)_{x}\rangle & \langle \epsilon, (Q^2)_{y}\rangle & \langle \epsilon, (Q^2)_{z}\rangle \\ \langle \epsilon, \Lambda Q_{x} - Q_{x}\rangle & \langle \epsilon, Q_{xx}\rangle & \langle \epsilon, Q_{xy}\rangle & \langle \epsilon, Q_{xz}\rangle \\ \langle \epsilon, \Lambda Q_{y} - Q_{y}\rangle & \langle \epsilon, Q_{xy}\rangle & \langle \epsilon, Q_{yy}\rangle & \langle \epsilon, Q_{yz}\rangle\\ \langle \epsilon, \Lambda Q_{z} - Q_{z}\rangle & \langle \epsilon, Q_{xz}\rangle & \langle \epsilon, Q_{yz}\rangle & \langle \epsilon, Q_{zz}\rangle \end{bmatrix}. $$ Note that the matrix $B(\epsilon)$ has norm $\|B(\epsilon)\|\lesssim \| \epsilon \|_{L_t^\infty L_{\mathbf{x}}^2}$. 
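This operator norm bound follows entrywise from Cauchy--Schwarz, since each entry of $B(\epsilon)$ pairs $\epsilon$ against a fixed Schwartz function built from $Q$ (a brief justification of the displayed bound, not spelled out above):

```latex
|B(\epsilon)_{jk}| = |\langle \epsilon, h_{jk} \rangle|
\leq \| \epsilon \|_{L_t^\infty L_{\mathbf{x}}^2} \, \| h_{jk} \|_{L_{\mathbf{x}}^2},
\qquad h_{jk} \in \{ \Lambda Q^2 - Q^2, \ (Q^2)_x, \ \ldots, \ Q_{zz} \},
```

and summing over the sixteen entries gives $\|B(\epsilon)\| \lesssim \| \epsilon \|_{L_t^\infty L_{\mathbf{x}}^2}$ with a constant determined by $Q$ alone.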
Therefore, if $b= \| \epsilon \|_{L_t^\infty L_{\mathbf{x}}^2} \ll 1$, then the matrix $I-A^{-1}B(\epsilon)$ is invertible, and moreover, its inverse is given by the Neumann expansion $$ (I-A^{-1}B(\epsilon))^{-1}=I+\sum_{k=1}^{\infty}(A^{-1}B(\epsilon))^k. $$ Setting $C(\epsilon)=\sum_{k=1}^{\infty}(A^{-1}B(\epsilon))^k$, the system \eqref{System} can be rewritten as $$ \begin{bmatrix} c^2c' - \frac{\langle \epsilon, \mathcal{L} \partial_x (Q^2)\rangle}{ \|Q\|_{L^3}^3} \\ c^2(a'_{x} -c^{-2}) - \frac{ \langle \epsilon, \mathcal{L} Q_{xx}\rangle }{ \|Q_{x}\|_{L^2}^2 } \\ c^2a'_{y} - \frac{ \langle \epsilon, \mathcal{L} Q_{xy}\rangle }{ \|Q_{y}\|_{L^2}^2 }\\ c^2a'_{z}- \frac{ \langle \epsilon, \mathcal{L} Q_{xz}\rangle }{ \|Q_{z}\|_{L^2}^2 } \end{bmatrix} = C(\epsilon)A^{-1}\begin{bmatrix} \langle \epsilon, \mathcal{L} \partial_{x} (Q^2)\rangle \\ \langle \epsilon, \mathcal{L} Q_{xx}\rangle \\ \langle \epsilon, \mathcal{L} Q_{xy}\rangle \\ \langle \epsilon, \mathcal{L} Q_{xz}\rangle \end{bmatrix} - (I-A^{-1}B(\epsilon))^{-1}A^{-1}\begin{bmatrix} \langle \epsilon^2, \partial_{x}(Q^2)\rangle \\ \langle \epsilon^2, Q_{xx}\rangle \\ \langle \epsilon^2, Q_{xy}\rangle \\ \langle \epsilon^2, Q_{xz}\rangle \end{bmatrix}. $$ Finally, since $\|C(\epsilon)\|\lesssim b$ and $\|(I-A^{-1}B(\epsilon))^{-1}\|\lesssim 1$, we deduce estimates \eqref{E:param-ODEs}.
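The quantitative mechanism in the last step can be sketched as follows (with constants left implicit above): since $A$ is a fixed invertible diagonal matrix, once $b$ is small enough that $\|A^{-1}\| \, \|B(\epsilon)\| \leq \frac12$, the geometric series bound gives

```latex
\| C(\epsilon) \|
\leq \sum_{k=1}^{\infty} \big( \|A^{-1}\| \, \|B(\epsilon)\| \big)^{k}
= \frac{ \|A^{-1}\| \, \|B(\epsilon)\| }{ 1 - \|A^{-1}\| \, \|B(\epsilon)\| }
\leq 2 \, \|A^{-1}\| \, \|B(\epsilon)\| \lesssim b,
```

and hence also $\| I + C(\epsilon) \| \leq 1 + \| C(\epsilon) \| \lesssim 1$.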
\end{proof} Note that the estimates in \eqref{E:param-ODEs} recast in terms of $\eta$ are the following (where we use the notation \eqref{E:shift-notation}), \begin{equation} \label{E:eta-ODEs} \begin{aligned} &\left|c c' - \frac{\langle \eta, (\mathcal{L} \partial_x (Q^2))_{c,\mathbf{a}} \rangle}{ \langle \Lambda Q, Q^2 \rangle} \right| \lesssim b^2, && \left| c(a_x' -c^{-2}) - \frac{ \langle \eta, (\mathcal{L} Q_{xx})_{c,\mathbf{a}} \rangle }{ \|Q_x\|_{L^2}^2 } \right| \lesssim b^2, \\ &\left| c a_y' - \frac{ \langle \eta, (\mathcal{L} Q_{xy})_{c,\mathbf{a}} \rangle }{ \|Q_y\|_{L^2}^2 } \right| \lesssim b^2, && \left| c a_z' - \frac{ \langle \eta, (\mathcal{L} Q_{xz})_{c,\mathbf{a}} \rangle }{ \|Q_z\|_{L^2}^2 } \right| \lesssim b^2. \end{aligned} \end{equation} For convenience, define the functions $$ f\stackrel{\rm{def}}{=} \frac{\mathcal{L}\partial_x (Q^2)}{\langle \Lambda Q, Q^2\rangle} \quad \mbox{and} \quad \mathbf{g} = \left( \frac{ \mathcal{L}Q_{xx} }{ \|Q_x\|_{L_{\mathbf{x}}^2}^2}, \frac{\mathcal{L}Q_{xy}}{ \|Q_y\|_{L_{\mathbf{x}}^2}^2}, \frac{\mathcal{L}Q_{xz} }{ \|Q_z\|_{L_{\mathbf{x}}^2}^2}\right). $$ Using \eqref{E:eta-ODEs} as a guide, rewrite \eqref{E:eta-eq} as \begin{equation} \label{E:eta-eq2} \begin{aligned}[t] \partial_t \eta &= - \partial_x \Delta \eta - 2 \partial_x ( Q_{c,\mathbf{a}} \eta) + c^{-2} \langle \eta, f_{c,\mathbf{a}}\rangle (\Lambda Q)_{c,\mathbf{a}} + c^{-2}\langle \eta, \mathbf{g}_{c,\mathbf{a}} \rangle \cdot (\nabla Q)_{c,\mathbf{a}} \\ & \qquad - \partial_x \eta^2 + (c'c^{-1} - c^{-2} \langle \eta, f_{c, \mathbf{a}}\rangle ) (\Lambda Q)_{c,\mathbf{a}} \\ & \qquad + (c^{-1}(\mathbf{a}' - c^{-2} \mathbf{i}) - c^{-2}\langle \eta, \mathbf{g}_{c,\mathbf{a}}\rangle ) \cdot (\nabla Q)_{c,\mathbf{a}} \end{aligned} \end{equation} so that now the top line consists of linear terms in $\eta$ and the second and third lines are quadratic. Let $$ B = \|b(t) \|_{L_t^\infty}, $$ and define $$ B \zeta(t) \stackrel{\rm{def}}{=} \eta(t).
$$ Substituting into \eqref{E:eta-eq2}, we obtain \begin{equation} \label{E:zeta-1} \begin{aligned}[t] \partial_t \zeta &= - \partial_x \Delta \zeta - 2 \partial_x ( Q_{c,\mathbf{a}} \zeta) + c^{-2} \langle \zeta, f_{c,\mathbf{a}}\rangle (\Lambda Q)_{c,\mathbf{a}} + c^{-2}\langle \zeta, \mathbf{g}_{c,\mathbf{a}} \rangle \cdot (\nabla Q)_{c,\mathbf{a}} \\ & \qquad - B \partial_x \zeta^2 + B \omega_c (\Lambda Q)_{c,\mathbf{a}} + B\boldsymbol{\omega}_{\mathbf{a}} \cdot (\nabla Q)_{c,\mathbf{a}}, \end{aligned} \end{equation} where \begin{equation} \label{E:omegas} \omega_c \stackrel{\rm{def}}{=} B^{-2}(c'c^{-1} - c^{-2} B\langle \zeta, f_{c, \mathbf{a}}\rangle)\quad \mbox{and} \quad \boldsymbol{\omega}_{\mathbf{a}} \stackrel{\rm{def}}{=} B^{-2}(c^{-1}(\mathbf{a}' - c^{-2} \mathbf{i}) - Bc^{-2}\langle \zeta, \mathbf{g}_{c,\mathbf{a}}\rangle). \end{equation} By \eqref{E:eta-ODEs}, we have $$ |\omega_c| \lesssim 1 \quad \mbox{and} \quad |\boldsymbol{\omega}_{\mathbf{a}}| \lesssim 1. $$ \section{Monotonicity: $I_\pm$ lemma for $u$, $J_{\pm}$ lemma for $\eta$} \label{S:monotonicity} In this section, we introduce key monotonicity lemmas for controlling the movement of mass of $u$ and $\eta$. Monotonicity properties of this type have been used in various ZK contexts in \cite{FHRY, FHR2, FHR3}. The lemmas below will be needed in later sections. \begin{lemma}[weighted Gagliardo-Nirenberg] \label{L:weighted-GN} For a weight function $\psi(\mathbf{x})>0$ such that pointwise $|\nabla \psi (\mathbf{x})| \lesssim \psi(\mathbf{x})$, and $E\subset \mathbb{R}^3$ any measurable subset, \begin{equation} \label{E:wk-111} \int_{E} \psi |u|^3 \, d \mathbf{x} \lesssim \left( \int_{E} |u|^2 \, d\mathbf{x} \right)^{1/2} \left( \int \psi |u|^2 \, d\mathbf{x} \right)^{1/4} \left( \int \psi (|\nabla u|^2 + |u|^2) \, d \mathbf{x} \right)^{3/4}. \end{equation} The estimate holds with constant independent of $E$.
\end{lemma} \begin{proof} First, split as follows $$ \int_E \psi |u|^3 \,d \mathbf{x} = \int_E |u| \cdot \psi^{1/4} |u|^{1/2} \cdot \psi^{3/4} |u|^{3/2} \,d \mathbf{x}. $$ Applying H\"older with norms $L^2$, $L^4$, and $L^4$, we get \begin{equation} \label{E:wk-113} \int_E \psi |u|^3 \,d \mathbf{x} \leq \left( \int_E |u|^2\, d\mathbf{x} \right)^{1/2} \left( \int \psi |u|^2\, d\mathbf{x} \right)^{1/4} \left( \int \psi^3 |u|^6 \, d \mathbf{x} \right)^{1/4}. \end{equation} Applying Sobolev embedding for the last term, we have $$ \left( \int \psi^3 |u|^6 \, d \mathbf{x} \right)^{1/4} = \|\psi^{1/2} u \|_{L^6}^{3/2} \lesssim \| \nabla[\psi^{1/2} u ] \|_{L^2}^{3/2}. $$ Distributing the derivative and using that $|\nabla \psi| \lesssim \psi$, it follows that $$ \left( \int \psi^3 |u|^6 \, d \mathbf{x} \right)^{1/4} \lesssim \left( \int \psi( |\nabla u|^2+|u|^2) \, d \mathbf{x} \right)^{3/4}. $$ Combining with \eqref{E:wk-113} yields \eqref{E:wk-111}. \end{proof} Recall that if $u(t)$ is a Class B solution to the 3D ZK with $M(u)=M(Q)$ that is $\alpha$-orbitally stable for $\alpha \ll 1$, and $\mathbf{a}(t)$ and $c(t)$ are the unique parameters as in Lemma \ref{L:geom-decomp}, then $$ |c(t)-1| \leq \alpha $$ and, with $\mathbf{i}=(1,0,0)$, by \eqref{E:param-ODEs} in Lemma \ref{L:ODE-bounds}, we get \begin{equation} \label{E:mon1} |\mathbf{a}'(t) -\mathbf{i} | \lesssim \alpha. \end{equation} By Taylor expansion, $Q(c\mathbf{x}) = Q(\mathbf{x}) + (c-1) \mathbf{x} \cdot \nabla Q(\mathbf{x}) + \cdots $, and thus, $$ \| c^{-2} Q(c^{-1} \mathbf{x}) - Q(\mathbf{x}) \|_{H_{\mathbf{x}}^1} \lesssim \alpha. $$ It follows that \begin{equation} \label{E:mon2} \| u(\mathbf{x}+\mathbf{a}(t), t) - Q(\mathbf{x}) \|_{H_{\mathbf{x}}^1} \lesssim \alpha. \end{equation} For the purposes of the following lemma, let $\kappa$ be a constant larger than both implicit constants in \eqref{E:mon1} and \eqref{E:mon2}. 
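The Taylor-expansion step can be made quantitative with a one-line mean value argument (a sketch, using the identity $\partial_c Q_{c,\mathbf{a}} = -c^{-1} (\Lambda Q)_{c,\mathbf{a}}$ recorded earlier):

```latex
\| c^{-2} Q(c^{-1} \mathbf{x}) - Q(\mathbf{x}) \|_{H_{\mathbf{x}}^1}
= \Big\| \int_1^c \partial_{\tilde c} Q_{\tilde c, \mathbf{0}} \, d\tilde c \Big\|_{H_{\mathbf{x}}^1}
\leq |c-1| \sup_{|\tilde c - 1| \leq \alpha} \big\| \tilde c^{-1} (\Lambda Q)_{\tilde c, \mathbf{0}} \big\|_{H_{\mathbf{x}}^1}
\lesssim |c-1| \leq \alpha,
```

and combining this with the orbital stability bound via the triangle inequality yields \eqref{E:mon2}.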
\begin{figure}[ht] \includegraphics[scale=0.8]{ZK-3d-fig-monotonicity1.pdf} \caption{The $I_\pm$ estimates. The vertical line $x=0$ is the soliton center. The lines $x = -\lambda(t-t_0)+r$ for $t<t_0$ and $x = -\lambda(t-t_0)-r$ for $t>t_0$ are the $\phi_\pm$ weight transition lines in \eqref{E:Ip-right}, \eqref{E:Im-right} and \eqref{E:Ip-left}, \eqref{E:Im-left}, respectively. Note that this depiction is for $(y,z)=(0,0)$. Away from $(y,z)=(0,0)$, the weight transition lines are shifted to the left (for $\theta\geq 0$) by $\tan \theta \sqrt{1+y^2+z^2}$.} \label{F:I} \end{figure} \begin{lemma}[conic $I_\pm$ estimates] \label{L:Ipm-estimates} Let $u(t)$ be a Class B solution to the 3D ZK with $M(u)=M(Q)$, that is $\alpha$-orbitally stable for $\alpha \ll 1$, and let $\mathbf{a}(t)$ and $c(t)$ be the unique parameters as in Lemma \ref{L:geom-decomp}. Let $\delta$ be a constant satisfying $0<16\kappa \alpha \leq \delta \ll 1$. Let $$|\theta| \leq \frac{\pi}{3}-\delta$$ be an angle and fix a speed constant $\lambda$ satisfying \begin{equation} \label{E:mon4} \delta \leq \lambda \leq 1-\delta \end{equation} and fix a shift distance $r>0$. For $K\geq 4\delta^{-1}$, let \begin{equation} \label{E:Ipm-de} I_{\pm, \theta, r, t_0} (t) = \int_{\mathbb{R}^3} \phi_\pm \left( \cos \theta(x-r+\lambda(t-t_0)) + \sin \theta \sqrt{1+y^2+z^2} \right) u^2(\mathbf{x}+\mathbf{a}(t), t) \, d\mathbf{x}, \end{equation} where $$ \phi_+(x) = \frac{2}{\pi} \operatorname{arctan}(e^{x/K}) \,, \qquad \phi_-(x) = \phi_+(-x) $$ so that $\phi_+(x)$ increases from $0$ to $1$ and $\phi_-(x)$ decreases from $1$ to $0$. Suppose that $$ t_{-1}<t_0<t_1. $$ The estimates for $I_+$ bound the \emph{future in terms of the past}, see Figure \ref{F:I}. 
We have \begin{equation} \label{E:Ip-right} I_{+,\theta, r,t_0}(t_0) \leq I_{+,\theta, r,t_0}(t_{-1}) + C e^{-\delta r} \end{equation} and \begin{equation} \label{E:Ip-left} I_{+,\theta, -r,t_0}(t_1) \leq I_{+,\theta, -r,t_0}(t_0) + Ce^{-\delta r} \end{equation} for some $C$ depending on $\delta$ and $K$. The estimates for $I_-$ bound the \emph{past in terms of the future}. We have \begin{equation} \label{E:Im-left} I_{-,\theta,-r,t_0}(t_0) \leq I_{-,\theta,-r,t_0}(t_1) + Ce^{-\delta r} \end{equation} and \begin{equation} \label{E:Im-right} I_{-,\theta, r,t_0}(t_{-1}) \leq I_{-,\theta, r,t_0}(t_0) + Ce^{-\delta r}. \end{equation} \end{lemma} \begin{remark} For Lemma \ref{L:Ipm-estimates}, one needs only to assume that $u$ is a Class B solution, since the calculations in the proof can be reproduced using frequency projected regularizations, and the errors managed as in the proof of mass conservation for Class B solutions in \S\ref{S:ClassB-mass}. We will not carry out the details. \end{remark} \begin{proof}[Proof of Lemma \ref{L:Ipm-estimates}] We consider first the case $\phi=\phi_+$ and $I=I_+$. The estimates for $\phi_-$ and $I_-$ follow by time inversion, as explained at the end of the proof. Note that $$ \phi'(\omega) = \frac{1}{\pi K} \operatorname{sech}(\omega/K) $$ and $$ |\phi''(\omega)| \leq \frac{1}{K} |\phi'| \,, \qquad |\phi'''(\omega)| \leq \frac{1}{K^2} |\phi'|. $$ In the following, $$ \phi(\cdots) = \phi( \cos \theta(x-r+\lambda(t-t_0)) + \sin \theta \sqrt{1+y^2+z^2} ). $$ Before proceeding, let us note that $$ \nabla [\phi(\cdots)] = ( \cos \theta, \frac{y}{\sqrt{1+y^2+z^2}} \sin \theta, \frac{z}{\sqrt{1+y^2+z^2}} \sin \theta) \phi'(\cdots), $$ and thus, $$ | (\mathbf{a}'- \mathbf{i}) \cdot \nabla[ \phi(\cdots) ]| \leq \alpha \kappa \phi' \leq \frac{\delta}{16} \phi'.
$$ Also note that, by integration by parts \begin{align*} \hspace{0.3in}&\hspace{-0.3in} -2\int \phi \, u \, \partial_x \Delta u \,d \mathbf{x} \\ &= \int \left\{ -\partial_x[ \phi(\cdots)] (3u_x^2+u_y^2+u_z^2) - 2\partial_y[\phi(\cdots)] u_xu_y - 2\partial_z[ \phi(\cdots)] u_x u_z \right\} \, d\mathbf{x}\\ & \qquad+ \int \partial_x \Delta[ \phi(\cdots) ] u^2 \,d \mathbf{x} \\ &= \int \phi' \left\{ -\cos\theta (3u_x^2+u_y^2+u_z^2) - \frac{2y\sin \theta}{\sqrt{1+y^2+z^2}} u_xu_y - \frac{2z\sin \theta}{\sqrt{1+y^2+z^2}} u_x u_z \right\} \, d\mathbf{x}\\ & \qquad+ \int \partial_x \Delta[ \phi(\cdots) ] u^2 \,d \mathbf{x}. \end{align*} Using Peter-Paul, we split the products as $$ \frac{2|y| }{\sqrt{1+y^2+z^2}} |u_x u_y| \leq \frac{ y^2 \sqrt{3}}{1+y^2+z^2} u_x^2 + \frac{1}{\sqrt{3}} u_y^2, $$ $$ \frac{2|z| }{\sqrt{1+y^2+z^2}} |u_x u_z| \leq \frac{ z^2 \sqrt{3}}{1+y^2+z^2} u_x^2 + \frac{1}{\sqrt{3}} u_z^2, $$ and adding, we obtain $$ \frac{2|y| }{\sqrt{1+y^2+z^2}} |u_x u_y| + \frac{2|z| }{\sqrt{1+y^2+z^2}} |u_x u_z| \leq \frac{1}{\sqrt{3}}(3u_x^2+u_y^2+u_z^2). $$ Thus, we see that we need the condition $\frac{|\sin \theta|}{\sqrt{3}} < \cos \theta - \delta$, which is implied by the condition $|\tan \theta| \leq \sqrt{3}- 2\delta$, which is implied by the angle condition in the hypothesis. Note \begin{align*} \partial_x \Delta [ \phi(\cdots)] &= \left[\cos^3\theta + \frac{ (y^2+z^2) \sin^2\theta \cos\theta}{1+y^2+z^2} \right]\phi'''(\cdots) \\ & \qquad + \frac{(2+y^2+z^2) \sin \theta \cos \theta}{ (1+y^2+z^2)^{3/2} } \phi''(\cdots), \end{align*} and thus, $$ |\partial_x \Delta [ \phi(\cdots)] | \leq \frac{2}{K} \phi'. $$ Putting all this together (and using that $\frac{2}{K} \leq \delta$), we obtain $$ -2\int \phi \, u \, \partial_x \Delta u \,d \mathbf{x} \leq - \delta \int \phi' (3u_x^2+u_y^2+u_z^2). 
$$ We compute $$ I' = \lambda \cos \theta \int \phi' \, u^2 \, d\mathbf{x} + 2 \int \, \phi \, u \, \nabla u \cdot \mathbf{a}' \,d \mathbf{x} - 2\int \phi \, u \, \partial_x \Delta u \, d\mathbf{x} +\frac43 \cos\theta \int \phi' \, u^3 \, d \mathbf{x}. $$ Note that \begin{align*} 2\int \phi \, u \, \nabla u \cdot \mathbf{a}' \,d \mathbf{x} &= - \int \mathbf{a}' \cdot \nabla [ \phi(\cdots) ] \, u^2 \, d\mathbf{x}\\ &= - \int (\mathbf{a}'-\mathbf{i}) \cdot \nabla [ \phi(\cdots) ] \, u^2 \, d\mathbf{x} - \cos\theta \int \phi' \, u^2 \, d \mathbf{x}. \end{align*} Putting all the inequalities together yields $$ I' \leq - \delta \int \phi' (u^2 + 3u_x^2+u_y^2+u_z^2)+\frac43 \int \phi' \, u^3 \, d \mathbf{x}. $$ Apply \eqref{E:wk-111} in Lemma \ref{L:weighted-GN} with $\psi( \mathbf{x}) = \phi'( \cdots)$, and with the set $E\subset \mathbb{R}^3$ taken to be the exterior of a neighborhood of $0$ large enough so that $\|Q\|_{L^2_E} \leq \kappa \alpha$. Then it follows that $$ \| u( \mathbf{x}+\mathbf{a}(t),t)\|_{L^2_E} \leq \| u( \mathbf{x}+\mathbf{a}(t),t) - Q(\mathbf{x}) \|_{L_{\mathbf{x}}^2} + \|Q\|_{L^2_{E}} \leq 2\kappa \alpha \leq \frac{\delta}{8}. $$ By \eqref{E:wk-111}, $$ \int_E \phi'(\cdots) |u|^3 \, d \mathbf{x} \leq \frac{\delta}{8} \int \phi'(\cdots) (|\nabla u|^2 + |u|^2) \,d \mathbf{x}. $$ On $E^c$, we use the standard Gagliardo-Nirenberg inequality $$ \int_{E^c} \phi' |u|^3 \,d \mathbf{x} \leq \sup_{\mathbf{x} \in E^c} |\phi'(\cdots)| \int |u|^3 \,d \mathbf{x} \lesssim \sup_{\mathbf{x} \in E^c} |\phi'(\cdots)| \, \| u \|_{L_t^\infty L_{\mathbf{x}}^2}^{3/2} \|\nabla u \|_{L_t^\infty L_{\mathbf{x}}^2}^{3/2}, $$ combined with the following pointwise bounds for $\phi'(\cdots)$ on $E^c$. On $E^c$ (that is, near $0$), if $t<t_0$, then we have $-r<0$ and $\lambda(t-t_0)<0$, so that $$ |\phi'(\cdots)| \leq e^{-r} e^{-\lambda|t-t_0|}, $$ and consequently, $$ I_{+,r,t_0}'(t) \lesssim e^{-r} e^{-\delta |t-t_0|} $$ with constant depending on $\delta$ and $K$. After integrating from $t_{-1}$ to $t_0$, we obtain \eqref{E:Ip-right}.
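The concluding integration can be spelled out as follows (a sketch; the implicit constant depends on $\delta$ and $K$, as stated):

```latex
I_{+,\theta,r,t_0}(t_0) - I_{+,\theta,r,t_0}(t_{-1})
= \int_{t_{-1}}^{t_0} I_{+,\theta,r,t_0}'(t) \, dt
\lesssim e^{-r} \int_{-\infty}^{t_0} e^{-\delta |t-t_0|} \, dt
= \delta^{-1} e^{-r}
\leq \delta^{-1} e^{-\delta r},
```

using $0<\delta \leq 1$ in the last step, which gives the form of \eqref{E:Ip-right}.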
On $E^c$ (that is, near $0$), if $t>t_0$, then we have $r>0$ and $\lambda(t-t_0)>0$, so that $$ |\phi'(\cdots)| \leq e^{-r} e^{-\lambda|t-t_0|} $$ again, and consequently, $$ I_{+,-r,t_0} '(t) \lesssim e^{-r} e^{-\delta |t-t_0|} $$ with constant depending on $\delta$ and $K$. After integrating from $t_0$ to $t_1$, we obtain \eqref{E:Ip-left}. Now we turn to the $I_-$ estimates involving $\phi_-$. We will obtain these as consequences of the $I_+$ estimates involving $\phi_+$ by space-time inversion, as follows. Given $u$, let $$ \bar u(\mathbf{x},t) = u(-\mathbf{x},-t). $$ Then $\bar u$ is an $\alpha$-orbitally stable Class B solution to the 3D ZK, with associated modulation parameters $\bar c$ and $\bar{\mathbf{a}}$ satisfying $$ \bar c(t) = c(-t) \,, \qquad \bar{\mathbf{a}}(t) = -\mathbf{a}(-t). $$ In referencing $I_+$ and $I_-$ we will add an additional subscript indicating the function $u$ or $\bar u$ as well. Plugging $\bar u$ into $I_+$, we note that the change of variables $\mathbf{x}\to -\mathbf{x}$ in the integral shows that \begin{equation} \label{E:pm-conversion} I_{\bar u, +,-\theta,-r,-t_0}(-t) = I_{u, -, \theta,r,t_0}(t). \end{equation} Given $t_{-1}<t_0<t_1$, note that $-t_1<-t_0<-t_{-1}$, so we can apply \eqref{E:Ip-right} with $t_0$ replaced by $-t_0$ and $t_{-1}$ replaced by $-t_1$ to obtain $$ I_{\bar u, +, -\theta,r,-t_0}(-t_0) \leq I_{\bar u, +, -\theta, r, -t_0}(-t_1) + Ce^{-\delta r}. $$ Using \eqref{E:pm-conversion}, this gives $$ I_{u,-,\theta,-r,t_0}(t_0) \leq I_{u,-,\theta,-r,t_0}(t_1) + Ce^{-\delta r}, $$ which is \eqref{E:Im-left}. We also apply \eqref{E:Ip-left} with $t_0$ replaced by $-t_0$ and $t_1$ replaced by $-t_{-1}$ to obtain $$ I_{\bar u,+,-\theta, -r,-t_0}(-t_{-1}) \leq I_{\bar u,+,-\theta, -r,-t_0}(-t_0) + Ce^{-\delta r}. $$ Using \eqref{E:pm-conversion}, this gives $$ I_{u,-,\theta, r,t_0}(t_{-1}) \leq I_{u,-,\theta, r,t_0}(t_0) + Ce^{-\delta r}, $$ which is \eqref{E:Im-right}.
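For completeness, the identity \eqref{E:pm-conversion} can be checked directly: writing out the definition with parameters $(-\theta,-r,-t_0)$ at time $-t$, using $\bar u(\mathbf{x}+\bar{\mathbf{a}}(-t),-t) = u(-\mathbf{x}+\mathbf{a}(t),t)$, then substituting $\mathbf{x} \to -\mathbf{x}$ and using $\phi_+(-\omega) = \phi_-(\omega)$:

```latex
\begin{aligned}
I_{\bar u, +, -\theta, -r, -t_0}(-t)
&= \int \phi_+ \big( \cos\theta (x + r - \lambda(t-t_0)) - \sin\theta \sqrt{1+y^2+z^2} \big)
   \, u^2(-\mathbf{x}+\mathbf{a}(t), t) \, d\mathbf{x} \\
&= \int \phi_- \big( \cos\theta (x - r + \lambda(t-t_0)) + \sin\theta \sqrt{1+y^2+z^2} \big)
   \, u^2(\mathbf{x}+\mathbf{a}(t), t) \, d\mathbf{x}
 = I_{u, -, \theta, r, t_0}(t).
\end{aligned}
```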
\end{proof} Replacing $u$ by $\eta$ in $I_\pm$ gives us new quantities that we denote $J_\pm$ that will be applied to obtain uniform decay estimates for $\tilde \epsilon_n$ in \S \ref{S:uniform-n-decay}. The main difference is that, out of the four estimates \eqref{E:Ip-right}, \eqref{E:Ip-left}, \eqref{E:Im-right}, \eqref{E:Im-left} for $I_\pm$, only \eqref{E:Ip-right} and \eqref{E:Im-left} have analogues for $J_\pm$. (See also Figure \ref{F:I}.) The reason is that $\phi_\pm Q$ needs to be small over the relevant interval. On the interval $[t_{-1},t_0]$ with weight transition line to the right of $x=0$, the product $\phi_+Q$ is small. On the interval $[t_0,t_1]$ with weight transition line to the left of $x=0$, the product $\phi_-Q$ is small. \begin{lemma}[conic $J_\pm$ estimates] \label{L:Jpm-estimates} Let $\eta(t)$ be defined by \eqref{E:eta-def}, so that $\eta$ solves \eqref{E:eta-eq}. Let $$ |\theta| \leq \frac{\pi}{3}-\delta $$ be an angle and fix a speed constant $\lambda$ satisfying \begin{equation} \label{E:mon4J} \delta \leq \lambda \leq 1-\delta, \end{equation} and fix a shift distance $r>0$. For $K\geq 4\delta^{-1}$, let \begin{equation} \label{E:J-def} J_{\pm, \theta, r, t_0} (t) = \int_{\mathbb{R}^3} \phi_\pm \left( \cos \theta(x-r+\lambda(t-t_0)) + \sin \theta \sqrt{1+y^2+z^2} \right) \eta^2(\mathbf{x}+\mathbf{a}(t), t) \, d\mathbf{x}, \end{equation} where $$ \phi_+(x) = \frac{2}{\pi} \operatorname{arctan}(e^{x/K}) \,, \qquad \phi_-(x) = \phi_+(-x) $$ so that $\phi_+(x)$ increases from $0$ to $1$ and $\phi_-(x)$ decreases from $1$ to $0$. Suppose that $$ t_{-1}<t_0<t_1. $$ The estimate for $J_+$ bounds the \emph{future in terms of the past}, and is only available on the right of the soliton: \begin{equation} \label{E:Jp-right} J_{+,\theta, r,t_0}(t_0) \leq J_{+,\theta, r,t_0}(t_{-1}) + C e^{-\delta r} \| \eta \|_{L_{[t_{-1},t_0]}^\infty L_{\mathbf{x}}^2}^2 \end{equation} for some $C$ depending on $\delta$ and $K$. 
The estimate for $J_-$ bounds the \emph{past in terms of the future}, and is only available on the left of the soliton: \begin{equation} \label{E:Jm-left} J_{-,\theta,-r,t_0}(t_0) \leq J_{-,\theta,-r,t_0}(t_1) + Ce^{-\delta r} \| \eta \|_{L_{[t_0,t_1]}^\infty L_{\mathbf{x}}^2}^2. \end{equation} \end{lemma} \begin{proof} We carry out only the proof of \eqref{E:Jp-right} for $J_+$ with $\phi_+$, and suppress the subscript notation. Abbreviating the expression for $J$ by suppressing the arguments of $\phi$ and $\eta$, $$ J = \int_{\mathbb{R}^3} \phi \eta^2 \, d\mathbf{x}, $$ we have $$ J' = \lambda \cos\theta \int_{\mathbb{R}^3} \phi' \, \eta^2 \, d\mathbf{x} + 2\mathbf{a}' \cdot \int_{\mathbb{R}^3} \phi \,\eta \, \nabla \eta \, d\mathbf{x} + 2\int_{\mathbb{R}^3} \phi\, \eta \,\partial_t\eta \, d\mathbf{x}. $$ Using that $\nabla [ \phi(\cdots) ] = \phi'(\cdots)\boldsymbol{\Omega}_\theta(y,z)$, where $$ \boldsymbol{\Omega}_\theta(y,z) = \left( \cos \theta, \sin \theta \frac{y}{\sqrt{1+y^2+z^2}}, \sin \theta \frac{z}{\sqrt{1+y^2+z^2}}\right), $$ combined with integration by parts in the middle term, gives $$ J' = \int_{\mathbb{R}^3} [\lambda \cos\theta - \mathbf{a}'\cdot \boldsymbol{\Omega}_\theta] \, \phi' \, \eta^2 \, d\mathbf{x} + 2\int_{\mathbb{R}^3} \phi\, \eta \,\partial_t\eta \, d\mathbf{x}. $$ Writing $\mathbf{a}' = (\mathbf{a}'-\mathbf{i}) + \mathbf{i}$ yields $$ J' = -(1-\lambda) \cos \theta \int \phi' \, \eta^2 \, d\mathbf{x} + 2\int_{\mathbb{R}^3} \phi\, \eta \,\partial_t\eta \, d\mathbf{x} + ( \mathbf{i}- \mathbf{a}')\cdot \int_{\mathbb{R}^3} \boldsymbol{\Omega}_\theta \, \phi' \, \eta^2 \, d\mathbf{x}.
$$ Plugging in \eqref{E:eta-eq}, we obtain \begin{equation} \label{E:Jp-1} \begin{aligned} J' = \; & -(1-\lambda) \cos \theta \int \phi' \, \eta^2 \, d\mathbf{x} - 2\int_{\mathbb{R}^3} \phi\, \eta \, \partial_x \Delta \eta \, d\mathbf{x} \\ &- 4\int_{\mathbb{R}^3} \phi \, \eta \, \partial_x (Q_{c,\mathbf{a}} \eta) \, d\mathbf{x} -2 \int_{\mathbb{R}^3} \phi \, \eta \, \partial_x (\eta^2) \, d\mathbf{x} \\ \quad&+ c' c^{-1} \int_{\mathbb{R}^3} (\Lambda Q)_{c,\mathbf{a}} \phi \, \eta \, d\mathbf{x} + c^{-1} ( \mathbf{a}'- c^{-2}\mathbf{i}) \cdot \int_{\mathbb{R}^3} \, (\nabla Q)_{c,\mathbf{a}} \phi \, \eta \, d\mathbf{x} \\ \quad&+(\mathbf{a}' - \mathbf{i})\cdot \int_{\mathbb{R}^3} \boldsymbol{\Omega}_\theta \, \phi' \, \eta^2 \, d\mathbf{x} \\ = \; & A_1+A_2+A_3+A_4+A_5+A_6+A_7. \end{aligned} \end{equation} We note that $1-\lambda \geq \delta$ and by the same calculations as in the proof of Lemma \ref{L:Ipm-estimates}, $$ A_2=-2\int \phi \, \eta \, \partial_x \Delta \eta \, d \mathbf{x} \leq - \delta \int \phi' |\nabla \eta|^2 \,d \mathbf{x}. $$ Thus, the first two terms $A_1$ and $A_2$ in \eqref{E:Jp-1} are ``good terms'' with the negative upper bound \begin{equation} \label{E:Jp-2} A_1+A_2 \lesssim -\delta \int \phi' (|\nabla \eta|^2 +\eta^2) \, d\mathbf{x}. \end{equation} Note that $\phi(\omega) \lesssim e^{\omega/K}$ for all $\omega\in \mathbb{R}$ (although it is a terrible estimate for $\omega\gg 1$), and recall $K \sim \delta^{-1}$ and $\cos\theta \geq \frac12$. With $$ \omega = \cos\theta(-r + \lambda(t-t_0)) + (x \cos \theta + \sqrt{1+y^2+z^2} \sin \theta), $$ we have $$ \phi(\omega) \lesssim e^{\delta(-r+ \lambda(t-t_0))} e^{+\delta |\mathbf{x}|}. $$ Recall that since the $\eta$ terms are evaluated at $\mathbf{x}+\mathbf{a}(t)$, the functions $Q_{c,\mathbf{a}}$, $\nabla Q_{c,\mathbf{a}}$ and $\Lambda Q_{c,\mathbf{a}}$ are exponentially concentrated near $\mathbf{x}=0$.
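This concentration can be quantified; assuming the standard exponential decay $Q(\mathbf{x}) \lesssim e^{-|\mathbf{x}|}$ of the ground state, together with $|c-1| \leq \alpha \ll 1$ and $\delta \leq \frac14$,

```latex
Q_{c,\mathbf{a}}(\mathbf{x}+\mathbf{a}(t)) = c^{-2} Q(c^{-1} \mathbf{x}) \lesssim e^{-|\mathbf{x}|/2},
\qquad \text{so that} \qquad
e^{\delta |\mathbf{x}|} \, Q_{c,\mathbf{a}}(\mathbf{x}+\mathbf{a}(t)) \lesssim e^{-|\mathbf{x}|/4},
```

which is the source of the factor $e^{-|\mathbf{x}|/4}$ appearing in the subsequent bounds.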
Hence, $$ \phi(\omega) Q_{c,\mathbf{a}}(\mathbf{x}+\mathbf{a}(t)) \lesssim e^{\delta(-r+\lambda(t-t_0))} e^{-|\mathbf{x}|/4}, $$ and similarly for $\phi |\nabla Q_{c,\mathbf{a}}|$ and $\phi |\Lambda Q_{c,\mathbf{a}}|$. For $t<t_0$, this is a good estimate and can be written as $$ \phi(\omega) Q_{c,\mathbf{a}}(\mathbf{x}+\mathbf{a}(t)) \lesssim e^{-\delta r} e^{-\delta^2 |t-t_0|} e^{-|\mathbf{x}|/4}. $$ In \eqref{E:Jp-1}, this estimate is used to control the three terms $A_3$, $A_5$, and $A_6$ and to obtain the bounds (using also \eqref{E:eta-ODEs}), $$ |A_3|+|A_5|+|A_6| \lesssim e^{-\delta r} e^{-\delta^2|t-t_0|} \| \eta\|_{L_{t\in [t_{-1},t_0]}^\infty L_{\mathbf{x}}^2}^2. $$ In \eqref{E:Jp-1}, it remains to consider $A_4$ and $A_7$, given by $$ A_4=-2 \int_{\mathbb{R}^3} \phi \, \eta \, \partial_x (\eta^2) \, d\mathbf{x} \,, \qquad A_7= (\mathbf{a}' - \mathbf{i})\cdot \int_{\mathbb{R}^3} \boldsymbol{\Omega}_\theta \, \phi' \, \eta^2 \, d\mathbf{x}. $$ By integration by parts, $$ A_4 = \frac43 \cos \theta \int \phi' \, \eta^3 \, d\mathbf{x}, $$ and by Lemma \ref{L:weighted-GN}, $$ |A_4| \lesssim \| \eta\|_{L_{\mathbf{x}}^2} \int \phi' (|\nabla \eta|^2 + \eta^2) \, d\mathbf{x}. $$ Since $\|\eta\|_{L_{\mathbf{x}}^2} \ll \delta$, this term is absorbed by the right side of \eqref{E:Jp-2}. Since $|\mathbf{a}'-\mathbf{i}| \ll \delta$ and $|\boldsymbol{\Omega}_\theta| \leq 1$, we also have that $A_7$ is absorbed by the right side of \eqref{E:Jp-2}. Combining the above bounds into \eqref{E:Jp-1}, we have for $t<t_0$, $$ J'(t) \lesssim e^{-\delta r} e^{-\delta^2 |t-t_0|} \| \eta \|_{L_{t\in [t_{-1},t_0]}^\infty L_{\mathbf{x}}^2}^2. $$ Integrating from $t=t_{-1}$ to $t=t_0$ gives \eqref{E:Jp-right}. \end{proof} \section{Weak convergence implies asymptotic stability} \label{S:proof-main-theorem} In this section, we obtain Lemma \ref{L:u-decay} below as a consequence of monotonicity estimates in Lemma \ref{L:Ipm-estimates}.
At the end of the section, Lemma \ref{L:u-decay} is applied to show that Theorem \ref{T:main} follows from Proposition \ref{P:wk-lim} and Proposition \ref{P:rigidity}. We note that Lemma \ref{L:u-decay} is also applied in \S\ref{S:app-mon} to prove Lemma \ref{L:exp-decay}, part of the proof of Proposition \ref{P:wk-lim} itself. With $\phi_+$ as defined in Lemma \ref{L:Ipm-estimates}, let \begin{equation} \label{E:pf-main-3} \frac{1}{c_*} \stackrel{\rm{def}}{=} \frac{1}{\|Q\|_{L_{\mathbf{x}}^2}^2} \limsup_{t\nearrow +\infty} \int \phi_+(x + \frac1{10}t) u^2(\mathbf{x}+ \mathbf{a}(t),t) \, d\mathbf{x}. \end{equation} From the assumed orbital stability of $u$, we have $$ |c_*-1| \lesssim \alpha_0 \quad \mbox{and} \quad |\mathbf{a}'(t) - c_*^{-2} \mathbf{i}| \lesssim \alpha_0. $$ To fix reference constants, take $\alpha_0$ small enough so that $$ | \mathbf{a}'(t) - \mathbf{i} | \leq \frac{1}{100} \quad \mbox{and} \quad |c_*-1| \leq \frac{1}{100}. $$ \begin{figure} \begin{center}\includegraphics[scale=0.87]{ZK-3d-fig-decay.pdf}\end{center} \caption{\label{F:decay}In Lemma \ref{L:u-decay}, \eqref{E:pf-main-1} gives a ``decay on the right estimate'', and \eqref{E:pf-main-5} gives a ``decay on the left estimate''. The weight $\phi_+(\rho)$ transitions from $0$ to $1$ smoothly as $\rho$ moves from left to right across $0$. Thus $\rho=0$ corresponds to a ``transition line''. In \eqref{E:pf-main-1}, $\rho>0$ corresponds to $x>r-\tan \theta \sqrt{1+y^2+z^2}$, where we can take $\theta$ close to $60^\circ$. Thus, this gives decay in the conic region pictured. For \eqref{E:pf-main-5}, we take $\theta=0$, so this ``decay on the left'' estimate occurs between the vertical lines $x=-\frac{19}{20}t$ and $x=-r$. When the two are combined, we obtain $L^2$-smallness outside the triangular region around $0$ but we have no estimate in the region labeled ``no decay''.} \end{figure} See Figure \ref{F:decay} for a depiction of the estimates in the following lemma. 
\begin{lemma} \label{L:u-decay} In \eqref{E:pf-main-3}, the $\limsup$ can be replaced by $\lim$. Moreover, for any $0\leq \theta \leq \frac{\pi}{3}-\delta$, we have the decay on the right estimate \begin{equation} \label{E:pf-main-1} \lim_{t\nearrow +\infty} \int \phi_+\left(\cos\theta(x-r) + \sin\theta \sqrt{1+y^2+z^2}\right)u^2(\mathbf{x}+\mathbf{a}(t),t) \, d\mathbf{x} \lesssim e^{-\delta r}, \end{equation} and the decay on the left estimate \begin{equation} \label{E:pf-main-5} \lim_{t\nearrow +\infty} \int [\phi_+(x+\frac{19t}{20}) - \phi_+(x+r)] u^2(\mathbf{x}+\mathbf{a}(t),t) \, d\mathbf{x} \lesssim e^{-\delta r}. \end{equation} By \eqref{E:pf-main-3}, \eqref{E:pf-main-1}, and \eqref{E:pf-main-5}, for each $r>0$, for $t$ sufficiently large, \begin{equation} \label{E:pf-main-11} \Big| \| u( \mathbf{x} + \mathbf{a}(t),t) \|_{L_{\mathbf{x}}^2(|\mathbf{x}|\leq r)}^2 - c_*^{-1} \|Q\|_{L_{\mathbf{x}}^2}^2 \Big| \lesssim e^{-\delta r}. \end{equation} \end{lemma} \begin{proof} Apply \eqref{E:Ip-right} in Lemma \ref{L:Ipm-estimates} with $0\leq \theta\leq \frac{\pi}{3}-\delta$, $\lambda = \frac12$, $t_0=t$, $t_{-1}=0$, and any $r>0$, to obtain \begin{align*} \hspace{0.3in}&\hspace{-0.3in} \int \phi_+\left(\cos\theta(x-r)+\sin\theta \sqrt{1+y^2+z^2}\right)u^2(\mathbf{x}+\mathbf{a}(t),t) \, d\mathbf{x} \\ &\leq \int \phi_+\left(\cos\theta(x-r-\frac12t)+\sin\theta \sqrt{1+y^2+z^2}\right) u^2(\mathbf{x}+\mathbf{a}(0),0) \,d \mathbf{x} + Ce^{-\delta r}. \end{align*} As $t \nearrow +\infty$, the integral on the right-hand side goes to $0$, since $u(0)$ is a fixed function and the effective support window $x>r+\frac12 t - \tan\theta \sqrt{1+y^2+z^2}$ moves outside of any compact set. Thus, we obtain the decay on the right estimate \eqref{E:pf-main-1}. Now we begin the left-side estimates. Suppose that $t\geq t'>0$.
Apply \eqref{E:Ip-left} in Lemma \ref{L:Ipm-estimates} with $\theta=0$, $\lambda = \frac{19}{20}$, $t_1=t$, $t_0=t'$, $r=\frac{19}{20}t'$ to get \begin{equation} \label{E:pf-main-2} \int \phi_+(x +\frac{19}{20}t) u^2(\mathbf{x}+\mathbf{a}(t),t) \,d \mathbf{x} \leq \int \phi_+(x+\frac{19}{20}t') u^2(\mathbf{x}+\mathbf{a}(t'),t') \, d\mathbf{x} + Ce^{-19\delta t'/20}. \end{equation} Consequently, \begin{equation} \label{E:pf-main-4} \ell \stackrel{\rm{def}}{=} \frac{1}{\|Q\|_{L_{\mathbf{x}}^2}^2} \lim_{t\to +\infty} \int \phi_+(x +\frac{19}{20}t) u^2(\mathbf{x}+\mathbf{a}(t),t) \,d \mathbf{x} \end{equation} exists. To prove this, define, for the moment, $$ \ell(t) \stackrel{\rm{def}}{=} \frac{1}{\|Q\|_{L_{\mathbf{x}}^2}^2} \int \phi_+(x +\frac{19}{20}t) u^2(\mathbf{x}+\mathbf{a}(t),t) \,d \mathbf{x}, $$ $$ \ell_- \stackrel{\rm{def}}{=} \liminf_{t\to \infty} \ell(t)\,, \quad \ell_+\stackrel{\rm{def}}{=}\limsup_{t\to \infty} \ell(t). $$ We will show that $\ell_+=\ell_-$. Construct two sequences $t_m'$ and $t_m$ as follows: \begin{itemize} \item select $t_1'$ so that $t_1'>1$ and $|\ell(t_1') - \ell_-| \leq 2^{-1}$, \item select $t_1$ so that $t_1>t_1'$ and $|\ell(t_1) - \ell_+| \leq 2^{-1}$, \item select $t_2'$ so that $t_2'>2$ and $|\ell(t_2') - \ell_-| \leq 2^{-2}$, \item select $t_2$ so that $t_2>t_2'$ and $|\ell(t_2) - \ell_+| \leq 2^{-2}$, \item etc. \end{itemize} Then $t_m' \nearrow +\infty$, and for all $m$, $t_m>t_m'$, and moreover, $$ \ell_- = \lim_{m\to \infty} \ell(t_m') \,, \qquad \ell_+ = \lim_{m\to \infty} \ell(t_m). $$ By \eqref{E:pf-main-2}, we have $$ \ell(t_m) \leq \ell(t_m') + Ce^{-19\delta t_m'/20}. $$ Sending $m\to \infty$, we obtain $\ell_+ \leq \ell_-$, completing the proof that $\ell$ exists. \begin{figure}[ht] \includegraphics[scale=0.71]{ZK-3d-fig-monotonicity2.pdf} \caption{\label{F:mon-2}Take $0<t_0 = \frac{t_1}{100} < t_1$.
To link $x=-\frac{19 t}{20}$ at $t=t_1$ to $x=-\frac{t}{10}$ at $t=t_0$, we follow the line $x=-\lambda(t-t_0)-r$ with $r=\frac{t_0}{10}$. Solving $\lambda(t_1-t_0)+r=\frac{19}{20}t_1$ with $t_1=100t_0$ yields $\lambda = \frac{100}{99}(\frac{19}{20}-\frac{1}{1000})<1$. } \end{figure} Next, we claim that in fact $\ell = c_*^{-1}$. For this, see Figure \ref{F:mon-2}. Take $$ 0< t_0 = \frac{t_1}{100} < t_1. $$ Apply \eqref{E:Ip-left} in Lemma \ref{L:Ipm-estimates} with $\theta=0$, $\lambda = \frac{100}{99}(\frac{19}{20}-\frac{1}{1000})$, $r=\frac{1}{10}t_0$ to obtain $$ \int \phi_+( x + \frac{19}{20} t_1) u^2(\mathbf{x}+\mathbf{a}(t_1),t_1) \,d \mathbf{x} \leq \int \phi_+(x + \frac{1}{10}t_0) u^2(\mathbf{x}+ \mathbf{a}(t_0),t_0) \, d\mathbf{x} + Ce^{-\delta t_0/10}.$$ Sending $t_0\nearrow +\infty$ along a sequence that achieves the $\liminf$ of the right-hand side (since $t_1=100t_0$, $t_1\nearrow +\infty$ as well), we obtain $$ \ell \leq \frac{1}{\|Q\|_{L_{\mathbf{x}}^2}^2} \liminf_{t_0\nearrow +\infty} \int \phi_+(x + \frac{1}{10}t_0) u^2(\mathbf{x}+ \mathbf{a}(t_0),t_0) \, d\mathbf{x}. $$ On the other hand, noting that for all $x$ and all $t>0$, $\phi_+(x+\frac{1}{10}t) \leq \phi_+(x+\frac{19}{20}t)$, it is straightforward from the definitions that \begin{align*} \frac{1}{c_*} &= \frac{1}{\|Q\|_{L_{\mathbf{x}}^2}^2} \limsup_{t\nearrow +\infty} \int \phi_+(x + \frac1{10}t) u^2(\mathbf{x}+ \mathbf{a}(t),t) \, d\mathbf{x} \\ &\leq \frac{1}{\|Q\|_{L_{\mathbf{x}}^2}^2} \limsup_{t\nearrow +\infty} \int \phi_+(x +\frac{19}{20}t) u^2(\mathbf{x}+\mathbf{a}(t),t) \,d \mathbf{x} = \ell. \end{align*} Since the $\liminf$ is bounded by the $\limsup$, the two displays combine to give $\ell \leq \frac{1}{c_*} \leq \ell$. Hence, $\ell = \frac{1}{c_*}$, and the $\limsup$ in the definition \eqref{E:pf-main-3} can be replaced by $\lim$. Taking the difference between \eqref{E:pf-main-4} and \eqref{E:pf-main-3}, using that $\ell=\frac{1}{c_*}$, we obtain \begin{equation} \label{E:pf-main-6} 0= \lim_{t\nearrow +\infty} \int [\phi_+(x+\frac{19}{20}t)-\phi_+(x + \frac1{10}t)] u^2(\mathbf{x}+ \mathbf{a}(t),t) \, d\mathbf{x}.
\end{equation} Now, apply \eqref{E:Ip-left} in Lemma \ref{L:Ipm-estimates} with $\theta=0$, $\lambda = \frac12$, and any $r>0$, for $$ 0<t_0= \frac{4}{5}t_1+2r<t_1 $$ to obtain $$ \int \phi_+(x+\frac{t_1}{10}) u^2(\mathbf{x}+\mathbf{a}(t_1),t_1) \,d \mathbf{x} \leq \int \phi_+(x+r) u^2(\mathbf{x}+\mathbf{a}(t_0),t_0) \, d\mathbf{x}+Ce^{-\delta r}, $$ and hence, $$ \limsup_{t_0\nearrow +\infty} \int [\phi_+(x+\frac{t_1}{10}) u^2(\mathbf{x}+\mathbf{a}(t_1),t_1)- \phi_+(x+r) u^2(\mathbf{x}+\mathbf{a}(t_0),t_0)] \, d\mathbf{x} \lesssim e^{-\delta r}. $$ However, given that the limit in \eqref{E:pf-main-3} exists, $$ \lim_{t_0\nearrow +\infty} \int [\phi_+(x+\frac{t_1}{10}) u^2(\mathbf{x}+\mathbf{a}(t_1),t_1)- \phi_+(x+\frac{t_0}{10}) u^2(\mathbf{x}+\mathbf{a}(t_0),t_0)] \, d\mathbf{x} =0. $$ Taking the difference of the above two equations, we obtain $$ \limsup_{t_0\nearrow +\infty} \int [\phi_+(x+\frac{t_0}{10}) - \phi_+(x+r)] u^2(\mathbf{x}+\mathbf{a}(t_0),t_0) \, d\mathbf{x} \lesssim e^{-\delta r}. $$ Replacing $t_0$ by $t$ in this estimate and adding it to \eqref{E:pf-main-6}, we obtain \eqref{E:pf-main-5}. \end{proof} Now, we complete the proof that Propositions \ref{P:wk-lim} and \ref{P:rigidity} imply Theorem \ref{T:main}. First, we claim that \begin{equation} \label{E:pf-main-8} \begin{aligned} &u(\mathbf{x}+\mathbf{a}(t),t) \rightharpoonup c_*^{-2} Q(c_*^{-1}\mathbf{x}) \text{ weakly in }H_{\mathbf{x}}^1,\\ &u(\mathbf{x}+\mathbf{a}(t),t) \to c_*^{-2} Q(c_*^{-1}\mathbf{x}) \text{ strongly in }L_{\mathbf{x}}^2(|\mathbf{x}|\leq R) \text{ for any }R>0 . \end{aligned} \end{equation} Let $t_m \nearrow +\infty$ be any sequence.
By Proposition \ref{P:wk-lim}, there exists a subsequence $t_{m'}$ such that \begin{equation} \label{E:pf-main-9} \begin{aligned} &u(\mathbf{x}+ \mathbf{a}(t_{m'}),t_{m'}+t) \rightharpoonup \tilde u(\mathbf{x},t) \text{ weakly in }H_{\mathbf{x}}^1,\\ &u(\mathbf{x}+ \mathbf{a}(t_{m'}),t_{m'}+t) \to \tilde u(\mathbf{x},t) \text{ strongly in }L_{\mathbf{x}}^2(|\mathbf{x}|\leq R)\text{ for any }R>0 \end{aligned} \end{equation} for every $t\in \mathbb{R}$, with $\tilde u$ satisfying the conditions of Proposition \ref{P:rigidity}. By Proposition \ref{P:rigidity}, there exist $c_+>0$ and $\mathbf{a}_+\in \mathbb{R}^3$ such that for all $t\in \mathbb{R}$, $$ \tilde u(\mathbf{x},t) = c_+^{-2}Q(c_+^{-1}(\mathbf{x}-\mathbf{a}_+-tc_+^{-2}\mathbf{e}_1)) $$ so that $\mathbf{a}_+= \tilde{\mathbf{a}}(0) = 0$ and $\tilde c(t)=c_+$ for all $t\in \mathbb{R}$. Inserting this into \eqref{E:pf-main-9} and evaluating at $t=0$, we obtain \begin{equation} \label{E:pf-main-10} \begin{aligned} &u(\mathbf{x}+\mathbf{a}(t_{m'}),t_{m'}) \rightharpoonup c_+^{-2} Q(c_+^{-1}\mathbf{x}) \text{ weakly in }H_{\mathbf{x}}^1,\\ &u(\mathbf{x}+\mathbf{a}(t_{m'}),t_{m'}) \to c_+^{-2} Q(c_+^{-1}\mathbf{x}) \text{ strongly in }L_{\mathbf{x}}^2(|\mathbf{x}|\leq R) \text{ for any }R>0, \end{aligned} \end{equation} where \emph{a priori} $c_+$ can depend on the choice of sequence $t_m$. To complete the proof of \eqref{E:pf-main-8}, we must show that $c_+=c_*$ as defined in \eqref{E:pf-main-3}. The estimate \eqref{E:pf-main-11} and the fact that $u(\mathbf{x}+\mathbf{a}(t_{m'}),t_{m'})$ converges strongly to $\tilde u(\mathbf{x},0)$ in $L^2(|\mathbf{x}|\leq r)$ yield that for every $r>0$, $$ \Big| \| \tilde u( \mathbf{x},0) \|_{L_{\mathbf{x}}^2(|\mathbf{x}|\leq r)}^2 - c_*^{-1} \|Q\|_{L_{\mathbf{x}}^2}^2 \Big| \lesssim e^{-\delta r}. $$ Combining this with \eqref{E:tilde-u-decay-1}, which bounds the mass of $\tilde u$ in the region $|\mathbf{x}|>r$, we find that for every $r>0$, $$ \Big| M(\tilde u) - c_*^{-1} M(Q)\Big| \lesssim e^{-\delta r}, $$ from which it follows that $M(\tilde u)= c_*^{-1}M(Q)$.
Since $\tilde u(\mathbf{x},0) = c_+^{-2}Q(c_+^{-1}\mathbf{x})$, we have $M(\tilde u) = c_+^{-1}M(Q)$. Hence, $c_+=c_*$, and \eqref{E:pf-main-8} is established. By \eqref{E:param-conv}, $$ c_* = c_+ = \tilde c(0) = \lim_{m'\to \infty} c(t_{m'}). $$ Since this limit is independent of the choice of sequence $t_m$, we conclude that $c(t) \to c_*$ as $t\to \infty$. Next, we remark on how this implies the strong convergence \eqref{E:window-conv} asserted in Theorem \ref{T:main}. We explain this in the reference frame of Lemma \ref{L:u-decay}, where $\mathbf{x}=0$ corresponds to the soliton center. Thus, we are looking to show $L_{\mathbf{x}}^2$ strong convergence in the conic region \begin{equation} \label{E:pure-wedge} x>-\frac{9}{10}t - \tan \theta \sqrt{1+y^2+z^2} \qquad \text{(pure wedge)}, \end{equation} where $\theta< \frac{\pi}{3}-\delta$. The local convergence \eqref{E:pf-main-8} implies the convergence in a compact neighborhood of $0$. Taking $\tilde\theta$ such that $\theta < \tilde \theta < \frac{\pi}{3}-\delta$, we have that for $t$ sufficiently large, the region \begin{equation} \label{E:cut-wedge} \left\{ \begin{aligned} &x> r- \tan \tilde{\theta} \sqrt{1+y^2+z^2} \qquad \text{(cut wedge)} \\ & x> - \frac{19}{20}t \end{aligned} \right. \end{equation} fits inside the region \eqref{E:pure-wedge}, as depicted in Figure \ref{F:cut-cone}. Since \eqref{E:pf-main-1} (with $\theta$ replaced by $\tilde \theta$) and \eqref{E:pf-main-5} imply the convergence in \eqref{E:cut-wedge} away from $\mathbf{x}=0$, the convergence also holds in \eqref{E:pure-wedge} away from $\mathbf{x}=0$. This completes the proof of Theorem \ref{T:main}. \begin{figure} \includegraphics[scale=0.65]{ZK-3d-fig-wedges.pdf} \caption{The pure wedge \eqref{E:pure-wedge} and cut wedge \eqref{E:cut-wedge} regions.} \label{F:cut-cone} \end{figure} \section{Construction of the weak time limit Class B solution $\tilde u$} \label{S:wk-lim} In this section, we prove Lemma \ref{L:soft-step}.
The entire contents of Lemma \ref{L:soft-step} follow from the combination of Lemmas \ref{L:wk-1}, \ref{L:wk-2}, \ref{L:wk-3}, \ref{L:wk-5}, and \ref{L:wk-6}, stated and proved below. \begin{lemma}[rational time shifts] \label{L:wk-1} Given $t_m\nearrow +\infty$, there exists a subsequence $t_{m'}$ such that \begin{enumerate} \item for each $t\in \mathbb{Q}$, $u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'})$ converges weakly in $H^1_{\mathbf{x}}$ as $m'\to\infty$, \item for each $t\in \mathbb{Q}$, $\partial_t u (\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'})$ converges weakly in $H^{-2}_{\mathbf{x}}$ as $m'\to \infty$, \item for each $t\in \mathbb{Q}$, $\mathbf{a}(t_{m'}+t) - \mathbf{a}(t_{m'})$ converges (in $\mathbb{R}^3$) as $m'\to \infty$, \item for each $t\in \mathbb{Q}$, $c(t_{m'}+t)$ converges as $m'\to \infty$. \end{enumerate} \end{lemma} \begin{proof} By \eqref{E:param-ODEs} in Lemma \ref{L:ODE-bounds}, we have that $$ |\mathbf{a}(t_{m}+t) - \mathbf{a}(t_{m})| \lesssim |t| $$ uniformly in $m$. Also, mass conservation (Lemma \ref{L:mass-conservation}) and the definition of orbital stability (Definition \ref{D:orb-stab}) yield $$ | c(t_{m}) -1 | \lesssim \alpha. $$ These bounds and a diagonal argument, using that $\mathbb{Q}$ is countable, imply that there is a subsequence such that items (3) and (4) hold. By passing to a further subsequence, (1) and (2) follow from the Banach--Alaoglu theorem and another diagonal argument over the countable set $\mathbb{Q}$. Thus, there is a single subsequence, denoted $m'$, for which all of the properties (1)-(4) hold. \end{proof} \begin{lemma}[uniform continuity for frequency projected solution] \label{L:wk-2} Given dyadic $M\geq 1$, we have, for all $m'$ and all $t,t'$, \begin{equation} \label{E:wk-106} \| P_{\leq M} u( t+t_{m'}) - P_{\leq M} u(t'+t_{m'}) \|_{L^2_{\mathbf{x}}} \lesssim M^2|t-t'| \quad \text{and} \quad \| P_{M} u( t+t_{m'}) - P_{M} u(t'+t_{m'}) \|_{L^2_{\mathbf{x}}} \lesssim \min(M^2 |t-t'|, M^{-1}).
\end{equation} Consequently, for any $-2<s<1$, \begin{equation} \label{E:wk-106b} \| u( t+t_{m'}) - u(t'+t_{m'}) \|_{H^s_{\mathbf{x}}} \lesssim |t-t'|^{(1-s)/3}, \end{equation} and for any $-4\leq s<-2$, \begin{equation} \label{E:wk-107b} \| \partial_t u( t+t_{m'}) - \partial_t u(t'+t_{m'}) \|_{H^s_{\mathbf{x}}} \lesssim |t-t'|^{(-s-2)/3}. \end{equation} \end{lemma} \begin{proof} The bound of $M^{-1}$ follows immediately from Bernstein's inequality and the bound on $\| u(t) \|_{L_t^\infty H_{\mathbf{x}}^1}$. We have \begin{align*} \hspace{0.3in}&\hspace{-0.3in} P_{<M} u(t_{m'}+t') - P_{<M} u(t_{m'}+t) \\ &= P_{<M} (U(t'-t)-I) u(t_{m'}+t) - P_{<M} \int_{s=t_{m'}+t}^{t_{m'}+t'} U(t_{m'}+t'-s) \partial_x (u^2)(s) \,ds. \end{align*} For the first term, we use that $P_{<M} [U(s)-I]$ is $H_{\mathbf{x}}^1 \to L_{\mathbf{x}}^2$ bounded with operator norm $\lesssim \min(1,|s|M^2)$. For the second term, we estimate in $L^2_{\mathbf{x}}$ in the usual way: half of a derivative is converted into a factor of $M^{1/2}$, the other half is distributed via the fractional Leibniz rule, and Sobolev embedding is applied, yielding a bound of $M^{1/2}|t-t'|$. The two estimates together complete the proof of \eqref{E:wk-106}. Now we explain how \eqref{E:wk-106b} follows from \eqref{E:wk-106}. Dividing frequency space into dyads, $$ \| u( t+t_{m'}) - u(t'+t_{m'}) \|_{H^s_{\mathbf{x}}} \lesssim \sum_{M\geq 1 \text{ dyadic}} M^s \| P_M [u( t+t_{m'}) - u(t'+t_{m'})] \|_{L^2_{\mathbf{x}}}. $$ Applying \eqref{E:wk-106}, $$ \| u( t+t_{m'}) - u(t'+t_{m'}) \|_{H^s_{\mathbf{x}}} \lesssim \sum_{M\geq 1 \text{ dyadic}} M^s \min( M^2|t-t'|, M^{-1}). $$ Since $-2<s<1$, $M^{s+2}$ is a positive power of $M$ and $M^{s-1}$ is a negative power of $M$. For $M \leq |t-t'|^{-1/3}$, the first bound is better, and for $M\geq |t-t'|^{-1/3}$ the second bound is better. Carrying out the sum, with $M_* \stackrel{\rm{def}}{=} |t-t'|^{-1/3}$, $$ \sum_{1\leq M \leq M_*} M^{s+2}|t-t'| \lesssim M_*^{s+2}|t-t'| = |t-t'|^{(1-s)/3} \,, \qquad \sum_{M \geq M_*} M^{s-1} \lesssim M_*^{s-1} = |t-t'|^{(1-s)/3}, $$ which yields \eqref{E:wk-106b}. Next, we deduce \eqref{E:wk-107b} as a consequence of \eqref{E:wk-106b}.
Writing $u_2 = u(t+t_{m'})$ and $u_1 = u(t'+t_{m'})$, we use the 3D ZK equation $$ \partial_t u = -\partial_x \Delta u - \partial_x(u^2) $$ for $u=u_2$ and $u=u_1$ to obtain $$ \partial_t (u_2-u_1) = - \partial_x\Delta(u_2-u_1) - \partial_x[ (u_2-u_1)(u_2+u_1) ], $$ from which it follows that $$ \| \partial_t (u_2-u_1) \|_{H_{\mathbf{x}}^s} \lesssim \|u_2 -u_1 \|_{H_{\mathbf{x}}^{s+3}} + \|(u_2-u_1)(u_2+u_1) \|_{H_{\mathbf{x}}^{s+1}}. $$ Then apply the inequality, for $-\infty<\alpha\leq \frac12$, \begin{equation} \label{E:prod-Sob} \| fg \|_{H_{\mathbf{x}}^\alpha} \lesssim \|f \|_{H_{\mathbf{x}}^{\max(\alpha+\frac12,-1)}} \|g \|_{H_{\mathbf{x}}^1} \end{equation} to obtain, if $-4\leq s< -2$, $$ \| \partial_t (u_2-u_1) \|_{H_{\mathbf{x}}^s }\lesssim \|u_2 - u_1 \|_{H_{\mathbf{x}}^{s+3}} \lesssim |t-t'|^{(-2-s)/3}. $$ For $s\leq -4$, it seems, we cannot improve on the estimate $|t-t'|^{2/3}$, since the right side of \eqref{E:prod-Sob} cannot be improved if $\alpha<-\frac32$. \end{proof} \begin{lemma}[density and convergence] \label{L:wk-3} \quad \begin{enumerate} \item For all $t\in \mathbb{R}$, $u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'})$ converges weakly in $H_{\mathbf{x}}^1$ as $m'\to\infty$ and $\partial_t u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'})$ converges weakly in $H_{\mathbf{x}}^{-2}$ as $m'\to\infty$. \item \emph{Define}, for all $t\in \mathbb{R}$, $$ \tilde u(t) = \operatorname{wk-lim}_{m'\to \infty} u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'}), $$ $$ \tilde v(t) = \operatorname{wk-lim}_{m'\to \infty} \partial_t u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'}), $$ where the first is a weak limit in $H^1_{\mathbf{x}}$ and the second is a weak limit in $H^{-2}_{\mathbf{x}}$. Then we have, for every $t\in \mathbb{R}$, that $\partial_t \tilde u = \tilde v$, and $\tilde u$ is uniformly-in-time bounded in $H^1_{\mathbf{x}}$ and $\partial_t \tilde u$ is uniformly-in-time bounded in $H^{-2}_{\mathbf{x}}$. 
\item For every $T>0$ and all $s<1$, $\tilde u \in C([-T,T]; H^s_{\mathbf{x}})$ and $\partial_t \tilde u \in C([-T,T]; H^{s-2}_{\mathbf{x}})$. \item For every $T>0$ and $R>0$, $u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'})\mathbf{1}_{<R}(x)$ converges to $\tilde u(\mathbf{x},t)\mathbf{1}_{<R}(\mathbf{x})$ strongly in $C([-T,T]; L_{\mathbf{x}}^2)$. \item For all $t\in \mathbb{R}$, $\mathbf{a}(t_{m'}+t)-\mathbf{a}(t_{m'})$ converges. The limit, that we denote by $\tilde{\mathbf{a}}(t)$, is Lipschitz continuous. \item For all $t\in \mathbb{R}$, $c(t_{m'}+t)$ converges. The limit, that we denote by $\tilde c(t)$, is Lipschitz continuous. \end{enumerate} \end{lemma} \begin{proof} (1) Let $t\in \mathbb{R}\backslash \mathbb{Q}$ and let $\phi \in H^{-1}_{\mathbf{x}}$ be a test function. We must show that $\langle u(\bullet+\mathbf{a}(t_{m'}), t+ t_{m'}), \phi \rangle_{\mathbf{x}}$ is a Cauchy sequence (of numbers). Let $\epsilon>0$. Since $u(t+t_{m'})$ is bounded in $H^1_{\mathbf{x}}$ (uniformly in $m'$), there exists dyadic $M>0$ sufficiently large so that \begin{equation} \label{E:wk-105} | \langle u(\bullet+\mathbf{a}(t_{m'}),t+t_{m'}), P_{>M} \phi \rangle | \leq ( \sup_{t\in \mathbb{R}} \|u(t) \|_{H_{\mathbf{x}}^1} ) \| P_{>M} \phi \|_{H^{-1}_{\mathbf{x}}} \leq \epsilon. \end{equation} It suffices to find $m_0'$ so that for any $m_1', m_2' \geq m_0'$ chosen from the $m'$ sequence, we have \begin{equation} \label{E:wk-102} | \langle u(\bullet+\mathbf{a}(t_{m_1'}), t+t_{m_1'}) - u(\bullet+\mathbf{a}(t_{m_2'}), t+t_{m_2'}) , P_{<M} \phi \rangle_{\mathbf{x}} | \leq 3\epsilon. \end{equation} Indeed, once \eqref{E:wk-102} is established, \eqref{E:wk-105} and \eqref{E:wk-102} combined give that for any $m_1', m_2' \geq m_0'$ chosen from the $m'$ sequence, $$ | \langle u(\bullet+\mathbf{a}(t_{m_1'}), t+t_{m_1'}) - u(\bullet+\mathbf{a}(t_{m_2'}),t+t_{m_2'}) , \phi \rangle_{\mathbf{x}} | \leq 5\epsilon, $$ completing the proof. 
To establish \eqref{E:wk-102}, first note that the frequency restriction transfers to $u$, i.e., $$ \langle u(\bullet+\mathbf{a}(t_{m_1'}), t+t_{m_1'}) - u(\bullet+\mathbf{a}(t_{m_2'}), t+t_{m_2'}) , P_{<M} \phi \rangle_{\mathbf{x}} $$ $$ = \langle P_{<2M} u(\bullet+\mathbf{a}(t_{m_1'}), t+t_{m_1'}) - P_{<2M} u(\bullet+\mathbf{a}(t_{m_2'}), t+t_{m_2'}) , P_{<M} \phi \rangle_{\mathbf{x}}, $$ and thus, we can apply Lemma \ref{L:wk-2} to obtain that for any $t'$, and either $j=1$ or $j=2$, $$ |\langle u(\bullet+\mathbf{a}(t_{m_j'}), t+t_{m_j'}) - u(\bullet+\mathbf{a}(t_{m_j'}),t'+t_{m_j'}), P_{<M} \phi \rangle | \lesssim M^2 |t-t'|. $$ We then choose $t'\in \mathbb{Q}$ so that $M^2|t-t'| \lesssim \epsilon$ to obtain \begin{equation} \label{E:wk-103} |\langle u(\bullet+\mathbf{a}(t_{m_j'}),t+t_{m_j'}) - u(\bullet+\mathbf{a}(t_{m_j'}),t'+t_{m_j'}), P_{<M} \phi \rangle | \leq \epsilon. \end{equation} By Lemma \ref{L:wk-1}, since $t'\in \mathbb{Q}$, there exists $m_0'$ so that for any $m_1', m_2' \geq m_0'$ chosen from the $m'$ sequence, we have \begin{equation} \label{E:wk-104} | \langle u(\bullet+\mathbf{a}(t_{m_1'}),t'+t_{m_1'}) - u(\bullet+\mathbf{a}(t_{m_2'}),t'+t_{m_2'}) , P_{<M} \phi \rangle_{\mathbf{x}} | \leq \epsilon. \end{equation} Combining \eqref{E:wk-103} (for both $j=1$ and $j=2$) and \eqref{E:wk-104} gives \eqref{E:wk-102}. This completes the proof that $u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'})$ converges weakly in $H_{\mathbf{x}}^1$ as $m'\to\infty$. The fact that for all $t\in \mathbb{R}$, $\partial_t u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'})$ converges weakly in $H_{\mathbf{x}}^{-2}$ as $m'\to\infty$ follows similarly, using \eqref{E:wk-107b} in place of \eqref{E:wk-106}. (2) Now we can, as in the lemma statement, define $\tilde u$ and $\tilde v$. Our objective is to show that in fact $\partial_t \tilde u = \tilde v$, where $\partial_t$ is defined for functions of $t$ taking values in $H_{\mathbf{x}}^{-2}$.
Now for fixed test function $\phi(\mathbf{x})$, \begin{align*} \hspace{0.3in}&\hspace{-0.3in} \langle u(\mathbf{x}+\mathbf{a}(t_{m'}), t+t_{m'}), \phi(\mathbf{x}) \rangle - \langle u(\mathbf{x}+\mathbf{a}(t_{m'}),t_0+t_{m'}), \phi(\mathbf{x})\rangle \\ &= \int_{s=t_0}^t \langle \partial_s u(\mathbf{x}+\mathbf{a}(t_{m'}), s+t_{m'}), \phi(\mathbf{x}) \rangle \,ds. \end{align*} Send $m'\to \infty$, which gives by dominated convergence $$ \langle \tilde u(\mathbf{x}, t), \phi(\mathbf{x}) \rangle - \langle \tilde u(\mathbf{x},t_0), \phi(\mathbf{x})\rangle = \int_{s=t_0}^t \langle \tilde v(\mathbf{x}, s), \phi(\mathbf{x}) \rangle \,ds. $$ Taking $\partial_t$, we obtain $$ \langle \partial_t \tilde u(\mathbf{x}, t), \phi(\mathbf{x}) \rangle = \langle \tilde v(\mathbf{x}, t), \phi(\mathbf{x}) \rangle . $$ Since this holds for arbitrary $\phi$, we conclude $\partial_t \tilde u = \tilde v$. (3) For the continuity claim for $\tilde u$, we note that by a standard property of weak limits $$ \| \tilde u(t) - \tilde u(t') \|_{H_{\mathbf{x}}^s} \leq \liminf_{m'\to+\infty} \| u( \bullet+\mathbf{a}(t_{m'}), t+t_{m'}) - u( \bullet+\mathbf{a}(t_{m'}), t'+t_{m'}) \|_{H_{\mathbf{x}}^s}, $$ and thus, by \eqref{E:wk-106b} in Lemma \ref{L:wk-2}, we have \begin{equation} \label{E:wk-108} \| \tilde u(t) - \tilde u(t') \|_{H_{\mathbf{x}}^s} \lesssim |t-t'|^{(1-s)/3}. \end{equation} Similarly, one can argue for the claimed continuity of $\partial_t \tilde u$ by appealing to \eqref{E:wk-107b} in Lemma \ref{L:wk-2}. (4) Fix $T>0$ and $R>0$, and we aim to establish the claimed uniform-in-time convergence. Let $\epsilon>0$. Let $S\subset [-T,T]$ be a \emph{finite} set of time points, so that any point of $[-T,T]$ is within $\sim \epsilon^{3}$ of a point in $S$.
Since $u(\bullet+\mathbf{a}(t_{m'}),t+t_{m'}) \rightharpoonup \tilde u(\bullet, t)$ in $H^1$, by the Rellich--Kondrachov compactness theorem, for each $t_j\in S$, there exists $m'_j$ such that $m' \geq m'_j$ implies $$ \| u(\bullet+\mathbf{a}(t_{m'}),t_j+t_{m'}) - \tilde u(\bullet, t_j)\|_{L_{|\mathbf{x}|\leq R}^2} \leq \tfrac12\epsilon. $$ By taking $m'_0$ to be the maximum over all $m'_j$ as $t_j$ ranges over the finite set $S$, we obtain that for any $m' \geq m'_0$ and any $t'\in S$, \begin{equation} \label{E:wk-109} \| u(\bullet+\mathbf{a}(t_{m'}),t'+t_{m'}) - \tilde u(\bullet, t')\|_{L_{|\mathbf{x}|\leq R}^2} \leq \tfrac12\epsilon. \end{equation} Now for any $t\in [-T,T]$, take $t'\in S$ such that $|t-t'|\lesssim \epsilon^3$. Note that \begin{align*} \hspace{0.3in}&\hspace{-0.3in} \| u(\bullet+\mathbf{a}(t_{m'}),t+t_{m'}) - \tilde u(\bullet, t)\|_{L_{|\mathbf{x}|\leq R}^2} \\ & \lesssim \| u(\bullet+\mathbf{a}(t_{m'}),t+t_{m'}) -u(\bullet+\mathbf{a}(t_{m'}),t'+t_{m'})\|_{L_{\mathbf{x}}^2} \\ &\qquad +\| u(\bullet+\mathbf{a}(t_{m'}),t'+t_{m'}) - \tilde u(\bullet, t')\|_{L_{|\mathbf{x}|\leq R}^2} + \| \tilde u(t')-\tilde u(t) \|_{L_{\mathbf{x}}^2}. \end{align*} By \eqref{E:wk-106b} for $s=0$, \eqref{E:wk-109}, and \eqref{E:wk-108}, $$ \| u(\bullet+\mathbf{a}(t_{m'}),t+t_{m'}) - \tilde u(\bullet, t)\|_{L_{|\mathbf{x}|\leq R}^2} \lesssim \epsilon$$ for $m'\geq m'_0$. (5)-(6) By \eqref{E:param-ODEs} in Lemma \ref{L:ODE-bounds}, for any $t,t'\in \mathbb{R}$, \begin{equation} \label{E:par-1} \begin{aligned} & |c(t_{m'}+t) - c(t_{m'}+t')| \lesssim |t-t'|,\\ & |\mathbf{a}(t_{m'}+t) - \mathbf{a}(t_{m'}+t') | \lesssim |t-t'| \end{aligned} \end{equation} independently of $m'$. In Lemma \ref{L:wk-1}, items (3) and (4), the convergence was established for $t\in \mathbb{Q}$.
Similar to the arguments used above, we can approximate any $t\in \mathbb{R}$ by $t'\in \mathbb{Q}$ and use the estimates \eqref{E:par-1} to deduce that $c(t_{m'}+t)$ and $\mathbf{a}(t_{m'}+t)-\mathbf{a}(t_{m'})$ are Cauchy sequences, and thus, converge, and we can define $\tilde c(t)$ and $\tilde{\mathbf{a}}(t)$ to be their limits. Then by \eqref{E:par-1} the Lipschitz continuity of $\tilde c(t)$ and $\tilde{\mathbf{a}}(t)$ follows. \end{proof} \begin{lemma} \label{L:wk-5} $\tilde u$ is a Class B solution to the 3D ZK. \end{lemma} \begin{proof} The regularity claims in Definition \ref{D:ClassB} have been established in Lemma \ref{L:wk-3}(3). It remains to show that $$ \partial_t \tilde u(t) + \partial_x \Delta \tilde u(t) + \partial_x \tilde u(t)^2=0 $$ holds for each $t\in \mathbb{R}$, where each of the three terms in the equation belongs to $H_{\mathbf{x}}^{-2}$. This will follow if we show that for each test function $\phi \in C_c^\infty(\mathbb{R}^3)$ $$ \langle \partial_t \tilde u(t), \phi \rangle + \langle \partial_x \Delta \tilde u(t), \phi\rangle + \langle \partial_x \tilde u(t)^2, \phi\rangle=0. $$ Since $u$ is a Class B solution of the 3D ZK, we have for each $t\in \mathbb{R}$ and each $m'$, \begin{align*} 0 &= \langle (\partial_t u)(\mathbf{x}+ \mathbf{a}(t_{m'}),t+t_{m'}), \phi(\mathbf{x}) \rangle + \langle \partial_x \Delta u(\mathbf{x}+ \mathbf{a}(t_{m'}),t+t_{m'}), \phi(\mathbf{x}) \rangle \\ & \qquad + \langle \partial_x u(\mathbf{x}+ \mathbf{a}(t_{m'}),t+t_{m'})^2, \phi(\mathbf{x}) \rangle. \end{align*} Shifting spatial derivatives to the test function in the second and third terms, \begin{align*} 0 &= \langle (\partial_t u)(\mathbf{x}+ \mathbf{a}(t_{m'}),t+t_{m'}), \phi(\mathbf{x}) \rangle - \langle u(\mathbf{x}+ \mathbf{a}(t_{m'}),t+t_{m'}), \partial_x \Delta \phi(\mathbf{x}) \rangle \\ & \qquad - \langle u(\mathbf{x}+ \mathbf{a}(t_{m'}),t+t_{m'})^2, \partial_x \phi(\mathbf{x}) \rangle. \end{align*} Send $m'\to \infty$. 
In the first term, we use that $(\partial_t u)(\bullet+ \mathbf{a}(t_{m'}),t+t_{m'}) \rightharpoonup (\partial_t \tilde u)(\bullet, t)$ weakly in $H_{\mathbf{x}}^{-2}$. In the second term, we use that $u(\bullet+ \mathbf{a}(t_{m'}),t+t_{m'}) \rightharpoonup \tilde u(\bullet, t)$ weakly in $H_{\mathbf{x}}^1$. In the third term, we take $R>0$ sufficiently large so that $\operatorname{supp} \phi$ is contained in the ball of radius $R$. Since $u(\bullet+ \mathbf{a}(t_{m'}),t+t_{m'})\mathbf{1}_{<R}(\mathbf{x}) \to \tilde u(\bullet, t)\mathbf{1}_{<R}(\mathbf{x}) $ strongly in $L_{\mathbf{x}}^2$, it follows that $$ \langle u(\mathbf{x}+ \mathbf{a}(t_{m'}),t+t_{m'})^2, \partial_x \phi(\mathbf{x}) \rangle \to \langle \tilde u(\mathbf{x},t)^2, \partial_x \phi(\mathbf{x}) \rangle. $$ Combining the three limits completes the proof. \end{proof} \begin{lemma} \label{L:wk-6} $\tilde u$ is $\alpha$-orbitally stable and $\tilde{\mathbf{a}}(t)$ and $\tilde c(t)$, constructed above in Lemma \ref{L:wk-3}, are the modulation parameters as in Lemma \ref{L:geom-decomp}. \end{lemma} \begin{proof} From Lemma \ref{L:wk-3}, we have that for all $t\in \mathbb{R}$ $$ u(\mathbf{x}+\mathbf{a}(t_{m'}),t+t_{m'}) \rightharpoonup \tilde u(\mathbf{x},t), $$ weakly in $H_{\mathbf{x}}^1$, and also $$ \tilde{\mathbf{a}}(t) \stackrel{\rm{def}}{=} \lim_{m'\to \infty} [\mathbf{a}(t+t_{m'}) - \mathbf{a}(t_{m'})]\,, \qquad \tilde c(t) \stackrel{\rm{def}}{=} \lim_{m'\to \infty} c(t+t_{m'}). $$ Hence, for all $t\in \mathbb{R}$ \begin{align*} & c(t+t_{m'})^2 u(c(t+t_{m'})\mathbf{x}+\mathbf{a}(t+t_{m'}),t+t_{m'})\\ & \quad = c(t+t_{m'})^2u(c(t+t_{m'})\mathbf{x}+[\mathbf{a}(t+t_{m'})-\mathbf{a}(t_{m'})]+\mathbf{a}(t_{m'}),t+t_{m'}) \\ & \quad \quad \rightharpoonup \tilde c(t)^2 \tilde u(\tilde c(t) \mathbf{x}+\tilde{\mathbf{a}}(t),t) \end{align*} weakly in $H_{\mathbf{x}}^1$.
Consequently, \begin{align*} &\epsilon(\mathbf{x},t+t_{m'}) \\ &= c(t+t_{m'})^2 u(c(t+t_{m'})\mathbf{x}+\mathbf{a}(t+t_{m'}),t+t_{m'}) -Q(\mathbf{x}) \\ &\rightharpoonup \tilde c(t)^2 \tilde u(\tilde c(t) \mathbf{x}+\tilde{\mathbf{a}}(t),t) - Q(\mathbf{x})\\ &= \tilde \epsilon(\mathbf{x},t) \end{align*} weakly in $H_{\mathbf{x}}^1$. Hence, $$ \| \tilde\epsilon(t) \|_{H_{\mathbf{x}}^1} \leq \liminf_{m'\to \infty} \| \epsilon(t+t_{m'})\|_{H_{\mathbf{x}}^1}\leq \alpha. $$ Thus, $\tilde u$ is $\alpha$-orbitally stable. Moreover, $$ \langle \tilde \epsilon(t), Q^2 \rangle = \lim_{m'\to \infty} \langle \epsilon(t+t_{m'}), Q^2 \rangle =0, $$ $$ \langle \tilde \epsilon(t), \nabla Q \rangle = \lim_{m'\to \infty} \langle \epsilon(t+t_{m'}), \nabla Q \rangle =0, $$ so that $\tilde{\mathbf{a}}(t)$ and $\tilde c(t)$ are the (unique) parameter values that achieve the orthogonality conditions in Lemma \ref{L:geom-decomp}. \end{proof} \section{$\tilde u$ has exponential decay in space} \label{S:app-mon} In this section, we prove Lemma \ref{L:exp-decay} by applying the estimates \eqref{E:pf-main-1} and \eqref{E:pf-main-5} in Lemma \ref{L:u-decay}, which were obtained from the $I_+$ estimate \eqref{E:Ip-right} in Lemma \ref{L:Ipm-estimates}. We know from Lemma \ref{L:soft-step} that $$ \mathbf{a}(t+t_{m'}) - \mathbf{a}(t_{m'}) \to \tilde{\mathbf{a}}(t) \text{ as } m' \to \infty $$ and $$ u(\mathbf{x}+\mathbf{a}(t_{m'}),t_{m'}+t) \rightharpoonup \tilde u(\mathbf{x}, t) \text{ as } m'\to \infty \text{ (weakly) in }H_{\mathbf{x}}^1. $$ It follows that\footnote{This is the following elementary fact: If $f_n(x)\rightharpoonup f(x)$ and $a_n\to a$, then $f_n(x+a_n) \rightharpoonup f(x+a)$.} \begin{align*} & u(\mathbf{x}+\mathbf{a}(t+t_{m'}) ,t_{m'}+t) = u(\mathbf{x}+[\mathbf{a}(t+t_{m'}) - \mathbf{a}(t_{m'})] + \mathbf{a}(t_{m'}),t_{m'}+t) \\ & \qquad \rightharpoonup \tilde u(\mathbf{x}+\tilde{\mathbf{a}}(t), t) \text{ as } m'\to \infty \text{ (weakly) in }H_{\mathbf{x}}^1. 
\end{align*} Since the norm of a weak limit is at most the $\liminf$ of the norms, \begin{align*} \hspace{0.3in}&\hspace{-0.3in} \int \phi_+(\cos\theta(x-r) + \sin \theta \sqrt{1+y^2+z^2}) \tilde u^2(\mathbf{x}+\tilde{\mathbf{a}}(t),t) d\mathbf{x} \\ &\leq \liminf_{m'\to \infty} \int \phi_+(\cos\theta(x-r) + \sin \theta \sqrt{1+y^2+z^2}) u^2(\mathbf{x}+\mathbf{a}(t+t_{m'}) ,t_{m'}+t) \, d\mathbf{x}. \end{align*} By \eqref{E:pf-main-1}, we have \begin{equation} \label{E:tilde-right} \int \phi_+(\cos\theta(x-r)+ \sin\theta \sqrt{1+y^2+z^2}) \tilde u^2(\mathbf{x}+\tilde{\mathbf{a}}(t),t) d\mathbf{x} \lesssim e^{-\delta r}, \end{equation} which yields the decay on the right estimate for $\tilde u$. Likewise, \begin{align*} &\int (1- \phi_+(x+r)) \tilde u^2(\mathbf{x}+\tilde{\mathbf{a}}(t),t) d\mathbf{x} \\ & \qquad \leq \liminf_{m'\to \infty} \int [\phi_+(x+\frac{19(t+t_{m'})}{20}) - \phi_+(x+r)] u^2(\mathbf{x}+\mathbf{a}(t+t_{m'}) ,t_{m'}+t) \, d\mathbf{x}. \end{align*} By \eqref{E:pf-main-5}, we deduce \begin{equation} \label{E:tilde-left} \int (1- \phi_+(x+r)) \tilde u^2(\mathbf{x}+\tilde{\mathbf{a}}(t),t) d\mathbf{x} \lesssim e^{-\delta r}, \end{equation} which yields the decay on the left estimate for $\tilde u$. Combining \eqref{E:tilde-right} with $\theta=\frac{\pi}{4}$ and \eqref{E:tilde-left} yields, for all $t\in \mathbb{R}$, \begin{equation} \label{E:tilde-u-decay-1} \int_{|\mathbf{x}| > r} \tilde u^2(\mathbf{x}+\tilde{\mathbf{a}}(t),t) \,d \mathbf{x} \lesssim e^{-\delta r}. \end{equation} This completes the proof of Lemma \ref{L:exp-decay}. \section{Higher regularity of spatially decaying Class B solutions} \label{S:higher-regularity} In this section, we prove Lemma \ref{L:regularity-boost}. As a reminder of notation, note that in many places in this section, $x$ appears as a weight (not $\mathbf{x}$). Also recall that $P_N$ refers to the Littlewood-Paley multiplier, and this operator acts in all three variables.
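For definiteness, let us record the convention for $P_N$ that is assumed in the kernel computations below (this is the standard Littlewood--Paley setup; the particular choice of multiplier bump, denoted $\chi$ here, enters only through the kernel formula): $$ \widehat{P_N f}(\boldsymbol{\xi}) = \chi\left(\frac{\boldsymbol{\xi}}{N}\right) \hat f(\boldsymbol{\xi}) \,, \qquad P_N f(\mathbf{x}) = \int N^3 \, \check\chi(N(\mathbf{x}-\mathbf{x}')) f(\mathbf{x}') \, d\mathbf{x}', $$ so that the kernel of $P_N$ is $N^3 \check\chi(N(\mathbf{x}-\mathbf{x}'))$. In particular, since $\| N^3 \check\chi(N\,\cdot) \|_{L_{\mathbf{x}}^1}$ is independent of $N$, Young's inequality gives the $L_{\mathbf{x}}^p \to L_{\mathbf{x}}^p$ boundedness of $P_N$, uniformly in $N$, for every $1\leq p \leq \infty$.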
We will use the notation $$\ln^+N \stackrel{\rm{def}}{=} \ln(N+2)$$ for $N\geq 1$ dyadic. We note two weighted Sobolev interpolation inequalities. First, for $0<\theta\leq 1$, \begin{equation} \label{E:HR1} \| |x|^\alpha u \|_{L_{\mathbf{x}}^2} \leq \| |x|^{\alpha/\theta} u \|_{L_{\mathbf{x}}^2}^\theta \, \|u\|_{L_{\mathbf{x}}^2}^{1-\theta}. \end{equation} More generally, for $p\geq 2$ and $0< \theta \leq \frac{2}{p}$, \begin{equation} \label{E:HR2} \| |x|^\alpha u \|_{L_{\mathbf{x}}^p} \leq \| |x|^{\alpha/\theta} u \|_{L_{\mathbf{x}}^2}^\theta \|u \|_{L_{\mathbf{x}}^{\tilde p}}^{1-\theta}\,, \quad \text{where } \tilde p = p \cdot \frac{(1-\theta)}{1-p \theta/2}. \end{equation} Note that \eqref{E:HR2} reduces to \eqref{E:HR1} when $p=2$. The inequality \eqref{E:HR1} is proved by writing $$ \| |x|^\alpha u \|_{L_{\mathbf{x}}^2}^2 = \int |x|^{2\alpha} |u|^{2\theta} \, \cdot \, |u|^{2-2\theta} \, d\mathbf{x}, $$ and then applying H\"older with dual pair $L_{\mathbf{x}}^{1/\theta}$ and $L_{\mathbf{x}}^{1/(1-\theta)}$. Likewise \eqref{E:HR2} is proved by writing $$ \| |x|^\alpha u \|_{L_{\mathbf{x}}^p}^p = \int |x|^{p\alpha} |u|^{p\theta} \, \cdot \, |u|^{p(1-\theta)} \, d\mathbf{x}, $$ and then applying H\"older with dual pair $L_{\mathbf{x}}^{2/p\theta}$ and $L_{\mathbf{x}}^{1/(1-p\theta/2)}$. Second, we need the elementary fact that the commutator of $x$ and $P_N$, $$xP_N - P_N x,$$ is an $L_{\mathbf{x}}^2\to L_{\mathbf{x}}^2$ bounded operator with operator norm $\lesssim N^{-1}$. This follows since the kernel of the commutator $xP_N - P_N x$ is $$ K(\mathbf{x},\mathbf{x}') = N^3 \check{\chi}(N(\mathbf{x}-\mathbf{x}')) (x-x'). $$ More generally, we have \begin{lemma} For any $N\geq 1$ and $\alpha \geq 1$, \begin{equation} \label{E:HR4} \| (\langle x \rangle^\alpha P_N - P_N \langle x \rangle^\alpha) f \|_{L_{\mathbf{x}}^2} \lesssim N^{-1} \|\langle x \rangle^{\alpha-1} f\|_{L_{\mathbf{x}}^2}, \end{equation} where the implicit constant depends only on $\alpha$. 
\end{lemma} \begin{proof} This is equivalent to stating that the operator $$(\langle x\rangle^\alpha P_N \langle x \rangle^{-\alpha} - P_N ) \langle x \rangle$$ is $L_{\mathbf{x}}^2\to L_{\mathbf{x}}^2$ bounded with operator norm $\lesssim N^{-1}$. To see this, note that the kernel associated to the operator is $$ K(\mathbf{x},\mathbf{x}')=\left( \frac{\langle x\rangle^\alpha}{\langle x' \rangle^\alpha} - 1 \right) \, N^3 \, \check \chi( N(\mathbf{x}-\mathbf{x}')) \langle x' \rangle. $$ We note the pointwise estimate $$ \left| \frac{\langle x\rangle^\alpha}{\langle x' \rangle^\alpha} - 1 \right| \lesssim \langle x' \rangle^{-1}|x-x'|, $$ which is proved by considering the regions $|x-x'| \ll \langle x' \rangle$ and $|x-x'| \gtrsim \langle x' \rangle$, separately. In the first case, the bound follows by Taylor expansion, for fixed $x'$, of the function $\langle x \rangle^{\alpha}$ around center $x=x'$. In the second case, it follows by bounding $\langle x \rangle^\alpha \leq 2^\alpha(\langle x-x' \rangle^\alpha + \langle x' \rangle^\alpha)$. By this pointwise estimate, we have $$ |K(\mathbf{x},\mathbf{x}')| \lesssim N^{-1} \cdot N^3 |\check\chi(N(\mathbf{x}-\mathbf{x}'))|\, N|x-x'|, $$ and thus, the $L_{\mathbf{x}}^2\to L_{\mathbf{x}}^2$ boundedness claim follows by Young's inequality. \end{proof} Let us note a corollary: For any $N \geq 1$, \begin{equation} \label{E:HR3} \| \langle x \rangle^\alpha P_N u \|_{L_{\mathbf{x}}^2} \lesssim \| \langle x \rangle^\alpha u \|_{L_{\mathbf{x}}^2}. \end{equation} In other words, we can drop $P_N$. To prove \eqref{E:HR3}, write $$ \langle x \rangle^\alpha P_N u = (\langle x \rangle^\alpha P_N - P_N \langle x \rangle^\alpha) u + P_N \langle x \rangle^\alpha u. $$ Then apply the $L^2$ norm, and use \eqref{E:HR4} and the $L_{\mathbf{x}}^2 \to L_{\mathbf{x}}^2$ boundedness of $P_N$, which concludes the proof. \begin{lemma} Suppose that $u$ is a Class B solution to the 3D ZK. 
Then \begin{equation} \label{E:HR10} \begin{aligned} -\frac12\partial_t \int x |P_Nu|^2 \, d\mathbf{x} &= \frac32 \int |\partial_x P_N u |^2 \, d\mathbf{x} + \frac12 \int |\partial_y P_N u |^2 \, d\mathbf{x}+ \frac12 \int |\partial_z P_N u |^2 \, d\mathbf{x} \\ & \qquad + \int \, x \, P_N u \, P_N\partial_x (u^2) \, d\mathbf{x}. \end{aligned} \end{equation} \end{lemma} \begin{proof} This is a direct calculation. Note that due to the $P_N$ operators, there is no divergent integrals issue for Class B solutions. \end{proof} \begin{lemma} \label{L:L2boost} Suppose that $u$ is a Class B solution of the 3D ZK on a time interval $I$ of length $|I|\leq 1$, then for $0<\theta<\frac14$ we have \begin{equation} \label{E:HR11} \| u\|_{L_I^2H_{\mathbf{x}}^{\frac54-\theta}}^2 \lesssim \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \langle \| u \|_{L_I^\infty H_{\mathbf{x}}^1} \rangle^{3-\theta}. \end{equation} \end{lemma} This indicates that we can nearly achieve $H_{\mathbf{x}}^{5/4}$ regularity but averaged in time. \begin{proof} First, we prove that \begin{equation} \label{E:HR6} \left|\int \, x \, P_N u \, P_N\partial_x (u^2) \, d\mathbf{x}\right| \lesssim N^{-\frac12(1 - 2\theta)}\| \langle x\rangle^{1/\theta} u \|_{L_{\mathbf{x}}^2}^\theta \| u \|_{H_{\mathbf{x}}^1}^{3-\theta}. \end{equation} Applying H\"older, $L_{\mathbf{x}}^{3/2} \to L_{\mathbf{x}}^{3/2}$ boundedness of $P_N$, and Sobolev embedding \begin{align*} \left|\int \, x \, P_N u \, P_N\partial_x (u^2) \, d\mathbf{x}\right| &\lesssim \|xP_N u \|_{L_{\mathbf{x}}^3} \|P_N( u_x u) \|_{L_{\mathbf{x}}^{3/2}} \\ &\lesssim \|xP_N u \|_{L_{\mathbf{x}}^3} \|u_x \|_{L_{\mathbf{x}}^2} \|u\|_{L_{\mathbf{x}}^6} \\ &\lesssim \| u \|_{H_{\mathbf{x}}^1}^2 \|xP_Nu \|_{L_{\mathbf{x}}^3}. 
\end{align*} Now we apply \eqref{E:HR2} for $0<\theta<\frac23$ and \eqref{E:HR3}, \begin{equation} \label{E:HR7} \left|\int \, x \, P_N u \, P_N\partial_x (u^2) \, d\mathbf{x}\right| \lesssim \| u \|_{H_{\mathbf{x}}^1}^2 \| \langle x\rangle^{1/\theta} u \|_{L_{\mathbf{x}}^2}^\theta \|P_N u \|_{L_{\mathbf{x}}^{\tilde p}}^{1-\theta}, \end{equation} where in this case $$ \tilde p = 3 \frac{1-\theta}{1-\frac32\theta}= 3(1+ \frac{\theta}{2-3\theta})=3+. $$ Provided $0<\theta< \frac12$ so that $\tilde p <6$, we still have room to gain from Bernstein's inequality: \begin{equation} \label{E:HR8} \|P_N u \|_{L_{\mathbf{x}}^{\tilde p}} \lesssim N^s \|P_N u\|_{L_{\mathbf{x}}^2} \leq N^{-(1-s)}\|u\|_{H_{\mathbf{x}}^1}, \end{equation} where $$ s = \frac12(1+ \frac{\theta}{1-\theta})=\frac12+\,, \qquad 1-s = \frac12( 1- \frac{\theta}{1-\theta})=\frac12-. $$ Plugging \eqref{E:HR8} into \eqref{E:HR7} yields the claimed estimate \eqref{E:HR6}. Next, we claim \begin{equation} \label{E:HR9} \left| \int x\, |P_Nu|^2 \, d\mathbf{x} \right| \lesssim N^{-(2-\theta)} \|\langle x \rangle^{1/\theta} u \|_{L_{\mathbf{x}}^2}^\theta \| u \|_{H_{\mathbf{x}}^1}^{2-\theta}. \end{equation} Note that by Cauchy-Schwarz and \eqref{E:HR1}, we have \begin{align*} \left| \int x |P_Nu|^2 \, d\mathbf{x} \right|&\leq \| x P_Nu \|_{L_{\mathbf{x}}^2} \|P_N u \|_{L_{\mathbf{x}}^2} \\ &\lesssim \|\langle x \rangle^{1/\theta} P_N u \|_{L_{\mathbf{x}}^2}^\theta \|P_N u \|_{L_{\mathbf{x}}^2}^{2-\theta} \\ &\lesssim N^{-(2-\theta)} \|\langle x \rangle^{1/\theta} P_N u \|_{L_{\mathbf{x}}^2}^\theta \|\nabla P_N u \|_{L_{\mathbf{x}}^2}^{2-\theta}. \end{align*} Then \eqref{E:HR9} follows from \eqref{E:HR3} and the $L^2\to L^2$ boundedness of $P_N$. 
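Before combining these bounds, let us record, for the reader's convenience, the exponent bookkeeping behind \eqref{E:HR8} and \eqref{E:HR6}. With $\tilde p = \frac{6(1-\theta)}{2-3\theta}$, Bernstein's inequality in three dimensions gives $$ s = 3\left( \frac12 - \frac{1}{\tilde p}\right) = \frac32 - \frac{2-3\theta}{2(1-\theta)} = \frac{1}{2(1-\theta)} = \frac12\left( 1 + \frac{\theta}{1-\theta}\right), $$ and correspondingly $$ (1-s)(1-\theta) = \frac12\left( 1 - \frac{\theta}{1-\theta}\right)(1-\theta) = \frac12\big( (1-\theta) - \theta \big) = \frac12(1-2\theta), $$ which is the power of $N^{-1}$ appearing in \eqref{E:HR6}.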
Now by \eqref{E:HR10}, \eqref{E:HR6} and \eqref{E:HR9}, over a time interval $I$ of length $|I|\leq 1$, $$ \begin{aligned} \int_I \int_{\mathbf{x}} |\nabla P_N u|^2 \, d\mathbf{x}\, dt &\lesssim N^{-\frac12(1 - 2\theta)} \| \langle x\rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^1}^{3-\theta}\\ & \qquad +N^{-(2-\theta)} \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^1}^{2-\theta}. \end{aligned} $$ We can now multiply this by $N^{\frac12(1-4\theta)}$ to obtain $$ \begin{aligned} N^{\frac12(1-4\theta)}\int_I \int_{\mathbf{x}} |\nabla P_N u|^2 \, d\mathbf{x}\, dt &\lesssim N^{-\theta} \| \langle x\rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^1}^{3-\theta}\\ & \qquad +N^{-\frac32-\theta} \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^1}^{2-\theta}. \end{aligned} $$ By summing over $N\geq 1$, we obtain \eqref{E:HR11}. \end{proof} \begin{lemma} \label{L:maximal-compare} For any $t_0\in \mathbb{R}$, let $I=[t_0-\delta,t_0+\delta]$ for $\delta \ll 1$. Suppose that $u$ is a Class B solution of the 3D ZK on $I$, and for $0<\theta<\frac14$ we have \begin{equation} \label{E:HR27} \| u\langle x \rangle^{1/\theta} \|_{L_I^\infty L_{\mathbf{x}}^2} <\infty \quad \mbox{and} \quad \|u \|_{L_I^\infty H_{\mathbf{x}}^1} < \infty \end{equation} so that \eqref{E:HR11} is available. Then for each $N\geq 1$, $$ \|P_N u(t) -P_NU(t-t_0) u(t_0) \|_{L_x^2 L_{yz I}^\infty} \lesssim \delta^{1/4} N^{-\frac18+\frac{\theta}{2}}(\ln^+ N)^5 $$ with implicit constant depending on the norms in \eqref{E:HR27}. Consequently, by \eqref{E:HR20} \begin{equation} \label{E:HR28} \| P_N u(t) \|_{L_x^2 L_{yz I}^\infty} \lesssim (\ln^+ N)^2. \end{equation} \end{lemma} \begin{proof} By the Duhamel formula, $$ P_N u(t) = P_NU(t-t_0) u(t_0) - \int_{t_0}^t P_N U(t-s) \partial_x u(s)^2 \, ds. 
$$ By \eqref{E:HR21}, \begin{equation} \label{E:HR22} \|P_N u(t) - P_NU(t-t_0) u(t_0) \|_{L_x^2 L_{yz I}^\infty} \lesssim (\ln^+ N)^2 N \| P_N( u^2) \|_{L_x^1L_{yzI}^2}. \end{equation} Using the paraproduct decomposition $$ P_N ( u^2) \sim P_N (P_{\lesssim N} u \, P_N u) + P_N \sum_{N'\gg N} (P_{N'}u \, P_{N'}u), $$ we obtain $$ \| P_N (u^2) \|_{L_x^1 L_{yzI}^2} \lesssim \| P_{\lesssim N} u \|_{L_x^2L_{yzI}^\infty} \|P_N u \|_{L_{\mathbf{x}I}^2} + \sum_{N'\gg N} \| P_{N'} u \|_{L_x^2L_{yzI}^\infty} \|P_{N'} u \|_{L_{\mathbf{x}I}^2}. $$ For the terms on the right involving $L_{yzI}^\infty$, we replace $$ u(t)= (u(t) - U(t-t_0)u(t_0))+U(t-t_0)u(t_0) $$ and obtain the estimate \begin{equation} \label{E:HR23} \begin{aligned} \hspace{0.3in}&\hspace{-0.3in} (\ln^+ N)^2 N \| P_N (u^2) \|_{L_x^1 L_{yzI}^2} \\ &\lesssim (\ln^+ N)^2 N\|P_{\lesssim N} (u(t)-U(t-t_0)u(t_0)) \|_{L_x^2 L_{yz I}^\infty} \|P_N u \|_{L_{\mathbf{x}I}^2} \\ &\qquad + (\ln^+ N)^2 N\sum_{N' \gg N} \|P_{N'}(u(t)-U(t-t_0)u(t_0)) \|_{L_x^2 L_{yz I}^\infty} \|P_{N'} u \|_{L_{\mathbf{x}I}^2}\\ &\qquad + (\ln^+ N)^2 N\|P_{\lesssim N} U(t-t_0)u(t_0) \|_{L_x^2 L_{yz I}^\infty} \|P_N u \|_{L_{\mathbf{x}I}^2} \\ &\qquad + (\ln^+ N)^2 N\sum_{N' \gg N} \|P_{N'}U(t-t_0)u(t_0) \|_{L_x^2 L_{yz I}^\infty} \|P_{N'} u \|_{L_{\mathbf{x}I}^2}. \end{aligned} \end{equation} For the last two terms, we use that \eqref{E:HR20} implies \begin{equation} \label{E:HR24} \begin{aligned} &\|P_{\lesssim N} U(t-t_0)u(t_0) \|_{L_x^2 L_{yz I}^\infty} \lesssim (\ln^+ N)^3 \|u(t_0)\|_{H_{\mathbf{x}}^1}, \\ & \|P_{N'}U(t-t_0)u(t_0) \|_{L_x^2 L_{yz I}^\infty}\lesssim (\ln^+ N')^2 \|u(t_0)\|_{H_{\mathbf{x}}^1}. 
\end{aligned} \end{equation} By \eqref{E:HR11} in Lemma \ref{L:L2boost}, \begin{equation} \label{E:HR25} \begin{aligned} N \, \|P_N u \|_{L_{\mathbf{x}I}^2} &\leq \min\left( \delta^{1/2} \|u\|_{L_I^\infty H_{\mathbf{x}}^1}, N^{-\frac14+\theta} \|P_N u \|_{L_I^2H_{\mathbf{x}}^{\frac54-\theta}} \right) \\ &\lesssim \min(\delta^{1/2}, N^{-\frac14+\theta}) \lesssim \delta^{1/4} N^{-1/8}. \end{aligned} \end{equation} Let $$ \gamma(N) = \|P_Nu(t)-P_NU(t-t_0)u(t_0)\|_{L_x^2L_{yz I}^\infty}. $$ Plugging \eqref{E:HR23}, \eqref{E:HR24}, and \eqref{E:HR25} into the right side of \eqref{E:HR22}, we obtain \begin{equation} \label{E:HR26} \begin{aligned} \gamma(N) & \lesssim \delta^{1/4} N^{-1/8} (\ln^+ N)^2 \sum_{N' \lesssim N}\gamma(N') + \delta^{1/4} (\ln^+ N)^2 \sum_{N' \gg N} (N')^{-1/8} \gamma(N') \\ & \qquad + \delta^{1/4} N^{-1/8} (\ln^+ N)^5. \end{aligned} \end{equation} Let $$\Gamma(N) = \sum_{N' \lesssim N} \gamma(N').$$ If $N'' \lesssim N$, then \begin{align*} \sum_{N' \gg N''} (N')^{-1/8} \gamma(N') &\leq \sum_{N' \gg N} (N')^{-1/8} \gamma(N') + \sum_{N'' \ll N' \lesssim N} (N')^{-1/8} \gamma(N') \\ &\leq \sum_{N' \gg N} (N')^{-1/8} \Gamma(N') + (N'')^{-1/8} \Gamma(N). \end{align*} Hence, if $N'' \lesssim N$, then \begin{align*} \gamma(N'') &\lesssim \delta^{1/4} (\ln^+ N'')^2 (N'')^{-1/8} \Gamma(N) + \delta^{1/4} (\ln^+ N'')^2 \sum_{N'\gg N} (N')^{-1/8} \Gamma(N') \\ & \qquad + \delta^{1/4} (\ln^+ N'')^5 (N'')^{-1/8}. \end{align*} Summing in $N''$ from $1$ to $N$, $$ \Gamma(N) \lesssim \delta^{1/4} \Gamma(N) + \delta^{1/4} (\ln^+ N)^3\sum_{N'\gg N} (N')^{-1/8} \Gamma(N') + \delta^{1/4}. $$ For $\delta$ sufficiently small, $$ \Gamma(N) \lesssim \delta^{1/4}(\ln^+ N)^3 \sum_{N'\gg N} (N')^{-1/8} \Gamma(N') + \delta^{1/4}. $$ Therefore, for any $N'' \geq N$, $$ \Gamma(N'') \lesssim \delta^{1/4}(\ln^+ N'')^3 \sum_{N'\gg N} (N')^{-1/8} \Gamma(N') + \delta^{1/4}. 
$$ Multiply by $(N'')^{-1/8}$ and sum over $N'' \gg N$ to obtain \begin{align*} \sum_{N'' \gg N} (N'')^{-1/8} \Gamma(N'') &\lesssim \delta^{1/4}\sum_{N''\gg N} (\ln^+ N'')^3 (N'')^{-1/8} \sum_{N'\gg N} (N')^{-1/8} \Gamma(N') \\ &\qquad + \delta^{1/4} \sum_{N'' \gg N} (N'')^{-1/8}. \end{align*} From this, we obtain, for $\delta$ sufficiently small, that $$ \sum_{N'\gg N} (N')^{-1/8} \Gamma(N') \lesssim \delta^{1/4}N^{-1/8}. $$ Thus, for all $N$, $$ \Gamma(N) \lesssim 1. $$ Returning to \eqref{E:HR26}, we obtain $$ \gamma(N) \lesssim \delta^{1/4} N^{-1/8} (\ln^+ N)^5. $$ \end{proof} \begin{lemma} \label{L:reg-boost-last} For any $t_0\in \mathbb{R}$, let $I=[t_0-\delta,t_0+\delta]$ for $\delta \ll 1$. Suppose that $u$ is a Class B solution of the 3D ZK on $I$ and for $0<\theta<\frac14$ we have \begin{equation} \label{E:HR27b} \| u\langle x \rangle^{1/\theta} \|_{L_I^\infty L_{\mathbf{x}}^2} <\infty \quad \mbox{and} \quad \|u \|_{L_I^\infty H_{\mathbf{x}}^1} < \infty \end{equation} so that \eqref{E:HR11} and \eqref{E:HR28} are available. Then, for each $N\geq 1$, \begin{equation} \label{E:HR30} \|P_N u(t) -P_NU(t-t_0) u(t_0) \|_{L_I^\infty L_{\mathbf{x}}^2} \lesssim N^{-\frac54+\theta} (\ln^+ N)^3, \end{equation} from which it follows that \begin{equation} \label{E:HR31} \|P_N u(t_0) \|_{L_{\mathbf{x}}^2} \lesssim \delta^{-1/2} (\ln^+ N)^3 N^{-\frac54+\theta} \end{equation} with implicit constant depending on the norms in \eqref{E:HR27b}. \end{lemma} \begin{proof} By the Duhamel formula, $$ P_Nu(t)-P_NU(t-t_0) u(t_0) = -\int_{t_0}^t P_N U(t-s) \, \partial_x u(s)^2 \, ds. $$ By \eqref{E:HR21b}, \begin{equation} \label{E:HR29} \| P_Nu(t)-P_NU(t-t_0) u(t_0)\|_{L_I^\infty L_{\mathbf{x}}^2} \lesssim \| P_N(u^2) \|_{L_x^1L_{yz I}^2}.
\end{equation} Using the paraproduct decomposition $$ P_N ( u^2) \sim P_N (P_{\lesssim N} u \, P_N u) + P_N \sum_{N'\gg N} (P_{N'}u \, P_{N'}u), $$ we obtain $$ \| P_N (u^2) \|_{L_x^1 L_{yzI}^2} \lesssim \| P_{\lesssim N} u \|_{L_x^2L_{yzI}^\infty} \|P_N u \|_{L_{\mathbf{x}I}^2} + \sum_{N'\gg N} \| P_{N'} u \|_{L_x^2L_{yzI}^\infty} \|P_{N'} u \|_{L_{\mathbf{x}I}^2}. $$ By \eqref{E:HR11} and \eqref{E:HR28}, we get $$ \| P_N (u^2) \|_{L_x^1 L_{yzI}^2} \lesssim (\ln^+ N)^3 N^{-\frac54+\theta} + \sum_{N'\gg N} (\ln^+ N')^2 (N')^{-\frac54+\theta} \lesssim (\ln^+ N)^3 N^{-\frac54+\theta}. $$ Combining this with \eqref{E:HR29}, we obtain \eqref{E:HR30}. Since $\| P_N U(t-t_0) u(t_0)\|_{L_{\mathbf{x}}^2}$ is conserved in time, we have \begin{align*} \|P_N u(t_0) \|_{L_{\mathbf{x}}^2} &= (2\delta)^{-1/2} \|P_N U(t-t_0) u(t_0) \|_{L_I^2 L_{\mathbf{x}}^2} \\ &\leq \delta^{-1/2} \|P_N U(t-t_0) u(t_0)- P_N u(t) \|_{L_I^2 L_{\mathbf{x}}^2} + \delta^{-1/2} \|P_N u(t) \|_{L_I^2 L_{\mathbf{x}}^2}. \end{align*} By \eqref{E:HR30} and \eqref{E:HR11}, we conclude that \eqref{E:HR31} holds. \end{proof} We note that \eqref{E:HR31} implies that $u \in L_t^\infty H_{\mathbf{x}}^{\frac54-2\theta}$. Now we give the arguments to achieve higher regularity. \begin{lemma} Suppose that $u$ is a Class B solution of the 3D ZK on a time interval $I$ of length $|I|\leq 1$, then for $\theta>0$ sufficiently small, $s_1\geq 1$ and \begin{equation} \label{E:s2s1} s_2 = \begin{cases} \frac32s_1-\frac14 - \theta, & \text{if }1\leq s_1 < \frac32, \\ (s_1+\frac12)(1- \frac12\theta), & \text{if } s_1> \frac32, \end{cases} \end{equation} we have the estimate \begin{equation} \label{E:HR12} \| u\|_{L_I^2H_{\mathbf{x}}^{s_2}}^2 \lesssim \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \, \langle \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}} \rangle^{3-\theta}. 
\end{equation} \end{lemma} Thus, for $1\leq s_1 < \frac32$, we can gain nearly $\frac12 s_1-\frac14$ derivatives, and for $s_1>\frac32$, we can gain nearly $\frac12$ derivatives, although averaged in time. It should be noted that in the case $s_1>\frac32$, the gain is precisely $\frac12 - \frac12\theta(s_1+\frac12)$, so that one needs to take $\theta \sim 1/(2s_1)$ for large $s_1$ in order to increment the regularity by, say, $\frac14$ derivatives. Since the power on the weight on the right side is $\langle x \rangle^{1/\theta}$, the power on the weight grows like $\sim 2s_1$ as we proceed to very high regularity. \begin{proof} We will need the estimate \begin{equation} \label{E:HR13} \|\partial_x P_N(u^2) \|_{L^2} \lesssim \begin{cases} N^{\frac52-2s_1} \| u \|_{H^{s_1}}^2, & \text{if }1\leq s_1 < \frac32, \\ N^{1-s_1} \| u \|_{H^{s_1}} \| u \|_{H^{\frac32+}},& \text{if } s_1 > \frac32. \end{cases} \end{equation} To prove \eqref{E:HR13}, we will now need the paraproduct decomposition \begin{equation} \label{E:HR14} P_N( u^2 ) \approx P_N( P_N u \, P_{\lesssim N} u + \sum_{N' \gg N} P_{N'} u \, P_{N'}u). \end{equation} Hence, for $1\leq s_1 < \frac32$, we estimate as $$ \|\partial_x P_N(u^2) \|_{L^2} \lesssim N \|P_N u \|_{L^{3/s_1}} \| P_{\lesssim N} u \|_{L^{p'}} + N \sum_{N' \gg N} \| P_{N'} u \|_{L^2} \| P_{N'}u \|_{L^\infty}, $$ where $$\frac{1}{p'} = \frac12 - \frac{s_1}{3}.$$ By Bernstein and Sobolev embedding $$ \|\partial_x P_N(u^2) \|_{L^2} \lesssim N^{\frac52-s_1} \|P_N u \|_{L^2} \| u \|_{H^{s_1}} + N \sum_{N' \gg N} (N')^{-2s_1+\frac32}\| u \|_{H^{s_1}}^2, $$ and hence, \eqref{E:HR13} holds for $1\leq s_1 < \frac32$. For $s_1>\frac32$, we start with \eqref{E:HR14} but apply H\"older as follows $$ \|\partial_x P_N(u^2) \|_{L^2} \lesssim N \| P_N u \|_{L^2} \|P_{\lesssim N} u \|_{L^\infty} + N \sum_{N' \gg N} \| P_{N'} u \|_{L^2} \| P_{N'}u \|_{L^\infty}. $$ Then \eqref{E:HR13} again follows by Bernstein. 
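For the reader's convenience, we spell out the dyadic summation in the high-high term above in the case $1\leq s_1 < \frac32$: since $s_1\geq 1$, the exponent satisfies $-2s_1+\frac32 \leq -\frac12 < 0$, so the sum over dyadic $N'\gg N$ is geometric and $$ N \sum_{N' \gg N} (N')^{-2s_1+\frac32} \| u \|_{H^{s_1}}^2 \lesssim N \cdot N^{-2s_1+\frac32} \| u \|_{H^{s_1}}^2 = N^{\frac52-2s_1} \| u \|_{H^{s_1}}^2, $$ in agreement with the first case of \eqref{E:HR13}; the case $s_1>\frac32$ is summed similarly.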
As in the proof of Lemma \ref{L:L2boost}, the key estimates are of the type \eqref{E:HR6} and \eqref{E:HR9}: \begin{equation} \label{E:HR6b} \left|\int \, x \, P_N u \, P_N\partial_x (u^2) \, d\mathbf{x}\right| \lesssim \| \langle x\rangle^{1/\theta} u \|_{L_{\mathbf{x}}^2}^\theta \, \| u \|_{H_{\mathbf{x}}^{s_1}}^{3-\theta} \begin{cases} N^{\frac52-3s_1+\theta s_1}, & \text{if }1\leq s_1 < \frac32, \\ N^{1-2s_1+s_1\theta}, & \text{if } s_1\geq \frac32, \end{cases} \end{equation} \begin{equation} \label{E:HR9b} \left| \int x |P_Nu|^2 \, d\mathbf{x} \right| \lesssim N^{-2s_1+\theta s_1} \|\langle x \rangle^{1/\theta} u \|_{L_{\mathbf{x}}^2}^\theta \| u \|_{H_{\mathbf{x}}^{s_1}}^{2-\theta}. \end{equation} To prove \eqref{E:HR6b}, we estimate by H\"older $$ \left|\int \, x \, P_N u \, P_N\partial_x (u^2) \, d\mathbf{x}\right| \lesssim \| xP_N u \|_{L^2} \|\partial_x P_N (u^2) \|_{L^2}. $$ By \eqref{E:HR1}, $$ \left|\int \, x \, P_N u \, P_N\partial_x (u^2) \, d\mathbf{x}\right| \lesssim \| |x|^{1/\theta} P_N u \|_{L^2}^\theta \| P_N u \|_{L^2}^{1-\theta} \|\partial_x P_N (u^2) \|_{L^2}. $$ Combining with \eqref{E:HR13}, we obtain \eqref{E:HR6b}. To prove \eqref{E:HR9b}, we estimate by H\"older $$ \left| \int x \, |P_Nu|^2 \, d\mathbf{x} \right| \lesssim \| xP_N u \|_{L^2} \|P_N u \|_{L^2}. $$ By \eqref{E:HR1}, $$ \left| \int x \, |P_Nu|^2 \, d\mathbf{x} \right| \lesssim \| |x|^{1/\theta} P_N u \|_{L^2}^\theta \|P_N u \|_{L^2}^{2-\theta}, $$ and hence, \eqref{E:HR9b} follows after applying \eqref{E:HR3} and Bernstein's inequality. Let us consider first the case $1\leq s_1 < \frac32$. Plugging \eqref{E:HR6b} and \eqref{E:HR9b} into \eqref{E:HR10} integrated over $I$, we obtain $$ \| \nabla P_N u \|_{L_I^2L_{\mathbf{x}}^2}^2 \lesssim N^{-2s_1+\theta s_1} \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{2-\theta}+N^{\frac52-3s_1+\theta s_1} \| \langle x\rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{3-\theta}.
$$ Multiplying by $N^{3s_1-\frac52-2\theta}$, we get $$ N^{3s_1-\frac52-2\theta} \| \nabla P_N u \|_{L_I^2L_{\mathbf{x}}^2}^2 \lesssim \begin{aligned}[t] &N^{s_1-\frac52 +\theta (s_1-2)} \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{2-\theta}\\ &+N^{\theta (s_1-2)} \| \langle x\rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{3-\theta}. \end{aligned} $$ Summing in $N$, we obtain the claimed estimate \eqref{E:HR12} (for $1\leq s_1 < \frac32$). Next, consider the case $s_1\geq \frac32$. Plugging \eqref{E:HR6b} and \eqref{E:HR9b} into \eqref{E:HR10} integrated over $I$, we obtain $$ \| \nabla P_N u \|_{L_I^2L_{\mathbf{x}}^2}^2 \lesssim N^{-2s_1+\theta s_1} \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{2-\theta}+N^{1-2s_1+\theta s_1} \| \langle x\rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{3-\theta}. $$ With $s_2 = (s_1+\frac12)(1-\frac12\theta)$, multiplying by $N^{2s_2-2}$, we have $$ N^{2s_2-2} \| \nabla P_N u \|_{L_I^2L_{\mathbf{x}}^2}^2 \lesssim \begin{aligned}[t] &N^{-1-\frac12 \theta} \|\langle x \rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{2-\theta}\\ &+N^{-\frac12\theta} \| \langle x\rangle^{1/\theta} u \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \| u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}}^{3-\theta}. \end{aligned} $$ Summing in $N$, we obtain the claimed estimate \eqref{E:HR12} (for $s_1>\frac32$). \end{proof} We can assume $s_1>\frac54-$. By Lemma \ref{L:maximal-compare}, $$ \| P_N u(t) - P_N U(t-t_0) u(t_0) \|_{L_x^2 L_{yzI}^\infty} \lesssim \delta^{1/4} N^{-\frac18+ \frac{\theta}{2}} (\ln^+ N)^5.
$$ Applying Lemma \ref{L:RV1}, \eqref{E:HR20} to estimate the term $P_N U(t-t_0) u(t_0)$, we obtain \begin{equation} \label{E:regb-2} \begin{aligned} \|P_N u(t) \|_{L_x^2L_{yzI}^\infty} &\lesssim N \ln^+ N \|P_N u(t_0) \|_{L_{\mathbf{x}}^2} + \delta^{1/4} N^{-\frac18+ \frac{\theta}{2}} (\ln^+ N)^5 \\ &\lesssim N^{1-s_1} \ln^+ N + \delta^{1/4} N^{-\frac18+\frac{\theta}{2}} (\ln^+ N)^5 \lesssim N^{-1/16}. \end{aligned} \end{equation} Revisiting the proof of Lemma \ref{L:reg-boost-last}, \begin{equation} \label{E:regb-1} \| P_Nu(t)-P_NU(t-t_0) u(t_0)\|_{L_I^\infty L_{\mathbf{x}}^2} \lesssim \| P_N(u^2) \|_{L_x^1L_{yz I}^2}. \end{equation} Using the paraproduct decomposition $$ P_N ( u^2) \sim P_N (P_{\lesssim N} u \, P_N u) + P_N \sum_{N'\gg N} (P_{N'}u \, P_{N'}u), $$ we obtain $$ \| P_N (u^2) \|_{L_x^1 L_{yzI}^2} \lesssim \| P_{\lesssim N} u \|_{L_x^2L_{yzI}^\infty} \|P_N u \|_{L_{\mathbf{x}I}^2} + \sum_{N'\gg N} \| P_{N'} u \|_{L_x^2L_{yzI}^\infty} \|P_{N'} u \|_{L_{\mathbf{x}I}^2}. $$ Plugging into \eqref{E:regb-1}, we have \begin{align*} \hspace{0.3in}&\hspace{-0.3in} \| P_Nu(t)-P_NU(t-t_0) u(t_0)\|_{L_I^\infty L_{\mathbf{x}}^2} \\ & \lesssim \| P_{\lesssim N} u \|_{L_x^2L_{yzI}^\infty} \|P_N u \|_{L_{\mathbf{x}I}^2} + \sum_{N'\gg N} \| P_{N'} u \|_{L_x^2L_{yzI}^\infty} \|P_{N'} u \|_{L_{\mathbf{x}I}^2}. \end{align*} By \eqref{E:HR12}, \eqref{E:regb-2}, we get \begin{equation} \label{E:regb-2b} \| P_Nu(t)-P_NU(t-t_0) u(t_0)\|_{L_I^\infty L_{\mathbf{x}}^2} \lesssim N^{-s_2} + \sum_{N' \gg N} (N')^{-1/16} (N')^{-s_2} \lesssim N^{-s_2}. \end{equation} Since $\| P_N U(t-t_0) u(t_0)\|_{L_{\mathbf{x}}^2}$ is conserved in time, we have \begin{align*} \|P_N u(t_0) \|_{L_{\mathbf{x}}^2} &= (2\delta)^{-1/2} \|P_N U(t-t_0) u(t_0) \|_{L_I^2 L_{\mathbf{x}}^2} \\ &\leq \delta^{-1/2} \|P_N U(t-t_0) u(t_0)- P_N u(t) \|_{L_I^2 L_{\mathbf{x}}^2} + \delta^{-1/2} \|P_N u(t) \|_{L_I^2 L_{\mathbf{x}}^2}.
\end{align*} Plugging \eqref{E:regb-2b} and \eqref{E:HR12} into the above, we get $$ \|P_N u(t_0) \|_{L_{\mathbf{x}}^2} \lesssim N^{-s_2}. $$ Multiplying by $N^{s_2-}$ and square summing, we obtain that $u(t_0) \in H^{s_2-}$, while we started with the assumption that $\|u \|_{L_I^\infty H_{\mathbf{x}}^{s_1}} < \infty$. Noting that $t_0$ was arbitrary in $I$, and recalling \eqref{E:s2s1} expressing $s_2$ in terms of $s_1$, we see that we can incrementally step up to arbitrarily high regularity. \section{$\tilde \epsilon_n$ has exponential decay} \label{S:uniform-n-decay} This is the first section addressing Prop. 2. We use the $J_\pm$ monotonicity in Lemma \ref{L:Jpm-estimates} to prove Lemma \ref{L:ep-decay}, which establishes the uniform-in-$n$ exponential spatial decay of $\tilde \epsilon_n$. In place of $\tilde \epsilon_n$, we pass to $\eta$ (subscript $n$ and tildes dropped) defined by \eqref{E:eta-def} and solving equation \eqref{E:eta-eq}, in terms of which Lemma \ref{L:Jpm-estimates} is phrased. In the estimates, we can pass back and forth between $\tilde \epsilon_n$ and $\eta$, since $\tilde c_n\sim 1$ uniformly in time. Fix any $t_0\in \mathbb{R}$ and apply Lemma \ref{L:Jpm-estimates}. In particular, we apply \eqref{E:Jp-right} and use that the uniform-in-time $L^2$ compactness hypothesis on $\tilde \epsilon_n$ implies $$ \lim_{t_{-1} \searrow -\infty} J_{+,\theta,r,t_0}(t_{-1}) = 0$$ to conclude that \begin{equation} \label{E:Jmon-p} J_{+,\theta,r,t_0}(t_0) \lesssim e^{-\delta r} \sup_{t\in \mathbb{R}} \| \tilde \epsilon_n \|_{L_t^\infty L_{\mathbf{x}}^2}^2. \end{equation} Likewise, we apply \eqref{E:Jm-left} and use that the uniform-in-time $L^2$ compactness hypothesis on $\tilde \epsilon_n$ implies $$\lim_{t_1\nearrow +\infty} J_{-,\theta,-r,t_0}(t_1) = 0$$ to conclude that \begin{equation} \label{E:Jmon-m} J_{-,\theta,-r,t_0}(t_0) \lesssim e^{-\delta r} \sup_{t\in \mathbb{R}} \| \tilde \epsilon_n \|_{L_t^\infty L_{\mathbf{x}}^2}^2.
\end{equation} Let us take $\theta = \frac{\pi}{4}$ (any number less than $\frac{\pi}3-\delta$ will suffice). Note that $$ J_{\pm, \theta, r, t_0}(t_0) = \int_{\mathbb{R}^3} \phi_{\pm} (\cos\theta(x-r) + \sin\theta \sqrt{1+y^2+z^2}) \eta^2(\mathbf{x}+\mathbf{a}(t_0),t_0) \,d \mathbf{x}. $$ The estimate \eqref{E:Jmon-p} gives the $L_{\mathbf{x}}^2$ estimate outside the cone of angle $\frac{\pi}{4}$ with the negative $x$-axis, with vertex at $(x,y,z) = (r,0,0)$. The estimate \eqref{E:Jmon-m} gives the $L_{\mathbf{x}}^2$ estimate outside the cone of angle $\frac{\pi}{4}$ with the positive $x$-axis, with vertex at $(x,y,z) = (-r,0,0)$. Combined, they give the $L_{\mathbf{x}}^2$ estimate outside the ball of radius $r$, completing the proof of Lemma \ref{L:ep-decay} (since $t_0\in \mathbb{R}$ was selected arbitrarily). \section{Comparability of higher Sobolev norms for $\tilde \epsilon_n$} \label{S:Sobolev-comparability} The goal of this section is to prove Lemma \ref{L:ep-comparability}. The proof is similar to \S\ref{S:higher-regularity}, although achieving the $H_{\mathbf{x}}^1$ bound below required a little bit more care -- there is no direct analogue in \S\ref{S:higher-regularity}, since in that section we start with the assumption of an $H_{\mathbf{x}}^1$ bound. Here, we do have assumption \eqref{E:comp1} (the estimate on the right), but we have to account for the $B^{-1}$ penalty when using this assumed bound. We therefore devised the strategy of first proving Lemma \ref{L:comp1}, which does not involve $P_N$ and thus allows for clean integration by parts in the term $\int (x-a_1) \, (\zeta^2)_x \, \zeta \, d\mathbf{x}$, to obtain the preliminary estimate \eqref{E:comp3}. We then use \eqref{E:comp3} in the $P_N$ calculation in Lemma \ref{L:comp2}. This is the main new idea in comparison to what is already in \S\ref{S:higher-regularity}. Before we begin, let us state and prove an elementary computational lemma.
In the statement, $P_N \, q \, P_M$ means the composition of operators $P_N \circ q \circ P_M$, where $q$ is the operator of multiplication by $q$. \begin{lemma} \label{L:weight-est} Let $q\in \mathcal{S}(\mathbb{R}^3)$ and $\omega>0$ arbitrary. Then for any $M,N \geq 1$, $$ \| P_N \, q \, P_M \|_{L^2\to L^2} \lesssim \min \left(\frac{M}{N}, \frac{N}{M}\right)^\omega $$ and $$ \| \langle \mathbf{x} \rangle \, P_N \, q \, P_M \|_{L^2\to L^2} \lesssim \min \left(\frac{M}{N}, \frac{N}{M}\right)^\omega $$ with constants depending on $q$ and $\omega$. \end{lemma} \begin{proof} By the Plancherel theorem, it suffices to prove the $L^2\to L^2$ estimates on the operators with kernels: $$ K_1(\boldsymbol{\xi},\boldsymbol{\xi}') = \chi(N^{-1} \boldsymbol{\xi}) \, \hat q( \boldsymbol{\xi} - \boldsymbol{\xi}') \, \chi(M^{-1} \boldsymbol{\xi}') \quad \mbox{and} $$ $$ K_2(\boldsymbol{\xi},\boldsymbol{\xi}') = \nabla_{\boldsymbol{\xi}} [\chi(N^{-1} \boldsymbol{\xi}) \, \hat q( \boldsymbol{\xi} - \boldsymbol{\xi}') \, \chi(M^{-1} \boldsymbol{\xi}')]. $$ It suffices to examine $K_1$, since the $\nabla_{\boldsymbol{\xi}}$ operator in $K_2$, when distributed into the product, does not produce harmful factors. If $N\sim M$, then we just use that each component in the composition is an $L^2\to L^2$ operator with norm $\lesssim 1$ to obtain a bound of $\lesssim 1$ for the composition. If $N\gg M$, then $|\boldsymbol{\xi} - \boldsymbol{\xi}'| \sim N$, so $|\hat q(\boldsymbol{\xi}-\boldsymbol{\xi}')| \lesssim N^{-\omega-3}$. Hence, $$ \| K \|_{L_{\boldsymbol{\xi}}^\infty L_{\boldsymbol{\xi}'}^1}^{1/2} \| K \|_{L_{\boldsymbol{\xi}'}^\infty L_{\boldsymbol{\xi}}^1}^{1/2} \lesssim N^{-\omega-3} M^{3/2}N^{3/2} \lesssim N^{-\omega}. $$ Similarly, if $M \gg N$, then $|\boldsymbol{\xi} - \boldsymbol{\xi}'| \sim M$, so $|\hat q(\boldsymbol{\xi}-\boldsymbol{\xi}')| \lesssim M^{-\omega-3}$. 
Hence, $$ \| K \|_{L_{\boldsymbol{\xi}}^\infty L_{\boldsymbol{\xi}'}^1}^{1/2} \| K \|_{L_{\boldsymbol{\xi}'}^\infty L_{\boldsymbol{\xi}}^1}^{1/2} \lesssim M^{-\omega-3} M^{3/2}N^{3/2} \lesssim M^{-\omega}. $$ The conclusion follows from these estimates and the Schur test. \end{proof} In this section, we prove Lemma \ref{L:ep-comparability}. In the language of $\zeta$, we can phrase this in a way that does not reference the index $n$, but is instead a statement about obtaining bounds that are independent of the constant $0<B\ll 1$ in the equation for $\zeta$. Let us recall from \S\ref{S:geom-decomp} the equation \eqref{E:zeta-1} for $\zeta$: \begin{equation} \label{E:zeta-1p} \begin{aligned}[t] \partial_t \zeta &= - \partial_x \Delta \zeta - 2 \partial_x ( Q_{c,\mathbf{a}} \zeta) + c^{-2} \langle \zeta, f_{c,\mathbf{a}}\rangle (\Lambda Q)_{c,\mathbf{a}} + c^{-2}\langle \zeta, \mathbf{g}_{c,\mathbf{a}} \rangle \cdot (\nabla Q)_{c,\mathbf{a}} \\ & \qquad - B \partial_x \zeta^2 + B \omega_c (\Lambda Q)_{c,\mathbf{a}} + B\boldsymbol{\omega}_{\mathbf{a}} \cdot (\nabla Q)_{c,\mathbf{a}}, \end{aligned} \end{equation} where by \eqref{E:eta-ODEs}, $$ |\omega_c| \lesssim 1 \quad \mbox{and} \quad |\boldsymbol{\omega}_{\mathbf{a}}| \lesssim 1. $$ We can assume that for all $\theta>0$, \begin{equation} \label{E:comp1} \| \langle x-a_x\rangle^{1/\theta} \zeta\|_{L_t^\infty L_{\mathbf{x}}^2} \lesssim_\theta 1 \quad \mbox{and} \quad \| \zeta \|_{L_t^\infty H_{\mathbf{x}}^1} \lesssim \alpha B^{-1} \end{equation} with constant depending on $\theta$ but independent of $B$ and global in time, and we can assume that for all $s\geq 2$ and all finite length time intervals $I$, \begin{equation} \label{E:comp2} \| \zeta \|_{L_I^\infty H_{\mathbf{x}}^s} < \infty, \end{equation} where the bound is finite but can depend on anything, like the time interval or the constant $B$. 
With these assumptions, we \emph{aim to prove} that for all $s \geq 1$, \begin{equation} \label{E:comp2b} \| \zeta \|_{L_t^\infty H_{\mathbf{x}}^s} \lesssim_s 1, \end{equation} where the constant depends on $s$ but is independent of $B$ and global in time. The assertion \eqref{E:comp2b} in the case $s=1$ will be established in Lemma \ref{L:comp4} below. The argument is broken into steps with $$ \text{Lemma }\ref{L:comp1} \implies \text{Lemma }\ref{L:comp2} \implies \text{Lemma }\ref{L:comp3} \implies \text{Lemma }\ref{L:comp4}. $$ Higher values of $s$ are then addressed recursively by applying Lemma \ref{L:comp5} and Lemma \ref{L:comp6}, starting with $s=\frac32$, then proceeding by half-integer steps upward. \begin{lemma} \label{L:comp1} Suppose \eqref{E:comp1} holds, and \eqref{E:comp2} holds for $s=1$. Then, provided $|I| \ll 1$, \begin{equation} \label{E:comp3} \| \zeta \|_{L_I^2 H_{\mathbf{x}}^1} \lesssim 1 \end{equation} with constant independent of $B$ and $I$. \end{lemma} \begin{proof} By plugging in \eqref{E:zeta-1p}, we obtain \begin{align*} \partial_t \int (x-a_1) \zeta^2 \, d\mathbf{x} &= -2 \int (x-a_1) \, \zeta \, \partial_x \Delta \zeta \, d\mathbf{x} \\ & \qquad - 4 \int (x-a_1) \, \zeta \, \partial_x ( Q_{c,\mathbf{a}} \zeta) \, d\mathbf{x} \\ & \qquad - 2 B \int (x-a_1) \, \zeta \, \partial_x ( \zeta^2) \, d \mathbf{x} + G(t), \end{align*} where \begin{align*} G(t) &= -2 c^{-2} \langle \zeta, f_{c,\mathbf{a}}\rangle \int (x-a_1) \, \zeta \, (\Lambda Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad -2 c^{-2} \langle \zeta, \mathbf{g}_{c,\mathbf{a}} \rangle \cdot \int (x-a_1) \, \zeta \, (\nabla Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad + 2B \omega_c \int (x-a_1) \, \zeta \, ( \Lambda Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad + 2B \boldsymbol{\omega}_{\mathbf{a}} \cdot \int (x-a_1) \, \zeta \, (\nabla Q)_{c,\mathbf{a}} \, d \mathbf{x}.
\end{align*} Simplifying the term $-2 \int (x-a_1) \, \zeta \, \partial_x \Delta \zeta \, d\mathbf{x}$ (the first term on the right) via integration by parts, moving it over to the left, and integrating in time over $I=[t_-,t_+]$, we obtain \begin{equation} \label{E:comp17} \| \zeta \|_{L_I^2 \dot H_{\mathbf{x}}^1}^2 \lesssim \int_I\int_{\mathbf{x}} [3 (\partial_x \zeta)^2 + (\partial_y \zeta)^2 + (\partial_z \zeta)^2] \, d\mathbf{x} \, dt = H_1+H_2+H_3 + \int_{t_-}^{t_+}G(t) \,dt, \end{equation} where \begin{align*} H_1 &\stackrel{\rm{def}}{=} \int_{\mathbf{x}} (x-a_1) \zeta^2 \, d\mathbf{x} \Big|^{t=t_+}_{t=t_-}, \\ H_2 &\stackrel{\rm{def}}{=} - 4 \int_I \int_{\mathbf{x}} (x-a_1) \, \zeta \, \partial_x ( Q_{c,\mathbf{a}} \zeta) \, d\mathbf{x} \, dt, \\ H_3 &\stackrel{\rm{def}}{=} - 2 B \int_I \int_{\mathbf{x}} (x-a_1) \, \zeta \, \partial_x ( \zeta^2) \, d \mathbf{x} \, dt. \end{align*} First, we address $H_3$. By integration by parts, $$ \int_{\mathbf{x}} (x-a_1) \zeta (\zeta^2)_x \, d\mathbf{x} = - \frac{2}{3}\int_{\mathbf{x}} \zeta^3 \, d\mathbf{x}, $$ and hence, $$ \left| \int_{\mathbf{x}} (x-a_1) \, \zeta (\zeta^2)_x \, d\mathbf{x} \right| \lesssim \| \zeta\|_{L_{\mathbf{x}}^3}^3 \lesssim \| \zeta \|_{L^2_{\mathbf{x}}}^{3/2} \| \zeta \|_{\dot H^1_{\mathbf{x}}}^{3/2} \lesssim \|\zeta\|_{L_{\mathbf{x}}^2}^6 + \|\zeta \|_{\dot H_{\mathbf{x}}^1}^2. $$ Adding the time integration, we obtain $$ |H_3| \lesssim B |I| \, \|\zeta\|_{L_I^\infty L_{\mathbf{x}}^2}^6 + B \, \|\zeta \|_{L_I^2 \dot H_{\mathbf{x}}^1}^2 \lesssim 1 + B \, \|\zeta \|_{L_I^2 \dot H_{\mathbf{x}}^1}^2. $$ Owing to the $B$ coefficient, the second term is easily absorbed on the left in \eqref{E:comp17}.
Next, we address $H_2$: \begin{align*} \hspace{0.3in}&\hspace{-0.3in} \int_{\mathbf{x}} (x-a_1) \, \zeta \, \partial_x ( Q_{c,\mathbf{a}} \zeta) \, d\mathbf{x} = \int_{\mathbf{x}} (x-a_1) \, (\partial_xQ_{c,\mathbf{a}}) \zeta^2 \, d\mathbf{x} + \int_{\mathbf{x}} (x-a_1) \, Q_{c,\mathbf{a}} \, \zeta \zeta_x \, d\mathbf{x}\\ & = \int_{\mathbf{x}} (x-a_1) \, (\partial_xQ_{c,\mathbf{a}}) \zeta^2 \, d\mathbf{x} -\frac12 \int_{\mathbf{x}} \partial_x[(x-a_1) \, Q_{c,\mathbf{a}}] \, \zeta^2 \, d\mathbf{x}. \end{align*} Thus, $$ |H_2| \lesssim |I| \|\zeta\|_{L_I^\infty L_{\mathbf{x}}^2}^2 \lesssim 1. $$ Next, $$ |H_1| \lesssim \|\langle x-a_1 \rangle \zeta \|_{L_I^\infty L_{\mathbf{x}}^2} \| \zeta \|_{L_I^\infty L_{\mathbf{x}}^2} \lesssim 1. $$ The terms in $G$ are straightforwardly bounded by $$ \|G\|_{L_I^1} \lesssim |I| \, \langle \|\zeta\|_{L_I^\infty L_{\mathbf{x}}^2} \rangle^2 \lesssim 1. $$ With all of these estimates, the bound follows from \eqref{E:comp17}. \end{proof} \begin{lemma} \label{L:comp2} Suppose $|I| \ll 1$, \eqref{E:comp1} holds, and \eqref{E:comp2} holds for $s=1$, so that \eqref{E:comp3} holds as well. Then for all $N\geq 1$ and $0\leq \omega \leq \frac18$, \begin{equation} \label{E:comp4} \| P_N \zeta \|_{L_I^2 H_{\mathbf{x}}^1} \lesssim N^{-\omega} B^{-\omega} \end{equation} with constant independent of $N$, $B$ and $I$. (Notice that $B^{-\omega}$ is a penalty but $N^{-\omega}$ is a gain.) \end{lemma} Therefore, we can obtain a gain in $N$ at the expense of a penalty in $B$.
\begin{proof} By plugging in \eqref{E:zeta-1p}, we obtain \begin{align*} \partial_t \int (x-a_1) (P_N \zeta)^2 \, d\mathbf{x} &= -2 \int (x-a_1) \, P_N \zeta \, \partial_x \Delta P_N \zeta \, d\mathbf{x} \\ & \qquad - 4 \int (x-a_1) \, P_N \zeta \, \partial_x P_N ( Q_{c,\mathbf{a}} \zeta) \, d\mathbf{x} \\ & \qquad - 2 B \int (x-a_1) \, P_N \zeta \, \partial_x P_N ( \zeta^2) \, d \mathbf{x} + G(t), \end{align*} where \begin{align*} G(t) &= -2 c^{-2} \langle \zeta, f_{c,\mathbf{a}}\rangle \int (x-a_1) \, P_N \zeta \, P_N (\Lambda Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad -2 c^{-2} \langle \zeta, \mathbf{g}_{c,\mathbf{a}} \rangle \cdot \int (x-a_1) \, P_N \zeta \, P_N (\nabla Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad + 2B \omega_c \int (x-a_1) \, P_N\zeta \, ( \Lambda Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad + 2B \omega_{\mathbf{a}} \cdot \int (x-a_1) \, P_N \zeta \, (\nabla Q)_{c,\mathbf{a}} \, d \mathbf{x}. \end{align*} Simplifying the term $-2 \int (x-a_1) \, P_N \zeta \, \partial_x \Delta P_N \zeta \, d\mathbf{x}$ (the first term on the right) via integration by parts, moving it over to the left, and integrating in time over $I=[t_-,t_+]$, we obtain \begin{equation} \label{E:stuff1} N^2 \, \| P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2}^2 \lesssim \int_I\int_{\mathbf{x}} [3 (\partial_x P_N \zeta)^2 + (\partial_y P_N\zeta)^2] \, d\mathbf{x} \, dt = H_1+H_2+H_3 + \int_{t_-}^{t_+}G(t) \,dt, \end{equation} where, similarly as in the previous lemma, we define \begin{align*} H_1 &\stackrel{\rm{def}}{=} \int_{\mathbf{x}} (x-a_1) (P_N \zeta)^2 \, d\mathbf{x} \Big|^{t=t_+}_{t=t_-} \\ H_2 &\stackrel{\rm{def}}{=} - 4 \int_I \int_{\mathbf{x}} (x-a_1) \, P_N \zeta \, \partial_x P_N ( Q_{c,\mathbf{a}} \zeta) \, d\mathbf{x} \, dt \\ H_3 &\stackrel{\rm{def}}{=} - 2 B \int_I \int_{\mathbf{x}} (x-a_1) \, P_N \zeta \, \partial_x P_N ( \zeta^2) \, d \mathbf{x} \, dt. \end{align*} The terms in $G$ are easily bounded. 
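For instance, for the first term in $G$, placing the weight on the localized profile (a sketch; we use that $(\Lambda Q)_{c,\mathbf{a}}$ decays rapidly, so that $\| (x-a_1) P_N (\Lambda Q)_{c,\mathbf{a}} \|_{L_{\mathbf{x}}^2} \lesssim 1$ uniformly in $N$), $$ \left| \int_{\mathbf{x}} (x-a_1) \, P_N \zeta \, P_N (\Lambda Q)_{c,\mathbf{a}} \, d\mathbf{x} \right| \leq \| P_N \zeta \|_{L_{\mathbf{x}}^2} \, \| (x-a_1) P_N (\Lambda Q)_{c,\mathbf{a}} \|_{L_{\mathbf{x}}^2} \lesssim \| \zeta \|_{L_{\mathbf{x}}^2}, $$ and since the prefactor satisfies $c^{-2} |\langle \zeta, f_{c,\mathbf{a}} \rangle| \lesssim \| \zeta \|_{L_{\mathbf{x}}^2}$, this contribution to $\|G\|_{L_I^1}$ is $\lesssim |I| \, \| \zeta \|_{L_I^\infty L_{\mathbf{x}}^2}^2 \lesssim 1$; the remaining terms in $G$ are handled in the same way.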
Note that in estimating $H_3$, we can use \eqref{E:comp3} as follows \begin{align*} |H_3| &\leq B \,\| \partial_x P_N \zeta^2 \|_{L_I^1L_{\mathbf{x}}^{3/2}} \| (x-a_1) P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^3} \leq B \, \| \zeta \zeta_x \|_{L_I^1L_{\mathbf{x}}^{3/2}} \| (x-a_1) P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^3} \\ &\leq B \,\| \zeta\|_{L_I^2 L_{\mathbf{x}}^6} \| \zeta_x \|_{L_I^2L_{\mathbf{x}}^2} \| (x-a_1) P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^3} \lesssim B\, \| (x-a_1) P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^3} \end{align*} by Sobolev embedding and \eqref{E:comp3}. Following through with estimate \eqref{E:HR2}, $\theta=\frac12$, we obtain $$ |H_3| \lesssim B\, \| |x-a_1|^2 P_N\zeta \|_{L_I^\infty L_\mathbf{x}^2}^{1/2} \| P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^6}^{1/2}. $$ By \eqref{E:HR1}, $$ |H_3| \lesssim B \| |x-a_1|^{2/\theta} P_N\zeta \|_{L_I^\infty L_\mathbf{x}^2}^{\theta/2} \| P_N\zeta \|_{L_I^\infty L_\mathbf{x}^2}^{(1-\theta)/2} \| P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^6}^{1/2}. $$ Finally, by \eqref{E:HR3} and Sobolev embedding, $$ |H_3| \lesssim B\, \| |x-a_1|^{2/\theta} \zeta \|_{L_I^\infty L_\mathbf{x}^2}^{\theta/2} \| P_N\zeta \|_{L_I^\infty L_\mathbf{x}^2}^{(1-\theta)/2} \| P_N \zeta \|_{L_I^\infty H_{\mathbf{x}}^1}^{1/2}. $$ By \eqref{E:comp1}, $$ |H_3| \lesssim B^{\theta/2} N^{-\frac{1-\theta}{2}}, $$ which suffices for \eqref{E:comp4}. For $H_2$, we estimate as $$ |H_2| \lesssim N \, \| P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2} \| (x-a_1) P_N ( Q_{c,\mathbf{a}} \zeta) \|_{L_I^2 L_{\mathbf{x}}^2}. $$ Expanding $\zeta = \sum_{M\geq 1} P_M \zeta$, $$ |H_2| \lesssim N \, \| P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2} \sum_{M \geq 1} \| (x-a_1) P_N ( Q_{c,\mathbf{a}} P_M \zeta) \|_{L_I^2 L_{\mathbf{x}}^2}. $$ Applying Lemma \ref{L:weight-est}, we obtain $$ |H_2| \lesssim N \, \| P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2} \, \| \zeta \|_{L_I^2 H_\mathbf{x}^1} \sum_{M \geq 1} \min (NM^{-1}, MN^{-1}) M^{-1}. $$ The sum carries out to $N^{-1}$. 
By \eqref{E:comp3}, we can bound $$ |H_2| \lesssim \| P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2} \lesssim \epsilon N^2\|P_N \zeta\|_{L_I^2 L_{\mathbf{x}}^2}^2 + \epsilon^{-1} N^{-2}. $$ The first term can be absorbed into the main term \eqref{E:stuff1}, while the second term is an acceptable contribution to the upper bound in \eqref{E:comp4}. For $H_1$, we estimate as $$ |H_1| \lesssim \| |x-a_1| P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^2} \| P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^2}. $$ From \eqref{E:comp1}, it follows that $|H_1| \lesssim 1$. On the other hand, we can also estimate as $$ H_1 \lesssim \| |x-a_1| \zeta \|_{L_I^\infty L_{\mathbf{x}}^2} N^{-1} \| P_N \zeta \|_{L_I^\infty H_{\mathbf{x}}^1}, $$ and by applying \eqref{E:comp1}, obtain $|H_1| \lesssim B^{-1}N^{-1}$. Interpolating, we obtain a bound of $B^{-\omega} N^{-\omega}$ for any $0\leq \omega \leq 1$. \end{proof} \begin{lemma} \label{L:comp3} Assume \eqref{E:comp1} and suppose \eqref{E:comp2} holds for $s=1$. Suppose $I$ is an interval of length $0<\delta \ll 1$. Then \eqref{E:comp3} and \eqref{E:comp4} hold, and in addition for $N\geq 1$, \begin{equation} \label{E:comp5} \|P_N \zeta \|_{L_x^2 L_{yz I}^\infty} \lesssim (\ln^+ N)^4 \delta^{-1/2} \end{equation} with constant independent of $B$ and $I$. \end{lemma} \begin{proof} Let $t_0\in I$ be such that \begin{equation} \label{E:comp10} \|\zeta(t_0)\|_{H_{\mathbf{x}}^1} = \min_{t\in I} \|\zeta(t)\|_{H_{\mathbf{x}}^1} \leq \delta^{-1/2} \| \zeta \|_{L_I^2H_{\mathbf{x}}^1} \lesssim \delta^{-1/2}. \end{equation} Then we estimate \begin{equation} \label{E:comp8} \gamma_N \stackrel{\rm{def}}{=} \| P_N \zeta(t) \|_{L_x^2 L_{y z I}^\infty} \end{equation} as follows. 
Note that \begin{equation} \label{E:comp9} P_N \zeta(t) = P_N U(t-t_0) \zeta(t_0) + \int_{t_0}^t U(t-t') P_N F(t') \,d t', \end{equation} where $F = \sum_j F_j$ and the $F_j$ are terms in \eqref{E:zeta-1p}, specifically, \begin{equation} \label{E:comp9b} \begin{aligned} & F_1 = - 2 \partial_x ( Q_{c,\mathbf{a}} \zeta) \,, && F_2 = c^{-2} \langle \zeta, f_{c,\mathbf{a}}\rangle (\Lambda Q)_{c,\mathbf{a}} \,, && F_3 = c^{-2}\langle \zeta, \mathbf{g}_{c,\mathbf{a}} \rangle \cdot (\nabla Q)_{c,\mathbf{a}} \,, \\ &F_4 = - B \partial_x \zeta^2 \,, &&F_5 = B \omega_c (\Lambda Q)_{c,\mathbf{a}} \,, &&F_6 = B\boldsymbol{\omega}_{\mathbf{a}} \cdot (\nabla Q)_{c,\mathbf{a}} \,, \end{aligned} \end{equation} and the estimate of \eqref{E:comp8} via Lemma \ref{L:RV1} applied to \eqref{E:comp9} corresponding to $F_j$ will be denoted by $\gamma_{N,j}$, so that we have $$\gamma_N \leq \sum_j \gamma_{N,j}.$$ By \eqref{E:HR20}, \begin{equation} \label{E:comp11} \|P_N U(t-t_0) \zeta (t_0) \|_{L_x^2 L_{y z I}^\infty} \lesssim (\ln^+ N)^2 \| \zeta(t_0) \|_{H_{\mathbf{x}}^1} \lesssim \delta^{-1/2} (\ln^+ N)^2, \end{equation} where in the last step, we used \eqref{E:comp10}. Now we consider the term $F_4$. By \eqref{E:HR21}, $$ \gamma_{N,4} \lesssim B (\ln^+ N)^2 N \|P_N(\zeta^2) \|_{L_x^1 L_{yzI}^2}. $$ Using the decomposition \begin{equation} \label{E:comp15} P_N( \zeta^2) \sim P_N(P_{\lesssim N} \zeta \cdot P_N \zeta) + \sum_{N'\geq N} P_N( P_{N'}\zeta \cdot P_{N'}\zeta) \end{equation} and H\"older, we obtain $$ \gamma_{N,4} \lesssim B (\ln^+ N)^2 N ( \|P_{\lesssim N} \zeta \|_{L_x^2 L_{yzI}^\infty} \|P_N \zeta \|_{L_x^2 L_{yzI}^2} + \sum_{N'\geq N} \|P_{N'} \zeta \|_{L_x^2 L_{yzI}^\infty} \|P_{N'} \zeta \|_{L_x^2 L_{yzI}^2} ). 
$$ By \eqref{E:comp4} in Lemma \ref{L:comp2}, $$ \gamma_{N,4} \lesssim B^{1-\omega} N^{-\omega} (\ln^+ N)^2 \left( \|P_{\lesssim N} \zeta \|_{L_x^2 L_{yzI}^\infty} + \sum_{N'\geq N} \frac{N^{1+\omega}}{(N')^{1+\omega}} \|P_{N'} \zeta \|_{L_x^2 L_{yzI}^\infty} \right), $$ and thus, \begin{equation} \label{E:comp12} \gamma_{N,4}\lesssim B^{1-\omega} N^{-\omega/2} \sum_{N' \geq 1} \min\left( 1 , \frac{N^{1+\omega}}{(N')^{1+\omega}}\right) \|P_{N'} \zeta \|_{L_x^2 L_{y z I}^\infty}. \end{equation} By \eqref{E:HR21}, \begin{align*} \gamma_{N,1} &\lesssim (\ln^+ N)^2 \|P_N\partial_x ( Q_{c,\mathbf{a}} \zeta) \|_{L_x^1 L_{y z I}^2}\\ &\lesssim (\ln^+ N)^2 \|\partial_x ( Q_{c,\mathbf{a}} \zeta) \|_{L_x^1 L_{y z I}^2} \\ &\lesssim (\ln^+ N)^2 ( \|Q_{c,\mathbf{a}}\|_{L_x^2 L_{y z I}^\infty}+\|(\partial_x Q)_{c,\mathbf{a}}\|_{L_x^2 L_{y z I}^\infty}) \| \zeta \|_{L_I^2 H_{\mathbf{x}}^1}. \end{align*} By \eqref{E:comp3} in Lemma \ref{L:comp1}, \begin{equation} \label{E:comp13} \gamma_{N,1} \lesssim (\ln^+ N)^2. \end{equation} Since for each $\theta>0$, we have $$ \|P_N (\Lambda Q)_{c,\mathbf{a}} \|_{L_x^2 L_{y z I}^\infty} \lesssim N^{-\theta} \quad \mbox{and} \quad \|P_N (\nabla Q)_{c,\mathbf{a}} \|_{L_x^2 L_{y z I}^\infty} \lesssim N^{-\theta}, $$ the remaining terms are more straightforward to estimate and we have \begin{equation} \label{E:comp14} \gamma_{N,2} + \gamma_{N,3} + \gamma_{N,5} + \gamma_{N,6} \lesssim 1. \end{equation} By \eqref{E:comp9}, \eqref{E:comp11}, \eqref{E:comp12}, \eqref{E:comp13}, and \eqref{E:comp14}, we have $$ \gamma_N \lesssim \delta^{-1/2}(\ln^+ N)^2 + B^{1-\omega} N^{-\omega/2} \sum_{N' \geq 1} \min( 1 , \frac{N^{1+\omega}}{(N')^{1+\omega}}) \gamma_{N'}. $$ Multiply by $(\ln^+ N)^{-4}$ and sum over dyadic $N \geq 1$ to obtain $$ \sum_{N \geq 1} (\ln^+ N)^{-4} \gamma_N \lesssim \delta^{-1/2} + B^{1-\omega} \sum_{N \geq 1} N^{-\omega/2} \sum_{N' \geq 1} \min( 1 , \frac{N^{1+\omega}}{(N')^{1+\omega}}) \gamma_{N'}. 
$$ Interchanging the order of $N$ and $N'$ summation, we obtain $$ \sum_{N \geq 1} (\ln^+ N)^{-4} \gamma_N \lesssim \delta^{-1/2} + B^{1-\omega} \sum_{N' \geq 1} (N')^{-\omega/2} \gamma_{N'}. $$ Since $B^{1-\omega} \ll 1$ and $(N')^{-\omega/2} \lesssim (\ln^+ N')^{-4}$, it follows that $$ \sum_{N \geq 1} (\ln^+ N)^{-4} \gamma_N \lesssim \delta^{-1/2}, $$ and, in particular, \eqref{E:comp5} holds. \end{proof} \begin{lemma} \label{L:comp4} Suppose \eqref{E:comp1} and \eqref{E:comp2} hold for $s=1$. Suppose $I$ is an interval of length $0<\delta \ll 1$. Then \eqref{E:comp3}, \eqref{E:comp4} and \eqref{E:comp5} hold, and moreover, $$ \| \zeta \|_{L_I^\infty H_{\mathbf{x}}^1} \lesssim \delta^{-1/2} $$ with constant independent of $B$ and $I$. \end{lemma} \begin{proof} We start by writing the Duhamel formula $$ \zeta(t) = U(t-t_0) \zeta(t_0) + \sum_{j=1}^6 \int_{t_0}^t U(t-t') F_j(t') \, dt' $$ with $F_j$ defined by \eqref{E:comp9b}. By the standard estimate for the linear flow, $$ \| \zeta \|_{L_I^\infty H_{\mathbf{x}}^1} \lesssim \delta^{-1/2} + \sum_{j=1}^6 \mu_j, $$ where $$ \mu_j = \left\| \int_{t_0}^t U(t-t') F_j(t') \, dt' \right\|_{L_I^\infty H_{\mathbf{x}}^1}. $$ By \eqref{E:HR21b}, $$ \mu_4 \lesssim B\| \nabla( \zeta^2) \|_{L_x^1 L_{y z I}^2}. $$ Using, as usual, \eqref{E:comp15}, $$ \mu_4 \lesssim B \sum_{N \geq 1} N \left( \| P_{\lesssim N} \zeta \|_{L_x^2 L_{y z I}^\infty} \|P_N \zeta \|_{L_I^2 L_\mathbf{x}^2} + \sum_{N' \geq N} \| P_{\lesssim N'} \zeta \|_{L_x^2 L_{y z I}^\infty} \|P_{N'} \zeta \|_{L_I^2 L_\mathbf{x}^2} \right). $$ By \eqref{E:comp4} and \eqref{E:comp5}, $$ \mu_4 \lesssim B^{1-\omega} \sum_{N\geq 1} \left( (\ln^+ N)^5 \delta^{-1/2} N^{-\omega} + \sum_{N'\geq N} \frac{N}{N'} (\ln^+ N')^4 (N')^{-\omega} \delta^{-1/2} \right) \lesssim \delta^{-1/2}. 
$$ By \eqref{E:HR21b} and \eqref{E:comp3}, $$ \mu_1 \lesssim \| \nabla (Q_{c,\mathbf{a}} \zeta) \|_{L_x^1 L_{y z I}^2} \lesssim (\|Q_{c,\mathbf{a}} \|_{L_x^2 L_{y z I}^\infty} +\|\nabla Q_{c,\mathbf{a}} \|_{L_x^2 L_{y z I}^\infty}) \|\zeta \|_{L_I^2 H_{\mathbf{x}}^1} \lesssim 1. $$ The estimates for $\mu_2$, $\mu_3$, $\mu_5$, and $\mu_6$ are more straightforward, since the terms $(\Lambda Q)_{c,\mathbf{a}}$ and $(\nabla Q)_{c,\mathbf{a}}$ absorb derivatives. \end{proof} Thus, we have established that \eqref{E:comp2b} holds for $s=1$. From here, the argument is similar but a bit easier, and we increment by half-derivatives recursively with Lemmas \ref{L:comp5}-\ref{L:comp6} below. \begin{lemma} \label{L:comp5} Suppose \eqref{E:comp1} holds, and \eqref{E:comp2b} holds for some $s\geq 1$. Then for $|I|\leq 1$, \begin{equation} \label{E:comp7} \| \zeta \|_{L_I^2 H_{\mathbf{x}}^{s+\frac34}} \lesssim 1. \end{equation} \end{lemma} \begin{proof} We know that $$ \sum_{N\geq 1} N^{2s+\frac32} \| P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2}^2 < \infty, $$ so we just have to prove that it is bounded independently of $B$ and $I$; this is a key difference between the analysis here and that in \S \ref{S:higher-regularity}, and it allows us to give a simpler argument here. 
By plugging in \eqref{E:zeta-1p}, we obtain \begin{align*} \partial_t \int (x-a_1) (P_N \zeta)^2 \, d\mathbf{x} &= -2 \int (x-a_1) \, P_N \zeta \, \partial_x \Delta P_N \zeta \, d\mathbf{x} \\ & \qquad - 4 \int (x-a_1) \, P_N \zeta \, \partial_x P_N ( Q_{c,\mathbf{a}} \zeta) \, d\mathbf{x} \\ & \qquad - 2 B \int (x-a_1) \, P_N \zeta \, \partial_x P_N ( \zeta^2) \, d \mathbf{x} + G(t), \end{align*} where \begin{align*} G(t) &= -2 c^{-2} \langle \zeta, f_{c,\mathbf{a}}\rangle \int (x-a_1) \, P_N \zeta \, P_N (\Lambda Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad -2 c^{-2} \langle \zeta, \mathbf{g}_{c,\mathbf{a}} \rangle \cdot \int (x-a_1) \, P_N \zeta \, P_N (\nabla Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad + 2B \omega_c \int (x-a_1) \, P_N\zeta \, ( \Lambda Q)_{c,\mathbf{a}} \, d\mathbf{x} \\ & \qquad + 2B \omega_{\mathbf{a}} \cdot \int (x-a_1) \, P_N \zeta \, (\nabla Q)_{c,\mathbf{a}} \, d \mathbf{x}. \end{align*} Simplifying the term $-2 \int (x-a_1) \, P_N \zeta \, \partial_x \Delta P_N \zeta \, d\mathbf{x}$ (the first term on the right) via integration by parts, moving it over to the left, and integrating in time over $I=[t_-,t_+]$, we obtain $$ N^2 \| P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2}^2 \lesssim \int_I\int_{\mathbf{x}} [3 (\partial_x P_N \zeta)^2 + (\partial_y P_N\zeta)^2] \, d\mathbf{x} \, dt = H_1+H_2+H_3 + \int_{t_-}^{t_+}G(t) \,dt, $$ where \begin{align*} H_1 &\stackrel{\rm{def}}{=} \int_{\mathbf{x}} (x-a_1) (P_N \zeta)^2 \, d\mathbf{x} \Big|^{t=t_+}_{t=t_-}, \\ H_2 &\stackrel{\rm{def}}{=} - 4 \int_I \int_{\mathbf{x}} (x-a_1) \, P_N \zeta \, \partial_x P_N ( Q_{c,\mathbf{a}} \zeta) \, d\mathbf{x} \, dt, \\ H_3 &\stackrel{\rm{def}}{=} - 2 B \int_I \int_{\mathbf{x}} (x-a_1) \, P_N \zeta \, \partial_x P_N ( \zeta^2) \, d \mathbf{x} \, dt. 
\end{align*} Multiply by $N^{2s-\frac12}$ and sum over dyadic $N\geq 1$, to obtain \begin{equation} \label{E:comp16} \sum_{N\geq 1} N^{2s+\frac32} \|P_N \zeta\|_{L_I^2 L_{\mathbf{x}}^2}^2 \lesssim \sum_{j=1}^3 \sum_{N\geq 1} N^{2s-\frac12} H_j + \sum_{N\geq 1} N^{2s-\frac12} \|G\|_{L_I^1}. \end{equation} Let us first focus on the term $H_3$, which after integration by parts takes the form $$ H_3 = 2 B \int_I \int_{\mathbf{x}} P_N \zeta \, P_N ( \zeta^2) \, d \mathbf{x} \, dt + 2 B \int_I \int_{\mathbf{x}} (x-a_1) \, \partial_x P_N \zeta \, P_N ( \zeta^2) \, d \mathbf{x} \, dt. $$ Using \eqref{E:comp15}, \begin{align*} H_3 &\lesssim B \| \langle x-a_1\rangle P_N ( P_{\lesssim N} \zeta \, P_N \zeta) \, \partial_x P_N \zeta \|_{L_I^1 L_{\mathbf{x}}^1} \\ & \qquad + B \sum_{N'\geq N}\| \langle x-a_1\rangle P_N ( P_{N'} \zeta \, P_{N'} \zeta) \, \partial_x P_N \zeta \|_{L_I^1 L_{\mathbf{x}}^1}. \end{align*} Denote by $H_{31}$ and $H_{32}$ the first and second terms on the right side of the above estimate. Since $\langle x \rangle P_N \langle x \rangle^{-1}$ is an $L^2_{\mathbf{x}} \to L^2_{\mathbf{x}}$ bounded operator with operator norm $\lesssim 1$ (independent of $N\geq 1$), \begin{align*} H_{31} &\lesssim B\| \langle x-a_1 \rangle P_{\lesssim N} \zeta \, \, P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2} \, \| \partial_x P_N \zeta\|_{L_I^2 L_{\mathbf{x}}^2}\\ &\lesssim B \| \langle x-a_1\rangle P_{\lesssim N} \zeta \|_{L_I^\infty L_{\mathbf{x}}^3} \, \|P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^6} \, \| \partial_x P_N \zeta \|_{L_I^2 L_\mathbf{x}^2}. \end{align*} By the Bernstein inequality and the fact that $\| \langle x-a_1\rangle P_{\lesssim N} \zeta \|_{L_I^\infty L_{\mathbf{x}}^3} \lesssim 1$ by the hypotheses (since $s>\frac12$), we get $$ H_{31} \lesssim B N^2 \|P_N \zeta \|_{L_I^2 L_\mathbf{x}^2}^2. $$ A similar analysis of the other term gives $$ H_{32} \lesssim B \sum_{N'\geq N} (N')^2 \|P_{N'} \zeta \|_{L_I^2 L_\mathbf{x}^2}^2. $$ Thus, $$ H_3 \lesssim B \sum_{N' \gtrsim N} (N')^2 \| P_{N'} \zeta \|_{L_I^2 L_{\mathbf{x}}^2}^2. 
$$ By reversing the order of the double sum (sum in $N$ and sum in $N'$), we obtain $$ \sum_{N \geq 1} N^{2s-\frac12} H_3 \lesssim B\sum_{N' \geq 1} (N')^{2s+\frac32} \|P_{N'}\zeta\|_{L_I^2L_{\mathbf{x}}^2}^2. $$ Since $B\ll 1$, this term can be absorbed back on the left in \eqref{E:comp16}. The term $H_2$ is handled as in Lemma \ref{L:comp2}. $$ |H_2| \lesssim N \|P_N \zeta \|_{L_I^2L_{\mathbf{x}}^2} \| (x-a_1) \tilde P_N ( Q_{c,\mathbf{a}} \zeta) \|_{L_I^2 L_{\mathbf{x}}^2}, $$ where $\tilde P_N$ is a Littlewood-Paley multiplier different from $P_N$. By Lemma \ref{L:weight-est}, $$ \| (x-a_1) P_N( Q_{c,\mathbf{a}} P_M \zeta) \|_{L_{\mathbf{x}}^2} \lesssim \min\left( \frac{M}{N}, \frac{N}{M}\right)^{s+1} \|P_M \zeta \|_{L_{\mathbf{x}}^2}. $$ Thus, upon expanding $\zeta = \sum_{M \geq 1} P_M \zeta$, we obtain $$ |H_2| \lesssim N \| P_N \zeta \|_{L_{\mathbf{x}}^2} \sum_{M\geq 1} \min\left( \frac{M}{N}, \frac{N}{M}\right)^{s+1}\| P_M \zeta \|_{L_{\mathbf{x}}^2}. $$ By Cauchy-Schwarz and the discrete Schur test applied to the kernel $$ K(M,N) = N^{s+\frac14} \min(MN^{-1}, NM^{-1})^{s+1} M^{-s-\frac14} \leq \min( N^{-3/4}M^{3/4}, N^{2s+1}M^{-(2s+1)}), $$ we obtain $$ \sum_{N\geq 1} N^{2s-\frac12}|H_2| \lesssim \sum_{N\geq 1} N^{2s+\frac12} \|P_N \zeta\|_{L_{\mathbf{x}}^2}^2. $$ This term can be absorbed for $N \gg 1$, while for bounded $N$ it is trivially controlled. Specifically, for $0< \delta \ll 1$ small but independent of $N$, \begin{align*} \sum_{N\geq 1} N^{2s-\frac12} H_2 &\lesssim \sum_{1\leq N\leq \delta^{-1}} N^{2s+\frac12} \|P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2}^2 + \sum_{N\geq \delta^{-1}} N^{2s+\frac12} \|P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2}^2 \\ &\lesssim \delta^{-1/2}|I|\sum_{1\leq N\leq \delta^{-1}} N^{2s} \|P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^2}^2 + \delta \sum_{N\geq \delta^{-1}} N^{2s+\frac32} \|P_N \zeta \|_{L_I^2 L_{\mathbf{x}}^2}^2. 
\end{align*} For $\delta$ sufficiently small, the second term can be absorbed on the left in \eqref{E:comp16}. For $H_1$ we use \begin{align*} H_1 &\leq \|\langle x-a_1 \rangle P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^2} \|P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^2} \\ & \lesssim \| \langle x-a_1 \rangle^{1/\theta} P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^2}^\theta \|P_N \zeta \|_{L_I^\infty L_\mathbf{x}^2}^{2-\theta} \\ & \lesssim N^{-s(2-\theta)} ( N^s \|P_N \zeta \|_{L_I^\infty L_{\mathbf{x}}^2})^{2-\theta}, \end{align*} and therefore, $$ \sum_{N \geq 1} N^{2s-\frac12} H_1 \lesssim \sum_{N \geq 1} N^{-\frac12 + s\theta} ( N^s \|P_N \zeta \|_{L_{\mathbf{x}}^2} )^{2-\theta}. $$ Since by hypothesis $N^s \|P_N \zeta \|_{L_{\mathbf{x}}^2} \lesssim 1$, the above sum is $\lesssim 1$, provided we take $\theta< \frac{1}{2s}$. Thus, this contributes a constant term to the right side of \eqref{E:comp16}. Finally, the terms in $G$ are straightforward to bound in \eqref{E:comp16}, using $$ \| \langle x-a_1 \rangle P_N q \|_{L_{\mathbf{x}}^2} \lesssim \|\tilde P_N \langle x -a_1 \rangle q \|_{L_{\mathbf{x}}^2}, $$ where $\tilde P_N$ is a new Littlewood-Paley multiplier, and that for all $\omega>0$, $$ \| \tilde P_N [ \langle x- a_1 \rangle(\Lambda Q)_{c,\mathbf{a}}] \|_{L_{\mathbf{x}}^2} \lesssim_\omega N^{-\omega} \,, \qquad \| \tilde P_N [\langle x- a_1 \rangle (\nabla Q)_{c,\mathbf{a}}] \|_{L_{\mathbf{x}}^2} \lesssim_\omega N^{-\omega}. $$ \end{proof} \begin{lemma} \label{L:comp6} Suppose $|I|\leq 1$, \eqref{E:comp1} holds, \eqref{E:comp2b} holds for some $s\geq 1$, and thus, \eqref{E:comp7} holds. Then $$ \| \zeta \|_{L_I^\infty H_{\mathbf{x}}^{s+\frac12}} \lesssim 1, $$ i.e., \eqref{E:comp2b} holds for $s \mapsto s+\frac12$. \end{lemma} \begin{proof} This follows quickly from Lemma \ref{L:comp5} together with Lemma \ref{L:comp3} and the Ribaud-Vento well-posedness estimates. 
\end{proof} \section{Convergence of $w_n = \tilde \epsilon_n/B_n$ to $w$} \label{S:convergence} In this section, we prove Lemma \ref{L:convergence}. Recall the setup from \S\ref{S:introduction}. Associated to $\tilde u_n$ are the parameters $\tilde c_n(t)$, $\tilde{\mathbf{a}}_n(t)$, remainder $\tilde{\epsilon}_n(\mathbf{x},t)$, and $$ b_n(t) = \|\tilde{\epsilon}_n(\mathbf{x},t) \|_{L_{\mathbf{x}}^2} \,, \qquad B_n = \|b_n(t) \|_{L_t^\infty}. $$ The sequence has been shifted in time to arrange that $b_n(0) \geq \frac12 B_n$, and scaled and shifted in space to arrange that $$\tilde c_n(0)=1 \quad \mbox{and} \quad \tilde{\mathbf{a}}_n(0) = 0.$$ As in \S\ref{S:geom-decomp}, we denote \begin{equation} \label{E:tilzeta} \tilde{\eta}_n(\mathbf{x},t) = \tilde c_n^{\,-2} \,\tilde\epsilon_n ( \tilde c_n^{\,-1}(\mathbf{x}-\tilde{\mathbf{a}}_n),t) \,, \qquad \tilde{\zeta}_n= \tilde{\eta}_n/B_n. \end{equation} Note that \begin{equation} \label{E:con4} \| \tilde \zeta_n(0) \|_{L_{\mathbf{x}}^2} = \frac{\| \tilde \epsilon_n(0) \|_{L_{\mathbf{x}}^2}}{B_n} = \frac{b_n(0)}{B_n} \geq \frac12. \end{equation} By Lemma \ref{L:ep-decay}, \begin{equation} \label{E:con2} \| \tilde \zeta_n(0) \|_{L_{\mathbf{x}}^2(|\mathbf{x}|\geq r)} \leq e^{-\delta r}, \end{equation} and by Lemma \ref{L:ep-comparability}, for all $k \geq 0$, \begin{equation} \label{E:con3} \| \tilde \zeta_n \|_{L_t^\infty H_{\mathbf{x}}^k} \lesssim_k 1. \end{equation} By \eqref{E:con2}, \eqref{E:con3} and the Rellich-Kondrachov theorem, we can pass to a subsequence (still indexed by $n$) so that $$\tilde{\zeta}_n(0) \to \zeta_\infty(0)$$ strongly in $H_{\mathbf{x}}^k$, for every $k\geq 0$ (this is the \emph{definition} of $\zeta_\infty(0)$). 
By \eqref{E:con4}, we have $$\|\zeta_\infty(0)\|_{L_{\mathbf{x}}^2} \geq \frac12.$$ From \S \ref{S:geom-decomp}, \eqref{E:zeta-1} and \eqref{E:omegas}, we have \begin{equation} \label{E:con6} \begin{aligned}[t] \partial_t \tilde{\zeta}_n &= - \partial_x \Delta \tilde{\zeta}_n - 2 \partial_x ( Q_{\tilde c_n,\tilde{\mathbf{a}}_n} \tilde{\zeta}_n) + \tilde c_n^{-2} \langle \tilde{\zeta}_n, f_{\tilde c_n,\tilde{\mathbf{a}}_n}\rangle (\Lambda Q)_{\tilde c_n,\tilde{\mathbf{a}}_n} \\ & \qquad + \tilde c_n^{-2}\langle \tilde{\zeta}_n, \mathbf{g}_{\tilde c_n,\tilde{\mathbf{a}}_n} \rangle \cdot (\nabla Q)_{\tilde c_n,\tilde{\mathbf{a}}_n} - B_n \partial_x \tilde{\zeta}_n^2 + B_n \omega_{\tilde c_n} (\Lambda Q)_{\tilde c_n,\tilde{\mathbf{a}}_n} \\ & \qquad + B_n\boldsymbol{\omega}_{\tilde{\mathbf{a}}_n} \cdot (\nabla Q)_{{\tilde c_n},\tilde{\mathbf{a}}_n}. \end{aligned} \end{equation} \begin{lemma} \label{L:param-conv} On $[-T,T]$, we have $$ |\tilde c_n -1 | \lesssim TB_n \quad \mbox{and} \quad |\tilde{\mathbf{a}}_n - t \mathbf{i}| \lesssim \langle T\rangle^2 B_n. $$ Consequently, if $F(\mathbf{x})$ is smooth, for any $k\geq 0$ $$ \| F_{\tilde c_n, \tilde{\mathbf{a}}_n} - F_{1,t \mathbf{i}} \|_{L_T^\infty H_{\mathbf{x}}^k} \lesssim_k \langle T \rangle^2 B_n. $$ \end{lemma} \begin{proof} This follows from Lemma \ref{L:ODE-bounds}. 
\end{proof} By making the formal substitutions $$\tilde{c}_n \to 1 \,, \quad \tilde{\mathbf{a}}_n \to t \mathbf{i}\,, \quad \tilde{\zeta}_n \to \zeta_\infty \,, \quad F_{\tilde c_n, \tilde{\mathbf{a}}_n} \to F_{1,t\mathbf{i}}\,, \quad B_n \to 0,$$ where $F$ takes the place of $\Lambda Q$, $\nabla Q$, $Q$, $f$, or $\mathbf{g}$, we obtain that the expected limit $\zeta_\infty(t)$ of $\tilde{\zeta}_n(t)$ should solve \begin{equation} \label{E:con1} \partial_t \zeta_\infty = - \partial_x \Delta \zeta_\infty - 2 \partial_x ( Q_{1,t \,\mathbf{i}} \zeta_\infty) + \langle \zeta_\infty , f_{1,t \,\mathbf{i}} \rangle (\Lambda Q)_{1,t \mathbf{i}} + \langle \zeta_\infty, \mathbf{g}_{1,t\,\mathbf{i}} \rangle \cdot (\nabla Q)_{1,t \,\mathbf{i}} . \end{equation} Let $\zeta_\infty$ solve \eqref{E:con1} with initial condition $\zeta_\infty(0)$. [The well-posedness of \eqref{E:con1} can be proved in $C([-T,T];H_{\mathbf{x}}^k)$ using the Ribaud \& Vento estimates.] We prove that, for each $T>0$ and each $k\geq 0$, \begin{equation} \label{E:con8} \tilde{\zeta}_n \to \zeta_\infty \text{ in } C([-T,T]; H_{\mathbf{x}}^k) \end{equation} as follows. Let $$ \hat \zeta_n \stackrel{\rm{def}}{=} \tilde{\zeta}_n - \zeta_\infty \quad \mbox{and} \quad \hat F_n = F_{\tilde{c}_n, \tilde{\mathbf{a}}_n} - F_{1,t\mathbf{i}}, $$ where $F$ takes the place of $\Lambda Q$, $\nabla Q$, $Q$, $f$, and $\mathbf{g}$. 
In \eqref{E:con6}, for all terms without a $B_n$ coefficient, start by substituting $$F_{\tilde{c}_n, \tilde{\mathbf{a}}_n} = \hat F_n + F_{1,t\mathbf{i}}$$ to obtain \begin{equation} \label{E:con20} \partial_t \tilde \zeta_n = - \partial_x \Delta \tilde \zeta_n - 2 \partial_x ( Q_{1,t\mathbf{i}} \tilde \zeta_n) + \langle \tilde \zeta_n, f_{1,t\mathbf{i}} \rangle (\Lambda Q)_{1,t\mathbf{i}} + \langle \tilde \zeta_n, \mathbf{g}_{1,t\mathbf{i}} \rangle \cdot (\nabla Q)_{1, t \mathbf{i}} + G_n, \end{equation} where \begin{align*} G_n = &-2\partial_x ( \hat Q_n \tilde \zeta_n) \\ &\hspace{5mm} +( \tilde c_n^{-2}-1) \langle \tilde \zeta_n, f_{\tilde c_n, \tilde{\mathbf{a}}_n} \rangle (\Lambda Q)_{\tilde c_n, \tilde{\mathbf{a}}_n} + \langle \tilde \zeta_n, \hat f_n \rangle (\Lambda Q)_{\tilde c_n, \tilde{\mathbf{a}}_n} + \langle \tilde \zeta_n, f_{1,t \,\mathbf{i}} \rangle \widehat{\Lambda Q}_n \\ &\hspace{5mm} +( \tilde c_n^{-2}-1) \langle \tilde \zeta_n, \mathbf{g}_{\tilde c_n, \tilde{\mathbf{a}}_n} \rangle \cdot (\nabla Q)_{\tilde c_n, \tilde{\mathbf{a}}_n} + \langle \tilde \zeta_n, \hat{\mathbf{g}}_n \rangle \cdot (\nabla Q)_{\tilde c_n, \tilde{\mathbf{a}}_n} + \langle \tilde \zeta_n, \mathbf{g}_{1,t\,\mathbf{i}} \rangle \cdot \widehat{\nabla Q}_n \\ &\hspace{5mm} - B_n \partial_x \tilde{\zeta}_n^2 + B_n \omega_{\tilde c_n} (\Lambda Q)_{\tilde c_n,\tilde{\mathbf{a}}_n} + B_n\boldsymbol{\omega}_{\tilde{\mathbf{a}}_n} \cdot (\nabla Q)_{{\tilde c_n},\tilde{\mathbf{a}}_n}. \end{align*} Since each term involves either $\tilde c_n -1$, $\hat F_n$, or a $B_n$ coefficient, Lemma \ref{L:param-conv} and \eqref{E:con3} imply $$\|G_n \|_{H^k} \lesssim_k \langle T \rangle^2 B_n $$ for all $k\in \mathbb{N}$. 
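To illustrate the claimed bound on $G_n$, consider its first term (a sketch, using that $H_{\mathbf{x}}^{k+1}$ is an algebra for $k+1 > \frac32$): $$ \| \partial_x ( \hat Q_n \tilde \zeta_n) \|_{H_{\mathbf{x}}^k} \lesssim \| \hat Q_n \tilde \zeta_n \|_{H_{\mathbf{x}}^{k+1}} \lesssim_k \| \hat Q_n \|_{H_{\mathbf{x}}^{k+1}} \, \| \tilde \zeta_n \|_{H_{\mathbf{x}}^{k+1}} \lesssim_k \langle T \rangle^2 B_n, $$ where the last step uses Lemma \ref{L:param-conv} for $\hat Q_n$ and \eqref{E:con3} for $\tilde \zeta_n$; the remaining terms are bounded similarly.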
Taking the difference between \eqref{E:con20} and \eqref{E:con1}, we get \begin{equation} \label{E:con21} \partial_t \hat \zeta_n = - \partial_x \Delta \hat \zeta_n - 2 \partial_x ( Q_{1,t\mathbf{i}} \hat \zeta_n) + \langle \hat \zeta_n, f_{1,t\mathbf{i}} \rangle (\Lambda Q)_{1,t\mathbf{i}} + \langle \hat \zeta_n, \mathbf{g}_{1,t\mathbf{i}} \rangle \cdot (\nabla Q)_{1, t \mathbf{i}} + G_n. \end{equation} We then compute $$ \partial_t \| \nabla^k \hat \zeta_n \|_{L^2_{\mathbf{x}}}^2, $$ simplify with integration by parts, and apply Gronwall's inequality to obtain $$ \| \nabla^k \hat \zeta_n \|_{L_{[-T,T]}^\infty L^2_{\mathbf{x}}}^2 \lesssim e^{CT} (\| \nabla^k \hat \zeta_n(0) \|_{L^2_{\mathbf{x}}}^2+ B_n). $$ Consequently, \eqref{E:con8} holds. By \eqref{E:con3}, it follows that \begin{equation} \label{E:con10} \| \zeta_\infty \|_{L_t^\infty H_{\mathbf{x}}^k} \lesssim_k 1. \end{equation} Note that $$ w_n(\mathbf{x},t) = \frac{\tilde \epsilon_n(\mathbf{x},t)}{B_n} = \tilde c_n^2 \tilde \zeta_n( \tilde c_n\mathbf{x}+ \tilde{\mathbf{a}}_n, t). $$ Let $$w(\mathbf{x},t) \stackrel{\rm{def}}{=} \zeta_\infty(\mathbf{x}+t \mathbf{i},t).$$ Then \eqref{E:con8} implies \begin{equation} \label{E:con9} w_n \to w \text{ in } C([-T,T]; H_{\mathbf{x}}^k) \end{equation} and \eqref{E:con10} implies \begin{equation} \label{E:con11} \|w \|_{L_t^\infty H_{\mathbf{x}}^k} \lesssim_k 1. \end{equation} By Lemma \ref{L:ep-decay}, we have $$ \| w_n \|_{L_{\mathbf{x}}^2(|\mathbf{x}| \geq r)} \lesssim e^{-\delta r}. $$ By \eqref{E:con9}, we obtain $$ \| w \|_{L_{\mathbf{x}}^2(|\mathbf{x}| \geq r)} \lesssim e^{-\delta r}. $$ The equation \eqref{E:con1} converts to the equation for $w$ in the statement of Lemma \ref{L:convergence}. Moreover, since $\tilde \epsilon_n$ satisfies the orthogonality conditions for each $n$, $w_n$ also satisfies them, and hence, the limit $w$ does as well. This completes the proof of Lemma \ref{L:convergence}. 
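For concreteness, the $k=0$ case of the Gronwall step above can be sketched as follows (higher $k$ is similar after distributing derivatives): pairing \eqref{E:con21} with $\hat \zeta_n$, the term $-\partial_x \Delta \hat \zeta_n$ drops out by skew-adjointness, and integrating by parts in the transport term gives $$ \frac12 \partial_t \| \hat \zeta_n \|_{L_{\mathbf{x}}^2}^2 = - \int_{\mathbf{x}} (\partial_x Q_{1,t\mathbf{i}}) \, \hat \zeta_n^{\,2} \, d\mathbf{x} + \langle \hat \zeta_n, f_{1,t\mathbf{i}} \rangle \langle \hat \zeta_n, (\Lambda Q)_{1,t\mathbf{i}} \rangle + \langle \hat \zeta_n, \mathbf{g}_{1,t\mathbf{i}} \rangle \cdot \langle \hat \zeta_n, (\nabla Q)_{1,t\mathbf{i}} \rangle + \langle \hat \zeta_n, G_n \rangle, $$ so that $$ \partial_t \| \hat \zeta_n \|_{L_{\mathbf{x}}^2}^2 \lesssim \| \hat \zeta_n \|_{L_{\mathbf{x}}^2}^2 + \| G_n \|_{L_{\mathbf{x}}^2} \| \hat \zeta_n \|_{L_{\mathbf{x}}^2}, $$ and since $\| G_n \|_{L_{\mathbf{x}}^2} \lesssim \langle T \rangle^2 B_n$ while $\| \hat \zeta_n \|_{L_{\mathbf{x}}^2} \lesssim 1$, Gronwall's inequality yields the stated bound on $[-T,T]$.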
\section{Proof of the linear Liouville lemma assuming the virial estimate} \label{S:linear-Liouville} In this section, we prove Lemma \ref{L:linear-Liouville}, the linear Liouville theorem. We first note that \begin{equation} \label{E:w1} \partial_t \left[ \langle \mathcal{L}w,w \rangle + \frac{2}{\langle \Lambda Q, Q \rangle} \langle w, Q \rangle^2 \right] = 0, \end{equation} which follows from a straightforward computation substituting the equation \eqref{E:w-eq} for $w$ and applying the orthogonality conditions \eqref{E:extra-orth}. This means that the expression $\displaystyle \langle \mathcal{L}w, w \rangle + \frac{2}{\langle \Lambda Q, Q \rangle} \langle w, Q \rangle^2$ is constant in time. We observe that from the definition of $\mathcal{L}$ and integration by parts \begin{equation} \label{E:w2} \int_{t=-\infty}^{+\infty} \left( \langle \mathcal{L}w,w \rangle + \frac{2}{\langle \Lambda Q, Q \rangle} \langle w, Q \rangle^2 \right) \, dt \lesssim \| w\|_{L^2_t H^1_{\bf x}}^2 \,. \end{equation} Lemma \ref{L:v-virial} (proved in the next section) shows that for the dual problem $v = \mathcal{L}w$ we have the estimate $$ \| v \|_{L_t^2 H_{\bf x}^1} \lesssim \| \langle x \rangle^{1/2} v \|_{L_t^\infty L_{\bf x}^2}, $$ which by Lemma \ref{L:conversion} implies the following bound for $w$: $$ \| w \|_{L_t^2 H_{\bf x}^1} \leq \|w\|_{L_t^2 H_{\bf x}^3} \lesssim \| \langle x \rangle^{1/2} w \|_{L_t^\infty H_{\bf x}^2}, $$ which is finite by \eqref{E:w-dec}. Thus, the right-hand side of \eqref{E:w2} is finite, and hence, the integrand on the left-hand side of \eqref{E:w2} given by $\displaystyle \langle \mathcal{L}w, w \rangle + \frac{2}{\langle \Lambda Q, Q \rangle} \langle w, Q \rangle^2$, which is constant in time, must be zero. Since $\langle \Lambda Q, Q \rangle =\frac12 \|Q\|^2_{L^2} > 0$ (subcritical case), the quantity is positive definite, and we conclude that both $$ \langle \mathcal{L}w, w \rangle = 0 \quad \mbox{and} \quad \langle w, Q \rangle = 0. 
$$ On functions satisfying the orthogonality conditions, $\mathcal{L}$ is strictly positive definite, which together with $\langle \mathcal{L}w, w \rangle = 0$ implies that $w\equiv 0$. \section{Proof of the virial estimate} \label{S:virial} In this section we prove Lemma \ref{L:virial}, which is just a combination of Lemma \ref{L:conversion} and Lemma \ref{L:v-virial} below. Lemma \ref{L:conversion} reduces the inequality to a statement about a dual function $v=\mathcal{L}w$, and Lemma \ref{L:v-virial} achieves the inequality for the dual function $v$ by invoking the results from the numerical verification in \S\ref{S:numerics} and by applying an ``angle lemma'' (Lemma \ref{L:angle}). We will start with the conversion lemma: \begin{lemma}[conversion]\label{L:conversion} Suppose that $w$ satisfies $\langle w, \nabla Q \rangle =0$ and $v=\mathcal{L}w$. If $v$ satisfies the global-in-time estimate $$ \|v \|_{L_t^2 H_\mathbf{x}^1} \lesssim \| \langle x \rangle^{1/2} v\|_{L_t^\infty L_\mathbf{x}^2}, $$ then it follows that $w$ satisfies the global-in-time estimate $$ \| w\|_{L_t^2 H_{\mathbf{x}}^3} \lesssim \| \langle x \rangle^{1/2} w \|_{L_t^\infty H_\mathbf{x}^2}. $$ \end{lemma} \begin{proof} Since $\mathcal{L}$ is a self-adjoint Schr\"odinger operator with smooth rapidly decaying potential, its spectrum consists of $[1,+\infty)$ plus a finite number of eigenvalues. It follows that the spectrum of $\mathcal{L}^2$ is $[1,+\infty)$ plus the squares of the eigenvalues of $\mathcal{L}$. Since $\ker \mathcal{L} = \operatorname{span} \{ \nabla Q \}$, $\ker \mathcal{L}^2 = \operatorname{span} \{ \nabla Q \}$, and there is a positive gap to the next eigenvalue of $\mathcal{L}^2$. Consequently, $\mathcal{L}^2$ is strictly positive on the orthocomplement of $\nabla Q$: there exists $\delta>0$ such that \begin{equation} \label{E:conv1} \delta \|w \|_{L^2}^2 \leq \langle \mathcal{L}^2 w, w \rangle = \| \mathcal{L}w \|_{L^2}^2 = \| v \|_{L^2}^2. 
\end{equation} It is straightforward that, for some $\kappa>0$, \begin{equation} \label{E:conv2} \| w \|_{H^3}^2 \leq \| \mathcal{L}w \|_{H^1}^2 + \kappa \| w\|_{L^2}^2 = \| v \|_{H^1}^2 + \kappa \| w\|_{L^2}^2. \end{equation} Combining \eqref{E:conv1} and \eqref{E:conv2}, we obtain $$ \| w \|_{H^3} \lesssim \| v \|_{H^1}. $$ It is also straightforward that $$ \|\langle x \rangle^{1/2} v \|_{L^2} = \| \langle x \rangle^{1/2} \mathcal{L} w \|_{L^2} \lesssim \| \langle x \rangle^{1/2} w \|_{H^2}. $$ Combining the last two displays with the assumed estimate for $v$ yields the claimed bound for $w$. \end{proof} We provide here a statement of the elementary angle lemma; for the proof see \cite{FHRY}. \begin{lemma}[angle lemma] \label{L:angle} Suppose that $A$ is a self-adjoint operator on a Hilbert space $H$ with eigenvalue $\lambda_1$ and corresponding eigenspace spanned by a function $e_1$ with $\|e_1\|_{L^2}=1$. Let $P_1f = \langle f,e_1\rangle e_1$ be the corresponding orthogonal projection. Assume that $(I-P_1)A$ has spectrum bounded below by $\lambda_\perp$, with $\lambda_\perp>\lambda_1$. Suppose that $f$ is some other function such that $\|f\|_{L^2}=1$ and $0 \leq \beta \leq \pi$ is defined by $\cos \beta = \langle f, e_1\rangle$. Then if $v$ satisfies $\langle v, f \rangle =0$, we have $$ \langle Av,v \rangle \geq (\lambda_\perp - (\lambda_\perp - \lambda_1)\sin^2\beta)\|v\|_{H}^2. $$ \end{lemma} We are now ready to prove the virial estimate for $v$. \begin{lemma}[linearized virial estimate for $v$] \label{L:v-virial} Suppose that $v \in C^0(\mathbb{R}_t; H_{\bf{x}}^1) \cap C^1(\mathbb{R}_t; H_{\bf{x}}^{-2})$ solves $$ \partial_t v = \mathcal{L}\partial_x v -2 \alpha Q $$ for some time-dependent coefficient $\alpha$, and moreover, $v$ satisfies the orthogonality conditions $$ \langle v, Q \rangle =0\quad \mbox{and} \quad \langle v, \nabla Q \rangle =0.
$$ Then \begin{equation} \label{E:v-virial-1} \| v\|_{L_t^2H_{\bf x}^1} \lesssim \| \langle x \rangle^{1/2} v \|_{L_t^\infty L_{\bf x}^2}, \end{equation} where the time integration is carried out over all of $-\infty < t< \infty$. \end{lemma} \begin{proof} Using the orthogonality condition $\langle v,Q \rangle =0$, we compute $$ 0= \partial_t \langle v, Q \rangle = \langle \mathcal{L}\partial_x v, Q \rangle -2 \alpha \langle Q, Q \rangle. $$ This yields $$ \alpha = \frac{\langle v, Q \, Q_x \rangle}{\langle Q, Q \rangle} $$ so that \begin{equation}\label{E:P} \partial_t v = \mathcal{L} \partial_x v - 2 \frac{\langle v, Q \, Q_x \rangle}{\langle Q, Q\rangle}\, Q. \end{equation} Now compute \begin{equation} \label{E:v-virial-2} -\frac12 \partial_t \int x v^2 = \langle Bv,v\rangle + \langle P v,v\rangle, \end{equation} where $$ B = \frac12 - \frac32\partial_x^2 - \frac12\partial_y^2 - \frac12\partial_z^2 - (x \,Q)_x $$ and from \eqref{E:P} $P$ can be taken as the rank $2$ self-adjoint operator $$ Pv = \frac{ Q \, Q_x}{\langle Q, Q\rangle} \langle v, xQ\rangle + \frac{xQ}{\langle Q, Q\rangle} \langle v, Q \, Q_x\rangle. $$ The continuous spectrum of $A=B+P$ is $[\frac12,+\infty)$. Via a numerical solver we find the eigenvalues and corresponding eigenfunctions below $\frac12$ (the details are given in \S \ref{S:numerics} below). We obtain two simple eigenvalues below $\frac12$, namely, $$ \lambda_1=-0.0294 \text{ and } \lambda_2=0.4688.
$$ Denoting the corresponding normalized eigenfunctions by $f_1$ and $f_2$, and $g_1 = \frac{Q}{\|Q\|}$ and $g_2 = \frac{Q_x}{\|Q_x\|}$, we find $$\langle f_1, g_1 \rangle =0.9946 \,, \qquad \langle f_1,g_2 \rangle = 0,$$ $$\langle f_2, g_1 \rangle = 0 \,, \qquad \langle f_2, g_2 \rangle = -0.7922.$$ Following the $L^2$ decomposition as in \cite[Lemma 14.2]{FHRY}, we consider the closed subspace $H_o$ of $L^2(\mathbb{R}^3)$ given by functions that are odd in $x$ (no constraint in $y$ or $z$), and the closed subspace $H_e$ of $L^2(\mathbb{R}^3)$ given by functions that are even in $x$ (no constraint in $y$ or $z$). Note that $L^2(\mathbb{R}^3) = H_o \oplus H_e$ is an orthogonal decomposition. Observe that $f_1$ and $g_2$ belong to $H_o$, while $f_2$ and $g_1$ belong to $H_e$. Thus, $A\big|_{H_o}$ has spectrum $\{ \lambda_1 \} \cup [\frac12, +\infty)$ with $f_1$ being the eigenfunction corresponding to $\lambda_1$. Applying the angle lemma (Lemma \ref{L:angle} or \cite[Lemma 14.3]{FHRY}) with $H=H_o$ and $\lambda_\perp = \frac12$, and noting that $$ (\lambda_\perp - \lambda_1) \sin^2\beta = (0.5 + 0.0294)\times(1-0.9946^2) = 0.0057, $$ we find that $$ \langle AP_o v, P_o v \rangle \geq (0.5000- 0.0057) \langle P_o v, P_o v \rangle = 0.4943 \, \langle P_o v, P_o v \rangle. $$ Also, $A\big|_{H_e}$ has spectrum $\{ \lambda_2 \} \cup [\frac12, +\infty)$ with the eigenfunction $f_2$ corresponding to $\lambda_2$. Applying the angle lemma with $H=H_e$ and $\lambda_\perp = \frac12$, we get $$ (\lambda_\perp - \lambda_2) \sin^2\beta = (0.5000-0.4688)\times(1-0.7922^2)=0.0116, $$ and $$ \langle AP_e v, P_e v \rangle \geq (0.5000-0.0116)\langle P_e v, P_e v\rangle=0.4884\,\langle P_e v,P_e v\rangle. $$ Thus $A=B+P$ is positive (assuming $v$ satisfies the two orthogonality conditions). Integrating \eqref{E:v-virial-2} in time and using elliptic regularity, we obtain \eqref{E:v-virial-1}.
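As a quick numerical sanity check (separate from the proof), the two coercivity constants above can be reproduced directly from the quoted eigenvalues and inner products; the snippet below is only an arithmetic verification of those values.

```python
# Arithmetic check of the angle-lemma bounds, using the eigenvalues
# lambda_1 = -0.0294, lambda_2 = 0.4688 of A = B + P and the inner
# products <f1, g1> = 0.9946 and <f2, g2> = -0.7922 quoted above.
lam_perp = 0.5                       # bottom of the continuous spectrum of A

# Odd subspace: eigenvalue lambda_1, cos(beta) = 0.9946
lam1, cos_b1 = -0.0294, 0.9946
corr_o = (lam_perp - lam1) * (1 - cos_b1**2)   # (lam_perp - lam_1) sin^2(beta)
bound_o = lam_perp - corr_o

# Even subspace: eigenvalue lambda_2, cos(beta) = -0.7922
lam2, cos_b2 = 0.4688, -0.7922
corr_e = (lam_perp - lam2) * (1 - cos_b2**2)
bound_e = lam_perp - corr_e

print(round(corr_o, 4), round(bound_o, 4))   # 0.0057 0.4943
print(round(corr_e, 4), round(bound_e, 4))   # 0.0116 0.4884
```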
\end{proof} \section{Verification of the spectral property} \label{S:numerics} \subsection{Set up} Here, we discuss how we find the eigenvalues and eigenfunctions of the operator $2(B+P)$ in 3d (for computational convenience, we doubled the operator; thus, the continuous spectrum will start from 1): \begin{align} 2(B+P) \stackrel{\mathrm{def}}{=}-3\partial_{xx}-\partial_{yy} -\partial_{zz} + 1 - 2(x\, Q)_x + 2P, \end{align} where $P$ is defined as \begin{align} Pv= \frac{Q \, Q_x}{\|Q\|_2^2}\langle v, xQ \rangle + \frac{xQ}{\|Q\|_2^2} \langle v, Q\, Q_x \rangle. \end{align} We follow our approach from \cite[Section 16]{FHRY} and investigate the spectrum of the operator $2(B+P)$. As in the 2d case we use the collocation method; however, due to the computational limitations in 3d, we can only afford a modest number of collocation points along each axis ($x$, $y$ and $z$). In these computations $N=36$ points in each dimension is the maximum number that we could reach, though we show that even with that many points the results are robust and reliable. To arrange the Chebyshev collocation points to be more concentrated at the center, we need a specific mapping; we use an approach similar to the 2d case: \begin{align}\label{D: x mapping} x(\xi)=L\frac{e^{a\xi}-e^{-a\xi}}{e^{a}-e^{-a}}, \end{align} with $\xi \in [-1,1]$, where $a$ is a parameter that we can choose (in our computations we take $a=4$ or $a=5$). By the chain rule, the partial derivatives $\partial_x$, $\partial_{xx}$ are \begin{align}\label{E:D1} \partial_x=\frac{1}{x_{\xi}}\partial_{\xi}, \end{align} and \begin{align}\label{E:D2} \partial_{xx}=\frac{1}{x_{\xi}^2}\,\partial_{\xi\xi}+\left(\partial_{\xi}\Big(\frac{1}{x_{\xi}}\Big)\cdot\frac{1}{x_{\xi}}\right)\partial_{\xi}. \end{align} We apply the same mapping and calculation to the $y$-direction as well as the $z$-direction. Now, we need to discretize the operator $2(B+P)$ with the mapped-Chebyshev collocation points.
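The mapped-Chebyshev differentiation step above can be sketched in a few lines of Python. The Chebyshev matrix follows the standard construction in \cite{T2001}; the values of $L$, $a$ and $N$ below, and the Gaussian test function, are illustrative stand-ins rather than the actual computation.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on xi_j = cos(pi j / N) (Trefethen)."""
    xi = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(xi, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, xi

L_dom, a, N = 10.0, 4.0, 64                       # illustrative parameters
D, xi = cheb(N)
x = L_dom * np.sinh(a * xi) / np.sinh(a)          # the mapping x(xi)
x_xi = L_dom * a * np.cosh(a * xi) / np.sinh(a)   # dx/dxi

Dx = np.diag(1.0 / x_xi) @ D                      # first derivative in x
# second derivative: (1/x_xi^2) d_xixi + (d_xi(1/x_xi) * 1/x_xi) d_xi
Dxx = np.diag(1.0 / x_xi**2) @ (D @ D) \
    + np.diag((D @ (1.0 / x_xi)) / x_xi) @ D

# Sanity check on a smooth, rapidly decaying test function
f = np.exp(-x**2)
err1 = np.max(np.abs(Dx @ f - (-2 * x * f)))
err2 = np.max(np.abs(Dxx @ f - (4 * x**2 - 2) * f))
print(err1 < 1e-6, err2 < 1e-4)
```

The same one-dimensional matrices are then tensored into the $y$- and $z$-directions.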
The discretization of the operator $B$, as well as the imposition of the homogeneous Dirichlet boundary conditions, is quite standard; for example, we follow the same approach as in \cite[Chapters 6, 9, 12]{T2001}. The procedure follows the same steps as in the 2D case \cite{FHRY} (where we described a general formula for discretizing the projection operator); for completeness, we outline the process here. First, we consider the 1D case. The extension to the cases $d\geq 2$ is then done by standard numerical integration techniques for multi-dimensions, e.g., see \cite[Chapters 6, 12]{T2001}. We denote by $f_i$ the discretized form of the function $f(x)$ at the point $x_i$, and we write the vector $\vec{f}$ for $\vec{f}=(f_0,f_1,\cdots,f_N)^T$. We denote by ``$.*$" the pointwise multiplication of vectors or matrices of the same dimension, i.e., $\vec{a}.*\vec{b}=(a_0b_0,\cdots,a_Nb_N)^{\mathrm{T}}$; the notation ``$*$" stands for the regular vector or matrix multiplication. Let $w(x)$ be the weights of a given quadrature. For example, if we consider the composite trapezoid rule with step-size $h$, we have \begin{align*} \vec{w}=(w_0,w_1,\cdots ,w_N)^T=\frac{h}{2}(1,2,\cdots,2,1)^T, \end{align*} since the composite trapezoid rule can be written as \begin{align*} \int_a^b f(x) dx \approx \sum_{i=0}^{N} f_i w_i=\vec{f}^{\,\, T} *\vec{w}. \end{align*} To evaluate a Chebyshev Gauss-Lobatto quadrature, which we need for this work, we write $$ \int_{-1}^1 f(x) dx \approx \sum_{i=0}^N w_i f(x_i)=\vec{f}^{\,\, T} * \vec{w}, $$ where $w_i=\frac{\pi}{N}\sqrt{1-x_i^2}$ for $i=1,2,\cdots, N-1$, and $w_0=\frac{\pi}{2N}\sqrt{1-x_0^2}$, $w_N=\frac{\pi}{2N}\sqrt{1-x_N^2}$, are the quadrature weights with the Chebyshev weight function absorbed.
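The quadrature-based discretization of a rank-one projection described above can be sketched directly; the functions $f$, $g$, $u$ below are illustrative placeholders rather than the actual $Q$-based quantities, and a uniform trapezoid grid stands in for the Chebyshev one.

```python
import numpy as np

# Discretize  P u = <u, f> g  as the rank-one matrix  P = g * (w .* f)^T,
# using composite-trapezoid weights w = (h/2)(1, 2, ..., 2, 1) on a
# uniform grid.  f, g, u are illustrative stand-ins.
N = 400
x, h = np.linspace(-10.0, 10.0, N + 1, retstep=True)
w = np.full(N + 1, h)
w[0] = w[-1] = h / 2                       # trapezoid weights

f = np.exp(-x**2)                          # placeholder f
g = x * np.exp(-x**2 / 2)                  # placeholder g
u = np.cos(x) * np.exp(-x**2 / 4)          # test vector

P = np.outer(g, w * f)                     # P = g * (w .* f)^T
inner = float(np.sum(w * f * u))           # quadrature approximation of <u, f>
print(np.allclose(P @ u, inner * g))       # True: matrix form matches <u, f> g
```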
We have \begin{align*} Pu=&\langle u,f \rangle g= (\sum_{i=0}^N w_i \,f_i\, u_i) \, \vec{g} = \left[\begin{matrix} g_0\\ g_1\\ \vdots\\ g_N \end{matrix}\right] \big(\sum_{i=0}^N w_i f_i u_i \big) = \left[\begin{matrix} g_0\\ g_1\\ \vdots\\ g_N \end{matrix}\right] (\vec{w}^{\, T}.*\vec{f}^{\,\, T}) * \vec{u} := \mathbf{P} \vec{u}, \end{align*} with the matrix \begin{align}\label{E: P term} \mathbf{P}=\vec{g}*(\vec{w}^{\,\, T}.*\vec{f}^{\,\, T}) \end{align} being the discretized approximation of the projection operator $P$. Denote by $\mathbf{D_x^{(2)}}$, $\mathbf{D_y^{(2)}}$, $\mathbf{D_z^{(2)}}$ the second-order mapped-Chebyshev differentiation matrices coming from the equation \eqref{E:D2} (see also \cite{T2001}), write $\vec{Q}_x=\mathbf{D_x^{(1)}}\vec{Q}$ for the $x$-derivative of $Q$, and set $\mathbf{M}=2(\mathbf{B}+\mathbf{P})$. Then we obtain \begin{align}\label{E: M-matrix} \mathbf{M}=-3\mathbf{{D}_x^{(2)}}-\mathbf{{D}_y^{(2)}}-\mathbf{{D}_z^{(2)}}+diag(\vec{1}-3*\vec{Q^2}-6*\vec{x}.*\vec{Q}.*\vec{Q}_x)+\mathbf{P}, \end{align} where $\mathbf{P}$ is the matrix form of the projection term discretized from \eqref{E: P term}, and $\vec{1}=(1,\cdots,1)^T$ is the vector with the same size as the other variables (such as $\vec{Q}$). Before we proceed with the spectral properties, we explain how we obtain the ground state $Q$. \subsection{Calculation of the ground state $Q$} While we can calculate the ground state directly in the 3D space, the computational cost is very high. Exploiting the radial symmetry, we only need to compute the ground state in the 1D radial case and interpolate it into the 3D space. The 1D radial equation for the ground state is as follows \begin{align}\label{E: GS_radial} -R_{rr}-\frac{2}{r}R_r+R-|R|^{p-1}R=0, \quad R_r(0)=0, \quad R(2 L)=0. \end{align} We choose the computational domain to be $r \in [0,2L)$ since $r=\sqrt{x^2+y^2+z^2}$, where each $x,y,z \in [-L,L]$.
Therefore, the computational domain for $r$ has to be greater than or equal to $\sqrt{3}L$ to avoid extrapolation in the upcoming interpolation process. The equation \eqref{E: GS_radial} can be solved by using the renormalization method \cite[Chapter 24]{F2016}. We then use the shape-preserving cubic spline to interpolate the solution into the full three-dimensional data. Let $\vec{r}=(r_0,r_1,\cdots, r_{N_r})^T$ be the collocation points we used to solve the equation \eqref{E: GS_radial}, and let $\vec{R}$ be the discretized solution of \eqref{E: GS_radial} on $\vec{r}$. Let $\vec{x}=(x_0,x_1,\cdots,x_N)^T $ with $x_0=-L$ and $x_N=L$ be the mapped Chebyshev collocation points we discussed previously. We generate the 3D tensor data by using the matlab command \texttt{meshgrid} $$ [\mathbf{X},\mathbf{Y},\mathbf{Z}]=\operatorname{meshgrid}(\vec{x}). $$ Then, the tensor data for $\mathbf{Q}$ (the 3D ground state $Q$) is obtained via the shape-preserving cubic spline interpolation with the matlab function \texttt{interp1} by \begin{align*} \mathbf{Q}=\operatorname{interp1}(\vec{r},\vec{R},\sqrt{\mathbf{X}^2 +\mathbf{Y}^2 +\mathbf{Z}^2},'pchip'). \end{align*} \subsection{Spectrum} Let $N$ be the number of collocation points assigned to each dimension (this results in an $N^3 \times N^3$ matrix $\mathbf{M}$). Let $M[R]$ be the mass of $Q$ computed from the radial solution $R$ by the composite trapezoid rule, and $M[Q]$ be the mass of $Q$ computed in full 3D by evaluating the Chebyshev-Gauss quadrature. We track a possible error generated by the interpolation via $\mathcal{E}=| M[Q]-M[R] |$. The matlab command ``\texttt{eigs}" produces the eigenvalues, and we consider only those that are less than $1$.
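The interpolation step and the mass diagnostic $\mathcal{E}$ described above can be sketched as follows. A sech profile stands in for the true radial ground state, a uniform grid replaces the mapped Chebyshev grid, and numpy's linear interpolation replaces the shape-preserving \texttt{'pchip'} spline; all of these are assumptions for illustration only.

```python
import numpy as np

# Radial profile R(r) on [0, 2L)  (placeholder for the computed ground state)
L = 8.0
r, dr = np.linspace(0.0, 2 * L, 4001, retstep=True)
R = 1.0 / np.cosh(r)

# Radial mass  M[R] = 4 pi int r^2 R^2 dr  by the composite trapezoid rule
y = r**2 * R**2
M_R = 4 * np.pi * (y.sum() - 0.5 * (y[0] + y[-1])) * dr

# Interpolate onto a 3D tensor grid; max radius sqrt(3) L < 2L, so no
# extrapolation is required
x, h = np.linspace(-L, L, 81, retstep=True)
X, Y, Z = np.meshgrid(x, x, x)
Q = np.interp(np.sqrt(X**2 + Y**2 + Z**2), r, R)

# 3D mass on the grid and the interpolation diagnostic E = |M[Q] - M[R]|
M_Q = np.sum(Q**2) * h**3
err = abs(M_Q - M_R)
print(M_R, M_Q, err)
```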
Taking a different number of collocation points $N$ for each direction ($x$, $y$ and $z$), and normalizing the $L^2$ norm of the corresponding eigenfunctions to $1$, we obtain the following: $\bullet$ \underline{\bf $N=16$}: $\mathcal{E}=0.17778$. The eigenvalues are \begin{align} \lambda_{1,2}= -0.04938, \qquad 0.93316. \end{align} The inner products of the eigenfunctions with the normalized $Q$ and $Q_x$ (the cosines of the corresponding angles) are \begin{align} \left[ \begin{matrix} \langle Q, \phi_1 \rangle & \langle Q, \phi_2 \rangle \\ \langle Q_x, \phi_1 \rangle & \langle Q_x, \phi_2 \rangle \\ \end{matrix} \right] = \left[ \begin{matrix} -0.9952 & -0.0000 \\ 0.0000 & -0.7940 \\ \end{matrix} \right]. \end{align} $\bullet$ \underline{\bf $N=21$}: $\mathcal{E}=0.0024339$. The angles with the eigenfunctions are obtained from the eigenvalues \begin{align} \lambda_{1,2}= -0.052992, \qquad 0.9382. \end{align} The angles with the eigenfunctions are \begin{align} \left[ \begin{matrix} \langle Q, \phi_1 \rangle & \langle Q, \phi_2 \rangle \\ \langle Q_x, \phi_1 \rangle & \langle Q_x, \phi_2 \rangle \\ \end{matrix} \right] = \left[ \begin{matrix} 0.9947 & -0.0000 \\ 0.0000 & -0.7918 \\ \end{matrix} \right]. \end{align} $\bullet$ \underline{\bf $N=32$}: $\mathcal{E}=6.9879\times 10^{-6}$. The eigenvalues are \begin{align} \lambda_{1,2}= -0.058808, \qquad 0.93757. \end{align} The angles with the eigenfunctions are \begin{align} \left[ \begin{matrix} \langle Q, \phi_1 \rangle & \langle Q, \phi_2 \rangle \\ \langle Q_x, \phi_1 \rangle & \langle Q_x, \phi_2 \rangle \\ \end{matrix} \right] = \left[ \begin{matrix} 0.9946 & -0.0000 \\ 0.0000 & -0.7922 \\ \end{matrix} \right]. \end{align} $\bullet$ \underline{\bf $N=36$}: $\mathcal{E}=1.6117\times 10^{-6}$. The eigenvalues are \begin{align} \lambda_{1,2}= -0.058812, \qquad 0.93757.
\end{align} The angles with the eigenfunctions are obtained as \begin{align} \left[ \begin{matrix} \langle Q, \phi_1 \rangle & \langle Q, \phi_2 \rangle \\ \langle Q_x, \phi_1 \rangle & \langle Q_x, \phi_2 \rangle \\ \end{matrix} \right] = \left[ \begin{matrix} 0.9946 & -0.0000 \\ 0.0000 & -0.7922 \\ \end{matrix} \right]. \end{align} Finally, we conclude that the eigenfunction $\phi_1$, corresponding to the negative eigenvalue $\lambda_1$, is (almost) parallel to $Q$ and orthogonal to $Q_x$, while the second eigenfunction $\phi_2$ is orthogonal to $Q$. We also note that while we do not use a large number of points, our numerical findings converge with increasing $N$ (compare the results for $N=32$ and $N=36$).
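A small consistency check ties the tabulated numerics back to the virial section: the solver works with the doubled operator $2(B+P)$, so the tabulated eigenvalues must be halved before comparison with those quoted for $A=B+P$ in the proof of the virial estimate.

```python
# Tabulated eigenvalues of 2(B+P) for the two finest resolutions
lam_2BP = {32: (-0.058808, 0.93757), 36: (-0.058812, 0.93757)}

drift = abs(lam_2BP[32][0] - lam_2BP[36][0])   # N = 32 vs N = 36 agreement
lam1, lam2 = (v / 2 for v in lam_2BP[36])      # halve to recover A = B + P

print(drift)                             # ~4e-06: resolutions agree closely
print(round(lam1, 4), round(lam2, 4))    # -0.0294 0.4688, as used earlier
```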
\section{Introduction} Supermassive black holes (SMBHs; $M \gtrsim 10^5 \, M_\odot$) are inferred to reside in most galactic nuclei \citep{kormendy1995, magorrian1998,laor2000,ferra2006,shan2009}. When efficiently supplied with gas, these objects can produce some of the most luminous sources in the Universe \citep{sil2008,hue2010,falocco2012,koss2012}. Recent panchromatic surveys have shown that close dual AGN are comparatively more luminous at high energies than their isolated counterparts, suggesting that the merger process is intricately tied to the feeding history of the waltzing SMBHs \citep{gul2009a,gul2009b,koss2012,liu2013}. Gas supply to central SMBHs during galaxy mergers directly affects the growth and luminosity of these objects. Therefore, understanding how mass is supplied to SMBHs yields predictions of their number density and luminosity distribution \citep{silk1998,komossa2003}. In cosmological simulations of merging galaxies, some form of the classical Bondi-Hoyle-Lyttleton (BHL) accretion prescription is usually implemented to estimate SMBH feeding rates \citep{springel2005,kuro2009,fabjan2010,li2012,jeon2012,choi2012,choi2013,hirschmann2013,newton2013,blecha2013,angles2013,barai2014,gabor2014}. The use of this recipe assumes that the properties of gas at 50-100~pc (generally determined by the resolution scale length) accurately determine the mass accretion rate onto the SMBH \citep{johansson2009}. A significant fraction of SMBHs are expected to be embedded in nuclear star clusters \citep[NSCs;][]{boker2002,boker2004,walcher2005,graham2009,boker2010,seth2010}. Because these NSCs are typically more massive than the central SMBH, they can significantly alter the gas flow at scales which are commonly unresolved in cosmological simulations \citep[$\approx 1-5 \, {\rm pc}$;][]{naiman2011}. As a result, NSCs could provide an efficient mechanism for funneling gas towards the central black hole.
Accurately determining the mass accretion history of SMBHs has important consequences not only for their growth history, but also for the evolution of the host galaxy \citep{king2003,croton2006,hopkins2007,sij2007,dimatteo2008,primack2008,booth2009,debuhr2011,debuhr2012}. For this reason, pinning down the dominant mechanism by which gas at large scales is funneled into SMBHs is essential for understanding their role in mediating galaxy evolution. In this paper, we make use of simulations to investigate how the accretion rate of SMBHs might be enhanced during galaxy mergers when they are embedded in massive and compact NSCs. Hydrodynamical models with multiple levels of refinement are used to capture both the large scale gas flow around the SMBH and NSC complex and the small scale accretion onto the central sink. We model the SMBH and the NSC as static potentials and simulate the structure of supersonic gas flows within their combined gravitational field in order to quantify the gas accretion rate in the inner tenths of parsecs. Simulations are performed for adiabatic and isothermal flows as well as for a wide range of NSC radii and ambient conditions. These well-resolved, three dimensional simulations are used to generate sub-grid accretion prescriptions for larger scale simulations, thus providing a more accurate estimate of the feeding rates of SMBHs in cosmological simulations as well as a better determination of the expected luminosities of dual AGNs. We test the effects of this accretion rate enhancement on the growth of SMBHs in the galaxy merger simulations of \cite{debuhr2011,debuhr2012} and discuss what combination of gas and gravitational potential parameters results in a significant augmentation to the mass accretion rate. This paper is organized as follows.
In Section \ref{section:anal} we review the commonly used sub-grid prescriptions for calculating the mass accretion rates onto central massive black holes during galaxy merger simulations and suggest a modification in the presence of a NSC. A numerical scheme aimed at calculating black hole growth at high resolution, accounting for the gravitational focusing effects of a NSC, is presented in Section \ref{section:methods}. The conditions necessary for NSCs to collect ambient gas and, in turn, enhance the mass accretion rate of SMBHs are derived in Section \ref{sec:cond}. In Section \ref{section:mergerMods} we use the central gas densities from realistic SPH cosmological merger simulations to estimate the augmentation to SMBH mass accretion rates in the presence of a NSC as a function of time. We summarize our findings in Section \ref{section:summary}. \section{Accretion Flows Modified by the Presence of a NSC} \label{section:anal} SMBHs can trigger nuclear activity only as long as they interact with the surrounding gas. The way in which gas flows into a SMBH depends largely on the conditions where material is injected. The mass accretion rate onto SMBHs in cosmological simulations is commonly estimated using analytical prescriptions based on the large scale gas structures in the centers of the simulated merging galaxies. The BHL formalism assumes the gas is accreted spherically symmetrically \citep{johansson2009}: \begin{equation} \dot{M}_{\rm bondi} = 4 \pi (G M)^2 \rho_\infty (v^2 + c_{\rm s}^2)^{-3/2}, \end{equation} where $v$ is the relative velocity of the object through the external gas whose sound speed and density are given by $c_{\rm s}$ and $\rho_\infty$, respectively \citep{edgar2004}. The flow pattern is dramatically altered if the inflowing gas has a small amount of angular momentum.
If the inflowing gas is injected more or less isotropically from large $r$, but has specific angular momentum $l$ such that $l^2/(GM) \gg r_{\rm g}$, then the quasi-spherical approximation will break down and the gas will have sufficient angular momentum to orbit the SMBH (here $r_{\rm g}$ is the SMBH's Schwarzschild radius). Viscous torques will then cause the gas to sink into the equatorial plane of the SMBH. In recent works, such departures from spherical symmetry have been accounted for by assuming the accretion proceeds through a disk: \begin{equation} \dot{M}_\nu = 3 \pi \alpha \Sigma c_{\rm s}^2 \Omega^{-1} \label{eq:cpmmacc} \end{equation} where $\Sigma$ is the average surface gas density of the accretion disk, $\Omega$ is the average rotational angular frequency, and $\alpha$ is a dimensionless parameter dictating the strength of turbulent viscosity in the disk \citep{debuhr2011,debuhr2012}. Both prescriptions rely on an understanding of the gas properties at $r \gtrsim$100~pc in order to determine the mass accretion rate onto the central SMBH. As discussed in \cite{naiman2009,naiman2011}, a NSC moving at a low Mach number through relatively cold gas can drastically increase the gas density in its interior with respect to that of the external medium. Analytical estimates suggest that in the central regions of a NSC the expected density enhancement is given by \begin{equation} {\rho(r = 0) \over \rho_\infty} \approx \left[ 1 + {G M_{\rm c} (\gamma-1) \over c_\infty^{2} r_{\rm c}} \right]^{1/(\gamma-1)}, \label{eq:rhostationary} \end{equation} where the core radius, $r_{\rm c}$, is related to the cluster velocity dispersion by $\sigma_{V}^2 \approx GM_{\rm c}/r_{\rm c}$. The gas properties characterized here by $\rho_\infty$ and $c_\infty$ are the density and sound speed at $r\gg r_{\rm c}$. Here $\gamma$ denotes the adiabatic index of the flow and $M_{\rm c}$ designates the mass of the NSC \citep{naiman2011}.
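The scales set by the two expressions above, the BHL rate and the stationary density enhancement (using $\sigma_V^2 \approx GM_{\rm c}/r_{\rm c}$), can be evaluated in a short sketch. The black hole mass, ambient density, and sound speed below are assumed round numbers chosen only to give a feel for the magnitudes involved; the dispersion and sound speed in the second part are the fiducial values quoted later in the text.

```python
import numpy as np

G, Msun, yr = 6.674e-8, 1.989e33, 3.156e7      # cgs constants

def mdot_bondi(M, rho_inf, v, c_s):
    """BHL rate: 4 pi (G M)^2 rho_inf / (v^2 + c_s^2)^{3/2}."""
    return 4 * np.pi * (G * M)**2 * rho_inf / (v**2 + c_s**2)**1.5

def rho_enhancement(sigma_V, c_inf, gamma):
    """rho(r=0)/rho_inf = [1 + (gamma-1) sigma_V^2 / c_inf^2]^{1/(gamma-1)}."""
    return (1 + (gamma - 1) * (sigma_V / c_inf)**2) ** (1 / (gamma - 1))

# Assumed illustrative values: 10^7 Msun SMBH at Mach 1 in 100 km/s gas
M, rho_inf, c_s = 1e7 * Msun, 1e-22, 1e7
mdot_msun_yr = mdot_bondi(M, rho_inf, v=c_s, c_s=c_s) * yr / Msun
print(mdot_msun_yr)                            # ~0.012 Msun/yr

# Fiducial NSC values quoted later: sigma_V = 176 km/s, c_inf = 83 km/s
sigma_V, c_inf = 176e5, 83e5
adi = rho_enhancement(sigma_V, c_inf, gamma=5/3)
iso = rho_enhancement(sigma_V, c_inf, gamma=1.1)
print(adi, iso)        # ~8 (adiabatic) vs ~41 (nearly isothermal)
```

The strong $\gamma$ dependence previews why the nearly isothermal runs develop much larger core densities than the adiabatic ones.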
Given this density enhancement in the core of the NSC, the accretion rate of the embedded SMBH would be amplified by a factor of $\dot{M}_{\rm bh+c} \propto \rho(r=0)/c_{\rm s}^3(r=0)$ where $c_{\rm s} (r=0)=c_{\rm s,nsc}$ is the sound speed of the gas within the NSC and we have assumed here that accretion proceeds at the classical Bondi rate \citep{bondi1952}. Even a small amount of angular momentum can make a big difference, breaking the spherical symmetry of the inflowing gas and yielding an accretion disk instead of the radial flow assumed in equation \ref{eq:rhostationary}. If we instead suppose that accretion onto the central SMBH proceeds by viscous torques and require the average surface density of the disk to be proportional to $\rho_{\rm nsc}$, the mass accretion rate can be approximated as \citep{debuhr2011} \begin{equation} \dot{M}_{\rm bh+c} = 3 \pi \alpha \Sigma_{\rm nsc} c_{\rm s, nsc}^2 \Omega^{-1} \label{eq:mdot} \end{equation} with $c_{\rm s, nsc}^2 = \gamma K \rho_{\rm nsc}^{\gamma-1}$, and \begin{equation} {\Sigma_{\rm nsc} \over \Sigma} = {\rho_{\rm nsc} \over \rho_\infty } = \max\left\{\left( 1 + \frac{G M_{\rm c} [\gamma -1 ]}{c_\infty^2 r_{\rm c} [ 1 + \mu_\infty^2]} \right)^{\frac{1}{\gamma-1}} \left( \frac{\mu_\infty^2}{\mu_\infty^2 + 1}\right)^{\frac{3}{2}},1\right\}. \label{eq:rho} \end{equation} Here $\rho_{\rm nsc}$ is the modified density in the NSC's interior and $\mu_\infty = v_\infty/c_\infty$ is the Mach number of the SMBH+NSC complex with respect to the ambient gas. The resultant accretion rate would, in this limit, be amplified by a factor $\dot{M}_{{\rm bh+c}} \propto \rho(r=0) c_{\rm s}^2(r=0)$. Contrary to the classical BHL case, the motion of the SMBH+NSC complex with respect to the ambient medium does not result in a considerable change in the rate of mass accretion onto the SMBH.
This is because equation \ref{eq:rho} accounts for the protection provided by the large scale NSC which forms a quasi-hydrostatic envelope around the SMBH. Having said this, the strength of this protection is slightly diminished because the capture radius decreases due to the relative motion of the potential \citep{naiman2011}. As a result, the expected enhancement in the gas density of the NSC core is moderately smaller than the one given by the stationary formula (equation \ref{eq:rhostationary}). More importantly, we have assumed here that the influence of the NSC dominates the gravitational potential, which implies $M_{\rm c} \gtrsim M_{\rm bh}$. When $M_{\rm c} \lesssim M_{\rm bh}$, the formation of a quasi-hydrostatic envelope is inhibited. The presence of a compact and massive NSC with $M_{\rm c} \gtrsim M_{\rm bh}$ is thus required in order to significantly increase the accretion rate onto the central SMBHs during a galaxy merger. The degree to which the accretion rate is enhanced by the presence of a NSC is studied with numerical simulations in the remainder of the paper. \section{Simulating Accretion onto SMBH\lowercase{s} embedded in NSC\lowercase{s}} \label{section:methods} To examine the ability of a NSC to collect ambient gas and, in turn, enhance the mass accretion rate onto the central SMBH, we simulate the SMBH~+~NSC complex as a gravitational potential $\Phi = \Phi_{\rm c} + \Phi_{\rm bh}$ moving through ambient gas with FLASH, a parallel, adaptive mesh refinement hydrodynamics code \citep{fryxell}. A smooth potential, given by \begin{equation} \Phi_{\rm c} = {G M_{\rm c} \over (r^2 + r_{\rm c}^2)^{1/2}}, \end{equation} provides an accurate description of the NSC potential given the cluster mass, $M_{\rm c}$, and radius, $r_{\rm c} = (2/3^{3/2}) G M_{\rm c}/\sigma_V^2$ \citep{pflamm2009}.
The gravitational potential of the SMBH is given by \begin{equation} \Phi_{\rm bh} = {G M_{\rm bh} \over r} \end{equation} and is modeled by a sink term. Here, we assume the gravitational potential is static ($M_{\rm bh}$ does not grow), a valid approximation since the mass accreted during the simulation is small. We use inflow boundaries to simulate the NSC's motion through the central galaxy medium. In order to accurately resolve the mass accretion rate onto the black hole, SMBH sink sizes are taken to be within hundredths of the sonic radius, $r_{\rm s}$, where \begin{equation} r_{\rm s} = \frac{2 G M_{\rm bh}}{c_\infty^2 ( 1 + \mu_\infty^2)}, \end{equation} following the prescriptions of \cite{ruffert1994a,ruffert1994b}. Models are run from an initially uniform background until a steady density enhancement forms within the NSC, which usually takes tens of core sound crossing times. Convergence tests with higher refinement and longer run times produce models which show similar central densities and mass accretion rates to those depicted here. We compute both models with adiabatic ($\gamma = 5/3$) and nearly isothermal ($\gamma = 1.1$) equation of states in order to test the effects of cooling. To adequately resolve the small scale sink, the significantly more extended NSC's core, and the even larger scale flow structures that develop around the cluster, a sizable level of refinement is required on the AMR grid (commonly 14 levels of refinement). As the minimal time step in our simulation is determined by the gas flow on small scales, necessary runtimes would need to be prohibitively large in order to simulate a resolved sink and the large scale gas structure until a steady state density enhancement forms within the NSC. Instead, we construct the accretion rates onto our model SMBH+NSC systems from a set of three simulation setups, as depicted in Figure \ref{fig:simsExplained} with parameters summarized in the first row of Table \ref{table:sims}.
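For concreteness, the sonic radius above can be evaluated for the fiducial parameters of the run described next ($M_{\rm bh} = 2.5\times 10^7\,M_\odot$, $c_\infty = 83$ km/s, Mach number 1.64, sink radius 0.5 pc); this is a back-of-the-envelope check, not output from the simulations.

```python
# Sonic radius  r_s = 2 G M_bh / (c_inf^2 (1 + mu^2))  in cgs units
G = 6.674e-8                  # cm^3 g^-1 s^-2
Msun = 1.989e33               # g
pc = 3.086e18                 # cm

M_bh = 2.5e7 * Msun
c_inf = 83e5                  # 83 km/s in cm/s
mach = 1.64
r_s = 2 * G * M_bh / (c_inf**2 * (1 + mach**2))

r_s_pc = r_s / pc
print(r_s_pc)                 # ~8.5 pc
print(0.5 / r_s_pc)           # the 0.5 pc sink is a small fraction of r_s
```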
Figure \ref{fig:simsExplained} depicts the flow of nearly isothermal gas ($\gamma = 1.1$) in the vicinity of a SMBH ($M_{\rm bh} = 2.5 \times 10^7 \, M_\odot$, $r_{\rm sink} = 0.5$~pc) surrounded by a massive NSC ($M_{\rm c}/M_{\rm bh} = 10$, $r_{\rm c} = 5.3$~pc, $\sigma_V = 176\, \rm{km/s}$). The gas has a sound speed of $c_\infty = 83 \, {\rm km/s}$ and the SMBH~+~NSC complex is moving at a Mach number of $1.64$ with respect to the ambient gas. The ratio of $M_{\rm c}/M_{\rm bh} = 10$ is consistent with observations: $0.1 \lesssim M_{\rm c}/M_{\rm bh}\lesssim 100$ \citep{graham2009,neumayer2012b}. However, as noted in Section \ref{section:anal}, as the ratio of $M_{\rm c}/M_{\rm bh}$ decreases so does the ability of a NSC to significantly alter the flow of gas onto the embedded SMBH (for a given $\sigma_{V}$). Thus, the model depicted in Figure \ref{fig:simsExplained} provides an example in which the presence of a massive NSC can drastically alter the flow properties around the central SMBH. The first of these simulations, labeled {\it initial} in the density contours and mass accretion rate plots shown in Figure \ref{fig:simsExplained}, is a small scale simulation which follows the gas flow as the central density enhancement begins to form in the NSC core while simultaneously tracking the mass accretion rate onto the fully resolved central sink. Once the large scale bowshock begins to interact with the boundaries of the computational domain (at about three to five core sound crossing times) the simulation is halted. We concurrently simulate the same initial setup in a larger box (about fifty core radii) at a lower resolution and follow the density build up within the NSC's core without the presence of a sink. We label this simulation as {\it no sink} in Figure \ref{fig:simsExplained}. 
Because the presence of a sink has a minimal effect on the build up of mass in the core, the central density in the large scale simulation, albeit at lower resolution, agrees well with the central density evolution observed in the {\it initial} simulation. While there is no explicit sink in this second set of simulations, the mass accretion rate onto a central black hole can be relatively accurately inferred from the gas properties within the NSC's core as shown in Figure \ref{fig:simsExplained}. Once a steady state density enhancement has formed in the central regions of the NSC, these large scale simulations are then refined further until a central sink is resolved by at least 16 cells. This level of refinement was chosen to allow the unsteady mass accretion rate to converge, as argued by \cite{ruffert1994b}. We refer to these simulations as {\it steady state} in Figure \ref{fig:simsExplained}. By using the set of three simulations discussed here for each SMBH+NSC system, we can resolve both the larger scale flow around the nuclear star cluster and the small scale flows into the accreting SMBH. Because we are not explicitly including radiative cooling, the simulations depicted in Figure \ref{fig:simsExplained} can be easily rescaled to consider a wide range of NSC properties. For example, for a fixed $M_{\rm c}/M_{\rm bh}$, the structure of the flow will remain unchanged provided the ratio $\sigma_V^2/(c_{\rm s}^2 + v^2)$ remains constant \citep{naiman2011}.
In Figure \ref{fig:scaled} we compare the flow in and around a {\it heavy} SMBH~+~NSC complex ($M_{\rm bh} = 2.5 \times 10^7 \, M_\odot$, $M_{\rm c} = 2.5 \times 10^8 \, M_\odot$, $\sigma_V = 280 \, \rm{km/s}$) moving through hot gas ($c_\infty = 140 \, {\rm km/s}$, $\mu = 1.33$, $\gamma = 5/3$) with the flow surrounding a {\it lighter} system ($M_{\rm bh} = 10^7 \, M_\odot$, $M_{\rm c} = 10^8 \, M_\odot$, $\sigma_V = 177 \, {\rm km/s}$) moving through cold gas ($c_\infty = 89 \, {\rm km/s}$, $\mu = 1.33$, $\gamma = 5/3$). The parameters of the less massive model are chosen such that the ratio $\sigma_V^2/(c_\infty^2 + v^2)$ remains the same. Because this ratio is constant, the flow is nearly identical. It is important to note that while the mass accretion rate onto the central SMBH is higher for the more massive system, the accretion rate enhancement with respect to that of a black hole without the NSC, $\dot{M}_{\rm bh+c}/\dot{M}_{\rm bondi}$, is the same between the two simulations. The set of simulations shown in Figure \ref{fig:simsExplained} (Figure~\ref{fig:scaled}) depicts the gas flow around a compact NSC slowly traversing through nearly isothermal (adiabatic) gas. However, the enhancement in the black hole's accretion rate depends not only on the thermodynamical conditions of the ambient gas but also on the properties of the NSC. The remainder of this paper is thus devoted to calculating the necessary conditions for large central density enhancements in the centers of SMBH~+~NSC systems and determining whether or not these conditions persist during a galaxy merger. \section{Necessary Conditions for Accretion Rate Enhancements}\label{sec:cond} In order for the gravitational potential of the NSC to alter the local gas flow before it is accreted onto the central SMBH, the NSC must be moving relatively slowly through cold gas, a condition shown in \citet{naiman2011} to be equivalent to requiring that \begin{equation} \sigma_{V}^2 > c_{\rm s}^2 + v^2.
\end{equation} For this reason, knowledge of the velocity dispersion of typical NSCs is vital in determining whether or not central gas densities will form in their cores (provided that $M_{\rm c}/M_{\rm bh}\gg 1$). Surveys of NSCs show half light radii in the range $1-5$~pc \citep{boker2010,georgiev2014}, which, assuming a Plummer model (with $M/L=1$), gives a range in core radii of $0.8-3.8$~pc. These results combined with the observed mass range of approximately $10^5 - 10^7 \, M_\odot$ \citep{boker2010,georgiev2014,neumayer2012b} naturally result in a large range of possible velocity dispersions: $\sigma_V \approx 5 - 100 \, {\rm km/s}$. Intuitively, we expect more massive systems to have larger core radii, a fact which is borne out in several surveys of NSC properties as depicted in Figure \ref{fig:msigma} \citep{seth2008,brok2014}. If we narrow our sample size by restricting our calculations to systems observed to contain both a NSC and a SMBH, the mass-size relation of NSCs is less well constrained, as clearly depicted in Figure \ref{fig:msigma}. Measuring the mass of the black hole and the mass and velocity dispersion of the NSC requires high resolution observations, which are only available for a handful of systems, with the best currently known example being our own Milky Way. In the Galaxy's nucleus, the black hole mass has been measured to be $M_{\rm bh} \approx 4 \times 10^6 \, M_\odot$ \citep{ghez2008,gillessen2009,genzel2010,do2013}. With a NSC mass of $M_{\rm c} \approx 3 \times 10^7 \, M_\odot$ and a half light radius of $r_{\rm eff} \approx 4$~pc, one finds $r_{\rm c} \approx 3$~pc and $\sigma_V \approx 130 \, {\rm km/s}$ \citep{graham2009,feldmeier2014}.
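For concreteness, the mapping from the observed quantities ($M_{\rm c}$, $r_{\rm eff}$) to ($r_{\rm c}$, $\sigma_V$) quoted above can be sketched as follows. The conversion $r_{\rm c} = r_{\rm eff}/1.305$ (the Plummer half-mass-to-scale-radius ratio) and the identification of $\sigma_V$ with the peak circular speed of a Plummer sphere, $v_{\rm max}^2 = 2GM_{\rm c}/(3\sqrt{3}\,r_{\rm c})$, are assumptions made here for illustration, chosen because they reproduce the quoted Milky Way values; the definitions actually used may differ.

```python
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def plummer_core_radius(r_eff):
    # Assumed conversion implied by the quoted numbers: r_c = r_eff / 1.305,
    # the ratio of the Plummer half-mass radius to its scale radius.
    return r_eff / 1.305

def sigma_v(m_c, r_c):
    # One plausible definition consistent with the quoted values:
    # the peak circular speed of a Plummer sphere,
    # v_max^2 = 2 G M_c / (3*sqrt(3) * r_c).
    return math.sqrt(2.0 * G * m_c / (3.0 * math.sqrt(3.0) * r_c))

# Milky Way NSC: M_c ~ 3e7 Msun, r_eff ~ 4 pc
r_c = plummer_core_radius(4.0)        # ~3.1 pc, cf. ~3 pc quoted
print(r_c, sigma_v(3e7, r_c))         # ~127 km/s, cf. ~130 km/s quoted
print(sigma_v(2.5e8, 5.3))            # ~279 km/s, cf. the 280 km/s "heavy" model
```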
The observed NSC velocity dispersions in galaxies containing black holes, which are in the range $\sigma_V \approx 100 - 300 \, {\rm km \, s^{-1}}$, are similar in magnitude to both the sound speed of the surrounding gas and the relative gas velocities of SMBHs during galaxy mergers \citep{debuhr2011,debuhr2012}. As a result, the conditions necessary for efficient mass accumulation are commonly satisfied, an assertion that we will quantify below by making use of detailed galaxy merger simulations. Figure \ref{fig:modBondiadia} shows how the accretion rate onto a SMBH is modified by the presence of a NSC satisfying $\sigma_V^2 \gtrsim c_{\rm s}^2 + v^2$. In all calculations, the flow is assumed to behave adiabatically. Without the presence of a NSC, the bowshock penetrates close to the sink boundary. However, in the presence of a NSC, the bowshock forms at the outer boundary of the cluster's core. As discussed in \citet{lin2007} and \cite{naiman2011}, this effectively mitigates the effects of the gas motion on the central sink and the accretion proceeds as a nearly radial inward flow. This added protection results in a moderate mass accretion rate enhancement onto the central sink, which is slightly larger for a more compact NSC. As the flow accumulates in the NSC's potential, a quasi-hydrostatic envelope builds up around the central sink, whose central density increases with the compactness of the NSC. For an adiabatic flow, this density enhancement is accompanied by an increase in the sound speed of the flow such that $\dot{M}_{\rm nsc} \propto \rho_{\rm nsc}/c_{\rm s,nsc}^3 \approx \rho_\infty/c_\infty^3$. As a result, the accretion rate for a stationary sink is not expected to be aided by the increase in central density in an adiabatic flow. The moderate increase in mass accretion rates for an adiabatic flow, produced mainly by changes in the flow structure, is shown by both our analytical (see equations \ref{eq:mdot} and \ref{eq:rho}) and simulation results.
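The cancellation $\rho_{\rm nsc}/c_{\rm s,nsc}^3 \approx \rho_\infty/c_\infty^3$ follows from a one-line scaling argument: for an adiabatic ideal gas $c_{\rm s}^2 \propto \rho^{\gamma-1}$, so $\rho/c_{\rm s}^3 \propto \rho^{\,1-3(\gamma-1)/2}$, and the exponent vanishes exactly at $\gamma = 5/3$. A minimal numerical sketch (the hundredfold density boost is an arbitrary illustrative value):

```python
def mdot_gain(density_boost, gamma):
    """Relative change of rho/c_s^3 when the central density rises by
    `density_boost`, assuming c_s^2 ∝ rho^(gamma-1) (adiabatic ideal gas)."""
    exponent = 1.0 - 1.5 * (gamma - 1.0)
    return density_boost ** exponent

print(mdot_gain(100.0, 5.0 / 3.0))  # 1.0: no net gain for a gamma = 5/3 flow
print(mdot_gain(100.0, 1.1))        # ~50: large gain when the gas stays cool
```

For $\gamma = 5/3$ the density enhancement is exactly offset by the rise in sound speed, whereas a nearly isothermal index converts the same enhancement into more than an order of magnitude gain in $\rho/c_{\rm s}^3$.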
On the other hand, the central density enhancement enabled by the presence of a NSC when $\sigma_V^2 \approx c_{\rm s}^2 + v^2$ can be accompanied by a drastic increase in mass accretion rate if the gas is permitted to cool. This allows the central density to grow without a mitigating increase in the local sound speed of the gas. In the near isothermal ($\gamma=1.1$) calculations depicted in Figure \ref{fig:modBondiiso}, we have $c_{\rm s} (r) \approx c_\infty$ and the presence of a NSC results in an enhancement of about an order of magnitude in the mass accretion rate onto the central sink. Here, the accretion rate fluctuates around a mean value, as the isothermal gas can collapse to much smaller scale structures than in the adiabatic case, providing the central sink with much larger temporal changes in the amount of accreted gas. Figures \ref{fig:modBondiadia} and \ref{fig:modBondiiso} together demonstrate the important effects that both the equation of state and the compactness of the NSC can have on the mass accreted by the central SMBH. In what follows, we make use of cosmological simulations to estimate the range of gas and NSC properties conducive to large enhancements in the mass accretion rate onto the central SMBHs during galaxy mergers. \section{The Accretion History of SMBH\lowercase{s} in Galaxy Mergers} \label{section:mergerMods} The majority of galaxies ($50-70$\%) are expected to harbor nuclear star clusters \citep{neumayer2012}, and therefore large enhancements in accretion rates onto SMBHs are possible during typical galaxy mergers when conditions are favorable (i.e. $\sigma_V^2 \approx c_{\rm s}^2 + v^2$). If, in addition, during the merger the gas in the central regions cools efficiently, the increase in the mass accreted by the SMBH can be significant. To determine if and when these conditions are satisfied during a merger we examine the gas properties in galaxy merger simulations from \cite{debuhr2011}.
These full-scale SPH simulations of major mergers include cooling, star formation and associated feedback, and feedback from a supermassive black hole. Of interest for this work is the model {\it fidNof}, which places two galaxies on a prograde parabolic orbit with unaligned spins and a merger mass ratio of 1:1. Both galaxies in the simulation have a total mass of $1.94 \times 10^{12} M_{\sun}$ (including the dark matter halo), and have $8 \times 10^5$ particles. The Plummer equivalent gravitational force softening was $47$ pc. Figure \ref{fig:snapshots} shows the gas properties in the central core regions of the galaxies in the {\it fidNof} model. This range of sound speeds and densities represents the average values of a subset of particles within the accretion radii, defined as four times the simulation's gravitational softening length: $R_{\rm acc} \approx 188$~pc \citep{debuhr2011,debuhr2012}. To estimate the average properties of the gas ($c_\infty$, $v_\infty$, $\rho_\infty$) flowing toward the SMBH we use only particles within a $30^\circ$ conical region in front of the SMBH's velocity vector. While the average gas parameters are relatively insensitive to the exact value of the opening angle of the cone, the amount of inflowing gas is slightly underestimated using this method as it ignores the material which can be accreted from behind the direction of motion of the SMBH. Given the estimated average properties of the gas flow at large scales, we expect the effects of the NSC in altering the mass accretion history of the central SMBH to be most prominent when the gas is able to cool efficiently, as argued in Section~\ref{sec:cond}. In a merger simulation, the condition for efficient cooling is established when the sound crossing time across the accretion radius is longer than the cooling time of the gas: $t_{\rm cs,acc} \gtrsim t_{\rm cool}$.
If this condition holds, the cold gas can be significantly compressed and, as a result, lead to a large density enhancement in the core of the NSC. We can estimate the accretion timescale of the SMBH as \begin{equation} t_{\rm cs,acc} = {2 G M_{\rm bh} \over c_\infty^3(1+\mu_\infty^2)} \end{equation} where $\mu_\infty = v_\infty/c_\infty$ is the Mach number of the large scale flow. The cooling time can be written as $t_{\rm cool} = \epsilon/[n_e n_H \Lambda(T,Z)]$, where $\epsilon$ is the internal energy of the gas, $\Lambda$ is the cooling rate of the gas at a temperature $T$ and metallicity $Z = 10^{-2} \, Z_\odot$, and $n_e$ and $n_H$ are the electron and neutral hydrogen number densities, respectively. In galaxy merger simulations, the condition $t_{\rm cs,acc} \gtrsim t_{\rm cool}$ is generally satisfied (Figure \ref{fig:snapshots}) although for a particular run, the average gas properties can fluctuate between the cooling and non-cooling regimes as the merger progresses. This is illustrated in the simulation snapshots {\it \RNum{1}}, {\it \RNum{2}} and {\it \RNum{3}} taken from the {\it fidNof} model of \cite{debuhr2011} at early ($t_{\RNum{1}}=0.27$~Gyrs, $t_{\RNum{2}}=0.5$~Gyrs) and late ($t_{\RNum{3}} = 1.49$~Gyrs) times in the merger process, which are depicted in Figure \ref{fig:snapshots}. As a consequence, there may be times during the galaxy merger when cooling rates within the accretion radius are high and the mass accretion rate can be heavily augmented by the presence of a NSC, provided that $\sigma_V \approx c_\infty$. Since we are not treating the feedback from the black hole explicitly, we use both adiabatic (inefficient cooling; efficient feedback) and isothermal (efficient cooling; inefficient feedback) simulations to illustrate the effects of the surrounding NSC on the gas flow as a whole and the importance of the $\sigma_V \approx c_\infty$ condition. 
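The comparison of $t_{\rm cs,acc}$ against $t_{\rm cool}$ can be illustrated with rough cgs numbers. The cooling rate $\Lambda$, mean molecular weight, and the crude estimate $n_e \approx n_H \approx \rho/(2\mu_{\rm mol} m_p)$ adopted below are illustrative stand-ins, not the values used with the simulation data:

```python
# Order-of-magnitude check of t_cs,acc vs t_cool for conditions similar
# to snapshot I (c_inf ~ 100 km/s, rho ~ 1e-21 g/cm^3). Lambda and the
# mean molecular weight are assumed values for illustration only.
G = 6.674e-8          # cgs gravitational constant
msun = 1.989e33       # solar mass in g
mp = 1.673e-24        # proton mass in g

def t_acc(m_bh_msun, c_inf, mach):
    """Sound-crossing time of the accretion radius, 2GM/(c^3 (1 + mu^2))."""
    return 2.0 * G * m_bh_msun * msun / (c_inf**3 * (1.0 + mach**2))

def t_cool(rho, c_inf, gamma=5.0/3.0, mu_mol=0.6, lam=1.0e-23):
    """epsilon / (n_e n_H Lambda), with epsilon = rho c^2 / (gamma (gamma-1))
    and n_e ~ n_H ~ rho / (2 mu_mol m_p) as crude stand-ins."""
    eps = rho * c_inf**2 / (gamma * (gamma - 1.0))
    n = rho / (2.0 * mu_mol * mp)
    return eps / (n * n * lam)

ta = t_acc(1e8, 1.0e7, 1.5)          # M_bh = 1e8 Msun, c_inf = 100 km/s
tc = t_cool(1.0e-21, 1.0e7)
print(ta / 3.15e7, tc / 3.15e7)      # in years; cooling wins comfortably
```

For these snapshot-I-like conditions the cooling time falls orders of magnitude below the accretion-radius sound crossing time, placing the flow firmly in the efficient-cooling regime of Figure \ref{fig:snapshots}.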
Figure \ref{fig:nocool} shows the gas in the inner regions of a model where $\sigma_V < c_\infty$. Here, a moderately massive black hole ($M_{\rm bh} = 10^6 \, M_\odot$) with and without a surrounding NSC ($M_{\rm c} = 10^7 \, M_\odot$ and $\sigma_V = 115 \, {\rm km/s}$) propagates through a background medium with $c_\infty = 200 \, {\rm km/s}$ and $\rho_\infty=10^{-23}\,{\rm g\,cm^{-3}}$ (similar to the gas properties found in simulation snapshot {\it \RNum{3}} of Figure \ref{fig:snapshots}). Because $\sigma_V < c_\infty$, the gas flow around the SMBH is not altered by the presence of the NSC and the mass accretion rates change only minimally between the models with and without the NSC, even when cooling is efficient. This is corroborated by the results of the near isothermal and adiabatic simulations, which are shown in Figure \ref{fig:nocool}. When the SMBH+NSC complex propagates into a region in parameter space where cooling is efficient and the condition $\sigma_V \gtrsim c_\infty$ is satisfied, the presence of a NSC can dramatically increase the mass supply onto the SMBH. Figure \ref{fig:cool} shows the gas flow in the inner regions of a NSC where cooling is predicted to be efficient. Similar to Figure \ref{fig:nocool}, the SMBH~+~NSC complex is characterized by $M_{\rm bh} = 10^6 \, M_\odot$ and $M_{\rm c} = 10^7 \, M_\odot$ ($\sigma_V = 115 \, {\rm km/s}$) but in this case it propagates through a background medium with $c_\infty = 100 \, {\rm km/s}$ and $\rho_\infty=10^{-21}\,{\rm g\,cm^{-3}}$ (similar to those found in simulation snapshot {\it \RNum{1}} of Figure \ref{fig:snapshots}). Here, even without the presence of a NSC, the accretion rate is significantly higher than that derived from Figure \ref{fig:nocool} due to the isothermal equation of state, higher density and lower sound speed of the medium. However, in these efficient cooling conditions, the presence of a NSC can have drastic effects on the mass feeding rate enhancement when compared to the case without a NSC.
This is evident when comparing the evolution of the mass accretion rate calculated in the adiabatic and near isothermal simulations. We note here that in both regimes, the prescription laid out by equations (\ref{eq:mdot}) and (\ref{eq:rho}) provides a relatively good estimate of the steady state mass accretion rate onto the SMBH (see Figures \ref{fig:nocool} and \ref{fig:cool}). In what follows, we will assume the validity of such a prescription in order to estimate the growth history of SMBHs in galaxy mergers. To establish the mass feeding history of merging SMBHs embedded in NSCs, we use the gas properties in the central regions of simulated merging galaxies. Figure \ref{fig:subgrid} shows how the growth history of the central SMBHs, as derived from the {\it fidNof} simulation of \cite{debuhr2011}, is altered by the presence of a NSC. A NSC with $M_{\rm c} = 10^7 \, M_\odot$ is assumed to reside in each galactic center at the start of the simulation. These values are consistent with observations of NSCs around SMBHs \citep{graham2009,graham2011}. The calculation assumes that the flow is able to cool efficiently ($\gamma = 1.1$) and that the NSC is unable to grow as the mass of the SMBH increases, such that the mass accretion prescription reverts to the unmodified version (equation \ref{eq:cpmmacc}) when $M_{\rm bh} > M_{\rm c}$. At early times, the presence of the NSC enables the central SMBH to grow more quickly than it would in isolation due to the increase in the density of the gas accreted by the central SMBHs, as described by equation \ref{eq:mdot}. As the galaxy merger progresses and the mass of the SMBH quickly increases above $M_{\rm c}$, the gravitational influence of the cluster ceases to be relevant and the mass accretion prescription reverts to that used in the simulation (equation \ref{eq:cpmmacc}).
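Because the simulation's viscous prescription scales as $\dot{M}_\nu \propto M^{-1/2}$, an early head start in mass tends to wash out. A toy closed-form integration of $\dot{M} = k M^{-1/2}$, which gives $M(t) = (M_0^{3/2} + \tfrac{3}{2} k t)^{2/3}$, illustrates this; the normalization $k$ and the masses below are arbitrary illustrative values, not calibrated to the simulation:

```python
def evolve(m0, k, t):
    """Closed-form solution of dM/dt = k M^(-1/2):
    M(t) = (m0^(3/2) + 1.5 k t)^(2/3)."""
    return (m0**1.5 + 1.5 * k * t) ** (2.0 / 3.0)

k = 1.0e4                             # arbitrary normalization, illustrative only
m_boosted = evolve(2.0e6, k, 1.0e9)   # an early NSC-driven boost doubled the mass
m_plain = evolve(1.0e6, k, 1.0e9)
print(m_plain / m_boosted)  # approaches 1: the early head start washes out
```

Under this scaling the difference $M_1^{3/2} - M_2^{3/2}$ is conserved, so the fractional mass gap between the boosted and unboosted black holes shrinks steadily with time.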
Because in our formalism $\dot{M}_\nu \propto \Omega^{-1} \propto M^{-1/2}$, the growth rate of the pre-merger SMBH hosting a NSC is reduced when compared to the one in isolation, which is initially lighter. The swifter rate of growth for the unmodified SMBH implies that it is able to reach a total mass similar to that of the SMBH surrounded by a NSC before the merger takes place. Despite having only a brief impact, this early growth spurt induced by the presence of a NSC results in a different early growth and feeding history for the pre-merger SMBHs. We find that the initial properties of the NSC have an enduring effect on the luminosity and mass assembly history of the early time pre-merger SMBHs. We note here that we have assumed that the NSC's mass remains unchanged during the entire simulation. If a larger stellar concentration is able to form around the growing SMBH, then the evolving NSC could have a longer lasting impact. \section{Summary and Conclusions} \label{section:summary} An understanding of how matter can be funneled to galactic nuclei is essential when constructing a cosmological framework for galaxy evolution. When modeling galaxy evolution, an implementation of a sub-grid mass accretion prescription to estimate the gas flow onto SMBHs is required for the sake of computational efficiency. In simulations of merging galaxies, some form of the classical Bondi-Hoyle-Lyttleton (BHL) accretion prescription is usually implemented to estimate the SMBH's feeding rate. This prescription assumes that the properties of gas at hundreds of parsecs accurately determine the mass accretion rate. In this paper we argue that NSCs, a common component of galactic centers at parsec scales, can provide an efficient mechanism for funneling gas towards the SMBH at scales which are commonly unresolved in cosmological simulations.
For the conditions expected to persist in the centers of merging galaxies, the resultant large central gas densities in NSCs should produce enhanced accretion rates onto the embedded SMBHs, especially if cooling is efficient. Because these NSCs are typically more massive than the central SMBH, they can significantly alter the gas flow before it is accreted. While the model shown in Figure \ref{fig:subgrid} results in a modest increase in the final mass of the merged SMBH, the presence of NSCs results in faster SMBH growth rates and higher bolometric luminosities than predicted by the standard BHL formalism. Obviously, these calculations are incomplete and would improve with a self-consistent implementation of feedback. It has been suggested that the interplay between SMBH mass and host galaxy properties indicates that black hole feedback during mergers alters the properties of gas at galactic scales in order to shape these observed correlations \citep{johansson2009,debuhr2011,debuhr2012}. Progress in our understanding of these processes and higher resolution simulations will be necessary before we can conclude that quasar feedback is in fact an essential ingredient. With more accurate simulations of the growth of SMBHs surrounded by NSCs in galaxy mergers, we can better constrain the relevant physics responsible for the $M_{\rm bh}-\sigma$ and $M_{\rm bh}-L$ relations from comparisons to observational data. \\ \\ \indent We acknowledge helpful discussions with Doug Lin, Elena D'Onghia, Anil Seth, Morgan MacLeod and Lars Hernquist. We also thank the anonymous referee for constructive comments. Support was provided by the David and Lucile Packard Foundation and NSF grant AST-0847563, Simons Foundation and NASA grant NNX11AI97G.
\clearpage \begin{deluxetable}{cccccccc} \tablehead{\colhead{Name} & \colhead{$M_{\rm c}$ $[10^8 \, M_\odot]$} & \colhead{$M_{\rm bh}$ $[10^7 \, M_\odot]$} & \colhead{$\sigma_V$ $[{\rm km \, s^{-1}}]$} & \colhead{$r_{\rm c}$ [pc]} & \colhead{$\gamma$} & \colhead{$\mu$} & \colhead{$\sigma_V/c_\infty$}} \startdata 1A & 2.5 & 2.5 & 280 & 5.3 & $1.1$ & $1.64$ & 1.31 \\ 2A ({\it Heavy}) & 2.5 & 2.5 & 280 & 5.3 & $5/3$ & 1.33 & 2.12 \\ 2B ({\it Light}) & 1.0 & 1.0 & 177 & 5.3 & $5/3$ & 1.33 & 2.12 \\ 4A & 0 & 2.5 & Naked BH & Naked BH & $5/3$ & 1.33 & Naked BH \\ 4B & 2.5 & 2.5 & 198 & 10.6 & $5/3$ & 1.33 & 1.50 \\ 4C & 2.5 & 2.5 & 280 & 5.3 & $5/3$ & 1.33 & 2.12 \\ 5A & 0 & 2.5 & Naked BH & Naked BH & 1.1 & 1.64 & Naked BH \\ 5B & 2.5 & 2.5 & 198 & 10.6 & 1.1 & 1.64 & 0.93 \\ 5C & 2.5 & 2.5 & 280 & 5.3 & $1.1$ & $1.64$ & 1.31 \\ 7A & 0 & 0.1 & Naked BH & Naked BH & $1.1$ & $1.5$ & Naked BH \\ 7B & 0.1 & 0.1 & 115 & 2.0 & $1.1$ & $1.5$ & 0.58 \\ 7C & 0 & 0.1 & Naked BH & Naked BH & $5/3$ & $1.5$ & Naked BH \\ 7D & 0.1 & 0.1 & 115 & 2.0 & $5/3$ & $1.5$ & 0.58 \\ 8A & 0 & 0.1 & Naked BH & Naked BH & $1.1$ & $1.5$ & Naked BH \\ 8B & 0.1 & 0.1 & 115 & 2.0 & $1.1$ & $1.5$ & 1.2 \\ 8C & 0 & 0.1 & Naked BH & Naked BH & $5/3$ & $1.5$ & Naked BH \\ 8D & 0.1 & 0.1 & 115 & 2.0 & $5/3$ & $1.5$ & 1.2 \enddata \tablecomments{Columns are (1) The name of the simulation - figure denoted by a number, subplot or line denoted by a letter, (2) mass of the NSC, (3) mass of the SMBH, (4) velocity dispersion of the NSC, (5) NSC cluster radius, (6) adiabatic index of the ambient gas, (7) Mach number of the flow, and (8) the ratio of NSC velocity dispersion to background sound speed.
Simulations without a NSC, and thus no parameter for the NSC mass, velocity dispersion or radius, are denoted by $M_{\rm c} = 0.0$ and the words ``Naked BH'' in all other fields.} \label{table:sims} \end{deluxetable} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig1_scaled.jpg} \caption{Density contours of the flow pattern around a SMBH~+~NSC system moving through a uniform density, near isothermal ($\gamma = 1.1$) medium for three different simulation setups. The {\it initial} simulation is a small scale calculation that resolves how the gas flow begins to accumulate within the NSC core and accretes onto the fully resolved central sink. The {\it no sink} simulation is a low resolution, large scale calculation without a sink that captures the gas build up in the NSC core until a steady state is achieved. The {\it steady state} simulation is a large scale calculation that includes an embedded sink once a steady state central density enhancement has been realized. The rightmost line plot shows the mass accretion rate for the three different simulation setups as a function of the sink's sound crossing time: $t_{\rm cs} = r_{\rm sink}/c_{\infty}$. Common to all calculations are $M_{\rm c}/M_{\rm bh} = 10$ with $M_{\rm bh} = 2.5 \times 10^7 \, M_\odot$, $\mu = 1.64$, $c_{\rm s} = 83 \, \rm{km/s}$ and $\sigma_V/c_{\rm s} = 2$. The Plummer core radius is denoted by a black open circle and the sink size $r_{\rm sink} \approx 0.05R_{\rm b,R}$ is depicted by a black filled circle. Here, the sink size is a fraction of the modified Bondi radius defined by \cite{ruffert1994a}, $R_{\rm b,R} = G M_{\rm bh}/c_\infty^2$, to ensure a converged mass accretion rate onto the central SMBH \citep{ruffert1994a,ruffert1994b}. Snapshots from left to right correspond to times $t_{\rm cs} = $~130, 221, and 250.
This simulation is identified as {\it 1A} in Figure \ref{fig:msigma} and Table \ref{table:sims}.} \label{fig:simsExplained} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig2_scaled.jpg} \caption{Density (top panels) and temperature (bottom panels) contours of the flow pattern around a {\it Heavy} ({\it 2A} in Figure \ref{fig:msigma}, Table \ref{table:sims}) ($M_{\rm c} = 2.5 \times 10^8 \, M_\odot$, $M_{\rm bh} = 2.5 \times 10^7 \, M_\odot$, $\sigma_V = 280 \, {\rm km/s}$) and {\it Light} ({\it 2B}) ($M_{\rm c} = 10^8 \, M_\odot$, $M_{\rm bh} = 10^7 \, M_\odot$, $\sigma_V = 177 \, {\rm km/s}$) SMBH~+~NSC complexes moving through a uniform density in an adiabatic ($\gamma = 5/3$) medium with $\mu = 1.33$ and $\sigma_V/c_\infty = 2.12$. The Plummer core radius is denoted by a black open circle and the sink size $r_{\rm sink} \approx 0.05r_{\rm b,R}$ is denoted by a black filled circle.} \label{fig:scaled} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig3_msigma_new.pdf} \caption{Mass--size relation for NSCs. Grey asterisks are from \cite{seth2008} while grey triangles are taken from the \cite{neumayer2012b} sample of NSCs in bulgeless galaxies. The three black diamonds show measurements of NSC masses and sizes where the mass of the embedded SMBH is also known \citep{graham2009}. The two lines show the mass-size relation inferred from the NSC sample of \cite{brok2014} with ({\it dashed} line) and without ({\it dot-dashed} line) the unresolved sources included in their fit. Here, all half light radii have been scaled to a core radius assuming the NSC potentials are well described by a Plummer model. 
The mass and $\sigma_V$ of the clusters presented in the simulations of Figures \ref{fig:scaled}-\ref{fig:subgrid} are shown with black crosses.} \label{fig:msigma} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig4_scaled.jpg} \caption{Density contours of the flow pattern around a SMBH with $M_{\rm bh} = 2.5 \times 10^7 \, M_\odot$ moving through a uniform density medium characterized by $\gamma = 5/3$. Simulation snapshots are plotted for a naked SMBH ({\it left}, {\it 4A} in Table \ref{table:sims}), a SMBH embedded in a diffuse NSC ({\it middle}, {\it 4B} in Figure \ref{fig:msigma}, Table \ref{table:sims}) and a SMBH embedded in a compact NSC ({\it right, 4C}) together with the mass accretion rate history in each system, which is calculated using the three different simulation setups discussed in Figure~\ref{fig:simsExplained}. The effect of the NSC's velocity dispersion ($M_{\rm c} = 2.5 \times 10^8 \, M_\odot$) can be seen by comparing the gas flow between the {\it compact} ($r_{\rm c} = $~5.3~pc) and {\it diffuse} ($r_{\rm c} = $~10.6~pc) clusters. The {\it dark blue} and {\it black} horizontal lines show our modified Bondi-Hoyle-Lyttleton (BHL) prescription for the mass accretion rate in the presence of a NSC. All sink sizes are $r_{\rm sink} = $~0.5~pc. Here, the sound speed and Mach number are $c_\infty = 132 \, {\rm km/s}$ and $\mu_\infty = 1.33$, respectively. The snapshots from left to right are at times $t_{\rm cs} = $~112, 108, and 81. } \label{fig:modBondiadia} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig5_scaled.jpg} \caption{Similar to Figure~\ref{fig:modBondiadia} but for a near isothermal ($\gamma = 1.1$) medium. 
The snapshots from left to right are at times $t_{\rm cs} = $~130 ({\it 5A}), 134 ({\it 5B}), and 131 ({\it 5C}).} \label{fig:modBondiiso} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{tcoolcurves_withlabels2.pdf} \caption{The range of sound speeds, $c_\infty$, and densities, $\rho_\infty$, from the gas surrounding a naked SMBH in the galaxy merger models of \cite{debuhr2011}. The {\it gray} contour denotes the combinations of [$c_\infty$,$\rho_\infty$] for their entire suite of models while the {\it blue} contour shows the range for their {\it fidNof} model. The locations in the [$c_\infty$,$\rho_\infty$] plane for three different simulation snapshots ({\it \RNum{1}}, {\it \RNum{2}}, {\it \RNum{3}}) taken from the {\it fidNof} model of \cite{debuhr2011} are highlighted. Simulation points {\it \RNum{1}} and {\it \RNum{2}} are at early times before the galaxies' first pass ($t_{\RNum{1}} = 0.27$~Gyrs, $t_{\RNum{2}} = 0.5$~Gyrs) and point {\it \RNum{3}} occurs right before the merger ($t_{\RNum{3}} = 1.49$~Gyrs). The reader is referred to Figure 1 of \citet{debuhr2011} for further details. The plotted lines correspond to the condition $t_{\rm cool}=t_{\rm cs, acc}$ for three different values of $M_{\rm bh}$.} \label{fig:snapshots} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig_simNoCoolContours.jpg} \caption{A $M_{\rm bh} = 10^6 \, M_\odot$ black hole with and without a surrounding NSC ($M_{\rm c} = 10^7 \, M_\odot$ and $\sigma_V=115$ km/s) propagates with $\mu = 1.5$ through a background medium with $c_{\rm s} = 200$ km/s and $\rho_\infty=10^{-23}\,{\rm g\,cm}^{-3}$ (similar to the gas properties found in simulation snapshot {\it \RNum{3}} of Figure~\ref{fig:snapshots}).
The {\it left} panel shows the mass accretion rate of models with and without a NSC for adiabatic ($\gamma = 5/3$) and near isothermal gas ($\gamma=1.1$) flows, which are calculated using different simulation setups as discussed in Figure~\ref{fig:simsExplained}. Under these conditions, $\sigma_V<c_\infty$ and the gas flow around the SMBH is not altered by the presence of the NSC. As a result, the change in mass accretion rate between the models with and without the NSC is negligible, even when cooling is efficient. Cluster radii are shown as white circles. The simulations ``Naked SMBH, Isothermal'', ``NSC~+~SMBH, Isothermal'', ``Naked SMBH, Adiabatic'' and ``NSC~+~SMBH, Adiabatic'' are denoted in Table \ref{table:sims} by {\it 7A}, {\it 7B}, {\it 7C}, and {\it 7D}, respectively. Points {\it 7B} and {\it 7D} are also shown in Figure \ref{fig:msigma}.} \label{fig:nocool} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig_simCoolContours.jpg} \caption{Similar to Figure~\ref{fig:nocool} but in this case the black hole propagates through a background medium with $c_\infty = 100$ km/s and $\rho_\infty=10^{-21}{\rm \,g\,cm}^{-3}$ (similar to those found in simulation snapshot {\it \RNum{1}} of Figure~\ref{fig:snapshots}). Because $\sigma_V>c_\infty$, the presence of a NSC can result in a large mass feeding rate increase when compared to the case without a NSC, in particular when the gas cools efficiently ($\gamma=1.1$). Once again, the accretion rate onto the SMBH is calculated using different simulation setups as illustrated in Figure~\ref{fig:simsExplained}.
The models in this figure labeled ``Naked SMBH, Isothermal'', ``NSC~+~SMBH, Isothermal'', ``Naked SMBH, Adiabatic'' and ``NSC~+~SMBH, Adiabatic'' are denoted in Figure \ref{fig:msigma} and Table \ref{table:sims} by points {\it 8A}, {\it 8B}, {\it 8C}, and {\it 8D}, respectively.} \label{fig:cool} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.99\textwidth]{fig_analytic_t3.pdf} \caption{The growth history of the two central SMBHs in the merging galaxy model {\it fidNof} of \cite{debuhr2011}. The prescription laid out by equations (\ref{eq:mdot}) and (\ref{eq:rho}) is used to estimate the steady state mass accretion rate onto the SMBH, which in turn uses the gas properties derived from the SPH simulations to calculate the augmented growth of the SMBH masses during the galaxy merger simulation. A NSC characterized by $M_{\rm c} = 10^7 \, M_\odot$ and $\sigma_V = 180 \, {\rm km/s}$ \citep[consistent with observations of SMBH+NSC systems;][]{graham2009,graham2011} is assumed to reside in each galactic center at the start of the simulation. The calculation assumes that the flow is able to cool efficiently ($\gamma= 1.1$) and that the mass of the NSC is fixed. As the galaxy merger evolves and the masses of the SMBHs increase above $M_{\rm c}$, the gravitational influence of the NSC stops being relevant.} \label{fig:subgrid} \end{figure*}
\section{Introduction} \label{sec:intro} Galaxy properties (e.g., morphology, star formation rate (SFR), and color) change with environment including the field, groups, and clusters. As the surrounding density increases, the fraction of early-type galaxies increases (i.e., the density-morphology relation; \citealt[][]{dressler1980,goto2003,houghton2015}). In particular, in the cluster environment, the galaxy population is dominated by red sequence galaxies with lower SFRs \citep[][]{lewis2002,gomez2003,Kauffmann2004,hogg2004}. Furthermore, this trend is also observed at the outskirts of galaxy clusters \citep{cybulski2014,jaffe2016,morokuma-matsui2021}. In fact, in a hierarchical Universe, a significant fraction of cluster populations is accreted through the groups \citep[][]{mcgee2009,delucia2012}. This result implies that many galaxies are likely to be already affected by the environmental effects in groups (e.g., tidal interaction and ram pressure stripping (RPS); for a review see \citealt{cortese2021}), making them red, passive, and gas deficient before they fall into a cluster. This process (known as ``pre-processing'') can account for a significant fraction of the quenched galaxies at the outskirts of the cluster or beyond the virial radius of the cluster \citep[][]{haines2015,denes2016,jaffe2016,robert2017,jung2018,vulcani2018,dzudzar2019,seth2020,kleiner2021,castignani2021,morokuma-matsui2021,cortese2021}. Therefore, studying ``pre-processing'' (e.g., how significantly galaxies can be processed in the group environment before they enter a cluster) is important to understand galaxy evolution in the groups as well as the clusters. In addition, at least half of all galaxies belong to galaxy groups in the local universe \citep[][]{eke2004,robotham2011}, indicating that the group environment is the common place where local galaxies evolve.
Although there are various external processes that can play a role in changing physical properties of galaxies, tidal interactions and merging events are more frequent, especially in the group environment, due to the low velocity dispersion of the group, and they are thought to be the main mechanisms affecting group galaxies \citep[][]{zabludoff1998,bitsakis2014, alatalo2015,iodice2020,kleiner2021,s.wang2021}. Some groups are detected in X-ray, indicating the presence of a hot intragroup medium (IGrM) \citep[e.g.,][]{osmond2004}. However, the strength of ram pressure in groups is expected to be weaker than in clusters because the velocity of group galaxies relative to the IGrM is not as high and the density of the IGrM is relatively low. Nevertheless, evidence of ram pressure stripping has been reported in galaxy groups \citep[][]{kantharia2005, rasmussen2006,westmeier2011,wolter2015,brown2017,roberts2021}. The cold ISM, mainly composed of atomic hydrogen ({\hbox{{\rm H}\kern 0.2em{\sc i}}}) and molecular hydrogen (H$_{2}$), is one of the important baryonic components of a galaxy, as the fuel for star formation. Generally, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas disk of star-forming field galaxies extends beyond the optical disk \citep[e.g.,][]{walter2008}. The large extent of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk and the low density of {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas make it more susceptible to environmental processes, such as tidal interactions \citep[][]{yun1994,saponara2018} and ram pressure stripping \citep[][]{chung2009,wang2020,wang2021}. In the group environment, galaxies are often reported with {\hbox{{\rm H}\kern 0.2em{\sc i}}} deficiency, asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} distribution, and a shrinking of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk \citep[][]{denes2016,brown2017,for2019, for2021,leewaddell2019,kleiner2021,s.wang2021,roychowdhury2022}.
In particular, H$_{2}$ gas, which is mainly traced by carbon monoxide (CO) emission, is the direct ingredient for star formation. It is therefore essential to understand how group environmental processes affect the molecular gas of galaxies, because any environmentally driven change in molecular gas properties is likely linked to star formation activity and hence to galaxy evolution. However, since molecular gas is distributed more compactly within the stellar disk and has a higher density, it remains unclear whether the molecular gas is also strongly affected by group environmental processes. For cluster galaxies, recent studies have found evidence of environmental effects on the molecular gas \citep[][]{boselli2014b,lee2017,lee2018,jachym2019,zabel2019,cramer2020,moretti2020a,moretti2020b}. In contrast, there are few studies of the molecular gas in group galaxies \citep[e.g.,][]{alatalo2015}, and we therefore still lack a good understanding of the group environmental effects on the molecular gas. To obtain a better understanding of the group environmental effects on the molecular gas and star formation activity, we carried out a $^{12}$CO($J$=1--0) imaging survey of 31 galaxies in two loose groups (the IC~1459 group and the NGC~4636 group) using the Atacama Compact Array (ACA) of the Atacama Large Millimeter/submillimeter Array (ALMA) in Cycle 7. This is the first CO imaging survey of loose groups. A loose group is an intermediate structure between compact groups and clusters: loose groups host tens of galaxies over an area of $\sim$1 Mpc$^{2}$, with a median velocity dispersion of 165 km~s$^{-1}$ and a median virial mass of $\sim$1.9 $\times$ 10$^{13}h^{-1}$ {$M_{\odot}$} \citep{tucker2000,pisano2004}. They are particularly interesting objects for studies of structure formation in the hierarchical Universe.
However, previous CO imaging studies of group galaxies have mainly focused on compact groups, which show violent interactions among group members \citep[e.g.,][]{alatalo2015}. The well-resolved CO imaging data of our survey thus provide a unique opportunity to study the detailed molecular gas properties (e.g., CO distribution and velocity field) of group galaxies and are expected to reveal direct evidence of the group environmental effects on the molecular gas. Recent high-resolution {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging observations of the IC~1459 group and the NGC~4636 group (hereafter I1459G and N4636G, respectively) show explicit signs of external perturbations, such as stripped {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas and asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} morphologies \citep{serra2015,saponara2018,oosterloo2018,koribalski2020}. In addition, thanks to the proximity of these two groups (I1459G: 27.2 Mpc; N4636G: 13.6 Mpc) and the high spatial resolution of our ACA observations, individual group galaxies can be resolved on kpc scales (I1459G: $\sim$1.5 kpc; N4636G: $\sim$0.7 kpc). Both the I1459G and the N4636G are therefore good laboratories for investigating the group environmental effects on molecular gas in detail. In particular, since the N4636G is falling into the Virgo cluster \citep{tully1984,nolthenius1993}, it is an excellent target for studying pre-processing in the group environment. In this work, we present the results of our ACA CO survey of the two groups. In particular, we compare the CO images with existing {\hbox{{\rm H}\kern 0.2em{\sc i}}} images for our group sample to investigate how these two phases of the cold ISM, which have different characteristics (e.g., disk size and density), react to the group environmental processes.
The combination of CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}} images is potentially powerful for distinguishing between different stripping or quenching mechanisms. In Section~\ref{sec:sample}, we describe the two galaxy groups and the sample selection, along with the general properties of the target galaxies. Details of the ACA observations, data reduction, and ancillary data are described in Section~\ref{sec:obs}. We present the CO properties and CO images of our sample in Section~\ref{sec:res}. In Section~\ref{sec:dis}, we investigate differences in global properties between the group sample and the extended CO Legacy Database for GASS\footnote{GASS: the GALEX Arecibo SDSS Survey} (xCOLD GASS; \citealt{saintonge2017}) sample, and we discuss the impacts of the group environment on member galaxies. In Section~\ref{sec:sum}, we summarize our results and conclusions. Throughout this paper, we adopt distances of 27.2 Mpc and 13.6 Mpc to the two groups, the I1459G and the N4636G, respectively \citep{brough2006}. \section{Group sample} \label{sec:sample} The Group Evolution Multiwavelength Study (GEMS) is a panchromatic survey of 60 nearby galaxy groups (recession velocities of 1000~km~s$^{-1}$ $<$ $v_{\rm group}$ $<$ 3000~km~s$^{-1}$), initiated to study environmental effects on galaxy evolution in the group environment \citep[][]{osmond2004, forbes2006}. The survey covers a broad range of wavelengths, from radio \citep{kilborn2005,kilborn2009} through infrared \citep{brough2006} and optical \citep{miles2004} to X-ray \citep{osmond2004}, to investigate how strongly and how frequently group members are influenced by various processes. Among the 60 GEMS groups, both the I1459G and the N4636G were observed in {\hbox{{\rm H}\kern 0.2em{\sc i}}} emission at the high spatial resolution of the Australian Square Kilometre Array Pathfinder (ASKAP; \citealt{serra2015a,koribalski2020}).
The availability of high-resolution {\hbox{{\rm H}\kern 0.2em{\sc i}}} images makes it possible to verify immediately the environmental effects on the cold gas disk (e.g., asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions; \citealt{serra2015a,for2019, leewaddell2019}). For these reasons, we selected these two groups to probe the group environmental effects on the cold gas and star formation activity of group galaxies. The I1459G and the N4636G have many properties in common, such as the detection of X-ray emission and the presence of a relatively large elliptical galaxy at the group center (i.e., a bright group-centered galaxy; BGG) \citep[][]{osmond2004,brough2006}. However, their locations in the large-scale structure are rather distinct. The I1459G is a relatively isolated group, while the N4636G is falling into the Virgo cluster \citep{tully1984,nolthenius1993}. Therefore, the N4636G is an ideal laboratory for studying pre-processing of group galaxies. We describe the properties of the two groups as well as the sample selection for our ACA CO observations in the following sections. \begin{figure*}[!htbp] \begin{center} \includegraphics[width=1.0\textwidth]{fig_1.pdf} \caption{Distribution of sample galaxies in the I1459G (left) and the N4636G (right). Our targets are indicated by open circles (red: CO detection, black: CO non-detection). Open diamonds mark those targets that show asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} and/or CO distributions. The large green cross in each panel marks the BGG. Small crosses (gray: late-type, yellow: early-type) represent galaxies taken from the HyperLeda database that are within 1.5 $\times$ $R_{200}$ (dashed circles, I1459G: 0.55 Mpc, N4636G: 0.70 Mpc) and $\pm$3 $\times$ $\sigma_{\rm group}$ (I1459G: 223 km~s$^{-1}$, N4636G: 284 km~s$^{-1}$) and are brighter than an absolute $B$-band magnitude of $-$15.5 (the faintest galaxy among the small crosses in the I1459G).
The bar in the bottom-left corner represents a physical scale of 0.25 Mpc. \label{fig:fig1}} \end{center} \end{figure*} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=1.0\textwidth]{new_fig_2.pdf} \caption{Distribution of sample galaxies on the projected phase-space diagram (left: I1459G, right: N4636G). Symbols are the same as in Figure~\ref{fig:fig1}. The x-axis is the projected distance from the group center ($r/R_{200}$, normalized by $R_{200}$). The y-axis is the line-of-sight velocity with respect to the systemic velocity of the group ($\Delta V/\sigma_{\rm group}$, normalized by the group velocity dispersion). The dashed (solid) line indicates the average (maximum) escape velocity as a function of the projected distance from the group center ($r/R_{200}$). \label{fig:fig2}} \end{center} \end{figure*} \subsection{IC~1459 group} \label{sec:ic1459g} The I1459G is a loose galaxy group that includes $\sim$10 members identified by a friends-of-friends (FOF) algorithm \citep{brough2006}. Additional faint galaxies (e.g., dwarf and ultra-diffuse galaxies) are likely to be associated with the I1459G, based on previous {\hbox{{\rm H}\kern 0.2em{\sc i}}} observations and recent deep optical imaging \citep[e.g.,][]{kilborn2009,serra2015,forbes2020}. The BGG of the I1459G, with a stellar mass of $\sim$3~$\times$~10$^{11}$~{$M_{\odot}$} that we estimated using the Wide-field Infrared Survey Explorer (WISE) data (see Section~\ref{subsec:add_data}), shows many peculiar features, such as a counter-rotating stellar core \citep{franx1988,prichard2019}, an irregular dust distribution \citep{forbes1994}, and tidal tails and shells in the outskirts of the galaxy \citep{forbes1995,iodice2020}. All these features indicate that the BGG of the I1459G has experienced tidal interactions and/or merging events. X-ray observations have also revealed a hot IGrM at the center of the I1459G \citep{osmond2004}.
Many previous {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging observations of the I1459G \citep[e.g.,][]{kilborn2009,serra2015,saponara2018,oosterloo2018} also suggest that galaxies in the I1459G are undergoing violent interactions (e.g., tidal interactions between galaxies). With the Australia Telescope Compact Array (ATCA), \cite{saponara2018} reported a substantial amount of {\hbox{{\rm H}\kern 0.2em{\sc i}}} in clouds ($\sim$7.2$\times10^{8}$ {$M_{\odot}$}) and in extended distributions near IC~1459 (the BGG), IC~5264, and NGC~7418A, covering nearly half of the group region. In addition, \cite{oosterloo2018} found a long {\hbox{{\rm H}\kern 0.2em{\sc i}}} tail across the I1459G with the Karoo Array Telescope (KAT-7). Recent {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data obtained with the ASKAP reveal that many of the group galaxies show asymmetric and disturbed {\hbox{{\rm H}\kern 0.2em{\sc i}}} morphologies \citep{serra2015}. Under the assumption that the I1459G is in dynamical equilibrium \citep[e.g.,][]{balogh2007} and adopting the group velocity dispersion ($\sigma_{\rm group}$: 223 km~s$^{-1}$) taken from \cite{osmond2004}, we calculated the group virial radius ($R_{200}$: 0.55 Mpc) and the total group mass $M_{200}$ (1.9 $\times$ 10$^{13}$ {$M_{\odot}$}), where $R_{200}$ is the radius at which the system's mean mass density is 200 times the critical density of the Universe, and $M_{200}$ is the total mass enclosed within $R_{200}$. \subsection{NGC~4636 group} \label{sec:ngc4636g} The N4636G is located towards the southeast region of the Virgo cluster and is falling into the cluster. The group hosts an early-type BGG (NGC~4636) with a stellar mass of $\sim$3$\times$10$^{10}${$M_{\odot}$} \citep{osullivan2018}. The N4636G shows luminous and extended X-ray emission, indicating the existence of a hot IGrM \citep[e.g.,][]{osmond2004,baldi2009,ahoranta2016}.
This X-ray emission extends out to $\sim$0.3~Mpc from the group center \citep{matsushita1998}, suggesting that some group galaxies near the group center are possibly affected by ram pressure stripping. Interestingly, recent X-ray studies of NGC~4636 show that the morphology of the X-ray emission near the BGG is complex, with arm-like structures and bubbles related to previous active galactic nucleus (AGN) activity \citep[e.g.,][]{baldi2009}. The direction of the radio jets also coincides with the X-ray cavities \citep{giacintucci2011}. The N4636G hosts at least 17 members identified by \cite{brough2006}. However, a recent ongoing study suggests that $\sim$100 systems, including low-mass dwarfs and massive spiral galaxies, could be associated with the N4636G (Lin et al. 2022, in preparation). Previous {\hbox{{\rm H}\kern 0.2em{\sc i}}} observations of the N4636G found many {\hbox{{\rm H}\kern 0.2em{\sc i}}} deficient galaxies, which implies that group members in the N4636G are affected by environmental effects \citep{kilborn2009}. Under the assumption that the N4636G is in dynamical equilibrium and adopting the group velocity dispersion ($\sigma_{\rm group}$: 284 km~s$^{-1}$) taken from \cite{osmond2004}, we calculated the total group mass $M_{200}$ (3.9 $\times$ 10$^{13}$ {$M_{\odot}$}) and the group virial radius ($R_{200}$: 0.70 Mpc). \subsection{Sample selection} \label{sec:selection} The target galaxies of our ACA CO survey were selected from the GEMS-{\hbox{{\rm H}\kern 0.2em{\sc i}}} survey, which observed the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas of galaxies with the Parkes radio telescope within a square region of 30 deg$^{2}$ surrounding the center of each of 16 selected GEMS groups \citep{kilborn2009}.
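As a consistency check, the $R_{200}$ and $M_{200}$ values quoted in Sections~\ref{sec:ic1459g} and \ref{sec:ngc4636g} can be reproduced from the velocity dispersions alone. A minimal sketch, assuming the isothermal-sphere scalings $R_{200}=\sqrt{3}\,\sigma/(10\,H_{0})$ and $M_{200}=3\,\sigma^{2}R_{200}/G$ with $H_{0}=70$~km~s$^{-1}$~Mpc$^{-1}$ (these are our assumptions; the exact conventions of \citealt{osmond2004} may differ slightly):

```python
# Sketch: virial radius and mass from the group velocity dispersion,
# assuming R200 = sqrt(3)*sigma/(10*H0) and M200 = 3*sigma^2*R200/G.
# H0 = 70 km/s/Mpc is an assumption, not a value stated in the text.
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22   # meters per Mpc
MSUN = 1.989e30  # kg per solar mass
H0 = 70.0        # km/s/Mpc (assumed)

def virial_quantities(sigma_kms):
    """Return (R200 in Mpc, M200 in Msun) for a dynamically relaxed group."""
    r200_mpc = math.sqrt(3.0) * sigma_kms / (10.0 * H0)
    m200_kg = 3.0 * (sigma_kms * 1e3) ** 2 * (r200_mpc * MPC) / G
    return r200_mpc, m200_kg / MSUN

# I1459G (sigma = 223 km/s) -> R200 ~ 0.55 Mpc, M200 ~ 1.9e13 Msun
# N4636G (sigma = 284 km/s) -> R200 ~ 0.70 Mpc, M200 ~ 3.9e13 Msun
for name, sigma in [("I1459G", 223.0), ("N4636G", 284.0)]:
    r200, m200 = virial_quantities(sigma)
    print(f"{name}: R200 = {r200:.2f} Mpc, M200 = {m200:.2e} Msun")
```

Under these scalings, both groups' quoted $R_{200}$ and $M_{200}$ values are recovered to two significant figures.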
From the GEMS-{\hbox{{\rm H}\kern 0.2em{\sc i}}} survey, we selected a sample of 31 systems with {\hbox{{\rm H}\kern 0.2em{\sc i}}} detections that are located within 1.5 $\times$ $R_{200}$ and $\pm$3 $\times$ $\sigma_{\rm group}$ (group velocity dispersion) in these two groups (11 galaxies in the I1459G and 20 galaxies in the N4636G). Since the primary selection criterion is the presence of {\hbox{{\rm H}\kern 0.2em{\sc i}}}, which implies that the sample galaxies may have a cold ISM potentially including molecular gas, our targets show a variety of morphological types from dwarfs to spirals (Table~\ref{tab:table1}), with a wide range of stellar masses (1.6 $\times$ 10$^{6}$ $-$ 4.9 $\times$ 10$^{10}$~{$M_{\odot}$}; Table~\ref{tab:table1}) and {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas masses (3.1 $\times$ 10$^{7}$ $-$ 7.5 $\times$ 10$^{9}$ {$M_{\odot}$}; Table~\ref{tab:table1}, \citealt{kilborn2009}). Figure~\ref{fig:fig1} shows the locations of our sample in the I1459G and the N4636G (red circles: CO detections, black circles: CO non-detections). The BGGs are shown as large green crosses. Small crosses (gray: late-type, yellow: early-type) indicate galaxies taken from the HyperLeda database that are within 1.5 $\times$ $R_{200}$ and $\pm$3 $\times$ $\sigma_{\rm group}$ and brighter than an absolute $B$-band magnitude of $-$15.5 (the faintest galaxy among the small crosses in the I1459G). Our target galaxies in the I1459G are located around the BGG as well as in the outskirts of the group. On the other hand, many of our sample galaxies in the N4636G are located around its $R_{200}$.
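The membership cut described above can be sketched as a simple filter. A minimal illustration (the galaxies and values below are hypothetical, for demonstration only):

```python
# Sketch of the membership cut: keep HI detections within 1.5*R200 (projected)
# and within +/- 3*sigma_group in velocity. Input galaxies here are invented.
def select_members(galaxies, r200_mpc, sigma_kms, v_group_kms):
    """galaxies: list of dicts with projected radius 'r_mpc' and velocity 'v_kms'."""
    return [
        g for g in galaxies
        if g["r_mpc"] <= 1.5 * r200_mpc
        and abs(g["v_kms"] - v_group_kms) <= 3.0 * sigma_kms
    ]

# Hypothetical galaxies around the I1459G (R200 = 0.55 Mpc, sigma = 223 km/s,
# assumed systemic velocity of 1800 km/s for illustration)
sample = [
    {"name": "A", "r_mpc": 0.30, "v_kms": 1900},  # inside both cuts
    {"name": "B", "r_mpc": 1.20, "v_kms": 1700},  # too far out (> 1.5*R200)
    {"name": "C", "r_mpc": 0.50, "v_kms": 2600},  # velocity outlier (> 3*sigma)
]
members = select_members(sample, r200_mpc=0.55, sigma_kms=223.0, v_group_kms=1800.0)
print([g["name"] for g in members])  # -> ['A']
```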
Figure~\ref{fig:fig2} also displays the distributions of the sample galaxies on the projected phase-space diagram (PSD), which allows us to probe simultaneously the projected distance from the group center ($r/R_{200}$, normalized by $R_{200}$) and the line-of-sight velocity with respect to the systemic velocity of the group ($\Delta V/\sigma_{\rm group}$, normalized by the group velocity dispersion). Some of our group targets, with high $\Delta V/\sigma_{\rm group}$ above the curve of the maximum escape velocity, may merely be passing through the group, but they can still be affected by the group environmental effects during this single fly-by. The general properties (e.g., coordinates, stellar masses, {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas masses, and star formation rates) of our sample are summarized in Table~\ref{tab:table1}. \begin{deluxetable*}{lccccccccc} \tabletypesize{\footnotesize} \tablecaption{General properties of sample galaxies in IC~1459 group and NGC~4636 group \label{tab:table1}} \tablehead{ \multicolumn{1}{l}{Name} & \multicolumn{1}{c}{R.A.} & \multicolumn{1}{c}{Decl.} & \multicolumn{1}{c}{Type} & \multicolumn{1}{c}{Inc} & \multicolumn{1}{c}{PA} & \multicolumn{1}{c}{v (opt)} & \multicolumn{1}{c}{$M_{\rm\tiny {\hbox{{\rm H}\kern 0.2em{\sc i}}}}$} & \multicolumn{1}{c}{log $M_\star$} & \multicolumn{1}{c}{SFR} \\ & (J2000) & (J2000) & & ($\degr$) & ($\degr$) & (km~s$^{-1}$) & ($\times$10$^8$$M_{\odot}$) & ($M_{\odot}$) & ($M_{\odot}$~yr$^{-1}$) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) } \startdata \multicolumn{10}{c}{IC~1459 group} \\ \hline ESO 406-G040 & 23h00m21s.99 & -37d12m04s.0 & IB & 44.3 & 1.5 & 1248 & 6.3$\pm$1.0 & 8.87 & 0.044$\pm$0.020 \\ ESO 406-G042 & 23h02m14s.24 & -37d05m01s.2 & SABm & 80.1 & 66.4 & 1375 & 19.2$\pm$1.6 & 9.42 & 0.169$\pm$0.025 \\ DUKST 406-083 & 23h02m00s.50 & -36d29m04s.5 & \nodata & 66.1 & 97.0 & 1624 & 2.8$\pm$1.0 & 7.50 & 0.021$\pm$0.003 \\ IC 5264 & 22h56m53s.02 & -36d33m15s.3 & Sab & 90.0 & 80.1 & 1940 &
9.7$\pm$1.2 & 10.15 & 0.144$\pm$0.004 \\ IC 5269B & 22h56m36s.72 & -36d14m59s.5 & SBc & 82.6 & 96.7 & 1638 & 30.0$\pm$2.0 & 9.89 & 0.178$\pm$0.026 \\ IC 5269C & 23h00m48s.17 & -35d22m13s.5 & Scd & 74.9 & 62.8 & 1796 & 14.7$\pm$1.6 & 9.46 & 0.121$\pm$0.022 \\ IC 5270 & 22h57m54s.88 & -35d51m29s.0 & SBc & 58.2 & 105.1 & 1929 & 66.6$\pm$3.7 & 9.94 & 0.740$\pm$0.102 \\ IC 5273 & 22h59m26s.72 & -37d42m10s.5 & SBc & 50.8 & 48.5 & 1286 & 38.1$\pm$2.2 & 10.34 & 1.758$\pm$0.529 \\ NGC 7418 & 22h56m36s.15 & -37d01m48s.2 & Sc & 40.0 & 137.7 & 1417 & 48.8$\pm$2.6 & 10.57 & 2.049$\pm$0.759 \\ NGC 7418A & 22h56m41s.23 & -36d46m21s.9 & Scd & 61.2 & 81.4 & 2102 & 40.1$\pm$2.3 & 9.30 & 0.456$\pm$0.033 \\ NGC 7421 & 22h56m54s.32 & -37d20m50s.7 & Sbc & 36.2 & 80.6 & 1801 & 7.5$\pm$1.2 & 10.32 & 0.274$\pm$0.041 \\ \hline \multicolumn{10}{c}{NGC~4636 group} \\ \hline EVCC 0854 & 12h32m58s.04 & +04d34m42s.5 & Sm & 60.8 & \nodata & 1233 & 0.9$\pm$0.2 & 6.20 & 0.002$\pm$0.001 \\ EVCC 0962 & 12h37m29s.07 & +04d45m04s.3 & I & 62.4 & 12.4 & 1655 & 0.3$\pm$0.2 & 7.65 & 0.005$\pm$0.001 \\ IC 3474 & 12h32m36s.51 & +02d39m40s.9 & Scd & 87.1 & 42.5 & 1727 & 5.3$\pm$0.3 & 8.75 & \nodata\\ NGC 4496A & 12h31m39s.26 & +03d56m22s.8 & Scd & 38.5 & 67.5 & 1730 & 19.0$\pm$0.9 & 9.58 & 0.345$\pm$0.018 \\ NGC 4517 & 12h32m45s.48 & +00d06m52s.5 & Sc & 90.0 & 84.4 & 1131 & 38.1$\pm$1.4 & 10.27 & 0.633$\pm$0.079 \\ NGC 4517A & 12h32m27s.97 & +00d23m25s.7 & Sd & 75.2 & 30.1 & 1509 & 14.9$\pm$0.7 & 9.33 & 0.104$\pm$0.015 \\ NGC 4527 & 12h34m08s.47 & +02d39m13s.9 & SABb & 81.2 & 69.5 & 1736 & 41.9$\pm$1.4 & 10.30 & 1.895$\pm$0.943 \\ NGC 4536 & 12h34m27s.09 & +02d11m16s.8 & SABb & 73.1 & 120.7 & 1808 & 31.9$\pm$1.1 & 10.13 & 2.171$\pm$0.959 \\ NGC 4592 & 12h39m18s.74 & -00d31m54s.6 & Sd & 90.0 & 94.2 & 1069 & 75.0$\pm$3.3 & 9.38 & 0.305$\pm$0.016 \\ NGC 4632 & 12h42m32s.34 & -00d04m54s.2 & Sc & 70.5 & 58.9 & 1723 & 20.2$\pm$0.8 & 9.42 & 0.258$\pm$0.028 \\ NGC 4666 & 12h45m08s.64 & -00d27m42s.5 & SABc & 69.6 & 
40.6 & 1533 & 36.9$\pm$1.4 & 10.18 & 2.193$\pm$1.025 \\ NGC 4688 & 12h47m46s.61 & +04d20m12s.8 & Sc & 23.7 & \nodata & 986 & 12.6$\pm$1.0 & 8.69 & 0.182$\pm$0.028 \\ NGC 4772 & 12h53m29s.17 & +02d10m06s.3 & SABa & 67.3 & 155.1 & 1040 & 1.5$\pm$0.2 & 10.28 & 0.029$\pm$0.006 \\ UGC 07715 & 12h33m55s.68 & +03d32m45s.6 & I & 20.9 & \nodata & 1138 & 0.7$\pm$0.1 & 8.65 & 0.012$\pm$0.002 \\ UGC 07780 & 12h36m42s.09 & +03d06m30s.7 & Sm & 90.0 & 48.2 & 1441 & 2.9$\pm$0.2 & 7.97 & 0.008$\pm$0.007 \\ UGC 07824 & 12h39m50s.42 & +01d40m20s.7 & SBm & 73.0 & 80.9 & 1227 & 2.1$\pm$0.2 & 8.70 & 0.009$\pm$0.003 \\ UGC 07841 & 12h41m11s.59 & +01d24m37s.1 & Sb & 62.3 & 144.4 & 1737 & 3.8$\pm$0.3 & 8.92 & \nodata \\ UGC 07911 & 12h44m28s.78 & +00d28m05s.4 & Sm & 66.8 & 15.0 & 1183 & 5.5$\pm$0.3 & 9.58 & 0.077$\pm$0.012 \\ UGC 07982 & 12h49m50s.25 & +02d51m06s.9 & Sb & 87.1 & 0.8 & 1155 & 3.3$\pm$0.2 & 9.39 & 0.009$\pm$0.001 \\ UGC 08041 & 12h55m12s.64 & +00d06m60s.0 & SBcd & 54.0 & 168.3 & 1359 & 9.2$\pm$0.7 & 9.53 & 0.094$\pm$0.014 \\ \enddata \tablecomments{(1) Galaxy name; (2) right ascension (J2000); (3) declination (J2000); (4) morphological type; (5) inclination angle; (6) position angle; columns (2)$-$(6) are taken from HyperLeda$^{*}$ \citep{makarov2014}; (7) optical velocity \citep{kilborn2009}; (8) {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas mass from the Parkes observations \citep{kilborn2009}; (9) stellar mass estimated from the WISE data; (10) star formation rate (SFR) derived using the GALEX FUV/NUV data and the WISE 22 $\mu$m data.} $^{*}$~~\url{http://leda.univ-lyon1.fr/} \end{deluxetable*} \begin{deluxetable*}{lcccc} \tabletypesize{\footnotesize} \tablecaption{CO observational parameters of sample galaxies in the two groups \label{tab:table2}} \tablehead{ \multicolumn{1}{l}{Name} & \multicolumn{1}{c}{$b_{maj}$$\times$$b_{min}$, $b_{\rm PA}$} & \multicolumn{1}{c}{$\sigma_{rms}$} & \multicolumn{1}{c}{N$_{\rm
mosaic}$} & \multicolumn{1}{c}{{\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data} \\ & ($\arcsec\times\arcsec$, $\degr$) & (mJy~beam$^{-1}$) & & \\ (1) & (2) & (3) & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} } \startdata \multicolumn{5}{c}{IC~1459 group} \\ \hline ESO 406-G040 & 14.2$\times$8.9, 102.8 & 14.6 & 1 & \nodata \\ ESO 406-G042 & 14.1$\times$8.9, 102.3 & 13.9 & 1 & \nodata \\ DUKST 406-083 & 14.2$\times$8.9, 103.5 & 13.9 & 1 & \nodata \\ IC 5264 & 14.2$\times$8.9, 103.2 & 13.1 & 1 & ASKAP BETA \\ IC 5269B & 14.2$\times$8.9, 101.8 & 10.7 & 3 & ASKAP BETA \\ IC 5269C & 14.2$\times$8.8, 103.0 & 13.9 & 1 & \nodata \\ IC 5270 & 14.2$\times$8.9, 102.3 & 10.1 & 3 & ASKAP BETA \\ IC 5273 & 14.1$\times$9.0, 100.2 & 11.5 & 2 & ASKAP BETA \\ NGC 7418 & 14.1$\times$9.0, 101.3 & 9.2 & 5 & ASKAP BETA \\ NGC 7418A & 14.2$\times$8.9, 103.5 & 13.1 & 1 & \nodata \\ NGC 7421 & 14.2$\times$8.9, 103.2 & 13.8 & 1 & ASKAP BETA \\ \hline \multicolumn{5}{c}{NGC~4636 group} \\ \hline EVCC 0854 & 14.1$\times$10.1, 97.0 & 13.9 & 1 & \nodata \\ EVCC 0962 & 14.2$\times$9.9, 101.4 & 13.3 & 1 & \nodata \\ IC 3474 & 14.2$\times$9.8, 102.0 & 13.7 & 1 & \nodata \\ NGC 4496A & 14.1$\times$9.9, 101.8 & 8.8 & 6 & WALLABY pilot \\ NGC 4517 & 14.1$\times$9.7, 94.9 & 11.4 & 8 & WALLABY pilot \\ NGC 4517A & 14.1$\times$9.7, 97.0 & 14.3 & 1 & \nodata \\ NGC 4527 & 14.1$\times$9.9, 96.8 & 11.7 & 5 & WALLABY pilot \\ NGC 4536 & 14.1$\times$9.8, 99.8 & 8.7 & 13 & VIVA \\ NGC 4592 & 14.0$\times$9.5, 100.5 & 11.4 & 3 & WALLABY pilot \\ NGC 4632 & 14.0$\times$9.7, 95.6 & 10.9 & 3 & WALLABY pilot \\ NGC 4666 & 13.9$\times$9.6, 99.4 & 11.1 & 3 & WALLABY pilot \\ NGC 4688 & 14.1$\times$10.0, 91.1 & 10.1 & 5 & \nodata \\ NGC 4772 & 14.1$\times$9.8, 92.7 & 12.1 & 3 & VIVA \\ UGC 07715 & 14.1$\times$10.0, 96.1 & 13.9 & 1 & \nodata \\ UGC 07780 & 14.1$\times$10.0, 96.0 & 14.7 & 1 & \nodata \\ UGC 07824 & 14.1$\times$9.9, 94.8 & 14.5 & 1 & \nodata \\ UGC 07841 & 14.0$\times$9.8, 95.7 & 14.2 & 1 & \nodata
\\ UGC 07911 & 14.1$\times$9.8, 93.4 & 14.5 & 1 & \nodata \\ UGC 07982 & 14.0$\times$9.9, 92.6 & 11.7 & 3 & WALLABY pilot \\ UGC 08041 & 13.9$\times$9.6, 98.3 & 11.2 & 3 & \nodata \\ \enddata \tablecomments{ (1) Galaxy name; (2) beam size (major and minor axes), beam position angle; (3) rms noise level per channel of the CO data; (4) the number of mosaic fields; (5) the {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data that we use in Figure~\ref{fig:hico}. See the detailed descriptions of {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data in Section~\ref{subsec:add_data}. } \end{deluxetable*} \begin{deluxetable*}{lccccrrrr} \tabletypesize{\footnotesize} \tablecaption{CO data parameters of sample galaxies in two groups \label{tab:table3}} \tablehead{ \noalign{\vskip 2mm} \multicolumn{1}{c}{Name} & \multicolumn{1}{c}{$W_{\rm 50, HI}$} & \multicolumn{1}{c}{$W_{\rm 20, CO}$} & \multicolumn{1}{c}{$W_{\rm 50, CO}$} & \multicolumn{1}{c}{CO peak flux} & \multicolumn{1}{c}{CO flux} & \multicolumn{1}{c}{log $L_{\rm CO}^{'}$} & \multicolumn{1}{c}{log $M_{\rm H2, MW}$} & \multicolumn{1}{c}{log $M_{\rm H2, var}$} \\ & (km~s$^{-1}$) & (km~s$^{-1}$) & (km~s$^{-1}$) & (Jy~km~s$^{-1}$~beam$^{-1}$) & (Jy~km~s$^{-1}$) & \multicolumn{1}{c}{(K~km~s$^{-1}$~pc$^{2}$)} & \multicolumn{1}{c}{($M_{\odot}$)} & \multicolumn{1}{c}{($M_{\odot}$)} \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)} & \multicolumn{1}{c}{(9)} } \startdata \multicolumn{9}{c}{IC~1459 group} \\ \hline ESO 406-G040 & 42 & \nodata & \nodata & \nodata & $<$1.3 & $<$6.36 & $<$7.00 & $<$7.19 \\ ESO 406-G042 & 121 & \nodata & \nodata & \nodata & $<$2.1 & $<$6.57 & $<$7.21 & $<$7.24 \\ DUKST 406-083 & 83 & \nodata & \nodata & \nodata & $<$1.7 & $<$6.49 & $<$7.12 & $<$8.42 \\ IC 5264 & 94 & 356 & 331 & 17.6 & 54.6$\pm$3.6 & 7.96$\pm$0.03 & 8.60$\pm$0.03 & 8.54$\pm$0.03 \\ IC 5269B & 228 & 156 & 149 & 1.8 & 4.3$\pm$0.7 & 6.89$\pm$0.74 & 
7.53$\pm$0.07 & 7.49$\pm$0.07 \\ IC 5269C & 172 & \nodata & \nodata & \nodata & $<$2.45 & $<$6.64 & $<$7.28 & $<$7.31 \\ IC 5270 & 101 & 230 & 198 & 26.1 & 100.4$\pm$5.5 & 8.26$\pm$0.02 & 8.90$\pm$0.02 & 8.85$\pm$0.02 \\ IC 5273 & 196 & 152 & 110 & 13.5 & 60.3$\pm$3.7 & 8.04$\pm$0.03 & 8.67$\pm$0.03 & 8.60$\pm$0.03 \\ NGC 7418 & 207 & 206 & 175 & 34.6 & 302.2$\pm$15.4 & 8.74$\pm$0.02 & 9.37$\pm$0.02 & 9.29$\pm$0.02 \\ NGC 7418A & 180 & \nodata & \nodata & \nodata &$<$2.4 & $<$6.63 & $<$7.27 & $<$7.32 \\ NGC 7421 & 129 & 151 & 70 & 4.3 & 37.6$\pm$2.7 & 7.83$\pm$0.03 & 8.47$\pm$0.03 & 8.40$\pm$0.03 \\ \hline \multicolumn{9}{c}{NGC~4636 group} \\ \hline EVCC 0854 & 62 & \nodata & \nodata & \nodata & $<$1.5 & $<$5.82 & $<$6.45 & $<$7.75 \\ EVCC 0962 & 27 & \nodata & \nodata & \nodata & $<$0.9 & $<$5.62 & $<$6.26 & $<$7.55 \\ IC 3474 & 147 & \nodata & \nodata & \nodata & $<$2.2 & $<$6.00 & $<$6.64 & $<$6.88 \\ NGC 4496A & 154 & 127 & 102 & 4.6 & 36.3$\pm$2.4 & 7.21$\pm$0.03 & 7.85$\pm$0.03 & 7.86$\pm$0.03 \\ NGC 4517 & 301 & 298 & 275 & 28.6 & 449.4$\pm$22.9 & 8.31$\pm$0.02 & 8.95$\pm$0.02 & 8.88$\pm$0.02 \\ NGC 4517A & 154 & \nodata & \nodata & \nodata & $<$2.4 & $<$6.03 & $<$6.66 & $<$6.72 \\ NGC 4527 & 356 & 411 & 370 & 372.6 & 1638.0$\pm$82.1 & 8.87$\pm$0.02 & 9.51$\pm$0.02 & 9.44$\pm$0.02 \\ NGC 4536 & 324 & 354 & 325 & 214.5 & 608.9$\pm$30.6 & 8.44$\pm$0.02 & 9.08$\pm$0.02 & 9.02$\pm$0.02 \\ NGC 4592 & 198 & 109 & 40 & 1.9 & 13.3$\pm$1.7 & 6.78$\pm$0.06 & 7.42$\pm$0.06 & 7.46$\pm$0.06 \\ NGC 4632 & 223 & 236 & 205 & 13.4 & 115.8$\pm$6.4 & 7.72$\pm$0.02 & 8.36$\pm$0.02 & 8.39$\pm$0.02 \\ NGC 4666 & 324 & 409 & 387 & 154.3 & 1471.7$\pm$73.7 & 8.82$\pm$0.02 & 9.46$\pm$0.02 & 9.40$\pm$0.02 \\ NGC 4688 & 44 & \nodata & \nodata & \nodata & $<$0.9 & $<$5.61 & $<$6.25 & $<$6.52 \\ NGC 4772 & 38 & 156 & 52 & 4.5 & 27.7$\pm$2.2 & 7.10$\pm$0.04 & 7.74$\pm$0.04 & 7.66$\pm$0.04 \\ UGC 07715 & 29 & \nodata & \nodata & \nodata & $<$1.0 & $<$5.66 & $<$6.29 & $<$6.59 \\ UGC 07780 
& 117 & \nodata & \nodata & \nodata & $<$2.1 & $<$5.98 & $<$6.22 & $<$7.49 \\ UGC 07824 & 101 & \nodata & \nodata & \nodata & $<$2.0 & $<$5.95 & $<$6.58 & $<$6.85 \\ UGC 07841 & 104 & \nodata & \nodata & \nodata & $<$1.9 & $<$5.94 & $<$6.58 & $<$6.75 \\ UGC 07911 & 107 & \nodata & \nodata & \nodata & $<$2.0 & $<$5.96 & $<$6.60 & $<$6.60 \\ UGC 07982 & 214 & 202 & 176 & 6.4 & 41.3$\pm$2.8 & 7.27$\pm$0.03 & 7.91$\pm$0.03 & 7.95$\pm$0.03 \\ UGC 08041 & 175 & 129 & 102 & 2.3 & 8.0$\pm$1.1 & 6.56$\pm$0.06 & 7.20$\pm$0.06 & 7.21$\pm$0.06 \\ \enddata \tablecomments{(1) Galaxy name; (2) the {\hbox{{\rm H}\kern 0.2em{\sc i}}} linewidth measured at 50\% of the peak flux \citep{kilborn2009}; (3) \& (4) the CO linewidths measured at 20\% and 50\% of the peak flux using SoFiA; (5) the peak value of the CO intensity map; (6) the total CO flux; (7) the total CO luminosity; (8) \& (9) the total molecular gas masses derived from the CO luminosity using the constant Milky Way CO-to-H$_{2}$ conversion factor ($\alpha_{\rm CO}$~=~4.35~{$M_{\odot}$}~(K~km~s$^{-1}$~pc$^{2}$)$^{-1}$) and a metallicity-dependent CO-to-H$_{2}$ conversion factor, respectively.} \end{deluxetable*} \section{Observations and data} \label{sec:obs} \subsection{Observations} Our CO imaging observations (project ID: 2019.1.01804.S; PI: B. Lee) of group galaxies were carried out with the ALMA/ACA in Cycle 7 (2019 October to 2019 December). While the ACA consists of 12 antennas of 7 m diameter (the 7m array) and 4 antennas of 12 m diameter (the total power array), we only used the 7m array, with 9$-$11 antennas in our observations. The mean precipitable water vapor during the observations was 3.6 mm. Four spectral windows (SPW 1, 2, 3, 4) were set up to observe the $^{12}$CO($J$=1--0) ($\nu_{\rm rest}$ = 115.271~GHz) line and the CN($N$=1--0; $J$=3/2--1/2) ($\nu_{\rm rest}$ = 113.491~GHz) line in the upper sideband (SPW 1 and 2) and the 3 mm continuum emission in the lower sideband (SPW 3 and 4).
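The CO luminosities and molecular gas masses listed in Table~\ref{tab:table3} follow from the measured fluxes via the standard relation (e.g., Solomon \& Vanden Bout 2005), $L^{\prime}_{\rm CO} = 3.25\times10^{7}\,S_{\rm CO}\Delta v\;\nu_{\rm obs}^{-2}\,D_{L}^{2}\,(1+z)^{-3}$, together with $M_{\rm H2} = \alpha_{\rm CO}\,L^{\prime}_{\rm CO}$. A minimal sketch (the form of the 3$\sigma$ upper limits in the comments is our own assumption, not a statement from the text):

```python
# Sketch: CO flux -> CO line luminosity -> molecular gas mass, following
# L'_CO = 3.25e7 * S_CO*dv * nu_obs^-2 * D_L^2 * (1+z)^-3
# (S_CO*dv in Jy km/s, nu_obs in GHz, D_L in Mpc, L'_CO in K km/s pc^2),
# with M_H2 = alpha_CO * L'_CO and the Milky Way alpha_CO = 4.35.
import math

NU_REST_CO10 = 115.271  # GHz, CO(1-0) rest frequency
C_KMS = 2.9979e5        # speed of light, km/s
ALPHA_CO = 4.35         # Msun (K km/s pc^2)^-1

def co_luminosity(flux_jykms, dist_mpc, v_kms):
    """CO(1-0) line luminosity in K km/s pc^2 for a galaxy at velocity v_kms."""
    z = v_kms / C_KMS
    nu_obs = NU_REST_CO10 / (1.0 + z)
    return 3.25e7 * flux_jykms * dist_mpc**2 / (nu_obs**2 * (1.0 + z) ** 3)

def h2_mass(flux_jykms, dist_mpc, v_kms, alpha=ALPHA_CO):
    return alpha * co_luminosity(flux_jykms, dist_mpc, v_kms)

# NGC 7418 (I1459G, D = 27.2 Mpc, v ~ 1417 km/s, S_CO*dv = 302.2 Jy km/s)
# reproduces log L'_CO ~ 8.74 and log M_H2 ~ 9.37, matching Table 3.
lco = co_luminosity(302.2, 27.2, 1417.0)
print(f"log L'CO = {math.log10(lco):.2f}, log M_H2 = {math.log10(ALPHA_CO * lco):.2f}")

# For non-detections, the tabulated 3-sigma flux limits are consistent with
# 3 * sigma_rms * sqrt(W50_HI * dv_chan) (our assumption), e.g. ESO 406-G040:
# 3 * 0.0146 Jy * sqrt(42 km/s * 20 km/s) ~ 1.3 Jy km/s.
```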
Each SPW has a total bandwidth of 1875 MHz ($\sim$5000~km~s$^{-1}$) and a channel width of $\sim$1.13 MHz ($\sim$3~km~s$^{-1}$). The size of the primary beam is $\sim$87$\arcsec$. The typical extent of CO gas in late-type galaxies is 50\%$-$70\% of the optical disk \citep[e.g.,][]{schruba2011}. In order to cover the entire CO disk, 15 of the 31 galaxies were observed in mosaic mode. The number of mosaic fields varies from 2 to 13 pointings, depending on the apparent size of each target galaxy. The remaining 16 targets were observed with a single pointing. The largest recoverable angular scale (i.e., the maximum recoverable scale; MRS) of our ACA observations is 59$\arcsec$, corresponding to $\sim$7.8~kpc at 27.2~Mpc (I1459G) and $\sim$3.9~kpc at 13.6~Mpc (N4636G). \subsection{Data reduction and imaging} \label{subsec:data_red} The ACA data were calibrated using the standard ALMA pipeline in the Common Astronomy Software Applications package (CASA, version 5.6.1-8; \citealt{mcmullin2007}). After calibration, the continuum was subtracted by fitting the line-free channels with the {\tt uvcontsub} task. Cleaned CO data cubes of the individual targets were created using the {\tt tclean} task in the CASA package. The clean regions were carefully selected by visual inspection for every channel of the cubes. In particular, the mosaic images of the 15 galaxies observed with multiple pointings were produced by setting the sub-parameter {\tt gridder}={\tt 'mosaic'} in the {\tt tclean} task. To increase the signal-to-noise ratio (S/N) of the data and to recover faint CO emission, natural weighting was applied and the channels of the cleaned cubes were binned to a velocity resolution of 20 km~s$^{-1}$. The final data cubes have a synthesized beam of $\sim$14$\arcsec$$\times$$\sim$9$\arcsec$ with a pixel size of 2$\arcsec$. The noise level is measured using line-free channels.
The typical root mean square (rms) noise level is $\sim$12.4 mJy beam$^{-1}$ per 20~km~s$^{-1}$ channel, measured prior to the primary beam correction. The synthesized beam sizes and rms noise levels of the individual galaxies are listed in Table~\ref{tab:table2}. The final data cubes are corrected for primary beam attenuation with the {\tt impbcor} task. For each galaxy, we obtained a detection mask from the cleaned data cube using the SoFiA (Source Finding Application) software \citep{serra2015a,westmeier2021} with a 3$\sigma$ threshold for reliable detection. To create the detection mask, the data cube was convolved with several smoothing kernels: we applied Gaussian filters of 0, 3, and 7 pixels in the sky plane, and merged all detected voxels, with a spatial radius of 3 pixels and a spectral radius of 2 channels, to construct individual 3D objects. In addition, we manually removed false detections from the mask by visual inspection. By applying the final detection masks to the sample galaxies, we generated integrated CO intensity maps (0th moment), velocity field maps (1st moment), and velocity dispersion maps (2nd moment). The CN line data were reduced into final products (i.e., data cubes and moment maps) in the same manner as the CO data. In addition to the line data, we created 3 mm continuum images by averaging the line-free channels of the four SPWs. The results of the CN and continuum data are presented in Appendices~\ref{app:cn} and \ref{app:conti}. \subsection{Ancillary data} \label{subsec:add_data} To estimate the stellar masses ($M_\star$), SFRs, and {\hbox{{\rm H}\kern 0.2em{\sc i}}} masses of the galaxies in our sample, and to investigate the {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions of the sample galaxies, we made use of complementary data including far-ultraviolet (FUV), near-ultraviolet (NUV), infrared, and {\hbox{{\rm H}\kern 0.2em{\sc i}}} images.
For calculating the stellar masses and SFRs, we assume a Kroupa initial mass function (IMF; \citealt{kroupa2001}). First, to obtain an estimate of the stellar masses, we utilize the photometric pipeline of \cite{wang2017} with the 3.4~$\mu$m data (W1 band) and 4.6~$\mu$m data (W2 band) of the Wide-field Infrared Survey Explorer (WISE; \citealt{wright2010}). We derive the stellar mass of the sample galaxies using the W1 luminosity with a W1-W2 color-dependent mass-to-light ratio \citep{jarrett2017} (for details, see Section 3 of \citealt{wang2017}). The typical error of the stellar mass is 0.15 dex. With a combination of the Galaxy Evolution Explorer (GALEX) FUV data \citep{martin2005} and the 22 $\mu$m data (W4 band) from WISE \citep{wright2010}, we also derive the dust-attenuation-corrected SFR using the equations of \cite{calzetti2013}. The FUV and W4 fluxes trace the dust-free and dust-attenuated parts of the total SFR in a galaxy, respectively. When the FUV flux has too low an S/N ($<$1), we use the GALEX NUV flux to estimate the dust-free part of the SFR. For some of our sample galaxies, there are recent high-resolution {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data. For the I1459G sample, 10 out of 11 galaxies were detected in the ASKAP BETA (the Boolardy Engineering Test Array) survey \citep{serra2015}. In the case of the N4636G sample, 18 out of 20 galaxies were detected in the WALLABY pilot survey \citep{koribalski2020}. For NGC~4536 and NGC~4772, we use the {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data of VIVA (the VLA Imaging survey of Virgo galaxies in Atomic gas, \citealt{chung2009}). The spatial resolutions of the ASKAP BETA, WALLABY pilot survey, and VIVA data are 60$\arcsec$ (7.92~kpc), 30$\arcsec$ (1.98~kpc), and 18$\arcsec$ (1.19~kpc), respectively. These {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data are useful for comparison with the CO distributions of the group galaxies.
We adopted the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas masses from previous Parkes observations \citep{kilborn2009} for homogeneity and flux completeness. The stellar masses, SFRs, and {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas masses of our sample are summarized in Table~\ref{tab:table1}. \section{Results} \label{sec:res} \subsection{CO detections} We detected CO emission in 16 out of 31 galaxies, in both the I1459G (6/11) and the N4636G (10/20). The stellar mass range of galaxies with CO detection is $\sim$10$^{9}$$-$10$^{10}${$M_{\odot}$} (Figure~\ref{fig:fig3}); all of these are spiral galaxies. Figure~\ref{fig:fig4} shows the CO distribution overlaid on DSS2 blue optical images. On the other hand, galaxies without CO detection tend to have lower stellar masses ($<$9.4$\times$10$^{9}${$M_{\odot}$}; see Figure~\ref{fig:fig3}), and their morphological types are dwarfs and spirals (Figure~\ref{fig:nonc0} in Appendix~\ref{app:nonco}). Figures~\ref{fig:fig1} and \ref{fig:fig2} show the distributions of all sample galaxies on the projected sky plane and on the projected PSD. Red and black open circles indicate CO detections and non-detections, respectively. Interestingly, most of the CO detected galaxies in the N4636G tend to be at large group-centric radii, while those in the I1459G tend to be at smaller group-centric radii. We discuss this in detail in Section~\ref{subsec:pre}. \begin{figure*}[!htbp] \begin{center} \includegraphics[width=1.00\textwidth]{new_fig_3.pdf} \caption{The histograms show the stellar mass distributions of CO detected (green) and non-CO detected (black solid line) group samples. Left panel: all sample galaxies, middle panel: the I1459G, right panel: the N4636G. The stellar mass range of galaxies with CO detections is $\sim$$10^{9}-10^{10}${$M_{\odot}$}.
\label{fig:fig3}} \end{center} \end{figure*} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.77\textwidth]{fig_4_contour.pdf} \caption{The CO distribution (contours) of 16 group galaxies is overlaid on the Digitized Sky Survey 2 (DSS2, \url{https://archive.stsci.edu/dss/index.html}) blue images. The contour levels are shown at the bottom of each panel. The outermost contour corresponds to the 3$\sigma$ value. The surface density of the molecular gas is calculated by adopting $\alpha_{\rm CO}$~=~4.35 {$M_{\odot}$}~pc$^{-2}$~(K~km~s$^{-1}$)$^{-1}$ \citep{strong1996,bolatto2013}. Galaxies in the I1459G are shown in the first two rows (their names in blue), and galaxies in the N4636G are shown in the second three rows (their names in black). The green cross indicates the stellar disk center, which is obtained from {\it Spitzer} 3.6 $\mu$m data \citep{salo2015}. The bar in the bottom-left corner represents the physical scale of 2~kpc. The synthesized beam is shown in the bottom-right corner. \label{fig:fig4}} \end{center} \end{figure*} \subsection{CO properties} \label{subsec:coprop} The integrated CO flux ($S_{\rm CO}$) is measured in Jy~km~s$^{-1}$ using \begin{equation} S_{\rm CO} = (\Sigma~F_{\rm CO})~\Delta v, \label{eqn:coflux} \end{equation} \noindent where $F_{\rm CO}$ is the total flux of CO in each channel and $\Delta v$ is the velocity resolution (20 km~s$^{-1}$) of the CO data cube. For the integrated CO flux, all pixels within the detection mask are summed over channels. The uncertainty of the integrated CO flux is calculated by \begin{equation} \sigma (S_{\rm CO}) = \sqrt{(\Sigma~{\sigma_{\rm rms}}^{2}N_{\rm beam})~{\Delta v}^{2} + ({S_{\rm CO}}^{2}/400)}, \label{eqn:coerr} \end{equation} \noindent where $\sigma_{\rm rms}$ is the rms noise level of the data cube, and $N_{\rm beam}$ is the number of beams in the emission region of each channel.
The first term of equation~\ref{eqn:coerr} is the measurement error of the integrated CO flux, and the second term is a typical absolute flux accuracy (5\% in Band 3, ALMA cycle 7 proposer's guide\footnote{\url{https://almascience.nao.ac.jp/documents-and-tools/cycle7/alma-proposers-guide}}). The integrated CO flux and its uncertainty are summarized in Table~\ref{tab:table3}. For galaxies with non-detections, the $3\sigma$ upper limits of the CO flux are estimated using the rms noise level. To calculate the upper limits, we assume that the size of the CO emitting area is the same as the beam size and that the CO line width corresponds to the FWHM of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} line. We adopt the FWHM values of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} line (Table~\ref{tab:table3}) from the GEMS-{\hbox{{\rm H}\kern 0.2em{\sc i}}} observations of \cite{kilborn2009}. In our sample, CO emission of 5 galaxies (NGC~4517, NGC~4527, NGC~4536, NGC~4632, NGC~4666) in the N4636G was also detected in previous single-dish observations \citep{boselli2014a,sorai2019}. Except for NGC~4517, the other four galaxies have resolved CO maps from the CO Multi-line Imaging of Nearby Galaxies (COMING) project \citep{sorai2019}. To estimate how much of the total CO flux we recover in our ACA observations, we compared the CO flux of our ACA observations with that of previous single-dish observations from the literature. The flux ratio between our ACA data and the single-dish data ranges from 0.5 to 0.9. The averaged flux ratio of three galaxies (NGC~4536, NGC~4632, NGC~4666) is $\sim$0.9, but two galaxies (NGC~4517, NGC~4527) show relatively low flux ratios (NGC~4517: 0.5 and NGC~4527: 0.7). We may miss some CO flux from large-scale structures that are larger than the maximum recoverable scale (59$\arcsec$) of our ACA observations, especially for galaxies with a large angular size of the optical disk.
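The flux measurement and its uncertainty (equations~\ref{eqn:coflux} and \ref{eqn:coerr}) translate directly into code. The sketch below is a minimal illustration: the helper names and toy inputs are ours, and the upper-limit helper follows one common convention (one beam of emission spread over the {\hbox{{\rm H}\kern 0.2em{\sc i}}} FWHM) rather than an exact recipe from the text.

```python
import numpy as np

def co_flux_and_error(flux_per_channel_jy, nbeam_per_channel,
                      sigma_rms_jy, dv=20.0):
    """Integrated CO flux (Jy km/s) and its uncertainty: per-channel
    measurement error plus the 5% absolute flux-calibration term
    (the S_CO^2/400 term)."""
    f = np.asarray(flux_per_channel_jy, dtype=float)
    n = np.asarray(nbeam_per_channel, dtype=float)
    s_co = f.sum() * dv
    meas_var = (sigma_rms_jy**2 * n).sum() * dv**2
    cal_var = s_co**2 / 400.0
    return s_co, np.sqrt(meas_var + cal_var)

def co_flux_upper_limit(sigma_rms_jy, fwhm_kms, dv=20.0):
    """3-sigma upper limit assuming one beam of emission over the HI
    FWHM (an assumed convention, not spelled out in the text)."""
    n_chan = fwhm_kms / dv
    return 3.0 * sigma_rms_jy * np.sqrt(n_chan) * dv

# Toy example: three channels of emission, rms = 12.4 mJy/beam,
# four beams covering the emission region in each channel
s, err = co_flux_and_error([0.10, 0.20, 0.10], [4, 4, 4], 0.0124)
```

For a bright source the calibration term dominates; for a marginal detection the per-channel measurement term dominates.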
The CO luminosity ($L^{'}_{\rm CO}$) is calculated using the following equation \citep{solomon2005}: \begin{equation} L^{'}_{\rm CO}=3.25\times10^7 S_{\rm CO}~\nu_{\rm obs}^{-2}~D_{\rm L}^2~(1+\textit{z})^{-3} \label{eqn:colum} \end{equation} \noindent in K~km~s$^{-1}$~pc$^{2}$, where $S_{\rm CO}$ is the integrated CO flux in Jy~km~s$^{-1}$, $D_{\rm L}$ is the luminosity distance in Mpc, $\nu_{\rm obs}$ is the observing frequency in GHz, and \textit{z} is the redshift. Using $L^{'}_{\rm CO}$, the H$_{2}$ masses for our sample galaxies are determined as \begin{equation} M_{\rm H_{2}} = \alpha_{\rm CO} L^{'}_{\rm CO}, \label{eqn:h2mass} \end{equation} \noindent where $\alpha_{\rm CO}$ is the CO-to-H$_{2}$ conversion factor in {$M_{\odot}$}~pc$^{-2}$~(K~km~s$^{-1}$)$^{-1}$. We apply two different CO-to-H$_{2}$ conversion factors. First, we use the metallicity-dependent CO-to-H$_{2}$ conversion factor calculated using equation (25) of \cite{accurso2017}. Following the approach of \cite{zabel2019}, we do not consider the distance from the main sequence ($\Delta$MS); this parameter does not significantly influence the metallicity-dependent CO-to-H$_{2}$ conversion factor \citep{zabel2019}. We also follow the approach of \cite{zabel2019} to estimate metallicities, because metallicity measurements are not available for our individual galaxies: we derive them from the mass-metallicity relation of \cite{sanchez2017}. As a result, the metallicity-dependent CO-to-H$_{2}$ conversion factor here mainly depends on the stellar masses of the galaxies, so the low-mass galaxies have higher conversion factors. Secondly, we also adopt $\alpha_{\rm CO}$~=~4.35 {$M_{\odot}$}~pc$^{-2}$~(K~km~s$^{-1}$)$^{-1}$ ($X_{\rm CO} = 2 \times 10^{20}$~cm$^{-2}$ (K~km~s$^{-1})^{-1}$; \citealt{strong1996,bolatto2013}). This CO-to-H$_{2}$ conversion factor includes a factor of 1.36 to correct for the helium abundance.
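Equations~\ref{eqn:colum} and \ref{eqn:h2mass} can be sketched as follows; the input values in the example are illustrative choices of ours (not measurements from the sample), and the sketch assumes the constant $\alpha_{\rm CO}$ case.

```python
def co_luminosity(s_co_jykms, nu_obs_ghz, d_l_mpc, z=0.0):
    """L'_CO in K km/s pc^2 (Solomon & Vanden Bout 2005):
    L'_CO = 3.25e7 * S_CO * nu_obs^-2 * D_L^2 * (1+z)^-3."""
    return 3.25e7 * s_co_jykms * nu_obs_ghz**-2 * d_l_mpc**2 * (1.0 + z)**-3

def h2_mass(l_co_kkms_pc2, alpha_co=4.35):
    """M_H2 = alpha_CO * L'_CO in solar masses; the default alpha_CO
    of 4.35 already includes the factor 1.36 helium correction."""
    return alpha_co * l_co_kkms_pc2

# Illustrative example: 10 Jy km/s at the CO(1-0) rest frequency
# (z ~ 0) and a luminosity distance of 16.5 Mpc
l_prime = co_luminosity(10.0, 115.271, 16.5)
m_h2 = h2_mass(l_prime)
```

With a metallicity-dependent conversion factor, only the `alpha_co` argument changes per galaxy; the luminosity step is unchanged.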
The CO linewidth is measured at 20\% ($W_{\rm 20, CO}$) and 50\% ($W_{\rm 50, CO}$) of the peak flux using SoFiA. The CO linewidths, CO luminosities, and H$_{2}$ masses for our sample of group galaxies are summarized in Table~\ref{tab:table3}. The details of the CO data for individual group galaxies with CO detections are described in the Appendix. \subsection{Peculiar CO structures} \label{subsec:pecco} We find that some of our group galaxies show peculiar CO structures in their CO intensity maps, CO velocity maps, and position-velocity diagrams (PVDs), as seen in Figure~\ref{fig:fig4} and the CO atlas of our group members in Appendix~\ref{app:codata}. These peculiarities include (1) a highly asymmetric CO distribution, (2) a hole in the central region, (3) a large offset ($>$1 kpc) between the CO peak and the optical center, (4) a bar- or ring-like structure, and (5) a high molecular gas surface density ($>$100~{$M_{\odot}$}~pc$^{-2}$) in the central region. \paragraph{\textbf{$\bullet$ Asymmetric CO distribution}} IC~5264, IC~5273, and NGC~7418 in the I1459G show highly asymmetric CO distributions. For IC~5264, the CO disk on the west side is shrunken compared to the extent of the CO disk on the east side (49$\arcsec$ ($\sim$6.5 kpc) in the west versus 57$\arcsec$ ($\sim$7.5 kpc) in the east, Figure~\ref{fig:fig4}). The CO gas also appears to be extended toward the southwest. About 60\% of the total CO flux is found in the southern part, below the major axis of IC~5264. In IC~5273, a CO clump is located at the southwest edge of the stellar disk (Figure~\ref{fig:fig4}), at a distance of 8.8~kpc from the optical center. The CO flux of this clump is $\sim$3 Jy~km~s$^{-1}$, which corresponds to $\sim$5\% of the total CO flux.
While the CO distribution of NGC~7418 follows the spiral arms of the stellar disk well, a long CO structure in the southeast part of the CO disk extends up to $\sim$11.4 kpc from the center of the stellar disk. On the opposite side, however, the extent of the CO disk is about 7.8 kpc. Two galaxies (NGC~4632 and UGC~08041) in the N4636G also show asymmetric CO distributions. The CO disk of NGC~4632 is more extended toward the northeast side (80$\arcsec$ ($\sim$5.3 kpc) in the northeast versus 61$\arcsec$ ($\sim$4.0 kpc) in the southwest). UGC~08041 has a very extended CO structure in the southern part (53$\arcsec$ ($\sim$3.5 kpc) in the south versus 13$\arcsec$ ($\sim$0.9 kpc) in the north). \paragraph{\textbf{$\bullet$ No CO emission in the central region and large offset between the CO peak and the optical center}} As seen in Figure~\ref{fig:fig4}, our ACA CO data do not show any CO emission in the central region of two galaxies (NGC~7421 in the I1459G and NGC~4772 in the N4636G), in contrast to the other sample galaxies. Instead, NGC~7421 has relatively strong CO emission in the southwest region of the optical image, with $\sim$50\% of the total CO flux. In the case of NGC~4772, there are two discrete CO regions; the southeast region contains $\sim$65\% of the total CO flux. In addition to the lack of CO emission in the central region, these two galaxies show a large offset between the CO peak position and the optical center (see Figure~\ref{fig:app_8} and \ref{fig:app_17}). The offset distances are 4.8~kpc (NGC~7421) and 2.1~kpc (NGC~4772), respectively. Interestingly, the CO peak of NGC~4496A is also found at the northeast edge of the CO disk (see Figure~\ref{fig:app_10}), and the distance between the peak position and the center is $\sim$3 kpc.
\paragraph{\textbf{$\bullet$ Bar- or ring-like structure and high surface density in the central region}} The CO PVDs of NGC~4527 and NGC~4536 in the N4636G clearly show a steep velocity gradient in the inner region (see Figure~\ref{fig:app_12} and \ref{fig:app_13}). This indicates the presence of a bar- or ring-like structure in the central region \citep[e.g.,][]{alatalo2013}. The rapidly rising velocity structures of these two galaxies are also seen in the central regions of their velocity field maps (Figure~\ref{fig:app_12} and \ref{fig:app_13}). In addition to the presence of a distinct CO structure in the central region, these two galaxies and NGC~4666 show relatively high molecular gas surface densities ($>$100~{$M_{\odot}$}~pc$^{-2}$) compared to other member galaxies (Figure~\ref{fig:fig4}). \section{Discussion} \label{sec:dis} Using the {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO imaging data, we discuss how the group environments affect the distributions of the cold ISM components of galaxies in Section~\ref{subsec:hico}. In Section~\ref{subsec:scaling}, we present scaling relations of the global properties of our group sample, and compare them with the global properties of the xCOLD GASS sample. Finally, we discuss pre-processing in the group environment in Section~\ref{subsec:pre}. Note that although some of our targets are thought to be affected by various group environmental processes, we only briefly discuss external mechanisms for individual group members in this study. Distinguishing between the various environmental processes for individual members will be studied in more detail in a follow-up work (e.g., Lin et al. 2022, in preparation).
\subsection{{\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO distributions of group galaxies} \label{subsec:hico} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=0.77\textwidth]{fig_5_hi_co_mor.pdf} \caption{{\hbox{{\rm H}\kern 0.2em{\sc i}}} (contours) and CO distributions (color scale) of 15 group galaxies (the first two rows: 6 galaxies in the I1459G, and the second three rows: 9 galaxies in the N4636G) are overlaid on their optical images (DSS2 blue). In each CO map, cyan indicates a low intensity and magenta indicates a relatively high intensity. Contour levels of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas are shown at the bottom of each panel. These {\hbox{{\rm H}\kern 0.2em{\sc i}}} images are from the ASKAP BETA data \citep{serra2015} for the I1459G, the WALLABY pilot data \citep{koribalski2020} for the N4636G, and the VIVA data \citep{chung2009} for NGC~4536 and NGC~4772 in the N4636G. The green cross indicates the stellar disk center. The bar in the bottom-left corner represents the physical scale of 2~kpc. The synthesized beams of the CO (blue ellipse) and {\hbox{{\rm H}\kern 0.2em{\sc i}}} (black open ellipse) observations are shown at the bottom-right corner. \label{fig:hico}} \end{center} \end{figure*} 15 out of 16 group galaxies with CO detection in our ACA observations also have {\hbox{{\rm H}\kern 0.2em{\sc i}}} imaging data \citep{chung2009,serra2015,koribalski2020}, and Figure~\ref{fig:hico} shows their {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO distributions overlaid on their optical images (DSS2 blue).
Based on their {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO distributions, we have classified the 15 group galaxies into three categories: (i) peculiar distributions in both {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO (IC~5264, IC~5273, NGC~7418, NGC~7421, NGC~4632, NGC~4772), (ii) peculiar distribution in {\hbox{{\rm H}\kern 0.2em{\sc i}}} or CO (IC~5270, NGC~4496A, NGC~4666, UGC~07982), (iii) relatively symmetric distributions in both {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO (IC~5269B, NGC~4527, NGC~4536, NGC~4592). Interesting features of the group members are briefly summarized in Table~\ref{tab:table_new}. \subsubsection{Peculiar distributions in both CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}}} \label{asymcohi} \paragraph{\textbf{$\bullet$ Asymmetric structure}} Four galaxies (IC~5264, IC~5273, NGC~7418, NGC~7421) show asymmetric morphologies in both CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}}. In particular, their {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions are extended toward one side, and on the opposite side of the long {\hbox{{\rm H}\kern 0.2em{\sc i}}} extensions, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disks are truncated near or within the stellar disk. For IC~5264, as shown in Figure~\ref{fig:hico}, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas is more extended toward the southeast. On the opposite side, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk is truncated within the stellar disk. This asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} distribution is analogous to the CO distribution of IC~5264 described in Section~\ref{subsec:pecco}. Interestingly, locally strong CO emission is found at the edge of the truncated {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk. In NGC~7421, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk is pushed away to the northeast. On the opposite side, there is relatively strong CO emission with the CO peak (Figure~\ref{fig:hico}).
In the case of NGC~7418, while a long {\hbox{{\rm H}\kern 0.2em{\sc i}}} tail is seen in the northwest, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk is compressed in the southeast. In particular, extended CO emission is found at the site of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} compression in NGC~7418 (Figure~\ref{fig:hico}). In these three galaxies, {\hbox{{\rm H}\kern 0.2em{\sc i}}} compression at the truncated side of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk may lead to an increase of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas density, which likely triggers an efficient transition from {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas to H$_{2}$ gas \citep[][]{chung2014,lizee2021}. Consequently, this conversion process is likely to result in the local enhancement of CO (IC~5264 and NGC~7421) and the extended CO structure (NGC~7418). In addition, external perturbations can directly compress the CO gas. These phenomena are already known from previous studies of cluster galaxies \citep[][]{chung2014,lee2017,lizee2021}, but similar results are now found for our group galaxies. The morphological correlations and connections between the two different ISM components suggest that group environmental processes can significantly affect both the diffuse {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas and the dense CO gas. IC~5273 shows highly asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO morphologies, but there seems to be no similarity or connection between the {\hbox{{\rm H}\kern 0.2em{\sc i}}} distribution and the CO distribution, in contrast with the above three galaxies (IC~5264, NGC~7418, and NGC~7421). This suggests that this galaxy may be affected by the environment in a different way. In addition to the asymmetric distributions of both {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO, IC~5273, NGC~7418, and NGC~7421 also clearly show a lopsided feature in their optical images, which suggests that these three galaxies are likely to be affected by tidal interactions.
\paragraph{\textbf{$\bullet$ Ring-like structure}} NGC~4632 and NGC~4772 have large {\hbox{{\rm H}\kern 0.2em{\sc i}}} outer ring structures, as shown in Figure~\ref{fig:hico}. In particular, the position angle (PA) of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} outer ring of NGC~4632 largely deviates from that of the inner {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk and the stellar disk. The PA of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} outer ring of NGC~4772 is also slightly different from that of the inner {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk \citep[][]{chung2009}. The origin of {\hbox{{\rm H}\kern 0.2em{\sc i}}} ring structures could be gas accretion, minor mergers, or tidal interactions with other galaxies \citep[e.g., see][and references therein]{buta1996,barnes1999,bettoni2010}. Moreover, both NGC~4632 and NGC~4772 show irregular CO distributions in the inner {\hbox{{\rm H}\kern 0.2em{\sc i}}} disks, as described in Section~\ref{subsec:pecco}. The external perturbations seem to cause not only the large {\hbox{{\rm H}\kern 0.2em{\sc i}}} outer ring structures but also the irregular CO distributions in these two galaxies. \subsubsection{Peculiar distribution in {\hbox{{\rm H}\kern 0.2em{\sc i}}} or CO} \label{hi} The {\hbox{{\rm H}\kern 0.2em{\sc i}}} morphologies of three galaxies (IC~5270, NGC~4666, UGC~07982) appear to be asymmetric, but their CO distributions are smooth and undisturbed. Near the north of IC~5270, there are two {\hbox{{\rm H}\kern 0.2em{\sc i}}} clouds, as seen in Figure~\ref{fig:hico}. Although the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk does not look asymmetric, these two {\hbox{{\rm H}\kern 0.2em{\sc i}}} clouds were possibly stripped from IC~5270, which is suggestive of tidal interactions \citep{serra2015}.
In NGC~4666, the outer part of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk extends farther out towards its neighbor galaxy (NGC~4668), which indicates that the {\hbox{{\rm H}\kern 0.2em{\sc i}}} morphology is strongly disturbed by the interaction with the neighboring galaxy (Figure~\ref{fig:hico}). The direction of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} tail of NGC~4668 is also toward NGC~4666. Previous {\hbox{{\rm H}\kern 0.2em{\sc i}}} observations with the VLA also show clear signs of interaction between NGC~4666 and NGC~4668 (see more details in Figure~5 of \citealt{walter2004}). In UGC~07982, the northern part of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disk is truncated within the stellar disk (Figure~\ref{fig:hico}). In addition, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas is slightly extended toward the west side. However, the stellar disk of UGC~07982 looks undisturbed. This suggests that UGC~07982 is likely to be undergoing ram pressure stripping (RPS) in the N4636G. Indeed, this galaxy is identified as an RPS galaxy, based on an analysis of the ram pressure level against the restoring force in the disk of UGC~07982. Further details of the RPS for UGC~07982 and other galaxies in the N4636G will be presented in Lin et al. 2022, in preparation. In the case of NGC~4496A, there are no signs of external perturbations in the optical and {\hbox{{\rm H}\kern 0.2em{\sc i}}} images, but the CO distribution is somewhat asymmetric with an off-center CO peak, as described in Section~\ref{subsec:pecco}. This seems to be due to internal processes. \subsubsection{Symmetric/smooth distributions in both {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO} \label{smdisk} Four galaxies (IC~5269B, NGC~4527, NGC~4536, NGC~4592) show relatively symmetric/smooth distributions in both {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO, compared to the other group galaxies mentioned above.
Although the CO distributions of some galaxies (e.g., IC~5269B and NGC~4592) appear to be somewhat clumpy due to the low S/N of the CO data, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} disks of these four galaxies are symmetric and smooth, indicating that there may be no external perturbations. \subsubsection{Asymmetry parameter for the CO intensity map} In addition to probing peculiar gas distributions qualitatively, we also calculate the asymmetry parameter ($A_{\rm map}$) from the CO intensity maps to quantify the degree of asymmetry. The asymmetry of the CO image is calculated using the following equation \citep{conselice2000, holwerda2011a, giese2016}: \begin{equation} A_{\rm map} = \frac{\Sigma_{i ,j}~| I (i, j) - I_{180} (i, j) |}{2~\Sigma_{i ,j}~| I (i, j) |}, \end{equation} \noindent where $I (i, j)$ is the CO intensity map, and $I_{180} (i, j)$ is the same CO intensity map rotated by 180$^{\degr}$ with respect to the center of the stellar disk. A high value indicates a high asymmetry of the CO distribution. The CO asymmetry values for the 16 galaxies are summarized in Table~\ref{tab:table_new}. Group members (IC~5264, IC~5273, NGC~7418, NGC~7421, NGC~4632, NGC~4772) showing asymmetric morphologies in both the CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}} images tend to have higher asymmetry values than the other members; on average, the asymmetry value for these six galaxies is 0.46. On the other hand, the mean value for galaxies with no signs of interaction in either the CO or {\hbox{{\rm H}\kern 0.2em{\sc i}}} morphology (NGC~4517, NGC~4527, NGC~4536) is 0.18, and galaxies showing an asymmetric morphology only in {\hbox{{\rm H}\kern 0.2em{\sc i}}} (IC~5270, NGC~4666, UGC~07982) have a mean value of 0.17.
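For a map on a regular pixel grid, $A_{\rm map}$ reduces to a few lines of array arithmetic. The sketch below assumes the map has already been centered on the stellar disk and blanked (set to zero) outside the detection mask; both assumptions, and the helper name, are ours.

```python
import numpy as np

def co_asymmetry(intensity_map):
    """A_map: compare a 2D CO intensity map with itself rotated by
    180 degrees about the array center, normalized by twice the
    total absolute intensity."""
    img = np.asarray(intensity_map, dtype=float)
    rotated = img[::-1, ::-1]  # 180-degree rotation about the center
    return np.abs(img - rotated).sum() / (2.0 * np.abs(img).sum())

# A point-symmetric map gives A = 0; a fully one-sided map gives A = 1
a_sym = co_asymmetry(np.ones((5, 5)))            # -> 0.0
a_lop = co_asymmetry([[1.0, 0.0], [0.0, 0.0]])   # -> 1.0
```

In practice the rotation center should be the stellar disk center rather than the array center, which requires shifting or interpolating the map first.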
As previous studies with {\hbox{{\rm H}\kern 0.2em{\sc i}}} images found higher asymmetry values for galaxies in dense environments \citep{holwerda2011b, reynolds2020}, it is also expected that CO asymmetry values tend to be high in galaxies undergoing environmental processes (e.g., tidal interactions and RPS). Indeed, our results support the notion that high CO asymmetry values can be found in the group environment. However, some of our group members (e.g., IC~5269B, NGC~4592) also tend to have high values of the CO asymmetry parameter although these galaxies are not likely to be affected by environmental processes, based on their symmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} and optical morphologies. Instead, in these galaxies, clumpy CO structures due to the low S/N of the CO data could result in high CO asymmetry values. Our results could also be biased by the small sample size (16 group members in the asymmetry analysis); more group galaxies are required to obtain a statistically robust result.
\begin{deluxetable*}{lccc} \tabletypesize{\footnotesize} \tablecaption{Information for CO and/or {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions \label{tab:table_new}} \tablehead{ \multicolumn{1}{l}{Name} & \multicolumn{1}{c}{CO asymmetry value} & \multicolumn{1}{c}{Peculiar distribution} & \multicolumn{1}{c}{Notes} } \startdata \multicolumn{4}{c}{IC~1459 group} \\ \hline IC 5264 & 0.30 & CO, {\hbox{{\rm H}\kern 0.2em{\sc i}}} & asymmetric CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions \\ IC 5269B & 0.50 & \nodata & low S/N in CO data \\ IC 5270 & 0.10 & {\hbox{{\rm H}\kern 0.2em{\sc i}}} & two {\hbox{{\rm H}\kern 0.2em{\sc i}}} clouds near IC~5270 \\ IC 5273 & 0.38 & CO, {\hbox{{\rm H}\kern 0.2em{\sc i}}} & CO clump (southwest) \\ NGC 7418 & 0.34 & CO, {\hbox{{\rm H}\kern 0.2em{\sc i}}} & extended CO structure (southeast), {\hbox{{\rm H}\kern 0.2em{\sc i}}} tail (northwest) \\ NGC 7421 & 0.68 & CO, {\hbox{{\rm H}\kern 0.2em{\sc i}}} & strong local CO emission (southwest) \\ \hline \multicolumn{4}{c}{NGC~4636 group} \\ \hline NGC 4496A & 0.49 & CO & off-center CO peak (northeast) \\ NGC 4517 & 0.20 & \nodata & symmetric CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions \\ NGC 4527 & 0.17 & \nodata & symmetric CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions \\ NGC 4536 & 0.16 & \nodata & symmetric CO and {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions \\ NGC 4592 & 0.50 & \nodata & clumpy CO distribution, low S/N in CO data \\ NGC 4632 & 0.51 & CO, {\hbox{{\rm H}\kern 0.2em{\sc i}}} & asymmetric CO distribution, {\hbox{{\rm H}\kern 0.2em{\sc i}}} polar ring structure \\ NGC 4666 & 0.15 & {\hbox{{\rm H}\kern 0.2em{\sc i}}} & asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} distribution, close neighbor galaxy \\ NGC 4772 & 0.57 & CO, {\hbox{{\rm H}\kern 0.2em{\sc i}}} & asymmetric CO distribution, {\hbox{{\rm H}\kern 0.2em{\sc i}}} outer ring structure \\ UGC 07982 & 0.25 & {\hbox{{\rm H}\kern 0.2em{\sc i}}} & asymmetric {\hbox{{\rm H}\kern 
0.2em{\sc i}}} distribution \\ UGC 08041 & 0.70 & CO & asymmetric CO distribution, low S/N in CO data, no {\hbox{{\rm H}\kern 0.2em{\sc i}}} image \\ \enddata \tablecomments{ (1) Galaxy name; (2) CO asymmetry value; (3) peculiar distributions of CO and/or {\hbox{{\rm H}\kern 0.2em{\sc i}}}; (4) notes for CO and/or {\hbox{{\rm H}\kern 0.2em{\sc i}}} distributions of group members. } \end{deluxetable*} \subsection{Comparisons of global properties between group galaxies and xCOLD GASS galaxies} \label{subsec:scaling} \begin{figure*}[!htbp] \begin{center} \includegraphics[width=1.00\textwidth]{fig_6_scaling.pdf} \caption{Comparisons of global properties between our group galaxies and the xCOLD GASS galaxies. (a) SFR, (b) {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas fraction ($M_{\rm\tiny {\hbox{{\rm H}\kern 0.2em{\sc i}}}}/M_\star$), (c) H$_{2}$ gas fraction ($M_{\rm H2}/M_\star$), (d) the ratio of molecular to atomic gas ($M_{\rm H2}/M_{\rm\tiny {\hbox{{\rm H}\kern 0.2em{\sc i}}}}$), (e) {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas depletion time ($\tau_{\rm dep, \tiny {HI}}$), (f) H$_{2}$ gas depletion time ($\tau_{\rm dep, H2}$) as a function of the stellar mass ($M_\star$). The red and blue filled circles indicate our sample galaxies with CO detection in the I1459G and the N4636G, respectively. The upper limits for non-detections of CO are shown as open circles. Group galaxies that show asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} and/or CO distributions are marked by yellow crosses with their names. The xCOLD GASS sample is shown as gray filled circles ({\hbox{{\rm H}\kern 0.2em{\sc i}}} or CO detections) and open gray circles (non-detections in {\hbox{{\rm H}\kern 0.2em{\sc i}}} or CO, respectively). In panel (a), the star-forming main sequence, determined by \cite{saintonge2016}, is indicated by a solid line, and the 0.3 dex scatter is shown by dashed lines.
Filled circles indicate CO detections, while non-detections in CO are shown as open circles. In panels (b), (c), (d), (e), and (f), the median trends (solid lines) and the 1$\sigma$ standard deviations (dashed lines) are derived from the xCOLD GASS sample. In panel (d), the upper limits (i.e., non-detections in CO) and the lower limits (i.e., non-detections in {\hbox{{\rm H}\kern 0.2em{\sc i}}}) are represented by small magenta and cyan arrows, respectively. We limited our group sample to relatively massive galaxies (log~($M_\star$/{$M_{\odot}$}) $>$ 9) because all group galaxies with CO detection have stellar masses of log~($M_\star$/{$M_{\odot}$}) $>$ 9. \label{fig:fig6}} \end{center} \end{figure*} In this section, we present scaling relations of global measurements (e.g., SFR, gas fraction, gas depletion time) for our group galaxies, and compare the global properties of the group galaxies with those of galaxies in a low-density environment. From this comparison, we can investigate how the group environment affects the global properties of group galaxies. For this comparison, we use 240 isolated galaxies from the extended CO Legacy Database for GASS (xCOLD GASS; \citealt{saintonge2017}), selected based on the SDSS\footnote{The Sloan Digital Sky Survey} DR7 group catalog of \cite{yang2007}, with {\hbox{{\rm H}\kern 0.2em{\sc i}}} information from the extended GALEX Arecibo SDSS Survey (xGASS; \citealt{catinella2018}). This sample of 240 local isolated galaxies (0.01 $< z <$ 0.05), uniformly covering the stellar mass range 9 $<$ log~($M_\star$/{$M_{\odot}$}) $<$ 11.5, was observed deeply in both {\hbox{{\rm H}\kern 0.2em{\sc i}}} and CO, and provides a reference for various scaling relations of local isolated galaxies. Hereafter, we refer to this sample of 240 galaxies as the xCOLD GASS sample.
Note that we rescaled the measurements of stellar masses and SFRs of our group sample by dividing by 1.06 \citep[e.g.,][]{salim2007,zahid2012}, to compare our group sample with the xCOLD GASS sample under the same Chabrier IMF condition \citep{chabrier2003}. Figure~\ref{fig:fig6} shows (a) SFR, (b) {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas fraction ($M_{\rm\tiny {\hbox{{\rm H}\kern 0.2em{\sc i}}}}/M_\star$), (c) H$_{2}$ gas fraction ($M_{\rm H2}/M_\star$), (d) the ratio of H$_{2}$ gas to {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas ($M_{\rm H2}/M_{\rm\tiny {\hbox{{\rm H}\kern 0.2em{\sc i}}}}$), (e) {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas depletion time ($\tau_{\rm dep, \tiny {HI}}$), (f) H$_{2}$ gas depletion time ($\tau_{\rm dep, H2}$) as a function of the stellar mass ($M_\star$) of galaxies. In Figure~\ref{fig:fig6}, we use the metallicity-dependent CO conversion factor to calculate the H$_{2}$ gas masses of the group sample and the xCOLD GASS sample. The red and blue filled circles indicate our group galaxies that have CO detection, in the I1459G and the N4636G, respectively. The group galaxies with non-detection in CO are shown as open circles. Note that in the following analysis of scaling relations, we limited our group sample to objects with a stellar mass of log~($M_\star$/{$M_{\odot}$}) $>$ 9 because all sample galaxies with CO detections have stellar masses of log~($M_\star$/{$M_{\odot}$}) $>$ 9. The xCOLD GASS galaxies are shown as gray filled circles ({\hbox{{\rm H}\kern 0.2em{\sc i}}} or CO detections) and gray open circles (non-detections in CO (Figure~\ref{fig:fig6} (a), (c), (f)) or non-detections in {\hbox{{\rm H}\kern 0.2em{\sc i}}} (Figure~\ref{fig:fig6} (b), (e))), respectively. Note that the fractions that we calculated in the following scaling relations also include non-detection cases.
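The quantities compared in Figure~\ref{fig:fig6} follow directly from the measured masses and SFRs; a minimal sketch with purely illustrative (hypothetical) input values, using the 1.06 Chabrier IMF rescaling quoted above:

```python
# Sketch (illustrative values only) of the global quantities compared in
# Figure 6. The 1.06 rescaling to the Chabrier IMF is the one quoted in the
# text; all measurements below are hypothetical, not values from the paper.

def to_chabrier(value, factor=1.06):
    """Rescale a stellar mass or SFR to the Chabrier IMF by dividing by 1.06."""
    return value / factor

# Hypothetical measurements for one galaxy (masses in Msun, SFR in Msun/yr)
m_star = to_chabrier(5.0e9)
sfr = to_chabrier(0.8)
m_hi = 2.0e9
m_h2 = 4.0e8

f_hi = m_hi / m_star      # HI gas fraction, panel (b)
f_h2 = m_h2 / m_star      # H2 gas fraction, panel (c)
ratio = m_h2 / m_hi       # molecular-to-atomic ratio, panel (d)
tau_hi = m_hi / sfr       # HI depletion time in yr, panel (e)
tau_h2 = m_h2 / sfr       # H2 depletion time in yr, panel (f)
```

Since the two gas masses are divided by the same stellar mass and SFR, the ordering of the panels (d), (e), and (f) is tied together: a low $M_{\rm H2}/M_{\rm HI}$ ratio with a normal SFR implies a long {\hbox{{\rm H}\kern 0.2em{\sc i}}} depletion time relative to the H$_{2}$ one.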
\paragraph{\textbf{$\bullet$ Star formation activity}} First, Figure~\ref{fig:fig6} (a) shows the distribution of SFRs of the group sample and the xCOLD GASS sample as a function of the stellar mass of the galaxies. A solid line indicates the star-forming main sequence (SFMS) determined by \cite{saintonge2016}, and dashed lines are the $\pm$0.3 dex scatter \citep[][]{speagle2014}. Galaxies lying below the solid line are less star-forming (i.e., relatively suppressed in star formation) than the galaxies lying above and on the line. Galaxies lying below the lower dashed line tend to leave the star-forming main sequence and get quenched in star formation. In Figure~\ref{fig:fig6} (a), we can see that our group sample covers both star-forming galaxies (48\%) and quenched galaxies (48\%). The fraction of star-forming galaxies in our group sample is consistent with that (50\%) in the xCOLD GASS sample. On the other hand, the fraction of quenched galaxies in our group sample is slightly higher than that (39\%) in the xCOLD GASS sample. Interestingly, 8 out of 12 low-mass (log~($M_\star$/{$M_{\odot}$}) $<$ 10) group members have suppressed SFR with respect to the main sequence. In the following, using scaling relations (median values (solid lines) and standard deviations (dashed lines), see Figure~\ref{fig:fig6} (b), (c), (d), (e), and (f)), we compare global physical properties (e.g., gas fraction, gas depletion time) of our group galaxies to the isolated galaxies from the xCOLD GASS sample, in order to obtain clues as to why and how galaxies are affected by the group environment. The different behavior of the low-mass galaxies also motivates us to investigate the low- and high-mass galaxies separately. \paragraph{\textbf{$\bullet$ {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas and H$_{2}$ gas fractions}} Figure~\ref{fig:fig6} (b) and (c) show the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas fraction and the H$_{2}$ gas fraction, respectively.
Using the median trends (solid lines) of the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas and H$_{2}$ gas fractions for isolated galaxies, with their 1$\sigma$ standard deviations (dashed lines), we classify galaxies below the lower dashed line as {\hbox{{\rm H}\kern 0.2em{\sc i}}} (H$_{2}$)-deficient, galaxies above it as normal, and galaxies above the upper dashed line as {\hbox{{\rm H}\kern 0.2em{\sc i}}} (H$_{2}$)-rich. We also compare the distribution of {\hbox{{\rm H}\kern 0.2em{\sc i}}} and H$_{2}$ fractions of group galaxies with respect to the median trend as a function of stellar mass to identify potential systematic shifts. In Figure~\ref{fig:fig6} (b), the group galaxies follow a distribution similar to that of the whole xCOLD GASS sample above and below the median relation of {\hbox{{\rm H}\kern 0.2em{\sc i}}} mass fraction as a function of stellar mass. There are 17 group galaxies with normal or high {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas fraction (81\%), while 4 group galaxies (19\%) are below the $-1\sigma$ scatter, indicating {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient galaxies. In the xCOLD GASS sample (9.25 $<$ log~($M_\star$/{$M_{\odot}$}) $<$ 11.25), the fractions of {\hbox{{\rm H}\kern 0.2em{\sc i}}}-rich or normal, and {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient galaxies are 77\% and 23\%, respectively. Overall, we do not find that the galaxies in our group sample are deficient in {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas, compared to the xCOLD GASS sample, even though previous {\hbox{{\rm H}\kern 0.2em{\sc i}}} studies for group galaxies \citep[e.g.,][]{hess2013,brown2017} found evidence that group galaxies tend to be deficient in the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas. This result is likely at least partly due to our sample selection of GEMS-{\hbox{{\rm H}\kern 0.2em{\sc i}}} detected galaxies from the very beginning.
Thus our results may be slightly biased toward the relatively {\hbox{{\rm H}\kern 0.2em{\sc i}}}-rich systems. Nevertheless, {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient group members (e.g., IC~5264, NGC~4772, UGC~07982) tend to show highly asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} morphology, implying that the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas removal occurs due to the group environmental processes. In Figure~\ref{fig:fig6} (c), we find a high fraction (57\%) of H$_{2}$-deficient galaxies in our group sample in comparison with the fraction (26\%) of H$_{2}$-deficient galaxies in the xCOLD GASS sample (9.25 $<$ log~($M_\star$/{$M_{\odot}$}) $<$ 11.25). This result suggests that group members can be deficient in the H$_{2}$ gas content. On the other hand, \cite{martinez-badenes2012} showed that there is an excess of the H$_{2}$ gas content in compact group galaxies, compared to isolated galaxies. Interestingly, most of the low-mass members (log~($M_\star$/{$M_{\odot}$}) $<$ 10, including the non-detections) in our group sample show significant deficiency in H$_{2}$ gas (below the dashed line). Given that our sample is already biased toward the {\hbox{{\rm H}\kern 0.2em{\sc i}}}-rich galaxies, such a deficiency in H$_{2}$ gas is striking. \paragraph{\textbf{$\bullet$ The ratio of H$_{2}$ gas to {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas and {\hbox{{\rm H}\kern 0.2em{\sc i}}}/H$_{2}$ depletion times}} For the ratio of H$_{2}$ gas to {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas ($M_{\rm H2}/M_{\rm\tiny {\hbox{{\rm H}\kern 0.2em{\sc i}}}}$), as shown in Figure~\ref{fig:fig6} (d), the high-mass galaxies (log~($M_\star$/{$M_{\odot}$}) $>$ 10) of our group sample are in line with the trend of the xCOLD GASS sample. However, the ratios of H$_{2}$ gas to {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas in the low-mass galaxies (log~($M_\star$/{$M_{\odot}$}) $<$ 10, including the non-detections) of our group sample tend to be lower than the median trend of star-forming galaxies.
A similar trend is found in the H$_{2}$ gas depletion time, which is, on average, shorter for the low-mass galaxies when compared to the star-forming galaxies of the same stellar mass (Figure~\ref{fig:fig6} (f)). As a result, this subset of the low-mass galaxies shows systematically longer (except for one outlier) depletion time for the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas than the median value at the same stellar mass (Figure~\ref{fig:fig6} (e)). \begin{figure*}[!htbp] \begin{center} \includegraphics[width=1.00\textwidth]{fig_7_scaling.pdf} \caption{Comparisons of global properties between our group galaxies and the xCOLD GASS galaxies. (a) H$_{2}$ gas fraction ($M_{\rm H2}/M_\star$), (b) the ratio of molecular to atomic gas ($M_{\rm H2}/M_{\rm\tiny {\hbox{{\rm H}\kern 0.2em{\sc i}}}}$), (c) H$_{2}$ gas depletion time ($\tau_{\rm dep, H2}$) as a function of the stellar mass ($M_\star$). All symbols are the same as the symbols in Figure~\ref{fig:fig6}. The H$_{2}$ gas mass is calculated with the constant CO-to-H$_{2}$ conversion factor ($\alpha_{\rm CO}$~=~4.35 {$M_{\odot}$}~pc$^{-2}$~(K~km~s$^{-1}$)$^{-1}$ \citep{strong1996,bolatto2013}). \label{fig:fig7}} \end{center} \end{figure*} \paragraph{\textbf{$\bullet$ A constant CO-to-H$_{2}$ conversion factor}} We also calculate the H$_{2}$ gas fraction, the ratio of H$_{2}$ to {\hbox{{\rm H}\kern 0.2em{\sc i}}}, and the H$_{2}$ gas depletion time using the constant CO-to-H$_{2}$ conversion factor ($\alpha_{\rm CO}$~=~4.35 {$M_{\odot}$}~pc$^{-2}$~(K~km~s$^{-1}$)$^{-1}$), to investigate whether these quantities change significantly when the constant conversion factor is used instead of the metallicity-dependent CO-to-H$_{2}$ conversion factor. As seen in Figure~\ref{fig:fig7}, there are no significant differences in the scaling relations between these two conversion factors.
This indicates that our conclusions are insensitive to the CO-to-H$_{2}$ conversion factor in a given stellar mass range. \paragraph{\textbf{$\bullet$ The missing flux problem}} For two galaxies (NGC~4517 and NGC~4527) for which a significant fraction of the total flux is missing in our ACA observations (the flux ratio between our ACA data and the single-dish data: 0.5 (NGC~4517) and 0.7 (NGC~4527), see also Section~\ref{subsec:coprop}), we re-examine the scaling relations using their total fluxes from the single-dish data, in order to test whether the missing flux issue in our ACA observations severely affects our results. Based on the results with the single-dish data, the missing flux issue of these two members, which belong to the high-mass sub-sample, does not change our conclusion in the analysis of scaling relations. Considering (1) the most typical extent of the CO disk (i.e., 50\% $-$ 70\% of the optical disk) in late-type galaxies \citep{schruba2011} and (2) the fact that the low-mass galaxies tend to have smaller stellar disks \citep[e.g.,][]{lorenzo2013}, the low-mass group members are expected to have smaller CO disks than the high-mass group galaxies. Consequently, we may miss relatively small portions of the total CO fluxes of the low-mass group sample in our ACA observations, compared to the high-mass group sample. Therefore, the deficiency of H$_{2}$ gas in the low-mass group members is unlikely to be due to the missing flux problem. As a result, although we may miss some CO flux from our group targets, we expect that this missing flux problem is not likely to change our conclusions. However, we cannot completely rule out the possibility that the missing flux problem affects our conclusions because we do not have accurate measurements of total CO fluxes for most group galaxies.
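The comparison between the two conversion factors can be sketched as follows. The constant factor $\alpha_{\rm CO}=4.35$~{$M_{\odot}$}~pc$^{-2}$~(K~km~s$^{-1}$)$^{-1}$ is the one quoted above; the metallicity scaling used in the sketch is only a hypothetical stand-in, since the exact metallicity-dependent prescription is not spelled out in this section.

```python
# Illustrative CO-to-H2 mass conversion. The constant factor
# alpha_CO = 4.35 Msun pc^-2 (K km s^-1)^-1 is the one quoted in the text;
# the metallicity scaling below (alpha_CO ~ Z^-1) is a hypothetical
# stand-in for the paper's metallicity-dependent prescription.

ALPHA_CO_CONST = 4.35  # Msun pc^-2 (K km s^-1)^-1

def alpha_co_metal(z_rel):
    """Hypothetical scaling: alpha_CO increases as metallicity Z/Zsun drops."""
    return ALPHA_CO_CONST / z_rel

def h2_mass(l_co, alpha_co):
    """H2 mass (Msun) from CO line luminosity L_CO (K km s^-1 pc^2)."""
    return alpha_co * l_co

l_co = 1.0e8                                   # hypothetical CO luminosity
m_const = h2_mass(l_co, ALPHA_CO_CONST)        # constant conversion factor
m_metal = h2_mass(l_co, alpha_co_metal(0.5))   # half-solar metallicity
```

Because the conversion is a multiplicative factor, switching prescriptions shifts all H$_{2}$ masses coherently at a given metallicity, which is why the shapes of the scaling relations in Figures~\ref{fig:fig6} and \ref{fig:fig7} change little.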
\subsubsection{Group environmental effects on low- and high-mass group members} For the high-mass galaxies, the distributions of all the quantities discussed above (ignoring the marginally lower H$_{2}$ mass fraction) tend to be close to those of the xCOLD GASS sample. For the low-mass galaxies, the majority of which have a suppressed SFR, a short H$_{2}$ depletion time, and a long {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas depletion time, the bottleneck in the baryonic flow from the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas to the stars seems to be mostly at the conversion from {\hbox{{\rm H}\kern 0.2em{\sc i}}} to H$_{2}$. We note that the missing flux problem from the interferometric observations is unlikely to explain the systematic differences that we observe here, as the average fraction of the missing flux is only 10\% and the problem should not be so serious for these small low-mass galaxies. It is known that the transition of {\hbox{{\rm H}\kern 0.2em{\sc i}}} to H$_{2}$ is not efficient in low-mass galaxies as the pressure and metallicities are both lower \citep{leroy2005,krumholz2009,bolatto2011}. However, this effect does not explain the systematic difference that we observe either, as we are comparing with the median behavior of galaxies with the same stellar mass. One of the possible scenarios is that for these low-mass galaxies, which are more sensitive to the environmental effects due to their relatively shallow gravitational potential, the {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas is highly perturbed and becomes stable against gravitational collapse (i.e., a high $\sigma_{v}$ value in the Toomre Q parameter).
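The stabilization argument above follows directly from the gas Toomre parameter, $Q = \sigma_{v}\kappa/(\pi G \Sigma_{\rm gas})$: raising the velocity dispersion $\sigma_{v}$ at fixed surface density and epicyclic frequency raises $Q$ and pushes the gas toward stability. A minimal sketch with illustrative numbers (not measurements from this work):

```python
# The gas Toomre parameter Q = sigma_v * kappa / (pi * G * Sigma_gas):
# Q > 1 indicates stability against gravitational collapse, so raising the
# velocity dispersion sigma_v raises Q. All numbers below are illustrative.
import math

G = 4.301e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def toomre_q(sigma_v, kappa, sigma_gas):
    """sigma_v in km/s, kappa in km/s/pc, sigma_gas in Msun/pc^2."""
    return sigma_v * kappa / (math.pi * G * sigma_gas)

kappa = 0.04       # epicyclic frequency, km/s/pc (illustrative)
sigma_gas = 10.0   # gas surface density, Msun/pc^2 (illustrative)

q_quiet = toomre_q(8.0, kappa, sigma_gas)       # unperturbed gas
q_perturbed = toomre_q(20.0, kappa, sigma_gas)  # dispersion raised by perturbation
```

Since $Q$ is linear in $\sigma_{v}$, a perturbation that more than doubles the dispersion more than doubles $Q$, consistent with the suppressed {\hbox{{\rm H}\kern 0.2em{\sc i}}}-to-H$_{2}$ conversion discussed here.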
The systematically low ratio of H$_{2}$ to {\hbox{{\rm H}\kern 0.2em{\sc i}}} in these low-mass galaxies was not fully expected, as environmental effects could have enhanced such a conversion, when ram pressure helps the gas reach high densities \citep[e.g.,][]{mok2017}, or when the tidal effects induce shocks and condense the cold gas \citep[e.g.,][]{lizee2021}. It is therefore notable that in these low-mass galaxies the conversion of {\hbox{{\rm H}\kern 0.2em{\sc i}}} to H$_{2}$ instead seems to be suppressed. It is also interesting to point out the tentatively shortened H$_{2}$ depletion time in the low-mass galaxies. It is possible that the molecular clouds, once formed, can be more massive and denser as a result of the higher Jeans mass in the perturbed interstellar medium \citep[][]{bournaud2010}, leading to more vigorous star formation than under unperturbed circumstances. This scenario is supported by studies based on observations of the dense molecular gas in merging systems \citep[e.g.,][]{juneau2009}. As a result, we observe a systematically shorter depletion time of the molecular gas for these low-mass group galaxies than for field galaxies. Although the suppressed {\hbox{{\rm H}\kern 0.2em{\sc i}}}-to-molecular conversion and the shortened molecular depletion time are opposite effects in the flow of baryons from the {\hbox{{\rm H}\kern 0.2em{\sc i}}} to the stars, the former process seems to dominate over the latter, resulting in a systematically lengthened {\hbox{{\rm H}\kern 0.2em{\sc i}}} depletion time and a lower SFR than the SFMS for these low-mass galaxies, as found in this work. If we look closely at the three massive, quenched galaxies (NGC~7421, IC~5264, and NGC~4772), they have relatively low {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas and H$_{2}$ gas fractions, when compared to the median behavior of the xCOLD GASS sample. In addition, their SFRs are lower than those of the SFMS galaxies.
Thus these three galaxies are mostly quenched because their neutral gas reservoir significantly shrinks, possibly due to environmental gas stripping, or strangulation accelerated by stripping. The necessity of losing gas reservoirs before the quenching of these high-mass galaxies is consistent with recent findings for normal field galaxies \citep[e.g.,][]{cortese2020,wang2020,guo2021}. Overall, environmental effects seem to work on the low- and high-mass galaxies in different ways. The consequence is that the low-mass galaxies systematically and significantly drop in SFR because the conversion of {\hbox{{\rm H}\kern 0.2em{\sc i}}} to H$_{2}$ is severely suppressed, while the high-mass galaxies tend to remain on the SFMS before the {\hbox{{\rm H}\kern 0.2em{\sc i}}} reservoir significantly drops. However, our results in the analysis of the scaling relations suffer from a small sample size. Therefore, in order to verify this result and reach a more robust conclusion, more CO observations for group galaxies are required. In fact, individual galaxies in the two groups seem to be subject to different environmental effects. The locations of individual galaxies in each group are different. While galaxies in the I1459G are located in various regions from the group center to the outskirts of the group, many of the sample galaxies in the N4636G are located near the outskirts ($\sim$$R_{200}$) of the N4636G. In addition, the properties (e.g., group mass) of these two groups are different although both groups are loose galaxy groups. These differing environmental effects give individual group galaxies different global properties. We describe individual galaxies in each group in more detail in Appendix~\ref{app:comments}. \subsection{Group pre-processing} \label{subsec:pre} We find that some galaxies in both the I1459G and the N4636G present highly asymmetric distributions in {\hbox{{\rm H}\kern 0.2em{\sc i}}} and/or CO data.
Examples include IC~5264, IC~5273, NGC~7421, NGC~4632, NGC~4772, and UGC~07982. Furthermore, some of them have low SFRs and low cold gas fractions. Our results suggest that group galaxies can be pre-processed by external mechanisms (tidal interactions and/or ram pressure stripping). In particular, strong local peaks in the CO disk and one-sided extended CO structures due to the external perturbations are interesting findings of our study, which provide supporting evidence that the molecular gas can also be affected by the group environment. The N4636G seems to be falling into the Virgo cluster. Together with the asymmetric {\hbox{{\rm H}\kern 0.2em{\sc i}}} and/or CO distributions and changes of global properties of galaxies in the N4636G, the fact that the N4636G has many early-type galaxies around the group center (see the right panel of Figure~\ref{fig:fig1}) and higher {\hbox{{\rm H}\kern 0.2em{\sc i}}} deficiency \citep{kilborn2009} indicates that galaxies may already have been processed by the group environment before they enter the cluster. Recent studies have also found that galaxies in the substructures (e.g., W cloud) and the filaments near the Virgo cluster tend to have decreased SFRs and low gas fractions \citep{yoon2017,mun2021,morokuma-matsui2021,castignani2021}. In line with these previous works, our findings support the group pre-processing scenario, which could be one of the important mechanisms of environmentally driven galaxy evolution. However, our study is limited to only two groups. Therefore, to reach a robust conclusion about group pre-processing, follow-up studies with more groups associated with clusters are required.
The I1459G is likely to be in an early stage of the evolutionary sequence of a galaxy group, based on the facts that late-type galaxies of the I1459G are located around the BGG as well as in the outskirts of the group (the left panel of Figure~\ref{fig:fig1}) and that stripped {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas from group members is found in the intergalactic space \citep{saponara2018,oosterloo2018}, as introduced in Section~\ref{sec:ic1459g}. On the other hand, the N4636G is thought to be a more evolved system with many early-type galaxies in its central region. Thus, we expected that these two groups would show different behaviors in the scaling relations. For example, the N4636G may have relatively low cold gas fractions and suppressed star formation activity, compared to the I1459G. However, in the analysis of the scaling relations (Figure~\ref{fig:fig6}), there are no significant differences in the fractions of {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient, H$_{2}$-deficient, and quenched galaxies between the I1459G and the N4636G (Table~\ref{tab:comp}). \begin{deluxetable}{lcc}[h!] \tablecaption{Fractions of quenched and {\hbox{{\rm H}\kern 0.2em{\sc i}}}/H$_{2}$-deficient members in the I1459G and the N4636G \label{tab:comp}} \tablehead{ \multicolumn{1}{l}{} & \multicolumn{1}{c}{the I1459G} & \multicolumn{1}{c}{the N4636G} } \startdata Quenched members & 0.55 & 0.41 \\ {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient members & 0.11 & 0.25 \\ H$_{2}$-deficient members & 0.55 & 0.50 \\ \enddata \end{deluxetable} As already mentioned in previous sections, our sample is biased toward the {\hbox{{\rm H}\kern 0.2em{\sc i}}}-rich systems in these two groups. As a result, severely {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient members are missed, especially in the N4636G. These {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient galaxies may still have molecular gas within their stellar disks.
To have a complete picture of the environmental effects on the group galaxies, these {\hbox{{\rm H}\kern 0.2em{\sc i}}}-deficient members should also be studied in future work. \section{Summary and conclusions} \label{sec:sum} We have presented the results of the CO imaging survey for 31 galaxies in the I1459G and the N4636G, using the ALMA/ACA. This is the first CO imaging survey for loose galaxy groups. The main scientific goal of this CO survey is to obtain an understanding of group environmental effects on the molecular gas, star formation, and galaxy evolution. We obtained well-resolved CO imaging data ($\sim$0.7 $-$ 1.5 kpc; Figure~\ref{fig:fig4} and Appendix~\ref{app:codata}) for 16 out of 31 galaxies in the I1459G and the N4636G from our ACA observations in ALMA Cycle 7. In the I1459G, 6 out of 11 galaxies have CO detection, and in the N4636G, 10 out of 20 galaxies have CO detection. Their stellar masses range from 10$^{9}$$M_{\odot}$ to 10$^{10}$$M_{\odot}$. We find that {\hbox{{\rm H}\kern 0.2em{\sc i}}} and/or CO distributions (Figure~\ref{fig:hico}) are asymmetric in some of our group galaxies. In particular, IC~5264, NGC~7421, and NGC~7418 have {\hbox{{\rm H}\kern 0.2em{\sc i}}} tails and {\hbox{{\rm H}\kern 0.2em{\sc i}}} compression. Their CO distributions are also highly asymmetric, showing extended CO structures and local peaks of CO emission. Our CO imaging data reveal the peculiar CO distributions of group galaxies, motivating further CO imaging studies of the molecular gas in group galaxies. From the comparison of the scaling relations of global properties (e.g., SFR and gas fraction) between our group sample and the xCOLD GASS sample, we find that, overall, environmental effects seem to work on the low- and high-mass galaxies in different ways.
The consequence is that the low-mass galaxies systematically and significantly drop in SFR because the conversion of {\hbox{{\rm H}\kern 0.2em{\sc i}}} to H$_{2}$ seems to be severely suppressed, while the high-mass galaxies tend to remain along the SFMS before the {\hbox{{\rm H}\kern 0.2em{\sc i}}} reservoir significantly drops (Figure~\ref{fig:fig6}). For some interesting group members (e.g., IC~5264, NGC~7421, NGC~4772, and UGC~07982) showing highly asymmetric morphologies in {\hbox{{\rm H}\kern 0.2em{\sc i}}} and/or CO images, significant decreases of SFRs and gas fractions are found. These results indicate that environmental processes (e.g., tidal interactions and ram pressure stripping) in a group can change the distributions of both molecular gas and {\hbox{{\rm H}\kern 0.2em{\sc i}}} gas. This likely results in changes of global properties of group galaxies, such as decreases in SFR and in {\hbox{{\rm H}\kern 0.2em{\sc i}}} and H$_{2}$ gas fractions. Our results suggest that group galaxies can be significantly processed by the group environment. In particular, the results for the N4636G provide supporting evidence that group pre-processing, one of the important mechanisms for galaxy evolution, can occur before groups enter a cluster. However, our conclusions are based on a small sample. To obtain a robust result on the group environmental effects on the physical properties of galaxies, especially the molecular gas, more CO observations for group members are required in future studies. \acknowledgments BL acknowledges support from the National Science Foundation of China (12073002, 11721303, 11991052) and the National Key R\&D Program of China (2016YFA0400702). BL is supported by the Boya Fellowship at Peking University. BL gratefully thanks Hyein Yoon for useful discussions. Support for this work was also provided by the National Research Foundation of Korea to the grant No. 2018R1D1A1B07048314. Y.K.
was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2021R1C1C2091550), and acknowledges the support from China Postdoc Science General (2020M670022) and Special (2020T130018) Grants funded by the China Postdoctoral Science Foundation. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. LC is the recipient of an Australian Research Council Future Fellowship (FT180100066) funded by the Australian Government. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 679627; project name FORNAX). KS acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC). AB acknowledges support from the Centre National d'Etudes Spatiales (CNES), France. LVM acknowledges financial support from the State Agency for Research of the Spanish Ministry of Science, Innovation and Universities through the ``Center of Excellence Severo Ochoa'' awarded to the Instituto de Astrofisica de Andalucia (SEV-2017-0709), from grant RTI2018-096228-B-C31 (Ministry of Science, Innovation and Universities/State Agency for Research/European Regional Development Funds, European Union), and grant IAA4SKA (Ref. P18-RT-3082) from the Consejeria de Transformacion Economica, Industria, Conocimiento y Universidades de la Junta de Andalucia and the European Regional Development Fund from the European Union. FB acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No.726384/Empire). This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2019.1.01804.S. 
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The Australian SKA Pathfinder is part of the Australia Telescope National Facility (https://ror.org/05qajvd42) which is managed by CSIRO. Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Centre. Establishment of ASKAP, the Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). \vspace{5mm} \facilities{ALMA/ACA} \software{ Astropy \citep{astropy:2013, astropy:2018}, CASA \citep{mcmullin2007}, Matplotlib \citep{Hunter:2007}, NumPy \citep{harris2020array}, SciPy \citep{2020SciPy-NMeth}, SoFiA \citep{serra2015a,westmeier2021} }